Message ID | 20231027153458.GMZTvYou1tlK6HD8/Y@fat_crate.local
---|---
State | New
Headers |
Date: Fri, 27 Oct 2023 17:34:58 +0200
From: Borislav Petkov <bp@alien8.de>
To: Peter Zijlstra <peterz@infradead.org>
Cc: X86 ML <x86@kernel.org>, Kishon VijayAbraham <Kishon.VijayAbraham@amd.com>, LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH 2/2] x86/barrier: Do not serialize MSR accesses on AMD
Message-ID: <20231027153458.GMZTvYou1tlK6HD8/Y@fat_crate.local>
References: <20230622095212.20940-1-bp@alien8.de> <20230703125419.GJ4253@hirez.programming.kicks-ass.net> <20230704074631.GAZKPOV/9BfqP0aU8v@fat_crate.local> <20230704090132.GP4253@hirez.programming.kicks-ass.net> <20230704092222.GBZKPkzgdM8rbPe7zA@fat_crate.local> <20231027153327.GKZTvYR3qslaTUjtCT@fat_crate.local>
In-Reply-To: <20231027153327.GKZTvYR3qslaTUjtCT@fat_crate.local>
Series | [1/2] x86/alternative: Add per-vendor patching
Commit Message
Borislav Petkov
Oct. 27, 2023, 3:34 p.m. UTC
From: "Borislav Petkov (AMD)" <bp@alien8.de>
Date: Fri, 27 Oct 2023 14:24:16 +0200

AMD does not have the requirement for a synchronization barrier when accessing a certain group of MSRs. Do not incur that unnecessary penalty there.

While at it, move to processor.h to avoid include hell. Untangling that file properly is a matter for another day.

Some notes on the performance aspect of why this is relevant, courtesy of Kishon VijayAbraham <Kishon.VijayAbraham@amd.com>:

On an AMD Zen4 system with 96 cores, a modified ipi-bench[1] on a VM shows that the x2AVIC IPI rate is 3% to 4% lower than the AVIC IPI rate. The ipi-bench is modified so that the IPIs are sent between two vCPUs in the same CCX. This also requires pinning the vCPU to a physical core to prevent any latencies. This simulates the use case of pinning vCPUs to the thread of a single CCX to avoid interrupt IPI latency.

In order to avoid run-to-run variance (for both x2AVIC and AVIC), the below configurations are done:

1) Disable Power States in BIOS (to prevent the system from going to a lower power state)

2) Run the system at a fixed frequency of 2500 MHz (to prevent the system from increasing the frequency when the load is higher)

With the above configuration:

*) Performance measured using ipi-bench for AVIC:
   Average Latency: 1124.98ns [Time to send IPI from one vCPU to another vCPU]
   Cumulative throughput: 42.6759M/s [Total number of IPIs sent in a second from 48 vCPUs simultaneously]

*) Performance measured using ipi-bench for x2AVIC:
   Average Latency: 1172.42ns [Time to send IPI from one vCPU to another vCPU]
   Cumulative throughput: 40.9432M/s [Total number of IPIs sent in a second from 48 vCPUs simultaneously]

From the above, x2AVIC latency is ~4% higher than AVIC. However, the expectation is for x2AVIC performance to be better than or equivalent to AVIC. Upon analyzing the perf captures, it is observed that significant time is spent in weak_wrmsr_fence() invoked by x2apic_send_IPI().

With the fix to skip weak_wrmsr_fence():

*) Performance measured using ipi-bench for x2AVIC:
   Average Latency: 1117.44ns [Time to send IPI from one vCPU to another vCPU]
   Cumulative throughput: 42.9608M/s [Total number of IPIs sent in a second from 48 vCPUs simultaneously]

Comparing the performance of x2AVIC with and without the fix, it can be seen that the performance improves by ~4%.

Performance captured using an unmodified ipi-bench using the 'mesh-ipi' option with and without weak_wrmsr_fence() on a Zen4 system also showed a significant performance improvement without weak_wrmsr_fence(). The 'mesh-ipi' option ignores CCX or CCD and just picks random vCPUs.

Average throughput (10 iterations) with weak_wrmsr_fence(): Cumulative throughput: 4933374 IPI/s

Average throughput (10 iterations) without weak_wrmsr_fence(): Cumulative throughput: 6355156 IPI/s

[1] https://github.com/bytedance/kvm-utils/tree/master/microbenchmark/ipi-bench

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
---
 arch/x86/include/asm/barrier.h   | 18 ------------------
 arch/x86/include/asm/processor.h | 18 ++++++++++++++++++
 2 files changed, 18 insertions(+), 18 deletions(-)
Comments
On Fri, Oct 27, 2023 at 05:34:58PM +0200, Borislav Petkov wrote:
> +static inline void weak_wrmsr_fence(void)
> +{
> +	alternative("mfence; lfence", "", ALT_VENDOR(X86_VENDOR_AMD));
> +}

Well, you see, AFAICT the non-serializing MSRs thing is an Intel thing, so everything !Intel wants this gone, no?

Definitely the Hygon thing wants this right along with AMD, because that's basically AMD, no?
On Fri, Oct 27, 2023 at 08:56:41PM +0200, Peter Zijlstra wrote:
> Well, you see, AFAICT the non-serializing MSRs thing is an Intel thing,
> so everything !Intel wants this gone, no?
>
> Definitely the Hygon thing wants this right along with AMD, because
> that's basically AMD, no?

Because of ce4e240c279a ("x86: add x2apic_wrmsr_fence() to x2apic flush tlb paths") and it being there since 2009 and getting called unconditionally.

Hygon sure, but the other vendors? I can't even test on some. Thus the more conservative approach here.
On Fri, Oct 27, 2023 at 09:16:33PM +0200, Borislav Petkov wrote:
> Thus the more conservative approach here.
And on a second thought, I don't need any of that new stuff - I simply
need a synthetic flag which says "MSRs need fencing" and set it on
everything but AMD and Hygon. And we've solved this type of issue
gazillion times already - why am I reinventing the wheel?!
:-\
On 27/10/2023 8:29 pm, Borislav Petkov wrote:
> On Fri, Oct 27, 2023 at 09:16:33PM +0200, Borislav Petkov wrote:
>> Thus the more conservative approach here.
> And on a second thought, I don't need any of that new stuff - I simply
> need a synthetic flag which says "MSRs need fencing" and set it on
> everything but AMD and Hygon. And we've solved this type of issue
> gazillion times already - why am I reinventing the wheel?!

Quoteth the APM (rev 3.41, June 2023):

  16.11.2 WRMSR / RDMSR serialization for x2APIC Register

  The WRMSR instruction is used to write the APIC register set in x2APIC mode. Normally WRMSR is a serializing instruction, however when accessing x2APIC registers, the serializing aspect of WRMSR is relaxed to allow for more efficient access to those registers. Consequently, a WRMSR write to an x2APIC register may complete before older store operations are complete and have become globally visible. When strong ordering of an x2APIC write access is required with respect to preceding memory operations, software can insert a serializing instruction (such as MFENCE) before the WRMSR instruction.

So which is right? This commit message, or the APM? (and yes, if you're waiting on an APM update then the commit message should at least note that one is coming.)

But, to the issue at hand. There are other non-serialising MSRs on AMD CPUs, including the FS/GS base MSRs on more modern parts, which is enumerated in 8000_0021.eax[1].

So there isn't a boolean "MSRs need fencing, yes/no". It is vendor *and* model specific as to whether a particular MSR is serialising or non-serialising. *And* it's vendor specific as to what the fencing sequence is. Intel require mfence;lfence, while on AMD, mfence suffices.

Most MSR writes don't want to be architecturally serialising, and Intel are introducing WRMSRNS for this purpose. But ICR writes *do* need to be ordered with respect to stores becoming globally visible, and it was an error for MSR_X2APIC_ICR to be specified as non-serialising.

The only sanity-preserving (pseudo) API for this is something like:

  vendor_msr_fence = { mfence;lfence (Intel) | mfence (AMD, Hygon) | ... }

and for each MSR separately, something like:

  ALTERNATIVE("", vendor_msr_fence, $VENDOR_NEEDS_MSR_$X_FENCE);

because that's the only one which properly separates "what fence to use" and "do I need to fence this MSR on the current vendor".

~Andrew
On Fri, Oct 27, 2023 at 09:09:05PM +0100, Andrew Cooper wrote:
> There are other non-serialising MSRs on AMD CPUs, including the FS/GS
> base MSRs on more modern parts which is enumerated in 8000_0021.eax[1].
>
> So there isn't a boolean "MSRs need fencing, yes/no".

Well, I was implying that "MSRs need fencing" refers to this particular use case where weak_wrmsr_fence() is used - IA32_TSC_DEADLINE and X2APIC MSRs. So the feature bit should be named something more specific:

  X86_FEATURE_APIC_TSC_MSRS_NEED_FENCING

or so. If we have to do something for the other case, yes, we will have to either generalize this or add yet another flag.
On Fri, Oct 27, 2023 at 09:09:05PM +0100, Andrew Cooper wrote:
> Quoteth the APM (rev 3.41, June 2023):
>
> 16.11.2 WRMSR / RDMSR serialization for x2APIC Register
>
> The WRMSR instruction is used to write the APIC register set in x2APIC
> mode. Normally WRMSR is a serializing instruction, however when
> accessing x2APIC registers, the serializing aspect of WRMSR is relaxed
> to allow for more efficient access to those registers. Consequently, a
> WRMSR write to an x2APIC register may complete before older store
> operations are complete and have become globally visible. When strong
> ordering of an x2APIC write access is required with respect to
> preceding memory operations, software can insert a serializing
> instruction (such as MFENCE) before the WRMSR instruction.
>
> So which is right? This commit message, or the APM? (and yes, if
> you're waiting on an APM update then the commit message should at least
> note that one is coming.)

To clarify this one: there will be a CPUID bit which states that an MFENCE is not needed ATM and it'll be added to the APM. I'll add a note about it to the commit message too.

Thx.
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 35389b2af88e..0216f63a366b 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -81,22 +81,4 @@ do { \
 
 #include <asm-generic/barrier.h>
 
-/*
- * Make previous memory operations globally visible before
- * a WRMSR.
- *
- * MFENCE makes writes visible, but only affects load/store
- * instructions. WRMSR is unfortunately not a load/store
- * instruction and is unaffected by MFENCE. The LFENCE ensures
- * that the WRMSR is not reordered.
- *
- * Most WRMSRs are full serializing instructions themselves and
- * do not require this barrier. This is only required for the
- * IA32_TSC_DEADLINE and X2APIC MSRs.
- */
-static inline void weak_wrmsr_fence(void)
-{
-	asm volatile("mfence; lfence" : : : "memory");
-}
-
 #endif /* _ASM_X86_BARRIER_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 4b130d894cb6..a43ff8d0ad24 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -752,4 +752,22 @@ enum mds_mitigations {
 
 extern bool gds_ucode_mitigated(void);
 
+/*
+ * Make previous memory operations globally visible before
+ * a WRMSR.
+ *
+ * MFENCE makes writes visible, but only affects load/store
+ * instructions. WRMSR is unfortunately not a load/store
+ * instruction and is unaffected by MFENCE. The LFENCE ensures
+ * that the WRMSR is not reordered.
+ *
+ * Most WRMSRs are full serializing instructions themselves and
+ * do not require this barrier. This is only required for the
+ * IA32_TSC_DEADLINE and X2APIC MSRs.
+ */
+static inline void weak_wrmsr_fence(void)
+{
+	alternative("mfence; lfence", "", ALT_VENDOR(X86_VENDOR_AMD));
+}
+
 #endif /* _ASM_X86_PROCESSOR_H */