x86/barrier: Do not serialize MSR accesses on AMD

Message ID 20230622095212.20940-1-bp@alien8.de
State New
Series x86/barrier: Do not serialize MSR accesses on AMD

Commit Message

Borislav Petkov June 22, 2023, 9:52 a.m. UTC
  From: "Borislav Petkov (AMD)" <bp@alien8.de>

AMD does not have the requirement for a synchronization barrier when
accessing a certain group of MSRs. Do not incur that unnecessary
penalty there.

While at it, move to processor.h to avoid include hell. Untangling that
file properly is a matter for another day.

Some notes on the performance aspect of why this is relevant, courtesy
of Kishon VijayAbraham <Kishon.VijayAbraham@amd.com>:

On an AMD Zen4 system with 96 cores, a modified ipi-bench[1] on a VM
shows x2AVIC IPI rate is 3% to 4% lower than AVIC IPI rate. The
ipi-bench is modified so that the IPIs are sent between two vCPUs in the
same CCX. This also requires pinning each vCPU to a physical core to
avoid any scheduling-induced latencies. This simulates the use case of
pinning vCPUs to the threads of a single CCX to avoid interrupt IPI latency.
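
As an illustration only (this is not code from ipi-bench; the helper name
and core number are made up), pinning the calling thread - e.g. a vCPU
thread - to one physical core can be done from user space with the standard
sched_setaffinity() call:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  /* Pin the calling thread (e.g. a vCPU thread) to one physical core. */
  static int pin_to_core(int core)
  {
  	cpu_set_t set;

  	CPU_ZERO(&set);
  	CPU_SET(core, &set);

  	/* pid 0 means the calling thread */
  	if (sched_setaffinity(0, sizeof(set), &set)) {
  		perror("sched_setaffinity");
  		return -1;
  	}

  	return 0;
  }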

In order to avoid run-to-run variance (for both x2AVIC and AVIC), the
following configuration is used:

  1) Disable Power States in BIOS (to prevent the system from going to
     a lower power state)

  2) Run the system at a fixed frequency of 2500MHz (to prevent the system
     from increasing the frequency when the load increases)

With the above configuration:

*) Performance measured using ipi-bench for AVIC:
  Average Latency:  1124.98ns [Time to send IPI from one vCPU to another vCPU]

  Cumulative throughput: 42.6759M/s [Total number of IPIs sent in a second from
  				     48 vCPUs simultaneously]

*) Performance measured using ipi-bench for x2AVIC:
  Average Latency:  1172.42ns [Time to send IPI from one vCPU to another vCPU]

  Cumulative throughput: 40.9432M/s [Total number of IPIs sent in a second from
  				     48 vCPUs simultaneously]

From the above, x2AVIC latency is ~4% higher than AVIC. However, the expectation is
that x2AVIC performance should be better than or equivalent to AVIC. Upon analyzing
the perf captures, it is observed that significant time is spent in
weak_wrmsr_fence() invoked by x2apic_send_IPI().

With the fix to skip weak_wrmsr_fence():

*) Performance measured using ipi-bench for x2AVIC:
  Average Latency:  1117.44ns [Time to send IPI from one vCPU to another vCPU]

  Cumulative throughput: 42.9608M/s [Total number of IPIs sent in a second from
  				     48 vCPUs simultaneously]

Comparing the performance of x2AVIC with and without the fix, it can be seen
that the performance improves by ~4%.

Performance captured using an unmodified ipi-bench with the 'mesh-ipi' option,
with and without weak_wrmsr_fence(), on a Zen4 system also showed a significant
improvement without weak_wrmsr_fence(). The 'mesh-ipi' option ignores CCX or
CCD boundaries and just picks random vCPUs.

  Average throughput (10 iterations) with weak_wrmsr_fence(),
        Cumulative throughput: 4933374 IPI/s

  Average throughput (10 iterations) without weak_wrmsr_fence(),
        Cumulative throughput: 6355156 IPI/s

[1] https://github.com/bytedance/kvm-utils/tree/master/microbenchmark/ipi-bench

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
---
 arch/x86/include/asm/barrier.h   | 18 ------------------
 arch/x86/include/asm/processor.h | 19 +++++++++++++++++++
 2 files changed, 19 insertions(+), 18 deletions(-)
  

Comments

Peter Zijlstra July 3, 2023, 12:54 p.m. UTC | #1
On Thu, Jun 22, 2023 at 11:52:12AM +0200, Borislav Petkov wrote:
> From: "Borislav Petkov (AMD)" <bp@alien8.de>
> 
> AMD does not have the requirement for a synchronization barrier when
> accessing a certain group of MSRs. Do not incur that unnecessary
> penalty there.

So you're saying that AMD tsc_deadline and x2apic MSRs *do* imply
ordering constraints unlike the Intel ones?

Can we pls haz a document link for that, also a comment?

> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
> ---
>  arch/x86/include/asm/barrier.h   | 18 ------------------
>  arch/x86/include/asm/processor.h | 19 +++++++++++++++++++
>  2 files changed, 19 insertions(+), 18 deletions(-)

Moving this code while changing it meant I had to look at it _3_ times
before I spotted you changed it :/

> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index b216ac80ebcc..983406342484 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -735,4 +735,23 @@ bool arch_is_platform_page(u64 paddr);
>  #define arch_is_platform_page arch_is_platform_page
>  #endif
>  
> +/*
> + * Make previous memory operations globally visible before
> + * a WRMSR.
> + *
> + * MFENCE makes writes visible, but only affects load/store
> + * instructions.  WRMSR is unfortunately not a load/store
> + * instruction and is unaffected by MFENCE.  The LFENCE ensures
> + * that the WRMSR is not reordered.
> + *
> + * Most WRMSRs are full serializing instructions themselves and
> + * do not require this barrier.  This is only required for the
> + * IA32_TSC_DEADLINE and X2APIC MSRs.
> + */
> +static inline void weak_wrmsr_fence(void)
> +{
> +	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
> +		asm volatile("mfence; lfence" : : : "memory");

Both instructions are 3 bytes, a 6 byte nop would be better, no?

	asm volatile (ALTERNATIVE("mfence; lfence;", "", X86_FEATURE_AMD));

or something ?

> +}
> +
>  #endif /* _ASM_X86_PROCESSOR_H */
> -- 
> 2.35.1
>
  
Borislav Petkov July 4, 2023, 7:46 a.m. UTC | #2
On Mon, Jul 03, 2023 at 02:54:19PM +0200, Peter Zijlstra wrote:
> So you're saying that AMD tsc_deadline and x2apic MSRs *do* imply
> ordering constraints unlike the Intel ones?

Yah, that's the default situation. Only those two - TSC_DEADLINE and the
x2APIC MSRs - are special, and only on *Intel*.

> Can we pls haz a document link for that, also a comment?

Why document the default? The SDM already documents this exception.
For everything else WRMSR is serializing.

> Moving this code while changing it meant I had to look at it _3_ times
> before I spotted you changed it :/

I figured it is a simple enough patch - no need to do a separate,
movement-only one.

> Both instructions are 3 bytes, a 6 byte nop would be better, no?

Why? You wanna save the branch insn when sending IPIs through the
x2APIC? Does that really matter? I doubt it...

> 	asm volatile (ALTERNATIVE("mfence; lfence;", "", X86_FEATURE_AMD));

There's no X86_FEATURE_AMD :)
  
Peter Zijlstra July 4, 2023, 9:01 a.m. UTC | #3
On Tue, Jul 04, 2023 at 09:46:31AM +0200, Borislav Petkov wrote:
> On Mon, Jul 03, 2023 at 02:54:19PM +0200, Peter Zijlstra wrote:
> > So you're saying that AMD tsc_deadline and x2apic MSRs *do* imply
> > ordering constraints unlike the Intel ones?
> 
> Yah, that's the default situation. Only those two - TSC_DEADLINE and
> x2APIC MSRs - and on *Intel* are special.

So they are normal MSRs like all the others? AMD doesn't have any exceptions
for MSRs, they are all the same?

> > Both instructions are 3 bytes, a 6 byte nop would be better, no?
> 
> Why? You wanna save the branch insn when sending IPIs through the
> x2APIC? Does that really matter? I doubt it...

Dunno, code density, speculation, many raisons to avoid jumps :-)

> > 	asm volatile (ALTERNATIVE("mfence; lfence;", "", X86_FEATURE_AMD));
> 
> There's no X86_FEATURE_AMD :)

I know, but that's easily fixed.
  
Borislav Petkov July 4, 2023, 9:22 a.m. UTC | #4
On Tue, Jul 04, 2023 at 11:01:32AM +0200, Peter Zijlstra wrote:
> So they are normal MSRs like all other? AMD doesn't have any exceptions
> for MSRs, they all the same?

Yap, as far as I know.

> Dunno, code density, speculation, many raisons to avoid jumps :-)

Looking at x2apic_send_IPI asm:

	cmpb	$2, boot_cpu_data+1(%rip)	#, boot_cpu_data.x86_vendor
# arch/x86/kernel/apic/x2apic_phys.c:44: 	u32 dest = per_cpu(x86_cpu_to_apicid, cpu);
	movq	__per_cpu_offset(,%rdi,8), %rdx	# __per_cpu_offset[cpu_7(D)], __per_cpu_offset[cpu_7(D)]
	movzwl	(%rdx,%rax), %edx	# *_8,
# ./arch/x86/include/asm/processor.h:753: 	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
	je	.L114	#,
# ./arch/x86/include/asm/processor.h:754: 		asm volatile("mfence; lfence" : : : "memory");
#APP
# 754 "./arch/x86/include/asm/processor.h" 1
	mfence; lfence

So gcc already does mix unrelated insns so that they can all go in
parallel. So it is a

	CMP RIP-relative
	JE

So yeah, I guess, on the one hand we want to avoid conditional jumps
but, on the other, sprinkling alternatives everywhere without a good
reason is a waste. Especially if this branch is going to be
predicted-taken most of the time and it wouldn't matter.

So I'm still not convinced. We could measure it on my Coffeelake box
which says

"Switched APIC routing to cluster x2apic."

but I don't think it'll be visible.

> > > 	asm volatile (ALTERNATIVE("mfence; lfence;", "", X86_FEATURE_AMD));
> > 
> > There's no X86_FEATURE_AMD :)
> 
> I know, but that's easily fixed.

Yeah, there's X86_VENDOR_AMD too. I can see the confusion ensue.

I'm wondering if we could make:

	alternative("mfence; lfence;", "", X86_VENDOR_AMD);

work...

Might come in handy in the future.
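
For context, a minimal sketch of a feature-bit-based variant (not the vendor
check above, and not a patch from this thread): assume a synthetic flag - the
X86_FEATURE_APIC_MSRS_FENCE name is made up here - which vendor init code
sets, e.g. via set_cpu_cap(), on CPUs that do need the fence. ALT_NOT() then
patches the fence out at boot wherever the bit is clear, so there is no
runtime branch at all:

	static inline void weak_wrmsr_fence(void)
	{
		/* "mfence; lfence" is patched out when the (hypothetical) bit is clear. */
		alternative("mfence; lfence", "",
			    ALT_NOT(X86_FEATURE_APIC_MSRS_FENCE));
	}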
  
Borislav Petkov Oct. 27, 2023, 3:33 p.m. UTC | #5
On Tue, Jul 04, 2023 at 11:22:22AM +0200, Borislav Petkov wrote:
> I'm wondering if we could make:
> 
> 	alternative("mfence; lfence;", "", X86_VENDOR_AMD);
> 
> work...

Seems to work in my guest here and it doesn't look half as bad. See the
replies to this message.
  

Patch

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 35389b2af88e..0216f63a366b 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -81,22 +81,4 @@  do {									\
 
 #include <asm-generic/barrier.h>
 
-/*
- * Make previous memory operations globally visible before
- * a WRMSR.
- *
- * MFENCE makes writes visible, but only affects load/store
- * instructions.  WRMSR is unfortunately not a load/store
- * instruction and is unaffected by MFENCE.  The LFENCE ensures
- * that the WRMSR is not reordered.
- *
- * Most WRMSRs are full serializing instructions themselves and
- * do not require this barrier.  This is only required for the
- * IA32_TSC_DEADLINE and X2APIC MSRs.
- */
-static inline void weak_wrmsr_fence(void)
-{
-	asm volatile("mfence; lfence" : : : "memory");
-}
-
 #endif /* _ASM_X86_BARRIER_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b216ac80ebcc..983406342484 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -735,4 +735,23 @@  bool arch_is_platform_page(u64 paddr);
 #define arch_is_platform_page arch_is_platform_page
 #endif
 
+/*
+ * Make previous memory operations globally visible before
+ * a WRMSR.
+ *
+ * MFENCE makes writes visible, but only affects load/store
+ * instructions.  WRMSR is unfortunately not a load/store
+ * instruction and is unaffected by MFENCE.  The LFENCE ensures
+ * that the WRMSR is not reordered.
+ *
+ * Most WRMSRs are full serializing instructions themselves and
+ * do not require this barrier.  This is only required for the
+ * IA32_TSC_DEADLINE and X2APIC MSRs.
+ */
+static inline void weak_wrmsr_fence(void)
+{
+	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+		asm volatile("mfence; lfence" : : : "memory");
+}
+
 #endif /* _ASM_X86_PROCESSOR_H */