[v4,6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation

Message ID 20231027-delay-verw-v4-6-9a3622d4bcf7@linux.intel.com
State: New
Series: Delay VERW

Commit Message

Pawan Gupta Oct. 27, 2023, 2:39 p.m. UTC
During VMentry, VERW is executed to mitigate MDS. After VERW, any memory
access such as a register push onto the stack may put host data into
MDS-affected CPU buffers. A guest can then use MDS to sample that host
data.

Although the likelihood of secrets surviving in registers at the current
VERW callsite is low, it can't be ruled out. Harden the MDS mitigation by
moving the VERW mitigation late in the VMentry path.

Note that the VERW for the MMIO Stale Data mitigation is unchanged, because
per-guest conditional VERW is too complex to handle that late in asm with
no GPRs available. If the CPU is also affected by MDS, VERW is executed
unconditionally late in asm regardless of whether the guest has MMIO
access.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
 arch/x86/kvm/vmx/vmenter.S |  3 +++
 arch/x86/kvm/vmx/vmx.c     | 19 ++++++++++++++-----
 2 files changed, 17 insertions(+), 5 deletions(-)
  

Comments

Josh Poimboeuf Dec. 1, 2023, 8:02 p.m. UTC | #1
On Fri, Oct 27, 2023 at 07:39:12AM -0700, Pawan Gupta wrote:
> -	vmx_disable_fb_clear(vmx);
> +	/*
> +	 * Optimize the latency of VERW in guests for MMIO mitigation. Skip
> +	 * the optimization when MDS mitigation(later in asm) is enabled.
> +	 */
> +	if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> +		vmx_disable_fb_clear(vmx);
>  
>  	if (vcpu->arch.cr2 != native_read_cr2())
>  		native_write_cr2(vcpu->arch.cr2);
> @@ -7248,7 +7256,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>  
>  	vmx->idt_vectoring_info = 0;
>  
> -	vmx_enable_fb_clear(vmx);
> +	if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> +		vmx_enable_fb_clear(vmx);
>  

It may be cleaner to instead check X86_FEATURE_CLEAR_CPU_BUF when
setting vmx->disable_fb_clear in the first place, in
vmx_update_fb_clear_dis().
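
[A minimal sketch of that alternative, assuming the upstream
vmx_update_fb_clear_dis() as the base; the added feature check is an
illustration of the suggestion, not part of this series:]

	static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
	{
		/*
		 * If the unconditional VERW in the VMentry asm is enabled
		 * (X86_FEATURE_CLEAR_CPU_BUF), toggling FB_CLEAR around
		 * VMentry buys nothing, so never set disable_fb_clear.
		 */
		vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
					(host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
					!boot_cpu_has_bug(X86_BUG_MDS) &&
					!boot_cpu_has_bug(X86_BUG_TAA);

		/* ... existing per-guest ARCH_CAP checks unchanged ... */
	}

With disable_fb_clear never set in that case, vmx_vcpu_enter_exit() could
keep calling vmx_disable_fb_clear()/vmx_enable_fb_clear() unconditionally,
avoiding the two cpu_feature_enabled() checks added in the hunks above.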
  
Pawan Gupta Dec. 20, 2023, 1:25 a.m. UTC | #2
On Fri, Dec 01, 2023 at 12:02:47PM -0800, Josh Poimboeuf wrote:
> On Fri, Oct 27, 2023 at 07:39:12AM -0700, Pawan Gupta wrote:
> > -	vmx_disable_fb_clear(vmx);
> > +	/*
> > +	 * Optimize the latency of VERW in guests for MMIO mitigation. Skip
> > +	 * the optimization when MDS mitigation(later in asm) is enabled.
> > +	 */
> > +	if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> > +		vmx_disable_fb_clear(vmx);
> >  
> >  	if (vcpu->arch.cr2 != native_read_cr2())
> >  		native_write_cr2(vcpu->arch.cr2);
> > @@ -7248,7 +7256,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> >  
> >  	vmx->idt_vectoring_info = 0;
> >  
> > -	vmx_enable_fb_clear(vmx);
> > +	if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> > +		vmx_enable_fb_clear(vmx);
> >  
> 
> It may be cleaner to instead check X86_FEATURE_CLEAR_CPU_BUF when
> setting vmx->disable_fb_clear in the first place, in
> vmx_update_fb_clear_dis().

Right. Thanks for the review.
  

Patch

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index b3b13ec04bac..139960deb736 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -161,6 +161,9 @@  SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX.  This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
+	/* Clobbers EFLAGS.ZF */
+	CLEAR_CPU_BUFFERS
+
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc .Lvmlaunch
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 24e8694b83fc..a05c6b80b06c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7226,16 +7226,24 @@  static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	guest_state_enter_irqoff();
 
-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+	/*
+	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
+	 * mitigation for MDS is done late in VMentry and is still
+	 * executed in spite of L1D Flush. This is because an extra VERW
+	 * should not matter much after the big hammer L1D Flush.
+	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
-		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
 		mds_clear_cpu_buffers();
 
-	vmx_disable_fb_clear(vmx);
+	/*
+	 * Optimize the latency of VERW in guests for MMIO mitigation. Skip
+	 * the optimization when MDS mitigation(later in asm) is enabled.
+	 */
+	if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
+		vmx_disable_fb_clear(vmx);
 
 	if (vcpu->arch.cr2 != native_read_cr2())
 		native_write_cr2(vcpu->arch.cr2);
@@ -7248,7 +7256,8 @@  static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	vmx->idt_vectoring_info = 0;
 
-	vmx_enable_fb_clear(vmx);
+	if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
+		vmx_enable_fb_clear(vmx);
 
 	if (unlikely(vmx->fail)) {
 		vmx->exit_reason.full = 0xdead;