[08/12] KVM: nSVM: Use KVM-governed feature framework to track "vVM{SAVE,LOAD} enabled"

Message ID 20230217231022.816138-9-seanjc@google.com
State New
Series KVM: x86: Add "governed" X86_FEATURE framework

Commit Message

Sean Christopherson Feb. 17, 2023, 11:10 p.m. UTC
  Track "virtual VMSAVE/VMLOAD exposed to L1" via a governed feature flag
instead of using a dedicated bit/flag in vcpu_svm.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/governed_features.h | 1 +
 arch/x86/kvm/svm/nested.c        | 2 +-
 arch/x86/kvm/svm/svm.c           | 5 ++---
 arch/x86/kvm/svm/svm.h           | 1 -
 4 files changed, 4 insertions(+), 5 deletions(-)
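As context for the patch, the governed-feature framework introduced earlier in this series boils down to a per-vCPU bitmap whose bits are set only when both KVM and guest CPUID enable a feature; `guest_can_use()` then reads that bitmap. A minimal standalone sketch of the idea (names and types are illustrative, not KVM's exact identifiers):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of KVM's governed-feature framework: one bit per
 * governed feature, set only when the feature is enabled in guest CPUID
 * (and, in real KVM, supported by the host). */
enum governed_feature {
	GOV_GBPAGES,
	GOV_XSAVES,
	GOV_NRIPS,
	GOV_TSCRATEMSR,
	GOV_V_VMSAVE_VMLOAD,
	GOV_NR_FEATURES,
};

struct vcpu {
	unsigned long governed_features;   /* one bit per governed feature */
	bool cpuid_has[GOV_NR_FEATURES];   /* stand-in for guest_cpuid_has() */
};

/* Set the governed bit only if the feature is enabled in guest CPUID,
 * mirroring kvm_governed_feature_check_and_set(). */
static void governed_feature_check_and_set(struct vcpu *vcpu,
					   enum governed_feature f)
{
	if (vcpu->cpuid_has[f])
		vcpu->governed_features |= 1ul << f;
}

/* Mirrors guest_can_use(): a cheap bit test instead of a CPUID lookup. */
static bool guest_can_use(const struct vcpu *vcpu, enum governed_feature f)
{
	return vcpu->governed_features & (1ul << f);
}
```

This is why the patch can delete the dedicated `v_vmload_vmsave_enabled` bitfield: the governed bitmap already caches the "host supports it AND guest CPUID has it" result once per CPUID update.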
  

Comments

Yu Zhang Feb. 21, 2023, 3:23 p.m. UTC | #1
On Fri, Feb 17, 2023 at 03:10:18PM -0800, Sean Christopherson wrote:
> Track "virtual VMSAVE/VMLOAD exposed to L1" via a governed feature flag
> instead of using a dedicated bit/flag in vcpu_svm.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/governed_features.h | 1 +
>  arch/x86/kvm/svm/nested.c        | 2 +-
>  arch/x86/kvm/svm/svm.c           | 5 ++---
>  arch/x86/kvm/svm/svm.h           | 1 -
>  4 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
> index 0335576a80a8..b66b9d550f33 100644
> --- a/arch/x86/kvm/governed_features.h
> +++ b/arch/x86/kvm/governed_features.h
> @@ -9,6 +9,7 @@ KVM_GOVERNED_X86_FEATURE(GBPAGES)
>  KVM_GOVERNED_X86_FEATURE(XSAVES)
>  KVM_GOVERNED_X86_FEATURE(NRIPS)
>  KVM_GOVERNED_X86_FEATURE(TSCRATEMSR)
> +KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD)
>  
>  #undef KVM_GOVERNED_X86_FEATURE
>  #undef KVM_GOVERNED_FEATURE
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 30e00c4e07c7..6a96058c0e48 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -107,7 +107,7 @@ static void nested_svm_uninit_mmu_context(struct kvm_vcpu *vcpu)
>  
>  static bool nested_vmcb_needs_vls_intercept(struct vcpu_svm *svm)
>  {
> -	if (!svm->v_vmload_vmsave_enabled)
> +	if (!guest_can_use(&svm->vcpu, X86_FEATURE_V_VMSAVE_VMLOAD))
>  		return true;
>  
>  	if (!nested_npt_enabled(svm))
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index dd4aead5462c..b3f0271c73b9 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -1162,8 +1162,6 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
>  
>  		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 0, 0);
>  		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 0, 0);
> -
> -		svm->v_vmload_vmsave_enabled = false;
>  	} else {
>  		/*
>  		 * If hardware supports Virtual VMLOAD VMSAVE then enable it
> @@ -4153,7 +4151,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>  
>  	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);
>  
> -	svm->v_vmload_vmsave_enabled = vls && guest_cpuid_has(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
> +	if (vls && !guest_cpuid_is_intel(vcpu))
> +		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);

Sorry, why guest_cpuid_is_intel(vcpu)? Is it because an AMD host with virtual
VMSAVE/VMLOAD capability will always expose this feature to all AMD guests?

If yes, how about adding some comments? Thanks!

B.R.
Yu
  
Yu Zhang Feb. 21, 2023, 3:33 p.m. UTC | #2
> Sorry, why guest_cpuid_is_intel(vcpu)? Is it because an AMD host with virtual
> VMSAVE/VMLOAD capability will always expose this feature to all AMD guests?

Oh, sorry. I missed the guest_cpuid_has() in kvm_governed_feature_check_and_set().
So please just ignore my 2nd question.

As to the check of guest_cpuid_is_intel(), is it necessary?

B.R.
Yu
  
Sean Christopherson Feb. 21, 2023, 11:48 p.m. UTC | #3
On Tue, Feb 21, 2023, Yu Zhang wrote:
> > Sorry, why guest_cpuid_is_intel(vcpu)? Is it because an AMD host with virtual
> > VMSAVE/VMLOAD capability will always expose this feature to all AMD guests?
> 
> Oh, sorry. I missed the guest_cpuid_has() in kvm_governed_feature_check_and_set().
> So please just ignore my 2nd question.
> 
> As to the check of guest_cpuid_is_intel(), is it necessary?

Yes?  The comment in init_vmcb_after_set_cpuid() says:

		/*
		 * We must intercept SYSENTER_EIP and SYSENTER_ESP
		 * accesses because the processor only stores 32 bits.
		 * For the same reason we cannot use virtual VMLOAD/VMSAVE.
		 */

but I'm struggling to connect the dots to SYSENTER.  I suspect the comment is
misleading and has nothing to do with 32-bit vs. 64-bit (or I'm reading it wrong), and
should be something like:

	/*
	 * Disable virtual VMLOAD/VMSAVE and intercept VMLOAD/VMSAVE if the
	 * guest CPU is Intel in order to inject #UD.
	 */

In other words, a non-SVM guest shouldn't be allowed to execute VMLOAD/VMSAVE.
  
Yu Zhang Feb. 22, 2023, 6:49 a.m. UTC | #4
On Tue, Feb 21, 2023 at 03:48:07PM -0800, Sean Christopherson wrote:
> On Tue, Feb 21, 2023, Yu Zhang wrote:
> > > Sorry, why guest_cpuid_is_intel(vcpu)? Is it because an AMD host with virtual
> > > VMSAVE/VMLOAD capability will always expose this feature to all AMD guests?
> > 
> > Oh, sorry. I missed the guest_cpuid_has() in kvm_governed_feature_check_and_set().
> > So please just ignore my 2nd question.
> > 
> > As to the check of guest_cpuid_is_intel(), is it necessary?
> 
> Yes?  The comment in init_vmcb_after_set_cpuid() says:
> 
> 		/*
> 		 * We must intercept SYSENTER_EIP and SYSENTER_ESP
> 		 * accesses because the processor only stores 32 bits.
> 		 * For the same reason we cannot use virtual VMLOAD/VMSAVE.
> 		 */
> 
> but I'm struggling to connect the dots to SYSENTER.  I suspect the comment is
> misleading and has nothing to do with 32-bit vs. 64-bit (or I'm reading it wrong), and
> should be something like:
> 
> 	/*
> 	 * Disable virtual VMLOAD/VMSAVE and intercept VMLOAD/VMSAVE if the
> 	 * guest CPU is Intel in order to inject #UD.
> 	 */
> 
> In other words, a non-SVM guest shouldn't be allowed to execute VMLOAD/VMSAVE.

Yes, such an interpretation makes sense: vmload/vmsave shall be intercepted
if the guest CPU is Intel, and #UD shall be injected. I guess this is done indirectly
by checking that EFER_SVME is not set in EFER in nested_svm_check_permissions()?

And as to X86_FEATURE_V_VMSAVE_VMLOAD, should guest_cpuid_has() return true
at all for an Intel guest?

B.R.
Yu
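The EFER_SVME check Yu refers to can be sketched as a minimal standalone model (illustrative names and a simplified exception queue, not KVM's actual code): before emulating any SVM instruction (VMRUN, VMLOAD, VMSAVE, ...), KVM injects #UD if the guest has not enabled SVM in EFER.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define EFER_SVME (1ull << 12)	/* SVM-enable bit in the EFER MSR */
#define UD_VECTOR 6		/* #UD, invalid-opcode exception */

struct vcpu {
	uint64_t efer;
	int pending_exception;	/* simplified stand-in for KVM's exception queue */
};

/* Simplified model of the permission check performed before emulating an
 * intercepted SVM instruction: without EFER.SVME, the instruction is
 * architecturally invalid, so #UD is queued and emulation is refused. */
static bool svm_check_permissions(struct vcpu *vcpu)
{
	if (!(vcpu->efer & EFER_SVME)) {
		vcpu->pending_exception = UD_VECTOR;	/* inject #UD */
		return false;
	}
	return true;
}
```

Note this check fires on the intercepted instruction regardless of the guest's CPUID vendor, which is why the thread goes on to pin down what the Intel-vendor check is actually for.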
  
Sean Christopherson Feb. 22, 2023, 4:39 p.m. UTC | #5
+Maxim

On Wed, Feb 22, 2023, Yu Zhang wrote:
> On Tue, Feb 21, 2023 at 03:48:07PM -0800, Sean Christopherson wrote:
> > On Tue, Feb 21, 2023, Yu Zhang wrote:
> > > > Sorry, why guest_cpuid_is_intel(vcpu)? Is it because an AMD host with virtual
> > > > VMSAVE/VMLOAD capability will always expose this feature to all AMD guests?
> > > 
> > > Oh, sorry. I missed the guest_cpuid_has() in kvm_governed_feature_check_and_set().
> > > So please just ignore my 2nd question.
> > > 
> > > As to the check of guest_cpuid_is_intel(), is it necessary?
> > 
> > Yes?  The comment in init_vmcb_after_set_cpuid() says:
> > 
> > 		/*
> > 		 * We must intercept SYSENTER_EIP and SYSENTER_ESP
> > 		 * accesses because the processor only stores 32 bits.
> > 		 * For the same reason we cannot use virtual VMLOAD/VMSAVE.
> > 		 */
> > 
> > but I'm struggling to connect the dots to SYSENTER.  I suspect the comment is
> > misleading and has nothing to do with 32-bit vs. 64-bit (or I'm reading it wrong), and
> > should be something like:
> > 
> > 	/*
> > 	 * Disable virtual VMLOAD/VMSAVE and intercept VMLOAD/VMSAVE if the
> > 	 * guest CPU is Intel in order to inject #UD.
> > 	 */
> > 
> > In other words, a non-SVM guest shouldn't be allowed to execute VMLOAD/VMSAVE.
> 
> Yes, such an interpretation makes sense: vmload/vmsave shall be intercepted
> if the guest CPU is Intel, and #UD shall be injected. I guess this is done indirectly
> by checking that EFER_SVME is not set in EFER in nested_svm_check_permissions()?

Nope, my interpretation is wrong.  vmload_vmsave_interception() clears the upper
bits of SYSENTER_{EIP,ESP}:

	if (vmload) {
		svm_copy_vmloadsave_state(svm->vmcb, vmcb12);
		svm->sysenter_eip_hi = 0;
		svm->sysenter_esp_hi = 0;
	} else {
		svm_copy_vmloadsave_state(vmcb12, svm->vmcb);
	}

From commit adc2a23734ac ("KVM: nSVM: improve SYSENTER emulation on AMD"):
    
    3. Disable vmload/vmsave virtualization if vendor=GenuineIntel.
       (It is somewhat insane to set vendor=GenuineIntel and still enable
       SVM for the guest but well whatever).
       Then zero the high 32 bit parts when kvm intercepts and emulates vmload.

Presumably AMD hardware loads only the lower 32 bits, which would leave garbage
in the upper bits and even leak state from L1 to L2 (again ignoring the fact that
exposing SVM to an Intel vCPU is bonkers).

I'll opportunistically massage the comment to make it more explicit about why
VMLOAD needs to be intercepted.
 
That said, clearing the bits for this seems wrong.  That would corrupt the MSRs
for 64-bit Intel guests.  The "target" of the fix was 32-bit L2s, i.e. I doubt
anything would notice.

    This patch fixes nested migration of 32 bit nested guests, that was
    broken because incorrect cached values of SYSENTER msrs were stored in
    the migration stream if L1 changed these msrs with
    vmload prior to L2 entry.

Maxim, would anything actually break if KVM let L1 load 64-bit values for the
SYSENTER MSRs?

> And as to X86_FEATURE_V_VMSAVE_VMLOAD, should guest_cpuid_has() return true
> at all for an Intel guest?

Yes, because guest CPUID is userspace controlled.  Except for emulating architectural
side effects, e.g. size of XSAVE area, KVM doesn't sanitize guest CPUID.
  
Yu Zhang Feb. 24, 2023, 9:25 a.m. UTC | #6
On Wed, Feb 22, 2023 at 08:39:01AM -0800, Sean Christopherson wrote:
> +Maxim
> 
> On Wed, Feb 22, 2023, Yu Zhang wrote:
> > On Tue, Feb 21, 2023 at 03:48:07PM -0800, Sean Christopherson wrote:
> > > On Tue, Feb 21, 2023, Yu Zhang wrote:
> > > > > Sorry, why guest_cpuid_is_intel(vcpu)? Is it because an AMD host with virtual
> > > > > VMSAVE/VMLOAD capability will always expose this feature to all AMD guests?
> > > > 
> > > > Oh, sorry. I missed the guest_cpuid_has() in kvm_governed_feature_check_and_set().
> > > > So please just ignore my 2nd question.
> > > > 
> > > > As to the check of guest_cpuid_is_intel(), is it necessary?
> > > 
> > > Yes?  The comment in init_vmcb_after_set_cpuid() says:
> > > 
> > > 		/*
> > > 		 * We must intercept SYSENTER_EIP and SYSENTER_ESP
> > > 		 * accesses because the processor only stores 32 bits.
> > > 		 * For the same reason we cannot use virtual VMLOAD/VMSAVE.
> > > 		 */
> > > 
> > > but I'm struggling to connect the dots to SYSENTER.  I suspect the comment is
> > > misleading and has nothing to do with 32-bit vs. 64-bit (or I'm reading it wrong), and
> > > should be something like:
> > > 
> > > 	/*
> > > 	 * Disable virtual VMLOAD/VMSAVE and intercept VMLOAD/VMSAVE if the
> > > 	 * guest CPU is Intel in order to inject #UD.
> > > 	 */
> > > 
> > > In other words, a non-SVM guest shouldn't be allowed to execute VMLOAD/VMSAVE.
> > 
> > Yes, such an interpretation makes sense: vmload/vmsave shall be intercepted
> > if the guest CPU is Intel, and #UD shall be injected. I guess this is done indirectly
> > by checking that EFER_SVME is not set in EFER in nested_svm_check_permissions()?
> 
> Nope, my interpretation is wrong.  vmload_vmsave_interception() clears the upper
> bits of SYSENTER_{EIP,ESP}
> 
> 	if (vmload) {
> 		svm_copy_vmloadsave_state(svm->vmcb, vmcb12);
> 		svm->sysenter_eip_hi = 0;
> 		svm->sysenter_esp_hi = 0;
> 	} else {
> 		svm_copy_vmloadsave_state(vmcb12, svm->vmcb);
> 	}
> 
> From commit adc2a23734ac ("KVM: nSVM: improve SYSENTER emulation on AMD"):
>     
>     3. Disable vmload/vmsave virtualization if vendor=GenuineIntel.
>        (It is somewhat insane to set vendor=GenuineIntel and still enable
>        SVM for the guest but well whatever).
>        Then zero the high 32 bit parts when kvm intercepts and emulates vmload.
> 
> Presumably AMD hardware loads only the lower 32 bits, which would leave garbage
> in the upper bits and even leak state from L1 to L2 (again ignoring the fact that
> exposing SVM to an Intel vCPU is bonkers).
Is it because L1 is a VM migrated from an Intel platform to an AMD one?

So w/o commit adc2a23734ac ("KVM: nSVM: improve SYSENTER emulation on AMD"):
1> L1 could be a "GenuineIntel" with SVM capability (bizarre as it is), running
in 64-bit mode.
2> With no interception of MSR writes to the SYSENTER_EIP/ESP, L1 may set its
SYSENTER_EIP/ESP to a 64-bit value successfully (though sysenter/sysexit may
fail).
3> L2 could be in 32-bit mode. And if virtual vmload/vmsave is enabled for L1,
only the lower 32 bits of those MSRs will be loaded, leaking the higher 32 bits.

Is the above scenario the reason for Maxim's fix?

But why is it related to nested migration?


> I'll opportunistically massage the comment to make it more explicit about why
> VMLOAD needs to be intercepted.
>  
> That said, clearing the bits for this seems wrong.  That would corrupt the MSRs
> for 64-bit Intel guests.  The "target" of the fix was 32-bit L2s, i.e. I doubt
> anything would notice.
> 
>     This patch fixes nested migration of 32 bit nested guests, that was
>     broken because incorrect cached values of SYSENTER msrs were stored in
>     the migration stream if L1 changed these msrs with
>     vmload prior to L2 entry.

> 
> Maxim, would anything actually break if KVM let L1 load 64-bit values for the
> SYSENTER MSRs?
> 
> > And as to X86_FEATURE_V_VMSAVE_VMLOAD, should guest_cpuid_has() return true
> > at all for an Intel guest?
> 
> Yes, because guest CPUID is userspace controlled.  Except for emulating architectural
> side effects, e.g. size of XSAVE area, KVM doesn't sanitize guest CPUID.
> 

B.R.
Yu
  
Sean Christopherson Feb. 24, 2023, 4:16 p.m. UTC | #7
On Fri, Feb 24, 2023, Yu Zhang wrote:
> On Wed, Feb 22, 2023 at 08:39:01AM -0800, Sean Christopherson wrote:
> > +Maxim
> > 
> > On Wed, Feb 22, 2023, Yu Zhang wrote:
> > > On Tue, Feb 21, 2023 at 03:48:07PM -0800, Sean Christopherson wrote:
> > Nope, my interpretation is wrong.  vmload_vmsave_interception() clears the upper
> > bits of SYSENTER_{EIP,ESP}
> > 
> > 	if (vmload) {
> > 		svm_copy_vmloadsave_state(svm->vmcb, vmcb12);
> > 		svm->sysenter_eip_hi = 0;
> > 		svm->sysenter_esp_hi = 0;
> > 	} else {
> > 		svm_copy_vmloadsave_state(vmcb12, svm->vmcb);
> > 	}
> > 
> > From commit adc2a23734ac ("KVM: nSVM: improve SYSENTER emulation on AMD"):
> >     
> >     3. Disable vmload/vmsave virtualization if vendor=GenuineIntel.
> >        (It is somewhat insane to set vendor=GenuineIntel and still enable
> >        SVM for the guest but well whatever).
> >        Then zero the high 32 bit parts when kvm intercepts and emulates vmload.
> > 
> > Presumably AMD hardware loads only the lower 32 bits, which would leave garbage
> > in the upper bits and even leak state from L1 to L2 (again ignoring the fact that
> > exposing SVM to an Intel vCPU is bonkers).
> Is it because L1 is a VM migrated from an Intel platform to an AMD one?

I believe so.

> So w/o commit adc2a23734ac ("KVM: nSVM: improve SYSENTER emulation on AMD"):
> 1> L1 could be a "GenuineIntel" with SVM capability (bizarre as it is), running
> in 64-bit mode.
> 2> With no interception of MSR writes to the SYSENTER_EIP/ESP, L1 may set its
> SYSENTER_EIP/ESP to a 64-bit value successfully (though sysenter/sysexit may
> fail).

Yes, though the MSRs don't need to be passed through; KVM emulates the full 64 bits
if the guest CPUID model is Intel.

> 3> L2 could be in 32-bit mode. And if virtual vmload/vmsave is enabled for L1,
> only the lower 32 bits of those MSRs will be loaded, leaking the higher 32 bits.
> 
> Is the above scenario the reason for Maxim's fix?

Yes, that's my understanding.

> But why is it related to nested migration?

I understand why it's related, but I don't understand why we bothered to add "support"
for this.

In theory, if L1 is migrated by L0 while L1 is running an L2 that uses SYSENTER,
problems will occur.  I'm a bit lost as to how this matters in practice, as KVM
doesn't support cross-vendor nested virtualization, and if L1 can be enlightened
to the point where it can switch from VMX=>SVM during migration, what's the point
of doing a migration?
  
Sean Christopherson June 29, 2023, 4:50 p.m. UTC | #8
On Wed, Feb 22, 2023, Sean Christopherson wrote:
> +Maxim
> 
> On Wed, Feb 22, 2023, Yu Zhang wrote:
> I'll opportunistically massage the comment to make it more explicit about why
> VMLOAD needs to be intercepted.
>  
> That said, clearing the bits for this seems wrong.  That would corrupt the MSRs
> for 64-bit Intel guests.  The "target" of the fix was 32-bit L2s, i.e. I doubt
> anything would notice.
> 
>     This patch fixes nested migration of 32 bit nested guests, that was
>     broken because incorrect cached values of SYSENTER msrs were stored in
>     the migration stream if L1 changed these msrs with
>     vmload prior to L2 entry.

Aha!  Finally figured out what this code is doing.  KVM intercepts VMLOAD so that
KVM can correctly model the VMLOAD behavior of dropping bits 63:32, i.e. to clear
svm->sysenter_eip_hi and svm->sysenter_esp_hi.

So the code is correct.  I'll add this comment:

	/*
	 * Intercept VMLOAD if the vCPU mode is Intel in order to emulate that
	 * VMLOAD drops bits 63:32 of SYSENTER (ignoring the fact that exposing
	 * SVM on Intel is bonkers and extremely unlikely to work).
	 */
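The behavior that comment pins down (KVM's emulated VMLOAD dropping bits 63:32 of the SYSENTER MSRs, including the cached upper halves) can be sketched as a minimal model. The struct and function names below are illustrative stand-ins, loosely mirroring `svm->sysenter_eip_hi`/`sysenter_esp_hi`, not KVM's actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model: hardware VMLOAD on AMD loads only the low 32 bits
 * of SYSENTER_EIP/ESP, so KVM's intercept-and-emulate path must both
 * truncate the MSR values and clear the upper halves it caches for
 * Intel-flavored vCPUs. */
struct sysenter_state {
	uint64_t eip;
	uint64_t esp;
	uint32_t eip_hi;	/* cached upper half, cf. svm->sysenter_eip_hi */
	uint32_t esp_hi;	/* cached upper half, cf. svm->sysenter_esp_hi */
};

/* Emulate VMLOAD's architectural effect on the SYSENTER MSRs, given the
 * values supplied by L1's vmcb12. */
static void emulate_vmload_sysenter(struct sysenter_state *s,
				    uint64_t vmcb12_eip, uint64_t vmcb12_esp)
{
	/* VMLOAD loads only the low 32 bits ... */
	s->eip = (uint32_t)vmcb12_eip;
	s->esp = (uint32_t)vmcb12_esp;
	/* ... so the cached upper halves must be cleared to match. */
	s->eip_hi = 0;
	s->esp_hi = 0;
}
```

Letting hardware-virtualized VMLOAD run instead would skip the `*_hi` clearing, which is exactly the stale-cached-MSR problem the intercept avoids.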
  
Yu Zhang June 30, 2023, 10 a.m. UTC | #9
On Thu, Jun 29, 2023 at 09:50:34AM -0700, Sean Christopherson wrote:
> On Wed, Feb 22, 2023, Sean Christopherson wrote:
> > +Maxim
> > 
> > On Wed, Feb 22, 2023, Yu Zhang wrote:
> > I'll opportunistically massage the comment to make it more explicit about why
> > VMLOAD needs to be intercepted.
> >  
> > That said, clearing the bits for this seems wrong.  That would corrupt the MSRs
> > for 64-bit Intel guests.  The "target" of the fix was 32-bit L2s, i.e. I doubt
> > anything would notice.
> > 
> >     This patch fixes nested migration of 32 bit nested guests, that was
> >     broken because incorrect cached values of SYSENTER msrs were stored in
> >     the migration stream if L1 changed these msrs with
> >     vmload prior to L2 entry.
> 
> Aha!  Finally figured out what this code is doing.  KVM intercepts VMLOAD so that
> KVM can correctly model the VMLOAD behavior of dropping bits 63:32, i.e. to clear
> svm->sysenter_eip_hi and svm->sysenter_esp_hi.
> 
> So the code is correct.  I'll add this comment:
> 
> 	/*
> 	 * Intercept VMLOAD if the vCPU mode is Intel in order to emulate that
> 	 * VMLOAD drops bits 63:32 of SYSENTER (ignoring the fact that exposing
> 	 * SVM on Intel is bonkers and extremely unlikely to work).
> 	 */
> 
Oh... Because L2 will never be a 64-bit Intel guest, and the emulation of VMLOAD
shall follow the APM's requirement (to clear the upper 32 bits)?

Thanks a lot for bringing me back to this discussion... I totally forgot it. :)

B.R.
Yu
 
Thanks a lot for this explanation, Sean!
  

Patch

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 0335576a80a8..b66b9d550f33 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -9,6 +9,7 @@  KVM_GOVERNED_X86_FEATURE(GBPAGES)
 KVM_GOVERNED_X86_FEATURE(XSAVES)
 KVM_GOVERNED_X86_FEATURE(NRIPS)
 KVM_GOVERNED_X86_FEATURE(TSCRATEMSR)
+KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD)
 
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 30e00c4e07c7..6a96058c0e48 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -107,7 +107,7 @@  static void nested_svm_uninit_mmu_context(struct kvm_vcpu *vcpu)
 
 static bool nested_vmcb_needs_vls_intercept(struct vcpu_svm *svm)
 {
-	if (!svm->v_vmload_vmsave_enabled)
+	if (!guest_can_use(&svm->vcpu, X86_FEATURE_V_VMSAVE_VMLOAD))
 		return true;
 
 	if (!nested_npt_enabled(svm))
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dd4aead5462c..b3f0271c73b9 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1162,8 +1162,6 @@  static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 0, 0);
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 0, 0);
-
-		svm->v_vmload_vmsave_enabled = false;
 	} else {
 		/*
 		 * If hardware supports Virtual VMLOAD VMSAVE then enable it
@@ -4153,7 +4151,8 @@  static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);
 
-	svm->v_vmload_vmsave_enabled = vls && guest_cpuid_has(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
+	if (vls && !guest_cpuid_is_intel(vcpu))
+		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
 
 	svm->pause_filter_enabled = kvm_cpu_cap_has(X86_FEATURE_PAUSEFILTER) &&
 			guest_cpuid_has(vcpu, X86_FEATURE_PAUSEFILTER);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index a523cfcdd12e..1e3e7462b1d7 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -258,7 +258,6 @@  struct vcpu_svm {
 	bool soft_int_injected;
 
 	/* optional nested SVM features that are enabled for this guest  */
-	bool v_vmload_vmsave_enabled      : 1;
 	bool lbrv_enabled                 : 1;
 	bool pause_filter_enabled         : 1;
 	bool pause_threshold_enabled      : 1;