[1/8] KVM: SVM: extract VMCB accessors to a new file

Message ID 20221107145436.276079-2-pbonzini@redhat.com
State New
Series KVM: SVM: fixes for vmentry code

Commit Message

Paolo Bonzini Nov. 7, 2022, 2:54 p.m. UTC
  Having inline functions confuses the compilation of asm-offsets.c,
which cannot find kvm_cache_regs.h because arch/x86/kvm is not in
asm-offsets.c's include path.  Just extract the functions to a
new file.

No functional change intended.

Cc: stable@vger.kernel.org
Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/avic.c         |   1 +
 arch/x86/kvm/svm/nested.c       |   1 +
 arch/x86/kvm/svm/sev.c          |   1 +
 arch/x86/kvm/svm/svm.c          |   1 +
 arch/x86/kvm/svm/svm.h          | 200 ------------------------------
 arch/x86/kvm/svm/svm_onhyperv.c |   1 +
 arch/x86/kvm/svm/vmcb.h         | 211 ++++++++++++++++++++++++++++++++
 7 files changed, 216 insertions(+), 200 deletions(-)
 create mode 100644 arch/x86/kvm/svm/vmcb.h
  

Comments

Sean Christopherson Nov. 7, 2022, 5:08 p.m. UTC | #1
On Mon, Nov 07, 2022, Paolo Bonzini wrote:
> Having inline functions confuses the compilation of asm-offsets.c,
> which cannot find kvm_cache_regs.h because arch/x86/kvm is not in
> asm-offsets.c's include path.  Just extract the functions to a
> new file.
> 
> No functional change intended.
> 
> Cc: stable@vger.kernel.org
> Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly")
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/svm/avic.c         |   1 +
>  arch/x86/kvm/svm/nested.c       |   1 +
>  arch/x86/kvm/svm/sev.c          |   1 +
>  arch/x86/kvm/svm/svm.c          |   1 +
>  arch/x86/kvm/svm/svm.h          | 200 ------------------------------
>  arch/x86/kvm/svm/svm_onhyperv.c |   1 +
>  arch/x86/kvm/svm/vmcb.h         | 211 ++++++++++++++++++++++++++++++++

I don't think vmcb.h is a good name.  The logical inclusion sequence would be for
svm.h to include vmcb.h, e.g. SVM requires knowledge about VMCBs, but this requires
vmcb.h to include svm.h to dereference "struct vcpu_svm".

Unlike VMX's vmcs.h, the new file isn't a "pure" VMCB helper, it also holds a
decent amount of KVM's SVM logic.

What about making KVM self-sufficient?  The includes in asm-offsets.c are quite
ugly

 #include "../kvm/vmx/vmx.h"
 #include "../kvm/svm/svm.h"

or as a stopgap to make backporting easier, just include kvm_cache_regs.h?

>  void svm_leave_nested(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c
> index 8cdc62c74a96..ae0a101329e6 100644
> --- a/arch/x86/kvm/svm/svm_onhyperv.c
> +++ b/arch/x86/kvm/svm/svm_onhyperv.c
> @@ -8,6 +8,7 @@
>  #include <asm/mshyperv.h>
>  
>  #include "svm.h"
> +#include "vmcb.h"
>  #include "svm_ops.h"
>  
>  #include "hyperv.h"
> diff --git a/arch/x86/kvm/svm/vmcb.h b/arch/x86/kvm/svm/vmcb.h
> new file mode 100644
> index 000000000000..8757cda27e3a
> --- /dev/null
> +++ b/arch/x86/kvm/svm/vmcb.h
> @@ -0,0 +1,211 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Kernel-based Virtual Machine driver for Linux
> + *
> + * AMD SVM support - VMCB accessors
> + */
> +
> +#ifndef __SVM_VMCB_H
> +#define __SVM_VMCB_H
> +
> +#include "kvm_cache_regs.h"

This should include "svm.h" instead of relying on the parent to include said file.
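
For illustration, the include ordering suggested here might look roughly like this at the top of the new header (a sketch only, not part of the posted patch; the comments about what each header provides are assumptions):

  /* arch/x86/kvm/svm/vmcb.h -- hypothetical top-of-file per the review comment */
  #ifndef __SVM_VMCB_H
  #define __SVM_VMCB_H

  #include "kvm_cache_regs.h"	/* is_guest_mode() and the register cache helpers */
  #include "svm.h"		/* struct vcpu_svm, sev_es_guest(), recalc_intercepts() */

  /* ... the VMCB accessor functions from the patch below ... */

  #endif /* __SVM_VMCB_H */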
  
Paolo Bonzini Nov. 7, 2022, 5:36 p.m. UTC | #2
On 11/7/22 18:08, Sean Christopherson wrote:
> On Mon, Nov 07, 2022, Paolo Bonzini wrote:
>> Having inline functions confuses the compilation of asm-offsets.c,
>> which cannot find kvm_cache_regs.h because arch/x86/kvm is not in
>> asm-offsets.c's include path.  Just extract the functions to a
>> new file.
>>
>> No functional change intended.
>>
>> Cc: stable@vger.kernel.org
>> Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly")
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>   arch/x86/kvm/svm/avic.c         |   1 +
>>   arch/x86/kvm/svm/nested.c       |   1 +
>>   arch/x86/kvm/svm/sev.c          |   1 +
>>   arch/x86/kvm/svm/svm.c          |   1 +
>>   arch/x86/kvm/svm/svm.h          | 200 ------------------------------
>>   arch/x86/kvm/svm/svm_onhyperv.c |   1 +
>>   arch/x86/kvm/svm/vmcb.h         | 211 ++++++++++++++++++++++++++++++++
> 
> I don't think vmcb.h is a good name.  The logical inclusion sequence would be for
> svm.h to include vmcb.h, e.g. SVM requires knowledge about VMCBs, but this requires
> vmcb.h to include svm.h to dereference "struct vcpu_svm".
> Unlike VMX's vmcs.h, the new file isn't a "pure" VMCB helper, it also holds a
> decent amount of KVM's SVM logic.

Yes, it's basically the wrappers that KVM uses to access the VMCB fields.

> What about making KVM self-sufficient?

You mean having a different asm-offsets.h file just for arch/x86/kvm/?

> The includes in asm-offsets.c are quite ugly
> 
>   #include "../kvm/vmx/vmx.h"
>   #include "../kvm/svm/svm.h"
> 
> or as a stopgap to make backporting easier, just include kvm_cache_regs.h?

The problem is that the _existing_ include of kvm_cache_regs.h in svm.h 
fails, with

arch/x86/kernel/../kvm/svm/svm.h:25:10: fatal error: kvm_cache_regs.h: 
No such file or directory
    25 | #include "kvm_cache_regs.h"
       |          ^~~~~~~~~~~~~~~~~~
compilation terminated.

The other two solutions here are:

1) move kvm_cache_regs.h to arch/x86/include/asm/ so it can be included 
normally

2) extract the structs to arch/x86/kvm/svm/svm_types.h and include that 
from asm-offsets.h, basically the opposite of this patch.

(2) is my preference if having a different asm-offsets.h file turns out 
to be too complex.  We can do the same for VMX as well.

Paolo

>>   void svm_leave_nested(struct kvm_vcpu *vcpu);
>> diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c
>> index 8cdc62c74a96..ae0a101329e6 100644
>> --- a/arch/x86/kvm/svm/svm_onhyperv.c
>> +++ b/arch/x86/kvm/svm/svm_onhyperv.c
>> @@ -8,6 +8,7 @@
>>   #include <asm/mshyperv.h>
>>   
>>   #include "svm.h"
>> +#include "vmcb.h"
>>   #include "svm_ops.h"
>>   
>>   #include "hyperv.h"
>> diff --git a/arch/x86/kvm/svm/vmcb.h b/arch/x86/kvm/svm/vmcb.h
>> new file mode 100644
>> index 000000000000..8757cda27e3a
>> --- /dev/null
>> +++ b/arch/x86/kvm/svm/vmcb.h
>> @@ -0,0 +1,211 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * Kernel-based Virtual Machine driver for Linux
>> + *
>> + * AMD SVM support - VMCB accessors
>> + */
>> +
>> +#ifndef __SVM_VMCB_H
>> +#define __SVM_VMCB_H
>> +
>> +#include "kvm_cache_regs.h"
> 
> This should include "svm.h" instead of relying on the parent to include said file.
>
  
Sean Christopherson Nov. 7, 2022, 6:14 p.m. UTC | #3
On Mon, Nov 07, 2022, Paolo Bonzini wrote:
> On 11/7/22 18:08, Sean Christopherson wrote:
> > What about making KVM self-sufficient?
> 
> You mean having a different asm-offsets.h file just for arch/x86/kvm/?

Yeah.

> > The includes in asm-offsets.c are quite ugly
> > 
> >   #include "../kvm/vmx/vmx.h"
> >   #include "../kvm/svm/svm.h"
> > 
> > or as a stopgap to make backporting easier, just include kvm_cache_regs.h?
> 
> The problem is that the _existing_ include of kvm_cache_regs.h in svm.h
> fails, with
> 
> arch/x86/kernel/../kvm/svm/svm.h:25:10: fatal error: kvm_cache_regs.h: No
> such file or directory
>    25 | #include "kvm_cache_regs.h"
>       |          ^~~~~~~~~~~~~~~~~~
> compilation terminated.

Duh.  Now the changelog makes more sense...

> The other two solutions here are:
> 
> 1) move kvm_cache_regs.h to arch/x86/include/asm/ so it can be included
> normally
> 
> 2) extract the structs to arch/x86/kvm/svm/svm_types.h and include that from
> asm-offsets.h, basically the opposite of this patch.
> 
> (2) is my preference if having a different asm-offsets.h file turns out to
> be too complex.  We can do the same for VMX as well.

What about adding dedicated structs to hold the non-regs params for VM-Enter and
VMRUN?  Grabbing stuff willy-nilly in the assembly code makes the flows difficult
to read as there's nothing in the C code that describes what fields are actually
used.  And due to 32-bit's restriction on the number of params, maintaining the
ad hoc approach will be annoying as passing in new info will require shuffling,
and any KVM refactorings will need updates to asm-offsets.c, e.g. "spec_ctrl"
should really live in "struct kvm_vcpu" since it's common to both AMD and Intel.

That would also allow fixing the bugs introduced by commit bb06650634d3 ("KVM:
VMX: Convert launched argument to flags").  nested_vmx_check_vmentry_hw() never
fully enters the guest; at worst, it triggers a "VM-Exit on VM-Enter" consistency
check.  Thus there's no need to load the guest's spec control and zero chance that
the guest can write to spec control.

E.g. as a very rough starting point

---
 arch/x86/include/asm/kvm_host.h |  8 ++++++++
 arch/x86/kvm/svm/svm.c          | 13 ++++++++++---
 arch/x86/kvm/svm/svm.h          |  4 ++--
 arch/x86/kvm/vmx/nested.c       |  8 ++++++--
 arch/x86/kvm/vmx/run_flags.h    |  8 --------
 arch/x86/kvm/vmx/vmx.c          | 32 +++++++++-----------------------
 arch/x86/kvm/vmx/vmx.h          |  4 +---
 7 files changed, 36 insertions(+), 41 deletions(-)
 delete mode 100644 arch/x86/kvm/vmx/run_flags.h

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 415113dea951..d56fe6151656 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -204,6 +204,14 @@ enum exit_fastpath_completion {
 };
 typedef enum exit_fastpath_completion fastpath_t;
 
+struct kvm_vmrun_params {
+	...
+};
+
+struct kvm_vmenter_params {
+	...
+};
+
 struct x86_emulate_ctxt;
 struct x86_exception;
 union kvm_smram;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 627c126cd9bb..7df9ea3ad3f1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3925,16 +3925,23 @@ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_intercepted)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	struct kvm_vmrun_params vmrun = {
+		.regs = vcpu->arch.regs,
+		.vmcb01 = svm->vmcb01.ptr,
+		.vmcb = svm->current_vmcb->ptr,
+		.spec_ctrl = svm->current_vmcb->ptr,
+		.spec_ctrl_intercepted = spec_ctrl_intercepted,
+	};
 
 	guest_state_enter_irqoff();
 
 	if (sev_es_guest(vcpu->kvm)) {
-		__svm_sev_es_vcpu_run(svm, spec_ctrl_intercepted);
+		__svm_sev_es_vcpu_run(&params);
 	} else {
 		struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
 
-		__svm_vcpu_run(svm, __sme_page_pa(sd->save_area),
-			       spec_ctrl_intercepted);
+		params.save_save_pa = __sme_page_pa(sd->save_area);
+		__svm_vcpu_run(vcpu->arch.regs, &params);
 	}
 
 	guest_state_exit_irqoff();
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 9d940d8736f0..bf321b755a15 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -483,7 +483,7 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 
 /* vmenter.S */
 
-void __svm_sev_es_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted);
-void __svm_vcpu_run(struct vcpu_svm *svm, unsigned long hsave_pa, bool spec_ctrl_intercepted);
+void __svm_sev_es_vcpu_run(struct kvm_vmrun_params *params);
+void __svm_vcpu_run(unsigned long *regs, struct kvm_vmrun_params *params);
 
 #endif
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 61a2e551640a..da6c9b8c3a4f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3058,6 +3058,11 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
 
 static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 {
+	struct kvm_vmenter_params params = {
+		.launched = vmx->loaded_vmcs->launched,
+		.spec_ctrl = this_cpu_read(x86_spec_ctrl_current),
+		.spec_ctrl_intercepted = true,
+	};
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4;
 	bool vm_fail;
@@ -3094,8 +3099,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		vmx->loaded_vmcs->host_state.cr4 = cr4;
 	}
 
-	vm_fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
-				 __vmx_vcpu_run_flags(vmx));
+	vm_fail = __vmx_vcpu_run(vcpu->arch.regs, &params);
 
 	if (vmx->msr_autoload.host.nr)
 		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
deleted file mode 100644
index edc3f16cc189..000000000000
--- a/arch/x86/kvm/vmx/run_flags.h
+++ /dev/null
@@ -1,8 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __KVM_X86_VMX_RUN_FLAGS_H
-#define __KVM_X86_VMX_RUN_FLAGS_H
-
-#define VMX_RUN_VMRESUME	(1 << 0)
-#define VMX_RUN_SAVE_SPEC_CTRL	(1 << 1)
-
-#endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 05a747c9a9ff..307380cd2000 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -847,24 +847,6 @@ static bool msr_write_intercepted(struct vcpu_vmx *vmx, u32 msr)
 	return vmx_test_msr_bitmap_write(vmx->loaded_vmcs->msr_bitmap, msr);
 }
 
-unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
-{
-	unsigned int flags = 0;
-
-	if (vmx->loaded_vmcs->launched)
-		flags |= VMX_RUN_VMRESUME;
-
-	/*
-	 * If writes to the SPEC_CTRL MSR aren't intercepted, the guest is free
-	 * to change it directly without causing a vmexit.  In that case read
-	 * it after vmexit and store it in vmx->spec_ctrl.
-	 */
-	if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL)))
-		flags |= VMX_RUN_SAVE_SPEC_CTRL;
-
-	return flags;
-}
-
 static __always_inline void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
 		unsigned long entry, unsigned long exit)
 {
@@ -7065,9 +7047,14 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 }
 
 static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
-					struct vcpu_vmx *vmx,
-					unsigned long flags)
+					struct vcpu_vmx *vmx)
 {
+	struct kvm_vmenter_params params = {
+		.launched = vmx->loaded_vmcs->launched,
+		.spec_ctrl = vmx->spec_ctrl,
+		.spec_ctrl_intercepted = msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL),
+	};
+
 	guest_state_enter_irqoff();
 
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
@@ -7084,8 +7071,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	if (vcpu->arch.cr2 != native_read_cr2())
 		native_write_cr2(vcpu->arch.cr2);
 
-	vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
-				   flags);
+	vmx->fail = __vmx_vcpu_run(vcpu->arch.regs, &params);
 
 	vcpu->arch.cr2 = native_read_cr2();
 
@@ -7185,7 +7171,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	kvm_wait_lapic_expire(vcpu);
 
 	/* The actual VMENTER/EXIT is in the .noinstr.text section. */
-	vmx_vcpu_enter_exit(vcpu, vmx, __vmx_vcpu_run_flags(vmx));
+	vmx_vcpu_enter_exit(vcpu, vmx);
 
 	/* All fields are clean at this point */
 	if (static_branch_unlikely(&enable_evmcs)) {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index a3da84f4ea45..4eb196f88b47 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -421,9 +421,7 @@ struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr);
 void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu);
 void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
 void vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx, unsigned int flags);
-unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx);
-bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs,
-		    unsigned int flags);
+bool __vmx_vcpu_run(unsigned long *regs, struct kvm_vmenter_params *params);
 int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
 void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu);
 

base-commit: 0443d79faa4575a5871b54801ed4a36eecce32e3
--
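
For concreteness, the "..." struct bodies in the rough diff above could be filled in from the fields its initializers actually pass. The following is only a guess at the shape; the field names and types are hypothetical, inferred from the sketch, and are not something that was posted:

  struct kvm_vmrun_params {
  	unsigned long *regs;		/* vcpu->arch.regs */
  	struct vmcb *vmcb01;		/* svm->vmcb01.ptr */
  	struct vmcb *vmcb;		/* svm->current_vmcb->ptr */
  	unsigned long save_area_pa;	/* __sme_page_pa(sd->save_area); "save_save_pa" in the diff */
  	u64 *spec_ctrl;			/* where to save the guest's SPEC_CTRL, presumably */
  	bool spec_ctrl_intercepted;
  };

  struct kvm_vmenter_params {
  	bool launched;			/* VMRESUME vs. VMLAUNCH */
  	u64 spec_ctrl;			/* value to restore on exit */
  	bool spec_ctrl_intercepted;
  };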
  
Paolo Bonzini Nov. 7, 2022, 6:51 p.m. UTC | #4
On 11/7/22 19:14, Sean Christopherson wrote:
> On Mon, Nov 07, 2022, Paolo Bonzini wrote:
>> On 11/7/22 18:08, Sean Christopherson wrote:
>>> What about making KVM self-sufficient?
>>
>> You mean having a different asm-offsets.h file just for arch/x86/kvm/?
> 
> Yeah.

I'll try tomorrow, wish me luck. :)

>> The problem is that the _existing_ include of kvm_cache_regs.h in svm.h
>> fails, with
>>
>> arch/x86/kernel/../kvm/svm/svm.h:25:10: fatal error: kvm_cache_regs.h: No
>> such file or directory
>>     25 | #include "kvm_cache_regs.h"
>>        |          ^~~~~~~~~~~~~~~~~~
>> compilation terminated.
> 
> Duh.  Now the changelog makes more sense...

I'll add the commit message, though I also would not mind getting rid of 
this patch.

>> The other two solutions here are:
>>
>> 1) move kvm_cache_regs.h to arch/x86/include/asm/ so it can be included
>> normally
>>
>> 2) extract the structs to arch/x86/kvm/svm/svm_types.h and include that from
>> asm-offsets.h, basically the opposite of this patch.
>>
>> (2) is my preference if having a different asm-offsets.h file turns out to
>> be too complex.  We can do the same for VMX as well.
> 
> What about adding dedicated structs to hold the non-regs params for VM-Enter and
> VMRUN?  Grabbing stuff willy-nilly in the assembly code makes the flows difficult
> to read as there's nothing in the C code that describes what fields are actually
> used.

What fields are actually used is (like with any other function) 
"potentially all, you'll have to read the source code and in fact you 
can just read asm-offsets.c instead".  What I mean is, I cannot offhand 
see or remember what fields are touched by svm_prepare_switch_to_guest, 
why would __svm_vcpu_run be any different?  And the new SVM assembly 
code is quite readable overall.

> And due to 32-bit's restriction on the number of params, maintaining the
> ad hoc approach will be annoying as passing in new info will require shuffling,
> and any KVM refactorings will need updates to asm-offsets.c, e.g. "spec_ctrl"
> should really live in "struct kvm_vcpu" since it's common to both AMD and Intel.

So what? :)  We're talking of 2 fields (regs, spec_ctrl) for VMX and 4 
(regs, spec_ctrl, svm->current_vmcb and svm->vmcb01---for simplicity all 
of them) for SVM, and they don't change very often at all.  If 
hypothetically KVM had used similar assembly code from the get go, there 
would have been 3 changes in 15 years: spec_ctrl was added for SSBD, and 
the other two fields correspond to two changes to the nested VMCB 
handling (svm->current_vmcb was first introduced together with vmcb02, 
and later VMSAVE/VMLOAD started always using vmcb01).  That's not too bad.

> That would also allow fixing the bugs introduced by commit bb06650634d3 ("KVM:
> VMX: Convert launched argument to flags").  nested_vmx_check_vmentry_hw() never
> fully enters the guest; at worst, it triggers a "VM-Exit on VM-Enter" consistency
> check.  Thus there's no need to load the guest's spec control and zero chance that
> the guest can write to spec control.

Hmm, I'm not sure how it's related to this series.  Is the problem also 
with the three-argument limit?

Paolo

> E.g. as a very rough starting point
> 
> ---
>   arch/x86/include/asm/kvm_host.h |  8 ++++++++
>   arch/x86/kvm/svm/svm.c          | 13 ++++++++++---
>   arch/x86/kvm/svm/svm.h          |  4 ++--
>   arch/x86/kvm/vmx/nested.c       |  8 ++++++--
>   arch/x86/kvm/vmx/run_flags.h    |  8 --------
>   arch/x86/kvm/vmx/vmx.c          | 32 +++++++++-----------------------
>   arch/x86/kvm/vmx/vmx.h          |  4 +---
>   7 files changed, 36 insertions(+), 41 deletions(-)
>   delete mode 100644 arch/x86/kvm/vmx/run_flags.h
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 415113dea951..d56fe6151656 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -204,6 +204,14 @@ enum exit_fastpath_completion {
>   };
>   typedef enum exit_fastpath_completion fastpath_t;
>   
> +struct kvm_vmrun_params {
> +	...
> +};
> +
> +struct kvm_vmenter_params {
> +	...
> +};
> +
>   struct x86_emulate_ctxt;
>   struct x86_exception;
>   union kvm_smram;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 627c126cd9bb..7df9ea3ad3f1 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3925,16 +3925,23 @@ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
>   static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_intercepted)
>   {
>   	struct vcpu_svm *svm = to_svm(vcpu);
> +	struct kvm_vmrun_params vmrun = {
> +		.regs = vcpu->arch.regs,
> +		.vmcb01 = svm->vmcb01.ptr,
> +		.vmcb = svm->current_vmcb->ptr,
> +		.spec_ctrl = svm->current_vmcb->ptr,
> +		.spec_ctrl_intercepted = spec_ctrl_intercepted,
> +	};
>   
>   	guest_state_enter_irqoff();
>   
>   	if (sev_es_guest(vcpu->kvm)) {
> -		__svm_sev_es_vcpu_run(svm, spec_ctrl_intercepted);
> +		__svm_sev_es_vcpu_run(&params);
>   	} else {
>   		struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
>   
> -		__svm_vcpu_run(svm, __sme_page_pa(sd->save_area),
> -			       spec_ctrl_intercepted);
> +		params.save_save_pa = __sme_page_pa(sd->save_area);
> +		__svm_vcpu_run(vcpu->arch.regs, &params);
>   	}
>   
>   	guest_state_exit_irqoff();
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 9d940d8736f0..bf321b755a15 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -483,7 +483,7 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
>   
>   /* vmenter.S */
>   
> -void __svm_sev_es_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted);
> -void __svm_vcpu_run(struct vcpu_svm *svm, unsigned long hsave_pa, bool spec_ctrl_intercepted);
> +void __svm_sev_es_vcpu_run(struct kvm_vmrun_params *params);
> +void __svm_vcpu_run(unsigned long *regs, struct kvm_vmrun_params *params);
>   
>   #endif
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 61a2e551640a..da6c9b8c3a4f 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -3058,6 +3058,11 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
>   
>   static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
>   {
> +	struct kvm_vmenter_params params = {
> +		.launched = vmx->loaded_vmcs->launched,
> +		.spec_ctrl = this_cpu_read(x86_spec_ctrl_current),
> +		.spec_ctrl_intercepted = true,
> +	};
>   	struct vcpu_vmx *vmx = to_vmx(vcpu);
>   	unsigned long cr3, cr4;
>   	bool vm_fail;
> @@ -3094,8 +3099,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
>   		vmx->loaded_vmcs->host_state.cr4 = cr4;
>   	}
>   
> -	vm_fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
> -				 __vmx_vcpu_run_flags(vmx));
> +	vm_fail = __vmx_vcpu_run(vcpu->arch.regs, &params);
>   
>   	if (vmx->msr_autoload.host.nr)
>   		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
> diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> deleted file mode 100644
> index edc3f16cc189..000000000000
> --- a/arch/x86/kvm/vmx/run_flags.h
> +++ /dev/null
> @@ -1,8 +0,0 @@
> -/* SPDX-License-Identifier: GPL-2.0 */
> -#ifndef __KVM_X86_VMX_RUN_FLAGS_H
> -#define __KVM_X86_VMX_RUN_FLAGS_H
> -
> -#define VMX_RUN_VMRESUME	(1 << 0)
> -#define VMX_RUN_SAVE_SPEC_CTRL	(1 << 1)
> -
> -#endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 05a747c9a9ff..307380cd2000 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -847,24 +847,6 @@ static bool msr_write_intercepted(struct vcpu_vmx *vmx, u32 msr)
>   	return vmx_test_msr_bitmap_write(vmx->loaded_vmcs->msr_bitmap, msr);
>   }
>   
> -unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
> -{
> -	unsigned int flags = 0;
> -
> -	if (vmx->loaded_vmcs->launched)
> -		flags |= VMX_RUN_VMRESUME;
> -
> -	/*
> -	 * If writes to the SPEC_CTRL MSR aren't intercepted, the guest is free
> -	 * to change it directly without causing a vmexit.  In that case read
> -	 * it after vmexit and store it in vmx->spec_ctrl.
> -	 */
> -	if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL)))
> -		flags |= VMX_RUN_SAVE_SPEC_CTRL;
> -
> -	return flags;
> -}
> -
>   static __always_inline void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
>   		unsigned long entry, unsigned long exit)
>   {
> @@ -7065,9 +7047,14 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
>   }
>   
>   static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> -					struct vcpu_vmx *vmx,
> -					unsigned long flags)
> +					struct vcpu_vmx *vmx)
>   {
> +	struct kvm_vmenter_params params = {
> +		.launched = vmx->loaded_vmcs->launched,
> +		.spec_ctrl = vmx->spec_ctrl,
> +		.spec_ctrl_intercepted = msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL),
> +	};
> +
>   	guest_state_enter_irqoff();
>   
>   	/* L1D Flush includes CPU buffer clear to mitigate MDS */
> @@ -7084,8 +7071,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>   	if (vcpu->arch.cr2 != native_read_cr2())
>   		native_write_cr2(vcpu->arch.cr2);
>   
> -	vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
> -				   flags);
> +	vmx->fail = __vmx_vcpu_run(vcpu->arch.regs, &params);
>   
>   	vcpu->arch.cr2 = native_read_cr2();
>   
> @@ -7185,7 +7171,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
>   	kvm_wait_lapic_expire(vcpu);
>   
>   	/* The actual VMENTER/EXIT is in the .noinstr.text section. */
> -	vmx_vcpu_enter_exit(vcpu, vmx, __vmx_vcpu_run_flags(vmx));
> +	vmx_vcpu_enter_exit(vcpu, vmx);
>   
>   	/* All fields are clean at this point */
>   	if (static_branch_unlikely(&enable_evmcs)) {
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index a3da84f4ea45..4eb196f88b47 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -421,9 +421,7 @@ struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr);
>   void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu);
>   void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
>   void vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx, unsigned int flags);
> -unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx);
> -bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs,
> -		    unsigned int flags);
> +bool __vmx_vcpu_run(unsigned long *regs, struct kvm_vmenter_params *params);
>   int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
>   void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu);
>   
> 
> base-commit: 0443d79faa4575a5871b54801ed4a36eecce32e3
  
Paolo Bonzini Nov. 8, 2022, 8:52 a.m. UTC | #5
On 11/7/22 19:14, Sean Christopherson wrote:
> On Mon, Nov 07, 2022, Paolo Bonzini wrote:
>> On 11/7/22 18:08, Sean Christopherson wrote:
>>> What about making KVM self-sufficient?
>>
>> You mean having a different asm-offsets.h file just for arch/x86/kvm/?
> 
> Yeah.

Doh, it would have been enough to add #ifdef COMPILE_OFFSETS to 
svm/svm.h, but it was also pretty easy to generate a separate 
asm-offsets file so why not.

Paolo
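
For reference, the #ifdef COMPILE_OFFSETS variant mentioned above could look roughly like the following in svm/svm.h. This is a hypothetical sketch, not what was posted or merged: since asm-offsets.c opens with "#define COMPILE_OFFSETS", the include it cannot resolve, plus the few inline helpers that use is_guest_mode() from it, could simply be hidden from that one compilation unit.

  #ifndef COMPILE_OFFSETS		/* asm-offsets.c cannot find kvm_cache_regs.h */
  #include "kvm_cache_regs.h"

  static inline bool nested_svm_virtualize_tpr(struct kvm_vcpu *vcpu)
  {
  	struct vcpu_svm *svm = to_svm(vcpu);

  	return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK);
  }

  /* ... likewise for get_vgif_vmcb() and the other helpers that need it ... */
  #endif /* COMPILE_OFFSETS */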
  

Patch

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 6919dee69f18..cc651a3310b1 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -26,6 +26,7 @@ 
 #include "x86.h"
 #include "irq.h"
 #include "svm.h"
+#include "vmcb.h"
 
 /* AVIC GATAG is encoded using VM and VCPU IDs */
 #define AVIC_VCPU_ID_BITS		8
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 4c620999d230..6a90aefb7a8e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -28,6 +28,7 @@ 
 #include "cpuid.h"
 #include "lapic.h"
 #include "svm.h"
+#include "vmcb.h"
 #include "hyperv.h"
 
 #define CC KVM_NESTED_VMENTER_CONSISTENCY_CHECK
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 28064060413a..73a229a9975b 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -25,6 +25,7 @@ 
 #include "mmu.h"
 #include "x86.h"
 #include "svm.h"
+#include "vmcb.h"
 #include "svm_ops.h"
 #include "cpuid.h"
 #include "trace.h"
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 58f0077d9357..cd71f53590b2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -43,6 +43,7 @@ 
 #include "trace.h"
 
 #include "svm.h"
+#include "vmcb.h"
 #include "svm_ops.h"
 
 #include "kvm_onhyperv.h"
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 6a7686bf6900..222856788153 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -22,8 +22,6 @@ 
 #include <asm/svm.h>
 #include <asm/sev-common.h>
 
-#include "kvm_cache_regs.h"
-
 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
 
 #define	IOPM_SIZE PAGE_SIZE * 3
@@ -327,27 +325,6 @@  static __always_inline bool sev_es_guest(struct kvm *kvm)
 #endif
 }
 
-static inline void vmcb_mark_all_dirty(struct vmcb *vmcb)
-{
-	vmcb->control.clean = 0;
-}
-
-static inline void vmcb_mark_all_clean(struct vmcb *vmcb)
-{
-	vmcb->control.clean = VMCB_ALL_CLEAN_MASK
-			       & ~VMCB_ALWAYS_DIRTY_MASK;
-}
-
-static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
-{
-	vmcb->control.clean &= ~(1 << bit);
-}
-
-static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit)
-{
-        return !test_bit(bit, (unsigned long *)&vmcb->control.clean);
-}
-
 static __always_inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
 {
 	return container_of(vcpu, struct vcpu_svm, vcpu);
@@ -363,161 +340,6 @@  static __always_inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
  */
 #define SVM_REGS_LAZY_LOAD_SET	(1 << VCPU_EXREG_PDPTR)
 
-static inline void vmcb_set_intercept(struct vmcb_control_area *control, u32 bit)
-{
-	WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
-	__set_bit(bit, (unsigned long *)&control->intercepts);
-}
-
-static inline void vmcb_clr_intercept(struct vmcb_control_area *control, u32 bit)
-{
-	WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
-	__clear_bit(bit, (unsigned long *)&control->intercepts);
-}
-
-static inline bool vmcb_is_intercept(struct vmcb_control_area *control, u32 bit)
-{
-	WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
-	return test_bit(bit, (unsigned long *)&control->intercepts);
-}
-
-static inline bool vmcb12_is_intercept(struct vmcb_ctrl_area_cached *control, u32 bit)
-{
-	WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
-	return test_bit(bit, (unsigned long *)&control->intercepts);
-}
-
-static inline void set_dr_intercepts(struct vcpu_svm *svm)
-{
-	struct vmcb *vmcb = svm->vmcb01.ptr;
-
-	if (!sev_es_guest(svm->vcpu.kvm)) {
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR0_READ);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR1_READ);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR2_READ);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR3_READ);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR4_READ);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR5_READ);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR6_READ);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR0_WRITE);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR1_WRITE);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR2_WRITE);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR3_WRITE);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR4_WRITE);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR5_WRITE);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR6_WRITE);
-	}
-
-	vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ);
-	vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE);
-
-	recalc_intercepts(svm);
-}
-
-static inline void clr_dr_intercepts(struct vcpu_svm *svm)
-{
-	struct vmcb *vmcb = svm->vmcb01.ptr;
-
-	vmcb->control.intercepts[INTERCEPT_DR] = 0;
-
-	/* DR7 access must remain intercepted for an SEV-ES guest */
-	if (sev_es_guest(svm->vcpu.kvm)) {
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ);
-		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE);
-	}
-
-	recalc_intercepts(svm);
-}
-
-static inline void set_exception_intercept(struct vcpu_svm *svm, u32 bit)
-{
-	struct vmcb *vmcb = svm->vmcb01.ptr;
-
-	WARN_ON_ONCE(bit >= 32);
-	vmcb_set_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + bit);
-
-	recalc_intercepts(svm);
-}
-
-static inline void clr_exception_intercept(struct vcpu_svm *svm, u32 bit)
-{
-	struct vmcb *vmcb = svm->vmcb01.ptr;
-
-	WARN_ON_ONCE(bit >= 32);
-	vmcb_clr_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + bit);
-
-	recalc_intercepts(svm);
-}
-
-static inline void svm_set_intercept(struct vcpu_svm *svm, int bit)
-{
-	struct vmcb *vmcb = svm->vmcb01.ptr;
-
-	vmcb_set_intercept(&vmcb->control, bit);
-
-	recalc_intercepts(svm);
-}
-
-static inline void svm_clr_intercept(struct vcpu_svm *svm, int bit)
-{
-	struct vmcb *vmcb = svm->vmcb01.ptr;
-
-	vmcb_clr_intercept(&vmcb->control, bit);
-
-	recalc_intercepts(svm);
-}
-
-static inline bool svm_is_intercept(struct vcpu_svm *svm, int bit)
-{
-	return vmcb_is_intercept(&svm->vmcb->control, bit);
-}
-
-static inline bool nested_vgif_enabled(struct vcpu_svm *svm)
-{
-	return svm->vgif_enabled && (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK);
-}
-
-static inline struct vmcb *get_vgif_vmcb(struct vcpu_svm *svm)
-{
-	if (!vgif)
-		return NULL;
-
-	if (is_guest_mode(&svm->vcpu) && !nested_vgif_enabled(svm))
-		return svm->nested.vmcb02.ptr;
-	else
-		return svm->vmcb01.ptr;
-}
-
-static inline void enable_gif(struct vcpu_svm *svm)
-{
-	struct vmcb *vmcb = get_vgif_vmcb(svm);
-
-	if (vmcb)
-		vmcb->control.int_ctl |= V_GIF_MASK;
-	else
-		svm->vcpu.arch.hflags |= HF_GIF_MASK;
-}
-
-static inline void disable_gif(struct vcpu_svm *svm)
-{
-	struct vmcb *vmcb = get_vgif_vmcb(svm);
-
-	if (vmcb)
-		vmcb->control.int_ctl &= ~V_GIF_MASK;
-	else
-		svm->vcpu.arch.hflags &= ~HF_GIF_MASK;
-}
-
-static inline bool gif_set(struct vcpu_svm *svm)
-{
-	struct vmcb *vmcb = get_vgif_vmcb(svm);
-
-	if (vmcb)
-		return !!(vmcb->control.int_ctl & V_GIF_MASK);
-	else
-		return !!(svm->vcpu.arch.hflags & HF_GIF_MASK);
-}
-
 static inline bool nested_npt_enabled(struct vcpu_svm *svm)
 {
 	return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
@@ -567,28 +389,6 @@  void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
 #define NESTED_EXIT_DONE	1	/* Exit caused nested vmexit  */
 #define NESTED_EXIT_CONTINUE	2	/* Further checks needed      */
 
-static inline bool nested_svm_virtualize_tpr(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK);
-}
-
-static inline bool nested_exit_on_smi(struct vcpu_svm *svm)
-{
-	return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_SMI);
-}
-
-static inline bool nested_exit_on_intr(struct vcpu_svm *svm)
-{
-	return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_INTR);
-}
-
-static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
-{
-	return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
-}
-
 int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
 			 u64 vmcb_gpa, struct vmcb *vmcb12, bool from_vmrun);
 void svm_leave_nested(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c
index 8cdc62c74a96..ae0a101329e6 100644
--- a/arch/x86/kvm/svm/svm_onhyperv.c
+++ b/arch/x86/kvm/svm/svm_onhyperv.c
@@ -8,6 +8,7 @@ 
 #include <asm/mshyperv.h>
 
 #include "svm.h"
+#include "vmcb.h"
 #include "svm_ops.h"
 
 #include "hyperv.h"
diff --git a/arch/x86/kvm/svm/vmcb.h b/arch/x86/kvm/svm/vmcb.h
new file mode 100644
index 000000000000..8757cda27e3a
--- /dev/null
+++ b/arch/x86/kvm/svm/vmcb.h
@@ -0,0 +1,211 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Kernel-based Virtual Machine driver for Linux
+ *
+ * AMD SVM support - VMCB accessors
+ */
+
+#ifndef __SVM_VMCB_H
+#define __SVM_VMCB_H
+
+#include "kvm_cache_regs.h"
+
+static inline void vmcb_mark_all_dirty(struct vmcb *vmcb)
+{
+	vmcb->control.clean = 0;
+}
+
+static inline void vmcb_mark_all_clean(struct vmcb *vmcb)
+{
+	vmcb->control.clean = VMCB_ALL_CLEAN_MASK
+			       & ~VMCB_ALWAYS_DIRTY_MASK;
+}
+
+static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
+{
+	vmcb->control.clean &= ~(1 << bit);
+}
+
+static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit)
+{
+        return !test_bit(bit, (unsigned long *)&vmcb->control.clean);
+}
+
+static inline void vmcb_set_intercept(struct vmcb_control_area *control, u32 bit)
+{
+	WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
+	__set_bit(bit, (unsigned long *)&control->intercepts);
+}
+
+static inline void vmcb_clr_intercept(struct vmcb_control_area *control, u32 bit)
+{
+	WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
+	__clear_bit(bit, (unsigned long *)&control->intercepts);
+}
+
+static inline bool vmcb_is_intercept(struct vmcb_control_area *control, u32 bit)
+{
+	WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
+	return test_bit(bit, (unsigned long *)&control->intercepts);
+}
+
+static inline bool vmcb12_is_intercept(struct vmcb_ctrl_area_cached *control, u32 bit)
+{
+	WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
+	return test_bit(bit, (unsigned long *)&control->intercepts);
+}
+
+static inline void set_dr_intercepts(struct vcpu_svm *svm)
+{
+	struct vmcb *vmcb = svm->vmcb01.ptr;
+
+	if (!sev_es_guest(svm->vcpu.kvm)) {
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR0_READ);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR1_READ);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR2_READ);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR3_READ);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR4_READ);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR5_READ);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR6_READ);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR0_WRITE);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR1_WRITE);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR2_WRITE);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR3_WRITE);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR4_WRITE);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR5_WRITE);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR6_WRITE);
+	}
+
+	vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ);
+	vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE);
+
+	recalc_intercepts(svm);
+}
+
+static inline void clr_dr_intercepts(struct vcpu_svm *svm)
+{
+	struct vmcb *vmcb = svm->vmcb01.ptr;
+
+	vmcb->control.intercepts[INTERCEPT_DR] = 0;
+
+	/* DR7 access must remain intercepted for an SEV-ES guest */
+	if (sev_es_guest(svm->vcpu.kvm)) {
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ);
+		vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE);
+	}
+
+	recalc_intercepts(svm);
+}
+
+static inline void set_exception_intercept(struct vcpu_svm *svm, u32 bit)
+{
+	struct vmcb *vmcb = svm->vmcb01.ptr;
+
+	WARN_ON_ONCE(bit >= 32);
+	vmcb_set_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + bit);
+
+	recalc_intercepts(svm);
+}
+
+static inline void clr_exception_intercept(struct vcpu_svm *svm, u32 bit)
+{
+	struct vmcb *vmcb = svm->vmcb01.ptr;
+
+	WARN_ON_ONCE(bit >= 32);
+	vmcb_clr_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + bit);
+
+	recalc_intercepts(svm);
+}
+
+static inline void svm_set_intercept(struct vcpu_svm *svm, int bit)
+{
+	struct vmcb *vmcb = svm->vmcb01.ptr;
+
+	vmcb_set_intercept(&vmcb->control, bit);
+
+	recalc_intercepts(svm);
+}
+
+static inline void svm_clr_intercept(struct vcpu_svm *svm, int bit)
+{
+	struct vmcb *vmcb = svm->vmcb01.ptr;
+
+	vmcb_clr_intercept(&vmcb->control, bit);
+
+	recalc_intercepts(svm);
+}
+
+static inline bool svm_is_intercept(struct vcpu_svm *svm, int bit)
+{
+	return vmcb_is_intercept(&svm->vmcb->control, bit);
+}
+
+static inline bool nested_vgif_enabled(struct vcpu_svm *svm)
+{
+	return svm->vgif_enabled && (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK);
+}
+
+static inline struct vmcb *get_vgif_vmcb(struct vcpu_svm *svm)
+{
+	if (!vgif)
+		return NULL;
+
+	if (is_guest_mode(&svm->vcpu) && !nested_vgif_enabled(svm))
+		return svm->nested.vmcb02.ptr;
+	else
+		return svm->vmcb01.ptr;
+}
+
+static inline void enable_gif(struct vcpu_svm *svm)
+{
+	struct vmcb *vmcb = get_vgif_vmcb(svm);
+
+	if (vmcb)
+		vmcb->control.int_ctl |= V_GIF_MASK;
+	else
+		svm->vcpu.arch.hflags |= HF_GIF_MASK;
+}
+
+static inline void disable_gif(struct vcpu_svm *svm)
+{
+	struct vmcb *vmcb = get_vgif_vmcb(svm);
+
+	if (vmcb)
+		vmcb->control.int_ctl &= ~V_GIF_MASK;
+	else
+		svm->vcpu.arch.hflags &= ~HF_GIF_MASK;
+}
+
+static inline bool gif_set(struct vcpu_svm *svm)
+{
+	struct vmcb *vmcb = get_vgif_vmcb(svm);
+
+	if (vmcb)
+		return !!(vmcb->control.int_ctl & V_GIF_MASK);
+	else
+		return !!(svm->vcpu.arch.hflags & HF_GIF_MASK);
+}
+
+static inline bool nested_svm_virtualize_tpr(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK);
+}
+
+static inline bool nested_exit_on_smi(struct vcpu_svm *svm)
+{
+	return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_SMI);
+}
+
+static inline bool nested_exit_on_intr(struct vcpu_svm *svm)
+{
+	return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_INTR);
+}
+
+static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
+{
+	return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
+}
+
+#endif