[2/4] KVM: x86: Rely solely on preempted_in_kernel flag for directed yield

Message ID 20240110003938.490206-3-seanjc@google.com
State New
Series KVM: Clean up "preempted in-kernel" logic

Commit Message

Sean Christopherson Jan. 10, 2024, 12:39 a.m. UTC
  Snapshot preempted_in_kernel using kvm_arch_vcpu_in_kernel() so that the
flag is "accurate" (or rather, consistent and deterministic within KVM)
for guests with protected state, and explicitly use preempted_in_kernel
when checking if a vCPU was preempted in kernel mode instead of bouncing
through kvm_arch_vcpu_in_kernel().

Drop the gnarly logic in kvm_arch_vcpu_in_kernel() that redirects to
preempted_in_kernel if the target vCPU is not the "running", i.e. loaded,
vCPU, as the only reason that code existed was for the directed yield case
where KVM wants to check the CPL of a vCPU that may or may not be loaded
on the current pCPU.
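
For context, after patch 1 of this series the directed-yield path consumes
the snapshot via the dedicated arch hook; roughly (a paraphrased sketch of
the candidate loop in kvm_vcpu_on_spin(), virt/kvm/kvm_main.c, with the
surrounding candidate-selection logic elided):

	/* Skip vCPUs that were preempted while in guest userspace. */
	if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
	    !kvm_arch_vcpu_preempted_in_kernel(vcpu))
		continue;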

Cc: Like Xu <like.xu.linux@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
  

Comments

Yuan Yao Jan. 10, 2024, 7:55 a.m. UTC | #1
On Tue, Jan 09, 2024 at 04:39:36PM -0800, Sean Christopherson wrote:
> Snapshot preempted_in_kernel using kvm_arch_vcpu_in_kernel() so that the
> flag is "accurate" (or rather, consistent and deterministic within KVM)
> for guests with protected state, and explicitly use preempted_in_kernel
> when checking if a vCPU was preempted in kernel mode instead of bouncing
> through kvm_arch_vcpu_in_kernel().
>
> Drop the gnarly logic in kvm_arch_vcpu_in_kernel() that redirects to
> preempted_in_kernel if the target vCPU is not the "running", i.e. loaded,
> vCPU, as the only reason that code existed was for the directed yield case
> where KVM wants to check the CPL of a vCPU that may or may not be loaded
> on the current pCPU.
>
> Cc: Like Xu <like.xu.linux@gmail.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/x86.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 415509918c7f..77494f9c8d49 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5062,8 +5062,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  	int idx;
>
>  	if (vcpu->preempted) {
> -		if (!vcpu->arch.guest_state_protected)
> -			vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
> +		vcpu->arch.preempted_in_kernel = kvm_arch_vcpu_in_kernel(vcpu);
>
>  		/*
>  		 * Take the srcu lock as memslots will be accessed to check the gfn
> @@ -13093,7 +13092,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
>
>  bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
>  {
> -	return kvm_arch_vcpu_in_kernel(vcpu);
> +	return vcpu->arch.preempted_in_kernel;
>  }
>
>  bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
> @@ -13116,9 +13115,6 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
>  	if (vcpu->arch.guest_state_protected)
>  		return true;
>
> -	if (vcpu != kvm_get_running_vcpu())
> -		return vcpu->arch.preempted_in_kernel;
> -

Now this function accepts a vcpu parameter but, for VMX, can only get
information from the "current" vcpu loaded on hardware.
I'm not sure whether we need "WARN_ON(vcpu != kvm_get_running_vcpu())"
here to guard it, i.e. kvm_guest_state() still
uses this function (although it did the check before).

>  	return static_call(kvm_x86_get_cpl)(vcpu) == 0;
>  }
>
> --
> 2.43.0.472.g3155946c3a-goog
>
>
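
For reference, a minimal sketch of the guard being suggested (hypothetical,
not part of the patch; the WARN_ON_ONCE() placement is an assumption):

	bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
	{
		if (vcpu->arch.guest_state_protected)
			return true;

		/*
		 * Hypothetical guard: with the redirect gone, the CPL can
		 * only be read from hardware for the vCPU that is loaded
		 * on the current pCPU.
		 */
		WARN_ON_ONCE(vcpu != kvm_get_running_vcpu());

		return static_call(kvm_x86_get_cpl)(vcpu) == 0;
	}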
  
Sean Christopherson Jan. 10, 2024, 5:13 p.m. UTC | #2
On Wed, Jan 10, 2024, Yuan Yao wrote:
> On Tue, Jan 09, 2024 at 04:39:36PM -0800, Sean Christopherson wrote:
> > @@ -13093,7 +13092,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
> >
> >  bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
> >  {
> > -	return kvm_arch_vcpu_in_kernel(vcpu);
> > +	return vcpu->arch.preempted_in_kernel;
> >  }
> >
> >  bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
> > @@ -13116,9 +13115,6 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> >  	if (vcpu->arch.guest_state_protected)
> >  		return true;
> >
> > -	if (vcpu != kvm_get_running_vcpu())
> > -		return vcpu->arch.preempted_in_kernel;
> > -
> 
> Now this function accepts a vcpu parameter but, for VMX, can only get
> information from the "current" vcpu loaded on hardware.  I'm not sure whether
> we need "WARN_ON(vcpu != kvm_get_running_vcpu())" here to guard it, i.e.
> kvm_guest_state() still uses this function (although it did the check before).

Eh, I don't think it's worth adding a one-off kvm_get_running_vcpu() sanity check.
In the vast majority of cases, if VMREAD or VMWRITE is used improperly, the
instruction will fail at some point due to the pCPU not having any VMCS loaded.
It's really just cross-vCPU checks that could silently do the wrong thing, and
those flows are so few and far between that I'm comfortable taking a "just get
it right" stance.

If we want to add sanity checks, I think my vote would be to plumb @vcpu down
into vmcs_read{16,32,64,l} and add sanity checks there, probably with some sort
of guard so that the sanity checks can be enabled only for debug kernels.
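
For reference, a rough sketch of that idea (hypothetical: the real
vmcs_read*() helpers in arch/x86/kvm/vmx/vmx_ops.h take only a field, so
the extra @vcpu parameter and the Kconfig knob below are assumptions):

	static __always_inline unsigned long vmcs_readl(struct kvm_vcpu *vcpu,
							unsigned long field)
	{
		/* Hypothetical debug-only check, compiled out unless the
		 * (made-up) debug Kconfig option is enabled. */
		if (IS_ENABLED(CONFIG_KVM_VMX_SANITY_CHECKS))
			WARN_ON_ONCE(vcpu != kvm_get_running_vcpu());

		return __vmcs_readl(field);	/* existing VMREAD path */
	}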
  
Yuan Yao Jan. 11, 2024, 12:47 p.m. UTC | #3
On Wed, Jan 10, 2024 at 09:13:28AM -0800, Sean Christopherson wrote:
> On Wed, Jan 10, 2024, Yuan Yao wrote:
> > On Tue, Jan 09, 2024 at 04:39:36PM -0800, Sean Christopherson wrote:
> > > @@ -13093,7 +13092,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
> > >
> > >  bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
> > >  {
> > > -	return kvm_arch_vcpu_in_kernel(vcpu);
> > > +	return vcpu->arch.preempted_in_kernel;
> > >  }
> > >
> > >  bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
> > > @@ -13116,9 +13115,6 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> > >  	if (vcpu->arch.guest_state_protected)
> > >  		return true;
> > >
> > > -	if (vcpu != kvm_get_running_vcpu())
> > > -		return vcpu->arch.preempted_in_kernel;
> > > -
> >
> > Now this function accepts a vcpu parameter but, for VMX, can only get
> > information from the "current" vcpu loaded on hardware.  I'm not sure whether
> > we need "WARN_ON(vcpu != kvm_get_running_vcpu())" here to guard it, i.e.
> > kvm_guest_state() still uses this function (although it did the check before).
>
> Eh, I don't think it's worth adding a one-off kvm_get_running_vcpu() sanity check.
> In the vast majority of cases, if VMREAD or VMWRITE is used improperly, the
> instruction will fail at some point due to the pCPU not having any VMCS loaded.
> It's really just cross-vCPU checks that could silently do the wrong thing, and
> those flows are so few and far between that I'm comfortable taking a "just get
> it right" stance.
>
> If we want to add sanity checks, I think my vote would be to plumb @vcpu down
> into vmcs_read{16,32,64,l} and add sanity checks there, probably with some sort
> of guard so that the sanity checks can be enabled only for debug kernels.

I got your point.

Reviewed-by: Yuan Yao <yuan.yao@intel.com>
  

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 415509918c7f..77494f9c8d49 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5062,8 +5062,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	int idx;
 
 	if (vcpu->preempted) {
-		if (!vcpu->arch.guest_state_protected)
-			vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
+		vcpu->arch.preempted_in_kernel = kvm_arch_vcpu_in_kernel(vcpu);
 
 		/*
 		 * Take the srcu lock as memslots will be accessed to check the gfn
@@ -13093,7 +13092,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
 
 bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
 {
-	return kvm_arch_vcpu_in_kernel(vcpu);
+	return vcpu->arch.preempted_in_kernel;
 }
 
 bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
@@ -13116,9 +13115,6 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.guest_state_protected)
 		return true;
 
-	if (vcpu != kvm_get_running_vcpu())
-		return vcpu->arch.preempted_in_kernel;
-
 	return static_call(kvm_x86_get_cpl)(vcpu) == 0;
 }