[1/5] KVM: VMX: Open code writing vCPU's PAT in VMX's MSR handler

Message ID 20230503182852.3431281-2-seanjc@google.com
State New
Series KVM: x86: Clean up MSR PAT handling

Commit Message

Sean Christopherson May 3, 2023, 6:28 p.m. UTC
  From: Wenyao Hai <haiwenyao@uniontech.com>

Open code setting "vcpu->arch.pat" in vmx_set_msr() instead of bouncing
through kvm_set_msr_common() to get to the same code in kvm_mtrr_set_msr().

Note, MSR_IA32_CR_PAT is 0x277, and is very subtly handled by

	case 0x200 ... MSR_IA32_MC0_CTL2 - 1:

in kvm_set_msr_common().
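
For reference, the path being bypassed looks roughly like the following
(paraphrased from kvm_set_msr_common() and kvm_mtrr_set_msr() at the base of
this series; simplified, not verbatim):

	/* kvm_set_msr_common(): PAT (0x277) lands in the MTRR/PAT range case. */
	case 0x200 ... MSR_IA32_MC0_CTL2 - 1:
		return kvm_mtrr_set_msr(vcpu, msr, data);

	/* kvm_mtrr_set_msr(): after validating the value, PAT is simply stored. */
	if (!kvm_mtrr_valid(vcpu, msr, data))
		return 1;

	if (msr == MSR_IA32_CR_PAT)
		vcpu->arch.pat = data;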

Signed-off-by: Wenyao Hai <haiwenyao@uniontech.com>
[sean: massage changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)
  

Comments

Kai Huang May 3, 2023, 11 p.m. UTC | #1
On Wed, 2023-05-03 at 11:28 -0700, Sean Christopherson wrote:
> From: Wenyao Hai <haiwenyao@uniontech.com>
> 
> Open code setting "vcpu->arch.pat" in vmx_set_msr() instead of bouncing
> through kvm_set_msr_common() to get to the same code in kvm_mtrr_set_msr().

What's the value of doing so, besides saving a call to kvm_set_msr_common()?

PAT changes shouldn't be frequent, so they shouldn't be in a performance
critical path.  Given the PAT logic on Intel and AMD is basically the same,
isn't it better to do it in kvm_set_msr_common()?

For instance, given the mtrr code is also in common x86, if we ever want to add
some additional logic, e.g. to calculate the effective memtype, isn't it better
to handle PAT in common code too?

> 
> Note, MSR_IA32_CR_PAT is 0x277, and is very subtly handled by
> 
> 	case 0x200 ... MSR_IA32_MC0_CTL2 - 1:
> 
> in kvm_set_msr_common().
> 
> Signed-off-by: Wenyao Hai <haiwenyao@uniontech.com>
> [sean: massage changelog]
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 44fb619803b8..53e249109483 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2294,12 +2294,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		    get_vmcs12(vcpu)->vm_exit_controls & VM_EXIT_SAVE_IA32_PAT)
>  			get_vmcs12(vcpu)->guest_ia32_pat = data;
>  
> -		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
> +		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
>  			vmcs_write64(GUEST_IA32_PAT, data);
> -			vcpu->arch.pat = data;
> -			break;
> -		}
> -		ret = kvm_set_msr_common(vcpu, msr_info);
> +
> +		vcpu->arch.pat = data;
>  		break;
>  	case MSR_IA32_MCG_EXT_CTL:
>  		if ((!msr_info->host_initiated &&
  
Sean Christopherson May 3, 2023, 11:25 p.m. UTC | #2
On Wed, May 03, 2023, Kai Huang wrote:
> On Wed, 2023-05-03 at 11:28 -0700, Sean Christopherson wrote:
> > From: Wenyao Hai <haiwenyao@uniontech.com>
> > 
> > Open code setting "vcpu->arch.pat" in vmx_set_msr() instead of bouncing
> > through kvm_set_msr_common() to get to the same code in kvm_mtrr_set_msr().
> 
> What's the value of doing so, besides saving a call to kvm_set_msr_common()?

To avoid complicating a very simple operation (writing vcpu->arch.pat), and to
align with SVM.

> PAT changes shouldn't be frequent, so they shouldn't be in a performance
> critical path.  Given the PAT logic on Intel and AMD is basically the same,
> isn't it better to do it in kvm_set_msr_common()?

I could go either way on calling into kvm_set_msr_common().  I agree that
performance isn't a concern.  Hmm, and kvm_set_msr_common() still has a case
statement for MSR_IA32_CR_PAT, so handling the write fully in vendor code won't
impact the code generation for other MSRs.

Though I am leaning towards saying we should either handle loads and stores to
vcpu->arch.pat in common code _or_ vendor code, i.e. either teach VMX and SVM to
handle reads of PAT, or have their write paths call kvm_set_msr_common().  A mix
of both is definitely odd.

I don't have a strong preference on which of those two we choose.  I dislike
duplicating logic across VMX and SVM, but on the other hand it's so little code.
I think I'd vote for handling everything in vendor code, mostly because this
gives the appearance that the write can fail, which is silly and misleading.

		ret = kvm_set_msr_common(vcpu, msr_info);
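
For illustration (not part of this patch), teaching VMX to handle the read would
amount to something like the following in vmx_get_msr(), mirroring the write
side:

	case MSR_IA32_CR_PAT:
		msr_info->data = vcpu->arch.pat;
		break;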

> For instance, given the mtrr code is also in common x86, if we ever want to add
> some additional logic, e.g. to calculate the effective memtype, isn't it better
> to handle PAT in common code too?

FWIW, I highly doubt we'll ever have code like that.  The truly effective memtype
calculations are too different between Intel and AMD, and doing anything useful
with the guest's effective memtype is likely a fool's errand.

> > Note, MSR_IA32_CR_PAT is 0x277, and is very subtly handled by
> > 
> > 	case 0x200 ... MSR_IA32_MC0_CTL2 - 1:
> > 
> > in kvm_set_msr_common().
> > 
> > Signed-off-by: Wenyao Hai <haiwenyao@uniontech.com>
> > [sean: massage changelog]
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> >  arch/x86/kvm/vmx/vmx.c | 8 +++-----
> >  1 file changed, 3 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 44fb619803b8..53e249109483 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -2294,12 +2294,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		    get_vmcs12(vcpu)->vm_exit_controls & VM_EXIT_SAVE_IA32_PAT)
> >  			get_vmcs12(vcpu)->guest_ia32_pat = data;
> >  
> > -		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
> > +		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
> >  			vmcs_write64(GUEST_IA32_PAT, data);
> > -			vcpu->arch.pat = data;
> > -			break;
> > -		}
> > -		ret = kvm_set_msr_common(vcpu, msr_info);
> > +
> > +		vcpu->arch.pat = data;
> >  		break;
> >  	case MSR_IA32_MCG_EXT_CTL:
> >  		if ((!msr_info->host_initiated &&
  
Kai Huang May 3, 2023, 11:41 p.m. UTC | #3
On Wed, 2023-05-03 at 16:25 -0700, Sean Christopherson wrote:
> On Wed, May 03, 2023, Kai Huang wrote:
> > On Wed, 2023-05-03 at 11:28 -0700, Sean Christopherson wrote:
> > > From: Wenyao Hai <haiwenyao@uniontech.com>
> > > 
> > > Open code setting "vcpu->arch.pat" in vmx_set_msr() instead of bouncing
> > > through kvm_set_msr_common() to get to the same code in kvm_mtrr_set_msr().
> > 
> > What's the value of doing so, besides saving a call to kvm_set_msr_common()?
> 
> To avoid complicating a very simple operation (writing vcpu->arch.pat), and to
> align with SVM.
> 
> > PAT changes shouldn't be frequent, so they shouldn't be in a performance
> > critical path.  Given the PAT logic on Intel and AMD is basically the same,
> > isn't it better to do it in kvm_set_msr_common()?
> 
> I could go either way on calling into kvm_set_msr_common().  I agree that
> performance isn't a concern.  Hmm, and kvm_set_msr_common() still has a case
> statement for MSR_IA32_CR_PAT, so handling the write fully in vendor code won't
> impact the code generation for other MSRs.
> 
> Though I am leaning towards saying we should either handle loads and stores to
> vcpu->arch.pat in common code _or_ vendor code, i.e. either teach VMX and SVM to
> handle reads of PAT, or have their write paths call kvm_set_msr_common().  A mix
> of both is definitely odd.

Agreed.  Alternatively, we can move SVM's setting of vcpu->arch.pat to common code.

> 
> I don't have a strong preference on which of those two we choose.  I dislike
> duplicating logic across VMX and SVM, but on the other hand it's so little code.
> I think I'd vote for handling everything in vendor code, mostly because this
> gives the appearance that the write can fail, which is silly and misleading.
> 
> 		ret = kvm_set_msr_common(vcpu, msr_info);

No opinion either.  At first glance, having

	case MSR_IA32_CR_PAT:
		vcpu->arch.pat = data;

in kvm_set_msr_common() is clearer because it is symmetrical to the read path.
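
(For reference, the existing read path in common code is roughly the following,
paraphrased from kvm_get_msr_common() and kvm_mtrr_get_msr(); simplified, not
verbatim:)

	/* kvm_get_msr_common() routes 0x277 into the MTRR/PAT helper... */
	case 0x200 ... MSR_IA32_MC0_CTL2 - 1:
		return kvm_mtrr_get_msr(vcpu, msr_info->index, &msr_info->data);

	/* ...and kvm_mtrr_get_msr() returns the cached value. */
	if (msr == MSR_IA32_CR_PAT) {
		*pdata = vcpu->arch.pat;
		return 0;
	}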

Anyway your decision :)

> 
> > For instance, given the mtrr code is also in common x86, if we ever want to add
> > some additional logic, e.g. to calculate the effective memtype, isn't it better
> > to handle PAT in common code too?
> 
> FWIW, I highly doubt we'll ever have code like that.  The truly effective memtype
> calculations are too different between Intel and AMD, and doing anything useful
> with the guest's effective memtype is likely a fool's errand.

I thought the logic for getting the effective memtype would be just the same
between Intel and AMD, but it seems there's a slight difference.  I agree with
you that we're unlikely to have such code in common code.

But it looks like setting vcpu->arch.pat in common code is more flexible.
Anyway, no opinion here.

> 
> > > Note, MSR_IA32_CR_PAT is 0x277, and is very subtly handled by
> > > 
> > > 	case 0x200 ... MSR_IA32_MC0_CTL2 - 1:
> > > 
> > > in kvm_set_msr_common().
> > > 
> > > Signed-off-by: Wenyao Hai <haiwenyao@uniontech.com>
> > > [sean: massage changelog]
> > > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > > ---
> > >  arch/x86/kvm/vmx/vmx.c | 8 +++-----
> > >  1 file changed, 3 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > > index 44fb619803b8..53e249109483 100644
> > > --- a/arch/x86/kvm/vmx/vmx.c
> > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > @@ -2294,12 +2294,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > >  		    get_vmcs12(vcpu)->vm_exit_controls & VM_EXIT_SAVE_IA32_PAT)
> > >  			get_vmcs12(vcpu)->guest_ia32_pat = data;
> > >  
> > > -		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
> > > +		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
> > >  			vmcs_write64(GUEST_IA32_PAT, data);
> > > -			vcpu->arch.pat = data;
> > > -			break;
> > > -		}
> > > -		ret = kvm_set_msr_common(vcpu, msr_info);
> > > +
> > > +		vcpu->arch.pat = data;
> > >  		break;
> > >  	case MSR_IA32_MCG_EXT_CTL:
> > >  		if ((!msr_info->host_initiated &&
  
Sean Christopherson May 4, 2023, 5:23 p.m. UTC | #4
On Wed, May 03, 2023, Kai Huang wrote:
> On Wed, 2023-05-03 at 16:25 -0700, Sean Christopherson wrote:
> > On Wed, May 03, 2023, Kai Huang wrote:
> > > On Wed, 2023-05-03 at 11:28 -0700, Sean Christopherson wrote:
> > > > From: Wenyao Hai <haiwenyao@uniontech.com>
> > > > 
> > > > Open code setting "vcpu->arch.pat" in vmx_set_msr() instead of bouncing
> > > > through kvm_set_msr_common() to get to the same code in kvm_mtrr_set_msr().
> > > 
> > > What's the value of doing so, besides saving a call to kvm_set_msr_common()?
> > 
> > To avoid complicating a very simple operation (writing vcpu->arch.pat), and to
> > align with SVM.
> > 
> > > PAT changes shouldn't be frequent, so they shouldn't be in a performance
> > > critical path.  Given the PAT logic on Intel and AMD is basically the same,
> > > isn't it better to do it in kvm_set_msr_common()?
> > 
> > I could go either way on calling into kvm_set_msr_common().  I agree that
> > performance isn't a concern.  Hmm, and kvm_set_msr_common() still has a case
> > statement for MSR_IA32_CR_PAT, so handling the write fully in vendor code won't
> > impact the code generation for other MSRs.
> > 
> > Though I am leaning towards saying we should either handle loads and stores to
> > vcpu->arch.pat in common code _or_ vendor code, i.e. either teach VMX and SVM to
> > handle reads of PAT, or have their write paths call kvm_set_msr_common().  A mix
> > of both is definitely odd.
> 
> Agreed.  Alternatively, we can move SVM's setting of vcpu->arch.pat to common code.
> 
> > 
> > I don't have a strong preference on which of those two we choose.  I dislike
> > duplicating logic across VMX and SVM, but on the other hand it's so little code.
> > I think I'd vote for handling everything in vendor code, mostly because this
> > gives the appearance that the write can fail, which is silly and misleading.
> > 
> > 		ret = kvm_set_msr_common(vcpu, msr_info);
> 
> No opinion either.  At first glance, having
> 
> 	case MSR_IA32_CR_PAT:
> 		vcpu->arch.pat = data;
> 
> in kvm_set_msr_common() is clearer because it is symmetrical to the read path.
> 
> Anyway your decision :)

Duh, the obvious answer is to do 

	ret = kvm_set_msr_common(vcpu, msr_info);
	if (ret)
		break;

	<vendor code here>

That's an established pattern for other MSRs, and addresses my main concern of
not unwinding the VMCS updates in the should-be-impossible scenario of
kvm_set_msr_common() failing after the kvm_pat_valid() check.
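
Applied to the PAT case in vmx_set_msr(), that direction looks roughly like the
following (a sketch only, with the nested-PAT bookkeeping from the hunk above
elided):

	case MSR_IA32_CR_PAT:
		ret = kvm_set_msr_common(vcpu, msr_info);
		if (ret)
			break;

		/* ... existing nested PAT handling ... */

		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
			vmcs_write64(GUEST_IA32_PAT, data);
		break;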

Thanks Kai!
  

Patch

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 44fb619803b8..53e249109483 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2294,12 +2294,10 @@  static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		    get_vmcs12(vcpu)->vm_exit_controls & VM_EXIT_SAVE_IA32_PAT)
 			get_vmcs12(vcpu)->guest_ia32_pat = data;
 
-		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
+		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
 			vmcs_write64(GUEST_IA32_PAT, data);
-			vcpu->arch.pat = data;
-			break;
-		}
-		ret = kvm_set_msr_common(vcpu, msr_info);
+
+		vcpu->arch.pat = data;
 		break;
 	case MSR_IA32_MCG_EXT_CTL:
 		if ((!msr_info->host_initiated &&