[06/13] KVM: SVM: Add VNMI bit definition

Message ID 20221117143242.102721-7-mlevitsk@redhat.com
State New
Series: SVM: vNMI (with my fixes)

Commit Message

Maxim Levitsky Nov. 17, 2022, 2:32 p.m. UTC
  From: Santosh Shukla <santosh.shukla@amd.com>

VNMI exposes 3 capability bits (V_NMI, V_NMI_MASK, and V_NMI_ENABLE) to
virtualize NMI and NMI masking. These capability bits are part of
VMCB::intr_ctrl:

V_NMI (bit 11) - Indicates whether a virtual NMI is pending in the guest.
V_NMI_MASK (bit 12) - Indicates whether virtual NMIs are masked in the guest.
V_NMI_ENABLE (bit 26) - Enables the NMI virtualization feature for the guest.

When the hypervisor wants to inject an NMI, it sets the V_NMI bit. The
processor then clears V_NMI and sets V_NMI_MASK, indicating that the
guest is handling the NMI. After the guest has handled the NMI, the
processor clears V_NMI_MASK on successful completion of the IRET
instruction, or when a VMEXIT occurs while the virtual NMI is being
delivered.

To enable the VNMI capability, the hypervisor needs to set the
V_NMI_ENABLE bit to 1.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Santosh Shukla <santosh.shukla@amd.com>
---
 arch/x86/include/asm/svm.h | 7 +++++++
 arch/x86/kvm/svm/svm.c     | 6 ++++++
 2 files changed, 13 insertions(+)
  

Comments

Borislav Petkov Nov. 17, 2022, 2:37 p.m. UTC | #1
On Thu, Nov 17, 2022 at 04:32:35PM +0200, Maxim Levitsky wrote:
> @@ -5029,6 +5031,10 @@ static __init int svm_hardware_setup(void)
>  		svm_x86_ops.vcpu_get_apicv_inhibit_reasons = NULL;
>  	}
>  
> +	vnmi = vnmi && boot_cpu_has(X86_FEATURE_AMD_VNMI);

s/boot_cpu_has/cpu_feature_enabled/
  
Sean Christopherson Nov. 17, 2022, 4:42 p.m. UTC | #2
On Thu, Nov 17, 2022, Borislav Petkov wrote:
> On Thu, Nov 17, 2022 at 04:32:35PM +0200, Maxim Levitsky wrote:
> > @@ -5029,6 +5031,10 @@ static __init int svm_hardware_setup(void)
> >  		svm_x86_ops.vcpu_get_apicv_inhibit_reasons = NULL;
> >  	}
> >  
> > +	vnmi = vnmi && boot_cpu_has(X86_FEATURE_AMD_VNMI);
> 
> s/boot_cpu_has/cpu_feature_enabled/

Why?  This is rarely run code, won't cpu_feature_enabled() unnecessarily require
patching?

And while we're on the topic... https://lore.kernel.org/all/Y22IzA9DN%2FxYWgWN@google.com
  
Borislav Petkov Nov. 17, 2022, 5:07 p.m. UTC | #3
On Thu, Nov 17, 2022 at 04:42:57PM +0000, Sean Christopherson wrote:
> Why? This is rarely run code, won't cpu_feature_enabled()
> unnecessarily require patching?

Because we want one single interface to test X86_FEATURE flags. And
there's no need for the users to know whether it wants patching or not -
we simply patch *everywhere* and that's it.

> And while we're on the topic... https://lore.kernel.org/all/Y22IzA9DN%2FxYWgWN@google.com

Because static_ or boot_ is not relevant to the user - all she
wants to know is whether a cpu feature has been enabled. Thus
cpu_feature_enabled().

And yes, at the time I protested a little about unnecessary patching.
And tglx said "Why not?". And I had no good answer to that. So we can
just as well patch *everywhere*.

And patching is soo not a big deal anymore considering all the other
things we do to kernel code at build time and runtime. objdump output
compared to what's actually running has in some cases no resemblance
whatsoever.

Thx.
  
Sean Christopherson Nov. 17, 2022, 8:27 p.m. UTC | #4
On Thu, Nov 17, 2022, Maxim Levitsky wrote:
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 03acbe8ff34edb..08a7b2a0a29f3a 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -230,6 +230,8 @@ module_param(dump_invalid_vmcb, bool, 0644);
>  bool intercept_smi = true;
>  module_param(intercept_smi, bool, 0444);
>  
> +bool vnmi = true;

This can be __ro_after_init, e.g. see
https://lore.kernel.org/all/20221110013003.1421895-4-seanjc@google.com

> +module_param(vnmi, bool, 0444);

The exposure of "vnmi" to userspace belongs in the last patch.  E.g. this patch
should only stub in vnmi:

  bool __ro_after_init vnmi;

and then the last patch that actually enables it can default it to true and expose
the param to users.

It obviously doesn't matter in the end, but it technically makes this series
broken, e.g. nested_sync_int_ctl_from_vmcb02() pivots on "vnmi" without the extra
V_NMI_ENABLE check, and advertising that vnmi is enabled when it actually isn't is
wrong/misleading.
  
Sean Christopherson Nov. 17, 2022, 8:33 p.m. UTC | #5
On Thu, Nov 17, 2022, Borislav Petkov wrote:
> On Thu, Nov 17, 2022 at 04:42:57PM +0000, Sean Christopherson wrote:
> > Why? This is rarely run code, won't cpu_feature_enabled()
> > unnecessarily require patching?
> 
> Because we want one single interface to test X86_FEATURE flags. And
> there's no need for the users to know whether it wants patching or not -
> we simply patch *everywhere* and that's it.
> 
> > And while we're on the topic... https://lore.kernel.org/all/Y22IzA9DN%2FxYWgWN@google.com
> 
> Because static_ or boot_ is not relevant to the user - all she
> wants to know is whether a cpu feature has been enabled. Thus
> cpu_feature_enabled().
> 
> And yes, at the time I protested a little about unnecessary patching.
> And tglx said "Why not?". And I had no good answer to that. So we can
> just as well patch *everywhere*.

Ah, I missed that memo.


Paolo,

Since it sounds like static_cpu_has() is going the way of the dodo, and ditto for
boot_cpu_has() except for flows that don't play nice with patching (none of which
are in KVM), should we do a KVM-wide conversion to cpu_feature_enabled() at some
point in the near future?
  

Patch

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 4352b46dd20c90..d8474e4b04ac05 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -198,6 +198,13 @@  struct __attribute__ ((__packed__)) vmcb_control_area {
 #define X2APIC_MODE_SHIFT 30
 #define X2APIC_MODE_MASK (1 << X2APIC_MODE_SHIFT)
 
+#define V_NMI_PENDING_SHIFT 11
+#define V_NMI_PENDING (1 << V_NMI_PENDING_SHIFT)
+#define V_NMI_MASK_SHIFT 12
+#define V_NMI_MASK (1 << V_NMI_MASK_SHIFT)
+#define V_NMI_ENABLE_SHIFT 26
+#define V_NMI_ENABLE (1 << V_NMI_ENABLE_SHIFT)
+
 #define LBR_CTL_ENABLE_MASK BIT_ULL(0)
 #define VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK BIT_ULL(1)
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 03acbe8ff34edb..08a7b2a0a29f3a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -230,6 +230,8 @@  module_param(dump_invalid_vmcb, bool, 0644);
 bool intercept_smi = true;
 module_param(intercept_smi, bool, 0444);
 
+bool vnmi = true;
+module_param(vnmi, bool, 0444);
 
 static bool svm_gp_erratum_intercept = true;
 
@@ -5029,6 +5031,10 @@  static __init int svm_hardware_setup(void)
 		svm_x86_ops.vcpu_get_apicv_inhibit_reasons = NULL;
 	}
 
+	vnmi = vnmi && boot_cpu_has(X86_FEATURE_AMD_VNMI);
+	if (vnmi)
+		pr_info("Virtual NMI enabled\n");
+
 	if (vls) {
 		if (!npt_enabled ||
 		    !boot_cpu_has(X86_FEATURE_V_VMSAVE_VMLOAD) ||