[RFC,v2] x86/sev-es: Include XSS value in GHCB CPUID request

Message ID 20230524155619.415961-1-john.allen@amd.com
State New
Series [RFC,v2] x86/sev-es: Include XSS value in GHCB CPUID request

Commit Message

John Allen May 24, 2023, 3:56 p.m. UTC
When a guest issues a CPUID instruction for Fn0000000D_x0B (CetUserOffset),
the hypervisor may intercept it and needs access to the guest XSS value in
order to service the request. For an SEV-ES guest, that value is encrypted
and must be copied into the GHCB to be visible to the hypervisor. The MSR is
read with a raw RDMSR instruction because this code can run during early
boot, where the rdmsr wrappers must be avoided since they are incompatible
with the decompression boot phase.

Signed-off-by: John Allen <john.allen@amd.com>
---
v2:
  - Do not expose XSS state for ECX > 1
  - The direct MSR read was left as is for now. Using __rdmsr produces a
    warning during the kernel build because the __ex_table section that
    __rdmsr relies on is not used during decompression boot. Additionally,
    other code in this file performs a similar direct MSR read; see commit
    ee0bfa08a3453.
---
 arch/x86/kernel/sev-shared.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
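
The commit message and the v2 note above both hinge on reading MSR_IA32_XSS
with a raw RDMSR, since the __rdmsr() wrapper depends on the __ex_table
exception table that is not used during decompression boot. Below is a
minimal sketch of that pattern, not the patch hunk itself; it assumes a
kernel context for the u32/u64 types, and the helper name is illustrative
only:

#define MSR_IA32_XSS	0x00000da0	/* same value as in asm/msr-index.h */

static inline u64 rdmsr_xss_direct(void)
{
	u32 lo, hi;

	/* RDMSR returns the low half in EAX and the high half in EDX. */
	asm volatile("rdmsr" : "=a" (lo), "=d" (hi) : "c" (MSR_IA32_XSS));

	return ((u64)hi << 32) | lo;
}

The explicit (u64) cast keeps the shift well-defined when the two halves are
held in 32-bit variables.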
  

Comments

Borislav Petkov June 15, 2023, 11:52 a.m. UTC | #1
On Wed, May 24, 2023 at 03:56:19PM +0000, John Allen wrote:
> +	if (has_cpuflag(X86_FEATURE_SHSTK) && regs->ax == 0xd && regs->cx <= 1) {
> +		unsigned long lo, hi;
> +		u64 xss;
> +
> +		/*
> +		 * Since vc_handle_cpuid may be used during early boot, the
> +		 * rdmsr wrappers are incompatible and should not be used.
> +		 * Invoke the instruction directly.
> +		 */
> +		asm volatile("rdmsr" : "=a" (lo), "=d" (hi)
> +			     : "c" (MSR_IA32_XSS));
> +		xss = (hi << 32) | lo;
> +		ghcb_set_xss(ghcb, xss);

$ git grep ghcb_set_xss
$

So this patch needs some tree which I'm not aware of.

Also, this passing through of host XSS to the guest looks like it is
bypassing the vcpu->arch.ia32_xss copy which KVM seems to maintain. It
looks to me like the handling needs to be synchronized with it or so.

Thx.
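
For context on the second point: the concern is that the guest writes its
own XSS value straight into the GHCB, sidestepping the per-vCPU copy that
KVM keeps in vcpu->arch.ia32_xss. A purely hypothetical host-side sketch of
the kind of reconciliation being asked for might look like the following;
none of the names below are real KVM symbols:

/*
 * Hypothetical sketch only: rather than trusting the guest-provided
 * GHCB XSS field blindly, reconcile it with the copy KVM maintains
 * and with what the host actually supports.
 */
static u64 reconcile_guest_xss(u64 ghcb_xss, u64 kvm_tracked_xss,
			       u64 host_supported_xss)
{
	/* Drop any XSS bits the host cannot support. */
	u64 xss = ghcb_xss & host_supported_xss;

	/* If the guest's view diverges from KVM's, prefer KVM's copy. */
	if (xss != kvm_tracked_xss)
		xss = kvm_tracked_xss & host_supported_xss;

	return xss;
}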
  
John Allen June 15, 2023, 3:23 p.m. UTC | #2
On Thu, Jun 15, 2023 at 01:52:55PM +0200, Borislav Petkov wrote:
> On Wed, May 24, 2023 at 03:56:19PM +0000, John Allen wrote:
> > +	if (has_cpuflag(X86_FEATURE_SHSTK) && regs->ax == 0xd && regs->cx <= 1) {
> > +		unsigned long lo, hi;
> > +		u64 xss;
> > +
> > +		/*
> > +		 * Since vc_handle_cpuid may be used during early boot, the
> > +		 * rdmsr wrappers are incompatible and should not be used.
> > +		 * Invoke the instruction directly.
> > +		 */
> > +		asm volatile("rdmsr" : "=a" (lo), "=d" (hi)
> > +			     : "c" (MSR_IA32_XSS));
> > +		xss = (hi << 32) | lo;
> > +		ghcb_set_xss(ghcb, xss);
> 
> $ git grep ghcb_set_xss
> $
> 
> So this patch needs some tree which I'm not aware of.
> 
> Also, this passing through of host XSS to the guest looks like it is
> bypassing the vcpu->arch.ia32_xss copy which KVM seems to maintain. It
> looks to me like the handling needs to be synchronized with it or so.

Hi Boris,

Yeah, sorry, this is confusing. This patch is logically part of the SVM
shadow stack support series:
https://lore.kernel.org/all/20230524155339.415820-1-john.allen@amd.com/

Since this patch is for the guest kernel, it is meant for the tip tree
rather than the kvm tree so I sent it as a separate patch. However, as
you noted, this patch depends on patch 5/6 of that series to introduce
the ghcb_set_xss function. How would you advise that I handle this
entanglement in the next series?

Thanks,
John
  
Borislav Petkov June 15, 2023, 3:30 p.m. UTC | #3
On Thu, Jun 15, 2023 at 10:23:01AM -0500, John Allen wrote:
> How would you advise that I handle this entanglement in the next
> series?

Send the *whole* and complete set to both maintainers - KVM and tip
- and they'll decide how to do the patch tetris.

Thx.
  
John Allen June 15, 2023, 3:34 p.m. UTC | #4
On Thu, Jun 15, 2023 at 05:30:51PM +0200, Borislav Petkov wrote:
> On Thu, Jun 15, 2023 at 10:23:01AM -0500, John Allen wrote:
> > How would you advise that I handle this entanglement in the next
> > series?
> 
> Send the *whole* and complete set to both maintainers - KVM and tip
> - and they'll decide how to do the patch tetris.

Thanks Boris. Sounds good to me.

Thanks,
John
  

Patch

diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 3a5b0c9c4fcc..fc4109cc2e67 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -887,6 +887,21 @@ static enum es_result vc_handle_cpuid(struct ghcb *ghcb,
 		/* xgetbv will cause #GP - use reset value for xcr0 */
 		ghcb_set_xcr0(ghcb, 1);
 
+	if (has_cpuflag(X86_FEATURE_SHSTK) && regs->ax == 0xd && regs->cx <= 1) {
+		unsigned long lo, hi;
+		u64 xss;
+
+		/*
+		 * Since vc_handle_cpuid may be used during early boot, the
+		 * rdmsr wrappers are incompatible and should not be used.
+		 * Invoke the instruction directly.
+		 */
+		asm volatile("rdmsr" : "=a" (lo), "=d" (hi)
+			     : "c" (MSR_IA32_XSS));
+		xss = (hi << 32) | lo;
+		ghcb_set_xss(ghcb, xss);
+	}
+
 	ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0);
 	if (ret != ES_OK)
 		return ret;