Message ID: 20230511040857.6094-17-weijiang.yang@intel.com
State: New
From: Yang Weijiang <weijiang.yang@intel.com>
To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, rppt@kernel.org, binbin.wu@linux.intel.com, rick.p.edgecombe@intel.com, weijiang.yang@intel.com, john.allen@amd.com
Subject: [PATCH v3 16/21] KVM:x86: Save/Restore GUEST_SSP to/from SMM state save area
Date: Thu, 11 May 2023 00:08:52 -0400
Series: Enable CET Virtualization
Commit Message
Yang, Weijiang
May 11, 2023, 4:08 a.m. UTC
Save GUEST_SSP to the SMM state save area when the guest exits to SMM
due to an SMI, and restore it to the VMCS field when the guest exits SMM.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
arch/x86/kvm/smm.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
Comments
On Thu, May 11, 2023, Yang Weijiang wrote:
> Save GUEST_SSP to SMM state save area when guest exits to SMM
> due to SMI and restore it VMCS field when guest exits SMM.

This fails to answer "Why does KVM need to do this?"

> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/kvm/smm.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
> index b42111a24cc2..c54d3eb2b7e4 100644
> --- a/arch/x86/kvm/smm.c
> +++ b/arch/x86/kvm/smm.c
> @@ -275,6 +275,16 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
>  	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
>
>  	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
> +
> +	if (kvm_cet_user_supported()) {

This is wrong, KVM should not save/restore state that doesn't exist from the guest's
perspective, i.e. this needs to check guest_cpuid_has().

On a related topic, I would love feedback on my series that adds a framework for
features like this, where KVM needs to check guest CPUID as well as host support.

https://lore.kernel.org/all/20230217231022.816138-1-seanjc@google.com

> +		struct msr_data msr;
> +
> +		msr.index = MSR_KVM_GUEST_SSP;
> +		msr.host_initiated = true;

Huh?

> +		/* GUEST_SSP is stored in VMCS at vm-exit. */

(a) this is not VMX code, i.e. referencing the VMCS is wrong, and (b) how the
guest's SSP is managed is irrelevant, all that matters is that KVM can get the
current guest value.

> +		static_call(kvm_x86_get_msr)(vcpu, &msr);
> +		smram->ssp = msr.data;
> +	}
>  }
>  #endif
>
> @@ -565,6 +575,16 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
>  	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
>  	ctxt->interruptibility = (u8)smstate->int_shadow;
>
> +	if (kvm_cet_user_supported()) {
> +		struct msr_data msr;
> +
> +		msr.index = MSR_KVM_GUEST_SSP;
> +		msr.host_initiated = true;
> +		msr.data = smstate->ssp;
> +		/* Mimic host_initiated access to bypass ssp access check. */

No, masquerading as a host access is all kinds of wrong. I have no idea what
check you're trying to bypass, but whatever it is, it's wrong. Per the SDM, the
SSP field in SMRAM is writable, which means that KVM needs to correctly handle
the scenario where SSP holds garbage, e.g. a non-canonical address.

Why can't this use kvm_get_msr() and kvm_set_msr()?
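Sean's point about garbage in the writable SMRAM field boils down to a canonicality check on the SSP value loaded at RSM. As a standalone illustration (this is not the KVM code; KVM has its own is_noncanonical_address() helper that uses the vCPU's actual virtual-address width), an x86-64 address is canonical iff it survives sign extension from the implemented width:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified model of the canonicality check KVM would need to apply to
 * an SSP value read back from SMRAM on RSM.  A 64-bit virtual address is
 * canonical iff bits [63:vaddr_bits-1] are all copies of bit
 * (vaddr_bits - 1), i.e. the value is unchanged by sign extension from
 * the implemented width.  vaddr_bits is 48 or 57 on real hardware.
 */
static bool ssp_is_canonical(uint64_t ssp, int vaddr_bits)
{
	int shift = 64 - vaddr_bits;

	/* Shift the implemented bits to the top, then sign-extend back. */
	return ((int64_t)(ssp << shift) >> shift) == (int64_t)ssp;
}
```

With vaddr_bits = 48, 0x00007fffffffffff and 0xffff800000000000 pass, while 0x0000800000000000 (bit 47 set, upper bits clear) fails; that last case is exactly the kind of garbage SMM code could write into the SSP slot.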
On 6/24/2023 6:30 AM, Sean Christopherson wrote:
> On Thu, May 11, 2023, Yang Weijiang wrote:
>> Save GUEST_SSP to SMM state save area when guest exits to SMM
>> due to SMI and restore it VMCS field when guest exits SMM.
> This fails to answer "Why does KVM need to do this?"

How about this:

Guest SMM mode execution is outside the guest kernel; to avoid GUEST_SSP
corruption, KVM needs to save the current normal mode GUEST_SSP to the SMRAM
area so that it can restore the original GUEST_SSP at the end of SMM.

>> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
>> ---
>>  arch/x86/kvm/smm.c | 20 ++++++++++++++++++++
>>  1 file changed, 20 insertions(+)
>>
>> diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
>> index b42111a24cc2..c54d3eb2b7e4 100644
>> --- a/arch/x86/kvm/smm.c
>> +++ b/arch/x86/kvm/smm.c
>> @@ -275,6 +275,16 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
>>  	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
>>
>>  	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
>> +
>> +	if (kvm_cet_user_supported()) {
> This is wrong, KVM should not save/restore state that doesn't exist from the guest's
> perspective, i.e. this needs to check guest_cpuid_has().

Yes, the check missed the case that user space disables SHSTK. Will change
it, thanks!

> On a related topic, I would love feedback on my series that adds a framework for
> features like this, where KVM needs to check guest CPUID as well as host support.
>
> https://lore.kernel.org/all/20230217231022.816138-1-seanjc@google.com

The framework looks good, will it be merged in kvm_x86?

>> +		struct msr_data msr;
>> +
>> +		msr.index = MSR_KVM_GUEST_SSP;
>> +		msr.host_initiated = true;
> Huh?
>
>> +		/* GUEST_SSP is stored in VMCS at vm-exit. */
> (a) this is not VMX code, i.e. referencing the VMCS is wrong, and (b) how the
> guest's SSP is managed is irrelevant, all that matters is that KVM can get the
> current guest value.

Sorry, the comment is incorrect; my original intent was: it's stored in a VM
control structure field. Will change it.

>> +		static_call(kvm_x86_get_msr)(vcpu, &msr);
>> +		smram->ssp = msr.data;
>> +	}
>>  }
>>  #endif
>>
>> @@ -565,6 +575,16 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
>>  	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
>>  	ctxt->interruptibility = (u8)smstate->int_shadow;
>>
>> +	if (kvm_cet_user_supported()) {
>> +		struct msr_data msr;
>> +
>> +		msr.index = MSR_KVM_GUEST_SSP;
>> +		msr.host_initiated = true;
>> +		msr.data = smstate->ssp;
>> +		/* Mimic host_initiated access to bypass ssp access check. */
> No, masquerading as a host access is all kinds of wrong. I have no idea what
> check you're trying to bypass, but whatever it is, it's wrong. Per the SDM, the
> SSP field in SMRAM is writable, which means that KVM needs to correctly handle
> the scenario where SSP holds garbage, e.g. a non-canonical address.

MSR_KVM_GUEST_SSP is only accessible to user space, e.g., during live migration;
it's not accessible to the VM itself. So in kvm_cet_is_msr_accessible(), I added
a check to tell whether the access is initiated from user space or not, and I
tried to bypass that check. Yes, I will add the necessary checks here.

> Why can't this use kvm_get_msr() and kvm_set_msr()?

If my above assumption is correct, these helpers pass host_initiated=false and
cannot meet the requirements.
On Mon, Jun 26, 2023, Weijiang Yang wrote:
> On 6/24/2023 6:30 AM, Sean Christopherson wrote:
> > On Thu, May 11, 2023, Yang Weijiang wrote:
> > > Save GUEST_SSP to SMM state save area when guest exits to SMM
> > > due to SMI and restore it VMCS field when guest exits SMM.
> > This fails to answer "Why does KVM need to do this?"
>
> How about this:
>
> Guest SMM mode execution is out of guest kernel, to avoid GUEST_SSP
> corruption,
>
> KVM needs to save current normal mode GUEST_SSP to SMRAM area so that it can
> restore original GUEST_SSP at the end of SMM.

The key point I am looking for is a call out that KVM is emulating architectural
behavior, i.e. that smram->ssp is defined in the SDM and that the documented
behavior of Intel CPUs is that the CPU's current SSP is saved on SMI and loaded
on RSM.  And I specifically say "loaded" and not "restored", because the field
is writable.

> > > Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> > > ---
> > >  arch/x86/kvm/smm.c | 20 ++++++++++++++++++++
> > >  1 file changed, 20 insertions(+)
> > >
> > > diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
> > > index b42111a24cc2..c54d3eb2b7e4 100644
> > > --- a/arch/x86/kvm/smm.c
> > > +++ b/arch/x86/kvm/smm.c
> > > @@ -275,6 +275,16 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
> > >  	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
> > >  	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
> > > +
> > > +	if (kvm_cet_user_supported()) {
> > This is wrong, KVM should not save/restore state that doesn't exist from the guest's
> > perspective, i.e. this needs to check guest_cpuid_has().
>
> Yes, the check missed the case that user space disables SHSTK. Will change
> it, thanks!
>
> > On a related topic, I would love feedback on my series that adds a framework for
> > features like this, where KVM needs to check guest CPUID as well as host support.
> >
> > https://lore.kernel.org/all/20230217231022.816138-1-seanjc@google.com
>
> The framework looks good, will it be merged in kvm_x86?

Yes, I would like to merge it at some point.

> > > @@ -565,6 +575,16 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
> > >  	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
> > >  	ctxt->interruptibility = (u8)smstate->int_shadow;
> > > +	if (kvm_cet_user_supported()) {
> > > +		struct msr_data msr;
> > > +
> > > +		msr.index = MSR_KVM_GUEST_SSP;
> > > +		msr.host_initiated = true;
> > > +		msr.data = smstate->ssp;
> > > +		/* Mimic host_initiated access to bypass ssp access check. */
> > No, masquerading as a host access is all kinds of wrong. I have no idea what
> > check you're trying to bypass, but whatever it is, it's wrong. Per the SDM, the
> > SSP field in SMRAM is writable, which means that KVM needs to correctly handle
> > the scenario where SSP holds garbage, e.g. a non-canonical address.
>
> MSR_KVM_GUEST_SSP is only accessible to user space, e.g., during LM, it's not
> accessible to VM itself. So in kvm_cet_is_msr_accessible(), I added a check to
> tell whether the access is initiated from user space or not, I tried to bypass
> that check. Yes, I will add necessary checks here.
>
> > Why can't this use kvm_get_msr() and kvm_set_msr()?
>
> If my above assumption is correct, these helpers are passed by
> host_initiated=false and cannot meet the requirments.

Sorry, I don't follow.  These writes are NOT initiated from the host, i.e.
kvm_get_msr() and kvm_set_msr() do the right thing, unless I'm missing something.
On 6/27/2023 5:20 AM, Sean Christopherson wrote:
> On Mon, Jun 26, 2023, Weijiang Yang wrote:
>> On 6/24/2023 6:30 AM, Sean Christopherson wrote:
>>> On Thu, May 11, 2023, Yang Weijiang wrote:
>>>> Save GUEST_SSP to SMM state save area when guest exits to SMM
>>>> due to SMI and restore it VMCS field when guest exits SMM.
>>> This fails to answer "Why does KVM need to do this?"
>> How about this:
>>
>> Guest SMM mode execution is out of guest kernel, to avoid GUEST_SSP
>> corruption,
>>
>> KVM needs to save current normal mode GUEST_SSP to SMRAM area so that it can
>> restore original GUEST_SSP at the end of SMM.
> The key point I am looking for is a call out that KVM is emulating architectural
> behavior, i.e. that smram->ssp is defined in the SDM and that the documented
> behavior of Intel CPUs is that the CPU's current SSP is saved on SMI and loaded
> on RSM. And I specifically say "loaded" and not "restored", because the field
> is writable.

OK, will include these points.

>>>> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
>>>> ---
>>>>  arch/x86/kvm/smm.c | 20 ++++++++++++++++++++
>>>>  1 file changed, 20 insertions(+)
>>>>
>>>> diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
>>>> index b42111a24cc2..c54d3eb2b7e4 100644
>>>> --- a/arch/x86/kvm/smm.c
>>>> +++ b/arch/x86/kvm/smm.c
>>>> @@ -275,6 +275,16 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
>>>>  	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
>>>>  	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
>>>> +
>>>> +	if (kvm_cet_user_supported()) {
>>> This is wrong, KVM should not save/restore state that doesn't exist from the guest's
>>> perspective, i.e. this needs to check guest_cpuid_has().
>> Yes, the check missed the case that user space disables SHSTK. Will change
>> it, thanks!
>>
>>> On a related topic, I would love feedback on my series that adds a framework for
>>> features like this, where KVM needs to check guest CPUID as well as host support.
>>>
>>> https://lore.kernel.org/all/20230217231022.816138-1-seanjc@google.com
>> The framework looks good, will it be merged in kvm_x86?
> Yes, I would like to merge it at some point.
>
>>>> @@ -565,6 +575,16 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
>>>>  	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
>>>>  	ctxt->interruptibility = (u8)smstate->int_shadow;
>>>> +	if (kvm_cet_user_supported()) {
>>>> +		struct msr_data msr;
>>>> +
>>>> +		msr.index = MSR_KVM_GUEST_SSP;
>>>> +		msr.host_initiated = true;
>>>> +		msr.data = smstate->ssp;
>>>> +		/* Mimic host_initiated access to bypass ssp access check. */
>>> No, masquerading as a host access is all kinds of wrong. I have no idea what
>>> check you're trying to bypass, but whatever it is, it's wrong. Per the SDM, the
>>> SSP field in SMRAM is writable, which means that KVM needs to correctly handle
>>> the scenario where SSP holds garbage, e.g. a non-canonical address.
>> MSR_KVM_GUEST_SSP is only accessible to user space, e.g., during LM, it's not
>> accessible to VM itself. So in kvm_cet_is_msr_accessible(), I added a check to
>> tell whether the access is initiated from user space or not, I tried to bypass
>> that check. Yes, I will add necessary checks here.
>>
>>> Why can't this use kvm_get_msr() and kvm_set_msr()?
>> If my above assumption is correct, these helpers are passed by
>> host_initiated=false and cannot meet the requirments.
> Sorry, I don't follow. These writes are NOT initiated from the host, i.e.
> kvm_get_msr() and kvm_set_msr() do the right thing, unless I'm missing something.

In this series, in patch 14, I added the below check:

+	/* The synthetic MSR is for userspace access only. */
+	if (msr->index == MSR_KVM_GUEST_SSP)
+		return false;

If kvm_get_msr() or kvm_set_msr() is used (host_initiated=false), it'll hit this
check and fail to write the MSR.

But there's another check at the beginning of kvm_cet_is_msr_accessible():

+	if (msr->host_initiated)
+		return true;

I thought to use host_initiated = true to bypass the former check. Now that the
helper is going to be overhauled, this is not an issue.
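The check ordering described above can be modeled in a few lines. In this standalone sketch (the struct, function name, and MSR index value are illustrative stand-ins, not the real KVM definitions), the early host_initiated return short-circuits the userspace-only check, which is exactly why setting host_initiated = true "worked" as a bypass:

```c
#include <stdbool.h>
#include <stdint.h>

#define MSR_KVM_GUEST_SSP 0x4b564d08u /* illustrative index, not the real value */

struct msr_data {
	uint32_t index;
	bool host_initiated;
	uint64_t data;
};

/*
 * Model of the kvm_cet_is_msr_accessible() ordering quoted above:
 * host-initiated (userspace) accesses are allowed unconditionally, then
 * the synthetic MSR_KVM_GUEST_SSP is rejected for guest-initiated
 * accesses.  Any caller that sets msr->host_initiated = true therefore
 * skips the second check entirely -- the masquerading Sean objects to.
 */
static bool cet_msr_accessible(const struct msr_data *msr)
{
	if (msr->host_initiated)
		return true;

	/* The synthetic MSR is for userspace access only. */
	if (msr->index == MSR_KVM_GUEST_SSP)
		return false;

	return true;
}
```

The model makes the problem visible: the accessibility predicate cannot distinguish "userspace save/restore" from "KVM internal emulation lying about its origin", so the fix is to restructure the helper rather than forge the flag.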
diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
index b42111a24cc2..c54d3eb2b7e4 100644
--- a/arch/x86/kvm/smm.c
+++ b/arch/x86/kvm/smm.c
@@ -275,6 +275,16 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
 	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
 
 	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
+
+	if (kvm_cet_user_supported()) {
+		struct msr_data msr;
+
+		msr.index = MSR_KVM_GUEST_SSP;
+		msr.host_initiated = true;
+		/* GUEST_SSP is stored in VMCS at vm-exit. */
+		static_call(kvm_x86_get_msr)(vcpu, &msr);
+		smram->ssp = msr.data;
+	}
 }
 #endif
 
@@ -565,6 +575,16 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
 	ctxt->interruptibility = (u8)smstate->int_shadow;
 
+	if (kvm_cet_user_supported()) {
+		struct msr_data msr;
+
+		msr.index = MSR_KVM_GUEST_SSP;
+		msr.host_initiated = true;
+		msr.data = smstate->ssp;
+		/* Mimic host_initiated access to bypass ssp access check. */
+		static_call(kvm_x86_set_msr)(vcpu, &msr);
+	}
+
 	return X86EMUL_CONTINUE;
 }
 #endif
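Pulling the thread's conclusions together — gate on the guest's CPUID feature rather than host support, save the current SSP on SMI, and validate before loading on RSM because the SMRAM field is writable in between — the intended flow can be sketched as a standalone model. All names here are illustrative stand-ins, not the actual KVM implementation, and a fixed 48-bit canonicality check stands in for KVM's real address validation:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Minimal model of the architectural behavior under discussion: the
 * CPU's current SSP is saved to SMRAM on SMI and *loaded* (not
 * "restored") from SMRAM on RSM, since SMM code may rewrite the field.
 */

struct vcpu_model {
	bool guest_has_shstk;	/* stands in for guest_cpuid_has() */
	uint64_t ssp;		/* stands in for the guest's SSP */
};

struct smram_model {
	uint64_t ssp;		/* the SDM-defined, writable SSP slot */
};

static bool canonical48(uint64_t v)
{
	/* Canonical iff the value survives sign extension from bit 47. */
	return ((int64_t)(v << 16) >> 16) == (int64_t)v;
}

static void enter_smm_save(const struct vcpu_model *vcpu,
			   struct smram_model *smram)
{
	if (vcpu->guest_has_shstk)
		smram->ssp = vcpu->ssp;	/* kvm_get_msr() in the real flow */
}

/*
 * Returns false on the failure path: SMM code may have written garbage
 * (e.g. a non-canonical address) into the SSP field, and that must not
 * be loaded into the guest's SSP.
 */
static bool rsm_load(struct vcpu_model *vcpu, const struct smram_model *smram)
{
	if (!vcpu->guest_has_shstk)
		return true;
	if (!canonical48(smram->ssp))
		return false;
	vcpu->ssp = smram->ssp;		/* kvm_set_msr() in the real flow */
	return true;
}
```

Note the asymmetry the model captures: the save side is a plain read of guest state, while the load side needs validation precisely because "loaded" is not "restored".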