From patchwork Mon Feb 19 07:47:32 2024
X-Patchwork-Submitter: "Yang, Weijiang"
X-Patchwork-Id: 202960
From: Yang Weijiang <weijiang.yang@intel.com>
To: seanjc@google.com, pbonzini@redhat.com, dave.hansen@intel.com,
    x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, chao.gao@intel.com, rick.p.edgecombe@intel.com,
    mlevitsk@redhat.com, john.allen@amd.com, weijiang.yang@intel.com
Subject: [PATCH v10 26/27] KVM: nVMX: Enable CET support for nested guest
Date: Sun, 18 Feb 2024 23:47:32 -0800
Message-ID: <20240219074733.122080-27-weijiang.yang@intel.com>
In-Reply-To: <20240219074733.122080-1-weijiang.yang@intel.com>
References: <20240219074733.122080-1-weijiang.yang@intel.com>

Set up CET MSRs, the related VM_ENTRY/EXIT control bits and the fixed CR4
setting to enable CET for nested VMs. vmcs12 and vmcs02 need to be synced
when L2 exits to L1 and when L1 resumes L2, so that each level observes
the other's current CET state.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 80 ++++++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/vmcs12.c |  6 +++
 arch/x86/kvm/vmx/vmcs12.h | 14 ++++++-
 arch/x86/kvm/vmx/vmx.c    |  2 +
 arch/x86/kvm/vmx/vmx.h    |  3 ++
 5 files changed, 102 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 0439208523b8..d0311260270c 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -691,6 +691,28 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
 
+	/* Pass CET MSRs to nested VM if L0 and L1 are set to pass-through. */
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_U_CET, MSR_TYPE_RW);
+
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_S_CET, MSR_TYPE_RW);
+
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_PL0_SSP, MSR_TYPE_RW);
+
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_PL1_SSP, MSR_TYPE_RW);
+
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_PL2_SSP, MSR_TYPE_RW);
+
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_PL3_SSP, MSR_TYPE_RW);
+
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_INT_SSP_TAB, MSR_TYPE_RW);
+
 	kvm_vcpu_unmap(vcpu, &vmx->nested.msr_bitmap_map, false);
 
 	vmx->nested.force_msr_bitmap_recalc = false;
@@ -2438,6 +2460,30 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 	}
 }
 
+static inline void cet_vmcs_fields_get(struct kvm_vcpu *vcpu, u64 *ssp,
+				       u64 *s_cet, u64 *ssp_tbl)
+{
+	if (guest_can_use(vcpu, X86_FEATURE_SHSTK)) {
+		*ssp = vmcs_readl(GUEST_SSP);
+		*s_cet = vmcs_readl(GUEST_S_CET);
+		*ssp_tbl = vmcs_readl(GUEST_INTR_SSP_TABLE);
+	} else if (guest_can_use(vcpu, X86_FEATURE_IBT)) {
+		*s_cet = vmcs_readl(GUEST_S_CET);
+	}
+}
+
+static inline void cet_vmcs_fields_put(struct kvm_vcpu *vcpu, u64 ssp,
+				       u64 s_cet, u64 ssp_tbl)
+{
+	if (guest_can_use(vcpu, X86_FEATURE_SHSTK)) {
+		vmcs_writel(GUEST_SSP, ssp);
+		vmcs_writel(GUEST_S_CET, s_cet);
+		vmcs_writel(GUEST_INTR_SSP_TABLE, ssp_tbl);
+	} else if (guest_can_use(vcpu, X86_FEATURE_IBT)) {
+		vmcs_writel(GUEST_S_CET, s_cet);
+	}
+}
+
 static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 {
 	struct hv_enlightened_vmcs *hv_evmcs = nested_vmx_evmcs(vmx);
@@ -2553,6 +2599,11 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
 		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
 
+		if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE)
+			cet_vmcs_fields_put(&vmx->vcpu, vmcs12->guest_ssp,
+					    vmcs12->guest_s_cet,
+					    vmcs12->guest_ssp_tbl);
+
 		set_cr4_guest_host_mask(vmx);
 	}
 
@@ -2591,6 +2642,13 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
 		vmcs_write64(GUEST_IA32_DEBUGCTL, vmx->nested.pre_vmenter_debugctl);
 	}
+
+	if (!vmx->nested.nested_run_pending ||
+	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE))
+		cet_vmcs_fields_put(vcpu, vmx->nested.pre_vmenter_ssp,
+				    vmx->nested.pre_vmenter_s_cet,
+				    vmx->nested.pre_vmenter_ssp_tbl);
+
 	if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
 		vmcs_write64(GUEST_BNDCFGS, vmx->nested.pre_vmenter_bndcfgs);
@@ -3471,6 +3529,12 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
 		vmx->nested.pre_vmenter_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
 
+	if (!vmx->nested.nested_run_pending ||
+	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE))
+		cet_vmcs_fields_get(vcpu, &vmx->nested.pre_vmenter_ssp,
+				    &vmx->nested.pre_vmenter_s_cet,
+				    &vmx->nested.pre_vmenter_ssp_tbl);
+
 	/*
 	 * Overwrite vmcs01.GUEST_CR3 with L1's CR3 if EPT is disabled *and*
 	 * nested early checks are disabled.  In the event of a "late" VM-Fail,
@@ -4294,6 +4358,9 @@ static bool is_vmcs12_ext_field(unsigned long field)
 	case GUEST_IDTR_BASE:
 	case GUEST_PENDING_DBG_EXCEPTIONS:
 	case GUEST_BNDCFGS:
+	case GUEST_SSP:
+	case GUEST_S_CET:
+	case GUEST_INTR_SSP_TABLE:
 		return true;
 	default:
 		break;
@@ -4344,6 +4411,10 @@ static void sync_vmcs02_to_vmcs12_rare(struct kvm_vcpu *vcpu,
 	vmcs12->guest_pending_dbg_exceptions =
 		vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS);
 
+	cet_vmcs_fields_get(&vmx->vcpu, &vmcs12->guest_ssp,
+			    &vmcs12->guest_s_cet,
+			    &vmcs12->guest_ssp_tbl);
+
 	vmx->nested.need_sync_vmcs02_to_vmcs12_rare = false;
 }
 
@@ -4569,6 +4640,10 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	if (vmcs12->vm_exit_controls & VM_EXIT_CLEAR_BNDCFGS)
 		vmcs_write64(GUEST_BNDCFGS, 0);
 
+	if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_CET_STATE)
+		cet_vmcs_fields_put(vcpu, vmcs12->host_ssp, vmcs12->host_s_cet,
+				    vmcs12->host_ssp_tbl);
+
 	if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_PAT) {
 		vmcs_write64(GUEST_IA32_PAT, vmcs12->host_ia32_pat);
 		vcpu->arch.pat = vmcs12->host_ia32_pat;
@@ -6840,7 +6915,7 @@ static void nested_vmx_setup_exit_ctls(struct vmcs_config *vmcs_conf,
 		VM_EXIT_HOST_ADDR_SPACE_SIZE |
 #endif
 		VM_EXIT_LOAD_IA32_PAT | VM_EXIT_SAVE_IA32_PAT |
-		VM_EXIT_CLEAR_BNDCFGS;
+		VM_EXIT_CLEAR_BNDCFGS | VM_EXIT_LOAD_CET_STATE;
 	msrs->exit_ctls_high |=
 		VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR |
 		VM_EXIT_LOAD_IA32_EFER | VM_EXIT_SAVE_IA32_EFER |
@@ -6862,7 +6937,8 @@ static void nested_vmx_setup_entry_ctls(struct vmcs_config *vmcs_conf,
 #ifdef CONFIG_X86_64
 		VM_ENTRY_IA32E_MODE |
 #endif
-		VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_BNDCFGS;
+		VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_BNDCFGS |
+		VM_ENTRY_LOAD_CET_STATE;
 	msrs->entry_ctls_high |=
 		(VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | VM_ENTRY_LOAD_IA32_EFER |
 		 VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL);
diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c
index 106a72c923ca..4233b5ca9461 100644
--- a/arch/x86/kvm/vmx/vmcs12.c
+++ b/arch/x86/kvm/vmx/vmcs12.c
@@ -139,6 +139,9 @@ const unsigned short vmcs12_field_offsets[] = {
 	FIELD(GUEST_PENDING_DBG_EXCEPTIONS, guest_pending_dbg_exceptions),
 	FIELD(GUEST_SYSENTER_ESP, guest_sysenter_esp),
 	FIELD(GUEST_SYSENTER_EIP, guest_sysenter_eip),
+	FIELD(GUEST_S_CET, guest_s_cet),
+	FIELD(GUEST_SSP, guest_ssp),
+	FIELD(GUEST_INTR_SSP_TABLE, guest_ssp_tbl),
 	FIELD(HOST_CR0, host_cr0),
 	FIELD(HOST_CR3, host_cr3),
 	FIELD(HOST_CR4, host_cr4),
@@ -151,5 +154,8 @@ const unsigned short vmcs12_field_offsets[] = {
 	FIELD(HOST_IA32_SYSENTER_EIP, host_ia32_sysenter_eip),
 	FIELD(HOST_RSP, host_rsp),
 	FIELD(HOST_RIP, host_rip),
+	FIELD(HOST_S_CET, host_s_cet),
+	FIELD(HOST_SSP, host_ssp),
+	FIELD(HOST_INTR_SSP_TABLE, host_ssp_tbl),
 };
 const unsigned int nr_vmcs12_fields = ARRAY_SIZE(vmcs12_field_offsets);
diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
index 01936013428b..3884489e7f7e 100644
--- a/arch/x86/kvm/vmx/vmcs12.h
+++ b/arch/x86/kvm/vmx/vmcs12.h
@@ -117,7 +117,13 @@ struct __packed vmcs12 {
 	natural_width host_ia32_sysenter_eip;
 	natural_width host_rsp;
 	natural_width host_rip;
-	natural_width paddingl[8]; /* room for future expansion */
+	natural_width host_s_cet;
+	natural_width host_ssp;
+	natural_width host_ssp_tbl;
+	natural_width guest_s_cet;
+	natural_width guest_ssp;
+	natural_width guest_ssp_tbl;
+	natural_width paddingl[2]; /* room for future expansion */
 	u32 pin_based_vm_exec_control;
 	u32 cpu_based_vm_exec_control;
 	u32 exception_bitmap;
@@ -292,6 +298,12 @@ static inline void vmx_check_vmcs12_offsets(void)
 	CHECK_OFFSET(host_ia32_sysenter_eip, 656);
 	CHECK_OFFSET(host_rsp, 664);
 	CHECK_OFFSET(host_rip, 672);
+	CHECK_OFFSET(host_s_cet, 680);
+	CHECK_OFFSET(host_ssp, 688);
+	CHECK_OFFSET(host_ssp_tbl, 696);
+	CHECK_OFFSET(guest_s_cet, 704);
+	CHECK_OFFSET(guest_ssp, 712);
+	CHECK_OFFSET(guest_ssp_tbl, 720);
 	CHECK_OFFSET(pin_based_vm_exec_control, 744);
 	CHECK_OFFSET(cpu_based_vm_exec_control, 748);
 	CHECK_OFFSET(exception_bitmap, 752);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9df25c9e80f5..210724d2151c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7727,6 +7727,8 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
 	cr4_fixed1_update(X86_CR4_PKE,        ecx, feature_bit(PKU));
 	cr4_fixed1_update(X86_CR4_UMIP,       ecx, feature_bit(UMIP));
 	cr4_fixed1_update(X86_CR4_LA57,       ecx, feature_bit(LA57));
+	cr4_fixed1_update(X86_CR4_CET,        ecx, feature_bit(SHSTK));
+	cr4_fixed1_update(X86_CR4_CET,        edx, feature_bit(IBT));
 
 	entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
 	cr4_fixed1_update(X86_CR4_LAM_SUP,    eax, feature_bit(LAM));
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index d0cad2624564..3c1de37728fe 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -224,6 +224,9 @@ struct nested_vmx {
 	 */
 	u64 pre_vmenter_debugctl;
 	u64 pre_vmenter_bndcfgs;
+	u64 pre_vmenter_ssp;
+	u64 pre_vmenter_s_cet;
+	u64 pre_vmenter_ssp_tbl;
 
 	/* to migrate it to L1 if L2 writes to L1's CR8 directly */
 	int l1_tpr_threshold;
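
For anyone cross-checking the CHECK_OFFSET() values above: the six new
natural_width fields sit after host_rip (offset 672) and consume six of the
former paddingl[8] slots, so the padding shrinks to paddingl[2] and
pin_based_vm_exec_control stays at offset 744, leaving the rest of the
vmcs12 layout (part of the nested-state blob exposed to userspace)
untouched. A minimal standalone sketch of that arithmetic follows; it is
not kernel code — struct vmcs12_tail and OFF() are illustrative names, and
natural_width is assumed to be 8 bytes as on 64-bit builds:

#include <stddef.h>
#include <stdint.h>

typedef uint64_t natural_width;	/* assumption: 64-bit build */

/* Mirrors just the tail of struct vmcs12 around the new fields. */
struct __attribute__((packed)) vmcs12_tail {
	natural_width host_rip;			/* 672 in vmcs12 */
	natural_width host_s_cet;		/* 680 */
	natural_width host_ssp;			/* 688 */
	natural_width host_ssp_tbl;		/* 696 */
	natural_width guest_s_cet;		/* 704 */
	natural_width guest_ssp;		/* 712 */
	natural_width guest_ssp_tbl;		/* 720 */
	natural_width paddingl[2];		/* 728, 736 */
	uint32_t pin_based_vm_exec_control;	/* 744 */
};

/* Translate a local offset into the full-struct offset. */
#define OFF(f) (672 + offsetof(struct vmcs12_tail, f))

_Static_assert(OFF(host_s_cet) == 680, "host_s_cet moved");
_Static_assert(OFF(guest_ssp_tbl) == 720, "guest_ssp_tbl moved");
_Static_assert(OFF(pin_based_vm_exec_control) == 744, "padding miscounted");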