Message ID | 20230803042732.88515-6-weijiang.yang@intel.com |
---|---|
State | New |
Headers |
Series |
Enable CET Virtualization
Commit Message
Yang, Weijiang
Aug. 3, 2023, 4:27 a.m. UTC
Set kvm_caps.supported_xss to host_xss & KVM_SUPPORTED_XSS.
host_xss contains the host-supported xstate feature bits used for thread
context switching, and KVM_SUPPORTED_XSS includes all XSS feature bits
that KVM enables; ANDing the two yields all feature bits KVM supports.
Since the result is a subset of host_xss, the related XSAVE-managed MSRs
are automatically swapped between guest and host when a vCPU exits to
userspace.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
arch/x86/kvm/vmx/vmx.c | 1 -
arch/x86/kvm/x86.c | 6 +++++-
2 files changed, 5 insertions(+), 2 deletions(-)
Comments
On Thu, Aug 03, 2023, Yang Weijiang wrote:
> Set kvm_caps.supported_xss to host_xss && KVM XSS mask.
> host_xss contains the host supported xstate feature bits for thread
> context switch, KVM_SUPPORTED_XSS includes all KVM enabled XSS feature
> bits, the operation result represents all KVM supported feature bits.
> Since the result is subset of host_xss, the related XSAVE-managed MSRs
> are automatically swapped for guest and host when vCPU exits to
> userspace.
>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 1 -
>  arch/x86/kvm/x86.c     | 6 +++++-
>  2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 0ecf4be2c6af..c8d9870cfecb 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7849,7 +7849,6 @@ static __init void vmx_set_cpu_caps(void)
>  		kvm_cpu_cap_set(X86_FEATURE_UMIP);
>  
>  	/* CPUID 0xD.1 */
> -	kvm_caps.supported_xss = 0;

Dropping this code in *this* patch is wrong, this belongs in whatever patch(es) adds
IBT and SHSTK support in VMX.

And that does matter because it means this common patch can be carried with SVM
support without breaking VMX.
>  	if (!cpu_has_vmx_xsaves())
>  		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5d6d6fa33e5b..e9f3627d5fdd 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -225,6 +225,8 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
>  				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
>  				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
>
> +#define KVM_SUPPORTED_XSS	0
> +
>  u64 __read_mostly host_efer;
>  EXPORT_SYMBOL_GPL(host_efer);
>
> @@ -9498,8 +9500,10 @@ static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
>
>  	rdmsrl_safe(MSR_EFER, &host_efer);
>
> -	if (boot_cpu_has(X86_FEATURE_XSAVES))
> +	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
>  		rdmsrl(MSR_IA32_XSS, host_xss);
> +		kvm_caps.supported_xss = host_xss & KVM_SUPPORTED_XSS;
> +	}

Can you opportunistically (in this patch) hoist this above EFER so that XCR0 and
XSS are colocated?  I.e. end up with this:

	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
		host_xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
		kvm_caps.supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;
	}
	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
		rdmsrl(MSR_IA32_XSS, host_xss);
		kvm_caps.supported_xss = host_xss & KVM_SUPPORTED_XSS;
	}

	rdmsrl_safe(MSR_EFER, &host_efer);
On 8/5/2023 2:45 AM, Sean Christopherson wrote:
> On Thu, Aug 03, 2023, Yang Weijiang wrote:
>> Set kvm_caps.supported_xss to host_xss && KVM XSS mask.
>> host_xss contains the host supported xstate feature bits for thread
>> context switch, KVM_SUPPORTED_XSS includes all KVM enabled XSS feature
>> bits, the operation result represents all KVM supported feature bits.
>> Since the result is subset of host_xss, the related XSAVE-managed MSRs
>> are automatically swapped for guest and host when vCPU exits to
>> userspace.
>>
>> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
>> ---
>>  arch/x86/kvm/vmx/vmx.c | 1 -
>>  arch/x86/kvm/x86.c     | 6 +++++-
>>  2 files changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index 0ecf4be2c6af..c8d9870cfecb 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -7849,7 +7849,6 @@ static __init void vmx_set_cpu_caps(void)
>>  		kvm_cpu_cap_set(X86_FEATURE_UMIP);
>>  
>>  	/* CPUID 0xD.1 */
>> -	kvm_caps.supported_xss = 0;
>
> Dropping this code in *this* patch is wrong, this belong in whatever patch(es) adds
> IBT and SHSTK support in VMX.
>
> And that does matter because it means this common patch can be carried wih SVM
> support without breaking VMX.

OK, I'll drop this line in the VMX/SVM CET feature bits enabling patch instead.
>>  	if (!cpu_has_vmx_xsaves())
>>  		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 5d6d6fa33e5b..e9f3627d5fdd 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -225,6 +225,8 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
>>  				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
>>  				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
>>
>> +#define KVM_SUPPORTED_XSS	0
>> +
>>  u64 __read_mostly host_efer;
>>  EXPORT_SYMBOL_GPL(host_efer);
>>
>> @@ -9498,8 +9500,10 @@ static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
>>
>>  	rdmsrl_safe(MSR_EFER, &host_efer);
>>
>> -	if (boot_cpu_has(X86_FEATURE_XSAVES))
>> +	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
>>  		rdmsrl(MSR_IA32_XSS, host_xss);
>> +		kvm_caps.supported_xss = host_xss & KVM_SUPPORTED_XSS;
>> +	}
>
> Can you opportunistically (in this patch) hoist this above EFER so that XCR0 and
> XSS are colocated?  I.e. end up with this:
>
> 	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
> 		host_xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
> 		kvm_caps.supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;
> 	}
> 	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
> 		rdmsrl(MSR_IA32_XSS, host_xss);
> 		kvm_caps.supported_xss = host_xss & KVM_SUPPORTED_XSS;
> 	}
>
> 	rdmsrl_safe(MSR_EFER, &host_efer);

Will change it, thanks!
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0ecf4be2c6af..c8d9870cfecb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7849,7 +7849,6 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_set(X86_FEATURE_UMIP);
 
 	/* CPUID 0xD.1 */
-	kvm_caps.supported_xss = 0;
 	if (!cpu_has_vmx_xsaves())
 		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5d6d6fa33e5b..e9f3627d5fdd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -225,6 +225,8 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
 
+#define KVM_SUPPORTED_XSS	0
+
 u64 __read_mostly host_efer;
 EXPORT_SYMBOL_GPL(host_efer);
 
@@ -9498,8 +9500,10 @@ static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 
 	rdmsrl_safe(MSR_EFER, &host_efer);
 
-	if (boot_cpu_has(X86_FEATURE_XSAVES))
+	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
 		rdmsrl(MSR_IA32_XSS, host_xss);
+		kvm_caps.supported_xss = host_xss & KVM_SUPPORTED_XSS;
+	}
 
 	kvm_init_pmu_capability(ops->pmu_ops);