Message ID: 20230914063325.85503-11-weijiang.yang@intel.com
State: New
From: Yang Weijiang <weijiang.yang@intel.com>
To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: dave.hansen@intel.com, peterz@infradead.org, chao.gao@intel.com, rick.p.edgecombe@intel.com, weijiang.yang@intel.com, john.allen@amd.com
Subject: [PATCH v6 10/25] KVM: x86: Add kvm_msr_{read,write}() helpers
Date: Thu, 14 Sep 2023 02:33:10 -0400
In-Reply-To: <20230914063325.85503-1-weijiang.yang@intel.com>
Series: Enable CET Virtualization
Commit Message
Yang, Weijiang
Sept. 14, 2023, 6:33 a.m. UTC
Wrap __kvm_{get,set}_msr() into two new helpers for KVM usage and use the
helpers to replace existing usage of the raw functions.
kvm_msr_{read,write}() are KVM-internal helpers, i.e. used when KVM needs
to get/set an MSR value for emulating CPU behavior.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
arch/x86/include/asm/kvm_host.h | 4 +++-
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/x86.c | 16 +++++++++++++---
3 files changed, 17 insertions(+), 5 deletions(-)
Comments
On Thu, 2023-09-14 at 02:33 -0400, Yang Weijiang wrote:
> Wrap __kvm_{get,set}_msr() into two new helpers for KVM usage and use the
> helpers to replace existing usage of the raw functions.
> kvm_msr_{read,write}() are KVM-internal helpers, i.e. used when KVM needs
> to get/set a MSR value for emulating CPU behavior.

I am not sure if I like this patch or not. On one hand the code is cleaner this way, but on the other hand now it is easier to call kvm_msr_write() on behalf of the guest.

For example we also have kvm_set_msr(), which does actually set the MSR on behalf of the guest.

How about we call the new function kvm_msr_set_host() and rename kvm_set_msr() to kvm_msr_set_guest(), together with good comments explaining what they do?

Also, functions like kvm_set_msr_ignored_check() and kvm_set_msr_with_filter() IMHO have names that are not very user friendly. A refactoring is very welcome in this area. At the very least they should gain thoughtful comments about what they do.

For the MSR-reading API, I can suggest similar names and comments:

/*
 * Read the value of an MSR.
 * Some MSRs exist in the KVM model even when the guest can't read them.
 */
int kvm_get_msr_value(struct kvm_vcpu *vcpu, u32 index, u64 *data);

/* Read the value of an MSR on behalf of the guest */
int kvm_get_guest_msr_value(struct kvm_vcpu *vcpu, u32 index, u64 *data);

Although I am not going to argue over this: there are multiple ways to improve this, and keeping things as is, or something similar to this patch, is also fine with me.
Best regards,
Maxim Levitsky
On Tue, Oct 31, 2023, Maxim Levitsky wrote:
> On Thu, 2023-09-14 at 02:33 -0400, Yang Weijiang wrote:
> > Wrap __kvm_{get,set}_msr() into two new helpers for KVM usage and use the
> > helpers to replace existing usage of the raw functions.
> > kvm_msr_{read,write}() are KVM-internal helpers, i.e. used when KVM needs
> > to get/set a MSR value for emulating CPU behavior.
>
> I am not sure if I like this patch or not. On one hand the code is cleaner
> this way, but on the other hand now it is easier to call kvm_msr_write() on
> behalf of the guest.
>
> For example we also have the 'kvm_set_msr()' which does actually set the msr
> on behalf of the guest.
>
> How about we call the new function kvm_msr_set_host() and rename
> kvm_set_msr() to kvm_msr_set_guest(), together with good comments explaining
> what they do?

LOL, just call me Nostradamus[*] ;-) :

: > SSP save/load should go to enter_smm_save_state_64() and rsm_load_state_64(),
: > where other fields of SMRAM are handled.
:
: +1. The right way to get/set MSRs like this is to use __kvm_get_msr() and pass
: %true for @host_initiated. Though I would add a prep patch to provide wrappers
: for __kvm_get_msr() and __kvm_set_msr(). Naming will be hard, but I think we
                                           ^^^^^^^^^^^^^^^^^^^
: can use kvm_{read,write}_msr() to go along with the KVM-initiated register
: accessors/mutators, e.g. kvm_register_read(), kvm_pdptr_write(), etc.

[*] https://lore.kernel.org/all/ZM0YZgFsYWuBFOze@google.com

> Also functions like kvm_set_msr_ignored_check(), kvm_set_msr_with_filter() and such,
> IMHO have names that are not very user friendly.

I don't like the host/guest split because KVM always operates on guest values, e.g. kvm_msr_set_host() in particular could get confusing.

IMO kvm_get_msr() and kvm_set_msr(), and to some extent the helpers you note below, are the real problem.

What if we rename kvm_{g,s}et_msr() to kvm_emulate_msr_{read,write}() to make it more obvious that those are the "guest" helpers? And do that as a prep patch in this series (there aren't _that_ many users).

I'm also in favor of renaming the "inner" helpers, but I think we should tackle those separately.
On Wed, 2023-11-01 at 12:32 -0700, Sean Christopherson wrote:
> I don't like the host/guest split because KVM always operates on guest values,
> e.g. kvm_msr_set_host() in particular could get confusing.

That makes sense.

> IMO kvm_get_msr() and kvm_set_msr(), and to some extent the helpers you note below,
> are the real problem.
>
> What if we rename kvm_{g,s}et_msr() to kvm_emulate_msr_{read,write}() to make it
> more obvious that those are the "guest" helpers? And do that as a prep patch in
> this series (there aren't _that_ many users).

Makes sense.

> I'm also in favor of renaming the "inner" helpers, but I think we should tackle
> those separately.

OK.

Best regards,
Maxim Levitsky
On 11/3/2023 2:26 AM, Maxim Levitsky wrote:
> On Wed, 2023-11-01 at 12:32 -0700, Sean Christopherson wrote:
>> IMO kvm_get_msr() and kvm_set_msr(), and to some extent the helpers you note below,
>> are the real problem.
>>
>> What if we rename kvm_{g,s}et_msr() to kvm_emulate_msr_{read,write}() to make it
>> more obvious that those are the "guest" helpers? And do that as a prep patch in
>> this series (there aren't _that_ many users).
> Makes sense.

Then I'll modify the related code and add the prep patch in the next version, thanks!
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1a4def36d5bb..0fc5e6312e93 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1956,7 +1956,9 @@ void kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu);
 
 void kvm_enable_efer_bits(u64);
 bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer);
-int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data, bool host_initiated);
+
+int kvm_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data);
+int kvm_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data);
 int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data);
 int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data);
 int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 7c3e4a550ca7..1f206caec559 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -1531,7 +1531,7 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
 		*edx = entry->edx;
 		if (function == 7 && index == 0) {
 			u64 data;
-			if (!__kvm_get_msr(vcpu, MSR_IA32_TSX_CTRL, &data, true) &&
+			if (!kvm_msr_read(vcpu, MSR_IA32_TSX_CTRL, &data) &&
 			    (data & TSX_CTRL_CPUID_CLEAR))
 				*ebx &= ~(F(RTM) | F(HLE));
 		} else if (function == 0x80000007) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6c9c81e82e65..e0b55c043dab 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1917,8 +1917,8 @@ static int kvm_set_msr_ignored_check(struct kvm_vcpu *vcpu,
  * Returns 0 on success, non-0 otherwise.
  * Assumes vcpu_load() was already called.
  */
-int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
-		  bool host_initiated)
+static int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
+			 bool host_initiated)
 {
 	struct msr_data msr;
 	int ret;
@@ -1944,6 +1944,16 @@ int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
 	return ret;
 }
 
+int kvm_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data)
+{
+	return __kvm_set_msr(vcpu, index, data, true);
+}
+
+int kvm_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data)
+{
+	return __kvm_get_msr(vcpu, index, data, true);
+}
+
 static int kvm_get_msr_ignored_check(struct kvm_vcpu *vcpu,
 				     u32 index, u64 *data, bool host_initiated)
 {
@@ -12082,7 +12092,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 			MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
 
 		__kvm_set_xcr(vcpu, 0, XFEATURE_MASK_FP);
-		__kvm_set_msr(vcpu, MSR_IA32_XSS, 0, true);
+		kvm_msr_write(vcpu, MSR_IA32_XSS, 0);
 	}
 
 	/* All GPRs except RDX (handled below) are zeroed on RESET/INIT. */