Message ID | 20221019095035.10823-6-xin3.li@intel.com |
---|---|
State | New |
Headers |
From: Xin Li <xin3.li@intel.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, peterz@infradead.org, brgerst@gmail.com, chang.seok.bae@intel.com
Subject: [PATCH v4 5/5] x86/gsseg: use the LKGS instruction if available for load_gs_index()
Date: Wed, 19 Oct 2022 02:50:35 -0700
Message-Id: <20221019095035.10823-6-xin3.li@intel.com>
In-Reply-To: <20221019095035.10823-1-xin3.li@intel.com>
References: <20221019095035.10823-1-xin3.li@intel.com> |
Series |
Enable LKGS instruction
|
|
Commit Message
Li, Xin3
Oct. 19, 2022, 9:50 a.m. UTC
From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

The LKGS instruction atomically loads a segment descriptor into the
%gs descriptor registers, *except* that %gs.base is unchanged, and the
base is instead loaded into MSR_IA32_KERNEL_GS_BASE, which is exactly
what we want this function to do.

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Xin Li <xin3.li@intel.com>
---

Changes since v3:
* We want less ASM not more, thus keep local_irq_save/restore() inside
  native_load_gs_index() (Thomas Gleixner).
* For paravirt enabled kernels, initialize pv_ops.cpu.load_gs_index to
  native_lkgs (Thomas Gleixner).

Changes since v2:
* Mark DI as input and output ("+D") as in v1, since the exception handler
  modifies it (Brian Gerst).

Changes since v1:
* Use EX_TYPE_ZERO_REG instead of fixup code in the obsolete .fixup code
  section (Peter Zijlstra).
* Add a comment stating that the LKGS_DI macro will be replaced with
  "lkgs %di" once binutils supports the LKGS instruction (Peter Zijlstra).
---
 arch/x86/include/asm/gsseg.h | 33 +++++++++++++++++++++++++++++----
 arch/x86/kernel/cpu/common.c |  1 +
 2 files changed, 30 insertions(+), 4 deletions(-)
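As a pseudocode sketch of the architectural behavior the commit message describes (helper names such as load_descriptor() are illustrative, not real kernel functions):

```c
/* Pseudocode: LKGS vs. a plain "mov %sel, %gs" (illustrative only). */
void lkgs(u16 selector)
{
	/* Descriptor fetch and permission checks, as for MOV-to-%gs. */
	desc = load_descriptor(selector);

	gs.selector   = selector;
	gs.attributes = desc.attributes;
	gs.limit      = desc.limit;

	/*
	 * The difference: %gs.base is left unchanged, and the descriptor
	 * base is written to the MSR that SWAPGS would later swap in --
	 * exactly the state load_gs_index() wants to establish.
	 */
	wrmsr(MSR_IA32_KERNEL_GS_BASE, desc.base);
}
```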
Comments
On 19.10.22 11:50, Xin Li wrote:
> From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
>
> The LKGS instruction atomically loads a segment descriptor into the
> %gs descriptor registers, *except* that %gs.base is unchanged, and the
> base is instead loaded into MSR_IA32_KERNEL_GS_BASE, which is exactly
> what we want this function to do.
[...]
> +static inline void __init lkgs_init(void)
> +{
> +#ifdef CONFIG_PARAVIRT_XXL
> +#ifdef CONFIG_X86_64
> +	if (cpu_feature_enabled(X86_FEATURE_LKGS))
> +		pv_ops.cpu.load_gs_index = native_lkgs;

For this to work correctly when running as a Xen PV guest, you need to add

  setup_clear_cpu_cap(X86_FEATURE_LKGS);

to xen_init_capabilities() in arch/x86/xen/enlighten_pv.c, as otherwise the
Xen specific .load_gs_index vector will be overwritten.

Juergen
>> +static inline void __init lkgs_init(void)
>> +{
>> +#ifdef CONFIG_PARAVIRT_XXL
>> +#ifdef CONFIG_X86_64
>> +	if (cpu_feature_enabled(X86_FEATURE_LKGS))
>> +		pv_ops.cpu.load_gs_index = native_lkgs;
>
> For this to work correctly when running as a Xen PV guest, you need to add
>
>   setup_clear_cpu_cap(X86_FEATURE_LKGS);
>
> to xen_init_capabilities() in arch/x86/xen/enlighten_pv.c, as otherwise
> the Xen specific .load_gs_index vector will be overwritten.

Yeah, we definitely should add it to disable LKGS in a Xen PV guest.

So does it mean that the Xen PV uses a black list during feature detection?
If yes then new features are often required to be masked with an explicit
call to setup_clear_cpu_cap.

Wouldn't a white list be better?
Then the job is more just on the Xen PV side, and it can selectively enable
a new feature, sometimes with Xen PV specific handling code added.

Xin

> Juergen
On October 19, 2022 10:45:07 AM PDT, "Li, Xin3" <xin3.li@intel.com> wrote:
> So does it mean that the Xen PV uses a black list during feature detection?
> If yes then new features are often required to be masked with an explicit
> call to setup_clear_cpu_cap.
>
> Wouldn't a white list be better?
> Then the job is more just on the Xen PV side, and it can selectively enable
> a new feature, sometimes with Xen PV specific handling code added.

Most things don't frob the paravirt list.

Maybe we should make the paravirt frobbing a separate patch, as it is
separable.
On 19.10.22 19:45, Li, Xin3 wrote:
> So does it mean that the Xen PV uses a black list during feature detection?
> If yes then new features are often required to be masked with an explicit
> call to setup_clear_cpu_cap.
>
> Wouldn't a white list be better?
> Then the job is more just on the Xen PV side, and it can selectively enable
> a new feature, sometimes with Xen PV specific handling code added.

This is not how it works. Feature detection is generic code, so we'd need
to tweak that for switching to a whitelist.

Additionally most features don't require any Xen PV specific handling. This
is needed for some paravirtualized privileged operations only. So switching
to a whitelist would add more effort.

Juergen
On 19.10.22 20:01, H. Peter Anvin wrote:
> On October 19, 2022 10:45:07 AM PDT, "Li, Xin3" <xin3.li@intel.com> wrote:
>> Wouldn't a white list be better?
>> Then the job is more just on the Xen PV side, and it can selectively enable
>> a new feature, sometimes with Xen PV specific handling code added.
>
> Most things don't frob the paravirt list.
>
> Maybe we should make the paravirt frobbing a separate patch, as it is
> separable.

Works for me.

Juergen
> On 19.10.22 19:45, Li, Xin3 wrote:
>> Wouldn't a white list be better?
>> Then the job is more just on the Xen PV side, and it can selectively
>> enable a new feature, sometimes with Xen PV specific handling code added.
>
> This is not how it works. Feature detection is generic code, so we'd need
> to tweak that for switching to a whitelist.

Yes, a Xen PV guest is basically a Linux system. However IIRC, the Xen PV
CPUID is para-virtualized, so it's the Xen hypervisor's responsibility to
decide which features are exposed to a Xen PV guest. No?

> Additionally most features don't require any Xen PV specific handling.
> This is needed for some paravirtualized privileged operations only. So
> switching to a whitelist would add more effort.

LKGS is allowed only in ring 0, thus only the Xen hypervisor could use it.

Xin

> Juergen
>> Most things don't frob the paravirt list.
>>
>> Maybe we should make the paravirt frobbing a separate patch, as it is
>> separable.
>
> Works for me.

Thanks, I will send out the patch after Xen PV testing (need to set it up
first).

Xin

> Juergen
On 20.10.22 07:58, Li, Xin3 wrote:
>> This is not how it works. Feature detection is generic code, so we'd need
>> to tweak that for switching to a whitelist.
>
> Yes, a Xen PV guest is basically a Linux system. However IIRC, the Xen PV
> CPUID is para-virtualized, so it's the Xen hypervisor's responsibility to
> decide which features are exposed to a Xen PV guest. No?

In theory you are right, of course.

OTOH the Xen PV interface has a long and complicated history, and we have
to deal with old hypervisor versions, too.

>> Additionally most features don't require any Xen PV specific handling.
>> This is needed for some paravirtualized privileged operations only. So
>> switching to a whitelist would add more effort.
>
> LKGS is allowed only in ring 0, thus only the Xen hypervisor could use it.

Right, it would be one of the features where a whitelist would be nice.

OTOH today only 11 features need special handling in Xen PV guests, while
the rest of more than 300 features don't.

Juergen
> Right, it would be one of the features where a whitelist would be nice.
>
> OTOH today only 11 features need special handling in Xen PV guests, while
> the rest of more than 300 features don't.

Got to say, nothing is more convincing than strong data.

Xin

> Juergen
diff --git a/arch/x86/include/asm/gsseg.h b/arch/x86/include/asm/gsseg.h
index d15577c39e8d..ab6a595cea70 100644
--- a/arch/x86/include/asm/gsseg.h
+++ b/arch/x86/include/asm/gsseg.h
@@ -14,17 +14,42 @@
 
 extern asmlinkage void asm_load_gs_index(u16 selector);
 
+/* Replace with "lkgs %di" once binutils support LKGS instruction */
+#define LKGS_DI	_ASM_BYTES(0xf2,0x0f,0x00,0xf7)
+
+static inline void native_lkgs(unsigned int selector)
+{
+	u16 sel = selector;
+	asm_inline volatile("1: " LKGS_DI
+			    _ASM_EXTABLE_TYPE_REG(1b, 1b, EX_TYPE_ZERO_REG, %k[sel])
+			    : [sel] "+D" (sel));
+}
+
 static inline void native_load_gs_index(unsigned int selector)
 {
-	unsigned long flags;
+	if (cpu_feature_enabled(X86_FEATURE_LKGS)) {
+		native_lkgs(selector);
+	} else {
+		unsigned long flags;
 
-	local_irq_save(flags);
-	asm_load_gs_index(selector);
-	local_irq_restore(flags);
+		local_irq_save(flags);
+		asm_load_gs_index(selector);
+		local_irq_restore(flags);
+	}
 }
 
 #endif /* CONFIG_X86_64 */
 
+static inline void __init lkgs_init(void)
+{
+#ifdef CONFIG_PARAVIRT_XXL
+#ifdef CONFIG_X86_64
+	if (cpu_feature_enabled(X86_FEATURE_LKGS))
+		pv_ops.cpu.load_gs_index = native_lkgs;
+#endif
+#endif
+}
+
 #ifndef CONFIG_PARAVIRT_XXL
 
 static inline void load_gs_index(unsigned int selector)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 3e508f239098..d6eb4f60b47d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1939,6 +1939,7 @@ void __init identify_boot_cpu(void)
 	setup_cr_pinning();
 
 	tsx_init();
+	lkgs_init();
 }
 
 void identify_secondary_cpu(struct cpuinfo_x86 *c)