Message ID: 20231027-delay-verw-v4-1-9a3622d4bcf7@linux.intel.com
State: New
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW
Date: Fri, 27 Oct 2023 07:38:40 -0700
Series: Delay VERW
Commit Message
Pawan Gupta
Oct. 27, 2023, 2:38 p.m. UTC
MDS mitigation requires clearing the CPU buffers before returning to
user. This needs to be done late in the exit-to-user path. The current
location of VERW leaves a possibility of kernel data ending up in CPU
buffers for memory accesses done after VERW, such as:

1. Kernel data accessed by an NMI between VERW and return-to-user can
   remain in CPU buffers (since an NMI returning to the kernel does
   not execute VERW to clear CPU buffers).

2. Alyssa reported that after VERW is executed,
   CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
   call. Memory accesses during stack scrubbing can move kernel stack
   contents into CPU buffers.

3. When caller-saved registers are restored after a return from a
   function executing VERW, the kernel stack accesses can remain in
   CPU buffers (since they occur after VERW).

To fix this, VERW needs to be moved very late in the exit-to-user path.
In preparation for moving VERW to entry/exit asm code, create macros
that can be used in asm. VERW patching for these macros is keyed off a
new feature flag X86_FEATURE_CLEAR_CPU_BUF.
Reported-by: Alyssa Milburn <alyssa.milburn@intel.com>
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/entry/entry.S | 17 +++++++++++++++++
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/nospec-branch.h | 15 +++++++++++++++
3 files changed, 33 insertions(+), 1 deletion(-)
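
To make "very late in the exit-to-user path" concrete: once the later
patches in this series wire the new macro into the entry code, it is
meant to sit immediately next to the ring transition, after the last
kernel memory access. A rough sketch of a 64-bit syscall-return tail
(illustrative only; the surrounding code and exact placement are
assumptions, not part of this patch):

	POP_REGS		/* restore user register state */
	swapgs			/* back to the user GS base */
	CLEAR_CPU_BUFFERS	/* patched-in VERW; clobbers EFLAGS.ZF */
	sysretq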
Comments
On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> MDS mitigation requires clearing the CPU buffers before returning to
> user. This needs to be done late in the exit-to-user path. Current
> location of VERW leaves a possibility of kernel data ending up in CPU
> buffers for memory accesses done after VERW such as:
>
> 1. Kernel data accessed by an NMI between VERW and return-to-user can
>    remain in CPU buffers ( since NMI returning to kernel does not

Some leftover '('

>    execute VERW to clear CPU buffers.
> 2. Alyssa reported that after VERW is executed,
>    CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
>    call. Memory accesses during stack scrubbing can move kernel stack
>    contents into CPU buffers.
> 3. When caller saved registers are restored after a return from
>    function executing VERW, the kernel stack accesses can remain in
>    CPU buffers(since they occur after VERW).
>
> To fix this VERW needs to be moved very late in exit-to-user path.
>
> In preparation for moving VERW to entry/exit asm code, create macros
> that can be used in asm. Also make them depend on a new feature flag
> X86_FEATURE_CLEAR_CPU_BUF.

The macros don't depend on the feature flag - VERW patching is done
based on it.

> @@ -20,3 +23,17 @@ SYM_FUNC_END(entry_ibpb)
>  EXPORT_SYMBOL_GPL(entry_ibpb);
>
>  .popsection
> +
> +.pushsection .entry.text, "ax"
> +
> +.align L1_CACHE_BYTES, 0xcc
> +SYM_CODE_START_NOALIGN(mds_verw_sel)

That weird thing needs a comment explaining what it is for.

> +	UNWIND_HINT_UNDEFINED
> +	ANNOTATE_NOENDBR
> +	.word __KERNEL_DS
> +.align L1_CACHE_BYTES, 0xcc
> +SYM_CODE_END(mds_verw_sel);
> +/* For KVM */
> +EXPORT_SYMBOL_GPL(mds_verw_sel);
> +
> +.popsection
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 58cb9495e40f..f21fc0f12737 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -308,10 +308,10 @@
>  #define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
>  #define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
>  #define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */
> -
>  #define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */
>  #define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */
>  #define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
> +#define X86_FEATURE_CLEAR_CPU_BUF (11*32+27) /* "" Clear CPU buffers */

... using VERW

>
>  /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
>  #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index c55cc243592e..005e69f93115 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -329,6 +329,21 @@
>  #endif
>  .endm
>
> +/*
> + * Macros to execute VERW instruction that mitigate transient data sampling
> + * attacks such as MDS. On affected systems a microcode update overloaded VERW
> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> + *
> + * Note: Only the memory operand variant of VERW clears the CPU buffers.
> + */
> +.macro EXEC_VERW
> +	verw _ASM_RIP(mds_verw_sel)
> +.endm
> +
> +.macro CLEAR_CPU_BUFFERS
> +	ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
> +.endm

Why can't this simply be:

.macro CLEAR_CPU_BUFFERS
	ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
.endm

without that silly EXEC_VERW macro?
On Fri, Oct 27, 2023 at 05:32:03PM +0200, Borislav Petkov wrote:
> On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> > 1. Kernel data accessed by an NMI between VERW and return-to-user can
> >    remain in CPU buffers ( since NMI returning to kernel does not
>
> Some leftover '('

Ok.

> > In preparation for moving VERW to entry/exit asm code, create macros
> > that can be used in asm. Also make them depend on a new feature flag
> > X86_FEATURE_CLEAR_CPU_BUF.
>
> The macros don't depend on the feature flag - VERW patching is done
> based on it.

Will fix.

> > @@ -20,3 +23,17 @@ SYM_FUNC_END(entry_ibpb)
> >  EXPORT_SYMBOL_GPL(entry_ibpb);
> >
> >  .popsection
> > +
> > +.pushsection .entry.text, "ax"
> > +
> > +.align L1_CACHE_BYTES, 0xcc
> > +SYM_CODE_START_NOALIGN(mds_verw_sel)
>
> That weird thing needs a comment explaining what it is for.

Right.

> > +#define X86_FEATURE_CLEAR_CPU_BUF (11*32+27) /* "" Clear CPU buffers */
>
> ... using VERW

Ok.

> > +/*
> > + * Macros to execute VERW instruction that mitigate transient data sampling
> > + * attacks such as MDS. On affected systems a microcode update overloaded VERW
> > + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> > + *
> > + * Note: Only the memory operand variant of VERW clears the CPU buffers.
> > + */
> > +.macro EXEC_VERW
> > +	verw _ASM_RIP(mds_verw_sel)
> > +.endm
> > +
> > +.macro CLEAR_CPU_BUFFERS
> > +	ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
> > +.endm
>
> Why can't this simply be:
>
> .macro CLEAR_CPU_BUFFERS
> 	ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF

This will not work in 32-bit mode that uses the same macro.

Thanks for the review.
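
For reference, _ASM_RIP() is the helper from arch/x86/include/asm/asm.h
that hides exactly this 32-bit/64-bit difference, which is why
EXEC_VERW uses it rather than an open-coded %rip operand. Conceptually
it behaves like the sketch below (the exact spelling in asm.h may
differ):

	/* sketch of the idea behind _ASM_RIP(); see asm.h for the real definition */
	#ifdef CONFIG_X86_64
	#define _ASM_RIP(x)	x(%rip)		/* RIP-relative operand on 64-bit */
	#else
	#define _ASM_RIP(x)	x		/* absolute reference on 32-bit */
	#endif

With that, the single EXEC_VERW macro assembles to
"verw mds_verw_sel(%rip)" on 64-bit and to "verw mds_verw_sel" on
32-bit.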
On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> +.pushsection .entry.text, "ax"
> +
> +.align L1_CACHE_BYTES, 0xcc
> +SYM_CODE_START_NOALIGN(mds_verw_sel)
> +	UNWIND_HINT_UNDEFINED
> +	ANNOTATE_NOENDBR
> +	.word __KERNEL_DS
> +.align L1_CACHE_BYTES, 0xcc
> +SYM_CODE_END(mds_verw_sel);
> +/* For KVM */
> +EXPORT_SYMBOL_GPL(mds_verw_sel);
> +
> +.popsection

This is data, so why is it "CODE" in .entry.text?
On 01/12/2023 7:36 pm, Josh Poimboeuf wrote:
> On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
>> +.pushsection .entry.text, "ax"
>> +
>> +.align L1_CACHE_BYTES, 0xcc
>> +SYM_CODE_START_NOALIGN(mds_verw_sel)
>> +	UNWIND_HINT_UNDEFINED
>> +	ANNOTATE_NOENDBR
>> +	.word __KERNEL_DS
>> +.align L1_CACHE_BYTES, 0xcc
>> +SYM_CODE_END(mds_verw_sel);
>> +/* For KVM */
>> +EXPORT_SYMBOL_GPL(mds_verw_sel);
>> +
>> +.popsection
> This is data, so why is it "CODE" in .entry.text?

Because KPTI.

~Andrew
On Fri, Dec 01, 2023 at 07:39:05PM +0000, Andrew Cooper wrote:
> On 01/12/2023 7:36 pm, Josh Poimboeuf wrote:
> > On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> >> +.pushsection .entry.text, "ax"
> >> +
> >> +.align L1_CACHE_BYTES, 0xcc
> >> +SYM_CODE_START_NOALIGN(mds_verw_sel)
> >> +	UNWIND_HINT_UNDEFINED
> >> +	ANNOTATE_NOENDBR
> >> +	.word __KERNEL_DS
> >> +.align L1_CACHE_BYTES, 0xcc
> >> +SYM_CODE_END(mds_verw_sel);
> >> +/* For KVM */
> >> +EXPORT_SYMBOL_GPL(mds_verw_sel);
> >> +
> >> +.popsection
> > This is data, so why is it "CODE" in .entry.text?
>
> Because KPTI.

Urgh... Pawan please add a comment.
On Fri, Dec 01, 2023 at 12:04:42PM -0800, Josh Poimboeuf wrote:
> On Fri, Dec 01, 2023 at 07:39:05PM +0000, Andrew Cooper wrote:
> > On 01/12/2023 7:36 pm, Josh Poimboeuf wrote:
> > > On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> > >> +.pushsection .entry.text, "ax"
> > >> +
> > >> +.align L1_CACHE_BYTES, 0xcc
> > >> +SYM_CODE_START_NOALIGN(mds_verw_sel)
> > >> +	UNWIND_HINT_UNDEFINED
> > >> +	ANNOTATE_NOENDBR
> > >> +	.word __KERNEL_DS
> > >> +.align L1_CACHE_BYTES, 0xcc
> > >> +SYM_CODE_END(mds_verw_sel);
> > >> +/* For KVM */
> > >> +EXPORT_SYMBOL_GPL(mds_verw_sel);
> > >> +
> > >> +.popsection
> > > This is data, so why is it "CODE" in .entry.text?
> >
> > Because KPTI.
>
> Urgh... Pawan please add a comment.

Yes, this place needs a comment, will add.
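
Whatever wording eventually lands, the comment needs to capture both
points raised above: mds_verw_sel is data (a segment selector value for
VERW's memory operand), and it is deliberately placed in .entry.text
because that section stays mapped in the user page tables under KPTI,
so the operand remains accessible after the CR3 switch on exit to user.
A sketch of such a comment (one plausible wording, not necessarily what
was merged):

	/*
	 * Define the VERW operand that is disguised as entry code so that
	 * it can be referenced with KPTI enabled. This ensures VERW can be
	 * used late in the exit-to-user path: .entry.text is mapped into
	 * the user address space, so the memory operand stays accessible
	 * after the page tables are switched.
	 */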
diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
index bfb7bcb362bc..8dc84bb9dc0b 100644
--- a/arch/x86/entry/entry.S
+++ b/arch/x86/entry/entry.S
@@ -6,6 +6,9 @@
 #include <linux/linkage.h>
 #include <asm/export.h>
 #include <asm/msr-index.h>
+#include <asm/unwind_hints.h>
+#include <asm/segment.h>
+#include <asm/cache.h>
 
 .pushsection .noinstr.text, "ax"
 
@@ -20,3 +23,17 @@ SYM_FUNC_END(entry_ibpb)
 EXPORT_SYMBOL_GPL(entry_ibpb);
 
 .popsection
+
+.pushsection .entry.text, "ax"
+
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_START_NOALIGN(mds_verw_sel)
+	UNWIND_HINT_UNDEFINED
+	ANNOTATE_NOENDBR
+	.word __KERNEL_DS
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_END(mds_verw_sel);
+/* For KVM */
+EXPORT_SYMBOL_GPL(mds_verw_sel);
+
+.popsection
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 58cb9495e40f..f21fc0f12737 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,10 +308,10 @@
 #define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
 #define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
 #define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */
-
 #define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */
 #define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */
 #define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
+#define X86_FEATURE_CLEAR_CPU_BUF (11*32+27) /* "" Clear CPU buffers */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc243592e..005e69f93115 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -329,6 +329,21 @@
 #endif
 .endm
 
+/*
+ * Macros to execute VERW instruction that mitigate transient data sampling
+ * attacks such as MDS. On affected systems a microcode update overloaded VERW
+ * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+ *
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+.macro EXEC_VERW
+	verw _ASM_RIP(mds_verw_sel)
+.endm
+
+.macro CLEAR_CPU_BUFFERS
+	ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
+.endm
+
 #else /* __ASSEMBLY__ */
 
 #define ANNOTATE_RETPOLINE_SAFE					\
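
Note that nothing in this patch sets X86_FEATURE_CLEAR_CPU_BUF; it is a
synthetic flag, so the ALTERNATIVE sites stay empty until mitigation
selection forces the flag on. A hedged sketch of what that enabling
could look like in arch/x86/kernel/cpu/bugs.c (the real hunk lives in a
later patch of this series; the condition shown here is an assumption):

	static void __init mds_select_mitigation(void)
	{
		/* ... existing MDS mitigation selection elided ... */

		/* Patch VERW into the exit-to-user and VMentry paths */
		if (mds_mitigation != MDS_MITIGATION_OFF)
			setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
	}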