Message ID | 20231020-delay-verw-v1-2-cff54096326d@linux.intel.com
State | New
From | Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject | [PATCH 2/6] x86/entry_64: Add VERW just before userspace transition
Date | Fri, 20 Oct 2023 13:45:03 -0700
Series | Delay VERW
Commit Message
Pawan Gupta
Oct. 20, 2023, 8:45 p.m. UTC
The mitigation for MDS is to use the VERW instruction to clear any secrets
from the CPU buffers. Data touched by memory accesses executed after VERW
can still remain in the CPU buffers, so it is safer to execute VERW late in
the return-to-user path to minimize the window in which kernel data can end
up in the CPU buffers. There are not many kernel secrets to be had after
SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after the user register state
is restored. This helps minimize the chances of kernel data ending up in
the CPU buffers after VERW has executed.

Note that the mitigation at the new location is not yet enabled.
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/entry/entry_64.S | 14 ++++++++++++++
arch/x86/entry/entry_64_compat.S | 2 ++
2 files changed, 16 insertions(+)
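
[Editorial note: the USER_CLEAR_CPU_BUFFERS macro used below is introduced in patch 1 of this series and is not part of this diff. As a rough sketch of the idea only — the ALTERNATIVE form, the feature-flag name X86_FEATURE_CLEAR_CPU_BUF, and the operand symbol mds_verw_sel are assumptions for illustration, not the definition from patch 1 — it boils down to a VERW whose memory operand references a valid, readable selector, patched in only when the mitigation is enabled:]

	/*
	 * Illustrative sketch (names hypothetical; assumes <asm/alternative.h>).
	 * VERW with a memory operand clears CPU buffers on affected CPUs.
	 * mds_verw_sel stands in for a kernel symbol holding a valid, readable
	 * segment selector; the feature bit gates the mitigation.
	 */
	.macro USER_CLEAR_CPU_BUFFERS
		ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
	.endm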
Comments
On Fri, Oct 20, 2023 at 01:45:03PM -0700, Pawan Gupta wrote:
> +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> +	USER_CLEAR_CPU_BUFFERS
> +
> 	jmp	.Lnative_iret
>
>
> @@ -774,6 +780,9 @@ native_irq_return_ldt:
> 	 */
> 	popq	%rax	/* Restore user RAX */
>
> +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> +	USER_CLEAR_CPU_BUFFERS
> +

I'm thinking the comments add unnecessary noise here. The macro name is
self-documenting enough.

The detail about what mitigations are being done can go above the macro
definition itself, which the reader can refer to if they want more detail
about what the macro is doing and why.

Speaking of the macro name, I think just "CLEAR_CPU_BUFFERS" is sufficient.
The "USER_" prefix makes it harder to read IMO.
On Fri, Oct 20, 2023 at 01:45:03PM -0700, Pawan Gupta wrote:
> @@ -663,6 +665,10 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
> 	/* Restore RDI. */
> 	popq	%rdi
> 	swapgs
> +
> +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> +	USER_CLEAR_CPU_BUFFERS
> +
> 	jmp	.Lnative_iret
>
>
> @@ -774,6 +780,9 @@ native_irq_return_ldt:
> 	 */
> 	popq	%rax	/* Restore user RAX */
>
> +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> +	USER_CLEAR_CPU_BUFFERS
> +

Can the above two USER_CLEAR_CPU_BUFFERS be replaced with a single one just
above native_irq_return_iret? Otherwise the native_irq_return_ldt case ends
up getting two VERWs.

> 	/*
> 	 * RSP now points to an ordinary IRET frame, except that the page
> 	 * is read-only and RSP[31:16] are preloaded with the userspace
> @@ -1502,6 +1511,9 @@ nmi_restore:
> 	std
> 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
>
> +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> +	USER_CLEAR_CPU_BUFFERS
> +
> 	/*
> 	 * iretq reads the "iret" frame and exits the NMI stack in a
> 	 * single instruction. We are returning to kernel mode, so this

This isn't needed here. This is the NMI return-to-kernel path. The NMI
return-to-user path is already mitigated as it goes through
swapgs_restore_regs_and_return_to_usermode.
On Mon, Oct 23, 2023 at 11:22:11AM -0700, Josh Poimboeuf wrote:
> On Fri, Oct 20, 2023 at 01:45:03PM -0700, Pawan Gupta wrote:
> > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > +	USER_CLEAR_CPU_BUFFERS
> > +
> > 	jmp	.Lnative_iret
> >
> >
> > @@ -774,6 +780,9 @@ native_irq_return_ldt:
> > 	 */
> > 	popq	%rax	/* Restore user RAX */
> >
> > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > +	USER_CLEAR_CPU_BUFFERS
> > +
>
> I'm thinking the comments add unnecessary noise here. The macro name is
> self-documenting enough.
>
> The detail about what mitigations are being done can go above the macro
> definition itself, which the reader can refer to if they want more
> detail about what the macro is doing and why.

Sure, I will move the comments to definition.

> Speaking of the macro name, I think just "CLEAR_CPU_BUFFERS" is
> sufficient. The "USER_" prefix makes it harder to read IMO.

Ok, will drop "USER_".
On 10/23/23 11:22, Josh Poimboeuf wrote:
> On Fri, Oct 20, 2023 at 01:45:03PM -0700, Pawan Gupta wrote:
>> +	/* Mitigate CPU data sampling attacks .e.g. MDS */
>> +	USER_CLEAR_CPU_BUFFERS
>> +
>> 	jmp	.Lnative_iret
>>
>>
>> @@ -774,6 +780,9 @@ native_irq_return_ldt:
>> 	 */
>> 	popq	%rax	/* Restore user RAX */
>>
>> +	/* Mitigate CPU data sampling attacks .e.g. MDS */
>> +	USER_CLEAR_CPU_BUFFERS
>> +
>
> I'm thinking the comments add unnecessary noise here. The macro name is
> self-documenting enough.
>
> The detail about what mitigations are being done can go above the macro
> definition itself, which the reader can refer to if they want more
> detail about what the macro is doing and why.
>
> Speaking of the macro name, I think just "CLEAR_CPU_BUFFERS" is
> sufficient. The "USER_" prefix makes it harder to read IMO.

Yes, please. The "USER_" prefix should be reserved for things that are
uniquely for the unambiguous return-to-userspace paths.
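
[Editorial note: taken together, the feedback above would leave the call sites with just the bare macro invocation and move the "what and why" comment above the macro definition. A sketch of the suggested shape, not the posted v2 code, using the swapgs_restore_regs_and_return_to_usermode hunk as the example:]

	/* ... user register state already restored above ... */
	popq	%rdi
	swapgs

	CLEAR_CPU_BUFFERS		/* explanatory comment lives at the definition */

	jmp	.Lnative_iret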
On Mon, Oct 23, 2023 at 11:35:21AM -0700, Josh Poimboeuf wrote:
> On Fri, Oct 20, 2023 at 01:45:03PM -0700, Pawan Gupta wrote:
> > @@ -663,6 +665,10 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
> > 	/* Restore RDI. */
> > 	popq	%rdi
> > 	swapgs
> > +
> > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > +	USER_CLEAR_CPU_BUFFERS
> > +
> > 	jmp	.Lnative_iret
> >
> >
> > @@ -774,6 +780,9 @@ native_irq_return_ldt:
> > 	 */
> > 	popq	%rax	/* Restore user RAX */
> >
> > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > +	USER_CLEAR_CPU_BUFFERS
> > +
>
> Can the above two USER_CLEAR_CPU_BUFFERS be replaced with a single one
> just above native_irq_return_iret? Otherwise the native_irq_return_ldt
> case ends up getting two VERWs.

Wouldn't that make interrupts returning to kernel also execute VERWs?

  idtentry_body
    error_return
      restore_regs_and_return_to_kernel
        verw

native_irq_return_ldt doesn't look to be a common case. Anyways, I will see
how to remove the extra VERW.

> > 	/*
> > 	 * RSP now points to an ordinary IRET frame, except that the page
> > 	 * is read-only and RSP[31:16] are preloaded with the userspace
> > @@ -1502,6 +1511,9 @@ nmi_restore:
> > 	std
> > 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
> >
> > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > +	USER_CLEAR_CPU_BUFFERS
> > +
> > 	/*
> > 	 * iretq reads the "iret" frame and exits the NMI stack in a
> > 	 * single instruction. We are returning to kernel mode, so this
>
> This isn't needed here. This is the NMI return-to-kernel path.

Yes, the VERW here can be omitted. But probably need to check if an NMI
occurring between VERW and ring transition will still execute VERW after
the NMI.
On Mon, Oct 23, 2023 at 02:04:10PM -0700, Pawan Gupta wrote:
> On Mon, Oct 23, 2023 at 11:35:21AM -0700, Josh Poimboeuf wrote:
> > On Fri, Oct 20, 2023 at 01:45:03PM -0700, Pawan Gupta wrote:
> > > @@ -663,6 +665,10 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
> > > 	/* Restore RDI. */
> > > 	popq	%rdi
> > > 	swapgs
> > > +
> > > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > > +	USER_CLEAR_CPU_BUFFERS
> > > +
> > > 	jmp	.Lnative_iret
> > >
> > >
> > > @@ -774,6 +780,9 @@ native_irq_return_ldt:
> > > 	 */
> > > 	popq	%rax	/* Restore user RAX */
> > >
> > > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > > +	USER_CLEAR_CPU_BUFFERS
> > > +
> >
> > Can the above two USER_CLEAR_CPU_BUFFERS be replaced with a single one
> > just above native_irq_return_iret? Otherwise the native_irq_return_ldt
> > case ends up getting two VERWs.
>
> Wouldn't that make interrupts returning to kernel also execute VERWs?
>
>   idtentry_body
>     error_return
>       restore_regs_and_return_to_kernel
>         verw
>
> native_irq_return_ldt doesn't look to be a common case. Anyways, I will
> see how to remove the extra VERW.

Ah, right.

> > > 	/*
> > > 	 * RSP now points to an ordinary IRET frame, except that the page
> > > 	 * is read-only and RSP[31:16] are preloaded with the userspace
> > > @@ -1502,6 +1511,9 @@ nmi_restore:
> > > 	std
> > > 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
> > >
> > > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > > +	USER_CLEAR_CPU_BUFFERS
> > > +
> > > 	/*
> > > 	 * iretq reads the "iret" frame and exits the NMI stack in a
> > > 	 * single instruction. We are returning to kernel mode, so this
> >
> > This isn't needed here. This is the NMI return-to-kernel path.
>
> Yes, the VERW here can be omitted. But probably need to check if an NMI
> occurring between VERW and ring transition will still execute VERW after
> the NMI.

That window does exist, though I'm not sure it's worth worrying about.
On Mon, Oct 23, 2023 at 02:47:52PM -0700, Josh Poimboeuf wrote:
> > > > 	/*
> > > > 	 * RSP now points to an ordinary IRET frame, except that the page
> > > > 	 * is read-only and RSP[31:16] are preloaded with the userspace
> > > > @@ -1502,6 +1511,9 @@ nmi_restore:
> > > > 	std
> > > > 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
> > > >
> > > > +	/* Mitigate CPU data sampling attacks .e.g. MDS */
> > > > +	USER_CLEAR_CPU_BUFFERS
> > > > +
> > > > 	/*
> > > > 	 * iretq reads the "iret" frame and exits the NMI stack in a
> > > > 	 * single instruction. We are returning to kernel mode, so this
> > >
> > > This isn't needed here. This is the NMI return-to-kernel path.
> >
> > Yes, the VERW here can be omitted. But probably need to check if an NMI
> > occurring between VERW and ring transition will still execute VERW after
> > the NMI.
>
> That window does exist, though I'm not sure it's worth worrying about.

I am in favor of omitting the VERW here, unless someone objects with a
rationale. IMO, precisely timing the NMIs in such a narrow window is
impractical.
On 10/23/23 15:30, Pawan Gupta wrote:
>>>>> 	/*
>>>>> 	 * iretq reads the "iret" frame and exits the NMI stack in a
>>>>> 	 * single instruction. We are returning to kernel mode, so this
>>>> This isn't needed here. This is the NMI return-to-kernel path.
>>> Yes, the VERW here can be omitted. But probably need to check if an NMI
>>> occurring between VERW and ring transition will still execute VERW after
>>> the NMI.
>> That window does exist, though I'm not sure it's worth worrying about.
> I am in favor of omitting the VERW here, unless someone objects with a
> rationale. IMO, precisely timing the NMIs in such a narrow window is
> impractical.

I'd bet that given the right PMU event you could make this pretty reliable.
But normal users can't do that by default. That leaves the NMI watchdog
which (I bet) you can still time, but which is pretty low frequency.

Are there any other NMI sources that a normal user can cause problems with?

Let's at least leave a marker in here that folks can grep for:

	/* Skip CLEAR_CPU_BUFFERS since it will rarely help */

and some nice logic in the changelog that they can dig out if need be.

But, basically it sounds like the logic is:

1. It's rare to get an NMI after VERW but before returning to userspace
2. There is no known way to make that NMI less rare or target it
3. It would take a large number of these precisely-timed NMIs to mount
   an actual attack. There's presumably not enough bandwidth.

Anything else?
On Mon, Oct 23, 2023 at 03:45:41PM -0700, Dave Hansen wrote:
> On 10/23/23 15:30, Pawan Gupta wrote:
> >>>>> 	/*
> >>>>> 	 * iretq reads the "iret" frame and exits the NMI stack in a
> >>>>> 	 * single instruction. We are returning to kernel mode, so this
> >>>> This isn't needed here. This is the NMI return-to-kernel path.
> >>> Yes, the VERW here can be omitted. But probably need to check if an NMI
> >>> occurring between VERW and ring transition will still execute VERW after
> >>> the NMI.
> >> That window does exist, though I'm not sure it's worth worrying about.
> > I am in favor of omitting the VERW here, unless someone objects with a
> > rationale. IMO, precisely timing the NMIs in such a narrow window is
> > impractical.
>
> I'd bet that given the right PMU event you could make this pretty
> reliable. But normal users can't do that by default. That leaves the
> NMI watchdog which (I bet) you can still time, but which is pretty low
> frequency.
>
> Are there any other NMI sources that a normal user can cause problems with?

Generating recoverable parity check errors using rowhammer? But, that's
probably going too far for very little gain.

> Let's at least leave a marker in here that folks can grep for:
>
> 	/* Skip CLEAR_CPU_BUFFERS since it will rarely help */

Sure.

> and some nice logic in the changelog that they can dig out if need be.
>
> But, basically it sounds like the logic is:
>
> 1. It's rare to get an NMI after VERW but before returning to userspace
> 2. There is no known way to make that NMI less rare or target it
> 3. It would take a large number of these precisely-timed NMIs to mount
>    an actual attack. There's presumably not enough bandwidth.

Thanks for this.

> Anything else?

4. The NMI in question occurs after a VERW, i.e. when user state is
restored and most interesting data is already scrubbed. What's left is only
the data that the NMI touches, and that may or may not be interesting.
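
[Editorial note: if the VERW is dropped from nmi_restore as discussed, the hunk could plausibly end up as nothing more than the grep-able marker Dave asked for. This is a sketch of that outcome, not the posted follow-up:]

	nmi_restore:
		/* ... */
		std
		movq	$0, 5*8(%rsp)		/* clear "NMI executing" */

		/* Skip CLEAR_CPU_BUFFERS since it will rarely help */

		/*
		 * iretq reads the "iret" frame and exits the NMI stack in a
		 * single instruction. We are returning to kernel mode, so this
		 * cannot result in a fault.
		 */
		iretq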
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 43606de22511..e72ac30f0714 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -223,6 +223,8 @@ syscall_return_via_sysret:
 SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
 	swapgs
+	/* Mitigate CPU data sampling attacks .e.g. MDS */
+	USER_CLEAR_CPU_BUFFERS
 	sysretq
 SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
@@ -663,6 +665,10 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 	/* Restore RDI. */
 	popq	%rdi
 	swapgs
+
+	/* Mitigate CPU data sampling attacks .e.g. MDS */
+	USER_CLEAR_CPU_BUFFERS
+
 	jmp	.Lnative_iret
 
 
@@ -774,6 +780,9 @@ native_irq_return_ldt:
 	 */
 	popq	%rax				/* Restore user RAX */
 
+	/* Mitigate CPU data sampling attacks .e.g. MDS */
+	USER_CLEAR_CPU_BUFFERS
+
 	/*
 	 * RSP now points to an ordinary IRET frame, except that the page
 	 * is read-only and RSP[31:16] are preloaded with the userspace
@@ -1502,6 +1511,9 @@ nmi_restore:
 	std
 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
 
+	/* Mitigate CPU data sampling attacks .e.g. MDS */
+	USER_CLEAR_CPU_BUFFERS
+
 	/*
 	 * iretq reads the "iret" frame and exits the NMI stack in a
 	 * single instruction. We are returning to kernel mode, so this
@@ -1520,6 +1532,8 @@ SYM_CODE_START(ignore_sysret)
 	UNWIND_HINT_END_OF_STACK
 	ENDBR
 	mov	$-ENOSYS, %eax
+	/* Mitigate CPU data sampling attacks .e.g. MDS */
+	USER_CLEAR_CPU_BUFFERS
 	sysretl
 SYM_CODE_END(ignore_sysret)
 #endif
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 70150298f8bd..d2ccd9148239 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -271,6 +271,8 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
 	xorl	%r9d, %r9d
 	xorl	%r10d, %r10d
 	swapgs
+	/* Mitigate CPU data sampling attacks .e.g. MDS */
+	USER_CLEAR_CPU_BUFFERS
 	sysretl
 SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR