From patchwork Mon Feb 5 07:18:59 2024
From: Pawan Gupta
Date: Sun, 4 Feb 2024 23:18:59 -0800
Subject: [PATCH v7 1/6] x86/bugs: Add asm helpers for executing VERW
Message-ID: <20240204-delay-verw-v7-1-59be2d704cb2@linux.intel.com>
In-Reply-To: <20240204-delay-verw-v7-0-59be2d704cb2@linux.intel.com>

MDS mitigation requires clearing the CPU buffers before returning to
userspace. This needs to be done late in the exit-to-user path. The current
location of VERW leaves a possibility of kernel data ending up in CPU buffers
for memory accesses done after VERW, such as:

  1. Kernel data accessed by an NMI between VERW and return-to-user can
     remain in CPU buffers, since an NMI returning to the kernel does not
     execute VERW to clear CPU buffers.

  2. Alyssa reported that after VERW is executed,
     CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system call.
     Memory accesses during stack scrubbing can move kernel stack contents
     into CPU buffers.

  3. When caller-saved registers are restored after a return from a function
     executing VERW, the kernel stack accesses can remain in CPU buffers
     (since they occur after VERW).

To fix this, VERW needs to be moved very late in the exit-to-user path.

In preparation for moving VERW to entry/exit asm code, create macros that can
be used in asm. Also make VERW patching depend on a new feature flag
X86_FEATURE_CLEAR_CPU_BUF.

Reported-by: Alyssa Milburn
Suggested-by: Andrew Cooper
Suggested-by: Peter Zijlstra
Cc: stable@kernel.org
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry.S               | 22 ++++++++++++++++++++++
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/nospec-branch.h | 17 +++++++++++++++++
 3 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
index 8c8d38f0cb1d..bd8e77c5a375 100644
--- a/arch/x86/entry/entry.S
+++ b/arch/x86/entry/entry.S
@@ -6,6 +6,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 
 .pushsection .noinstr.text, "ax"
 
@@ -20,3 +23,22 @@ SYM_FUNC_END(entry_ibpb)
 EXPORT_SYMBOL_GPL(entry_ibpb);
 
 .popsection
+
+/*
+ * Defines the VERW operand that is disguised as entry code so that
+ * it can be referenced with KPTI enabled. This ensures VERW can be
+ * used late in exit-to-user path after page tables are switched.
+ */
+.pushsection .entry.text, "ax"
+
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_START_NOALIGN(mds_verw_sel)
+	UNWIND_HINT_UNDEFINED
+	ANNOTATE_NOENDBR
+	.word __KERNEL_DS
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_END(mds_verw_sel);
+/* For KVM */
+EXPORT_SYMBOL_GPL(mds_verw_sel);
+
+.popsection

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index fdf723b6f6d0..2b62cdd8dd12 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -95,7 +95,7 @@
 #define X86_FEATURE_SYSENTER32		( 3*32+15) /* "" sysenter in IA32 userspace */
 #define X86_FEATURE_REP_GOOD		( 3*32+16) /* REP microcode works well */
 #define X86_FEATURE_AMD_LBR_V2		( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
-/* FREE, was #define X86_FEATURE_LFENCE_RDTSC		( 3*32+18) "" LFENCE synchronizes RDTSC */
+#define X86_FEATURE_CLEAR_CPU_BUF	( 3*32+18) /* "" Clear CPU buffers using VERW */
 #define X86_FEATURE_ACC_POWER		( 3*32+19) /* AMD Accumulated Power Mechanism */
 #define X86_FEATURE_NOPL		( 3*32+20) /* The NOPL (0F 1F) instructions */
 #define X86_FEATURE_ALWAYS		( 3*32+21) /* "" Always-present feature */

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 262e65539f83..ec85dfe67123 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -315,6 +315,21 @@
 #endif
 .endm
 
+/*
+ * Macros to execute VERW instruction that mitigate transient data sampling
+ * attacks such as MDS. On affected systems a microcode update overloaded VERW
+ * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+ *
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+.macro EXEC_VERW
+	verw _ASM_RIP(mds_verw_sel)
+.endm
+
+.macro CLEAR_CPU_BUFFERS
+	ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
+.endm
+
 #else /* __ASSEMBLY__ */
 
 #define ANNOTATE_RETPOLINE_SAFE \
@@ -536,6 +551,8 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
 DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
 
+extern u16 mds_verw_sel;
+
 #include
 
 /**
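
The note in the hunk above — only the memory-operand form of VERW triggers the
microcode-backed buffer clear, and only EFLAGS.ZF is clobbered — is easy to
see in isolation. Below is a minimal user-space sketch, not part of this
patch: it feeds VERW the current %ds selector instead of the kernel's
mds_verw_sel word, so it only demonstrates the instruction form, not the
mitigation itself.

/*
 * Minimal user-space sketch of the memory-operand VERW form used by
 * EXEC_VERW. Illustration only: the current %ds selector stands in for
 * the kernel's mds_verw_sel/__KERNEL_DS word, and whether the
 * microcode-backed buffer clear actually happens depends on the CPU.
 */
#include <stdint.h>
#include <stdio.h>

static inline void verw_mem_operand(void)
{
	uint16_t sel;

	/* Use the current data segment selector as the VERW operand. */
	__asm__ volatile("mov %%ds, %0" : "=r" (sel));

	/* Memory-operand form; only clobbers EFLAGS.ZF. */
	__asm__ volatile("verw %[sel]" : : [sel] "m" (sel) : "cc");
}

int main(void)
{
	verw_mem_operand();
	printf("Executed VERW with a memory operand\n");
	return 0;
}

In the kernel the operand is mds_verw_sel, a __KERNEL_DS word placed in
.entry.text so it stays mapped after the KPTI page-table switch.
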
From patchwork Mon Feb 5 07:19:16 2024
From: Pawan Gupta
Date: Sun, 4 Feb 2024 23:19:16 -0800
Subject: [PATCH v7 2/6] x86/entry_64: Add VERW just before userspace transition
Message-ID: <20240204-delay-verw-v7-2-59be2d704cb2@linux.intel.com>
In-Reply-To: <20240204-delay-verw-v7-0-59be2d704cb2@linux.intel.com>

The mitigation for MDS is to use the VERW instruction to clear any secrets in
CPU buffers. Any memory accesses after VERW execution can still remain in CPU
buffers. It is safer to execute VERW late in the return-to-user path to
minimize the window in which kernel data can end up in CPU buffers. There are
not many kernel secrets to be had after SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after user register state is
restored. This helps minimize the chances of kernel data ending up in CPU
buffers after executing VERW. Note that the mitigation at the new location is
not yet enabled.

Corner case not handled
=======================
Interrupts returning to the kernel don't clear CPU buffers, since the
exit-to-user path is expected to do that anyway. But there could be a case
when an NMI is generated in the kernel after the exit-to-user path has
already cleared the buffers. This case is not handled, and an NMI returning
to the kernel does not clear CPU buffers, because:

  1. It is rare to get an NMI after VERW, but before returning to userspace.
  2. For an unprivileged user, there is no known way to make that NMI less
     rare or target it.
  3. It would take a large number of these precisely-timed NMIs to mount an
     actual attack. There's presumably not enough bandwidth.
  4. The NMI in question occurs after a VERW, i.e. when user state is
     restored and most interesting data is already scrubbed. What's left is
     only the data that the NMI touches, and that may or may not be of any
     interest.

Suggested-by: Dave Hansen
Cc: stable@kernel.org
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S        | 11 +++++++++++
 arch/x86/entry/entry_64_compat.S |  1 +
 2 files changed, 12 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index c40f89ab1b4c..9bb485977629 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -161,6 +161,7 @@ syscall_return_via_sysret:
 SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretq
 SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
@@ -573,6 +574,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 .Lswapgs_and_iret:
 	swapgs
+	CLEAR_CPU_BUFFERS
 	/* Assert that the IRET frame indicates user mode. */
 	testb	$3, 8(%rsp)
 	jnz	.Lnative_iret
@@ -723,6 +725,8 @@ native_irq_return_ldt:
 	 */
 	popq	%rax				/* Restore user RAX */
 
+	CLEAR_CPU_BUFFERS
+
 	/*
 	 * RSP now points to an ordinary IRET frame, except that the page
 	 * is read-only and RSP[31:16] are preloaded with the userspace
@@ -1449,6 +1453,12 @@ nmi_restore:
 	std
 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
 
+	/*
+	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+	 * NMI in kernel after user state is restored. For an unprivileged user
+	 * these conditions are hard to meet.
+	 */
+
 	/*
 	 * iretq reads the "iret" frame and exits the NMI stack in a
 	 * single instruction. We are returning to kernel mode, so this
@@ -1466,6 +1476,7 @@ SYM_CODE_START(entry_SYSCALL32_ignore)
 	UNWIND_HINT_END_OF_STACK
 	ENDBR
 	mov	$-ENOSYS, %eax
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_CODE_END(entry_SYSCALL32_ignore)

diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index de94e2e84ecc..eabf48c4d4b4 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -270,6 +270,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
 	xorl	%r9d, %r9d
 	xorl	%r10d, %r10d
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
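
Whether the MDS mitigation is active on a running kernel is reported through
sysfs. A small check like the one below (an ordinary sysfs read, nothing
specific to this series) prints the kernel's view of the MDS mitigation
state:

/* Print the kernel's reported MDS mitigation status from sysfs. */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/devices/system/cpu/vulnerabilities/mds";
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("MDS: %s", line);
	fclose(f);
	return 0;
}

The same directory carries entries for the related TAA (tsx_async_abort) and
MMIO Stale Data issues that this series also touches.
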
From patchwork Mon Feb 5 07:19:33 2024
From: Pawan Gupta
Date: Sun, 4 Feb 2024 23:19:33 -0800
Subject: [PATCH v7 3/6] x86/entry_32: Add VERW just before userspace transition
Message-ID: <20240204-delay-verw-v7-3-59be2d704cb2@linux.intel.com>
In-Reply-To: <20240204-delay-verw-v7-0-59be2d704cb2@linux.intel.com>

As done for entry_64, add support for executing VERW late in the exit-to-user
path for 32-bit mode.

Cc: stable@kernel.org
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_32.S | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index c73047bf9f4b..fba427646805 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -885,6 +885,7 @@ SYM_FUNC_START(entry_SYSENTER_32)
 	BUG_IF_WRONG_CR3 no_user_check=1
 	popfl
 	popl	%eax
+	CLEAR_CPU_BUFFERS
 
 	/*
 	 * Return back to the vDSO, which will pop ecx and edx.
@@ -954,6 +955,7 @@ restore_all_switch_stack:
 
 	/* Restore user state */
 	RESTORE_REGS pop=4			# skip orig_eax/error_code
+	CLEAR_CPU_BUFFERS
 .Lirq_return:
 	/*
 	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
@@ -1146,6 +1148,7 @@ SYM_CODE_START(asm_exc_nmi)
 
 	/* Not on SYSENTER stack. */
 	call	exc_nmi
+	CLEAR_CPU_BUFFERS
 	jmp	.Lnmi_return
 
 .Lnmi_from_sysenter_stack:
From patchwork Mon Feb 5 07:19:49 2024
From: Pawan Gupta
Date: Sun, 4 Feb 2024 23:19:49 -0800
Subject: [PATCH v7 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
Message-ID: <20240204-delay-verw-v7-4-59be2d704cb2@linux.intel.com>
In-Reply-To: <20240204-delay-verw-v7-0-59be2d704cb2@linux.intel.com>

The VERW mitigation at exit-to-user is enabled via a static branch,
mds_user_clear. This static branch is never toggled after boot and can be
safely replaced with an ALTERNATIVE(), which is convenient to use in asm.

Switch to ALTERNATIVE() to use the VERW mitigation late in the exit-to-user
path. Also remove the now-redundant VERW in exc_nmi() and
arch_exit_to_user_mode().

Cc: stable@kernel.org
Signed-off-by: Pawan Gupta
---
 Documentation/arch/x86/mds.rst       | 38 +++++++++++++++++++++++++-----------
 arch/x86/include/asm/entry-common.h  |  1 -
 arch/x86/include/asm/nospec-branch.h | 12 ------------
 arch/x86/kernel/cpu/bugs.c           | 15 ++++++--------
 arch/x86/kernel/nmi.c                |  3 ---
 arch/x86/kvm/vmx/vmx.c               |  2 +-
 6 files changed, 34 insertions(+), 37 deletions(-)

diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
index e73fdff62c0a..c58c72362911 100644
--- a/Documentation/arch/x86/mds.rst
+++ b/Documentation/arch/x86/mds.rst
@@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:
 
    mds_clear_cpu_buffers()
 
+Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path.
+Other than CFLAGS.ZF, this macro doesn't clobber any registers.
+
 The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
 (idle) transitions.
 
@@ -138,17 +141,30 @@ Mitigation points
    When transitioning from kernel to user space the CPU buffers are flushed
    on affected CPUs when the mitigation is not disabled on the kernel
-   command line. The migitation is enabled through the static key
-   mds_user_clear.
-
-   The mitigation is invoked in prepare_exit_to_usermode() which covers
-   all but one of the kernel to user space transitions.  The exception
-   is when we return from a Non Maskable Interrupt (NMI), which is
-   handled directly in do_nmi().
-
-   (The reason that NMI is special is that prepare_exit_to_usermode() can
-   enable IRQs.  In NMI context, NMIs are blocked, and we don't want to
-   enable IRQs with NMIs blocked.)
+   command line. The mitigation is enabled through the feature flag
+   X86_FEATURE_CLEAR_CPU_BUF.
+
+   The mitigation is invoked just before transitioning to userspace after
+   user registers are restored. This is done to minimize the window in
+   which kernel data could be accessed after VERW e.g. via an NMI after
+   VERW.
+
+   **Corner case not handled**
+   Interrupts returning to kernel don't clear CPUs buffers since the
+   exit-to-user path is expected to do that anyways. But, there could be
+   a case when an NMI is generated in kernel after the exit-to-user path
+   has cleared the buffers. This case is not handled and NMI returning to
+   kernel don't clear CPU buffers because:
+
+   1. It is rare to get an NMI after VERW, but before returning to userspace.
+   2. For an unprivileged user, there is no known way to make that NMI
+      less rare or target it.
+   3. It would take a large number of these precisely-timed NMIs to mount
+      an actual attack. There's presumably not enough bandwidth.
+   4. The NMI in question occurs after a VERW, i.e. when user state is
+      restored and most interesting data is already scrubbed. Whats left
+      is only the data that NMI touches, and that may or may not be of
+      any interest.
 
 
 2. C-State transition

diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce8f50192ae3..7e523bb3d2d3 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -91,7 +91,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 
 static __always_inline void arch_exit_to_user_mode(void)
 {
-	mds_user_clear_cpu_buffers();
 	amd_clear_divider();
 }
 #define arch_exit_to_user_mode arch_exit_to_user_mode

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index ec85dfe67123..17dfe028e95e 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -544,7 +544,6 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
@@ -578,17 +577,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
 	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
 }
 
-/**
- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
- *
- * Clear CPU buffers if the corresponding static key is enabled
- */
-static __always_inline void mds_user_clear_cpu_buffers(void)
-{
-	if (static_branch_likely(&mds_user_clear))
-		mds_clear_cpu_buffers();
-}
-
 /**
  * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
  *

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bb0ab8466b91..48d049cd74e7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -111,9 +111,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 /* Control unconditional IBPB in switch_mm() */
 DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-/* Control MDS CPU buffer clear before returning to user space */
-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
-EXPORT_SYMBOL_GPL(mds_user_clear);
 /* Control MDS CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 EXPORT_SYMBOL_GPL(mds_idle_clear);
@@ -252,7 +249,7 @@ static void __init mds_select_mitigation(void)
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
 
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
@@ -356,7 +353,7 @@ static void __init taa_select_mitigation(void)
 	 * For guests that can't determine whether the correct microcode is
 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
 	 */
-	static_branch_enable(&mds_user_clear);
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
 		cpu_smt_disable(false);
@@ -424,7 +421,7 @@ static void __init mmio_select_mitigation(void)
 	 */
 	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
 					      boot_cpu_has(X86_FEATURE_RTM)))
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 	else
 		static_branch_enable(&mmio_stale_data_clear);
@@ -484,12 +481,12 @@ static void __init md_clear_update_mitigation(void)
 	if (cpu_mitigations_off())
 		return;
 
-	if (!static_key_enabled(&mds_user_clear))
+	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
 		goto out;
 
 	/*
-	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
-	 * mitigation, if necessary.
+	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
+	 * Stale Data mitigation, if necessary.
 	 */
 	if (mds_mitigation == MDS_MITIGATION_OFF &&
 	    boot_cpu_has_bug(X86_BUG_MDS)) {

diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 17e955ab69fe..3082cf24b69e 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -563,9 +563,6 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
 	}
 	if (this_cpu_dec_return(nmi_state))
 		goto nmi_restart;
-
-	if (user_mode(regs))
-		mds_user_clear_cpu_buffers();
 }
 
 #if IS_ENABLED(CONFIG_KVM_INTEL)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e262bc2ba4e5..b551de3ec0bc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7227,7 +7227,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (static_branch_unlikely(&mds_user_clear))
+	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
 		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
 		mds_clear_cpu_buffers();
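
The reasoning behind this patch — the enable decision is made once at boot and
never changes, so a per-exit conditional adds nothing — can be modelled
outside the kernel. The sketch below is plain user-space C, not kernel code;
the function pointer and the resolve-at-startup helper are illustrative
stand-ins for alternatives patching, which rewrites the instructions in place
rather than calling through a pointer:

/*
 * User-space model of "decide once, then run straight-line code", the
 * idea behind replacing the mds_user_clear static key with ALTERNATIVE().
 * Names here are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

static void clear_cpu_buffers(void) { puts("VERW-equivalent work"); }
static void no_op(void)             { }

/* Resolved exactly once, like alternatives patching at boot. */
static void (*exit_to_user_clear)(void) = no_op;

static void boot_time_setup(bool mitigation_needed)
{
	exit_to_user_clear = mitigation_needed ? clear_cpu_buffers : no_op;
}

static void exit_to_user(void)
{
	/* No per-exit flag test: the choice was baked in at "boot". */
	exit_to_user_clear();
}

int main(void)
{
	boot_time_setup(true);
	exit_to_user();
	return 0;
}

In the kernel the same effect comes from setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF)
plus ALTERNATIVE patching, so the exit path ends up containing either the VERW
or nothing at all.
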
From patchwork Mon Feb 5 07:20:23 2024
From: Pawan Gupta
Date: Sun, 4 Feb 2024 23:20:23 -0800
Subject: [PATCH v7 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation
Message-ID: <20240204-delay-verw-v7-6-59be2d704cb2@linux.intel.com>
In-Reply-To: <20240204-delay-verw-v7-0-59be2d704cb2@linux.intel.com>

During VMentry, VERW is executed to mitigate MDS. After VERW, any memory
access, such as a register push onto the stack, may put host data into
MDS-affected CPU buffers.
A guest can then use MDS to sample host data. Although the likelihood of
secrets surviving in registers at the current VERW callsite is low, it can't
be ruled out. Harden the MDS mitigation by moving the VERW mitigation late
into the VMentry path.

Note that VERW for the MMIO Stale Data mitigation is unchanged because of the
complexity of per-guest conditional VERW, which is not easy to handle that
late in asm with no GPRs available. If the CPU is also affected by MDS, VERW
is unconditionally executed late in asm regardless of the guest having MMIO
access.

Cc: stable@kernel.org
Signed-off-by: Pawan Gupta
Acked-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmenter.S |  3 +++
 arch/x86/kvm/vmx/vmx.c     | 20 ++++++++++++++++----
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index ef7cfbad4d57..2bfbf758d061 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -161,6 +161,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX. This kills the @regs pointer! */
 	mov	VCPU_RAX(%_ASM_AX), %_ASM_AX
 
+	/* Clobbers EFLAGS.ZF */
+	CLEAR_CPU_BUFFERS
+
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc	.Lvmlaunch
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b551de3ec0bc..0ec71f935ed2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -388,7 +388,16 @@ static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)
 
 static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 {
-	vmx->disable_fb_clear = (host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
+	/*
+	 * Disable VERW's behavior of clearing CPU buffers for the guest if the
+	 * CPU isn't affected by MDS/TAA, and the host hasn't forcefully enabled
+	 * the mitigation. Disabling the clearing behavior provides a
+	 * performance boost for guests that aren't aware that manually clearing
+	 * CPU buffers is unnecessary, at the cost of MSR accesses on VM-Entry
+	 * and VM-Exit.
+	 */
+	vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
+				(host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
 				!boot_cpu_has_bug(X86_BUG_MDS) &&
 				!boot_cpu_has_bug(X86_BUG_TAA);
 
@@ -7224,11 +7233,14 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	guest_state_enter_irqoff();
 
-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+	/*
+	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
+	 * mitigation for MDS is done late in VMentry and is still
+	 * executed in spite of L1D Flush. This is because an extra VERW
+	 * should not matter much after the big hammer L1D Flush.
+	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
-		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
 		mds_clear_cpu_buffers();
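
The "/* Clobbers EFLAGS.ZF */" comment above matters because the very next
instruction, jnc, consumes CF rather than ZF. A user-space check along these
lines (x86-64 only, with the current %ds selector standing in for
mds_verw_sel) illustrates that VERW modifies ZF but leaves CF untouched:

/*
 * Check that VERW only touches ZF and leaves CF intact, which is why
 * CLEAR_CPU_BUFFERS can sit right before the "jnc .Lvmlaunch" test.
 * Illustrative only; not part of the patch series.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t sel;
	uint64_t flags;

	__asm__ volatile("mov %%ds, %0" : "=r" (sel));

	__asm__ volatile(
		"stc\n\t"		/* force CF = 1                  */
		"verw %[sel]\n\t"	/* writes ZF, must preserve CF   */
		"pushfq\n\t"
		"popq %[flags]"
		: [flags] "=r" (flags)
		: [sel] "m" (sel)
		: "cc");

	printf("CF=%llu ZF=%llu\n",
	       (unsigned long long)(flags & 1),		/* CF is bit 0 */
	       (unsigned long long)((flags >> 6) & 1));	/* ZF is bit 6 */
	return 0;
}

If CF were not preserved across VERW, CLEAR_CPU_BUFFERS could not be placed
between the VMX_RUN_VMRESUME bit test and the jnc that consumes its result.
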