From patchwork Wed Jan 24 07:41:01 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 191460
Date: Tue, 23 Jan 2024 23:41:01 -0800
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck@intel.com, ak@linux.intel.com, tim.c.chen@linux.intel.com, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com, Pawan Gupta, Alyssa Milburn
Subject: [PATCH v6 1/6] x86/bugs: Add asm helpers for executing VERW
Message-ID: <20240123-delay-verw-v6-1-a8206baca7d3@linux.intel.com>
X-Mailer: b4 0.12.3
References: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>
In-Reply-To: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>

MDS mitigation requires clearing the CPU buffers before returning to user.
This needs to be done late in the exit-to-user path. The current location
of VERW leaves a possibility of kernel data ending up in CPU buffers for
memory accesses done after VERW, such as:

  1. Kernel data accessed by an NMI between VERW and return-to-user can
     remain in CPU buffers since an NMI returning to kernel does not
     execute VERW to clear CPU buffers.

  2. Alyssa reported that after VERW is executed,
     CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
     call. Memory accesses during stack scrubbing can move kernel stack
     contents into CPU buffers.

  3. When caller-saved registers are restored after a return from a
     function executing VERW, the kernel stack accesses can remain in
     CPU buffers (since they occur after VERW).

To fix this, VERW needs to be moved very late in the exit-to-user path.

In preparation for moving VERW to entry/exit asm code, create macros
that can be used in asm. Also make VERW patching depend on a new feature
flag X86_FEATURE_CLEAR_CPU_BUF.

Reported-by: Alyssa Milburn
Suggested-by: Andrew Cooper
Suggested-by: Peter Zijlstra
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry.S               | 22 ++++++++++++++++++++++
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/nospec-branch.h | 15 +++++++++++++++
 3 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
index 8c8d38f0cb1d..bd8e77c5a375 100644
--- a/arch/x86/entry/entry.S
+++ b/arch/x86/entry/entry.S
@@ -6,6 +6,9 @@
 #include <linux/export.h>
 #include <linux/linkage.h>
 #include <asm/msr-index.h>
+#include <asm/unwind_hints.h>
+#include <asm/segment.h>
+#include <asm/cache.h>
 
 .pushsection .noinstr.text, "ax"
 
@@ -20,3 +23,22 @@ SYM_FUNC_END(entry_ibpb)
 EXPORT_SYMBOL_GPL(entry_ibpb);
 
 .popsection
+
+/*
+ * Defines the VERW operand that is disguised as entry code so that
+ * it can be referenced with KPTI enabled. This ensures VERW can be
+ * used late in exit-to-user path after page tables are switched.
+ */
+.pushsection .entry.text, "ax"
+
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_START_NOALIGN(mds_verw_sel)
+	UNWIND_HINT_UNDEFINED
+	ANNOTATE_NOENDBR
+	.word __KERNEL_DS
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_END(mds_verw_sel);
+/* For KVM */
+EXPORT_SYMBOL_GPL(mds_verw_sel);
+
+.popsection

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 4af140cf5719..79a7e81b9458 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,10 +308,10 @@
 #define X86_FEATURE_SMBA		(11*32+21) /* "" Slow Memory Bandwidth Allocation */
 #define X86_FEATURE_BMEC		(11*32+22) /* "" Bandwidth Monitoring Event Configuration */
 #define X86_FEATURE_USER_SHSTK		(11*32+23) /* Shadow stack support for user mode applications */
-
 #define X86_FEATURE_SRSO		(11*32+24) /* "" AMD BTB untrain RETs */
 #define X86_FEATURE_SRSO_ALIAS		(11*32+25) /* "" AMD BTB untrain RETs through aliasing */
 #define X86_FEATURE_IBPB_ON_VMEXIT	(11*32+26) /* "" Issue an IBPB only on VMEXIT */
+#define X86_FEATURE_CLEAR_CPU_BUF	(11*32+27) /* "" Clear CPU buffers using VERW */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX_VNNI		(12*32+ 4) /* AVX VNNI instructions */

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index f93e9b96927a..4ea4c310db52 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -315,6 +315,21 @@
 #endif
 .endm
 
+/*
+ * Macros to execute VERW instruction that mitigate transient data sampling
+ * attacks such as MDS. On affected systems a microcode update overloaded VERW
+ * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+ *
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+.macro EXEC_VERW
+	verw _ASM_RIP(mds_verw_sel)
+.endm
+
+.macro CLEAR_CPU_BUFFERS
+	ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
+.endm
+
 #else /* __ASSEMBLY__ */
 
 #define ANNOTATE_RETPOLINE_SAFE \
From patchwork Wed Jan 24 07:41:17 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 191461
Date: Tue, 23 Jan 2024 23:41:17 -0800
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck@intel.com, ak@linux.intel.com, tim.c.chen@linux.intel.com, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com, Pawan Gupta, Dave Hansen
Subject: [PATCH v6 2/6] x86/entry_64: Add VERW just before userspace transition
Message-ID: <20240123-delay-verw-v6-2-a8206baca7d3@linux.intel.com>
X-Mailer: b4 0.12.3
References: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>
In-Reply-To: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>

Mitigation for MDS is to use VERW
instruction to clear any secrets in CPU buffers. Any memory accesses
after VERW execution can still remain in CPU buffers. It is safer to
execute VERW late in the return-to-user path to minimize the window in
which kernel data can end up in CPU buffers. There are not many kernel
secrets to be had after SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after user register state
is restored. This helps minimize the chances of kernel data ending up
in CPU buffers after executing VERW. Note that the mitigation at the
new location is not yet enabled.

Corner case not handled
=======================
Interrupts returning to kernel don't clear CPU buffers since the
exit-to-user path is expected to do that anyway. But there could be a
case when an NMI is generated in the kernel after the exit-to-user path
has cleared the buffers. This case is not handled, and NMIs returning
to kernel don't clear CPU buffers, because:

  1. It is rare to get an NMI after VERW, but before returning to
     userspace.
  2. For an unprivileged user, there is no known way to make that NMI
     less rare or to target it.
  3. It would take a large number of these precisely-timed NMIs to
     mount an actual attack. There's presumably not enough bandwidth.
  4. The NMI in question occurs after a VERW, i.e. when user state is
     restored and most interesting data is already scrubbed. What's
     left is only the data that the NMI touches, and that may or may
     not be of any interest.
Suggested-by: Dave Hansen
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S        | 11 +++++++++++
 arch/x86/entry/entry_64_compat.S |  1 +
 2 files changed, 12 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index de6469dffe3a..bdb17fad5d04 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -161,6 +161,7 @@ syscall_return_via_sysret:
 SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretq
 SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
@@ -601,6 +602,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 	/* Restore RDI. */
 	popq	%rdi
 	swapgs
+	CLEAR_CPU_BUFFERS
 	jmp	.Lnative_iret
 
@@ -712,6 +714,8 @@ native_irq_return_ldt:
 	 */
 	popq	%rax			/* Restore user RAX */
 
+	CLEAR_CPU_BUFFERS
+
 	/*
 	 * RSP now points to an ordinary IRET frame, except that the page
 	 * is read-only and RSP[31:16] are preloaded with the userspace
@@ -1438,6 +1442,12 @@ nmi_restore:
 	std
 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
 
+	/*
+	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+	 * NMI in kernel after user state is restored. For an unprivileged user
+	 * these conditions are hard to meet.
+	 */
+
 	/*
 	 * iretq reads the "iret" frame and exits the NMI stack in a
 	 * single instruction.
	 * We are returning to kernel mode, so this

@@ -1455,6 +1465,7 @@ SYM_CODE_START(entry_SYSCALL32_ignore)
 	UNWIND_HINT_END_OF_STACK
 	ENDBR
 	mov	$-ENOSYS, %eax
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_CODE_END(entry_SYSCALL32_ignore)
 
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index de94e2e84ecc..eabf48c4d4b4 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -270,6 +270,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
 	xorl	%r9d, %r9d
 	xorl	%r10d, %r10d
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
From patchwork Wed Jan 24 07:41:36 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 191462
Date: Tue, 23 Jan 2024 23:41:36 -0800
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck@intel.com, ak@linux.intel.com, tim.c.chen@linux.intel.com, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com, Pawan Gupta
Subject: [PATCH v6 3/6] x86/entry_32: Add VERW just before userspace transition
Message-ID: <20240123-delay-verw-v6-3-a8206baca7d3@linux.intel.com>
X-Mailer: b4 0.12.3
References: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>
In-Reply-To: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>

As done for entry_64, add support for executing VERW late in the
exit-to-user path for 32-bit mode.

Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_32.S | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index c73047bf9f4b..fba427646805 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -885,6 +885,7 @@ SYM_FUNC_START(entry_SYSENTER_32)
 	BUG_IF_WRONG_CR3 no_user_check=1
 	popfl
 	popl	%eax
+	CLEAR_CPU_BUFFERS
 
 	/*
 	 * Return back to the vDSO, which will pop ecx and edx.
@@ -954,6 +955,7 @@ restore_all_switch_stack:
 
 	/* Restore user state */
 	RESTORE_REGS pop=4			# skip orig_eax/error_code
+	CLEAR_CPU_BUFFERS
 .Lirq_return:
 	/*
 	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
@@ -1146,6 +1148,7 @@ SYM_CODE_START(asm_exc_nmi)
 
 	/* Not on SYSENTER stack.
	 */
 	call	exc_nmi
+	CLEAR_CPU_BUFFERS
 	jmp	.Lnmi_return
 
 .Lnmi_from_sysenter_stack:
Date: Tue, 23 Jan 2024 23:41:53 -0800
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
  "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
  Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck@intel.com,
  ak@linux.intel.com, tim.c.chen@linux.intel.com, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
  Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com,
  Pawan Gupta
Subject: [PATCH v6 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
Message-ID: <20240123-delay-verw-v6-4-a8206baca7d3@linux.intel.com>
References: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>
In-Reply-To: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>

The VERW mitigation at exit-to-user is enabled via a static branch
mds_user_clear.
This static branch is never toggled after boot, and can be safely
replaced with an ALTERNATIVE() which is convenient to use in asm.

Switch to ALTERNATIVE() to use the VERW mitigation late in the
exit-to-user path. Also remove the now redundant VERW in exc_nmi() and
arch_exit_to_user_mode().

Signed-off-by: Pawan Gupta
---
 Documentation/arch/x86/mds.rst       | 38 +++++++++++++++++++++++++-----------
 arch/x86/include/asm/entry-common.h  |  1 -
 arch/x86/include/asm/nospec-branch.h | 12 ------------
 arch/x86/kernel/cpu/bugs.c           | 15 ++++++--------
 arch/x86/kernel/nmi.c                |  3 ---
 arch/x86/kvm/vmx/vmx.c               |  2 +-
 6 files changed, 34 insertions(+), 37 deletions(-)

diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
index e73fdff62c0a..c58c72362911 100644
--- a/Documentation/arch/x86/mds.rst
+++ b/Documentation/arch/x86/mds.rst
@@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:

   mds_clear_cpu_buffers()

+Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path.
+Other than CFLAGS.ZF, this macro doesn't clobber any registers.
+
 The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
 (idle) transitions.

@@ -138,17 +141,30 @@ Mitigation points
    When transitioning from kernel to user space the CPU buffers are flushed
    on affected CPUs when the mitigation is not disabled on the kernel
-   command line. The migitation is enabled through the static key
-   mds_user_clear.
-
-   The mitigation is invoked in prepare_exit_to_usermode() which covers
-   all but one of the kernel to user space transitions.  The exception
-   is when we return from a Non Maskable Interrupt (NMI), which is
-   handled directly in do_nmi().
-
-   (The reason that NMI is special is that prepare_exit_to_usermode() can
-   enable IRQs.  In NMI context, NMIs are blocked, and we don't want to
-   enable IRQs with NMIs blocked.)
+   command line. The mitigation is enabled through the feature flag
+   X86_FEATURE_CLEAR_CPU_BUF.
+
+   The mitigation is invoked just before transitioning to userspace after
+   user registers are restored. This is done to minimize the window in
+   which kernel data could be accessed after VERW e.g. via an NMI after
+   VERW.
+
+   **Corner case not handled**
+   Interrupts returning to kernel don't clear CPUs buffers since the
+   exit-to-user path is expected to do that anyways. But, there could be
+   a case when an NMI is generated in kernel after the exit-to-user path
+   has cleared the buffers. This case is not handled and NMI returning to
+   kernel don't clear CPU buffers because:
+
+   1. It is rare to get an NMI after VERW, but before returning to userspace.
+   2. For an unprivileged user, there is no known way to make that NMI
+      less rare or target it.
+   3. It would take a large number of these precisely-timed NMIs to mount
+      an actual attack. There's presumably not enough bandwidth.
+   4. The NMI in question occurs after a VERW, i.e. when user state is
+      restored and most interesting data is already scrubbed. Whats left
+      is only the data that NMI touches, and that may or may not be of
+      any interest.

 2. C-State transition

diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce8f50192ae3..7e523bb3d2d3 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -91,7 +91,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,

 static __always_inline void arch_exit_to_user_mode(void)
 {
-	mds_user_clear_cpu_buffers();
 	amd_clear_divider();
 }
 #define arch_exit_to_user_mode arch_exit_to_user_mode

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 4ea4c310db52..0a8fa023a804 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -544,7 +544,6 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);

-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);

 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);

@@ -576,17 +575,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
 	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
 }

-/**
- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
- *
- * Clear CPU buffers if the corresponding static key is enabled
- */
-static __always_inline void mds_user_clear_cpu_buffers(void)
-{
-	if (static_branch_likely(&mds_user_clear))
-		mds_clear_cpu_buffers();
-}
-
 /**
  * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
  *

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bb0ab8466b91..48d049cd74e7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -111,9 +111,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 /* Control unconditional IBPB in switch_mm() */
 DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);

-/* Control MDS CPU buffer clear before returning to user space */
-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
-EXPORT_SYMBOL_GPL(mds_user_clear);
 /* Control MDS
CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 EXPORT_SYMBOL_GPL(mds_idle_clear);

@@ -252,7 +249,7 @@ static void __init mds_select_mitigation(void)
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;

-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);

 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
@@ -356,7 +353,7 @@ static void __init taa_select_mitigation(void)
 	 * For guests that can't determine whether the correct microcode is
 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
 	 */
-	static_branch_enable(&mds_user_clear);
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);

 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
 		cpu_smt_disable(false);
@@ -424,7 +421,7 @@ static void __init mmio_select_mitigation(void)
 	 */
 	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
					      boot_cpu_has(X86_FEATURE_RTM)))
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 	else
 		static_branch_enable(&mmio_stale_data_clear);

@@ -484,12 +481,12 @@ static void __init md_clear_update_mitigation(void)
 	if (cpu_mitigations_off())
 		return;

-	if (!static_key_enabled(&mds_user_clear))
+	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
 		goto out;

 	/*
-	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
-	 * mitigation, if necessary.
+	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
+	 * Stale Data mitigation, if necessary.
 	 */
 	if (mds_mitigation == MDS_MITIGATION_OFF &&
 	    boot_cpu_has_bug(X86_BUG_MDS)) {

diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 17e955ab69fe..3082cf24b69e 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -563,9 +563,6 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
 	}
 	if (this_cpu_dec_return(nmi_state))
 		goto nmi_restart;
-
-	if (user_mode(regs))
-		mds_user_clear_cpu_buffers();
 }

 #if IS_ENABLED(CONFIG_KVM_INTEL)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index be20a60047b1..bdcf2c041e0c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7229,7 +7229,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (static_branch_unlikely(&mds_user_clear))
+	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
 		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
		 kvm_arch_has_assigned_device(vcpu->kvm))

From patchwork Wed Jan 24 07:42:29 2024
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 191433
Date: Tue, 23 Jan 2024 23:42:29 -0800
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
  "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
  Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck@intel.com,
  ak@linux.intel.com, tim.c.chen@linux.intel.com, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
  Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com,
  Pawan Gupta
Subject: [PATCH v6 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation
Message-ID: <20240123-delay-verw-v6-6-a8206baca7d3@linux.intel.com>
References: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>
In-Reply-To: <20240123-delay-verw-v6-0-a8206baca7d3@linux.intel.com>

During VMentry VERW is executed to mitigate MDS. After VERW, any memory
access like register push onto stack may put host data in MDS affected
CPU buffers. A guest can then use MDS to sample host data.

Although the likelihood of secrets surviving in registers at the current
VERW callsite is low, it can't be ruled out. Harden the MDS mitigation by
moving the VERW mitigation late in the VMentry path.

Note that VERW for MMIO Stale Data mitigation is unchanged because of the
complexity of per-guest conditional VERW, which is not easy to handle
that late in asm with no GPRs available.
If the CPU is also affected by MDS, VERW is unconditionally executed late
in asm regardless of the guest having MMIO access.

Signed-off-by: Pawan Gupta
Acked-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmenter.S |  3 +++
 arch/x86/kvm/vmx/vmx.c     | 20 ++++++++++++++++----
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index b3b13ec04bac..139960deb736 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -161,6 +161,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX. This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX

+	/* Clobbers EFLAGS.ZF */
+	CLEAR_CPU_BUFFERS
+
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc .Lvmlaunch

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bdcf2c041e0c..0190e7584ffd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -387,7 +387,16 @@ static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)

 static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 {
-	vmx->disable_fb_clear = (host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
+	/*
+	 * Disable VERW's behavior of clearing CPU buffers for the guest if the
+	 * CPU isn't affected by MDS/TAA, and the host hasn't forcefully enabled
+	 * the mitigation. Disabling the clearing behavior provides a
+	 * performance boost for guests that aren't aware that manually clearing
+	 * CPU buffers is unnecessary, at the cost of MSR accesses on VM-Entry
+	 * and VM-Exit.
+	 */
+	vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
+				(host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
 				!boot_cpu_has_bug(X86_BUG_MDS) &&
 				!boot_cpu_has_bug(X86_BUG_TAA);

@@ -7226,11 +7235,14 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,

 	guest_state_enter_irqoff();

-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+	/*
+	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
+	 * mitigation for MDS is done late in VMentry and is still
+	 * executed in spite of L1D Flush. This is because an extra VERW
+	 * should not matter much after the big hammer L1D Flush.
+	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
-		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
		 kvm_arch_has_assigned_device(vcpu->kvm))
		mds_clear_cpu_buffers();