From patchwork Wed Oct 25 20:53:11 2023
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 158295
Date: Wed, 25 Oct 2023 13:53:11 -0700
From: Pawan Gupta
Peter Anvin" , Peter Zijlstra , Josh Poimboeuf , Andy Lutomirski , Jonathan Corbet , Sean Christopherson , Paolo Bonzini , tony.luck@intel.com, ak@linux.intel.com, tim.c.chen@linux.intel.com Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, Alyssa Milburn , Daniel Sneddon , antonio.gomez.iglesias@linux.intel.com, Pawan Gupta Subject: [PATCH v3 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key Message-ID: <20231025-delay-verw-v3-4-52663677ee35@linux.intel.com> X-Mailer: b4 0.12.3 References: <20231025-delay-verw-v3-0-52663677ee35@linux.intel.com> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20231025-delay-verw-v3-0-52663677ee35@linux.intel.com> X-Spam-Status: No, score=-0.8 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on fry.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (fry.vger.email [0.0.0.0]); Wed, 25 Oct 2023 13:54:21 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1780762296467549398 X-GMAIL-MSGID: 1780762296467549398 The VERW mitigation at exit-to-user is enabled via a static branch mds_user_clear. This static branch is never toggled after boot, and can be safely replaced with an ALTERNATIVE() which is convenient to use in asm. Switch to ALTERNATIVE() to use the VERW mitigation late in exit-to-user path. Also remove the now redundant VERW in exc_nmi() and arch_exit_to_user_mode(). Signed-off-by: Pawan Gupta --- Documentation/arch/x86/mds.rst | 39 ++++++++++++++++++++++++++---------- arch/x86/include/asm/entry-common.h | 1 - arch/x86/include/asm/nospec-branch.h | 12 ----------- arch/x86/kernel/cpu/bugs.c | 15 ++++++-------- arch/x86/kernel/nmi.c | 2 -- arch/x86/kvm/vmx/vmx.c | 2 +- 6 files changed, 35 insertions(+), 36 deletions(-) diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst index e73fdff62c0a..34b9e476078c 100644 --- a/Documentation/arch/x86/mds.rst +++ b/Documentation/arch/x86/mds.rst @@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing: mds_clear_cpu_buffers() +Also macro CLEAR_CPU_BUFFERS is meant to be used in ASM late in exit-to-user +path. This macro works for cases where GPRs can't be clobbered. + The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state (idle) transitions. @@ -138,17 +141,31 @@ Mitigation points When transitioning from kernel to user space the CPU buffers are flushed on affected CPUs when the mitigation is not disabled on the kernel - command line. The migitation is enabled through the static key - mds_user_clear. - - The mitigation is invoked in prepare_exit_to_usermode() which covers - all but one of the kernel to user space transitions. The exception - is when we return from a Non Maskable Interrupt (NMI), which is - handled directly in do_nmi(). - - (The reason that NMI is special is that prepare_exit_to_usermode() can - enable IRQs. In NMI context, NMIs are blocked, and we don't want to - enable IRQs with NMIs blocked.) + command line. The mitigation is enabled through the feature flag + X86_FEATURE_CLEAR_CPU_BUF. + + The mitigation is invoked just before transitioning to userspace after + user registers are restored. 
 Documentation/arch/x86/mds.rst       | 39 ++++++++++++++++++++++++++----------
 arch/x86/include/asm/entry-common.h  |  1 -
 arch/x86/include/asm/nospec-branch.h | 12 -----------
 arch/x86/kernel/cpu/bugs.c           | 15 ++++++--------
 arch/x86/kernel/nmi.c                |  2 --
 arch/x86/kvm/vmx/vmx.c               |  2 +-
 6 files changed, 35 insertions(+), 36 deletions(-)

diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
index e73fdff62c0a..34b9e476078c 100644
--- a/Documentation/arch/x86/mds.rst
+++ b/Documentation/arch/x86/mds.rst
@@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:
 
     mds_clear_cpu_buffers()
 
+Also, the macro CLEAR_CPU_BUFFERS can be used in asm late in the exit-to-user
+path. This macro works in cases where GPRs can't be clobbered.
+
 The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
 (idle) transitions.
 
@@ -138,17 +141,31 @@ Mitigation points
 
    When transitioning from kernel to user space the CPU buffers are flushed
    on affected CPUs when the mitigation is not disabled on the kernel
-   command line. The migitation is enabled through the static key
-   mds_user_clear.
-
-   The mitigation is invoked in prepare_exit_to_usermode() which covers
-   all but one of the kernel to user space transitions. The exception
-   is when we return from a Non Maskable Interrupt (NMI), which is
-   handled directly in do_nmi().
-
-   (The reason that NMI is special is that prepare_exit_to_usermode() can
-   enable IRQs. In NMI context, NMIs are blocked, and we don't want to
-   enable IRQs with NMIs blocked.)
+   command line. The mitigation is enabled through the feature flag
+   X86_FEATURE_CLEAR_CPU_BUF.
+
+   The mitigation is invoked just before transitioning to userspace, after
+   user registers are restored. This is done to minimize the window in
+   which kernel data could be accessed after VERW, e.g. via an NMI
+   arriving after VERW.
+
+   Corner case not handled
+   ^^^^^^^^^^^^^^^^^^^^^^^
+   Interrupts returning to kernel don't clear CPU buffers since the
+   exit-to-user path is expected to do that anyway. But there could be
+   a case when an NMI is generated in the kernel after the exit-to-user
+   path has cleared the buffers. This case is not handled, and NMIs
+   returning to kernel don't clear CPU buffers, because:
+
+   1. It is rare to get an NMI after VERW, but before returning to userspace.
+   2. For an unprivileged user, there is no known way to make that NMI
+      less rare or target it.
+   3. It would take a large number of these precisely-timed NMIs to mount
+      an actual attack. There's presumably not enough bandwidth.
+   4. The NMI in question occurs after a VERW, i.e. when user state is
+      restored and most interesting data is already scrubbed. What's left
+      is only the data that the NMI touches, and that may or may not be
+      of any interest.
 
 
 2. C-State transition
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce8f50192ae3..7e523bb3d2d3 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -91,7 +91,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 
 static __always_inline void arch_exit_to_user_mode(void)
 {
-	mds_user_clear_cpu_buffers();
 	amd_clear_divider();
 }
 #define arch_exit_to_user_mode arch_exit_to_user_mode
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 005e69f93115..12b8e86678bf 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -553,7 +553,6 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
@@ -585,17 +584,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
 	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
 }
 
-/**
- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
- *
- * Clear CPU buffers if the corresponding static key is enabled
- */
-static __always_inline void mds_user_clear_cpu_buffers(void)
-{
-	if (static_branch_likely(&mds_user_clear))
-		mds_clear_cpu_buffers();
-}
-
 /**
  * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
  *
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 10499bcd4e39..00aab0c0937f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -111,9 +111,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 /* Control unconditional IBPB in switch_mm() */
 DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-/* Control MDS CPU buffer clear before returning to user space */
-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
-EXPORT_SYMBOL_GPL(mds_user_clear);
 /* Control MDS CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 EXPORT_SYMBOL_GPL(mds_idle_clear);
@@ -252,7 +249,7 @@ static void __init mds_select_mitigation(void)
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
 
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
@@ -356,7 +353,7 @@ static void __init taa_select_mitigation(void)
 	 * For guests that can't determine whether the correct microcode is
 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
 	 */
-	static_branch_enable(&mds_user_clear);
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
 		cpu_smt_disable(false);
@@ -424,7 +421,7 @@ static void __init mmio_select_mitigation(void)
 	 */
 	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
 					      boot_cpu_has(X86_FEATURE_RTM)))
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 	else
 		static_branch_enable(&mmio_stale_data_clear);
 
@@ -484,12 +481,12 @@ static void __init md_clear_update_mitigation(void)
 	if (cpu_mitigations_off())
 		return;
 
-	if (!static_key_enabled(&mds_user_clear))
+	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
 		goto out;
 
 	/*
-	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
-	 * mitigation, if necessary.
+	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
+	 * Stale Data mitigation, if necessary.
 	 */
 	if (mds_mitigation == MDS_MITIGATION_OFF &&
 	    boot_cpu_has_bug(X86_BUG_MDS)) {
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index a0c551846b35..ebfff8dca661 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -551,8 +551,6 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
 	if (this_cpu_dec_return(nmi_state))
 		goto nmi_restart;
 
-	if (user_mode(regs))
-		mds_user_clear_cpu_buffers();
 	if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) {
 		WRITE_ONCE(nsp->idt_seq, nsp->idt_seq + 1);
 		WARN_ON_ONCE(nsp->idt_seq & 0x1);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f3693..24e8694b83fc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7229,7 +7229,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (static_branch_unlikely(&mds_user_clear))
+	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
 		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))