From patchwork Mon Feb 5 07:20:23 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 196752
Date: Sun, 4 Feb 2024 23:20:23 -0800
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf,
 Andy Lutomirski, Jonathan Corbet, Sean Christopherson, Paolo Bonzini,
 tony.luck@intel.com, ak@linux.intel.com, tim.c.chen@linux.intel.com,
 Andrew Cooper, Nikolay Borisov
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 kvm@vger.kernel.org, Alyssa Milburn, Daniel Sneddon,
 antonio.gomez.iglesias@linux.intel.com, Pawan Gupta, stable@kernel.org
Subject: [PATCH v7 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation
Message-ID: <20240204-delay-verw-v7-6-59be2d704cb2@linux.intel.com>
X-Mailer: b4 0.12.3
References: <20240204-delay-verw-v7-0-59be2d704cb2@linux.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20240204-delay-verw-v7-0-59be2d704cb2@linux.intel.com>

During VMentry, VERW is executed to mitigate MDS. After VERW, any memory
access such as a register push onto the stack may put host data into
MDS-affected CPU buffers. A guest can then use MDS to sample that host
data.

Although the likelihood of secrets surviving in registers at the current
VERW callsite is low, it can't be ruled out. Harden the MDS mitigation by
moving the VERW mitigation late in the VMentry path.

Note that VERW for the MMIO Stale Data mitigation is unchanged because of
the complexity of per-guest conditional VERW, which is not easy to handle
that late in asm with no GPRs available. If the CPU is also affected by
MDS, VERW is unconditionally executed late in asm regardless of whether
the guest has MMIO access.

Cc: stable@kernel.org
Signed-off-by: Pawan Gupta
Acked-by: Sean Christopherson
---
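Note for reviewers: CLEAR_CPU_BUFFERS is the asm helper added earlier in
this series; it is usable this late in the VMentry path because it needs
no GPRs and only clobbers EFLAGS.ZF. As a rough sketch (not part of this
patch; the exact definition lives in the earlier patch, and mds_verw_sel,
ALTERNATIVE, __stringify and _ASM_RIP are shown here only from that
context), it has approximately this shape:

	/*
	 * Sketch of the helper: the alternative is patched in only when
	 * X86_FEATURE_CLEAR_CPU_BUF is set. The memory-operand form of
	 * VERW is the one that clears the CPU buffers, and it clobbers
	 * EFLAGS.ZF but no general-purpose registers.
	 */
	.macro CLEAR_CPU_BUFFERS
		ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
	.endm

Because the macro expands to a bare VERW with a memory operand, it can be
placed after all GPRs are already loaded with guest values, closing the
window described in the commit message.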
 arch/x86/kvm/vmx/vmenter.S |  3 +++
 arch/x86/kvm/vmx/vmx.c     | 20 ++++++++++++++++----
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index ef7cfbad4d57..2bfbf758d061 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -161,6 +161,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX. This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
+	/* Clobbers EFLAGS.ZF */
+	CLEAR_CPU_BUFFERS
+
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc .Lvmlaunch
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b551de3ec0bc..0ec71f935ed2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -388,7 +388,16 @@ static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)
 
 static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 {
-	vmx->disable_fb_clear = (host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
+	/*
+	 * Disable VERW's behavior of clearing CPU buffers for the guest if the
+	 * CPU isn't affected by MDS/TAA, and the host hasn't forcefully enabled
+	 * the mitigation. Disabling the clearing behavior provides a
+	 * performance boost for guests that aren't aware that manually clearing
+	 * CPU buffers is unnecessary, at the cost of MSR accesses on VM-Entry
+	 * and VM-Exit.
+	 */
+	vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
+				(host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
 				!boot_cpu_has_bug(X86_BUG_MDS) &&
 				!boot_cpu_has_bug(X86_BUG_TAA);
 
@@ -7224,11 +7233,14 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	guest_state_enter_irqoff();
 
-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+	/*
+	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
+	 * mitigation for MDS is done late in VMentry and is still
+	 * executed in spite of L1D Flush. This is because an extra VERW
+	 * should not matter much after the big hammer L1D Flush.
+	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
-		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
 		mds_clear_cpu_buffers();