From patchwork Wed Feb 14 02:21:52 2024
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 200823
Date: Tue, 13 Feb 2024 18:21:52 -0800
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf,
 Andy Lutomirski, Jonathan Corbet, Sean Christopherson, Paolo Bonzini,
 tony.luck@intel.com, ak@linux.intel.com, tim.c.chen@linux.intel.com,
 Andrew Cooper, Nikolay Borisov
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 kvm@vger.kernel.org, Alyssa Milburn, Daniel Sneddon,
 antonio.gomez.iglesias@linux.intel.com
Subject: [PATCH v8 2/6] x86/entry_64: Add VERW just before userspace transition
Message-ID: <20240213-delay-verw-v8-2-a6216d83edb7@linux.intel.com>
References: <20240213-delay-verw-v8-0-a6216d83edb7@linux.intel.com>
In-Reply-To: <20240213-delay-verw-v8-0-a6216d83edb7@linux.intel.com>

The mitigation for MDS is to use the VERW instruction to clear any secrets
from the CPU buffers. However, memory accesses made after VERW executes can
still leave data in the CPU buffers. It is therefore safer to execute VERW
late in the return-to-user path, to minimize the window in which kernel data
can end up in the CPU buffers. There are not many kernel secrets to be had
after SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after the user register state
is restored. This minimizes the chances of kernel data ending up in the CPU
buffers after VERW has executed. Note that the mitigation at the new
location is not yet enabled.

Corner case not handled
=======================
Interrupts returning to the kernel don't clear the CPU buffers, since the
exit-to-user path is expected to do that anyway. But there could be a case
when an NMI is generated in the kernel after the exit-to-user path has
already cleared the buffers. This case is not handled, and NMIs returning
to the kernel don't clear the CPU buffers, because:

1. It is rare to get an NMI after VERW but before returning to userspace.
2. For an unprivileged user, there is no known way to make that NMI less
   rare or to target it.
3. It would take a large number of these precisely-timed NMIs to mount an
   actual attack. There's presumably not enough bandwidth.
4. The NMI in question occurs after VERW, i.e. when user state is restored
   and most interesting data is already scrubbed. What's left is only the
   data that the NMI touches, and that may or may not be of any interest.
Suggested-by: Dave Hansen
Cc: stable@kernel.org
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S        | 11 +++++++++++
 arch/x86/entry/entry_64_compat.S |  1 +
 2 files changed, 12 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index dfb9b8c66123..8af2a26b24f6 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -161,6 +161,7 @@ syscall_return_via_sysret:
 SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretq
 SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
@@ -571,6 +572,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 
 .Lswapgs_and_iret:
 	swapgs
+	CLEAR_CPU_BUFFERS
 	/* Assert that the IRET frame indicates user mode. */
 	testb	$3, 8(%rsp)
 	jnz	.Lnative_iret
@@ -721,6 +723,8 @@ native_irq_return_ldt:
 	 */
 	popq	%rax				/* Restore user RAX */
 
+	CLEAR_CPU_BUFFERS
+
 	/*
 	 * RSP now points to an ordinary IRET frame, except that the page
 	 * is read-only and RSP[31:16] are preloaded with the userspace
@@ -1446,6 +1450,12 @@ nmi_restore:
 	std
 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
 
+	/*
+	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+	 * NMI in kernel after user state is restored. For an unprivileged user
+	 * these conditions are hard to meet.
+	 */
+
 	/*
 	 * iretq reads the "iret" frame and exits the NMI stack in a
 	 * single instruction. We are returning to kernel mode, so this
@@ -1463,6 +1473,7 @@ SYM_CODE_START(entry_SYSCALL32_ignore)
 	UNWIND_HINT_END_OF_STACK
 	ENDBR
 	mov	$-ENOSYS, %eax
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_CODE_END(entry_SYSCALL32_ignore)
 
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index de94e2e84ecc..eabf48c4d4b4 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -270,6 +270,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
 	xorl	%r9d, %r9d
 	xorl	%r10d, %r10d
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
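
[Editor's note, for context -- not part of this diff: CLEAR_CPU_BUFFERS is
the asm helper introduced by patch 1/6 of this series. A sketch of its
shape, assuming the definition from that earlier patch (the names
mds_verw_sel and X86_FEATURE_CLEAR_CPU_BUF come from patch 1/6, not from
this one):

  /*
   * Sketch of the helper from patch 1/6, shown for reference only. The
   * macro is a no-op until X86_FEATURE_CLEAR_CPU_BUF is set, at which
   * point the alternative patches in a VERW with a memory operand --
   * only the memory-operand form of VERW clears the CPU buffers.
   */
  .macro CLEAR_CPU_BUFFERS
  	ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
  .endm

VERW clobbers only CFLAGS.ZF, which is why it can run this late in the exit
path: every user general-purpose register has already been restored, and
neither SYSRET nor IRET depends on ZF.]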