From patchwork Sun Mar 12 11:26:01 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 68274
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra
Cc: x86@kernel.org, Kostya Serebryany, Andrey Ryabinin, Andrey Konovalov,
    Alexander Potapenko, Taras Madan, Dmitry Vyukov, "H . J . Lu", Andi Kleen,
    Rick Edgecombe, Bharata B Rao, Jacob Pan, Ashok Raj, Linus Torvalds,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Shutemov" Subject: [PATCHv16 06/17] x86/uaccess: Provide untagged_addr() and remove tags before address check Date: Sun, 12 Mar 2023 14:26:01 +0300 Message-Id: <20230312112612.31869-7-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230312112612.31869-1-kirill.shutemov@linux.intel.com> References: <20230312112612.31869-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760161265250676520?= X-GMAIL-MSGID: =?utf-8?q?1760161265250676520?= untagged_addr() is a helper used by the core-mm to strip tag bits and get the address to the canonical shape based on rules of the current thread. It only handles userspace addresses. The untagging mask is stored in per-CPU variable and set on context switching to the task. The tags must not be included into check whether it's okay to access the userspace address. Strip tags in access_ok(). Signed-off-by: Kirill A. Shutemov Acked-by: Peter Zijlstra (Intel) Tested-by: Alexander Potapenko --- arch/x86/include/asm/mmu.h | 3 +++ arch/x86/include/asm/mmu_context.h | 11 +++++++++ arch/x86/include/asm/tlbflush.h | 10 ++++++++ arch/x86/include/asm/uaccess.h | 39 ++++++++++++++++++++++++++++-- arch/x86/kernel/process.c | 3 +++ arch/x86/mm/init.c | 5 ++++ 6 files changed, 69 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h index 22fc9fbf1d0a..9cac8c45a647 100644 --- a/arch/x86/include/asm/mmu.h +++ b/arch/x86/include/asm/mmu.h @@ -45,6 +45,9 @@ typedef struct { #ifdef CONFIG_ADDRESS_MASKING /* Active LAM mode: X86_CR3_LAM_U48 or X86_CR3_LAM_U57 or 0 (disabled) */ unsigned long lam_cr3_mask; + + /* Significant bits of the virtual address. Excludes tag bits. 
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 0295c3863db7..eb1387ac40fa 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -101,6 +101,12 @@ static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
 static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
 {
	mm->context.lam_cr3_mask = oldmm->context.lam_cr3_mask;
+	mm->context.untag_mask = oldmm->context.untag_mask;
+}
+
+static inline void mm_reset_untag_mask(struct mm_struct *mm)
+{
+	mm->context.untag_mask = -1UL;
 }

 #else
@@ -113,6 +119,10 @@ static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
 static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
 {
 }
+
+static inline void mm_reset_untag_mask(struct mm_struct *mm)
+{
+}
 #endif

 #define enter_lazy_tlb enter_lazy_tlb
@@ -139,6 +149,7 @@ static inline int init_new_context(struct task_struct *tsk,
		mm->context.execute_only_pkey = -1;
	}
 #endif
+	mm_reset_untag_mask(mm);
	init_new_context_ldt(mm);
	return 0;
 }
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e8b47f57bd4a..75bfaa421030 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -54,6 +54,15 @@ static inline void cr4_clear_bits(unsigned long mask)
	local_irq_restore(flags);
 }

+#ifdef CONFIG_ADDRESS_MASKING
+DECLARE_PER_CPU(u64, tlbstate_untag_mask);
+
+static inline u64 current_untag_mask(void)
+{
+	return this_cpu_read(tlbstate_untag_mask);
+}
+#endif
+
 #ifndef MODULE
 /*
  * 6 because 6 should be plenty and struct tlb_state will fit in two cache
@@ -380,6 +389,7 @@ static inline void set_tlbstate_lam_mode(struct mm_struct *mm)
 {
	this_cpu_write(cpu_tlbstate.lam,
		       mm->context.lam_cr3_mask >> X86_CR3_LAM_U57_BIT);
+	this_cpu_write(tlbstate_untag_mask, mm->context.untag_mask);
 }

 #else
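The per-CPU tlbstate_untag_mask copy exists so that the untagged_addr()
fast path reads a single per-CPU word instead of chasing
current->mm->context. A simplified single-CPU model of that bookkeeping
follows -- plain C, not kernel code; the struct layout and the
switch_to_mm() name are stand-ins for the patch's per-CPU machinery:

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-ins for the mm_struct fields the patch touches. */
	struct mm_context { uint64_t lam_cr3_mask; uint64_t untag_mask; };
	struct mm { struct mm_context context; };

	/* Models DEFINE_PER_CPU(u64, tlbstate_untag_mask); the kernel
	 * accesses it via this_cpu_read()/this_cpu_write(). */
	static uint64_t tlbstate_untag_mask;

	/* Models what set_tlbstate_lam_mode() does on context switch:
	 * keep the cached mask coherent with the mm being switched to. */
	static void switch_to_mm(struct mm *next)
	{
		tlbstate_untag_mask = next->context.untag_mask;
	}

	static uint64_t current_untag_mask(void)
	{
		return tlbstate_untag_mask;
	}

	int main(void)
	{
		struct mm a = { .context.untag_mask = ~0ULL };            /* LAM off */
		struct mm b = { .context.untag_mask = ~(0x3fULL << 57) }; /* LAM_U57-ish */

		switch_to_mm(&a);
		printf("mask %016llx\n", (unsigned long long)current_untag_mask());
		switch_to_mm(&b);
		printf("mask %016llx\n", (unsigned long long)current_untag_mask());
		return 0;
	}

mm_reset_untag_mask() sets the mask to -1UL (all bits significant), so
a freshly exec'ed task behaves as if LAM were disabled until it opts in.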
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 1cc756eafa44..c79ebdbd6356 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -7,11 +7,13 @@
 #include <linux/compiler.h>
 #include <linux/instrumented.h>
 #include <linux/kasan-checks.h>
+#include <linux/mm_types.h>
 #include <linux/string.h>
 #include <asm/asm.h>
 #include <asm/page.h>
 #include <asm/smap.h>
 #include <asm/extable.h>
+#include <asm/tlbflush.h>

 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 static inline bool pagefault_disabled(void);
@@ -21,6 +23,39 @@ static inline bool pagefault_disabled(void);
 # define WARN_ON_IN_IRQ()
 #endif

+#ifdef CONFIG_ADDRESS_MASKING
+/*
+ * Mask out tag bits from the address.
+ *
+ * Magic with the 'sign' allows untagging userspace pointers without any
+ * branches while leaving kernel addresses intact.
+ */
+static inline unsigned long __untagged_addr(unsigned long addr,
+					    unsigned long mask)
+{
+	long sign = (long)addr >> 63;
+
+	addr &= mask | sign;
+	return addr;
+}
+
+#define untagged_addr(addr)	({					\
+	u64 __addr = (__force u64)(addr);				\
+	__addr = __untagged_addr(__addr, current_untag_mask());		\
+	(__force __typeof__(addr))__addr;				\
+})
+
+#define untagged_addr_remote(mm, addr)	({				\
+	u64 __addr = (__force u64)(addr);				\
+	mmap_assert_locked(mm);						\
+	__addr = __untagged_addr(__addr, (mm)->context.untag_mask);	\
+	(__force __typeof__(addr))__addr;				\
+})
+
+#else
+#define untagged_addr(addr)	(addr)
+#endif
+
 /**
  * access_ok - Checks if a user space pointer is valid
  * @addr: User space pointer to start of block to check
@@ -38,10 +73,10 @@ static inline bool pagefault_disabled(void);
  * Return: true (nonzero) if the memory block may be valid, false (zero)
  * if it is definitely invalid.
  */
-#define access_ok(addr, size)					\
+#define access_ok(addr, size)						\
 ({									\
	WARN_ON_IN_IRQ();						\
-	likely(__access_ok(addr, size));				\
+	likely(__access_ok(untagged_addr(addr), size));			\
 })

 #include <asm-generic/access_ok.h>
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b650cde3f64d..bbc8c4c6e360 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -48,6 +48,7 @@
 #include <asm/frame.h>
 #include <asm/unwind.h>
 #include <asm/vdso.h>
+#include <asm/mmu_context.h>

 #include "process.h"

@@ -368,6 +369,8 @@ void arch_setup_new_exec(void)
		task_clear_spec_ssb_noexec(current);
		speculation_ctrl_update(read_thread_flags());
	}
+
+	mm_reset_untag_mask(current->mm);
 }

 #ifdef CONFIG_X86_IOPL_IOPERM
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index cb258f58fdc8..659b6c0f7910 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -1048,6 +1048,11 @@ __visible DEFINE_PER_CPU_ALIGNED(struct tlb_state, cpu_tlbstate) = {
	.cr4 = ~0UL,	/* fail hard if we screw up cr4 shadow initialization */
 };

+#ifdef CONFIG_ADDRESS_MASKING
+DEFINE_PER_CPU(u64, tlbstate_untag_mask);
+EXPORT_PER_CPU_SYMBOL(tlbstate_untag_mask);
+#endif
+
 void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
 {
	/* entry 0 MUST be WB (hardwired to speed up translations) */
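The access_ok() hunk is the user-visible payoff: without untagging, a
LAM-tagged pointer sits far above the userspace limit and the range
check spuriously fails. A rough standalone model of that behavior --
the TASK_SIZE_MAX value and the access_range_ok() helper here are
illustrative stand-ins, not the kernel's __access_ok():

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define TASK_SIZE_MAX	(1ULL << 47)		/* stand-in limit */
	#define LAM_U57_MASK	(~(0x3fULL << 57))	/* bits 57..62 = tag */

	/* Crude model of __access_ok(): range check against the limit. */
	static bool access_range_ok(uint64_t addr, uint64_t size)
	{
		return addr + size <= TASK_SIZE_MAX;
	}

	static uint64_t untag(uint64_t addr, uint64_t mask)
	{
		uint64_t sign = (uint64_t)((int64_t)addr >> 63);

		return addr & (mask | sign);
	}

	int main(void)
	{
		uint64_t tagged = 0x00007f0012345678ULL | (0x15ULL << 57);

		/* Raw tagged pointer: the tag bits blow past the limit
		 * and the check fails ... */
		printf("raw:      %d\n", access_range_ok(tagged, 64));
		/* ... untagged first (what the patched access_ok() now
		 * does), the same pointer passes. */
		printf("untagged: %d\n",
		       access_range_ok(untag(tagged, LAM_U57_MASK), 64));
		return 0;
	}

Kernel addresses still fail the check: untagging leaves them intact, so
they remain above the userspace limit.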