Message ID | 20230218211433.26859-16-rick.p.edgecombe@intel.com |
---|---|
State | New |
Headers |
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann <arnd@arndb.de>, Andy Lutomirski <luto@kernel.org>, Balbir Singh <bsingharora@gmail.com>, Borislav Petkov <bp@alien8.de>, Cyrill Gorcunov <gorcunov@gmail.com>, Dave Hansen <dave.hansen@linux.intel.com>, Eugene Syromiatnikov <esyr@redhat.com>, Florian Weimer <fweimer@redhat.com>, "H. J. Lu" <hjl.tools@gmail.com>, Jann Horn <jannh@google.com>, Jonathan Corbet <corbet@lwn.net>, Kees Cook <keescook@chromium.org>, Mike Kravetz <mike.kravetz@oracle.com>, Nadav Amit <nadav.amit@gmail.com>, Oleg Nesterov <oleg@redhat.com>, Pavel Machek <pavel@ucw.cz>, Peter Zijlstra <peterz@infradead.org>, Randy Dunlap <rdunlap@infradead.org>, Weijiang Yang <weijiang.yang@intel.com>, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, John Allen <john.allen@amd.com>, kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu <yu-cheng.yu@intel.com>
Subject: [PATCH v6 15/41] x86/mm: Update ptep/pmdp_set_wrprotect() for _PAGE_SAVED_DIRTY
Date: Sat, 18 Feb 2023 13:14:07 -0800
Message-Id: <20230218211433.26859-16-rick.p.edgecombe@intel.com>
In-Reply-To: <20230218211433.26859-1-rick.p.edgecombe@intel.com>
References: <20230218211433.26859-1-rick.p.edgecombe@intel.com>
Series | Shadow stacks for userspace |
Commit Message
Edgecombe, Rick P
Feb. 18, 2023, 9:14 p.m. UTC
From: Yu-cheng Yu <yu-cheng.yu@intel.com>

When shadow stack is in use, Write=0,Dirty=1 PTEs are preserved for
shadow stack. Copy-on-write PTEs then have Write=0,SavedDirty=1.

When a PTE goes from Write=1,Dirty=1 to Write=0,SavedDirty=1, it could
become a transient shadow stack PTE in two cases:

1. Some processors can start a write but end up seeing a Write=0 PTE by
   the time they get to the Dirty bit, creating a transient shadow stack
   PTE. However, this will not occur on processors supporting shadow
   stack, and a TLB flush is not necessary.

2. When _PAGE_DIRTY is replaced with _PAGE_SAVED_DIRTY non-atomically, a
   transient shadow stack PTE can be created as a result. Thus, prevent
   that with cmpxchg.

In the case of pmdp_set_wrprotect(), for nopmd configs the ->pmd operated
on does not exist and the logic would need to be different. Although the
extra functionality will normally be optimized out when user shadow
stacks are not configured, also exclude it in the preprocessor stage so
that it will still compile. User shadow stack is not supported there by
Linux anyway. Leave the cpu_feature_enabled() check so that the
functionality also gets disabled based on runtime detection of the
feature.

Similarly, compile it out in ptep_set_wrprotect() due to a clang warning
on i386. Like above, the code path should get optimized out on i386
since shadow stack is not supported on 32 bit kernels, but this makes
the compiler happy.

Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided
many insights to the issue. Jann Horn provided the cmpxchg solution.

Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: John Allen <john.allen@amd.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>

---
v6:
 - Fix comment and log to update for _PAGE_COW being replaced with
   _PAGE_SAVED_DIRTY.

v5:
 - Commit log verbiage and formatting (Boris)
 - Remove capitalization on shadow stack (Boris)
 - Fix i386 warning on recent clang

v3:
 - Remove unnecessary #ifdef (Dave Hansen)

v2:
 - Compile out some code due to clang build error
 - Clarify commit log (dhansen)
 - Normalize PTE bit descriptions between patches (dhansen)
 - Update comment with text from (dhansen)
---
 arch/x86/include/asm/pgtable.h | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
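For readers following the race described above, the pattern the patch relies on can be sketched outside the kernel. The snippet below is a minimal user-space model only, not kernel code: the PTE is a bare 64-bit word, the Write (bit 1) and Dirty (bit 6) positions match the architectural x86 PTE layout, but the SavedDirty position (bit 57 here) and all fake_* helper names are made up for illustration, and the kernel's try_cmpxchg() is stood in for by C11 atomic_compare_exchange_weak(). The point it demonstrates is that the new value is always derived from the latest observed contents and installed atomically, so no intermediate Write=0,Dirty=1 encoding is ever published and a concurrently set Dirty bit is never lost.

```c
/*
 * Illustrative user-space model of the cmpxchg pattern, NOT kernel code.
 * Bit 57 stands in for the SavedDirty software bit only for this demo;
 * Write (bit 1) and Dirty (bit 6) match the x86 PTE layout.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_PTE_WRITE		(1ULL << 1)	/* like _PAGE_RW */
#define FAKE_PTE_DIRTY		(1ULL << 6)	/* like _PAGE_DIRTY */
#define FAKE_PTE_SAVED_DIRTY	(1ULL << 57)	/* illustrative position only */

/*
 * Model of a SavedDirty-aware wrprotect: clear Write and, if the entry
 * was dirty, move Dirty into the software SavedDirty bit so that a
 * Write=0,Dirty=1 (shadow stack) encoding is never produced.
 */
static uint64_t fake_wrprotect(uint64_t pte)
{
	if (pte & FAKE_PTE_DIRTY) {
		pte &= ~FAKE_PTE_DIRTY;
		pte |= FAKE_PTE_SAVED_DIRTY;
	}
	return pte & ~FAKE_PTE_WRITE;
}

/*
 * Analogue of the patched ptep_set_wrprotect(): recompute and retry if
 * the word changed underneath us (e.g. "hardware" set Dirty), so the
 * published value is always derived from the current contents.
 */
static void fake_set_wrprotect(_Atomic uint64_t *ptep)
{
	uint64_t old_pte = atomic_load(ptep);
	uint64_t new_pte;

	do {
		new_pte = fake_wrprotect(old_pte);
		/* On failure, old_pte is refreshed with the current value. */
	} while (!atomic_compare_exchange_weak(ptep, &old_pte, new_pte));
}

int main(void)
{
	_Atomic uint64_t pte = FAKE_PTE_WRITE | FAKE_PTE_DIRTY;

	fake_set_wrprotect(&pte);
	/* Expect Write=0, Dirty=0, SavedDirty=1: 0x200000000000000 */
	printf("pte after wrprotect: %#llx\n",
	       (unsigned long long)atomic_load(&pte));
	return 0;
}
```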
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 110e552eb602..e5f00c077039 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1192,6 +1192,23 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
 {
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+	/*
+	 * Avoid accidentally creating shadow stack PTEs
+	 * (Write=0,Dirty=1). Use cmpxchg() to prevent races with
+	 * the hardware setting Dirty=1.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) {
+		pte_t old_pte, new_pte;
+
+		old_pte = READ_ONCE(*ptep);
+		do {
+			new_pte = pte_wrprotect(old_pte);
+		} while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte));
+
+		return;
+	}
+#endif
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
 }
 
@@ -1244,6 +1261,24 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pmd_t *pmdp)
 {
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+	/*
+	 * Avoid accidentally creating shadow stack PTEs
+	 * (Write=0,Dirty=1). Use cmpxchg() to prevent races with
+	 * the hardware setting Dirty=1.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) {
+		pmd_t old_pmd, new_pmd;
+
+		old_pmd = READ_ONCE(*pmdp);
+		do {
+			new_pmd = pmd_wrprotect(old_pmd);
+		} while (!try_cmpxchg(&pmdp->pmd, &old_pmd.pmd, new_pmd.pmd));
+
+		return;
+	}
+#endif
+
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
 }
 