Message ID | 20230227222957.24501-32-rick.p.edgecombe@intel.com |
---|---|
State | New |
Headers |
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
Subject: [PATCH v7 31/41] x86/shstk: Introduce routines modifying shstk
Date: Mon, 27 Feb 2023 14:29:47 -0800
Message-Id: <20230227222957.24501-32-rick.p.edgecombe@intel.com>
In-Reply-To: <20230227222957.24501-1-rick.p.edgecombe@intel.com>
References: <20230227222957.24501-1-rick.p.edgecombe@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org |
Series | Shadow stacks for userspace |
Commit Message
Edgecombe, Rick P
Feb. 27, 2023, 10:29 p.m. UTC
From: Yu-cheng Yu <yu-cheng.yu@intel.com>

Shadow stacks are normally written to via CALL/RET or specific CET
instructions like RSTORSSP/SAVEPREVSSP. However during some Linux
operations the kernel will need to write to directly using the ring-0 only
WRUSS instruction.

A shadow stack restore token marks a restore point of the shadow stack, and
the address in a token must point directly above the token, which is within
the same shadow stack. This is distinctively different from other pointers
on the shadow stack, since those pointers point to executable code area.

Introduce token setup and verify routines. Also introduce WRUSS, which is
a kernel-mode instruction but writes directly to user shadow stack.

In future patches that enable shadow stack to work with signals, the kernel
will need something to denote the point in the stack where sigreturn may be
called. This will prevent attackers calling sigreturn at arbitrary places
in the stack, in order to help prevent SROP attacks.

To do this, something that can only be written by the kernel needs to be
placed on the shadow stack. This can be accomplished by setting bit 63 in
the frame written to the shadow stack. Userspace return addresses can't
have this bit set as it is in the kernel range. It is also can't be a
valid restore token.

Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: John Allen <john.allen@amd.com>
Tested-by: Kees Cook <keescook@chromium.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Kees Cook <keescook@chromium.org>
---
v5:
 - Fix typo in commit log

v3:
 - Drop shstk_check_rstor_token()
 - Fail put_shstk_data() if bit 63 is set in the data (Kees)
 - Add comment in create_rstor_token() (Kees)
 - Pull in create_rstor_token() changes from future patch (Kees)

v2:
 - Add data helpers for writing to shadow stack.

v1:
 - Use xsave helpers.
---
 arch/x86/include/asm/special_insns.h | 13 +++++
 arch/x86/kernel/shstk.c              | 73 ++++++++++++++++++++++++++++
 2 files changed, 86 insertions(+)
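(Editor's aside, not part of the patch: a minimal user-space-style sketch, with made-up names, of the token rule described above -- "the address in a token must point directly above the token".)

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: bit 0 of a restore token flags 64-bit mode. */
#define TOKEN_MODE_64BIT	0x1UL

/*
 * A restore token is an 8-byte, 8-byte-aligned value on the shadow stack.
 * The SSP it encodes must point directly above the token itself, i.e.
 * token address + 8, within the same shadow stack.
 */
static bool looks_like_rstor_token(uint64_t token_addr, uint64_t token_val)
{
	uint64_t ssp = token_val & ~0x7UL;	/* strip mode/reserved bits */

	return (token_val & TOKEN_MODE_64BIT) && ssp == token_addr + 8;
}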
Comments
On Mon, Feb 27, 2023 at 02:29:47PM -0800, Rick Edgecombe wrote:
> From: Yu-cheng Yu <yu-cheng.yu@intel.com>
>
> Shadow stacks are normally written to via CALL/RET or specific CET
                                         ^
                                         indirectly.

> instructions like RSTORSSP/SAVEPREVSSP. However during some Linux
> operations the kernel will need to write to directly using the ring-0 only

"However, sometimes the kernel will need to..."

> WRUSS instruction.
>
> A shadow stack restore token marks a restore point of the shadow stack, and
> the address in a token must point directly above the token, which is within
> the same shadow stack. This is distinctively different from other pointers
> on the shadow stack, since those pointers point to executable code area.
>
> Introduce token setup and verify routines. Also introduce WRUSS, which is
> a kernel-mode instruction but writes directly to user shadow stack.
>
> In future patches that enable shadow stack to work with signals, the kernel
> will need something to denote the point in the stack where sigreturn may be
> called. This will prevent attackers calling sigreturn at arbitrary places
> in the stack, in order to help prevent SROP attacks.
>
> To do this, something that can only be written by the kernel needs to be
> placed on the shadow stack. This can be accomplished by setting bit 63 in
> the frame written to the shadow stack. Userspace return addresses can't
> have this bit set as it is in the kernel range. It is also can't be a

s/is //

> valid restore token.

...

> diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
> index de48d1389936..d6cd9344f6c7 100644
> --- a/arch/x86/include/asm/special_insns.h
> +++ b/arch/x86/include/asm/special_insns.h
> @@ -202,6 +202,19 @@ static inline void clwb(volatile void *__p)
>  		: [pax] "a" (p));
>  }
>
> +#ifdef CONFIG_X86_USER_SHADOW_STACK
> +static inline int write_user_shstk_64(u64 __user *addr, u64 val)
> +{
> +	asm_volatile_goto("1: wrussq %[val], (%[addr])\n"
> +			  _ASM_EXTABLE(1b, %l[fail])
> +			  :: [addr] "r" (addr), [val] "r" (val)
> +			  :: fail);
> +	return 0;
> +fail:
> +	return -EFAULT;

Nice!

> +}
> +#endif /* CONFIG_X86_USER_SHADOW_STACK */
> +
>  #define nop() asm volatile ("nop")
>
>  static inline void serialize(void)

...

> +static int put_shstk_data(u64 __user *addr, u64 data)
> +{
> +	if (WARN_ON_ONCE(data & BIT(63)))

Dunno, maybe something like:

	/*
	 * A comment explaining what that is...
	 */
	#define SHSTK_SIGRETURN_TOKEN	BIT_ULL(63)

or so?

And use that instead of that magical bit 63.
On Thu, 2023-03-09 at 17:48 +0100, Borislav Petkov wrote:
> On Mon, Feb 27, 2023 at 02:29:47PM -0800, Rick Edgecombe wrote:
> > From: Yu-cheng Yu <yu-cheng.yu@intel.com>
> >
> > Shadow stacks are normally written to via CALL/RET or specific CET
>
>                                          ^
> indirectly.

Dunno here, RSTORSSP/SAVEPREVSSP are kind of direct.

> > instructions like RSTORSSP/SAVEPREVSSP. However during some Linux
> > operations the kernel will need to write to directly using the
> > ring-0 only
>
> "However, sometimes the kernel will need to..."

Ok.

> > WRUSS instruction.
> >
> > A shadow stack restore token marks a restore point of the shadow
> > stack, and
> > the address in a token must point directly above the token, which
> > is within
> > the same shadow stack. This is distinctively different from other
> > pointers
> > on the shadow stack, since those pointers point to executable code
> > area.
> >
> > Introduce token setup and verify routines. Also introduce WRUSS,
> > which is
> > a kernel-mode instruction but writes directly to user shadow stack.
> >
> > In future patches that enable shadow stack to work with signals,
> > the kernel
> > will need something to denote the point in the stack where
> > sigreturn may be
> > called. This will prevent attackers calling sigreturn at arbitrary
> > places
> > in the stack, in order to help prevent SROP attacks.
> >
> > To do this, something that can only be written by the kernel needs
> > to be
> > placed on the shadow stack. This can be accomplished by setting bit
> > 63 in
> > the frame written to the shadow stack. Userspace return addresses
> > can't
> > have this bit set as it is in the kernel range. It is also can't be
> > a
>
> s/is //

Yep, thanks.

> > valid restore token.
>
> ...
>
> > diff --git a/arch/x86/include/asm/special_insns.h
> > b/arch/x86/include/asm/special_insns.h
> > index de48d1389936..d6cd9344f6c7 100644
> > --- a/arch/x86/include/asm/special_insns.h
> > +++ b/arch/x86/include/asm/special_insns.h
> > @@ -202,6 +202,19 @@ static inline void clwb(volatile void *__p)
> >  		: [pax] "a" (p));
> >  }
> >
> > +#ifdef CONFIG_X86_USER_SHADOW_STACK
> > +static inline int write_user_shstk_64(u64 __user *addr, u64 val)
> > +{
> > +	asm_volatile_goto("1: wrussq %[val], (%[addr])\n"
> > +			  _ASM_EXTABLE(1b, %l[fail])
> > +			  :: [addr] "r" (addr), [val] "r" (val)
> > +			  :: fail);
> > +	return 0;
> > +fail:
> > +	return -EFAULT;
>
> Nice!
>
> > +}
> > +#endif /* CONFIG_X86_USER_SHADOW_STACK */
> > +
> >  #define nop() asm volatile ("nop")
> >
> >  static inline void serialize(void)
>
> ...
>
> > +static int put_shstk_data(u64 __user *addr, u64 data)
> > +{
> > +	if (WARN_ON_ONCE(data & BIT(63)))
>
> Dunno, maybe something like:
>
> 	/*
> 	 * A comment explaining what that is...
> 	 */
> 	#define SHSTK_SIGRETURN_TOKEN	BIT_ULL(63)
>
> or so?
>
> And use that instead of that magical bit 63.

Seems very reasonable. Since we are calling this the "data format", I
might go with SHSTK_DATA_BIT.
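(Editor's aside, not from the thread: a sketch of what the rename discussed above might look like, using the SHSTK_DATA_BIT name floated in the reply. This is an illustration only, not necessarily what was ultimately merged.)

/*
 * Illustration of the suggestion above: name the magic bit and use it in
 * put_shstk_data() instead of open-coding BIT(63).
 *
 * The shadow stack "data" format: bit 63 marks a value the kernel wrote
 * (e.g. for sigreturn), so it can never be confused with a userspace
 * return address or a valid restore token.
 */
#define SHSTK_DATA_BIT	BIT_ULL(63)

static int put_shstk_data(u64 __user *addr, u64 data)
{
	if (WARN_ON_ONCE(data & SHSTK_DATA_BIT))
		return -EINVAL;

	/*
	 * Mark the high bit so that the sigframe can't be processed as a
	 * return address.
	 */
	if (write_user_shstk_64(addr, data | SHSTK_DATA_BIT))
		return -EFAULT;
	return 0;
}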
On Thu, Mar 09, 2023 at 05:03:26PM +0000, Edgecombe, Rick P wrote:
> On Thu, 2023-03-09 at 17:48 +0100, Borislav Petkov wrote:
> > On Mon, Feb 27, 2023 at 02:29:47PM -0800, Rick Edgecombe wrote:
> > > From: Yu-cheng Yu <yu-cheng.yu@intel.com>
> > >
> > > Shadow stacks are normally written to via CALL/RET or specific CET
> >
> >                                          ^
> > indirectly.
>
> Dunno here, RSTORSSP/SAVEPREVSSP are kind of direct.
>
> > > instructions like RSTORSSP/SAVEPREVSSP. However during some Linux
> > > operations the kernel will need to write to directly using the
                                                  ^^^^^^^^^

Yes, I was trying to make the contrast more obvious because you say
"directly" here.

But not too important.
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index de48d1389936..d6cd9344f6c7 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -202,6 +202,19 @@ static inline void clwb(volatile void *__p)
 		: [pax] "a" (p));
 }
 
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+static inline int write_user_shstk_64(u64 __user *addr, u64 val)
+{
+	asm_volatile_goto("1: wrussq %[val], (%[addr])\n"
+			  _ASM_EXTABLE(1b, %l[fail])
+			  :: [addr] "r" (addr), [val] "r" (val)
+			  :: fail);
+	return 0;
+fail:
+	return -EFAULT;
+}
+#endif /* CONFIG_X86_USER_SHADOW_STACK */
+
 #define nop() asm volatile ("nop")
 
 static inline void serialize(void)
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 1d30295e0066..13c02747386f 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -25,6 +25,8 @@
 #include <asm/fpu/api.h>
 #include <asm/prctl.h>
 
+#define SS_FRAME_SIZE 8
+
 static bool features_enabled(unsigned long features)
 {
 	return current->thread.features & features;
@@ -40,6 +42,35 @@ static void features_clr(unsigned long features)
 	current->thread.features &= ~features;
 }
 
+/*
+ * Create a restore token on the shadow stack. A token is always 8-byte
+ * and aligned to 8.
+ */
+static int create_rstor_token(unsigned long ssp, unsigned long *token_addr)
+{
+	unsigned long addr;
+
+	/* Token must be aligned */
+	if (!IS_ALIGNED(ssp, 8))
+		return -EINVAL;
+
+	addr = ssp - SS_FRAME_SIZE;
+
+	/*
+	 * SSP is aligned, so reserved bits and mode bit are a zero, just mark
+	 * the token 64-bit.
+	 */
+	ssp |= BIT(0);
+
+	if (write_user_shstk_64((u64 __user *)addr, (u64)ssp))
+		return -EFAULT;
+
+	if (token_addr)
+		*token_addr = addr;
+
+	return 0;
+}
+
 static unsigned long alloc_shstk(unsigned long size)
 {
 	int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_ABOVE4G;
@@ -159,6 +190,48 @@ int shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
 	return 0;
 }
 
+static unsigned long get_user_shstk_addr(void)
+{
+	unsigned long long ssp;
+
+	fpregs_lock_and_load();
+
+	rdmsrl(MSR_IA32_PL3_SSP, ssp);
+
+	fpregs_unlock();
+
+	return ssp;
+}
+
+static int put_shstk_data(u64 __user *addr, u64 data)
+{
+	if (WARN_ON_ONCE(data & BIT(63)))
+		return -EINVAL;
+
+	/*
+	 * Mark the high bit so that the sigframe can't be processed as a
+	 * return address.
+	 */
+	if (write_user_shstk_64(addr, data | BIT(63)))
+		return -EFAULT;
+	return 0;
+}
+
+static int get_shstk_data(unsigned long *data, unsigned long __user *addr)
+{
+	unsigned long ldata;
+
+	if (unlikely(get_user(ldata, addr)))
+		return -EFAULT;
+
+	if (!(ldata & BIT(63)))
+		return -EINVAL;
+
+	*data = ldata & ~BIT(63);
+
+	return 0;
+}
+
 void shstk_free(struct task_struct *tsk)
 {
 	struct thread_shstk *shstk = &tsk->thread.shstk;
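(Editor's aside, not part of this patch: a rough, hypothetical sketch of how the helpers above could combine once the later signal patches land -- a restore token plus a bit-63-marked data word pushed at signal delivery. The function name and exact frame layout are invented for illustration only.)

/* Hypothetical example, not from this series. */
static int example_push_shstk_sigframe(unsigned long *new_ssp, u64 cookie)
{
	unsigned long ssp = get_user_shstk_addr();
	unsigned long token_addr;
	int err;

	/* Leave a restore token just below the current SSP. */
	err = create_rstor_token(ssp, &token_addr);
	if (err)
		return err;

	/* Below the token, store kernel-only data; put_shstk_data() sets bit 63. */
	err = put_shstk_data((u64 __user *)(token_addr - SS_FRAME_SIZE), cookie);
	if (err)
		return err;

	/* The task's SSP would then point at the new top of the shadow stack. */
	*new_ssp = token_addr - SS_FRAME_SIZE;
	return 0;
}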