Message ID | 20230227222957.24501-22-rick.p.edgecombe@intel.com
---|---
State | New
Headers |
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: x86@kernel.org, "H . Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann <arnd@arndb.de>, Andy Lutomirski <luto@kernel.org>, Balbir Singh <bsingharora@gmail.com>, Borislav Petkov <bp@alien8.de>, Cyrill Gorcunov <gorcunov@gmail.com>, Dave Hansen <dave.hansen@linux.intel.com>, Eugene Syromiatnikov <esyr@redhat.com>, Florian Weimer <fweimer@redhat.com>, "H . J . Lu" <hjl.tools@gmail.com>, Jann Horn <jannh@google.com>, Jonathan Corbet <corbet@lwn.net>, Kees Cook <keescook@chromium.org>, Mike Kravetz <mike.kravetz@oracle.com>, Nadav Amit <nadav.amit@gmail.com>, Oleg Nesterov <oleg@redhat.com>, Pavel Machek <pavel@ucw.cz>, Peter Zijlstra <peterz@infradead.org>, Randy Dunlap <rdunlap@infradead.org>, Weijiang Yang <weijiang.yang@intel.com>, "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>, John Allen <john.allen@amd.com>, kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu <yu-cheng.yu@intel.com>
Subject: [PATCH v7 21/41] mm: Add guard pages around a shadow stack
Date: Mon, 27 Feb 2023 14:29:37 -0800
Message-Id: <20230227222957.24501-22-rick.p.edgecombe@intel.com>
In-Reply-To: <20230227222957.24501-1-rick.p.edgecombe@intel.com>
References: <20230227222957.24501-1-rick.p.edgecombe@intel.com>
Series | Shadow stacks for userspace
Commit Message
Edgecombe, Rick P
Feb. 27, 2023, 10:29 p.m. UTC
From: Yu-cheng Yu <yu-cheng.yu@intel.com>

The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which requires some core mm changes to function
properly.

The architecture of shadow stack constrains the ability of userspace to
move the shadow stack pointer (SSP) in order to prevent corrupting or
switching to other shadow stacks. The RSTORSSP instruction can move the
SSP to different shadow stacks, but it requires a specially placed token
in order to do this. However, the architecture does not prevent
incrementing the stack pointer to wander onto an adjacent shadow stack.
To prevent this in software, enforce guard pages at the beginning of
shadow stack VMAs, such that there will always be a gap between adjacent
shadow stacks.

Make the gap big enough so that no userspace SSP changing operations
(besides RSTORSSP) can move the SSP from one stack to the next. The
SSP can be incremented or decremented by CALL, RET and INCSSP. CALL and
RET can move the SSP by a maximum of 8 bytes, at which point the shadow
stack would be accessed.

The INCSSP instruction can also increment the shadow stack pointer. It
is the shadow stack analog of an instruction like:

	addq $0x80, %rsp

However, there is one important difference between an ADD on %rsp and
INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
of the first and last elements that were "popped". It can be thought of
as acting like this:

	READ_ONCE(ssp);       // read+discard top element on stack
	ssp += nr_to_pop * 8; // move the shadow stack
	READ_ONCE(ssp-8);     // read+discard last popped stack element

The maximum distance INCSSP can move the SSP is 2040 bytes, before it
would read the memory. Therefore, a single page gap will be enough to
prevent any operation from shifting the SSP to an adjacent stack, since
it would have to land in the gap at least once, causing a fault.

This could be accomplished by using VM_GROWSDOWN, but this has a
downside. The behavior would allow shadow stacks to grow, which is
unneeded and adds a strange difference to how most regular stacks work.

Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: John Allen <john.allen@amd.com>
Tested-by: Kees Cook <keescook@chromium.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Kees Cook <keescook@chromium.org>

---
v5:
 - Fix typo in commit log

v4:
 - Drop references to 32 bit instructions
 - Switch to generic code to drop __weak (Peterz)

v2:
 - Use __weak instead of #ifdef (Dave Hansen)
 - Only have start gap on shadow stack (Andy Luto)
 - Create stack_guard_start_gap() to not duplicate code
   in an arch version of vm_start_gap() (Dave Hansen)
 - Improve commit log partly with verbiage from (Dave Hansen)

Yu-cheng v25:
 - Move SHADOW_STACK_GUARD_GAP to arch/x86/mm/mmap.c.
---
 include/linux/mm.h | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)
Comments
Just typos:

On Mon, Feb 27, 2023 at 02:29:37PM -0800, Rick Edgecombe wrote:
> From: Yu-cheng Yu <yu-cheng.yu@intel.com>
>
> The x86 Control-flow Enforcement Technology (CET) feature includes a new
> type of memory called shadow stack. This shadow stack memory has some
> unusual properties, which requires some core mm changes to function
> properly.
>
> The architecture of shadow stack constrains the ability of userspace to
> move the shadow stack pointer (SSP) in order to prevent corrupting or
> switching to other shadow stacks. The RSTORSSP can move the ssp to
                                                 ^
                                                 instruction

s/ssp/SSP/g

> different shadow stacks, but it requires a specially placed token in order
> to do this. However, the architecture does not prevent incrementing the
> stack pointer to wander onto an adjacent shadow stack. To prevent this in
> software, enforce guard pages at the beginning of shadow stack vmas, such
                                                                   VMAs
> that there will always be a gap between adjacent shadow stacks.
>
> Make the gap big enough so that no userspace SSP changing operations
> (besides RSTORSSP), can move the SSP from one stack to the next. The
> SSP can increment or decrement by CALL, RET and INCSSP. CALL and RET

"can be incremented or decremented"

> can move the SSP by a maximum of 8 bytes, at which point the shadow
> stack would be accessed.
>
> The INCSSP instruction can also increment the shadow stack pointer. It
> is the shadow stack analog of an instruction like:
>
>	addq $0x80, %rsp
>
> However, there is one important difference between an ADD on %rsp and
> INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
> of the first and last elements that were "popped". It can be thought of
> as acting like this:
>
>	READ_ONCE(ssp); // read+discard top element on stack
>	ssp += nr_to_pop * 8; // move the shadow stack
>	READ_ONCE(ssp-8); // read+discard last popped stack element
>
> The maximum distance INCSSP can move the SSP is 2040 bytes, before it
> would read the memory. Therefore a single page gap will be enough to
                                 ^
                                 ,
> prevent any operation from shifting the SSP to an adjacent stack, since
> it would have to land in the gap at least once, causing a fault.
>
> This could be accomplished by using VM_GROWSDOWN, but this has a
> downside. The behavior would allow shadow stack's to grow, which is

s/stack's/stacks/

> unneeded and adds a strange difference to how most regular stacks work.
>
> Tested-by: Pengfei Xu <pengfei.xu@intel.com>
> Tested-by: John Allen <john.allen@amd.com>
> Tested-by: Kees Cook <keescook@chromium.org>
> Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
> Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Cc: Kees Cook <keescook@chromium.org>
>
> ---
> v5:
> - Fix typo in commit log
>
> v4:
> - Drop references to 32 bit instructions
> - Switch to generic code to drop __weak (Peterz)
>
> v2:
> - Use __weak instead of #ifdef (Dave Hansen)
> - Only have start gap on shadow stack (Andy Luto)
> - Create stack_guard_start_gap() to not duplicate code
>   in an arch version of vm_start_gap() (Dave Hansen)
> - Improve commit log partly with verbiage from (Dave Hansen)
>
> Yu-cheng v25:
> - Move SHADOW_STACK_GUARD_GAP to arch/x86/mm/mmap.c.
> ---
> include/linux/mm.h | 31 ++++++++++++++++++++++++++-----
> 1 file changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 097544afb1aa..6a093daced88 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3107,15 +3107,36 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
>  	return mtree_load(&mm->mm_mt, addr);
>  }
>
> +static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & VM_GROWSDOWN)
> +		return stack_guard_gap;
> +
> +	/*
> +	 * Shadow stack pointer is moved by CALL, RET, and INCSSPQ.
> +	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
> +	 * and touches the first and the last element in the range, which
> +	 * triggers a page fault if the range is not in a shadow stack.
> +	 * Because of this, creating 4-KB guard pages around a shadow
> +	 * stack prevents these instructions from going beyond.

I'd prefer the equivalent explanation above from the commit message -
it is more precise.

> +	 *
> +	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
> +	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
> +	 */
> +	if (vma->vm_flags & VM_SHADOW_STACK)
> +		return PAGE_SIZE;
> +
> +	return 0;
> +}
On Mon, 2023-03-06 at 09:08 +0100, Borislav Petkov wrote:
> Just typos:
All seem reasonable to me. Thanks.
For using the log verbiage for the comment, it is quite big. Does
something like this seem reasonable?
/*
* The shadow stack pointer (SSP) is moved by CALL, RET, and INCSSPQ.
* The INCSSP instruction can increment the shadow stack pointer. It
* is the shadow stack analog of an instruction like:
*
* addq $0x80, %rsp
*
* However, there is one important difference between an ADD on %rsp
* and INCSSP. In addition to modifying SSP, INCSSP also reads from the
* memory of the first and last elements that were "popped". It can be
* thought of as acting like this:
*
* READ_ONCE(ssp); // read+discard top element on stack
* ssp += nr_to_pop * 8; // move the shadow stack
* READ_ONCE(ssp-8); // read+discard last popped stack element
*
* The maximum distance INCSSP can move the SSP is 2040 bytes, before
* it would read the memory. Therefore a single page gap will be enough
* to prevent any operation from shifting the SSP to an adjacent stack,
* since it would have to land in the gap at least once, causing a
* fault.
*
* Prevent using INCSSP to move the SSP between shadow stacks by
* having a PAGE_SIZE guard gap.
*/
On Tue, Mar 07, 2023 at 01:29:50AM +0000, Edgecombe, Rick P wrote:
> On Mon, 2023-03-06 at 09:08 +0100, Borislav Petkov wrote:
> > Just typos:
>
> All seem reasonable to me. Thanks.
>
> For using the log verbiage for the comment, it is quite big. Does
> something like this seem reasonable?

Yeah, it does. I wouldn't want to lose that explanation in a commit
message.

However, this special aspect pertains to the shstk implementation in x86
but the code is generic mm and such arch-specific comments are kinda
unfitting there.

I wonder if it would be better if you could stick that explanation
somewhere in arch/x86/ and only refer to it in a short comment above
the VM_SHADOW_STACK check in stack_guard_start_gap()...

Thx.
On 07.03.23 11:32, Borislav Petkov wrote:
> On Tue, Mar 07, 2023 at 01:29:50AM +0000, Edgecombe, Rick P wrote:
>> On Mon, 2023-03-06 at 09:08 +0100, Borislav Petkov wrote:
>>> Just typos:
>>
>> All seem reasonable to me. Thanks.
>>
>> For using the log verbiage for the comment, it is quite big. Does
>> something like this seem reasonable?
>
> Yeah, it does. I wouldn't want to lose that explanation in a commit
> message.
>
> However, this special aspect pertains to the shstk implementation in x86
> but the code is generic mm and such arch-specific comments are kinda
> unfitting there.
>
> I wonder if it would be better if you could stick that explanation
> somewhere in arch/x86/ and only refer to it in a short comment above
> VM_SHADOW_STACK check in stack_guard_start_gap()...

+1
On Tue, 2023-03-07 at 11:44 +0100, David Hildenbrand wrote:
> On 07.03.23 11:32, Borislav Petkov wrote:
> > On Tue, Mar 07, 2023 at 01:29:50AM +0000, Edgecombe, Rick P wrote:
> > > On Mon, 2023-03-06 at 09:08 +0100, Borislav Petkov wrote:
> > > > Just typos:
> > >
> > > All seem reasonable to me. Thanks.
> > >
> > > For using the log verbiage for the comment, it is quite big. Does
> > > something like this seem reasonable?
> >
> > Yeah, it does. I wouldn't want to lose that explanation in a commit
> > message.
> >
> > However, this special aspect pertains to the shstk implementation
> > in x86 but the code is generic mm and such arch-specific comments
> > are kinda unfitting there.
> >
> > I wonder if it would be better if you could stick that explanation
> > somewhere in arch/x86/ and only refer to it in a short comment
> > above VM_SHADOW_STACK check in stack_guard_start_gap()...
>
> +1

I can't find a good place for it in the arch code. Basically there is
no arch/x86 functionality that has to do with guard pages. The closest
is pte_mkwrite() because it at least references VM_SHADOW_STACK, but it
doesn't really fit. We could add an arch version of
stack_guard_start_gap(), but we had that and removed it for other style
reasons. Code duplication IIRC. So I thought to just move it elsewhere
in mm.h where VM_SHADOW_STACK is defined.
On Mon, Feb 27, 2023 at 2:31 PM Rick Edgecombe
<rick.p.edgecombe@intel.com> wrote:
>
> From: Yu-cheng Yu <yu-cheng.yu@intel.com>
>
> The x86 Control-flow Enforcement Technology (CET) feature includes a new
> type of memory called shadow stack. This shadow stack memory has some
> unusual properties, which requires some core mm changes to function
> properly.
>
> The architecture of shadow stack constrains the ability of userspace to
> move the shadow stack pointer (SSP) in order to prevent corrupting or
> switching to other shadow stacks. The RSTORSSP can move the ssp to
> different shadow stacks, but it requires a specially placed token in order
> to do this. However, the architecture does not prevent incrementing the
> stack pointer to wander onto an adjacent shadow stack. To prevent this in
> software, enforce guard pages at the beginning of shadow stack vmas, such
> that there will always be a gap between adjacent shadow stacks.
>
> Make the gap big enough so that no userspace SSP changing operations
> (besides RSTORSSP), can move the SSP from one stack to the next. The
> SSP can increment or decrement by CALL, RET and INCSSP. CALL and RET
> can move the SSP by a maximum of 8 bytes, at which point the shadow
> stack would be accessed.
>
> The INCSSP instruction can also increment the shadow stack pointer. It
> is the shadow stack analog of an instruction like:
>
>	addq $0x80, %rsp
>
> However, there is one important difference between an ADD on %rsp and
> INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
> of the first and last elements that were "popped". It can be thought of
> as acting like this:
>
>	READ_ONCE(ssp); // read+discard top element on stack
>	ssp += nr_to_pop * 8; // move the shadow stack
>	READ_ONCE(ssp-8); // read+discard last popped stack element
>
> The maximum distance INCSSP can move the SSP is 2040 bytes, before it
> would read the memory. Therefore a single page gap will be enough to
> prevent any operation from shifting the SSP to an adjacent stack, since
> it would have to land in the gap at least once, causing a fault.
>
> This could be accomplished by using VM_GROWSDOWN, but this has a
> downside. The behavior would allow shadow stack's to grow, which is
> unneeded and adds a strange difference to how most regular stacks work.
>
> Tested-by: Pengfei Xu <pengfei.xu@intel.com>
> Tested-by: John Allen <john.allen@amd.com>
> Tested-by: Kees Cook <keescook@chromium.org>
> Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
> Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Cc: Kees Cook <keescook@chromium.org>
>
> ---
> v5:
> - Fix typo in commit log
>
> v4:
> - Drop references to 32 bit instructions
> - Switch to generic code to drop __weak (Peterz)
>
> v2:
> - Use __weak instead of #ifdef (Dave Hansen)
> - Only have start gap on shadow stack (Andy Luto)
> - Create stack_guard_start_gap() to not duplicate code
>   in an arch version of vm_start_gap() (Dave Hansen)
> - Improve commit log partly with verbiage from (Dave Hansen)
>
> Yu-cheng v25:
> - Move SHADOW_STACK_GUARD_GAP to arch/x86/mm/mmap.c.
> ---
> include/linux/mm.h | 31 ++++++++++++++++++++++++++-----
> 1 file changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 097544afb1aa..6a093daced88 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3107,15 +3107,36 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
>  	return mtree_load(&mm->mm_mt, addr);
>  }
>
> +static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & VM_GROWSDOWN)
> +		return stack_guard_gap;
> +
> +	/*
> +	 * Shadow stack pointer is moved by CALL, RET, and INCSSPQ.
> +	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
> +	 * and touches the first and the last element in the range, which
> +	 * triggers a page fault if the range is not in a shadow stack.
> +	 * Because of this, creating 4-KB guard pages around a shadow
> +	 * stack prevents these instructions from going beyond.
> +	 *
> +	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
> +	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
> +	 */
> +	if (vma->vm_flags & VM_SHADOW_STACK)
> +		return PAGE_SIZE;

This is an arch-agnostic header file. Can we remove `VM_SHADOW_STACK`
from here and instead have `arch_is_shadow_stack`, which consumes vma
flags and returns true or false? This allows different architectures to
choose their own encoding of vma flags to represent a shadow stack.

> +
> +	return 0;
> +}
> +
>  static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
>  {
> +	unsigned long gap = stack_guard_start_gap(vma);
>  	unsigned long vm_start = vma->vm_start;
>
> -	if (vma->vm_flags & VM_GROWSDOWN) {
> -		vm_start -= stack_guard_gap;
> -		if (vm_start > vma->vm_start)
> -			vm_start = 0;
> -	}
> +	vm_start -= gap;
> +	if (vm_start > vma->vm_start)
> +		vm_start = 0;
>  	return vm_start;
>  }
>
> --
> 2.17.1
>
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 097544afb1aa..6a093daced88 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3107,15 +3107,36 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return mtree_load(&mm->mm_mt, addr);
 }
 
+static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+
+	/*
+	 * Shadow stack pointer is moved by CALL, RET, and INCSSPQ.
+	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
+	 * and touches the first and the last element in the range, which
+	 * triggers a page fault if the range is not in a shadow stack.
+	 * Because of this, creating 4-KB guard pages around a shadow
+	 * stack prevents these instructions from going beyond.
+	 *
+	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
+	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
+	 */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
+	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 	return vm_start;
 }