Message ID | 20230613001108.3040476-17-rick.p.edgecombe@intel.com |
---|---|
State | New |
Headers | From: Rick Edgecombe <rick.p.edgecombe@intel.com>; Subject: [PATCH v9 16/42] mm: Add guard pages around a shadow stack; Date: Mon, 12 Jun 2023 17:10:42 -0700 |
Series | Shadow stacks for userspace |
Commit Message
Edgecombe, Rick P
June 13, 2023, 12:10 a.m. UTC
The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which require some core mm changes to function
properly.

The architecture of shadow stack constrains the ability of userspace to
move the shadow stack pointer (SSP) in order to prevent corrupting or
switching to other shadow stacks. The RSTORSSP instruction can move the
SSP to different shadow stacks, but it requires a specially placed token
in order to do this. However, the architecture does not prevent
incrementing the stack pointer to wander onto an adjacent shadow stack.
To prevent this in software, enforce guard pages at the beginning of
shadow stack VMAs, such that there will always be a gap between adjacent
shadow stacks. Make the gap big enough so that no userspace SSP-changing
operation (besides RSTORSSP) can move the SSP from one stack to the next.

The SSP can be incremented or decremented by CALL, RET and INCSSP. CALL
and RET can move the SSP by a maximum of 8 bytes, at which point the
shadow stack would be accessed.

The INCSSP instruction can also increment the shadow stack pointer. It
is the shadow stack analog of an instruction like:

	addq $0x80, %rsp

However, there is one important difference between an ADD on %rsp and
INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
of the first and last elements that were "popped". It can be thought of
as acting like this:

	READ_ONCE(ssp);       // read+discard top element on stack
	ssp += nr_to_pop * 8; // move the shadow stack
	READ_ONCE(ssp-8);     // read+discard last popped stack element

The maximum distance INCSSP can move the SSP is 2040 bytes, before it
would read the memory. Therefore, a single page gap will be enough to
prevent any operation from shifting the SSP to an adjacent stack, since
it would have to land in the gap at least once, causing a fault.

This could be accomplished by using VM_GROWSDOWN, but this has a
downside. The behavior would allow shadow stacks to grow, which is
unneeded and adds a strange difference to how most regular stacks work.

In the maple tree code, there is some logic for retrying the unmapped
area search if a guard gap is violated. This retry should happen for
shadow stack guard gap violations as well. This logic currently only
checks VM_GROWSDOWN for start gaps. Since shadow stacks have a start gap
as well, create a new define, VM_STARTGAP_FLAGS, to hold all the VM flag
bits that have start gaps, and make mmap use it.

Co-developed-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: John Allen <john.allen@amd.com>
Tested-by: Kees Cook <keescook@chromium.org>
---
v9:
 - Add logic needed to still have guard gaps with maple tree.
---
 include/linux/mm.h | 54 ++++++++++++++++++++++++++++++++++++++++------
 mm/mmap.c          |  4 ++--
 2 files changed, 50 insertions(+), 8 deletions(-)
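As a sanity check on the sizing argument, the arithmetic can be worked through directly: INCSSPQ encodes an 8-bit pop count, so one execution moves the SSP by at most 255 * 8 = 2040 bytes, and the trailing read of the last popped element means the move cannot silently skip over unmapped memory. A minimal userspace sketch of the bound (the constants below restate the reasoning in the commit message; this is not part of the patch):

	#include <assert.h>
	#include <stdio.h>

	#define PAGE_SIZE  4096UL
	#define SS_ENTRY   8UL   /* x86-64 shadow stack entries are 8 bytes */
	#define POP_MAX    255UL /* INCSSPQ takes an 8-bit pop count */

	int main(void)
	{
		unsigned long max_move = POP_MAX * SS_ENTRY; /* 2040 bytes */

		/*
		 * A PAGE_SIZE guard gap is wider than the largest single
		 * INCSSP move, so crossing it forces at least one access
		 * (the read of the last popped element) to land in the
		 * unmapped gap and fault.
		 */
		assert(max_move < PAGE_SIZE);
		printf("max single INCSSP move: %lu bytes\n", max_move);
		return 0;
	}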
Comments
On Mon, Jun 12, 2023 at 05:10:42PM -0700, Rick Edgecombe wrote:
> The x86 Control-flow Enforcement Technology (CET) feature includes a new
> type of memory called shadow stack. This shadow stack memory has some
> unusual properties, which requires some core mm changes to function
> properly.

Reviewed-by: Mark Brown <broonie@kernel.org>
On Mon, Jun 12, 2023 at 05:10:42PM -0700, Rick Edgecombe wrote:
> +++ b/include/linux/mm.h
> @@ -342,7 +342,36 @@ extern unsigned int kobjsize(const void *objp);
>  #endif /* CONFIG_ARCH_HAS_PKEYS */
>
>  #ifdef CONFIG_X86_USER_SHADOW_STACK
> -# define VM_SHADOW_STACK	VM_HIGH_ARCH_5 /* Should not be set with VM_SHARED */
> +/*
> + * This flag should not be set with VM_SHARED because of lack of support
> + * core mm. It will also get a guard page. This helps userspace protect
> + * itself from attacks. The reasoning is as follows:
> + *
> + * The shadow stack pointer(SSP) is moved by CALL, RET, and INCSSPQ. The
> + * INCSSP instruction can increment the shadow stack pointer. It is the
> + * shadow stack analog of an instruction like:
> + *
> + * addq $0x80, %rsp
> + *
> + * However, there is one important difference between an ADD on %rsp
> + * and INCSSP. In addition to modifying SSP, INCSSP also reads from the
> + * memory of the first and last elements that were "popped". It can be
> + * thought of as acting like this:
> + *
> + * READ_ONCE(ssp);       // read+discard top element on stack
> + * ssp += nr_to_pop * 8; // move the shadow stack
> + * READ_ONCE(ssp-8);     // read+discard last popped stack element
> + *
> + * The maximum distance INCSSP can move the SSP is 2040 bytes, before
> + * it would read the memory. Therefore a single page gap will be enough
> + * to prevent any operation from shifting the SSP to an adjacent stack,
> + * since it would have to land in the gap at least once, causing a
> + * fault.
> + *
> + * Prevent using INCSSP to move the SSP between shadow stacks by
> + * having a PAGE_SIZE guard gap.
> + */
> +# define VM_SHADOW_STACK	VM_HIGH_ARCH_5
>  #else
>  # define VM_SHADOW_STACK	VM_NONE
>  #endif

This is a lot of very x86-specific language in a generic header file.
I'm sure there's a better place for all this text.
On Thu, 2023-06-22 at 19:21 +0100, Matthew Wilcox wrote:
> On Mon, Jun 12, 2023 at 05:10:42PM -0700, Rick Edgecombe wrote:
> > +++ b/include/linux/mm.h
> > @@ -342,7 +342,36 @@ extern unsigned int kobjsize(const void *objp);
> >  #endif /* CONFIG_ARCH_HAS_PKEYS */
> >
> >  #ifdef CONFIG_X86_USER_SHADOW_STACK
> > -# define VM_SHADOW_STACK	VM_HIGH_ARCH_5 /* Should not be set with VM_SHARED */
> > +/*
> > + * This flag should not be set with VM_SHARED because of lack of support
> > + * core mm. It will also get a guard page. This helps userspace protect
> > + * itself from attacks. The reasoning is as follows:
[...]
> > +# define VM_SHADOW_STACK	VM_HIGH_ARCH_5
> >  #else
> >  # define VM_SHADOW_STACK	VM_NONE
> >  #endif
>
> This is a lot of very x86-specific language in a generic header file.
> I'm sure there's a better place for all this text.

Yes, I couldn't find another place for it. This was the reasoning:
https://lore.kernel.org/lkml/07deaffc10b1b68721bbbce370e145d8fec2a494.camel@intel.com/

Did you have any particular place in mind?
On Thu, Jun 22, 2023 at 06:27:40PM +0000, Edgecombe, Rick P wrote:
> On Thu, 2023-06-22 at 19:21 +0100, Matthew Wilcox wrote:
> > On Mon, Jun 12, 2023 at 05:10:42PM -0700, Rick Edgecombe wrote:
> > > +++ b/include/linux/mm.h
> > > @@ -342,7 +342,36 @@ extern unsigned int kobjsize(const void *objp);
[...]
> > This is a lot of very x86-specific language in a generic header file.
> > I'm sure there's a better place for all this text.
>
> Yes, I couldn't find another place for it. This was the reasoning:
> https://lore.kernel.org/lkml/07deaffc10b1b68721bbbce370e145d8fec2a494.camel@intel.com/
>
> Did you have any particular place in mind?

Since it's near CONFIG_X86_USER_SHADOW_STACK the comment in mm.h could be

	/*
	 * VMA is used for shadow stack and implies guard pages.
	 * See arch/x86/kernel/shstk.c for details
	 */

and the long reasoning comment can be moved near alloc_shstk in
arch/x86/kernel/shstk.c.
On Fri, Jun 23, 2023 at 10:40:00AM +0300, Mike Rapoport wrote:
> On Thu, Jun 22, 2023 at 06:27:40PM +0000, Edgecombe, Rick P wrote:
> > Yes, I couldn't find another place for it. This was the reasoning:
> > https://lore.kernel.org/lkml/07deaffc10b1b68721bbbce370e145d8fec2a494.camel@intel.com/
> > Did you have any particular place in mind?
>
> Since it's near CONFIG_X86_USER_SHADOW_STACK the comment in mm.h could be
>
> 	/*
> 	 * VMA is used for shadow stack and implies guard pages.
> 	 * See arch/x86/kernel/shstk.c for details
> 	 */
>
> and the long reasoning comment can be moved near alloc_shstk in
> arch/x86/kernel/shstk.c.

This isn't an x86 specific concept; arm64 has a very similar extension
called Guarded Control Stack (which I should be publishing changes for
in the not too distant future), and riscv also has something. For arm64
I'm using the generic mm changes wholesale; we have a similar need for
guard pages around the GCS, and while the mechanics of accessing are
different, the requirement ends up being the same. Perhaps we could just
rewrite the comment to say that guard pages prevent over/underflow of
the stack by userspace and that a single page is sufficient for all
current architectures, with the details of the working for x86 put in
some x86 specific place?
On Fri, 2023-06-23 at 13:17 +0100, Mark Brown wrote:
> On Fri, Jun 23, 2023 at 10:40:00AM +0300, Mike Rapoport wrote:
> > On Thu, Jun 22, 2023 at 06:27:40PM +0000, Edgecombe, Rick P wrote:
> > > Yes, I couldn't find another place for it. This was the reasoning:
> > > https://lore.kernel.org/lkml/07deaffc10b1b68721bbbce370e145d8fec2a494.camel@intel.com/
> > > Did you have any particular place in mind?
> >
> > Since it's near CONFIG_X86_USER_SHADOW_STACK the comment in mm.h could be
> >
> > 	/*
> > 	 * VMA is used for shadow stack and implies guard pages.
> > 	 * See arch/x86/kernel/shstk.c for details
> > 	 */
> >
> > and the long reasoning comment can be moved near alloc_shstk in
> > arch/x86/kernel/shstk.c.

Makes sense. Not sure why I didn't think of this earlier.

> This isn't an x86 specific concept; arm64 has a very similar extension
> called Guarded Control Stack (which I should be publishing changes for
> in the not too distant future), and riscv also has something. For arm64
> I'm using the generic mm changes wholesale; we have a similar need for
> guard pages around the GCS, and while the mechanics of accessing are
> different, the requirement ends up being the same. Perhaps we could just
> rewrite the comment to say that guard pages prevent over/underflow of
> the stack by userspace and that a single page is sufficient for all
> current architectures, with the details of the working for x86 put in
> some x86 specific place?

Something sort of similar came up in regards to the riscv series, about
adding something like an is_shadow_stack_vma() helper (see the sketch
after this message). The plan was to not make too many assumptions about
the final details of the other shadow stack features and leave that for
refactoring. I think some kind of generic comment like you suggest makes
sense, but I don't want to try to assert any arch specifics for features
that are not upstream. It should be very easy to tweak the comment when
the time comes.

The points about x86 details not belonging in non-arch headers and
having some arch generic explanation in the file are well taken though.
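For context, the helper mentioned above was only under discussion at this point; no such function is part of this series, and the name is just what was floated in the riscv thread. A minimal sketch of what it might look like:

	/* Hypothetical helper floated in the riscv discussion; not in this patch. */
	static inline bool is_shadow_stack_vma(struct vm_area_struct *vma)
	{
		return !!(vma->vm_flags & VM_SHADOW_STACK);
	}

With something like this, code such as stack_guard_start_gap() in the diff below could avoid open-coding the flag test, at the cost of assuming every architecture marks shadow stacks with a single VM flag.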
On Sun, Jun 25, 2023 at 04:44:32PM +0000, Edgecombe, Rick P wrote:
> On Fri, 2023-06-23 at 13:17 +0100, Mark Brown wrote:
> > This isn't an x86 specific concept; arm64 has a very similar extension
> > called Guarded Control Stack (which I should be publishing changes for
> > in the not too distant future), and riscv also has something.
[...]
> Something sort of similar came up in regards to the riscv series, about
> adding something like an is_shadow_stack_vma() helper. The plan was to
> not make too many assumptions about the final details of the other
> shadow stack features and leave that for refactoring. I think some kind
> of generic comment like you suggest makes sense, but I don't want to
> try to assert any arch specifics for features that are not upstream. It
> should be very easy to tweak the comment when the time comes.
>
> The points about x86 details not belonging in non-arch headers and
> having some arch generic explanation in the file are well taken though.

I think a statement to the effect that "this works for currently
supported architectures" is fine; if something comes along with
additional requirements then the comment can be adjusted as part of
merging the new thing.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index fb17cbd531ac..535c58d3b2e4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -342,7 +342,36 @@ extern unsigned int kobjsize(const void *objp);
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 
 #ifdef CONFIG_X86_USER_SHADOW_STACK
-# define VM_SHADOW_STACK	VM_HIGH_ARCH_5 /* Should not be set with VM_SHARED */
+/*
+ * This flag should not be set with VM_SHARED because of lack of support
+ * core mm. It will also get a guard page. This helps userspace protect
+ * itself from attacks. The reasoning is as follows:
+ *
+ * The shadow stack pointer(SSP) is moved by CALL, RET, and INCSSPQ. The
+ * INCSSP instruction can increment the shadow stack pointer. It is the
+ * shadow stack analog of an instruction like:
+ *
+ * addq $0x80, %rsp
+ *
+ * However, there is one important difference between an ADD on %rsp
+ * and INCSSP. In addition to modifying SSP, INCSSP also reads from the
+ * memory of the first and last elements that were "popped". It can be
+ * thought of as acting like this:
+ *
+ * READ_ONCE(ssp);       // read+discard top element on stack
+ * ssp += nr_to_pop * 8; // move the shadow stack
+ * READ_ONCE(ssp-8);     // read+discard last popped stack element
+ *
+ * The maximum distance INCSSP can move the SSP is 2040 bytes, before
+ * it would read the memory. Therefore a single page gap will be enough
+ * to prevent any operation from shifting the SSP to an adjacent stack,
+ * since it would have to land in the gap at least once, causing a
+ * fault.
+ *
+ * Prevent using INCSSP to move the SSP between shadow stacks by
+ * having a PAGE_SIZE guard gap.
+ */
+# define VM_SHADOW_STACK	VM_HIGH_ARCH_5
 #else
 # define VM_SHADOW_STACK	VM_NONE
 #endif
@@ -405,6 +434,8 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
 #endif
 
+#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
+
 #ifdef CONFIG_STACK_GROWSUP
 #define VM_STACK	VM_GROWSUP
 #else
@@ -3235,15 +3266,26 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return mtree_load(&mm->mm_mt, addr);
 }
 
+static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+
+	/* See reasoning around the VM_SHADOW_STACK definition */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
+	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 	return vm_start;
 }
 
diff --git a/mm/mmap.c b/mm/mmap.c
index afdf5f78432b..d4793600a8d4 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1570,7 +1570,7 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 		gap = mas.index;
 		gap += (info->align_offset - gap) & info->align_mask;
 		tmp = mas_next(&mas, ULONG_MAX);
-		if (tmp && (tmp->vm_flags & VM_GROWSDOWN)) { /* Avoid prev check if possible */
+		if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
 			if (vm_start_gap(tmp) < gap + length - 1) {
 				low_limit = tmp->vm_end;
 				mas_reset(&mas);
@@ -1622,7 +1622,7 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
 		gap -= (gap - info->align_offset) & info->align_mask;
 		gap_end = mas.last;
 		tmp = mas_next(&mas, ULONG_MAX);
-		if (tmp && (tmp->vm_flags & VM_GROWSDOWN)) { /* Avoid prev check if possible */
+		if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
 			if (vm_start_gap(tmp) <= gap_end) {
 				high_limit = vm_start_gap(tmp);
 				mas_reset(&mas);
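To see the effect of the mm.h changes in isolation, here is a small userspace model of the reworked vm_start_gap() path (simplified types; the flag bit values and address are made up for illustration and are not the kernel's):

	#include <stdio.h>

	#define PAGE_SIZE        4096UL
	#define VM_GROWSDOWN     (1UL << 0) /* stand-in flag bits for the model */
	#define VM_SHADOW_STACK  (1UL << 1)

	static unsigned long stack_guard_gap = 256UL * PAGE_SIZE;

	struct vma { unsigned long vm_start; unsigned long vm_flags; };

	static unsigned long stack_guard_start_gap(struct vma *vma)
	{
		if (vma->vm_flags & VM_GROWSDOWN)
			return stack_guard_gap;
		/* Shadow stacks need only a single guard page (see above). */
		if (vma->vm_flags & VM_SHADOW_STACK)
			return PAGE_SIZE;
		return 0;
	}

	static unsigned long vm_start_gap(struct vma *vma)
	{
		unsigned long gap = stack_guard_start_gap(vma);
		unsigned long vm_start = vma->vm_start;

		vm_start -= gap;
		if (vm_start > vma->vm_start) /* unsigned underflow: clamp to 0 */
			vm_start = 0;
		return vm_start;
	}

	int main(void)
	{
		struct vma shstk = { 0x7f0000001000UL, VM_SHADOW_STACK };

		/* The effective start is one guard page below vm_start. */
		printf("vm_start %#lx -> gap-adjusted start %#lx\n",
		       shstk.vm_start, vm_start_gap(&shstk));
		return 0;
	}

Note the unconditional subtract-and-clamp: by pulling the flag checks into stack_guard_start_gap(), vm_start_gap() handles the VM_GROWSDOWN case, the shadow stack case, and VMAs with no start gap at all (where the gap is 0) with a single code path.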