Message ID | 20230227222957.24501-26-rick.p.edgecombe@intel.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org> Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp2682765wrd; Mon, 27 Feb 2023 14:34:43 -0800 (PST) X-Google-Smtp-Source: AK7set9wbEidCdWjKhJ2notnfeeDAIefe3Xxc0j88Ekm6BTVVSiofmXYUnMsTuDrqg0fEqGqzOMl X-Received: by 2002:a50:ef08:0:b0:4ac:b559:4730 with SMTP id m8-20020a50ef08000000b004acb5594730mr1240052eds.25.1677537283136; Mon, 27 Feb 2023 14:34:43 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1677537283; cv=none; d=google.com; s=arc-20160816; b=AdDf9x2DCA99VZkM8VISS5WmiMf/c5INZIwT8hH3VVzNXDaPTQB4wVVJkv3BIFwcQo KvcW1YuMH0VfcW4Q32SvZS9GzpIzC8v+yQ8kCYKFPMKwaABIw8GcA1WUjuM9lQ5URdKX UjYRIFHX2nMfgBgefUhXDjfIqPO2AU+aqR9CVxfyy2ie49Acte9G0jRdVftnoDu8HVla lTppuoNAHHDeUwYj0hjW6FDjRVRcIFEgOb8mYzRqzHXupOfM/zAevDJo0QZWhZ3+Jy9r c/5ctT4zB5KP3l4egDfzmNXpWSNJEekR+8OAk7iLvDN/lLsfcbQPmk5ZMaYpEMYMmVDb udzw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=veDLluZZGIZBNK/wTadMCXK8+klKEMevtWOXQT8RUT0=; b=fJpMdOKSntxqo7eujmIk4st1kBYgaAns5w5rN8JJ6l2H6Mm2KdDElO6FJHWLjKKkhq 8pGMkUtylbRx+B1e3/zU81YbXnYMaATyVH/RHIOfNgiHNXMQhvvIiPkPLQX2lxOEUay0 zBKu9ZhP7Q51KRA3g0We34Kw5OJ300eQtOSRQItnAEyMLpMYfc4fnRvaRbs7kRKZWE1A N7USTptzpCt6oSd0xYJ4nt5f8hu4eCSRTfcaxBipIo5JrHBZXdzrxGIxGU6PXKBvJ7lL 3dRQeZJVKYwambRhTJMyHF9kaSd/3u/mIyO6mrSDSjY/MnqnipqufPBHjE0gOLSa0JQ7 foXQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=XpTQP1tv; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id t26-20020a05640203da00b004acdca4e902si8807278edw.127.2023.02.27.14.34.16; Mon, 27 Feb 2023 14:34:43 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=XpTQP1tv; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230114AbjB0WdY (ORCPT <rfc822;wenzhi022@gmail.com> + 99 others); Mon, 27 Feb 2023 17:33:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230294AbjB0Wb6 (ORCPT <rfc822;linux-kernel@vger.kernel.org>); Mon, 27 Feb 2023 17:31:58 -0500 Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BF12129E26; Mon, 27 Feb 2023 14:31:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1677537114; x=1709073114; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=FzRSuSkDyk6//OI0YkurvPKbmt+RZMfvZ31TH5a40rI=; b=XpTQP1tv2cumrq8T4isXSp5lScCzSMsryarxoZQbCGFNao9Ax+N4vPaW OV9JhbLWRhQ4ek79QJ2TltihTt72Onr78+dYkZ+5LPX3fD73S4NZeZ6PY 7WZZyZGYV1vHdLx9nGZLExu3sTsrrTQqEh1EAvoC+9ZEIUP7KtwrlNt1k JqXDIsg6RfxeeiAcbPutzCAnlTe3n4nAuW95oPzz/2f92ztwzO9nUdog4 6VkzmSlPjFKn7F0DiTh1Jel+7U8qEJr3De/O+WKKQaoGWxemd+nMj4+cO POmZ50GlsYniFCEAroy+8i+AjvHCplWDlyeT/Hxgsnt/HxrslY3UlbDee g==; X-IronPort-AV: E=McAfee;i="6500,9779,10634"; a="313657611" X-IronPort-AV: E=Sophos;i="5.98,220,1673942400"; d="scan'208";a="313657611" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Feb 2023 14:31:26 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10634"; a="848024675" X-IronPort-AV: E=Sophos;i="5.98,220,1673942400"; d="scan'208";a="848024675" Received: from leonqu-mobl1.amr.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.209.72.19]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Feb 2023 14:31:25 -0800 From: Rick Edgecombe <rick.p.edgecombe@intel.com> To: x86@kernel.org, "H . Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann <arnd@arndb.de>, Andy Lutomirski <luto@kernel.org>, Balbir Singh <bsingharora@gmail.com>, Borislav Petkov <bp@alien8.de>, Cyrill Gorcunov <gorcunov@gmail.com>, Dave Hansen <dave.hansen@linux.intel.com>, Eugene Syromiatnikov <esyr@redhat.com>, Florian Weimer <fweimer@redhat.com>, "H . J . Lu" <hjl.tools@gmail.com>, Jann Horn <jannh@google.com>, Jonathan Corbet <corbet@lwn.net>, Kees Cook <keescook@chromium.org>, Mike Kravetz <mike.kravetz@oracle.com>, Nadav Amit <nadav.amit@gmail.com>, Oleg Nesterov <oleg@redhat.com>, Pavel Machek <pavel@ucw.cz>, Peter Zijlstra <peterz@infradead.org>, Randy Dunlap <rdunlap@infradead.org>, Weijiang Yang <weijiang.yang@intel.com>, "Kirill A . 
Shutemov" <kirill.shutemov@linux.intel.com>, John Allen <john.allen@amd.com>, kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v7 25/41] x86/mm: Introduce MAP_ABOVE4G Date: Mon, 27 Feb 2023 14:29:41 -0800 Message-Id: <20230227222957.24501-26-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230227222957.24501-1-rick.p.edgecombe@intel.com> References: <20230227222957.24501-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_PASS,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: <linux-kernel.vger.kernel.org> X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1759025334498405843?= X-GMAIL-MSGID: =?utf-8?q?1759025334498405843?= |
Series | Shadow stacks for userspace |
Commit Message
Edgecombe, Rick P
Feb. 27, 2023, 10:29 p.m. UTC
The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which require some core mm changes to function properly.

One of the properties is that the shadow stack pointer (SSP), which is a CPU register that points to the shadow stack like the stack pointer points to the stack, can't be pointing outside of the 32 bit address space when the CPU is executing in 32 bit mode. It is desirable to prevent executing in 32 bit mode when shadow stack is enabled because the kernel can't easily support 32 bit signals.

On x86 it is possible to transition to 32 bit mode without any special interaction with the kernel, by doing a "far call" to a 32 bit segment. So the shadow stack implementation can use this address space behavior as a feature, by enforcing that shadow stack memory is always created outside of the 32 bit address space. This way userspace will trigger a general protection fault, which will in turn trigger a segfault, if it tries to transition to 32 bit mode with shadow stack enabled.

This provides a clean error generating border for the user if they attempt to do 32 bit mode shadow stack, rather than leave the kernel in a half working state for userspace to be surprised by.

So to allow future shadow stack enabling patches to map shadow stacks out of the 32 bit address space, introduce MAP_ABOVE4G. The behavior is pretty much like MAP_32BIT, except that it has the opposite address range. There are a few differences though.

If both MAP_32BIT and MAP_ABOVE4G are provided, the kernel will use the MAP_ABOVE4G behavior. Like MAP_32BIT, MAP_ABOVE4G is ignored in a 32 bit syscall.

Since the default search behavior is top down, the normal kaslr base can be used for MAP_ABOVE4G. This is unlike MAP_32BIT, which has to add its own randomization in the bottom up case.

For MAP_32BIT, only the bottom up search path is used. For MAP_ABOVE4G both are potentially valid, so both are used. In the bottom up search path, the default behavior is already consistent with MAP_ABOVE4G since the mmap base should be above 4GB.

Without MAP_ABOVE4G, the shadow stack will already normally be above 4GB. So without introducing MAP_ABOVE4G, trying to transition to 32 bit mode with shadow stack enabled would usually segfault anyway. That is already a pretty decent guard rail. But the addition of MAP_ABOVE4G is some small complexity spent to make it more complete.

Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: John Allen <john.allen@amd.com>
Tested-by: Kees Cook <keescook@chromium.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
v5:
 - New patch
---
 arch/x86/include/uapi/asm/mman.h | 1 +
 arch/x86/kernel/sys_x86_64.c     | 6 +++++-
 include/linux/mman.h             | 4 ++++
 3 files changed, 10 insertions(+), 1 deletion(-)
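As an illustration (not part of the patch), below is a minimal userspace sketch of what requesting the new flag through mmap() could look like. It assumes an x86-64 process with the default top-down mmap layout running on a kernel that carries this patch; older kernels typically just ignore the unknown flag bit for private anonymous mappings. The 0x80 fallback value mirrors the uapi define added by this patch, since libc headers may not carry it yet.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_ABOVE4G
#define MAP_ABOVE4G 0x80	/* x86 uapi value added by this patch */
#endif

int main(void)
{
	/* Ask for an anonymous page restricted to addresses above 4GB. */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_ABOVE4G, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* If the kernel honored the flag, the address is >= 4GB. */
	printf("mapped at %p%s\n", p,
	       (uintptr_t)p >= 0x100000000ULL ? " (above 4G)" : " (below 4G)");

	munmap(p, 4096);
	return 0;
}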
Comments
On Mon, Feb 27, 2023 at 02:29:41PM -0800, Rick Edgecombe wrote:
> The x86 Control-flow Enforcement Technology (CET) feature includes a new
> type of memory called shadow stack. This shadow stack memory has some
> unusual properties, which require some core mm changes to function
> properly.
>
> One of the properties is that the shadow stack pointer (SSP), which is a
> CPU register that points to the shadow stack like the stack pointer points
> to the stack, can't be pointing outside of the 32 bit address space when
> the CPU is executing in 32 bit mode. It is desirable to prevent executing
> in 32 bit mode when shadow stack is enabled because the kernel can't easily
> support 32 bit signals.
>
> On x86 it is possible to transition to 32 bit mode without any special
> interaction with the kernel, by doing a "far call" to a 32 bit segment.
> So the shadow stack implementation can use this address space behavior
> as a feature, by enforcing that shadow stack memory is always crated
                                                                ^^^^^^^

"created" and I'd say "mapped" or "allocated" here. "Created" sounds weird.

> outside of the 32 bit address space. This way userspace will trigger a
> general protection fault which will in turn trigger a segfault if it
> tries to transition to 32 bit mode with shadow stack enabled.
>
> This provides a clean error generating border for the user if they try
> attempt to do 32 bit mode shadow stack, rather than leave the kernel in a
> half working state for userspace to be surprised by.
>
> So to allow future shadow stack enabling patches to map shadow stacks
> out of the 32 bit address space, introduce MAP_ABOVE4G. The behavior

I guess this needs to be documented in the mmap() manpage too.

> is pretty much like MAP_32BIT, except that it has the opposite address
> range. The are a few differences though.
>
> If both MAP_32BIT and MAP_ABOVE4G are provided, the kernel will use the
> MAP_ABOVE4G behavior. Like MAP_32BIT, MAP_ABOVE4G is ignored in a 32 bit
> syscall.
>
> Since the default search behavior is top down, the normal kaslr base can
> be used for MAP_ABOVE4G. This is unlike MAP_32BIT which has to add it's
                                                                     ^^^^

"its"

> own randomization in the bottom up case.

...

> diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
> index 8cc653ffdccd..06378b5682c1 100644
> --- a/arch/x86/kernel/sys_x86_64.c
> +++ b/arch/x86/kernel/sys_x86_64.c
> @@ -193,7 +193,11 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
>
>  	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
>  	info.length = len;
> -	info.low_limit = PAGE_SIZE;
> +	if (!in_32bit_syscall() && (flags & MAP_ABOVE4G))
> +		info.low_limit = 0x100000000;

We have a human readable define for that: SZ_4G

> +	else
> +		info.low_limit = PAGE_SIZE;
> +
>  	info.high_limit = get_mmap_base(0);
>
>  	/*
On Mon, 2023-03-06 at 19:09 +0100, Borislav Petkov wrote:
> > diff --git a/arch/x86/kernel/sys_x86_64.c
> > b/arch/x86/kernel/sys_x86_64.c
> > index 8cc653ffdccd..06378b5682c1 100644
> > --- a/arch/x86/kernel/sys_x86_64.c
> > +++ b/arch/x86/kernel/sys_x86_64.c
> > @@ -193,7 +193,11 @@ arch_get_unmapped_area_topdown(struct file
> > *filp, const unsigned long addr0,
> >
> >  	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
> >  	info.length = len;
> > -	info.low_limit = PAGE_SIZE;
> > +	if (!in_32bit_syscall() && (flags & MAP_ABOVE4G))
> > +		info.low_limit = 0x100000000;
>
> We have a human readable define for that: SZ_4G

Uhh, yes that's much better. And the typos. Thanks.
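For reference, here is a sketch (not the posted hunk) of how the arch_get_unmapped_area_topdown() change might read with the suggested define applied; SZ_4G is the 0x100000000ULL constant from <linux/sizes.h>:

	/*
	 * Sketch only: same logic as the posted hunk, with the raw
	 * 0x100000000 literal replaced by the SZ_4G define from
	 * <linux/sizes.h>, as suggested in review.
	 */
	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
	info.length = len;
	if (!in_32bit_syscall() && (flags & MAP_ABOVE4G))
		info.low_limit = SZ_4G;
	else
		info.low_limit = PAGE_SIZE;

	info.high_limit = get_mmap_base(0);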
diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h
index 775dbd3aff73..5a0256e73f1e 100644
--- a/arch/x86/include/uapi/asm/mman.h
+++ b/arch/x86/include/uapi/asm/mman.h
@@ -3,6 +3,7 @@
 #define _ASM_X86_MMAN_H
 
 #define MAP_32BIT	0x40		/* only give out 32bit addresses */
+#define MAP_ABOVE4G	0x80		/* only map above 4GB */
 
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
 #define arch_calc_vm_prot_bits(prot, key) (		\
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 8cc653ffdccd..06378b5682c1 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -193,7 +193,11 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
-	info.low_limit = PAGE_SIZE;
+	if (!in_32bit_syscall() && (flags & MAP_ABOVE4G))
+		info.low_limit = 0x100000000;
+	else
+		info.low_limit = PAGE_SIZE;
+
 	info.high_limit = get_mmap_base(0);
 
 	/*
diff --git a/include/linux/mman.h b/include/linux/mman.h
index cee1e4b566d8..40d94411d492 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -15,6 +15,9 @@
 #ifndef MAP_32BIT
 #define MAP_32BIT 0
 #endif
+#ifndef MAP_ABOVE4G
+#define MAP_ABOVE4G 0
+#endif
 #ifndef MAP_HUGE_2MB
 #define MAP_HUGE_2MB 0
 #endif
@@ -50,6 +53,7 @@
 		 | MAP_STACK \
 		 | MAP_HUGETLB \
 		 | MAP_32BIT \
+		 | MAP_ABOVE4G \
 		 | MAP_HUGE_2MB \
 		 | MAP_HUGE_1GB)