From patchwork Mon Mar 20 16:39:25 2023
X-Patchwork-Submitter: tip-bot2 for Thomas Gleixner
X-Patchwork-Id: 72317
Date: Mon, 20 Mar 2023 16:39:25 -0000
From: "tip-bot2 for Rick Edgecombe"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: "Yu-cheng Yu", Rick Edgecombe, Dave Hansen, "Borislav Petkov (AMD)",
 Kees Cook, "Mike Rapoport (IBM)", Pengfei Xu, John Allen,
 x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: x86/shstk] mm: Add guard pages around a shadow stack.
Message-ID: <167933036554.5837.4712461537546656961.tip-bot2@tip-bot2>

The following commit has been merged into the x86/shstk branch of tip:

Commit-ID:     2d4ef66720386d567e64e95008d4bdd0812bcbd2
Gitweb:        https://git.kernel.org/tip/2d4ef66720386d567e64e95008d4bdd0812bcbd2
Author:        Rick Edgecombe
AuthorDate:    Sat, 18 Mar 2023 17:15:16 -07:00
Committer:
Dave Hansen
CommitterDate:   Mon, 20 Mar 2023 09:01:10 -07:00

mm: Add guard pages around a shadow stack.

The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which require some core mm changes to function
properly.

The architecture of shadow stack constrains the ability of userspace to
move the shadow stack pointer (SSP) in order to prevent corrupting or
switching to other shadow stacks. The RSTORSSP instruction can move the
SSP to different shadow stacks, but it requires a specially placed token
in order to do this. However, the architecture does not prevent
incrementing the stack pointer to wander onto an adjacent shadow stack.
To prevent this in software, enforce guard pages at the beginning of
shadow stack VMAs, such that there will always be a gap between adjacent
shadow stacks.

Make the gap big enough so that no userspace SSP-changing operations
(besides RSTORSSP) can move the SSP from one stack to the next. The SSP
can be incremented or decremented by CALL, RET and INCSSP. CALL and RET
can move the SSP by a maximum of 8 bytes, at which point the shadow
stack would be accessed.

The INCSSP instruction can also increment the shadow stack pointer. It
is the shadow stack analog of an instruction like:

	addq $0x80, %rsp

However, there is one important difference between an ADD on %rsp and
INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
of the first and last elements that were "popped". It can be thought of
as acting like this:

	READ_ONCE(ssp);       // read+discard top element on stack
	ssp += nr_to_pop * 8; // move the shadow stack
	READ_ONCE(ssp-8);     // read+discard last popped stack element

The maximum distance INCSSP can move the SSP is 2040 bytes, before it
would read the memory.
Therefore, a single page gap will be enough to prevent any operation
from shifting the SSP to an adjacent stack, since it would have to land
in the gap at least once, causing a fault.

This could be accomplished by using VM_GROWSDOWN, but this has a
downside. The behavior would allow shadow stacks to grow, which is
unneeded and adds a strange difference to how most regular stacks work.

Co-developed-by: Yu-cheng Yu
Signed-off-by: Yu-cheng Yu
Signed-off-by: Rick Edgecombe
Signed-off-by: Dave Hansen
Reviewed-by: Borislav Petkov (AMD)
Reviewed-by: Kees Cook
Acked-by: Mike Rapoport (IBM)
Tested-by: Pengfei Xu
Tested-by: John Allen
Tested-by: Kees Cook
Link: https://lore.kernel.org/all/20230319001535.23210-22-rick.p.edgecombe%40intel.com
---
 include/linux/mm.h | 52 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 46 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 097544a..d09fbe9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -349,7 +349,36 @@ extern unsigned int kobjsize(const void *objp);
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 
 #ifdef CONFIG_X86_USER_SHADOW_STACK
-# define VM_SHADOW_STACK	VM_HIGH_ARCH_5 /* Should not be set with VM_SHARED */
+/*
+ * This flag should not be set with VM_SHARED because of lack of support
+ * in core mm. It will also get a guard page. This helps userspace protect
+ * itself from attacks. The reasoning is as follows:
+ *
+ * The shadow stack pointer (SSP) is moved by CALL, RET, and INCSSPQ. The
+ * INCSSP instruction can increment the shadow stack pointer. It is the
+ * shadow stack analog of an instruction like:
+ *
+ *	addq $0x80, %rsp
+ *
+ * However, there is one important difference between an ADD on %rsp
+ * and INCSSP. In addition to modifying SSP, INCSSP also reads from the
+ * memory of the first and last elements that were "popped". It can be
+ * thought of as acting like this:
+ *
+ *	READ_ONCE(ssp);       // read+discard top element on stack
+ *	ssp += nr_to_pop * 8; // move the shadow stack
+ *	READ_ONCE(ssp-8);     // read+discard last popped stack element
+ *
+ * The maximum distance INCSSP can move the SSP is 2040 bytes, before
+ * it would read the memory. Therefore, a single page gap will be enough
+ * to prevent any operation from shifting the SSP to an adjacent stack,
+ * since it would have to land in the gap at least once, causing a
+ * fault.
+ *
+ * Prevent using INCSSP to move the SSP between shadow stacks by
+ * having a PAGE_SIZE guard gap.
+ */
+# define VM_SHADOW_STACK	VM_HIGH_ARCH_5
 #else
 # define VM_SHADOW_STACK	VM_NONE
 #endif
@@ -3107,15 +3136,26 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return mtree_load(&mm->mm_mt, addr);
 }
 
+static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+
+	/* See reasoning around the VM_SHADOW_STACK definition */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
+	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 
 	return vm_start;
 }