From patchwork Sun Mar 19 00:14:56 2023
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 71652
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A .
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 01/40] Documentation/x86: Add CET shadow stack description Date: Sat, 18 Mar 2023 17:14:56 -0700 Message-Id: <20230319001535.23210-2-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760753111861297622?= X-GMAIL-MSGID: =?utf-8?q?1760753111861297622?= Introduce a new document on Control-flow Enforcement Technology (CET). Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Add more details to docs (Szabolcs Nagy) - More doc tweaks (Dave Hansen) v5: - Literal format tweaks (Bagas Sanjaya) - Update EOPNOTSUPP text due to unification after comment from (Kees) - Update 32 bit signal support with new behavior - Remove capitalization on shadow stack (Boris) - Fix typo v4: - Drop clearcpuid piece (Boris) - Add some info about 32 bit v3: - Clarify kernel IBT is supported by the kernel. (Kees, Andrew Cooper) - Clarify which arch_prctl's can take multiple bits. (Kees) - Describe ASLR characteristics of thread shadow stacks. (Kees) - Add exec section. (Andrew Cooper) - Fix some capitalization (Bagas Sanjaya) - Update new location of enablement status proc. - Add info about new user_shstk software capability. - Add more info about what the kernel pushes to the shadow stack on signal. --- Documentation/x86/index.rst | 1 + Documentation/x86/shstk.rst | 169 ++++++++++++++++++++++++++++++++++++ 2 files changed, 170 insertions(+) create mode 100644 Documentation/x86/shstk.rst diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst index c73d133fd37c..8ac64d7de4dc 100644 --- a/Documentation/x86/index.rst +++ b/Documentation/x86/index.rst @@ -22,6 +22,7 @@ x86-specific Documentation mtrr pat intel-hfi + shstk iommu intel_txt amd-memory-encryption diff --git a/Documentation/x86/shstk.rst b/Documentation/x86/shstk.rst new file mode 100644 index 000000000000..f09afa504ec0 --- /dev/null +++ b/Documentation/x86/shstk.rst @@ -0,0 +1,169 @@ +.. SPDX-License-Identifier: GPL-2.0 + +====================================================== +Control-flow Enforcement Technology (CET) Shadow Stack +====================================================== + +CET Background +============== + +Control-flow Enforcement Technology (CET) covers several related x86 processor +features that provide protection against control flow hijacking attacks. CET +can protect both applications and the kernel. + +CET introduces shadow stack and indirect branch tracking (IBT). 
A shadow stack is a secondary stack allocated from memory which cannot be directly modified by applications. When executing a CALL instruction, the processor pushes the return address to both the normal stack and the shadow stack. Upon function return, the processor pops the shadow stack copy and compares it to the normal stack copy. If the two differ, the processor raises a control-protection fault. IBT verifies that indirect CALL/JMP targets are intended, as marked by the compiler with 'ENDBR' opcodes. Not all CPUs have both shadow stack and indirect branch tracking. Today in the 64-bit kernel, only userspace shadow stack and kernel IBT are supported.

Requirements to use Shadow Stack
================================

To use userspace shadow stack you need HW that supports it, a kernel configured with it, and userspace libraries compiled with it.

The kernel Kconfig option is X86_USER_SHADOW_STACK. When compiled in, shadow stacks can be disabled at runtime with the kernel parameter: nousershstk.

To build a user shadow stack enabled kernel, Binutils v2.29 or LLVM v6 or later are required.

At run time, /proc/cpuinfo shows CET features if the processor supports CET. "user_shstk" means that userspace shadow stack is supported on the current kernel and HW.

Application Enabling
====================

An application's CET capability is marked in its ELF note and can be verified from readelf/llvm-readelf output::

    readelf -n <application> | grep -a SHSTK
        properties: x86 feature: SHSTK

The kernel does not process these application markers directly. Applications or loaders must enable CET features using the arch_prctl() interface described in the "Enabling arch_prctl()'s" section below. Typically this would be done in the dynamic loader or static runtime objects, as is the case in glibc.

Enabling arch_prctl()'s
=======================

ELF features should be enabled by the loader using the below arch_prctl()'s. They are only supported in 64-bit user applications. These operate on the features on a per-thread basis. The enablement status is inherited on clone, so if the feature is enabled on the first thread, it will propagate to all the threads in an app.

arch_prctl(ARCH_SHSTK_ENABLE, unsigned long feature)
    Enable a single feature specified in 'feature'. Can only operate on
    one feature at a time.

arch_prctl(ARCH_SHSTK_DISABLE, unsigned long feature)
    Disable a single feature specified in 'feature'. Can only operate on
    one feature at a time.

arch_prctl(ARCH_SHSTK_LOCK, unsigned long features)
    Lock in features at their current enabled or disabled status. 'features'
    is a mask of all features to lock. All bits set are processed, unset bits
    are ignored. The mask is ORed with the existing value, so any feature bits
    set here cannot be enabled or disabled afterwards.

The return values are as follows. On success, return 0. On error, errno can be::

    -EPERM if any of the passed features are locked.
    -ENOTSUPP if the feature is not supported by the hardware or kernel.
    -EINVAL invalid arguments (non-existing feature, etc.)

The supported feature bits are::

    ARCH_SHSTK_SHSTK - Shadow stack
    ARCH_SHSTK_WRSS  - WRSS

Currently shadow stack and WRSS are supported via this interface. WRSS can only be enabled with shadow stack, and is automatically disabled if shadow stack is disabled.

Proc Status
===========

To check if an application is actually running with shadow stack, the user can read /proc/$PID/status. It will report "wrss" or "shstk" depending on what is enabled.
The lines look like this::

    x86_Thread_features: shstk wrss
    x86_Thread_features_locked: shstk wrss

Implementation of the Shadow Stack
==================================

Shadow Stack Size
-----------------

A task's shadow stack is allocated from memory to a fixed size of MIN(RLIMIT_STACK, 4 GB). In other words, the shadow stack is allocated to the maximum size of the normal stack, but capped to 4 GB. In the case of the clone3 syscall, a stack size is passed in, and the shadow stack uses this instead of the rlimit.

Signal
------

The main program and its signal handlers use the same shadow stack. Because the shadow stack stores only return addresses, a large shadow stack covers the condition that both the program stack and the signal alternate stack run out.

When a signal happens, the old pre-signal state is pushed on the stack. When shadow stack is enabled, the shadow stack specific state is pushed onto the shadow stack. Today this is only the old SSP (shadow stack pointer), pushed in a special format with bit 63 set. On sigreturn this old SSP token is verified and restored by the kernel. The kernel will also push the normal restorer address to the shadow stack to help userspace avoid a shadow stack violation on the sigreturn path that goes through the restorer.

So the shadow stack signal frame format is as follows::

    |1...old SSP| - Pointer to old pre-signal ssp in sigframe token format
                    (bit 63 set to 1)
    |        ...| - Other state may be added in the future

32-bit ABI signals are not supported in shadow stack processes. Linux prevents 32-bit execution while shadow stack is enabled by allocating shadow stacks outside of the 32-bit address space. When execution enters 32-bit mode, either via far call or returning to userspace, a #GP is generated by the hardware, which will be delivered to the process as a segfault. When transitioning to userspace, the register state will be as if the userspace IP being returned to caused the segfault.

Fork
----

The shadow stack's VMA has the VM_SHADOW_STACK flag set; its PTEs are required to be read-only and dirty. When a shadow stack PTE is not RO and dirty, a shadow access triggers a page fault with the shadow stack access bit set in the page fault error code.

When a task forks a child, its shadow stack PTEs are copied and both the parent's and the child's shadow stack PTEs are cleared of the dirty bit. Upon the next shadow stack access, the resulting shadow stack page fault is handled by page copy/re-use.

When a pthread child is created, the kernel allocates a new shadow stack for the new thread. New shadow stack creation behaves like mmap() with respect to ASLR behavior. Similarly, on thread exit the thread's shadow stack is disabled.

Exec
----

On exec, shadow stack features are disabled by the kernel, at which point userspace can choose to re-enable or lock them.
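As a reader's aid (not part of the patch series), the following is a minimal sketch of the per-thread enabling flow described in the document above. The ARCH_SHSTK_* operation codes and feature bits are assumed to match this series' uapi additions to asm/prctl.h and are hard-coded here in case the installed headers do not yet carry them; on a CPU or kernel without user shadow stack support the enable call simply fails::

    /* Illustrative sketch only; assumes the ARCH_SHSTK_* values below
     * match the series' arch/x86/include/uapi/asm/prctl.h additions. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define ARCH_SHSTK_ENABLE   0x5001
    #define ARCH_SHSTK_DISABLE  0x5002
    #define ARCH_SHSTK_LOCK     0x5003

    #define ARCH_SHSTK_SHSTK    (1ULL << 0)  /* shadow stack */
    #define ARCH_SHSTK_WRSS     (1ULL << 1)  /* WRSS instruction */

    static long shstk_prctl(int op, unsigned long arg)
    {
            /* These ops have no dedicated glibc wrapper; use syscall(2). */
            return syscall(SYS_arch_prctl, op, arg);
    }

    int main(void)
    {
            /* Enable shadow stack for this thread. A real runtime (e.g. the
             * dynamic loader) does this very early, before deep call chains
             * exist, since frames entered beforehand are not on the shadow
             * stack. */
            if (shstk_prctl(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK)) {
                    perror("ARCH_SHSTK_ENABLE");
                    return 1;
            }

            /* /proc/self/status now shows the per-thread state, e.g.
             *   x86_Thread_features: shstk */
            puts("shadow stack enabled for this thread");

            /* Disable again before returning, so the return into a frame
             * that predates enabling does not trip a control-protection
             * fault. A loader would instead leave it enabled and typically
             * lock the state with ARCH_SHSTK_LOCK. */
            return (int)shstk_prctl(ARCH_SHSTK_DISABLE, ARCH_SHSTK_SHSTK);
    }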
From patchwork Sun Mar 19 00:14:57 2023
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 71672
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A .
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 02/40] x86/shstk: Add Kconfig option for shadow stack Date: Sat, 18 Mar 2023 17:14:57 -0700 Message-Id: <20230319001535.23210-3-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757083912952241?= X-GMAIL-MSGID: =?utf-8?q?1760757083912952241?= Shadow stack provides protection for applications against function return address corruption. It is active when the processor supports it, the kernel has CONFIG_X86_SHADOW_STACK enabled, and the application is built for the feature. This is only implemented for the 64-bit kernel. When it is enabled, legacy non-shadow stack applications continue to work, but without protection. Since there is another feature that utilizes CET (Kernel IBT) that will share implementation with shadow stacks, create CONFIG_CET to signify that at least one CET feature is configured. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v5: - Remove capitalization of shadow stack (Boris) v3: - Add X86_CET (Kees) - Add back WRUSS dependency (Kees) - Fix verbiage (Dave) - Change from promt to bool (Kirill) - Add more to commit log v2: - Remove already wrong kernel size increase info (tlgx) - Change prompt to remove "Intel" (tglx) - Update line about what CPUs are supported (Dave) Yu-cheng v25: - Remove X86_CET and use X86_SHADOW_STACK directly. --- arch/x86/Kconfig | 24 ++++++++++++++++++++++++ arch/x86/Kconfig.assembler | 5 +++++ 2 files changed, 29 insertions(+) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index a825bf031f49..f03791b73f9f 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1851,6 +1851,11 @@ config CC_HAS_IBT (CC_IS_CLANG && CLANG_VERSION >= 140000)) && \ $(as-instr,endbr64) +config X86_CET + def_bool n + help + CET features configured (Shadow stack or IBT) + config X86_KERNEL_IBT prompt "Indirect Branch Tracking" def_bool y @@ -1858,6 +1863,7 @@ config X86_KERNEL_IBT # https://github.com/llvm/llvm-project/commit/9d7001eba9c4cb311e03cd8cdc231f9e579f2d0f depends on !LD_IS_LLD || LLD_VERSION >= 140000 select OBJTOOL + select X86_CET help Build the kernel with support for Indirect Branch Tracking, a hardware support course-grain forward-edge Control Flow Integrity @@ -1952,6 +1958,24 @@ config X86_SGX If unsure, say N. +config X86_USER_SHADOW_STACK + bool "X86 userspace shadow stack" + depends on AS_WRUSS + depends on X86_64 + select ARCH_USES_HIGH_VMA_FLAGS + select X86_CET + help + Shadow stack protection is a hardware feature that detects function + return address corruption. 
This helps mitigate ROP attacks. + Applications must be enabled to use it, and old userspace does not + get protection "for free". + + CPUs supporting shadow stacks were first released in 2020. + + See Documentation/x86/shstk.rst for more information. + + If unsure, say N. + config EFI bool "EFI runtime service support" depends on ACPI diff --git a/arch/x86/Kconfig.assembler b/arch/x86/Kconfig.assembler index b88f784cb02e..8ad41da301e5 100644 --- a/arch/x86/Kconfig.assembler +++ b/arch/x86/Kconfig.assembler @@ -24,3 +24,8 @@ config AS_GFNI def_bool $(as-instr,vgf2p8mulb %xmm0$(comma)%xmm1$(comma)%xmm2) help Supported by binutils >= 2.30 and LLVM integrated assembler + +config AS_WRUSS + def_bool $(as-instr,wrussq %rax$(comma)(%rbx)) + help + Supported by binutils >= 2.31 and LLVM integrated assembler
From patchwork Sun Mar 19 00:14:58 2023
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 71653
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A .
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 03/40] x86/cpufeatures: Add CPU feature flags for shadow stacks Date: Sat, 18 Mar 2023 17:14:58 -0700 Message-Id: <20230319001535.23210-4-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760753113188015585?= X-GMAIL-MSGID: =?utf-8?q?1760753113188015585?= The Control-Flow Enforcement Technology contains two related features, one of which is Shadow Stacks. Future patches will utilize this feature for shadow stack support in KVM, so add a CPU feature flags for Shadow Stacks (CPUID.(EAX=7,ECX=0):ECX[bit 7]). To protect shadow stack state from malicious modification, the registers are only accessible in supervisor mode. This implementation context-switches the registers with XSAVES. Make X86_FEATURE_SHSTK depend on XSAVES. The shadow stack feature, enumerated by the CPUID bit described above, encompasses both supervisor and userspace support for shadow stack. In near future patches, only userspace shadow stack will be enabled. In expectation of future supervisor shadow stack support, create a software CPU capability to enumerate kernel utilization of userspace shadow stack support. This user shadow stack bit should depend on the HW "shstk" capability and that logic will be implemented in future patches. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v5: - Drop "shstk" from cpuinfo (Boris) - Remove capitalization on shadow stack (Boris) v3: - Add user specific shadow stack cpu cap (Andrew Cooper) - Drop reviewed-bys from Boris and Kees due to the above change. v2: - Remove IBT reference in commit log (Kees) - Describe xsaves dependency using text from (Dave) v1: - Remove IBT, can be added in a follow on IBT series. 
--- arch/x86/include/asm/cpufeatures.h | 2 ++ arch/x86/include/asm/disabled-features.h | 8 +++++++- arch/x86/kernel/cpu/cpuid-deps.c | 1 + 3 files changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 73c9672c123b..3d98ce9f41fe 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -309,6 +309,7 @@ #define X86_FEATURE_MSR_TSX_CTRL (11*32+20) /* "" MSR IA32_TSX_CTRL (Intel) implemented */ #define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */ #define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */ +#define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */ @@ -378,6 +379,7 @@ #define X86_FEATURE_OSPKE (16*32+ 4) /* OS Protection Keys Enable */ #define X86_FEATURE_WAITPKG (16*32+ 5) /* UMONITOR/UMWAIT/TPAUSE Instructions */ #define X86_FEATURE_AVX512_VBMI2 (16*32+ 6) /* Additional AVX512 Vector Bit Manipulation Instructions */ +#define X86_FEATURE_SHSTK (16*32+ 7) /* "" Shadow stack */ #define X86_FEATURE_GFNI (16*32+ 8) /* Galois Field New Instructions */ #define X86_FEATURE_VAES (16*32+ 9) /* Vector AES */ #define X86_FEATURE_VPCLMULQDQ (16*32+10) /* Carry-Less Multiplication Double Quadword */ diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index 5dfa4fb76f4b..505f78ddca82 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -99,6 +99,12 @@ # define DISABLE_TDX_GUEST (1 << (X86_FEATURE_TDX_GUEST & 31)) #endif +#ifdef CONFIG_X86_USER_SHADOW_STACK +#define DISABLE_USER_SHSTK 0 +#else +#define DISABLE_USER_SHSTK (1 << (X86_FEATURE_USER_SHSTK & 31)) +#endif + /* * Make sure to add features to the correct mask */ @@ -114,7 +120,7 @@ #define DISABLED_MASK9 (DISABLE_SGX) #define DISABLED_MASK10 0 #define DISABLED_MASK11 (DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET| \ - DISABLE_CALL_DEPTH_TRACKING) + DISABLE_CALL_DEPTH_TRACKING|DISABLE_USER_SHSTK) #define DISABLED_MASK12 0 #define DISABLED_MASK13 0 #define DISABLED_MASK14 0 diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c index f6748c8bd647..e462c1d3800a 100644 --- a/arch/x86/kernel/cpu/cpuid-deps.c +++ b/arch/x86/kernel/cpu/cpuid-deps.c @@ -81,6 +81,7 @@ static const struct cpuid_dep cpuid_deps[] = { { X86_FEATURE_XFD, X86_FEATURE_XSAVES }, { X86_FEATURE_XFD, X86_FEATURE_XGETBV1 }, { X86_FEATURE_AMX_TILE, X86_FEATURE_XFD }, + { X86_FEATURE_SHSTK, X86_FEATURE_XSAVES }, {} }; From patchwork Sun Mar 19 00:14:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71665 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp527329wrt; Sat, 18 Mar 2023 18:17:35 -0700 (PDT) X-Google-Smtp-Source: AK7set8f1k8roYIMlL7eQK90HVZqXsA3Kmw8H9cgfG3X2dITBOkgWcxNfgOBCmapxltL/khyyaYd X-Received: by 2002:a17:903:6c8:b0:1a1:9997:24fd with SMTP id kj8-20020a17090306c800b001a1999724fdmr8599703plb.63.1679188655534; Sat, 18 Mar 2023 18:17:35 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188655; cv=none; d=google.com; s=arc-20160816; b=gL+CrracRajHaL208gfhV+eBePePkC5HfnSK5S8/wvwwboCRb/pPuMKQV1i7EWNV8Y 
rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:15:59 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 04/40] x86/cpufeatures: Enable CET CR4 bit for shadow stack Date: Sat, 18 Mar 2023 17:14:59 -0700 Message-Id: <20230319001535.23210-5-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760756923365302349?= X-GMAIL-MSGID: =?utf-8?q?1760756923365302349?= Setting CR4.CET is a prerequisite for utilizing any CET features, most of which also require setting MSRs. Kernel IBT already enables the CET CR4 bit when it detects IBT HW support and is configured with kernel IBT. However, future patches that enable userspace shadow stack support will need the bit set as well. So change the logic to enable it in either case. Clear MSR_IA32_U_CET in cet_disable() so that it can't live to see userspace in a new kexec-ed kernel that has CR4.CET set from kernel IBT. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v5: - Drop "shstk" from cpuinfo (Boris) - Remove capitalization on shadow stack (Boris) v3: - Add user specific shadow stack cpu cap (Andrew Cooper) - Drop reviewed-bys from Boris and Kees due to the above change. v2: - Remove IBT reference in commit log (Kees) - Describe xsaves dependency using text from (Dave) v1: - Remove IBT, can be added in a follow on IBT series. 
--- arch/x86/kernel/cpu/common.c | 35 +++++++++++++++++++++++++++-------- 1 file changed, 27 insertions(+), 8 deletions(-) diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 8cd4126d8253..cc686e5039be 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -600,27 +600,43 @@ __noendbr void ibt_restore(u64 save) static __always_inline void setup_cet(struct cpuinfo_x86 *c) { - u64 msr = CET_ENDBR_EN; + bool user_shstk, kernel_ibt; - if (!HAS_KERNEL_IBT || - !cpu_feature_enabled(X86_FEATURE_IBT)) + if (!IS_ENABLED(CONFIG_X86_CET)) return; - wrmsrl(MSR_IA32_S_CET, msr); + kernel_ibt = HAS_KERNEL_IBT && cpu_feature_enabled(X86_FEATURE_IBT); + user_shstk = cpu_feature_enabled(X86_FEATURE_SHSTK) && + IS_ENABLED(CONFIG_X86_USER_SHADOW_STACK); + + if (!kernel_ibt && !user_shstk) + return; + + if (user_shstk) + set_cpu_cap(c, X86_FEATURE_USER_SHSTK); + + if (kernel_ibt) + wrmsrl(MSR_IA32_S_CET, CET_ENDBR_EN); + else + wrmsrl(MSR_IA32_S_CET, 0); + cr4_set_bits(X86_CR4_CET); - if (!ibt_selftest()) { + if (kernel_ibt && !ibt_selftest()) { pr_err("IBT selftest: Failed!\n"); wrmsrl(MSR_IA32_S_CET, 0); setup_clear_cpu_cap(X86_FEATURE_IBT); - return; } } __noendbr void cet_disable(void) { - if (cpu_feature_enabled(X86_FEATURE_IBT)) - wrmsrl(MSR_IA32_S_CET, 0); + if (!(cpu_feature_enabled(X86_FEATURE_IBT) || + cpu_feature_enabled(X86_FEATURE_SHSTK))) + return; + + wrmsrl(MSR_IA32_S_CET, 0); + wrmsrl(MSR_IA32_U_CET, 0); } /* @@ -1482,6 +1498,9 @@ static void __init cpu_parse_early_param(void) if (cmdline_find_option_bool(boot_command_line, "noxsaves")) setup_clear_cpu_cap(X86_FEATURE_XSAVES); + if (cmdline_find_option_bool(boot_command_line, "nousershstk")) + setup_clear_cpu_cap(X86_FEATURE_USER_SHSTK); + arglen = cmdline_find_option(boot_command_line, "clearcpuid", arg, sizeof(arg)); if (arglen <= 0) return; From patchwork Sun Mar 19 00:15:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71654 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp510991wrt; Sat, 18 Mar 2023 17:17:23 -0700 (PDT) X-Google-Smtp-Source: AK7set8LrPB2U9to2/4MZieYvkQNnPCebj6ueC7/Q3oqOFgq4oSURLscY4T+OEsEPtfElMWZylqy X-Received: by 2002:a05:6a20:d5:b0:c7:73ad:1071 with SMTP id 21-20020a056a2000d500b000c773ad1071mr9073451pzh.14.1679185043240; Sat, 18 Mar 2023 17:17:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679185043; cv=none; d=google.com; s=arc-20160816; b=ZO2as3mZGgQGVn9DYLa21A4eNtLIGlDe3b2es2xQ/ITP7pCcej0JTCnqgfcUbcPqAB 2e0T+18lx6oSuoVylRzpjE9WdTgwJRkSJlDesQ7C1QTCTzWx9tcpQm6vARxFw8Tdwcmw UHQuIMTsD8wWi9LUJqoiEOdDdjPRSVv3MBj2CRKiPip0ep5iUd8wBo8/DXXBcyP+7M5n SovW8kvm2METhLAf3K3gefyLi8NJCyWV/9/pXrbGIyG0CXyp6eJA72a9Lbpgr8u09RPF 2n6B7DXifdY/EDzqXmy/poKSYyP/zboRf2Xc5rzGgZP8bkZ0E92Og2L+X/NI5lOPyXyX NC6g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=x+DZu64AZlWqg9jI5n7i6mEJSBUcY0ZFP1UxEduBU74=; b=wBg7BrgCC5ecirGnJZzrqpL9rN5Hgd/hZMspUs2mxCMndiEZwefvCSkNBWxi8e15Ja J/y1Uhfl4d0nhFbu6T9ECPDhOw5RQNBt3dBEdiewBxNlVnG+PGe0vILPTBNf+rw0rwEq obwMjLsk7ijG3+UrGVi0CEp5oweYeFjyuKGNhFiqZyCv22c1qAlqInIJlgQtHOL0DQ4R Xo0I/VImhbl998VdfSjbXAX/ubvckO3vt07eCTAOMsQAPEjSZCXnJJrAz//myUBzsSL8 pkOxZbNhXZpHwvdWQOMrWiZHb2MejYn97TrTRmrIXgKB6jUbYG/AgNUwSlEhoQYz5AJB teSw== ARC-Authentication-Results: i=1; 
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A .
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 05/40] x86/fpu/xstate: Introduce CET MSR and XSAVES supervisor states Date: Sat, 18 Mar 2023 17:15:00 -0700 Message-Id: <20230319001535.23210-6-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760753135768975633?= X-GMAIL-MSGID: =?utf-8?q?1760753135768975633?= Shadow stack register state can be managed with XSAVE. The registers can logically be separated into two groups: * Registers controlling user-mode operation * Registers controlling kernel-mode operation The architecture has two new XSAVE state components: one for each group of those groups of registers. This lets an OS manage them separately if it chooses. Future patches for host userspace and KVM guests will only utilize the user-mode registers, so only configure XSAVE to save user-mode registers. This state will add 16 bytes to the xsave buffer size. Future patches will use the user-mode XSAVE area to save guest user-mode CET state. However, VMCS includes new fields for guest CET supervisor states. KVM can use these to save and restore guest supervisor state, so host supervisor XSAVE support is not required. Adding this exacerbates the already unwieldy if statement in check_xstate_against_struct() that handles warning about un-implemented xfeatures. So refactor these check's by having XCHECK_SZ() set a bool when it actually check's the xfeature. This ends up exceeding 80 chars, but was better on balance than other options explored. Pass the bool as pointer to make it clear that XCHECK_SZ() can change the variable. While configuring user-mode XSAVE, clarify kernel-mode registers are not managed by XSAVE by defining the xfeature in XFEATURE_MASK_SUPERVISOR_UNSUPPORTED, like is done for XFEATURE_MASK_PT. This serves more of a documentation as code purpose, and functionally, only enables a few safety checks. Both XSAVE state components are supervisor states, even the state controlling user-mode operation. This is a departure from earlier features like protection keys where the PKRU state is a normal user (non-supervisor) state. Having the user state be supervisor-managed ensures there is no direct, unprivileged access to it, making it harder for an attacker to subvert CET. To facilitate this privileged access, define the two user-mode CET MSRs, and the bits defined in those MSRs relevant to future shadow stack enablement patches. 
Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v5: - Move comments from end of lines in cet_user_state struct (Boris) v3: - Add missing "is" in commit log (Boris) - Change to case statement for struct size checking (Boris) - Adjust commas on xfeature_names (Kees, Boris) v2: - Change name to XFEATURE_CET_KERNEL_UNUSED (peterz) KVM refresh: - Reword commit log using some verbiage posted by Dave Hansen - Remove unlikely to be used supervisor cet xsave struct - Clarify that supervisor cet state is not saved by xsave - Remove unused supervisor MSRs --- arch/x86/include/asm/fpu/types.h | 16 +++++- arch/x86/include/asm/fpu/xstate.h | 6 ++- arch/x86/kernel/fpu/xstate.c | 90 +++++++++++++++---------------- 3 files changed, 61 insertions(+), 51 deletions(-) diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h index 7f6d858ff47a..eb810074f1e7 100644 --- a/arch/x86/include/asm/fpu/types.h +++ b/arch/x86/include/asm/fpu/types.h @@ -115,8 +115,8 @@ enum xfeature { XFEATURE_PT_UNIMPLEMENTED_SO_FAR, XFEATURE_PKRU, XFEATURE_PASID, - XFEATURE_RSRVD_COMP_11, - XFEATURE_RSRVD_COMP_12, + XFEATURE_CET_USER, + XFEATURE_CET_KERNEL_UNUSED, XFEATURE_RSRVD_COMP_13, XFEATURE_RSRVD_COMP_14, XFEATURE_LBR, @@ -138,6 +138,8 @@ enum xfeature { #define XFEATURE_MASK_PT (1 << XFEATURE_PT_UNIMPLEMENTED_SO_FAR) #define XFEATURE_MASK_PKRU (1 << XFEATURE_PKRU) #define XFEATURE_MASK_PASID (1 << XFEATURE_PASID) +#define XFEATURE_MASK_CET_USER (1 << XFEATURE_CET_USER) +#define XFEATURE_MASK_CET_KERNEL (1 << XFEATURE_CET_KERNEL_UNUSED) #define XFEATURE_MASK_LBR (1 << XFEATURE_LBR) #define XFEATURE_MASK_XTILE_CFG (1 << XFEATURE_XTILE_CFG) #define XFEATURE_MASK_XTILE_DATA (1 << XFEATURE_XTILE_DATA) @@ -252,6 +254,16 @@ struct pkru_state { u32 pad; } __packed; +/* + * State component 11 is Control-flow Enforcement user states + */ +struct cet_user_state { + /* user control-flow settings */ + u64 user_cet; + /* user shadow stack pointer */ + u64 user_ssp; +}; + /* * State component 15: Architectural LBR configuration state. * The size of Arch LBR state depends on the number of LBRs (lbr_depth). diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h index cd3dd170e23a..d4427b88ee12 100644 --- a/arch/x86/include/asm/fpu/xstate.h +++ b/arch/x86/include/asm/fpu/xstate.h @@ -50,7 +50,8 @@ #define XFEATURE_MASK_USER_DYNAMIC XFEATURE_MASK_XTILE_DATA /* All currently supported supervisor features */ -#define XFEATURE_MASK_SUPERVISOR_SUPPORTED (XFEATURE_MASK_PASID) +#define XFEATURE_MASK_SUPERVISOR_SUPPORTED (XFEATURE_MASK_PASID | \ + XFEATURE_MASK_CET_USER) /* * A supervisor state component may not always contain valuable information, @@ -77,7 +78,8 @@ * Unsupported supervisor features. When a supervisor feature in this mask is * supported in the future, move it to the supported supervisor feature mask. */ -#define XFEATURE_MASK_SUPERVISOR_UNSUPPORTED (XFEATURE_MASK_PT) +#define XFEATURE_MASK_SUPERVISOR_UNSUPPORTED (XFEATURE_MASK_PT | \ + XFEATURE_MASK_CET_KERNEL) /* All supervisor states including supported and unsupported states. 
*/ #define XFEATURE_MASK_SUPERVISOR_ALL (XFEATURE_MASK_SUPERVISOR_SUPPORTED | \ diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index 714166cc25f2..13a80521dd51 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -39,26 +39,26 @@ */ static const char *xfeature_names[] = { - "x87 floating point registers" , - "SSE registers" , - "AVX registers" , - "MPX bounds registers" , - "MPX CSR" , - "AVX-512 opmask" , - "AVX-512 Hi256" , - "AVX-512 ZMM_Hi256" , - "Processor Trace (unused)" , + "x87 floating point registers", + "SSE registers", + "AVX registers", + "MPX bounds registers", + "MPX CSR", + "AVX-512 opmask", + "AVX-512 Hi256", + "AVX-512 ZMM_Hi256", + "Processor Trace (unused)", "Protection Keys User registers", "PASID state", - "unknown xstate feature" , - "unknown xstate feature" , - "unknown xstate feature" , - "unknown xstate feature" , - "unknown xstate feature" , - "unknown xstate feature" , - "AMX Tile config" , - "AMX Tile data" , - "unknown xstate feature" , + "Control-flow User registers", + "Control-flow Kernel registers (unused)", + "unknown xstate feature", + "unknown xstate feature", + "unknown xstate feature", + "unknown xstate feature", + "AMX Tile config", + "AMX Tile data", + "unknown xstate feature", }; static unsigned short xsave_cpuid_features[] __initdata = { @@ -73,6 +73,7 @@ static unsigned short xsave_cpuid_features[] __initdata = { [XFEATURE_PT_UNIMPLEMENTED_SO_FAR] = X86_FEATURE_INTEL_PT, [XFEATURE_PKRU] = X86_FEATURE_PKU, [XFEATURE_PASID] = X86_FEATURE_ENQCMD, + [XFEATURE_CET_USER] = X86_FEATURE_SHSTK, [XFEATURE_XTILE_CFG] = X86_FEATURE_AMX_TILE, [XFEATURE_XTILE_DATA] = X86_FEATURE_AMX_TILE, }; @@ -276,6 +277,7 @@ static void __init print_xstate_features(void) print_xstate_feature(XFEATURE_MASK_Hi16_ZMM); print_xstate_feature(XFEATURE_MASK_PKRU); print_xstate_feature(XFEATURE_MASK_PASID); + print_xstate_feature(XFEATURE_MASK_CET_USER); print_xstate_feature(XFEATURE_MASK_XTILE_CFG); print_xstate_feature(XFEATURE_MASK_XTILE_DATA); } @@ -344,6 +346,7 @@ static __init void os_xrstor_booting(struct xregs_state *xstate) XFEATURE_MASK_BNDREGS | \ XFEATURE_MASK_BNDCSR | \ XFEATURE_MASK_PASID | \ + XFEATURE_MASK_CET_USER | \ XFEATURE_MASK_XTILE) /* @@ -446,14 +449,15 @@ static void __init __xstate_dump_leaves(void) } \ } while (0) -#define XCHECK_SZ(sz, nr, nr_macro, __struct) do { \ - if ((nr == nr_macro) && \ - WARN_ONCE(sz != sizeof(__struct), \ - "%s: struct is %zu bytes, cpu state %d bytes\n", \ - __stringify(nr_macro), sizeof(__struct), sz)) { \ +#define XCHECK_SZ(sz, nr, __struct) ({ \ + if (WARN_ONCE(sz != sizeof(__struct), \ + "[%s]: struct is %zu bytes, cpu state %d bytes\n", \ + xfeature_names[nr], sizeof(__struct), sz)) { \ __xstate_dump_leaves(); \ } \ -} while (0) + true; \ +}) + /** * check_xtile_data_against_struct - Check tile data state size. @@ -527,36 +531,28 @@ static bool __init check_xstate_against_struct(int nr) * Ask the CPU for the size of the state. */ int sz = xfeature_size(nr); + /* * Match each CPU state with the corresponding software * structure. 
*/ - XCHECK_SZ(sz, nr, XFEATURE_YMM, struct ymmh_struct); - XCHECK_SZ(sz, nr, XFEATURE_BNDREGS, struct mpx_bndreg_state); - XCHECK_SZ(sz, nr, XFEATURE_BNDCSR, struct mpx_bndcsr_state); - XCHECK_SZ(sz, nr, XFEATURE_OPMASK, struct avx_512_opmask_state); - XCHECK_SZ(sz, nr, XFEATURE_ZMM_Hi256, struct avx_512_zmm_uppers_state); - XCHECK_SZ(sz, nr, XFEATURE_Hi16_ZMM, struct avx_512_hi16_state); - XCHECK_SZ(sz, nr, XFEATURE_PKRU, struct pkru_state); - XCHECK_SZ(sz, nr, XFEATURE_PASID, struct ia32_pasid_state); - XCHECK_SZ(sz, nr, XFEATURE_XTILE_CFG, struct xtile_cfg); - - /* The tile data size varies between implementations. */ - if (nr == XFEATURE_XTILE_DATA) - check_xtile_data_against_struct(sz); - - /* - * Make *SURE* to add any feature numbers in below if - * there are "holes" in the xsave state component - * numbers. - */ - if ((nr < XFEATURE_YMM) || - (nr >= XFEATURE_MAX) || - (nr == XFEATURE_PT_UNIMPLEMENTED_SO_FAR) || - ((nr >= XFEATURE_RSRVD_COMP_11) && (nr <= XFEATURE_RSRVD_COMP_16))) { + switch (nr) { + case XFEATURE_YMM: return XCHECK_SZ(sz, nr, struct ymmh_struct); + case XFEATURE_BNDREGS: return XCHECK_SZ(sz, nr, struct mpx_bndreg_state); + case XFEATURE_BNDCSR: return XCHECK_SZ(sz, nr, struct mpx_bndcsr_state); + case XFEATURE_OPMASK: return XCHECK_SZ(sz, nr, struct avx_512_opmask_state); + case XFEATURE_ZMM_Hi256: return XCHECK_SZ(sz, nr, struct avx_512_zmm_uppers_state); + case XFEATURE_Hi16_ZMM: return XCHECK_SZ(sz, nr, struct avx_512_hi16_state); + case XFEATURE_PKRU: return XCHECK_SZ(sz, nr, struct pkru_state); + case XFEATURE_PASID: return XCHECK_SZ(sz, nr, struct ia32_pasid_state); + case XFEATURE_XTILE_CFG: return XCHECK_SZ(sz, nr, struct xtile_cfg); + case XFEATURE_CET_USER: return XCHECK_SZ(sz, nr, struct cet_user_state); + case XFEATURE_XTILE_DATA: check_xtile_data_against_struct(sz); return true; + default: XSTATE_WARN_ON(1, "No structure for xstate: %d\n", nr); return false; } + return true; } From patchwork Sun Mar 19 00:15:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71678 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp530328wrt; Sat, 18 Mar 2023 18:28:55 -0700 (PDT) X-Google-Smtp-Source: AK7set/KmK/xc+Y6PD2QEfpM5nVnbAO8KEZziDSkG1cOpR5KCv7cS5LgYNa4/v2/kXJUARtJXb+o X-Received: by 2002:a17:902:ec8b:b0:19e:b5d3:170f with SMTP id x11-20020a170902ec8b00b0019eb5d3170fmr15094715plg.4.1679189334727; Sat, 18 Mar 2023 18:28:54 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189334; cv=none; d=google.com; s=arc-20160816; b=Uh97wIuUQKZ6dvC8n0zctnS74yAQGSV47ahNnrIh1f7bLSHIxXJZ4om2lsl4KVp1VW 2f62mJSpmSk6hztVnOUiKjorsri2bh07CSABdoRjyhHbV71Zcpget50V0mFMzbcXSvPS eQVvJOdAs/ZPRTQ0qngg6OyqOSwBN6vBVyI2nEVZVMOh9XCJ9rEhMd/G0m+KIn0FmJC+ oWK38pV491+hobQx7bFJQvX3QVN3Wutq99lGWDJLDravcqvf9aOjLJHrMScqwQKn277M ZrwtkiSPoYvnhQ8K3U8uSs9l4udew7edr+LITaiMMBV4d1M5T0ojNz64GKvMJvQSf5gR jsEA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=+zckbUWRWZvfPROGysXYXzeczc+/ZvxVdCvyMsXY99o=; b=pMAsW5sFJW86odQ0n9rrMz32+d/RMhjrcuMG23EbFY8zLW3Tj/g2ujqwp6S64LBp0B t3ngXVmC3WUIE4ybqtSyyTW9VTZOQ1ab+nQbUo+kDSXvB52E0nq5ISe3e/BaErc0hwRI MkCV0BPBjaw5OQNg7SkDadkt9MncPGcz26c8OFNu8Mtv+t6Z2m0dImAprqyC2Q4yKiZ8 TY4g/kwXUdNQtIMAtdSlkTJ8/hpI6G7dZQx/PjK7RpEsEWG56+rqrma0BdgQs909qXky 
hInJJc0gEjdA++GawB+1zWXGV5vXalRj4hDTnUFr9lXX+bfn4+ejis/7NTP6EQ6+DVWR EEJA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=QPuaQR5V; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id x6-20020a1709028ec600b0019a71e14c19si6531045plo.320.2023.03.18.18.28.42; Sat, 18 Mar 2023 18:28:54 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=QPuaQR5V; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229911AbjCSAQX (ORCPT + 99 others); Sat, 18 Mar 2023 20:16:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35994 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229774AbjCSAQG (ORCPT ); Sat, 18 Mar 2023 20:16:06 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8C41528E74; Sat, 18 Mar 2023 17:16:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679184965; x=1710720965; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=JJ3ES6pKzLeV3jf7I3FnSvx/TVTiB5ybDZUP+4xGp5E=; b=QPuaQR5Vi4d3Z/x3CyUxK59vRUKuToq9j9Djl3/AZNL5VFrnUhj3+wUS V+e8hk9uYSdvuoi7snnZcabq/H97SMjgC/94Oiyhn4OBrSpsOjSsWFMKD 0sx6AGCXVvti9Z3q2ksWZj5JnmwWuILMgcnTDdsGO6Y/xCr4bHZxJazG9 Si0JexYLGLmk68X+e2DuZPa7ZDHZe7y1nuq4P4aAdoZjLBy1S+uSgzatH cy0tvjF7m8G+QX3T6zROWHoVh+MfCc4WON3GU8QSrgZULpwTQRKx3AAo1 BFJUin+YZv+Z8HBnlSKGexLK6XJkxCKnAtpOWNI39avew12wcnf7Z4WZs A==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338490848" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338490848" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:05 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672788" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672788" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:02 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 06/40] x86/fpu: Add helper for modifying xstate Date: Sat, 18 Mar 2023 17:15:01 -0700 Message-Id: <20230319001535.23210-7-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757635982486618?= X-GMAIL-MSGID: =?utf-8?q?1760757635982486618?= Just like user xfeatures, supervisor xfeatures can be active in the registers or present in the task FPU buffer. If the registers are active, the registers can be modified directly. If the registers are not active, the modification must be performed on the task FPU buffer. When the state is not active, the kernel could perform modifications directly to the buffer. But in order for it to do that, it needs to know where in the buffer the specific state it wants to modify is located. Doing this is not robust against optimizations that compact the FPU buffer, as each access would require computing where in the buffer it is. The easiest way to modify supervisor xfeature data is to force restore the registers and write directly to the MSRs. Often times this is just fine anyway as the registers need to be restored before returning to userspace. Do this for now, leaving buffer writing optimizations for the future. Add a new function fpregs_lock_and_load() that can simultaneously call fpregs_lock() and do this restore. Also perform some extra sanity checks in this function since this will be used in non-fpu focused code. Suggested-by: Thomas Gleixner Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v6: - Drop "but appear to work" (Boris) v5: - Fix spelling error (Boris) - Don't export fpregs_lock_and_load() (Boris) v3: - Rename to fpregs_lock_and_load() to match the unlocking fpregs_unlock(). (Kees) - Elaborate in comment about helper. (Dave) v2: - Drop optimization of writing directly the buffer, and change API accordingly. - fpregs_lock_and_load() suggested by tglx - Some commit log verbiage from dhansen --- arch/x86/include/asm/fpu/api.h | 9 +++++++++ arch/x86/kernel/fpu/core.c | 18 ++++++++++++++++++ 2 files changed, 27 insertions(+) diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h index 503a577814b2..aadc6893dcaa 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -82,6 +82,15 @@ static inline void fpregs_unlock(void) preempt_enable(); } +/* + * FPU state gets lazily restored before returning to userspace. So when in the + * kernel, the valid FPU state may be kept in the buffer. 
This function will force + * restore all the fpu state to the registers early if needed, and lock them from + * being automatically saved/restored. Then FPU state can be modified safely in the + * registers, before unlocking with fpregs_unlock(). + */ +void fpregs_lock_and_load(void); + #ifdef CONFIG_X86_DEBUG_FPU extern void fpregs_assert_state_consistent(void); #else diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index caf33486dc5e..f851558b673f 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -753,6 +753,24 @@ void switch_fpu_return(void) } EXPORT_SYMBOL_GPL(switch_fpu_return); +void fpregs_lock_and_load(void) +{ + /* + * fpregs_lock() only disables preemption (mostly). So modifying state + * in an interrupt could screw up some in progress fpregs operation. + * Warn about it. + */ + WARN_ON_ONCE(!irq_fpu_usable()); + WARN_ON_ONCE(current->flags & PF_KTHREAD); + + fpregs_lock(); + + fpregs_assert_state_consistent(); + + if (test_thread_flag(TIF_NEED_FPU_LOAD)) + fpregs_restore_userregs(); +} + #ifdef CONFIG_X86_DEBUG_FPU /* * If current FPU state according to its tracking (loaded FPU context on this From patchwork Sun Mar 19 00:15:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71692 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp532415wrt; Sat, 18 Mar 2023 18:35:51 -0700 (PDT) X-Google-Smtp-Source: AK7set/+jknRp1ATdXBeGehNzbEpf5SFzzuglZnKxtUFhRFCwiE6xj83Fkg+ejHcsd617UvBX22S X-Received: by 2002:a17:902:d505:b0:1a0:514c:c2d2 with SMTP id b5-20020a170902d50500b001a0514cc2d2mr13892082plg.68.1679189751353; Sat, 18 Mar 2023 18:35:51 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189751; cv=none; d=google.com; s=arc-20160816; b=FmN9W0d4vWl6pMzDDzwnhIsMgJJGKslNs0RJ3IFCkrVq9fwu/LDACqR4eYhQFKzaT/ S8ZnxIO5zksKmpMq3XPbfBq6zT67vHFma3yQGXXQCcPRB/LpNAVh23jYOxIq2C+uBYDG DhiZDZXZ/nLQ2y1ff5+EpycMcCJQZGXguMyT9qYXb6/bVapcWZpzDWfOpOQD2uGQxbs4 C7vOF/iOIdEoD/zyHUYQMbwHua7FcOXwbttw0ECXRtv6zUY7URfiOqBMAG81hmapKLw6 Hu/nxdKiqBpBuglAfBBRc+Hru9JL+W6JDwyn4qL8oMcbpb8kGqya+0OrpTcjsZSENSp1 NQ3w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=RjDWz6QztU2WGgm8Ai0lU/L1fcgwZ8GHfXLrNr2QNDk=; b=G2JTId1cFfA2ajMG3q3yNkQSntG1vDZ39nnNUSG9sy//O10RWR0M1AHQ+WRrVf0Ahw CNrypxV0jmrOPZXknWgOAL4uF298SovOsLlIB85nB6QUHN89kowqi64WY8cygyy5TC6E Wic1Z/4/oz2yh/D9SSoOADUP3KvgWwq3r3O+uZ+hYqnHWw4vpTx9OkqfpRDvRVsAH+yT x/EKs8qDq/DFcLgL/cQPUpVC5OUJwgRLvov7VHIYYlzHHERZyyoKLEQRxNNgho4tDq2b jqEnsNIO7BA0ss8GSIdLhIyFfXuNnSpqyL7jYtmR+/eT/6seDRYe9NqZZuekrrYda6Pa jPPQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=BE8NMD6y; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id x6-20020a1709028ec600b0019a71e14c19si6531045plo.320.2023.03.18.18.35.39; Sat, 18 Mar 2023 18:35:51 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=BE8NMD6y; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229999AbjCSAQx (ORCPT + 99 others); Sat, 18 Mar 2023 20:16:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36298 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229823AbjCSAQs (ORCPT ); Sat, 18 Mar 2023 20:16:48 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C5D1128D0B; Sat, 18 Mar 2023 17:16:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679184967; x=1710720967; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=/SBgbCkSzTb9QOJ8HmCDkj2Bq5biu5bt/+mw6FahRpc=; b=BE8NMD6y8YnPiX0D2+/QefGDlAKUhJtHn4kOvwe0t7w+LBSkWSA9fuSF Pmyl3cqrLcUJY4A9EIHTUvXE2kgtr0zWxCHgYZseJHqJBT8leW4PwL6BY 4QB6JuNc0h3wBWASwPEqdOJELPHT2bmqXbWP2WvNpG4KNITkQsdo7HmcE GwiIddCuHDsd7kJigJEv1rTMMI4HgRJgRJsJDRys3z6A076inal3TA9Yr dHDYEv7WNGUkmI8pyVllJB0N6koJAu96/jvvFkb1Nw6UYf77UMExiq1NL w4n4+wgnLzV/y4p22ZMYt1DEDB4tPvu4sPc64BiY0llIjEIEUEbtsMJfS g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338490871" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338490871" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:07 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672796" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672796" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:05 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 07/40] x86/traps: Move control protection handler to separate file Date: Sat, 18 Mar 2023 17:15:02 -0700 Message-Id: <20230319001535.23210-8-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760758072814157244?= X-GMAIL-MSGID: =?utf-8?q?1760758072814157244?= Today the control protection handler is defined in traps.c and used only for the kernel IBT feature. To reduce ifdeffery, move it to it's own file. In future patches, functionality will be added to make this handler also handle user shadow stack faults. So name the file cet.c. No functional change. Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Add "x86/traps" to log (Boris) v6: - Split move to cet.c and shadow stack enhancements to fault handler to separate files. (Kees) --- arch/x86/kernel/Makefile | 2 ++ arch/x86/kernel/cet.c | 76 ++++++++++++++++++++++++++++++++++++++++ arch/x86/kernel/traps.c | 75 --------------------------------------- 3 files changed, 78 insertions(+), 75 deletions(-) create mode 100644 arch/x86/kernel/cet.c diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index dd61752f4c96..92446f1dedd7 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -144,6 +144,8 @@ obj-$(CONFIG_CFI_CLANG) += cfi.o obj-$(CONFIG_CALL_THUNKS) += callthunks.o +obj-$(CONFIG_X86_CET) += cet.o + ### # 64 bit specific files ifeq ($(CONFIG_X86_64),y) diff --git a/arch/x86/kernel/cet.c b/arch/x86/kernel/cet.c new file mode 100644 index 000000000000..7ad22b705b64 --- /dev/null +++ b/arch/x86/kernel/cet.c @@ -0,0 +1,76 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include + +static __ro_after_init bool ibt_fatal = true; + +extern void ibt_selftest_ip(void); /* code label defined in asm below */ + +enum cp_error_code { + CP_EC = (1 << 15) - 1, + + CP_RET = 1, + CP_IRET = 2, + CP_ENDBR = 3, + CP_RSTRORSSP = 4, + CP_SETSSBSY = 5, + + CP_ENCL = 1 << 15, +}; + +DEFINE_IDTENTRY_ERRORCODE(exc_control_protection) +{ + if (!cpu_feature_enabled(X86_FEATURE_IBT)) { + pr_err("Unexpected #CP\n"); + BUG(); + } + + if (WARN_ON_ONCE(user_mode(regs) || (error_code & CP_EC) != CP_ENDBR)) + return; + + if (unlikely(regs->ip == (unsigned long)&ibt_selftest_ip)) { + regs->ax = 0; + return; + } + + pr_err("Missing ENDBR: %pS\n", (void *)instruction_pointer(regs)); + if (!ibt_fatal) { + printk(KERN_DEFAULT CUT_HERE); + __warn(__FILE__, __LINE__, (void *)regs->ip, TAINT_WARN, regs, NULL); + return; + } + BUG(); +} + +/* Must be noinline to ensure uniqueness of ibt_selftest_ip. 
*/ +noinline bool ibt_selftest(void) +{ + unsigned long ret; + + asm (" lea ibt_selftest_ip(%%rip), %%rax\n\t" + ANNOTATE_RETPOLINE_SAFE + " jmp *%%rax\n\t" + "ibt_selftest_ip:\n\t" + UNWIND_HINT_FUNC + ANNOTATE_NOENDBR + " nop\n\t" + + : "=a" (ret) : : "memory"); + + return !ret; +} + +static int __init ibt_setup(char *str) +{ + if (!strcmp(str, "off")) + setup_clear_cpu_cap(X86_FEATURE_IBT); + + if (!strcmp(str, "warn")) + ibt_fatal = false; + + return 1; +} + +__setup("ibt=", ibt_setup); diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index d317dc3d06a3..cc223e60aba2 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -213,81 +213,6 @@ DEFINE_IDTENTRY(exc_overflow) do_error_trap(regs, 0, "overflow", X86_TRAP_OF, SIGSEGV, 0, NULL); } -#ifdef CONFIG_X86_KERNEL_IBT - -static __ro_after_init bool ibt_fatal = true; - -extern void ibt_selftest_ip(void); /* code label defined in asm below */ - -enum cp_error_code { - CP_EC = (1 << 15) - 1, - - CP_RET = 1, - CP_IRET = 2, - CP_ENDBR = 3, - CP_RSTRORSSP = 4, - CP_SETSSBSY = 5, - - CP_ENCL = 1 << 15, -}; - -DEFINE_IDTENTRY_ERRORCODE(exc_control_protection) -{ - if (!cpu_feature_enabled(X86_FEATURE_IBT)) { - pr_err("Unexpected #CP\n"); - BUG(); - } - - if (WARN_ON_ONCE(user_mode(regs) || (error_code & CP_EC) != CP_ENDBR)) - return; - - if (unlikely(regs->ip == (unsigned long)&ibt_selftest_ip)) { - regs->ax = 0; - return; - } - - pr_err("Missing ENDBR: %pS\n", (void *)instruction_pointer(regs)); - if (!ibt_fatal) { - printk(KERN_DEFAULT CUT_HERE); - __warn(__FILE__, __LINE__, (void *)regs->ip, TAINT_WARN, regs, NULL); - return; - } - BUG(); -} - -/* Must be noinline to ensure uniqueness of ibt_selftest_ip. */ -noinline bool ibt_selftest(void) -{ - unsigned long ret; - - asm (" lea ibt_selftest_ip(%%rip), %%rax\n\t" - ANNOTATE_RETPOLINE_SAFE - " jmp *%%rax\n\t" - "ibt_selftest_ip:\n\t" - UNWIND_HINT_FUNC - ANNOTATE_NOENDBR - " nop\n\t" - - : "=a" (ret) : : "memory"); - - return !ret; -} - -static int __init ibt_setup(char *str) -{ - if (!strcmp(str, "off")) - setup_clear_cpu_cap(X86_FEATURE_IBT); - - if (!strcmp(str, "warn")) - ibt_fatal = false; - - return 1; -} - -__setup("ibt=", ibt_setup); - -#endif /* CONFIG_X86_KERNEL_IBT */ - #ifdef CONFIG_X86_F00F_BUG void handle_invalid_op(struct pt_regs *regs) #else From patchwork Sun Mar 19 00:15:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71664 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp526997wrt; Sat, 18 Mar 2023 18:16:18 -0700 (PDT) X-Google-Smtp-Source: AK7set/w0CtmF0hQZ0zpTNnsRuCFTtj81axMugdNT+0fsukVDJDLuHL+2QWlo01USregxT4S6FKE X-Received: by 2002:a05:6a20:7da6:b0:cc:7106:b84b with SMTP id v38-20020a056a207da600b000cc7106b84bmr9200896pzj.29.1679188578147; Sat, 18 Mar 2023 18:16:18 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188578; cv=none; d=google.com; s=arc-20160816; b=XUIrtDzRo7vH00h79sazW80JeeXujAvyZxv0TzMJyEFTdVSUmjXNbBiYEAEHYe1utq Onm3J9kRX6gRPtdvO5vx0Xi9+HB1wUzH2luMe0g98v+YInBpAtEyiUeOYOM3jsLAHNlj kJikbec1oecgkzR/Q1g7oJ86hmhST2bHwytSkO220DVdwoSQE0DQTygh/n4ADkiRbl97 NKjHjn/17+1jjsTBQPVcXrdwjyJhrququoMQ93e98+9bU8X+NCerYoGvwsksSfBwS48G iUEVFy39VBX0hqGc5ypw74hrZYjM1+34mKhjCGjwi7m0+4jtCxw8qG60ii3V7/ozGFDT +Kpg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject 
:cc:to:from:dkim-signature; bh=8QtzyDW1ykTiAkFRIMO4HMWo77/z46ElfMQlWOnkzNc=; b=B0fxN+EYFbOrrXE02PxwLqmUW39X9s5jpEV0qk3ryGTwDsNZ3g0bA8ipWp1bhPgidm sO6uPz38OAta6X3LxwBQ9VTFX8Yo00m9APbr78FwS1ubWbkrFSy9H5jCxv8R2VW4NubK IpfZlmzbAHyKT5GE+eaDZaJolyu2WJu3xkUYNNwou8O0cb37hkKrSuRA98a11DPztPAV IlIGxSg3Rna2tdU5ePHYyE5Pi+v9Wp2ywL6WnFcSaWshPTDiUaerO2RQuiILAwonRTG5 mEB/RWexW8Xogk7HYtswgPI8QACL26jXxOrAnU707XUxlK20xXhIivABjlfprL9N8L/e nIeg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="leWeNrN/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id p5-20020a631e45000000b004fc274006ddsi6209365pgm.670.2023.03.18.18.16.04; Sat, 18 Mar 2023 18:16:18 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="leWeNrN/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229765AbjCSAQz (ORCPT + 99 others); Sat, 18 Mar 2023 20:16:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36590 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229942AbjCSAQt (ORCPT ); Sat, 18 Mar 2023 20:16:49 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2F96C28EB8; Sat, 18 Mar 2023 17:16:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679184969; x=1710720969; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=4pjPISZWb7kvLOoOGyus9YN15ONkQWn4j+qGkUUpqp8=; b=leWeNrN/M9hD5kgwMQJOVym13vgNgS00cOh+NRkXt9WMs50y8fcjt1zQ x8dLkPabqLu1QllfmwTwQDbSTbXxlHRyhpzLD1RzwndqAo9q9HJ2yfGSm lEppGL0tHxIDP1MPn9aSJgEsjTBLDnefBbO8N8JYSKt9mI5sc1khD9din BU/0pONVrbN08dTXyrlVnOp2UomilrUV8k9mtXmDDWuGrQI53Rg2F/TkK QsAiMCUdM1Z8z40zlX9Sg/a9FXQb7I2DKrMyfjy1feZHkKtuGMu1JBQyi ky7L8s/Qit577u6PsBnssBdrZWTlXcMSRSuoJVW9t1UI5Oqx907P6hgvR A==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338490893" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338490893" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:08 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672805" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672805" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:06 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . 
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 08/40] x86/shstk: Add user control-protection fault handler Date: Sat, 18 Mar 2023 17:15:03 -0700 Message-Id: <20230319001535.23210-9-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760756842335366212?= X-GMAIL-MSGID: =?utf-8?q?1760756842335366212?= A control-protection fault is triggered when a control-flow transfer attempt violates Shadow Stack or Indirect Branch Tracking constraints. For example, the return address for a RET instruction differs from the copy on the shadow stack. There already exists a control-protection fault handler for handling kernel IBT faults. Refactor this fault handler into separate user and kernel handlers, like the page fault handler. Add a control-protection handler for usermode. To avoid ifdeffery, put them both in a new file cet.c, which is compiled in the case of either of the two CET features supported in the kernel: kernel IBT or user mode shadow stack. Move some static inline functions from traps.c into a header so they can be used in cet.c. Opportunistically fix a comment in the kernel IBT part of the fault handler that is on the end of the line instead of preceding it. Keep the same behavior for the kernel side of the fault handler, except for converting a BUG to a WARN in the case of a #CP happening when the feature is missing. This unifies the behavior with the new shadow stack code, and also prevents the kernel from crashing under this situation which is potentially recoverable. The control-protection fault handler works in a similar way as the general protection fault handler. It provides the si_code SEGV_CPERR to the signal handler. 
Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v7: - Adjust alignment of WARN statement v6: - Split into separate patches (Kees) - Change to "x86/shstk" in commit log (Boris) v5: - Move to separate file to avoid ifdeffery (Boris) - Improvements to commit log (Boris) - Rename control_protection_err (Boris) - Move comment from end of line in IBT fault handler (Boris) v3: - Shorten user/kernel #CP handler function names (peterz) - Restore CP_ENDBR check to kernel handler (peterz) - Utilize CONFIG_X86_CET (Kees) - Unify "unexpected" warnings (Andrew Cooper) - Use 2d array for error code chars (Andrew Cooper) - Add comment about why to read SSP MSR before enabling interrupts v2: - Integrate with kernel IBT fault handler - Update printed messages. (Dave) - Remove array_index_nospec() usage. (Dave) - Remove IBT messages. (Dave) - Add enclave error code bit processing it case it can get triggered somehow. - Add extra "unknown" in control_protection_err. --- arch/arm/kernel/signal.c | 2 +- arch/arm64/kernel/signal.c | 2 +- arch/arm64/kernel/signal32.c | 2 +- arch/sparc/kernel/signal32.c | 2 +- arch/sparc/kernel/signal_64.c | 2 +- arch/x86/include/asm/disabled-features.h | 8 +- arch/x86/include/asm/idtentry.h | 2 +- arch/x86/include/asm/traps.h | 12 +++ arch/x86/kernel/cet.c | 94 +++++++++++++++++++++--- arch/x86/kernel/idt.c | 2 +- arch/x86/kernel/signal_32.c | 2 +- arch/x86/kernel/signal_64.c | 2 +- arch/x86/kernel/traps.c | 12 --- arch/x86/xen/enlighten_pv.c | 2 +- arch/x86/xen/xen-asm.S | 2 +- include/uapi/asm-generic/siginfo.h | 3 +- 16 files changed, 117 insertions(+), 34 deletions(-) diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c index e07f359254c3..9a3c9de5ac5e 100644 --- a/arch/arm/kernel/signal.c +++ b/arch/arm/kernel/signal.c @@ -681,7 +681,7 @@ asmlinkage void do_rseq_syscall(struct pt_regs *regs) */ static_assert(NSIGILL == 11); static_assert(NSIGFPE == 15); -static_assert(NSIGSEGV == 9); +static_assert(NSIGSEGV == 10); static_assert(NSIGBUS == 5); static_assert(NSIGTRAP == 6); static_assert(NSIGCHLD == 6); diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 06a02707f488..19b6b292892c 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -1341,7 +1341,7 @@ void __init minsigstksz_setup(void) */ static_assert(NSIGILL == 11); static_assert(NSIGFPE == 15); -static_assert(NSIGSEGV == 9); +static_assert(NSIGSEGV == 10); static_assert(NSIGBUS == 5); static_assert(NSIGTRAP == 6); static_assert(NSIGCHLD == 6); diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c index 4700f8522d27..bbd542704730 100644 --- a/arch/arm64/kernel/signal32.c +++ b/arch/arm64/kernel/signal32.c @@ -460,7 +460,7 @@ void compat_setup_restart_syscall(struct pt_regs *regs) */ static_assert(NSIGILL == 11); static_assert(NSIGFPE == 15); -static_assert(NSIGSEGV == 9); +static_assert(NSIGSEGV == 10); static_assert(NSIGBUS == 5); static_assert(NSIGTRAP == 6); static_assert(NSIGCHLD == 6); diff --git a/arch/sparc/kernel/signal32.c b/arch/sparc/kernel/signal32.c index dad38960d1a8..82da8a2d769d 100644 --- a/arch/sparc/kernel/signal32.c +++ b/arch/sparc/kernel/signal32.c @@ -751,7 +751,7 @@ asmlinkage int do_sys32_sigstack(u32 u_ssptr, u32 u_ossptr, unsigned long sp) */ static_assert(NSIGILL == 11); static_assert(NSIGFPE == 15); -static_assert(NSIGSEGV == 9); +static_assert(NSIGSEGV == 10); 
static_assert(NSIGBUS == 5); static_assert(NSIGTRAP == 6); static_assert(NSIGCHLD == 6); diff --git a/arch/sparc/kernel/signal_64.c b/arch/sparc/kernel/signal_64.c index 570e43e6fda5..b4e410976e0d 100644 --- a/arch/sparc/kernel/signal_64.c +++ b/arch/sparc/kernel/signal_64.c @@ -562,7 +562,7 @@ void do_notify_resume(struct pt_regs *regs, unsigned long orig_i0, unsigned long */ static_assert(NSIGILL == 11); static_assert(NSIGFPE == 15); -static_assert(NSIGSEGV == 9); +static_assert(NSIGSEGV == 10); static_assert(NSIGBUS == 5); static_assert(NSIGTRAP == 6); static_assert(NSIGCHLD == 6); diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index 505f78ddca82..652e366b68a0 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -105,6 +105,12 @@ #define DISABLE_USER_SHSTK (1 << (X86_FEATURE_USER_SHSTK & 31)) #endif +#ifdef CONFIG_X86_KERNEL_IBT +#define DISABLE_IBT 0 +#else +#define DISABLE_IBT (1 << (X86_FEATURE_IBT & 31)) +#endif + /* * Make sure to add features to the correct mask */ @@ -128,7 +134,7 @@ #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \ DISABLE_ENQCMD) #define DISABLED_MASK17 0 -#define DISABLED_MASK18 0 +#define DISABLED_MASK18 (DISABLE_IBT) #define DISABLED_MASK19 0 #define DISABLED_MASK20 0 #define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21) diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h index b241af4ce9b4..61e0e6301f09 100644 --- a/arch/x86/include/asm/idtentry.h +++ b/arch/x86/include/asm/idtentry.h @@ -614,7 +614,7 @@ DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF, xenpv_exc_double_fault); #endif /* #CP */ -#ifdef CONFIG_X86_KERNEL_IBT +#ifdef CONFIG_X86_CET DECLARE_IDTENTRY_ERRORCODE(X86_TRAP_CP, exc_control_protection); #endif diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h index 47ecfff2c83d..75e0dabf0c45 100644 --- a/arch/x86/include/asm/traps.h +++ b/arch/x86/include/asm/traps.h @@ -47,4 +47,16 @@ void __noreturn handle_stack_overflow(struct pt_regs *regs, struct stack_info *info); #endif +static inline void cond_local_irq_enable(struct pt_regs *regs) +{ + if (regs->flags & X86_EFLAGS_IF) + local_irq_enable(); +} + +static inline void cond_local_irq_disable(struct pt_regs *regs) +{ + if (regs->flags & X86_EFLAGS_IF) + local_irq_disable(); +} + #endif /* _ASM_X86_TRAPS_H */ diff --git a/arch/x86/kernel/cet.c b/arch/x86/kernel/cet.c index 7ad22b705b64..cc10d8be9d74 100644 --- a/arch/x86/kernel/cet.c +++ b/arch/x86/kernel/cet.c @@ -4,10 +4,6 @@ #include #include -static __ro_after_init bool ibt_fatal = true; - -extern void ibt_selftest_ip(void); /* code label defined in asm below */ - enum cp_error_code { CP_EC = (1 << 15) - 1, @@ -20,15 +16,80 @@ enum cp_error_code { CP_ENCL = 1 << 15, }; -DEFINE_IDTENTRY_ERRORCODE(exc_control_protection) +static const char cp_err[][10] = { + [0] = "unknown", + [1] = "near ret", + [2] = "far/iret", + [3] = "endbranch", + [4] = "rstorssp", + [5] = "setssbsy", +}; + +static const char *cp_err_string(unsigned long error_code) +{ + unsigned int cpec = error_code & CP_EC; + + if (cpec >= ARRAY_SIZE(cp_err)) + cpec = 0; + return cp_err[cpec]; +} + +static void do_unexpected_cp(struct pt_regs *regs, unsigned long error_code) +{ + WARN_ONCE(1, "Unexpected %s #CP, error_code: %s\n", + user_mode(regs) ? 
"user mode" : "kernel mode", + cp_err_string(error_code)); +} + +static DEFINE_RATELIMIT_STATE(cpf_rate, DEFAULT_RATELIMIT_INTERVAL, + DEFAULT_RATELIMIT_BURST); + +static void do_user_cp_fault(struct pt_regs *regs, unsigned long error_code) { - if (!cpu_feature_enabled(X86_FEATURE_IBT)) { - pr_err("Unexpected #CP\n"); - BUG(); + struct task_struct *tsk; + unsigned long ssp; + + /* + * An exception was just taken from userspace. Since interrupts are disabled + * here, no scheduling should have messed with the registers yet and they + * will be whatever is live in userspace. So read the SSP before enabling + * interrupts so locking the fpregs to do it later is not required. + */ + rdmsrl(MSR_IA32_PL3_SSP, ssp); + + cond_local_irq_enable(regs); + + tsk = current; + tsk->thread.error_code = error_code; + tsk->thread.trap_nr = X86_TRAP_CP; + + /* Ratelimit to prevent log spamming. */ + if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) && + __ratelimit(&cpf_rate)) { + pr_emerg("%s[%d] control protection ip:%lx sp:%lx ssp:%lx error:%lx(%s)%s", + tsk->comm, task_pid_nr(tsk), + regs->ip, regs->sp, ssp, error_code, + cp_err_string(error_code), + error_code & CP_ENCL ? " in enclave" : ""); + print_vma_addr(KERN_CONT " in ", regs->ip); + pr_cont("\n"); } - if (WARN_ON_ONCE(user_mode(regs) || (error_code & CP_EC) != CP_ENDBR)) + force_sig_fault(SIGSEGV, SEGV_CPERR, (void __user *)0); + cond_local_irq_disable(regs); +} + +static __ro_after_init bool ibt_fatal = true; + +/* code label defined in asm below */ +extern void ibt_selftest_ip(void); + +static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code) +{ + if ((error_code & CP_EC) != CP_ENDBR) { + do_unexpected_cp(regs, error_code); return; + } if (unlikely(regs->ip == (unsigned long)&ibt_selftest_ip)) { regs->ax = 0; @@ -74,3 +135,18 @@ static int __init ibt_setup(char *str) } __setup("ibt=", ibt_setup); + +DEFINE_IDTENTRY_ERRORCODE(exc_control_protection) +{ + if (user_mode(regs)) { + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + do_user_cp_fault(regs, error_code); + else + do_unexpected_cp(regs, error_code); + } else { + if (cpu_feature_enabled(X86_FEATURE_IBT)) + do_kernel_cp_fault(regs, error_code); + else + do_unexpected_cp(regs, error_code); + } +} diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c index a58c6bc1cd68..5074b8420359 100644 --- a/arch/x86/kernel/idt.c +++ b/arch/x86/kernel/idt.c @@ -107,7 +107,7 @@ static const __initconst struct idt_data def_idts[] = { ISTG(X86_TRAP_MC, asm_exc_machine_check, IST_INDEX_MCE), #endif -#ifdef CONFIG_X86_KERNEL_IBT +#ifdef CONFIG_X86_CET INTG(X86_TRAP_CP, asm_exc_control_protection), #endif diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c index 9027fc088f97..c12624bc82a3 100644 --- a/arch/x86/kernel/signal_32.c +++ b/arch/x86/kernel/signal_32.c @@ -402,7 +402,7 @@ int ia32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs) */ static_assert(NSIGILL == 11); static_assert(NSIGFPE == 15); -static_assert(NSIGSEGV == 9); +static_assert(NSIGSEGV == 10); static_assert(NSIGBUS == 5); static_assert(NSIGTRAP == 6); static_assert(NSIGCHLD == 6); diff --git a/arch/x86/kernel/signal_64.c b/arch/x86/kernel/signal_64.c index 13a1e6083837..0e808c72bf7e 100644 --- a/arch/x86/kernel/signal_64.c +++ b/arch/x86/kernel/signal_64.c @@ -403,7 +403,7 @@ void sigaction_compat_abi(struct k_sigaction *act, struct k_sigaction *oact) */ static_assert(NSIGILL == 11); static_assert(NSIGFPE == 15); -static_assert(NSIGSEGV == 9); +static_assert(NSIGSEGV == 10); 
static_assert(NSIGBUS == 5); static_assert(NSIGTRAP == 6); static_assert(NSIGCHLD == 6); diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index cc223e60aba2..18fb9d620824 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -77,18 +77,6 @@ DECLARE_BITMAP(system_vectors, NR_VECTORS); -static inline void cond_local_irq_enable(struct pt_regs *regs) -{ - if (regs->flags & X86_EFLAGS_IF) - local_irq_enable(); -} - -static inline void cond_local_irq_disable(struct pt_regs *regs) -{ - if (regs->flags & X86_EFLAGS_IF) - local_irq_disable(); -} - __always_inline int is_valid_bugaddr(unsigned long addr) { if (addr < TASK_SIZE_MAX) diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c index bb59cc6ddb2d..9c29cd5393cc 100644 --- a/arch/x86/xen/enlighten_pv.c +++ b/arch/x86/xen/enlighten_pv.c @@ -640,7 +640,7 @@ static struct trap_array_entry trap_array[] = { TRAP_ENTRY(exc_coprocessor_error, false ), TRAP_ENTRY(exc_alignment_check, false ), TRAP_ENTRY(exc_simd_coprocessor_error, false ), -#ifdef CONFIG_X86_KERNEL_IBT +#ifdef CONFIG_X86_CET TRAP_ENTRY(exc_control_protection, false ), #endif }; diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S index 4a184f6e4e4d..7cdcb4ce6976 100644 --- a/arch/x86/xen/xen-asm.S +++ b/arch/x86/xen/xen-asm.S @@ -148,7 +148,7 @@ xen_pv_trap asm_exc_page_fault xen_pv_trap asm_exc_spurious_interrupt_bug xen_pv_trap asm_exc_coprocessor_error xen_pv_trap asm_exc_alignment_check -#ifdef CONFIG_X86_KERNEL_IBT +#ifdef CONFIG_X86_CET xen_pv_trap asm_exc_control_protection #endif #ifdef CONFIG_X86_MCE diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h index ffbe4cec9f32..0f52d0ac47c5 100644 --- a/include/uapi/asm-generic/siginfo.h +++ b/include/uapi/asm-generic/siginfo.h @@ -242,7 +242,8 @@ typedef struct siginfo { #define SEGV_ADIPERR 7 /* Precise MCD exception */ #define SEGV_MTEAERR 8 /* Asynchronous ARM MTE error */ #define SEGV_MTESERR 9 /* Synchronous ARM MTE exception */ -#define NSIGSEGV 9 +#define SEGV_CPERR 10 /* Control protection fault */ +#define NSIGSEGV 10 /* * SIGBUS si_codes From patchwork Sun Mar 19 00:15:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71661 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp526356wrt; Sat, 18 Mar 2023 18:13:58 -0700 (PDT) X-Google-Smtp-Source: AK7set+Ahrm62qMbx+zhUDyWpAbwRirDiqSpwIMtI8W6mC+wmigXkKc7pRaifkeKmk5elfWFBXkR X-Received: by 2002:a05:6a20:431e:b0:d4:c806:bdb6 with SMTP id h30-20020a056a20431e00b000d4c806bdb6mr15656041pzk.45.1679188437934; Sat, 18 Mar 2023 18:13:57 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188437; cv=none; d=google.com; s=arc-20160816; b=ghhJ0TgSvVFCg7qSyvsOlueJiFOnxPHZCppVnH9zEF8UrDU1hmT6rJ5YQYep7MbvJU 7kZgGp2S86s59xNjcVTwSRlYvqrxLR7zScuPWayEnf9dye/MVf1ywgxm+bwPin9H+Kww GkeTqcrfnuoHgRIz1oP5mJ3nvp8WSyxQYoVoynEB7jgQy/xj6lwYROm3HHM5LhCiYph/ uh4Fuiq4LK/I8vWtRKhzPBaltXNERVBBEvmQM3LEhnjH3QG8wxn3c5uIHE+J4q/ZZTM5 oAvkUZEVk4GeW2jiYEaMRZe1YVcUBhVXxqMg5gedCxTWIpyvVZh/vdlvmpwAtgRoNKO/ NLJA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=ocahhM3Bn6Dfg/QyiFRf3gcfnukdUEBCs6hjdna9J3s=; b=SP9JO/gTBcEkzR/M+Ve8dTts7j0KKfpltrzHEigVHKXE+UuozagIxG/q47nIMzJohX JQ0jl+Ylh/538b9thg8mHqV12TVi++S3YCt4a/muoyf9UaGXl144sMEYBEpa1K8DTaJi 
DfYej7bpcSFgqNmgguG3pt22HZvBlHFpMVpUT62TKUDqbFNZY4fmcAgFkeL3vCtZh75n S6rwDMCb1/m46WLW2oRjJ5gsnvvuBMt2nkU/COyzChfUCLo3fypNVGCI8KS2BJeNfILf /tcSQisEiuoVTKIc6phqzo036OjEyrD82mgk2rpj5eQ+X1sy5bnAI7/ao6AmuHE6yaEY EmQw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=HVXyMwRX; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id o8-20020a656148000000b00502fd16cd1csi6543680pgv.367.2023.03.18.18.13.45; Sat, 18 Mar 2023 18:13:57 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=HVXyMwRX; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230040AbjCSAQ7 (ORCPT + 99 others); Sat, 18 Mar 2023 20:16:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37416 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229950AbjCSAQt (ORCPT ); Sat, 18 Mar 2023 20:16:49 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2962529158; Sat, 18 Mar 2023 17:16:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679184971; x=1710720971; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=dTEoEHoAOcO1vfWEsqQCGpPA+bqJxwa+ObH7rTEuQRQ=; b=HVXyMwRX+z3TAmkvp/qM3U8jGbqyuhYoM4a+Puv8OAHeZhLCR1SVi0Nw FQhEehs6iX6/IjcXrFQIf1g2twbhf9GqahCCDn41KL4ERwzBnXMCY5IXg El1DvEgQuIWMH/e/k58uNuxOjW9PjsrRkuEF8pzn8HnKSIgp19UtItdci o9Ng2EK62kwzf/rln82ALuOetNZ/j8MwsE9SK4k1Mug9bNTyn5GjFhJAa Ifn4iB/7rEaHJbTE9ltR5WG7nEta17ON07ft4hV2vAt6k3Xq6pqzF2d5S YlnS8klEV/+nuwhgKBFhMK6KToOdWXIrSFBveQGO4Nw+iTKxm74ckpchi w==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338490915" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338490915" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:09 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672812" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672812" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:08 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . 
Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 09/40] x86/mm: Remove _PAGE_DIRTY from kernel RO pages Date: Sat, 18 Mar 2023 17:15:04 -0700 Message-Id: <20230319001535.23210-10-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760756695360477076?= X-GMAIL-MSGID: =?utf-8?q?1760756695360477076?= New processors that support Shadow Stack regard Write=0,Dirty=1 PTEs as shadow stack pages. In normal cases, it can be helpful to create Write=1 PTEs as also Dirty=1 if HW dirty tracking is not needed, because if the Dirty bit is not already set the CPU has to set Dirty=1 when the memory gets written to. This creates additional work for the CPU. So traditional wisdom was to simply set the Dirty bit whenever you didn't care about it. However, it was never really very helpful for read-only kernel memory. When CR4.CET=1 and IA32_S_CET.SH_STK_EN=1, some instructions can write to such supervisor memory. The kernel does not set IA32_S_CET.SH_STK_EN, so avoiding kernel Write=0,Dirty=1 memory is not strictly needed for any functional reason. But having Write=0,Dirty=1 kernel memory doesn't have any functional benefit either, so to reduce ambiguity between shadow stack and regular Write=0 pages, remove Dirty=1 from any kernel Write=0 PTEs. 
Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v6: - Also remove dirty from newly added set_memory_rox() v5: - Spelling and grammar in commit log (Boris) v3: - Update commit log (Andrew Cooper, Peterz) v2: - Normalize PTE bit descriptions between patches --- arch/x86/include/asm/pgtable_types.h | 6 +++--- arch/x86/mm/pat/set_memory.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index 447d4bee25c4..0646ad00178b 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -192,10 +192,10 @@ enum page_cache_mode { #define _KERNPG_TABLE (__PP|__RW| 0|___A| 0|___D| 0| 0| _ENC) #define _PAGE_TABLE_NOENC (__PP|__RW|_USR|___A| 0|___D| 0| 0) #define _PAGE_TABLE (__PP|__RW|_USR|___A| 0|___D| 0| 0| _ENC) -#define __PAGE_KERNEL_RO (__PP| 0| 0|___A|__NX|___D| 0|___G) -#define __PAGE_KERNEL_ROX (__PP| 0| 0|___A| 0|___D| 0|___G) +#define __PAGE_KERNEL_RO (__PP| 0| 0|___A|__NX| 0| 0|___G) +#define __PAGE_KERNEL_ROX (__PP| 0| 0|___A| 0| 0| 0|___G) #define __PAGE_KERNEL_NOCACHE (__PP|__RW| 0|___A|__NX|___D| 0|___G| __NC) -#define __PAGE_KERNEL_VVAR (__PP| 0|_USR|___A|__NX|___D| 0|___G) +#define __PAGE_KERNEL_VVAR (__PP| 0|_USR|___A|__NX| 0| 0|___G) #define __PAGE_KERNEL_LARGE (__PP|__RW| 0|___A|__NX|___D|_PSE|___G) #define __PAGE_KERNEL_LARGE_EXEC (__PP|__RW| 0|___A| 0|___D|_PSE|___G) #define __PAGE_KERNEL_WP (__PP|__RW| 0|___A|__NX|___D| 0|___G| __WP) diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 356758b7d4b4..1b5c0dc9f32b 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -2073,12 +2073,12 @@ int set_memory_nx(unsigned long addr, int numpages) int set_memory_ro(unsigned long addr, int numpages) { - return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW), 0); + return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW | _PAGE_DIRTY), 0); } int set_memory_rox(unsigned long addr, int numpages) { - pgprot_t clr = __pgprot(_PAGE_RW); + pgprot_t clr = __pgprot(_PAGE_RW | _PAGE_DIRTY); if (__supported_pte_mask & _PAGE_NX) clr.pgprot |= _PAGE_NX; From patchwork Sun Mar 19 00:15:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71655 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp511331wrt; Sat, 18 Mar 2023 17:18:34 -0700 (PDT) X-Google-Smtp-Source: AK7set/YofQuTp1lIo8kFavKC1tcEYa3AMIPOMzbdAw+7GGiOzLuaCKWpKiwH2BVsEWCdIVbR3a3 X-Received: by 2002:a17:90b:1c85:b0:23d:1143:1e3c with SMTP id oo5-20020a17090b1c8500b0023d11431e3cmr13636879pjb.44.1679185114281; Sat, 18 Mar 2023 17:18:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679185114; cv=none; d=google.com; s=arc-20160816; b=FkEDujN7Y+t9Oeum5QmsamgkMrtfDCDeaACs2fX3izCSqF2/9Vl6/2rXZcb0s2iMWU jr7RzNZZSBfgf5TzddEKwL3/9fHI3zmNLzN68xanDOwt+YNAcezx5lPIEK2KjfvWeJPn v2/WpQSXlybh/5xXjlAx5nsitRC/NZG98c+F4pYOwSOxccjMwxzY117JtFheCtUNZ3am clY0J4ADOd5FsVgRqwl62H68sU2Bo7ALd7+c0qDX+FavoP1AKPWXAcwqpPvsbOthH7he KvXqPnNDIS1vMuE5JM6Y1RfluRxuNvYu0dHqfoLWX+QITCbfM3I4iuLanAoY+b0Cu43y idvg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject 
:cc:to:from:dkim-signature; bh=sRLLsxiM3hz9WKTUNGj41sd5fcP3AEYnmcR9FvvjXv8=; b=xnH0ICLykOfdsiNiC05MHgyFDjf95Cry9kA+FAORYYGQ3LyGX/kl9envL8iAPAHC+n X8TMtKdPAeVmitnL0nNqFlJWIRqu71fnyiMVq7soXGOcBHgLzVHsJTyKUTFdo3dEnSJV bXstKZcc4Q+bX8wi4HfdfwCwSBacp/kM6Yaz6i0ZQ1XFm6+d7FigUvgBZQ09pt/I7VjE fznRHJH8T6Z8bkrdvPvpdRVa24TBsjsy1noYawxHNi7CYv+njawx2HCV9iNDy02xX2Nl 79Wt9vd05ZGEN9OBWh3Ugk7lTzBNMIwT5H71yPjeBi/StkZscLithgM+JLzQZcNAKJ7Z RMKw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=RwYEX84C; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id nv14-20020a17090b1b4e00b0023d1b2fb2d9si6894879pjb.165.2023.03.18.17.18.19; Sat, 18 Mar 2023 17:18:34 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=RwYEX84C; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229878AbjCSARg (ORCPT + 99 others); Sat, 18 Mar 2023 20:17:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36572 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229994AbjCSAQw (ORCPT ); Sat, 18 Mar 2023 20:16:52 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3855928E62; Sat, 18 Mar 2023 17:16:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679184973; x=1710720973; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=Vja8kjxFIpUKpFfGiffO7CqOO8r6G2hYmB0i6nbkB4w=; b=RwYEX84ChaIWRDghDt+fDVKaSWBWMbigkMP2SpY1xgNOnUTTU2Nb6LD1 6cBpkGDIAMNiAaCliCGbvAEot17HbVyUMhM5+cm8dmUHhCBetC0mac266 KP9pZwHQqzUYreI6Oo8BYArzMCN2P7V1xRbuChAmrBzKtzQtqUdqNS+KF fZKim4euVPvwmHlGclOYzJmOCNyXlm6s0jqcFlwakRL6cmttzHaJYTARP QNO0m4Wt2pdvX85op/nYpWZheRvhjB+DL+sOn7kDN+kpdBRmx2oFSoGXD MJhLKSLJj7AOvUDZU2H0udhvDAMtlUdWKLwZKa1Rs/Y2UMzzVREmNAlWS w==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338490937" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338490937" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672817" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672817" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:10 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . 
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 10/40] x86/mm: Move pmd_write(), pud_write() up in the file Date: Sat, 18 Mar 2023 17:15:05 -0700 Message-Id: <20230319001535.23210-11-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760753210610587566?= X-GMAIL-MSGID: =?utf-8?q?1760753210610587566?= To prepare the introduction of _PAGE_SAVED_DIRTY, move pmd_write() and pud_write() up in the file, so that they can be used by other helpers below. No functional changes. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Reviewed-by: Kirill A. 
Shutemov Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- arch/x86/include/asm/pgtable.h | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 7425f32e5293..56eea96502c6 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -160,6 +160,18 @@ static inline int pte_write(pte_t pte) return pte_flags(pte) & _PAGE_RW; } +#define pmd_write pmd_write +static inline int pmd_write(pmd_t pmd) +{ + return pmd_flags(pmd) & _PAGE_RW; +} + +#define pud_write pud_write +static inline int pud_write(pud_t pud) +{ + return pud_flags(pud) & _PAGE_RW; +} + static inline int pte_huge(pte_t pte) { return pte_flags(pte) & _PAGE_PSE; @@ -1120,12 +1132,6 @@ extern int pmdp_clear_flush_young(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp); -#define pmd_write pmd_write -static inline int pmd_write(pmd_t pmd) -{ - return pmd_flags(pmd) & _PAGE_RW; -} - #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp) @@ -1155,12 +1161,6 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm, clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp); } -#define pud_write pud_write -static inline int pud_write(pud_t pud) -{ - return pud_flags(pud) & _PAGE_RW; -} - #ifndef pmdp_establish #define pmdp_establish pmdp_establish static inline pmd_t pmdp_establish(struct vm_area_struct *vma, From patchwork Sun Mar 19 00:15:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71681 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp531233wrt; Sat, 18 Mar 2023 18:31:47 -0700 (PDT) X-Google-Smtp-Source: AK7set+6J80YotIDMbLKnqNKs0wbSfHxViYpo6UE8JvYpVKlDaGIb1EPdpsKD2HxWjztFbO9Rb2h X-Received: by 2002:a05:6a20:7fa3:b0:cb:77f0:9a42 with SMTP id d35-20020a056a207fa300b000cb77f09a42mr15439898pzj.33.1679189507018; Sat, 18 Mar 2023 18:31:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189507; cv=none; d=google.com; s=arc-20160816; b=rl1mfIBtGfBW4VzxQWTVDkmu6Hz/xUATr6w0MqM78IC6TvqcQO4fgZ/2iHZxXIYNQn I31LMOBN6SRMbSavR9abkhL/2pyW+t9QY6Yk9JR1+f6G03fUbQn9RDdl8vXRTALwFqQf BCj55zpnZ/uSRiWaATkV1cAYcPmXL/f6M/RI01F+V/NhAW/lncV4kM4TnUDT98SZw9PM AioUmO9TfJL+hL6nCJLSzLqw24Ap3zvmEnmtt/ztPLkqRX3nLXH76H8vTRQtaQoAx4p4 GryySm2ARn1NxrytEfQd2oh4TzOOyRALpnWqc6RKV0/BVm3MCpflp134nO7sz2tHzDmS sk2g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=vY6sCeD2TLN/+eCA+S8sO9I7Rz5ryINUnVTgEv9BsKg=; b=HqTZ9jCfNydz91cOBkFYJH9wfFb8v2b5r59+0jyEvu9V3gUTnC2RLYYplH0mduWpA7 b2X7tO1VGGlfqxg33yuXHvKmw3ecPBCFpsxIaQeaqC9aF8w+hKcvt/4poVUZPvee3cas ufBSztNi7tySVRSxdVQYH/AZ7h7XjDEJATZMpTKccDxKf9q92TGHAHZn8q6IvD4LiWWi m1RBBHLlNdf/NXAvvxGetcAS0LFJq0qRW6RfIbblDnxQCqdyDOfqq5E7O5FViEWM4hek 77zbi6aaHXk/nlSyRl9PlxT1hPpEiEgZT8Ko9SyLcmf2r979gvq4BLzg//SeS3uxWu3H SDkQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=MRhGSha3; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) 
header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id o3-20020a634103000000b004fbc54ad3b6si6550344pga.470.2023.03.18.18.31.34; Sat, 18 Mar 2023 18:31:46 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=MRhGSha3; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229817AbjCSARq (ORCPT + 99 others); Sat, 18 Mar 2023 20:17:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230051AbjCSARI (ORCPT ); Sat, 18 Mar 2023 20:17:08 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2F8EE28E51; Sat, 18 Mar 2023 17:16:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679184979; x=1710720979; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3uhcjpuPc6iiKv5YasMIVAsUAr3DhRKyR9YoK1X/nYQ=; b=MRhGSha3VHuQsQ+yJ/GUD/YlLOawG3DbWKNkBXdMfZBC+p16Y20lEIY5 wyT8Os46y1DiNRLngqzkyn9I424cdVllQz0hXHEIrl4O5mTb0thMbur3R m7yh9eKUtAYEmTsKUT+/BYkTOqCLJgkA3l3vYtCHos2u9WXuphgOmaABY 1Hm9RluHumlGVlEPSlAH3a0Uq/8JCHRfroDQU04eeMYnR+OIll7OljB2Q WJmXBgUj+vnQHgdMHrvk6rBbUZzVBIKJ9VdiO/JDQHmEbQ0CcOYXntQ5U lviN7g+w9ukW0oDQFna1E19ho9JtTF8FQ/soy2WrGeJ5H7oqohbNSDoeQ g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338490959" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338490959" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672821" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672821" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:11 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 11/40] mm: Introduce pte_mkwrite_kernel() Date: Sat, 18 Mar 2023 17:15:06 -0700 Message-Id: <20230319001535.23210-12-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757816206506845?= X-GMAIL-MSGID: =?utf-8?q?1760757816206506845?= The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly. One of these changes is to allow for pte_mkwrite() to create different types of writable memory (the existing conventionally writable type and also the new shadow stack type). Future patches will convert pte_mkwrite() to take a VMA in order to facilitate this, however there are places in the kernel where pte_mkwrite() is called outside of the context of a VMA. These are for kernel memory. So create a new variant called pte_mkwrite_kernel() and switch the kernel users over to it. Have pte_mkwrite() and pte_mkwrite_kernel() be the same for now. Future patches will introduce changes to make pte_mkwrite() take a VMA. Only do this for architectures that need it because they call pte_mkwrite() in arch code without an associated VMA. Since it will only currently be used in arch code, so do not include it in arch_pgtable_helpers.rst. Suggested-by: David Hildenbrand Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Acked-by: David Hildenbrand Acked-by: Deepak Gupta Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook Link: https://lore.kernel.org/lkml/0e29a2d0-08d8-bcd6-ff26-4bea0e4037b0@redhat.com/ --- Hi Non-x86 Arch’s, x86 has a feature that allows for the creation of a special type of writable memory (shadow stack) that is only writable in limited specific ways. Previously, changes were proposed to core MM code to teach it to decide when to create normally writable memory or the special shadow stack writable memory, but David Hildenbrand suggested[0] to change pXX_mkwrite() to take a VMA, so awareness of shadow stack memory can be moved into x86 code. Since pXX_mkwrite() is defined in every arch, it requires some tree-wide changes. So that is why you are seeing some patches out of a big x86 series pop up in your arch mailing list. There is no functional change. After this refactor, the shadow stack series goes on to use the arch helpers to push shadow stack memory details inside arch/x86. Testing was just 0-day build testing. Hopefully that is enough context. Thanks! 
[0] https://lore.kernel.org/lkml/0e29a2d0-08d8-bcd6-ff26-4bea0e4037b0@redhat.com/ v6: - New patch --- arch/arm64/include/asm/pgtable.h | 7 ++++++- arch/arm64/mm/trans_pgd.c | 4 ++-- arch/s390/include/asm/pgtable.h | 7 ++++++- arch/s390/mm/pageattr.c | 2 +- arch/x86/include/asm/pgtable.h | 7 ++++++- arch/x86/xen/mmu_pv.c | 2 +- 6 files changed, 22 insertions(+), 7 deletions(-) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index b6ba466e2e8a..cccf8885792e 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -180,13 +180,18 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot) return pmd; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite_kernel(pte_t pte) { pte = set_pte_bit(pte, __pgprot(PTE_WRITE)); pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY)); return pte; } +static inline pte_t pte_mkwrite(pte_t pte) +{ + return pte_mkwrite_kernel(pte); +} + static inline pte_t pte_mkclean(pte_t pte) { pte = clear_pte_bit(pte, __pgprot(PTE_DIRTY)); diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index 4ea2eefbc053..5c07e68d80ea 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -40,7 +40,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) * read only (code, rodata). Clear the RDONLY bit from * the temporary mappings we use during restore. */ - set_pte(dst_ptep, pte_mkwrite(pte)); + set_pte(dst_ptep, pte_mkwrite_kernel(pte)); } else if (debug_pagealloc_enabled() && !pte_none(pte)) { /* * debug_pagealloc will removed the PTE_VALID bit if @@ -53,7 +53,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) */ BUG_ON(!pfn_valid(pte_pfn(pte))); - set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); + set_pte(dst_ptep, pte_mkpresent(pte_mkwrite_kernel(pte))); } } diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index 2c70b4d1263d..d4943f2d3f00 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -1005,7 +1005,7 @@ static inline pte_t pte_wrprotect(pte_t pte) return set_pte_bit(pte, __pgprot(_PAGE_PROTECT)); } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite_kernel(pte_t pte) { pte = set_pte_bit(pte, __pgprot(_PAGE_WRITE)); if (pte_val(pte) & _PAGE_DIRTY) @@ -1013,6 +1013,11 @@ static inline pte_t pte_mkwrite(pte_t pte) return pte; } +static inline pte_t pte_mkwrite(pte_t pte) +{ + return pte_mkwrite_kernel(pte); +} + static inline pte_t pte_mkclean(pte_t pte) { pte = clear_pte_bit(pte, __pgprot(_PAGE_DIRTY)); diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c index 85195c18b2e8..4ee5fe5caa23 100644 --- a/arch/s390/mm/pageattr.c +++ b/arch/s390/mm/pageattr.c @@ -96,7 +96,7 @@ static int walk_pte_level(pmd_t *pmdp, unsigned long addr, unsigned long end, if (flags & SET_MEMORY_RO) new = pte_wrprotect(new); else if (flags & SET_MEMORY_RW) - new = pte_mkwrite(pte_mkdirty(new)); + new = pte_mkwrite_kernel(pte_mkdirty(new)); if (flags & SET_MEMORY_NX) new = set_pte_bit(new, __pgprot(_PAGE_NOEXEC)); else if (flags & SET_MEMORY_X) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 56eea96502c6..3607f2572f9e 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -364,11 +364,16 @@ static inline pte_t pte_mkyoung(pte_t pte) return pte_set_flags(pte, _PAGE_ACCESSED); } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite_kernel(pte_t pte) { 
return pte_set_flags(pte, _PAGE_RW); } +static inline pte_t pte_mkwrite(pte_t pte) +{ + return pte_mkwrite_kernel(pte); +} + static inline pte_t pte_mkhuge(pte_t pte) { return pte_set_flags(pte, _PAGE_PSE); diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c index ee29fb558f2e..a23f04243c19 100644 --- a/arch/x86/xen/mmu_pv.c +++ b/arch/x86/xen/mmu_pv.c @@ -150,7 +150,7 @@ void make_lowmem_page_readwrite(void *vaddr) if (pte == NULL) return; /* vaddr missing */ - ptev = pte_mkwrite(*pte); + ptev = pte_mkwrite_kernel(*pte); if (HYPERVISOR_update_va_mapping(address, ptev, 0)) BUG(); From patchwork Sun Mar 19 00:15:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71686 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp531699wrt; Sat, 18 Mar 2023 18:33:16 -0700 (PDT) X-Google-Smtp-Source: AK7set9TsmFOxxcBAYQzxhc+FuN5b3+M+hWhx0GKfxNc2rHfZrxlke0quMQk3PiDz1ZkaH+bw/z4 X-Received: by 2002:a17:903:2281:b0:1a1:bd77:644 with SMTP id b1-20020a170903228100b001a1bd770644mr2664308plh.7.1679189595841; Sat, 18 Mar 2023 18:33:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189595; cv=none; d=google.com; s=arc-20160816; b=gPJl+itS1eTdtulhPsRVzsu0cR5xtk9jvh7GLnvsETaRG1MHByDzSykTSfnN2vKUoy Upk4rS2qLDtbUg/q2By8fpFrn/4/z1Gp3ssMR3HUgCl3lT3G+TY1aVGytwXAPYMLKjY8 nZQDa5SPuiDy3WipM5wgxPioj9KwO4Q/VUMegXgee5s6pW3ay2rXvME700lsyL0qEN36 XVrBilz+jiOjvl0xuyOo5fZ6PpVeU3xO+sRP7idGzKEpVtYOpod3PdNLE3f369cSKKyt BqHKw08AsIzXNdH8pBajATgLoQ+D4O5TqKXGR73dOrbBwe8XLiQzcFpjIkWlmqir0BW4 5T7A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=33oWnhrr3jWrLjHxAwTEeqeEC+bn441diBSfuvAd5ZQ=; b=N+pjT/SGDLHsfUer71tUWs+QMPn9SVvNT198d7mppd+Dy7698492yQ+f0nMuq09ptG 9omusI47n0h9IiHoZksX00UGu9Ov4ghawYz1cAQx8QZXLzQbvO6DXAOWCFKo68hPedlM /L2P1nyLXf+8A/1/T6l4OJSKeWad/wXwvcFXzAg05DPal6P0Gve0LUfDIKKx0Y65aRJH qWIISZiV11HnDBKl50oklfRxAk7pykeSgZkmv2s0IUAzHG65lbJKTvFbmxiDfRYS5RY5 RJ/ZRCJHIEPnVWX2Ue+WIw1H1UsQ6DlR5arIkDaBpZwhZUGawFWfcXAf1PW6QNDc3LtX IFew== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=cBdDLEj2; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id a9-20020a170902b58900b001a11cf69a4dsi6540861pls.226.2023.03.18.18.33.03; Sat, 18 Mar 2023 18:33:15 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=cBdDLEj2; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230071AbjCSAR4 (ORCPT + 99 others); Sat, 18 Mar 2023 20:17:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230061AbjCSARX (ORCPT ); Sat, 18 Mar 2023 20:17:23 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CD71028E99; Sat, 18 Mar 2023 17:16:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679184981; x=1710720981; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=izjgG3aKsLsDvcFVl85obe6QvAjx4LnRxYiOny/YjPI=; b=cBdDLEj22kYToXR6AuwBbS4aH0dsiFEWij1dPkYPkp5E/IBbxJOsXOtJ tQ/u8iLEUBJfW2KrccPeNJnmY1IKPVXtiXGc4wdwLOBw0ycNAkyMjbIx+ slSacdvl6/0ydsaMUYwQTjidn/xSDJqkMzgFu6Qzn6E+81YTnPGirKqwz loqVBO3JM8pjakDmTnYCOFryaJoxk3vw0BopghDNjlOCNm3h3eQUBhGTS ro4R5JjqaKayq+tTL9L+x1KrIAinj2heN7pAwyidL4caB9HtNdSNTk+sS p5N52vMyy4TVZ2rTjkMYlIl8qf1LAVs958lyRqkFPag+jD3oj3j9ZsGiG Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338490981" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338490981" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672826" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672826" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:13 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 12/40] s390/mm: Introduce pmd_mkwrite_kernel() Date: Sat, 18 Mar 2023 17:15:07 -0700 Message-Id: <20230319001535.23210-13-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757909493776810?= X-GMAIL-MSGID: =?utf-8?q?1760757909493776810?= The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly. One of these changes is to allow for pmd_mkwrite() to create different types of writable memory (the existing conventionally writable type and also the new shadow stack type). Future patches will convert pmd_mkwrite() to take a VMA in order to facilitate this, however there are places in the kernel where pmd_mkwrite() is called outside of the context of a VMA. These are for kernel memory. So create a new variant called pmd_mkwrite_kernel() and switch the kernel users over to it. Have pmd_mkwrite() and pmd_mkwrite_kernel() be the same for now. Future patches will introduce changes to make pmd_mkwrite() take a VMA. Only do this for architectures that need it because they call pmd_mkwrite() in arch code without an associated VMA. Since it will only currently be used in arch code, so do not include it in arch_pgtable_helpers.rst. Suggested-by: David Hildenbrand Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Acked-by: Heiko Carstens Acked-by: David Hildenbrand Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook Link: https://lore.kernel.org/lkml/0e29a2d0-08d8-bcd6-ff26-4bea0e4037b0@redhat.com/ --- Hi Non-x86 Arch’s, x86 has a feature that allows for the creation of a special type of writable memory (shadow stack) that is only writable in limited specific ways. Previously, changes were proposed to core MM code to teach it to decide when to create normally writable memory or the special shadow stack writable memory, but David Hildenbrand suggested[0] to change pXX_mkwrite() to take a VMA, so awareness of shadow stack memory can be moved into x86 code. Since pXX_mkwrite() is defined in every arch, it requires some tree-wide changes. So that is why you are seeing some patches out of a big x86 series pop up in your arch mailing list. There is no functional change. After this refactor, the shadow stack series goes on to use the arch helpers to push shadow stack memory details inside arch/x86. Testing was just 0-day build testing. Hopefully that is enough context. Thanks! 
[0] https://lore.kernel.org/lkml/0e29a2d0-08d8-bcd6-ff26-4bea0e4037b0@redhat.com/ v6: - New patch --- arch/s390/include/asm/pgtable.h | 7 ++++++- arch/s390/mm/pageattr.c | 2 +- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index d4943f2d3f00..deeb918cae1d 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -1491,7 +1491,7 @@ static inline pmd_t pmd_wrprotect(pmd_t pmd) return set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_PROTECT)); } -static inline pmd_t pmd_mkwrite(pmd_t pmd) +static inline pmd_t pmd_mkwrite_kernel(pmd_t pmd) { pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_WRITE)); if (pmd_val(pmd) & _SEGMENT_ENTRY_DIRTY) @@ -1499,6 +1499,11 @@ static inline pmd_t pmd_mkwrite(pmd_t pmd) return pmd; } +static inline pmd_t pmd_mkwrite(pmd_t pmd) +{ + return pmd_mkwrite_kernel(pmd); +} + static inline pmd_t pmd_mkclean(pmd_t pmd) { pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_DIRTY)); diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c index 4ee5fe5caa23..7b6967dfacd0 100644 --- a/arch/s390/mm/pageattr.c +++ b/arch/s390/mm/pageattr.c @@ -146,7 +146,7 @@ static void modify_pmd_page(pmd_t *pmdp, unsigned long addr, if (flags & SET_MEMORY_RO) new = pmd_wrprotect(new); else if (flags & SET_MEMORY_RW) - new = pmd_mkwrite(pmd_mkdirty(new)); + new = pmd_mkwrite_kernel(pmd_mkdirty(new)); if (flags & SET_MEMORY_NX) new = set_pmd_bit(new, __pgprot(_SEGMENT_ENTRY_NOEXEC)); else if (flags & SET_MEMORY_X) From patchwork Sun Mar 19 00:15:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71668 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp527542wrt; Sat, 18 Mar 2023 18:18:31 -0700 (PDT) X-Google-Smtp-Source: AK7set/vRZ+gk1RCkhgulsp/XTYNo7mOmhCY8t/Po7Z7Qduo6P71u/lxaFDSadzxSXyxqTDg8uwv X-Received: by 2002:a17:903:110d:b0:1a1:93d0:e807 with SMTP id n13-20020a170903110d00b001a193d0e807mr14545147plh.36.1679188711131; Sat, 18 Mar 2023 18:18:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188711; cv=none; d=google.com; s=arc-20160816; b=vroSOIOMwlCaE6m93dE27yKC3pw1FW4N9zTaSQlSCtfipLegLt4E90V3sHQS3luTRO Z2ZCKb89rvtJKOjuVwUoeUk1A0scQgYaL9ZXkhrDJQVjx2/iH8KQ3xFHhVskbHIlUlaN FHvD2GY0SehIcZAKKpwOsHZaqJ+K6RJexcOp5N9ZhsKYymz3ykPtBw5mG+hZwFAPxcK9 IvCno53xiIM59DyULqoKjXrBiOaUKA5Y3E+e03CJMKRkCMFFGzqhWKELkQJgcJCoOPe8 W+702AHOrJYjvpc1r5XkADR6xzfP8kZKsUAofnbRhWCxj6ZPRHTr7zEL5vMBLbrDeH+I fctQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=TSsXlf43k9VOYGKYYG+E2tqghej9414fDj7SiRnCWD4=; b=OIWjwhrpYusAZLqX9l3y35RjDERHddLGeOOwYrksocKrBfAFpqJcOGChUu5YJ/YGD8 3qEd/+Cnu1y8ORsMrJT0iINuWJBmtFb5HLvf2lUfic0clJ37vjNxUeyKfPNPDYVdqUce ogTmfeUhZCPvncd7cfhuSWxuZe5LYScXDlxrRz3Q7CZmR9CAeTNXaZBfs81HSTtPLmSt CbCq6/5+GgV0T/UgUMDDsjLIQZOedhmO06+fl1XwCtoyUqY4hDY8rIkI5sCMqv1AoPqh e5JemSZaGtNQwV9ZbwKyXDKz8rq1DC5xnq8bPBYlBfS51X6UCcsktxoO6UkX40lpURCl 7S3Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=N1zeGEns; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) 
header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id x6-20020a1709028ec600b0019a71e14c19si6531045plo.320.2023.03.18.18.18.18; Sat, 18 Mar 2023 18:18:31 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=N1zeGEns; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230153AbjCSASQ (ORCPT + 99 others); Sat, 18 Mar 2023 20:18:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37550 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230110AbjCSARf (ORCPT ); Sat, 18 Mar 2023 20:17:35 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F136D29E08; Sat, 18 Mar 2023 17:16:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679184992; x=1710720992; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=nB4ZdTNkaq5+69bBda9vM9LpVqdgiauUG+V0FTBn/HQ=; b=N1zeGEnsYpF7PoO5Etz/wYt8Aiw8qQA37TzAw2qR3Yg7vdJmY2fEHQL0 UzBqOqwq63TcJkzqZVLYeWNigNKcGE5ZR53lP+FTq0guxkJIJmg2yc4CQ TVOXyhwAUmew97YW5f4di9L0onQ+FUwbdZLofuqvXU7AW9W/fs3Nq5FBm fxyBS1TLOjaLqGxHtHE9qz3HhAXrfFl8ohhoGCs3NhZ3cy1BmB5RYJLZC ChC7ystc7oR3WwC0r0wwFoq661/5PIJxul5ENSFwNI4BfJTnQN4TZzLsZ I+9xwBFDPp6KyrmcTqfvGHy6AFUZ4vWaigZC9KUDdR87djGPk2gjgORMG g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491005" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491005" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:16 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672833" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672833" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:14 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 13/40] mm: Make pte_mkwrite() take a VMA Date: Sat, 18 Mar 2023 17:15:08 -0700 Message-Id: <20230319001535.23210-14-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760756981750139327?= X-GMAIL-MSGID: =?utf-8?q?1760756981750139327?= The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly. One of these unusual properties is that shadow stack memory is writable, but only in limited ways. These limits are applied via a specific PTE bit combination. Nevertheless, the memory is writable, and core mm code will need to apply the writable permissions in the typical paths that call pte_mkwrite(). In addition to VM_WRITE, the shadow stack VMA's will have a flag denoting that they are special shadow stack flavor of writable memory. So make pte_mkwrite() take a VMA, so that the x86 implementation of it can know to create regular writable memory or shadow stack memory. Apply the same changes for pmd_mkwrite() and huge_pte_mkwrite(). No functional change. Suggested-by: David Hildenbrand Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Acked-by: Michael Ellerman Acked-by: David Hildenbrand Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook Link: https://lore.kernel.org/lkml/0e29a2d0-08d8-bcd6-ff26-4bea0e4037b0@redhat.com/ --- Hi Non-x86 Arch’s, x86 has a feature that allows for the creation of a special type of writable memory (shadow stack) that is only writable in limited specific ways. Previously, changes were proposed to core MM code to teach it to decide when to create normally writable memory or the special shadow stack writable memory, but David Hildenbrand suggested[0] to change pXX_mkwrite() to take a VMA, so awareness of shadow stack memory can be moved into x86 code. Since pXX_mkwrite() is defined in every arch, it requires some tree-wide changes. So that is why you are seeing some patches out of a big x86 series pop up in your arch mailing list. There is no functional change. After this refactor, the shadow stack series goes on to use the arch helpers to push shadow stack memory details inside arch/x86. Testing was just 0-day build testing. Hopefully that is enough context. Thanks! 
[0] https://lore.kernel.org/lkml/0e29a2d0-08d8-bcd6-ff26-4bea0e4037b0@redhat.com/ v6: - New patch --- Documentation/mm/arch_pgtable_helpers.rst | 9 ++++++--- arch/alpha/include/asm/pgtable.h | 6 +++++- arch/arc/include/asm/hugepage.h | 2 +- arch/arc/include/asm/pgtable-bits-arcv2.h | 7 ++++++- arch/arm/include/asm/pgtable-3level.h | 7 ++++++- arch/arm/include/asm/pgtable.h | 2 +- arch/arm64/include/asm/pgtable.h | 4 ++-- arch/csky/include/asm/pgtable.h | 2 +- arch/hexagon/include/asm/pgtable.h | 2 +- arch/ia64/include/asm/pgtable.h | 2 +- arch/loongarch/include/asm/pgtable.h | 4 ++-- arch/m68k/include/asm/mcf_pgtable.h | 2 +- arch/m68k/include/asm/motorola_pgtable.h | 6 +++++- arch/m68k/include/asm/sun3_pgtable.h | 6 +++++- arch/microblaze/include/asm/pgtable.h | 2 +- arch/mips/include/asm/pgtable.h | 6 +++--- arch/nios2/include/asm/pgtable.h | 2 +- arch/openrisc/include/asm/pgtable.h | 2 +- arch/parisc/include/asm/pgtable.h | 6 +++++- arch/powerpc/include/asm/book3s/32/pgtable.h | 2 +- arch/powerpc/include/asm/book3s/64/pgtable.h | 4 ++-- arch/powerpc/include/asm/nohash/32/pgtable.h | 2 +- arch/powerpc/include/asm/nohash/32/pte-8xx.h | 2 +- arch/powerpc/include/asm/nohash/64/pgtable.h | 2 +- arch/riscv/include/asm/pgtable.h | 6 +++--- arch/s390/include/asm/hugetlb.h | 4 ++-- arch/s390/include/asm/pgtable.h | 4 ++-- arch/sh/include/asm/pgtable_32.h | 10 ++++++++-- arch/sparc/include/asm/pgtable_32.h | 2 +- arch/sparc/include/asm/pgtable_64.h | 6 +++--- arch/um/include/asm/pgtable.h | 2 +- arch/x86/include/asm/pgtable.h | 6 ++++-- arch/xtensa/include/asm/pgtable.h | 2 +- include/asm-generic/hugetlb.h | 4 ++-- include/linux/mm.h | 2 +- mm/debug_vm_pgtable.c | 16 ++++++++-------- mm/huge_memory.c | 6 +++--- mm/hugetlb.c | 4 ++-- mm/memory.c | 4 ++-- mm/migrate_device.c | 2 +- mm/mprotect.c | 2 +- mm/userfaultfd.c | 2 +- 42 files changed, 106 insertions(+), 69 deletions(-) diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst index 30d9a09f01f4..78ac3ff2fe1d 100644 --- a/Documentation/mm/arch_pgtable_helpers.rst +++ b/Documentation/mm/arch_pgtable_helpers.rst @@ -46,7 +46,8 @@ PTE Page Table Helpers +---------------------------+--------------------------------------------------+ | pte_mkclean | Creates a clean PTE | +---------------------------+--------------------------------------------------+ -| pte_mkwrite | Creates a writable PTE | +| pte_mkwrite | Creates a writable PTE of the type specified by | +| | the VMA. | +---------------------------+--------------------------------------------------+ | pte_wrprotect | Creates a write protected PTE | +---------------------------+--------------------------------------------------+ @@ -118,7 +119,8 @@ PMD Page Table Helpers +---------------------------+--------------------------------------------------+ | pmd_mkclean | Creates a clean PMD | +---------------------------+--------------------------------------------------+ -| pmd_mkwrite | Creates a writable PMD | +| pmd_mkwrite | Creates a writable PMD of the type specified by | +| | the VMA. 
| +---------------------------+--------------------------------------------------+ | pmd_wrprotect | Creates a write protected PMD | +---------------------------+--------------------------------------------------+ @@ -222,7 +224,8 @@ HugeTLB Page Table Helpers +---------------------------+--------------------------------------------------+ | huge_pte_mkdirty | Creates a dirty HugeTLB | +---------------------------+--------------------------------------------------+ -| huge_pte_mkwrite | Creates a writable HugeTLB | +| huge_pte_mkwrite | Creates a writable HugeTLB of the type specified | +| | by the VMA. | +---------------------------+--------------------------------------------------+ | huge_pte_wrprotect | Creates a write protected HugeTLB | +---------------------------+--------------------------------------------------+ diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h index ba43cb841d19..fb5d207c2a89 100644 --- a/arch/alpha/include/asm/pgtable.h +++ b/arch/alpha/include/asm/pgtable.h @@ -256,9 +256,13 @@ extern inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; extern inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) |= _PAGE_FOW; return pte; } extern inline pte_t pte_mkclean(pte_t pte) { pte_val(pte) &= ~(__DIRTY_BITS); return pte; } extern inline pte_t pte_mkold(pte_t pte) { pte_val(pte) &= ~(__ACCESS_BITS); return pte; } -extern inline pte_t pte_mkwrite(pte_t pte) { pte_val(pte) &= ~_PAGE_FOW; return pte; } extern inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= __DIRTY_BITS; return pte; } extern inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= __ACCESS_BITS; return pte; } +extern inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) +{ + pte_val(pte) &= ~_PAGE_FOW; + return pte; +} /* * The smp_rmb() in the following functions are required to order the load of diff --git a/arch/arc/include/asm/hugepage.h b/arch/arc/include/asm/hugepage.h index 5001b796fb8d..223a96967188 100644 --- a/arch/arc/include/asm/hugepage.h +++ b/arch/arc/include/asm/hugepage.h @@ -21,7 +21,7 @@ static inline pmd_t pte_pmd(pte_t pte) } #define pmd_wrprotect(pmd) pte_pmd(pte_wrprotect(pmd_pte(pmd))) -#define pmd_mkwrite(pmd) pte_pmd(pte_mkwrite(pmd_pte(pmd))) +#define pmd_mkwrite(pmd, vma) pte_pmd(pte_mkwrite(pmd_pte(pmd), (vma))) #define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd))) #define pmd_mkold(pmd) pte_pmd(pte_mkold(pmd_pte(pmd))) #define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd))) diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h index 6e9f8ca6d6a1..a5b8bc955015 100644 --- a/arch/arc/include/asm/pgtable-bits-arcv2.h +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h @@ -87,7 +87,6 @@ PTE_BIT_FUNC(mknotpresent, &= ~(_PAGE_PRESENT)); PTE_BIT_FUNC(wrprotect, &= ~(_PAGE_WRITE)); -PTE_BIT_FUNC(mkwrite, |= (_PAGE_WRITE)); PTE_BIT_FUNC(mkclean, &= ~(_PAGE_DIRTY)); PTE_BIT_FUNC(mkdirty, |= (_PAGE_DIRTY)); PTE_BIT_FUNC(mkold, &= ~(_PAGE_ACCESSED)); @@ -95,6 +94,12 @@ PTE_BIT_FUNC(mkyoung, |= (_PAGE_ACCESSED)); PTE_BIT_FUNC(mkspecial, |= (_PAGE_SPECIAL)); PTE_BIT_FUNC(mkhuge, |= (_PAGE_HW_SZ)); +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) +{ + pte_val(pte) |= (_PAGE_WRITE); + return pte; +} + static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) { return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h index 106049791500..df071a807610 100644 
--- a/arch/arm/include/asm/pgtable-3level.h +++ b/arch/arm/include/asm/pgtable-3level.h @@ -202,11 +202,16 @@ static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; } PMD_BIT_FUNC(wrprotect, |= L_PMD_SECT_RDONLY); PMD_BIT_FUNC(mkold, &= ~PMD_SECT_AF); -PMD_BIT_FUNC(mkwrite, &= ~L_PMD_SECT_RDONLY); PMD_BIT_FUNC(mkdirty, |= L_PMD_SECT_DIRTY); PMD_BIT_FUNC(mkclean, &= ~L_PMD_SECT_DIRTY); PMD_BIT_FUNC(mkyoung, |= PMD_SECT_AF); +static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) +{ + pmd_val(pmd) &= ~L_PMD_SECT_RDONLY; + return pmd; +} + #define pmd_mkhuge(pmd) (__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT)) #define pmd_pfn(pmd) (((pmd_val(pmd) & PMD_MASK) & PHYS_MASK) >> PAGE_SHIFT) diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h index a58ccbb406ad..39ad1ae1308d 100644 --- a/arch/arm/include/asm/pgtable.h +++ b/arch/arm/include/asm/pgtable.h @@ -227,7 +227,7 @@ static inline pte_t pte_wrprotect(pte_t pte) return set_pte_bit(pte, __pgprot(L_PTE_RDONLY)); } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return clear_pte_bit(pte, __pgprot(L_PTE_RDONLY)); } diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index cccf8885792e..913bf370f74a 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -187,7 +187,7 @@ static inline pte_t pte_mkwrite_kernel(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return pte_mkwrite_kernel(pte); } @@ -492,7 +492,7 @@ static inline int pmd_trans_huge(pmd_t pmd) #define pmd_cont(pmd) pte_cont(pmd_pte(pmd)) #define pmd_wrprotect(pmd) pte_pmd(pte_wrprotect(pmd_pte(pmd))) #define pmd_mkold(pmd) pte_pmd(pte_mkold(pmd_pte(pmd))) -#define pmd_mkwrite(pmd) pte_pmd(pte_mkwrite(pmd_pte(pmd))) +#define pmd_mkwrite(pmd, vma) pte_pmd(pte_mkwrite(pmd_pte(pmd), (vma))) #define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd))) #define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd))) #define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd))) diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h index d4042495febc..c2f92c991e37 100644 --- a/arch/csky/include/asm/pgtable.h +++ b/arch/csky/include/asm/pgtable.h @@ -176,7 +176,7 @@ static inline pte_t pte_mkold(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte_val(pte) |= _PAGE_WRITE; if (pte_val(pte) & _PAGE_MODIFIED) diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h index 59393613d086..14ab9c789c0e 100644 --- a/arch/hexagon/include/asm/pgtable.h +++ b/arch/hexagon/include/asm/pgtable.h @@ -300,7 +300,7 @@ static inline pte_t pte_wrprotect(pte_t pte) } /* pte_mkwrite - mark page as writable */ -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte_val(pte) |= _PAGE_WRITE; return pte; diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h index 21c97e31a28a..f879dd626da6 100644 --- a/arch/ia64/include/asm/pgtable.h +++ b/arch/ia64/include/asm/pgtable.h @@ -268,7 +268,7 @@ ia64_phys_addr_valid (unsigned long addr) * access rights: */ #define pte_wrprotect(pte) (__pte(pte_val(pte) & ~_PAGE_AR_RW)) -#define pte_mkwrite(pte) (__pte(pte_val(pte) | _PAGE_AR_RW)) +#define pte_mkwrite(pte, vma) (__pte(pte_val(pte) |
_PAGE_AR_RW)) #define pte_mkold(pte) (__pte(pte_val(pte) & ~_PAGE_A)) #define pte_mkyoung(pte) (__pte(pte_val(pte) | _PAGE_A)) #define pte_mkclean(pte) (__pte(pte_val(pte) & ~_PAGE_D)) diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h index d28fb9dbec59..ebf645f40298 100644 --- a/arch/loongarch/include/asm/pgtable.h +++ b/arch/loongarch/include/asm/pgtable.h @@ -390,7 +390,7 @@ static inline pte_t pte_mkdirty(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte_val(pte) |= _PAGE_WRITE; if (pte_val(pte) & _PAGE_MODIFIED) @@ -490,7 +490,7 @@ static inline int pmd_write(pmd_t pmd) return !!(pmd_val(pmd) & _PAGE_WRITE); } -static inline pmd_t pmd_mkwrite(pmd_t pmd) +static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { pmd_val(pmd) |= _PAGE_WRITE; if (pmd_val(pmd) & _PAGE_MODIFIED) diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h index 13741c1245e1..37d77e055016 100644 --- a/arch/m68k/include/asm/mcf_pgtable.h +++ b/arch/m68k/include/asm/mcf_pgtable.h @@ -211,7 +211,7 @@ static inline pte_t pte_mkold(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte_val(pte) |= CF_PAGE_WRITABLE; return pte; diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h index ec0dc19ab834..c4e8eb76286d 100644 --- a/arch/m68k/include/asm/motorola_pgtable.h +++ b/arch/m68k/include/asm/motorola_pgtable.h @@ -155,7 +155,6 @@ static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; static inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) |= _PAGE_RONLY; return pte; } static inline pte_t pte_mkclean(pte_t pte) { pte_val(pte) &= ~_PAGE_DIRTY; return pte; } static inline pte_t pte_mkold(pte_t pte) { pte_val(pte) &= ~_PAGE_ACCESSED; return pte; } -static inline pte_t pte_mkwrite(pte_t pte) { pte_val(pte) &= ~_PAGE_RONLY; return pte; } static inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= _PAGE_DIRTY; return pte; } static inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= _PAGE_ACCESSED; return pte; } static inline pte_t pte_mknocache(pte_t pte) @@ -168,6 +167,11 @@ static inline pte_t pte_mkcache(pte_t pte) pte_val(pte) = (pte_val(pte) & _CACHEMASK040) | m68k_supervisor_cachemode; return pte; } +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) +{ + pte_val(pte) &= ~_PAGE_RONLY; + return pte; +} #define swapper_pg_dir kernel_pg_dir extern pgd_t kernel_pg_dir[128]; diff --git a/arch/m68k/include/asm/sun3_pgtable.h b/arch/m68k/include/asm/sun3_pgtable.h index e582b0484a55..2a06bea51a1e 100644 --- a/arch/m68k/include/asm/sun3_pgtable.h +++ b/arch/m68k/include/asm/sun3_pgtable.h @@ -143,10 +143,14 @@ static inline int pte_young(pte_t pte) { return pte_val(pte) & SUN3_PAGE_ACCESS static inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) &= ~SUN3_PAGE_WRITEABLE; return pte; } static inline pte_t pte_mkclean(pte_t pte) { pte_val(pte) &= ~SUN3_PAGE_MODIFIED; return pte; } static inline pte_t pte_mkold(pte_t pte) { pte_val(pte) &= ~SUN3_PAGE_ACCESSED; return pte; } -static inline pte_t pte_mkwrite(pte_t pte) { pte_val(pte) |= SUN3_PAGE_WRITEABLE; return pte; } static inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= SUN3_PAGE_MODIFIED; return pte; } static inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= SUN3_PAGE_ACCESSED; return pte; } static 
inline pte_t pte_mknocache(pte_t pte) { pte_val(pte) |= SUN3_PAGE_NOCACHE; return pte; } +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) +{ + pte_val(pte) |= SUN3_PAGE_WRITEABLE; + return pte; +} // use this version when caches work... //static inline pte_t pte_mkcache(pte_t pte) { pte_val(pte) &= SUN3_PAGE_NOCACHE; return pte; } // until then, use: diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h index d1b8272abcd9..5b83e82f8d7e 100644 --- a/arch/microblaze/include/asm/pgtable.h +++ b/arch/microblaze/include/asm/pgtable.h @@ -266,7 +266,7 @@ static inline pte_t pte_mkread(pte_t pte) \ { pte_val(pte) |= _PAGE_USER; return pte; } static inline pte_t pte_mkexec(pte_t pte) \ { pte_val(pte) |= _PAGE_USER | _PAGE_EXEC; return pte; } -static inline pte_t pte_mkwrite(pte_t pte) \ +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) \ { pte_val(pte) |= _PAGE_RW; return pte; } static inline pte_t pte_mkdirty(pte_t pte) \ { pte_val(pte) |= _PAGE_DIRTY; return pte; } diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h index 791389bf3c12..06efd567144a 100644 --- a/arch/mips/include/asm/pgtable.h +++ b/arch/mips/include/asm/pgtable.h @@ -309,7 +309,7 @@ static inline pte_t pte_mkold(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte.pte_low |= _PAGE_WRITE; if (pte.pte_low & _PAGE_MODIFIED) { @@ -364,7 +364,7 @@ static inline pte_t pte_mkold(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte_val(pte) |= _PAGE_WRITE; if (pte_val(pte) & _PAGE_MODIFIED) @@ -626,7 +626,7 @@ static inline pmd_t pmd_wrprotect(pmd_t pmd) return pmd; } -static inline pmd_t pmd_mkwrite(pmd_t pmd) +static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { pmd_val(pmd) |= _PAGE_WRITE; if (pmd_val(pmd) & _PAGE_MODIFIED) diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h index 0f5c2564e9f5..edd458518e0e 100644 --- a/arch/nios2/include/asm/pgtable.h +++ b/arch/nios2/include/asm/pgtable.h @@ -129,7 +129,7 @@ static inline pte_t pte_mkold(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte_val(pte) |= _PAGE_WRITE; return pte; diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h index 3eb9b9555d0d..fd40aec189d1 100644 --- a/arch/openrisc/include/asm/pgtable.h +++ b/arch/openrisc/include/asm/pgtable.h @@ -250,7 +250,7 @@ static inline pte_t pte_mkold(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte_val(pte) |= _PAGE_WRITE; return pte; diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h index e2950f5db7c9..89f62137e67f 100644 --- a/arch/parisc/include/asm/pgtable.h +++ b/arch/parisc/include/asm/pgtable.h @@ -331,8 +331,12 @@ static inline pte_t pte_mkold(pte_t pte) { pte_val(pte) &= ~_PAGE_ACCESSED; retu static inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) &= ~_PAGE_WRITE; return pte; } static inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= _PAGE_DIRTY; return pte; } static inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= _PAGE_ACCESSED; return pte; } -static inline pte_t pte_mkwrite(pte_t pte) { pte_val(pte) |= _PAGE_WRITE; 
return pte; } static inline pte_t pte_mkspecial(pte_t pte) { pte_val(pte) |= _PAGE_SPECIAL; return pte; } +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) +{ + pte_val(pte) |= _PAGE_WRITE; + return pte; +} /* * Huge pte definitions. diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h index 7bf1fe7297c6..10d9a1d2aca9 100644 --- a/arch/powerpc/include/asm/book3s/32/pgtable.h +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h @@ -498,7 +498,7 @@ static inline pte_t pte_mkpte(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return __pte(pte_val(pte) | _PAGE_RW); } diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 4acc9690f599..be0636522d36 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -600,7 +600,7 @@ static inline pte_t pte_mkexec(pte_t pte) return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_EXEC)); } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { /* * write implies read, hence set both @@ -1071,7 +1071,7 @@ static inline pte_t *pmdp_ptep(pmd_t *pmd) #define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd))) #define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd))) #define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd))) -#define pmd_mkwrite(pmd) pte_pmd(pte_mkwrite(pmd_pte(pmd))) +#define pmd_mkwrite(pmd, vma) pte_pmd(pte_mkwrite(pmd_pte(pmd), (vma))) #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY #define pmd_soft_dirty(pmd) pte_soft_dirty(pmd_pte(pmd)) diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h index fec56d965f00..7bfbcb9ba55b 100644 --- a/arch/powerpc/include/asm/nohash/32/pgtable.h +++ b/arch/powerpc/include/asm/nohash/32/pgtable.h @@ -171,7 +171,7 @@ void unmap_kernel_page(unsigned long va); do { pte_update(mm, addr, ptep, ~0, 0, 0); } while (0) #ifndef pte_mkwrite -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return __pte(pte_val(pte) | _PAGE_RW); } diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h index 1a89ebdc3acc..f32450eb270a 100644 --- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h +++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h @@ -101,7 +101,7 @@ static inline int pte_write(pte_t pte) #define pte_write pte_write -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return __pte(pte_val(pte) & ~_PAGE_RO); } diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h index 287e25864ffa..589009555877 100644 --- a/arch/powerpc/include/asm/nohash/64/pgtable.h +++ b/arch/powerpc/include/asm/nohash/64/pgtable.h @@ -85,7 +85,7 @@ #ifndef __ASSEMBLY__ /* pte_clear moved to later in this file */ -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return __pte(pte_val(pte) | _PAGE_RW); } diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index ab05f892d317..93de938f44ec 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -338,7 +338,7 @@ static inline pte_t pte_wrprotect(pte_t pte) /* static inline pte_t 
pte_mkread(pte_t pte) */ -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return __pte(pte_val(pte) | _PAGE_WRITE); } @@ -624,9 +624,9 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd) return pte_pmd(pte_mkyoung(pmd_pte(pmd))); } -static inline pmd_t pmd_mkwrite(pmd_t pmd) +static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { - return pte_pmd(pte_mkwrite(pmd_pte(pmd))); + return pte_pmd(pte_mkwrite(pmd_pte(pmd), vma)); } static inline pmd_t pmd_wrprotect(pmd_t pmd) diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h index ccdbccfde148..558f7eef9c4d 100644 --- a/arch/s390/include/asm/hugetlb.h +++ b/arch/s390/include/asm/hugetlb.h @@ -102,9 +102,9 @@ static inline int huge_pte_dirty(pte_t pte) return pte_dirty(pte); } -static inline pte_t huge_pte_mkwrite(pte_t pte) +static inline pte_t huge_pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { - return pte_mkwrite(pte); + return pte_mkwrite(pte, vma); } static inline pte_t huge_pte_mkdirty(pte_t pte) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index deeb918cae1d..8f2c743da0eb 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -1013,7 +1013,7 @@ static inline pte_t pte_mkwrite_kernel(pte_t pte) return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return pte_mkwrite_kernel(pte); } @@ -1499,7 +1499,7 @@ static inline pmd_t pmd_mkwrite_kernel(pmd_t pmd) return pmd; } -static inline pmd_t pmd_mkwrite(pmd_t pmd) +static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { return pmd_mkwrite_kernel(pmd); } diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h index 21952b094650..9f2dcb9eafc8 100644 --- a/arch/sh/include/asm/pgtable_32.h +++ b/arch/sh/include/asm/pgtable_32.h @@ -351,6 +351,12 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define PTE_BIT_FUNC(h,fn,op) \ static inline pte_t pte_##fn(pte_t pte) { pte.pte_##h op; return pte; } +#define PTE_BIT_FUNC_VMA(h,fn,op) \ +static inline pte_t pte_##fn(pte_t pte, struct vm_area_struct *vma) \ +{ \ + pte.pte_##h op; \ + return pte; \ +} #ifdef CONFIG_X2TLB /* @@ -359,11 +365,11 @@ static inline pte_t pte_##fn(pte_t pte) { pte.pte_##h op; return pte; } * kernel permissions), we attempt to couple them a bit more sanely here. 
*/ PTE_BIT_FUNC(high, wrprotect, &= ~(_PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE)); -PTE_BIT_FUNC(high, mkwrite, |= _PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE); +PTE_BIT_FUNC_VMA(high, mkwrite, |= _PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE); PTE_BIT_FUNC(high, mkhuge, |= _PAGE_SZHUGE); #else PTE_BIT_FUNC(low, wrprotect, &= ~_PAGE_RW); -PTE_BIT_FUNC(low, mkwrite, |= _PAGE_RW); +PTE_BIT_FUNC_VMA(low, mkwrite, |= _PAGE_RW); PTE_BIT_FUNC(low, mkhuge, |= _PAGE_SZHUGE); #endif diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index d4330e3c57a6..3e8836179456 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -241,7 +241,7 @@ static inline pte_t pte_mkold(pte_t pte) return __pte(pte_val(pte) & ~SRMMU_REF); } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return __pte(pte_val(pte) | SRMMU_WRITE); } diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 2dc8d4641734..c5cd5c03f557 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -466,7 +466,7 @@ static inline pte_t pte_mkclean(pte_t pte) return __pte(val); } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { unsigned long val = pte_val(pte), mask; @@ -756,11 +756,11 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd) return __pmd(pte_val(pte)); } -static inline pmd_t pmd_mkwrite(pmd_t pmd) +static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { pte_t pte = __pte(pmd_val(pmd)); - pte = pte_mkwrite(pte); + pte = pte_mkwrite(pte, vma); return __pmd(pte_val(pte)); } diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h index a70d1618eb35..963479c133b7 100644 --- a/arch/um/include/asm/pgtable.h +++ b/arch/um/include/asm/pgtable.h @@ -207,7 +207,7 @@ static inline pte_t pte_mkyoung(pte_t pte) return(pte); } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { if (unlikely(pte_get_bits(pte, _PAGE_RW))) return pte; diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 3607f2572f9e..66c514808276 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -369,7 +369,9 @@ static inline pte_t pte_mkwrite_kernel(pte_t pte) return pte_set_flags(pte, _PAGE_RW); } -static inline pte_t pte_mkwrite(pte_t pte) +struct vm_area_struct; + +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { return pte_mkwrite_kernel(pte); } @@ -470,7 +472,7 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd) return pmd_set_flags(pmd, _PAGE_ACCESSED); } -static inline pmd_t pmd_mkwrite(pmd_t pmd) +static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { return pmd_set_flags(pmd, _PAGE_RW); } diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h index fc7a14884c6c..d72632d9c53c 100644 --- a/arch/xtensa/include/asm/pgtable.h +++ b/arch/xtensa/include/asm/pgtable.h @@ -262,7 +262,7 @@ static inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= _PAGE_DIRTY; return pte; } static inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= _PAGE_ACCESSED; return pte; } -static inline pte_t pte_mkwrite(pte_t pte) +static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { pte_val(pte) |= _PAGE_WRITABLE; return pte; } #define pgprot_noncached(prot) \ diff --git 
a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h index d7f6335d3999..e86c830728de 100644 --- a/include/asm-generic/hugetlb.h +++ b/include/asm-generic/hugetlb.h @@ -20,9 +20,9 @@ static inline unsigned long huge_pte_dirty(pte_t pte) return pte_dirty(pte); } -static inline pte_t huge_pte_mkwrite(pte_t pte) +static inline pte_t huge_pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { - return pte_mkwrite(pte); + return pte_mkwrite(pte, vma); } #ifndef __HAVE_ARCH_HUGE_PTE_WRPROTECT diff --git a/include/linux/mm.h b/include/linux/mm.h index 1f79667824eb..af652444fbba 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1163,7 +1163,7 @@ void free_compound_page(struct page *page); static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma) { if (likely(vma->vm_flags & VM_WRITE)) - pte = pte_mkwrite(pte); + pte = pte_mkwrite(pte, vma); return pte; } diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index af59cc7bd307..7bc5592900bc 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -109,10 +109,10 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx) WARN_ON(!pte_same(pte, pte)); WARN_ON(!pte_young(pte_mkyoung(pte_mkold(pte)))); WARN_ON(!pte_dirty(pte_mkdirty(pte_mkclean(pte)))); - WARN_ON(!pte_write(pte_mkwrite(pte_wrprotect(pte)))); + WARN_ON(!pte_write(pte_mkwrite(pte_wrprotect(pte), args->vma))); WARN_ON(pte_young(pte_mkold(pte_mkyoung(pte)))); WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte)))); - WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte)))); + WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma)))); WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte)))); WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte)))); } @@ -153,7 +153,7 @@ static void __init pte_advanced_tests(struct pgtable_debug_args *args) pte = pte_mkclean(pte); set_pte_at(args->mm, args->vaddr, args->ptep, pte); flush_dcache_page(page); - pte = pte_mkwrite(pte); + pte = pte_mkwrite(pte, args->vma); pte = pte_mkdirty(pte); ptep_set_access_flags(args->vma, args->vaddr, args->ptep, pte, 1); pte = ptep_get(args->ptep); @@ -199,10 +199,10 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx) WARN_ON(!pmd_same(pmd, pmd)); WARN_ON(!pmd_young(pmd_mkyoung(pmd_mkold(pmd)))); WARN_ON(!pmd_dirty(pmd_mkdirty(pmd_mkclean(pmd)))); - WARN_ON(!pmd_write(pmd_mkwrite(pmd_wrprotect(pmd)))); + WARN_ON(!pmd_write(pmd_mkwrite(pmd_wrprotect(pmd), args->vma))); WARN_ON(pmd_young(pmd_mkold(pmd_mkyoung(pmd)))); WARN_ON(pmd_dirty(pmd_mkclean(pmd_mkdirty(pmd)))); - WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite(pmd)))); + WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite(pmd, args->vma)))); WARN_ON(pmd_dirty(pmd_wrprotect(pmd_mkclean(pmd)))); WARN_ON(!pmd_dirty(pmd_wrprotect(pmd_mkdirty(pmd)))); /* @@ -253,7 +253,7 @@ static void __init pmd_advanced_tests(struct pgtable_debug_args *args) pmd = pmd_mkclean(pmd); set_pmd_at(args->mm, vaddr, args->pmdp, pmd); flush_dcache_page(page); - pmd = pmd_mkwrite(pmd); + pmd = pmd_mkwrite(pmd, args->vma); pmd = pmd_mkdirty(pmd); pmdp_set_access_flags(args->vma, vaddr, args->pmdp, pmd, 1); pmd = READ_ONCE(*args->pmdp); @@ -928,8 +928,8 @@ static void __init hugetlb_basic_tests(struct pgtable_debug_args *args) pte = mk_huge_pte(page, args->page_prot); WARN_ON(!huge_pte_dirty(huge_pte_mkdirty(pte))); - WARN_ON(!huge_pte_write(huge_pte_mkwrite(huge_pte_wrprotect(pte)))); - WARN_ON(huge_pte_write(huge_pte_wrprotect(huge_pte_mkwrite(pte)))); + WARN_ON(!huge_pte_write(huge_pte_mkwrite(huge_pte_wrprotect(pte), 
args->vma))); + WARN_ON(huge_pte_write(huge_pte_wrprotect(huge_pte_mkwrite(pte, args->vma)))); #ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB pte = pfn_pte(args->fixed_pmd_pfn, args->page_prot); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 4fc43859e59a..aaf815838144 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -555,7 +555,7 @@ __setup("transparent_hugepage=", setup_transparent_hugepage); pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { if (likely(vma->vm_flags & VM_WRITE)) - pmd = pmd_mkwrite(pmd); + pmd = pmd_mkwrite(pmd, vma); return pmd; } @@ -1580,7 +1580,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) pmd = pmd_modify(oldpmd, vma->vm_page_prot); pmd = pmd_mkyoung(pmd); if (writable) - pmd = pmd_mkwrite(pmd); + pmd = pmd_mkwrite(pmd, vma); set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd); update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); spin_unlock(vmf->ptl); @@ -1926,7 +1926,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, /* See change_pte_range(). */ if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) && !pmd_write(entry) && can_change_pmd_writable(vma, addr, entry)) - entry = pmd_mkwrite(entry); + entry = pmd_mkwrite(entry, vma); ret = HPAGE_PMD_NR; set_pmd_at(mm, addr, pmd, entry); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 07abcb6eb203..6af471bdcff8 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -4900,7 +4900,7 @@ static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page, if (writable) { entry = huge_pte_mkwrite(huge_pte_mkdirty(mk_huge_pte(page, - vma->vm_page_prot))); + vma->vm_page_prot)), vma); } else { entry = huge_pte_wrprotect(mk_huge_pte(page, vma->vm_page_prot)); @@ -4916,7 +4916,7 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma, { pte_t entry; - entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(ptep))); + entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(ptep)), vma); if (huge_ptep_set_access_flags(vma, address, ptep, entry, 1)) update_mmu_cache(vma, address, ptep); } diff --git a/mm/memory.c b/mm/memory.c index f456f3b5049c..d0972d2d6f36 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4067,7 +4067,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) entry = mk_pte(&folio->page, vma->vm_page_prot); entry = pte_sw_mkyoung(entry); if (vma->vm_flags & VM_WRITE) - entry = pte_mkwrite(pte_mkdirty(entry)); + entry = pte_mkwrite(pte_mkdirty(entry), vma); vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl); @@ -4755,7 +4755,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf) pte = pte_modify(old_pte, vma->vm_page_prot); pte = pte_mkyoung(pte); if (writable) - pte = pte_mkwrite(pte); + pte = pte_mkwrite(pte, vma); ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte); update_mmu_cache(vma, vmf->address, vmf->pte); pte_unmap_unlock(vmf->pte, vmf->ptl); diff --git a/mm/migrate_device.c b/mm/migrate_device.c index d30c9de60b0d..df3f5e9d5f76 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -646,7 +646,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, } entry = mk_pte(page, vma->vm_page_prot); if (vma->vm_flags & VM_WRITE) - entry = pte_mkwrite(pte_mkdirty(entry)); + entry = pte_mkwrite(pte_mkdirty(entry), vma); } ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl); diff --git a/mm/mprotect.c b/mm/mprotect.c index 231929f119d9..2d148d82d907 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -198,7 +198,7 @@ static long change_pte_range(struct mmu_gather *tlb, if ((cp_flags & 
MM_CP_TRY_CHANGE_WRITABLE) && !pte_write(ptent) && can_change_pte_writable(vma, addr, ptent)) - ptent = pte_mkwrite(ptent); + ptent = pte_mkwrite(ptent, vma); ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent); if (pte_needs_flush(oldpte, ptent)) diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 53c3d916ff66..3db6f87c0aca 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -75,7 +75,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd, if (page_in_cache && !vm_shared) writable = false; if (writable) - _dst_pte = pte_mkwrite(_dst_pte); + _dst_pte = pte_mkwrite(_dst_pte, dst_vma); if (wp_copy) _dst_pte = pte_mkuffd_wp(_dst_pte);
From patchwork Sun Mar 19 00:15:09 2023
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A .
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 14/40] x86/mm: Introduce _PAGE_SAVED_DIRTY Date: Sat, 18 Mar 2023 17:15:09 -0700 Message-Id: <20230319001535.23210-15-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757953855576940?= X-GMAIL-MSGID: =?utf-8?q?1760757953855576940?= Some OSes have a greater dependence on software available bits in PTEs than Linux. That left the hardware architects looking for a way to represent a new memory type (shadow stack) within the existing bits. They chose to repurpose a lightly-used state: Write=0,Dirty=1. So in order to support shadow stack memory, Linux should avoid creating memory with this PTE bit combination unless it intends for it to be shadow stack. The reason it's lightly used is that Dirty=1 is normally set by HW _before_ a write. A write with a Write=0 PTE would typically only generate a fault, not set Dirty=1. Hardware can (rarely) both set Dirty=1 *and* generate the fault, resulting in a Write=0,Dirty=1 PTE. Hardware which supports shadow stacks will no longer exhibit this oddity. So that leaves Write=0,Dirty=1 PTEs created in software. To avoid inadvertently created shadow stack memory, in places where Linux normally creates Write=0,Dirty=1, it can use the software-defined _PAGE_SAVED_DIRTY in place of the hardware _PAGE_DIRTY. In other words, whenever Linux needs to create Write=0,Dirty=1, it instead creates Write=0,SavedDirty=1 except for shadow stack, which is Write=0,Dirty=1. There are six bits left available to software in the 64-bit PTE after consuming a bit for _PAGE_SAVED_DIRTY. No space is consumed in 32-bit kernels because shadow stacks are not enabled there. Implement only the infrastructure for _PAGE_SAVED_DIRTY. Changes to actually begin creating _PAGE_SAVED_DIRTY PTEs will follow once other pieces are in place. 
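To make the intended bit transitions concrete, here is a rough, self-contained sketch in plain C. The EX_* constants and helper names are stand-ins invented purely for illustration (they are not the kernel's _PAGE_* definitions, nor the helpers added by this series); the point is only the Write/Dirty/SavedDirty state mapping described above.

#include <stdbool.h>
#include <stdint.h>

/* Stand-in flags for illustration only. */
#define EX_WRITE        (1u << 1)   /* hardware Write bit */
#define EX_DIRTY        (1u << 6)   /* hardware Dirty bit */
#define EX_SAVED_DIRTY  (1u << 11)  /* software SavedDirty bit */

/* Shadow stack is the only intended user of Write=0,Dirty=1. */
static bool ex_is_shadow_stack(uint32_t pte)
{
	return !(pte & EX_WRITE) && (pte & EX_DIRTY);
}

/*
 * Write-protect a PTE without creating Write=0,Dirty=1: park the
 * hardware Dirty bit in the software SavedDirty bit instead. This
 * mirrors the role of the pte_mksaveddirty() helper introduced below.
 */
static uint32_t ex_wrprotect(uint32_t pte)
{
	pte &= ~EX_WRITE;
	if (pte & EX_DIRTY) {
		pte &= ~EX_DIRTY;
		pte |= EX_SAVED_DIRTY;
	}
	return pte;
}

On hardware (and kernels) without shadow stack support the extra software bit is simply never used, which matches the cpu_feature_enabled() checks in the real helpers that follow.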
Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Remove trailing whitespace (dhansen, Boris) v7: - Use lightly edited comment verbiage from (David Hildenbrand) - Update commit log to reduce verbosity (David Hildenbrand) v6: - Rename _PAGE_COW to _PAGE_SAVED_DIRTY (David Hildenbrand) - Add _PAGE_SAVED_DIRTY to _PAGE_CHG_MASK v5: - Fix log, comments and whitespace (Boris) - Remove capitalization on shadow stack (Boris) --- arch/x86/include/asm/pgtable.h | 79 ++++++++++++++++++++++++++++ arch/x86/include/asm/pgtable_types.h | 50 +++++++++++++++--- arch/x86/include/asm/tlbflush.h | 3 +- 3 files changed, 123 insertions(+), 9 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 66c514808276..7360783f2140 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -301,6 +301,45 @@ static inline pte_t pte_clear_flags(pte_t pte, pteval_t clear) return native_make_pte(v & ~clear); } +/* + * Write protection operations can result in Dirty=1,Write=0 PTEs. But in the + * case of X86_FEATURE_USER_SHSTK, the software SavedDirty bit is used, since + * the Dirty=1,Write=0 will result in the memory being treated as shadow stack + * by the HW. So when creating dirty, write-protected memory, a software bit is + * used _PAGE_BIT_SAVED_DIRTY. The following functions pte_mksaveddirty() and + * pte_clear_saveddirty() take a conventional dirty, write-protected PTE + * (Write=0,Dirty=1) and transition it to the shadow stack compatible + * version. (Write=0,SavedDirty=1). + */ +static inline pte_t pte_mksaveddirty(pte_t pte) +{ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return pte; + + pte = pte_clear_flags(pte, _PAGE_DIRTY); + return pte_set_flags(pte, _PAGE_SAVED_DIRTY); +} + +static inline pte_t pte_clear_saveddirty(pte_t pte) +{ + /* + * _PAGE_SAVED_DIRTY is unnecessary on !X86_FEATURE_USER_SHSTK kernels, + * since the HW dirty bit can be used without creating shadow stack + * memory. See the _PAGE_SAVED_DIRTY definition for more details. + */ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return pte; + + /* + * PTE is getting copied-on-write, so it will be dirtied + * if writable, or made shadow stack if shadow stack and + * being copied on access. Set the dirty bit for both + * cases. 
+ */ + pte = pte_set_flags(pte, _PAGE_DIRTY); + return pte_clear_flags(pte, _PAGE_SAVED_DIRTY); +} + static inline pte_t pte_wrprotect(pte_t pte) { return pte_clear_flags(pte, _PAGE_RW); @@ -420,6 +459,26 @@ static inline pmd_t pmd_clear_flags(pmd_t pmd, pmdval_t clear) return native_make_pmd(v & ~clear); } +/* See comments above pte_mksaveddirty() */ +static inline pmd_t pmd_mksaveddirty(pmd_t pmd) +{ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return pmd; + + pmd = pmd_clear_flags(pmd, _PAGE_DIRTY); + return pmd_set_flags(pmd, _PAGE_SAVED_DIRTY); +} + +/* See comments above pte_mksaveddirty() */ +static inline pmd_t pmd_clear_saveddirty(pmd_t pmd) +{ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return pmd; + + pmd = pmd_set_flags(pmd, _PAGE_DIRTY); + return pmd_clear_flags(pmd, _PAGE_SAVED_DIRTY); +} + static inline pmd_t pmd_wrprotect(pmd_t pmd) { return pmd_clear_flags(pmd, _PAGE_RW); @@ -491,6 +550,26 @@ static inline pud_t pud_clear_flags(pud_t pud, pudval_t clear) return native_make_pud(v & ~clear); } +/* See comments above pte_mksaveddirty() */ +static inline pud_t pud_mksaveddirty(pud_t pud) +{ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return pud; + + pud = pud_clear_flags(pud, _PAGE_DIRTY); + return pud_set_flags(pud, _PAGE_SAVED_DIRTY); +} + +/* See comments above pte_mksaveddirty() */ +static inline pud_t pud_clear_saveddirty(pud_t pud) +{ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return pud; + + pud = pud_set_flags(pud, _PAGE_DIRTY); + return pud_clear_flags(pud, _PAGE_SAVED_DIRTY); +} + static inline pud_t pud_mkold(pud_t pud) { return pud_clear_flags(pud, _PAGE_ACCESSED); diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index 0646ad00178b..8f266788c0d7 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -21,7 +21,8 @@ #define _PAGE_BIT_SOFTW2 10 /* " */ #define _PAGE_BIT_SOFTW3 11 /* " */ #define _PAGE_BIT_PAT_LARGE 12 /* On 2MB or 1GB pages */ -#define _PAGE_BIT_SOFTW4 58 /* available for programmer */ +#define _PAGE_BIT_SOFTW4 57 /* available for programmer */ +#define _PAGE_BIT_SOFTW5 58 /* available for programmer */ #define _PAGE_BIT_PKEY_BIT0 59 /* Protection Keys, bit 1/4 */ #define _PAGE_BIT_PKEY_BIT1 60 /* Protection Keys, bit 2/4 */ #define _PAGE_BIT_PKEY_BIT2 61 /* Protection Keys, bit 3/4 */ @@ -34,6 +35,15 @@ #define _PAGE_BIT_SOFT_DIRTY _PAGE_BIT_SOFTW3 /* software dirty tracking */ #define _PAGE_BIT_DEVMAP _PAGE_BIT_SOFTW4 +/* + * Indicates a Saved Dirty bit page. + */ +#ifdef CONFIG_X86_USER_SHADOW_STACK +#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW5 /* Saved Dirty bit */ +#else +#define _PAGE_BIT_SAVED_DIRTY 0 +#endif + /* If _PAGE_BIT_PRESENT is clear, we use these: */ /* - if the user mapped it with PROT_NONE; pte_present gives true */ #define _PAGE_BIT_PROTNONE _PAGE_BIT_GLOBAL @@ -117,6 +127,25 @@ #define _PAGE_SOFTW4 (_AT(pteval_t, 0)) #endif +/* + * The hardware requires shadow stack to be Write=0,Dirty=1. However, + * there are valid cases where the kernel might create read-only PTEs that + * are dirty (e.g., fork(), mprotect(), uffd-wp(), soft-dirty tracking). In + * this case, the _PAGE_SAVED_DIRTY bit is used instead of the HW-dirty bit, + * to avoid creating a wrong "shadow stack" PTEs. Such PTEs have + * (Write=0,SavedDirty=1,Dirty=0) set. + * + * Note that on processors without shadow stack support, the + * _PAGE_SAVED_DIRTY remains unused. 
+ */ +#ifdef CONFIG_X86_USER_SHADOW_STACK +#define _PAGE_SAVED_DIRTY (_AT(pteval_t, 1) << _PAGE_BIT_SAVED_DIRTY) +#else +#define _PAGE_SAVED_DIRTY (_AT(pteval_t, 0)) +#endif + +#define _PAGE_DIRTY_BITS (_PAGE_DIRTY | _PAGE_SAVED_DIRTY) + #define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE) /* @@ -125,9 +154,9 @@ * instance, and is *not* included in this mask since * pte_modify() does modify it. */ -#define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \ - _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \ - _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC | \ +#define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \ + _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY_BITS | \ + _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC | \ _PAGE_UFFD_WP) #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE) @@ -186,12 +215,17 @@ enum page_cache_mode { #define PAGE_READONLY __pg(__PP| 0|_USR|___A|__NX| 0| 0| 0) #define PAGE_READONLY_EXEC __pg(__PP| 0|_USR|___A| 0| 0| 0| 0) -#define __PAGE_KERNEL (__PP|__RW| 0|___A|__NX|___D| 0|___G) -#define __PAGE_KERNEL_EXEC (__PP|__RW| 0|___A| 0|___D| 0|___G) -#define _KERNPG_TABLE_NOENC (__PP|__RW| 0|___A| 0|___D| 0| 0) -#define _KERNPG_TABLE (__PP|__RW| 0|___A| 0|___D| 0| 0| _ENC) +/* + * Page tables needs to have Write=1 in order for any lower PTEs to be + * writable. This includes shadow stack memory (Write=0, Dirty=1) + */ #define _PAGE_TABLE_NOENC (__PP|__RW|_USR|___A| 0|___D| 0| 0) #define _PAGE_TABLE (__PP|__RW|_USR|___A| 0|___D| 0| 0| _ENC) +#define _KERNPG_TABLE_NOENC (__PP|__RW| 0|___A| 0|___D| 0| 0) +#define _KERNPG_TABLE (__PP|__RW| 0|___A| 0|___D| 0| 0| _ENC) + +#define __PAGE_KERNEL (__PP|__RW| 0|___A|__NX|___D| 0|___G) +#define __PAGE_KERNEL_EXEC (__PP|__RW| 0|___A| 0|___D| 0|___G) #define __PAGE_KERNEL_RO (__PP| 0| 0|___A|__NX| 0| 0|___G) #define __PAGE_KERNEL_ROX (__PP| 0| 0|___A| 0| 0| 0|___G) #define __PAGE_KERNEL_NOCACHE (__PP|__RW| 0|___A|__NX|___D| 0|___G| __NC) diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h index cda3118f3b27..6c5ef14060a8 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -273,7 +273,8 @@ static inline bool pte_flags_need_flush(unsigned long oldflags, const pteval_t flush_on_clear = _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED; const pteval_t software_flags = _PAGE_SOFTW1 | _PAGE_SOFTW2 | - _PAGE_SOFTW3 | _PAGE_SOFTW4; + _PAGE_SOFTW3 | _PAGE_SOFTW4 | + _PAGE_SAVED_DIRTY; const pteval_t flush_on_change = _PAGE_RW | _PAGE_USER | _PAGE_PWT | _PAGE_PCD | _PAGE_PSE | _PAGE_GLOBAL | _PAGE_PAT | _PAGE_PAT_LARGE | _PAGE_PKEY_BIT0 | _PAGE_PKEY_BIT1 | From patchwork Sun Mar 19 00:15:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71693 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp532475wrt; Sat, 18 Mar 2023 18:36:02 -0700 (PDT) X-Google-Smtp-Source: AK7set+M/q0/R87cTJ34sjXBh5WMby4U6rDTG2fnU4GXDbRfUcUE3fIH7/Gc6gQRbbtW9XIRhb43 X-Received: by 2002:a17:90b:314d:b0:23f:618a:6bec with SMTP id ip13-20020a17090b314d00b0023f618a6becmr6733144pjb.49.1679189761833; Sat, 18 Mar 2023 18:36:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189761; cv=none; d=google.com; s=arc-20160816; b=bFy7suXsrW6oL29ZFnIWXHFhfU17SgpKDR72o3uK0zrMgl2b4MSnPDQPoHTiV00w4m yYRBfnuiv8bm+AGLTFvgv2CQbdl2IGANIvcfwo/DvH76ukUi6rsi0vSHPeCAG5tZ/0CU 1hO8v0manu30oMzlhAetJQocQ/FQjBuoAUY5swIytBUh3uF++JWA6+tJ+eiIIoW675tq 
From: Rick
Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 15/40] x86/mm: Update ptep/pmdp_set_wrprotect() for _PAGE_SAVED_DIRTY Date: Sat, 18 Mar 2023 17:15:10 -0700 Message-Id: <20230319001535.23210-16-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760758083690678527?= X-GMAIL-MSGID: =?utf-8?q?1760758083690678527?= When shadow stack is in use, Write=0,Dirty=1 PTE are preserved for shadow stack. Copy-on-write PTEs then have Write=0,SavedDirty=1. When a PTE goes from Write=1,Dirty=1 to Write=0,SavedDirty=1, it could become a transient shadow stack PTE in two cases: 1. Some processors can start a write but end up seeing a Write=0 PTE by the time they get to the Dirty bit, creating a transient shadow stack PTE. However, this will not occur on processors supporting shadow stack, and a TLB flush is not necessary. 2. When _PAGE_DIRTY is replaced with _PAGE_SAVED_DIRTY non-atomically, a transient shadow stack PTE can be created as a result. Thus, prevent that with cmpxchg. In the case of pmdp_set_wrprotect(), for nopmd configs the ->pmd operated on does not exist and the logic would need to be different. Although the extra functionality will normally be optimized out when user shadow stacks are not configured, also exclude it in the preprocessor stage so that it will still compile. User shadow stack is not supported there by Linux anyway. Leave the cpu_feature_enabled() check so that the functionality also gets disabled based on runtime detection of the feature. Similarly, compile it out in ptep_set_wrprotect() due to a clang warning on i386. Like above, the code path should get optimized out on i386 since shadow stack is not supported on 32 bit kernels, but this makes the compiler happy. Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided many insights to the issue. Jann Horn provided the cmpxchg solution. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v6: - Fix comment and log to update for _PAGE_COW being replaced with _PAGE_SAVED_DIRTY. 
v5: - Commit log verbiage and formatting (Boris) - Remove capitalization on shadow stack (Boris) - Fix i386 warning on recent clang v3: - Remove unnecessary #ifdef (Dave Hansen) v2: - Compile out some code due to clang build error - Clarify commit log (dhansen) - Normalize PTE bit descriptions between patches (dhansen) - Update comment with text from (dhansen) --- arch/x86/include/asm/pgtable.h | 35 ++++++++++++++++++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 7360783f2140..349fcab0405a 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1192,6 +1192,23 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { +#ifdef CONFIG_X86_USER_SHADOW_STACK + /* + * Avoid accidentally creating shadow stack PTEs + * (Write=0,Dirty=1). Use cmpxchg() to prevent races with + * the hardware setting Dirty=1. + */ + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) { + pte_t old_pte, new_pte; + + old_pte = READ_ONCE(*ptep); + do { + new_pte = pte_wrprotect(old_pte); + } while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte)); + + return; + } +#endif clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte); } @@ -1244,6 +1261,24 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm, static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp) { +#ifdef CONFIG_X86_USER_SHADOW_STACK + /* + * Avoid accidentally creating shadow stack PTEs + * (Write=0,Dirty=1). Use cmpxchg() to prevent races with + * the hardware setting Dirty=1. + */ + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) { + pmd_t old_pmd, new_pmd; + + old_pmd = READ_ONCE(*pmdp); + do { + new_pmd = pmd_wrprotect(old_pmd); + } while (!try_cmpxchg(&pmdp->pmd, &old_pmd.pmd, new_pmd.pmd)); + + return; + } +#endif + clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp); } From patchwork Sun Mar 19 00:15:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71669 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp527698wrt; Sat, 18 Mar 2023 18:19:16 -0700 (PDT) X-Google-Smtp-Source: AK7set8AJYkGVVL9PHg9kluw1botfikRTQUEnoZu18vm4DC2Rh3B3Qm7GSHDvfsc9c5AQLD2sQ+0 X-Received: by 2002:a05:6a20:3d8a:b0:d0:76e3:16e5 with SMTP id s10-20020a056a203d8a00b000d076e316e5mr17047185pzi.2.1679188755783; Sat, 18 Mar 2023 18:19:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188755; cv=none; d=google.com; s=arc-20160816; b=Esag83jbD2C+Dp7oDvTjP9FTElMWTcyZgETOv1goejmP7L6H67kAhs9DLW58Lnw45v T5uzo5YUaLW3lmBt+7LHRG6XicxmLz8pUYEu8C3HudIS72HrHwvqwPjPUAw+mCM7ZvjY cJg/jmxNh/XVLuQhbarP+/MMYVEPNARakE8AlgELVyZJzIm/dyqL2BxnqgHlqB9KxbeQ rua8Ale4KwUJVXliLezCkOKfkxSZ+IdBak0wpDoXgez40LdbMQOOOF571EJMJFEXABTc l3Z4Lo8GxOGyUXcJdPKn3aZvzGhWTTM2XQR5va7tQJ4DzpI50/E46b9KmkAFAvtq27qJ 7Vrw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=yh44XlmxZ3TpyhdCF8bY5/DkNMqHR+ekMlxGg2PUBdY=; b=t1S9eE1rlqmAlj2sKpyboEncAIKBGmW4v4RiyUrJTw6QBH/VZAe6HVX5QZDTl0i1m+ CQdh3TKn4CvGAJHhVOWySHeYw2RlKxoYByJ7QwUzm8TFf/EM/Z+T41fycML3YKAvPy37 wiG6HlQsrUuyXS5C6zoj8fRWMJTgm3576GIoo4bkDfL6rYdD0UT+1+W8vudZ8UrDISfe 
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A .
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 16/40] x86/mm: Start actually marking _PAGE_SAVED_DIRTY Date: Sat, 18 Mar 2023 17:15:11 -0700 Message-Id: <20230319001535.23210-17-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757028617352334?= X-GMAIL-MSGID: =?utf-8?q?1760757028617352334?= The recently introduced _PAGE_SAVED_DIRTY should be used instead of the HW Dirty bit whenever a PTE is Write=0, in order to not inadvertently create shadow stack PTEs. Update pte_mk*() helpers to do this, and apply the same changes to pmd and pud. For pte_modify() this is a bit trickier. It takes a "raw" pgprot_t which was not necessarily created with any of the existing PTE bit helpers. That means that it can return a pte_t with Write=0,Dirty=1, a shadow stack PTE, when it did not intend to create one. Modify it to also move _PAGE_DIRTY to _PAGE_SAVED_DIRTY. To avoid creating Write=0,Dirty=1 PTEs, pte_modify() needs to avoid: 1. Marking Write=0 PTEs Dirty=1 2. Marking Dirty=1 PTEs Write=0 The first case cannot happen as the existing behavior of pte_modify() is to filter out any Dirty bit passed in newprot. Handle the second case by shifting _PAGE_DIRTY=1 to _PAGE_SAVED_DIRTY=1 if the PTE was write protected by the pte_modify() call. Apply the same changes to pmd_modify(). Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v6: - Rename _PAGE_COW to _PAGE_SAVED_DIRTY (David Hildenbrand) - Open code _PAGE_SAVED_DIRTY part in pte_modify() (Boris) - Change the logic so the open coded part is not too ugly - Merge pte_modify() patch with this one because of the above v4: - Break part patch for better bisectability --- arch/x86/include/asm/pgtable.h | 168 ++++++++++++++++++++++++++++----- 1 file changed, 145 insertions(+), 23 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 349fcab0405a..05dfdbdf96b4 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -124,9 +124,17 @@ extern pmdval_t early_pmd_flags; * The following only work if pte_present() is true. * Undefined behaviour if not.. 
*/ -static inline int pte_dirty(pte_t pte) +static inline bool pte_dirty(pte_t pte) { - return pte_flags(pte) & _PAGE_DIRTY; + return pte_flags(pte) & _PAGE_DIRTY_BITS; +} + +static inline bool pte_shstk(pte_t pte) +{ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return false; + + return (pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY)) == _PAGE_DIRTY; } static inline int pte_young(pte_t pte) @@ -134,9 +142,18 @@ static inline int pte_young(pte_t pte) return pte_flags(pte) & _PAGE_ACCESSED; } -static inline int pmd_dirty(pmd_t pmd) +static inline bool pmd_dirty(pmd_t pmd) { - return pmd_flags(pmd) & _PAGE_DIRTY; + return pmd_flags(pmd) & _PAGE_DIRTY_BITS; +} + +static inline bool pmd_shstk(pmd_t pmd) +{ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return false; + + return (pmd_flags(pmd) & (_PAGE_RW | _PAGE_DIRTY | _PAGE_PSE)) == + (_PAGE_DIRTY | _PAGE_PSE); } #define pmd_young pmd_young @@ -145,9 +162,9 @@ static inline int pmd_young(pmd_t pmd) return pmd_flags(pmd) & _PAGE_ACCESSED; } -static inline int pud_dirty(pud_t pud) +static inline bool pud_dirty(pud_t pud) { - return pud_flags(pud) & _PAGE_DIRTY; + return pud_flags(pud) & _PAGE_DIRTY_BITS; } static inline int pud_young(pud_t pud) @@ -157,13 +174,21 @@ static inline int pud_young(pud_t pud) static inline int pte_write(pte_t pte) { - return pte_flags(pte) & _PAGE_RW; + /* + * Shadow stack pages are logically writable, but do not have + * _PAGE_RW. Check for them separately from _PAGE_RW itself. + */ + return (pte_flags(pte) & _PAGE_RW) || pte_shstk(pte); } #define pmd_write pmd_write static inline int pmd_write(pmd_t pmd) { - return pmd_flags(pmd) & _PAGE_RW; + /* + * Shadow stack pages are logically writable, but do not have + * _PAGE_RW. Check for them separately from _PAGE_RW itself. + */ + return (pmd_flags(pmd) & _PAGE_RW) || pmd_shstk(pmd); } #define pud_write pud_write @@ -342,7 +367,16 @@ static inline pte_t pte_clear_saveddirty(pte_t pte) static inline pte_t pte_wrprotect(pte_t pte) { - return pte_clear_flags(pte, _PAGE_RW); + pte = pte_clear_flags(pte, _PAGE_RW); + + /* + * Blindly clearing _PAGE_RW might accidentally create + * a shadow stack PTE (Write=0,Dirty=1). Move the hardware + * dirty value to the software bit. 
+ */ + if (pte_dirty(pte)) + pte = pte_mksaveddirty(pte); + return pte; } #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP @@ -380,7 +414,7 @@ static inline pte_t pte_clear_uffd_wp(pte_t pte) static inline pte_t pte_mkclean(pte_t pte) { - return pte_clear_flags(pte, _PAGE_DIRTY); + return pte_clear_flags(pte, _PAGE_DIRTY_BITS); } static inline pte_t pte_mkold(pte_t pte) @@ -395,7 +429,19 @@ static inline pte_t pte_mkexec(pte_t pte) static inline pte_t pte_mkdirty(pte_t pte) { - return pte_set_flags(pte, _PAGE_DIRTY | _PAGE_SOFT_DIRTY); + pteval_t dirty = _PAGE_DIRTY; + + /* Avoid creating Dirty=1,Write=0 PTEs */ + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && !pte_write(pte)) + dirty = _PAGE_SAVED_DIRTY; + + return pte_set_flags(pte, dirty | _PAGE_SOFT_DIRTY); +} + +static inline pte_t pte_mkwrite_shstk(pte_t pte) +{ + /* pte_clear_saveddirty() also sets Dirty=1 */ + return pte_clear_saveddirty(pte); } static inline pte_t pte_mkyoung(pte_t pte) @@ -412,7 +458,12 @@ struct vm_area_struct; static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { - return pte_mkwrite_kernel(pte); + pte = pte_mkwrite_kernel(pte); + + if (pte_dirty(pte)) + pte = pte_clear_saveddirty(pte); + + return pte; } static inline pte_t pte_mkhuge(pte_t pte) @@ -481,7 +532,15 @@ static inline pmd_t pmd_clear_saveddirty(pmd_t pmd) static inline pmd_t pmd_wrprotect(pmd_t pmd) { - return pmd_clear_flags(pmd, _PAGE_RW); + pmd = pmd_clear_flags(pmd, _PAGE_RW); + /* + * Blindly clearing _PAGE_RW might accidentally create + * a shadow stack PMD (RW=0, Dirty=1). Move the hardware + * dirty value to the software bit. + */ + if (pmd_dirty(pmd)) + pmd = pmd_mksaveddirty(pmd); + return pmd; } #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP @@ -508,12 +567,23 @@ static inline pmd_t pmd_mkold(pmd_t pmd) static inline pmd_t pmd_mkclean(pmd_t pmd) { - return pmd_clear_flags(pmd, _PAGE_DIRTY); + return pmd_clear_flags(pmd, _PAGE_DIRTY_BITS); } static inline pmd_t pmd_mkdirty(pmd_t pmd) { - return pmd_set_flags(pmd, _PAGE_DIRTY | _PAGE_SOFT_DIRTY); + pmdval_t dirty = _PAGE_DIRTY; + + /* Avoid creating (HW)Dirty=1, Write=0 PMDs */ + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && !pmd_write(pmd)) + dirty = _PAGE_SAVED_DIRTY; + + return pmd_set_flags(pmd, dirty | _PAGE_SOFT_DIRTY); +} + +static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd) +{ + return pmd_clear_saveddirty(pmd); } static inline pmd_t pmd_mkdevmap(pmd_t pmd) @@ -533,7 +603,12 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd) static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { - return pmd_set_flags(pmd, _PAGE_RW); + pmd = pmd_set_flags(pmd, _PAGE_RW); + + if (pmd_dirty(pmd)) + pmd = pmd_clear_saveddirty(pmd); + + return pmd; } static inline pud_t pud_set_flags(pud_t pud, pudval_t set) @@ -577,17 +652,32 @@ static inline pud_t pud_mkold(pud_t pud) static inline pud_t pud_mkclean(pud_t pud) { - return pud_clear_flags(pud, _PAGE_DIRTY); + return pud_clear_flags(pud, _PAGE_DIRTY_BITS); } static inline pud_t pud_wrprotect(pud_t pud) { - return pud_clear_flags(pud, _PAGE_RW); + pud = pud_clear_flags(pud, _PAGE_RW); + + /* + * Blindly clearing _PAGE_RW might accidentally create + * a shadow stack PUD (RW=0, Dirty=1). Move the hardware + * dirty value to the software bit. 
+ */ + if (pud_dirty(pud)) + pud = pud_mksaveddirty(pud); + return pud; } static inline pud_t pud_mkdirty(pud_t pud) { - return pud_set_flags(pud, _PAGE_DIRTY | _PAGE_SOFT_DIRTY); + pudval_t dirty = _PAGE_DIRTY; + + /* Avoid creating (HW)Dirty=1, Write=0 PUDs */ + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && !pud_write(pud)) + dirty = _PAGE_SAVED_DIRTY; + + return pud_set_flags(pud, dirty | _PAGE_SOFT_DIRTY); } static inline pud_t pud_mkdevmap(pud_t pud) @@ -607,7 +697,11 @@ static inline pud_t pud_mkyoung(pud_t pud) static inline pud_t pud_mkwrite(pud_t pud) { - return pud_set_flags(pud, _PAGE_RW); + pud = pud_set_flags(pud, _PAGE_RW); + + if (pud_dirty(pud)) + pud = pud_clear_saveddirty(pud); + return pud; } #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY @@ -724,6 +818,8 @@ static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask); static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) { pteval_t val = pte_val(pte), oldval = val; + bool wr_protected; + pte_t pte_result; /* * Chop off the NX bit (if present), and add the NX portion of @@ -732,17 +828,43 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) val &= _PAGE_CHG_MASK; val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK; val = flip_protnone_guard(oldval, val, PTE_PFN_MASK); - return __pte(val); + + pte_result = __pte(val); + + /* + * Do the saveddirty fixup if the PTE was just write protected and + * it's dirty. + */ + wr_protected = (oldval & _PAGE_RW) && !(val & _PAGE_RW); + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && wr_protected && + (val & _PAGE_DIRTY)) + pte_result = pte_mksaveddirty(pte_result); + + return pte_result; } static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) { pmdval_t val = pmd_val(pmd), oldval = val; + bool wr_protected; + pmd_t pmd_result; - val &= _HPAGE_CHG_MASK; + val &= (_HPAGE_CHG_MASK & ~_PAGE_DIRTY); val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK; val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK); - return __pmd(val); + + pmd_result = __pmd(val); + + /* + * Do the saveddirty fixup if the PMD was just write protected and + * it's dirty. 
+ */ + wr_protected = (oldval & _PAGE_RW) && !(val & _PAGE_RW); + if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && wr_protected && + (val & _PAGE_DIRTY)) + pmd_result = pmd_mksaveddirty(pmd_result); + + return pmd_result; + } /*
From patchwork Sun Mar 19 00:15:12 2023
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A .
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 17/40] mm: Move VM_UFFD_MINOR_BIT from 37 to 38 Date: Sat, 18 Mar 2023 17:15:12 -0700 Message-Id: <20230319001535.23210-18-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757901102428028?= X-GMAIL-MSGID: =?utf-8?q?1760757901102428028?= From: Yu-cheng Yu The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly. Future patches will introduce a new VM flag VM_SHADOW_STACK that will be VM_HIGH_ARCH_BIT_5. VM_HIGH_ARCH_BIT_1 through VM_HIGH_ARCH_BIT_4 are bits 32-36, and bit 37 is the unrelated VM_UFFD_MINOR_BIT. For the sake of order, make all VM_HIGH_ARCH_BITs stay together by moving VM_UFFD_MINOR_BIT from 37 to 38. This will allow VM_SHADOW_STACK to be introduced as 37. Co-developed-by: Rick Edgecombe Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Reviewed-by: Axel Rasmussen Acked-by: Mike Rapoport (IBM) Acked-by: Peter Xu Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook Reviewed-by: David Hildenbrand --- include/linux/mm.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index af652444fbba..a1b31caae013 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -377,7 +377,7 @@ extern unsigned int kobjsize(const void *objp); #endif #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR -# define VM_UFFD_MINOR_BIT 37 +# define VM_UFFD_MINOR_BIT 38 # define VM_UFFD_MINOR BIT(VM_UFFD_MINOR_BIT) /* UFFD minor faults */ #else /* !CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */ # define VM_UFFD_MINOR VM_NONE From patchwork Sun Mar 19 00:15:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71673 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp528031wrt; Sat, 18 Mar 2023 18:20:27 -0700 (PDT) X-Google-Smtp-Source: AK7set99wFgD5HwgAdrbE2MD9nRlt5tb3awgZ7DUd2+PGLpMD1DeIXxRaAxF7FFmq/Rc0HiEyNw2 X-Received: by 2002:a17:902:e34c:b0:1a1:7da3:ef58 with SMTP id p12-20020a170902e34c00b001a17da3ef58mr9848980plc.28.1679188826764; Sat, 18 Mar 2023 18:20:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188826; cv=none; d=google.com; s=arc-20160816; b=wnizEt9uDNWPYbh/MvivRTzWnn7abtGgDKn9X3nwDh9U1xo1YePI5P32kvTaxcRv9W Vgc1LnKvLLvAe7Omwu1zQc1wtO4kU6wOSVQ8/Lmx43H49Ebz7oqQ0ZfZ09RIE4eff4NH a+1OE73baeb+0Rekw5PjqVScUHy5yBwkboaqcCsZWWYlok1J9Vs4RWmv4Via6gIoZisO 
1Uuai8/n/PQHIrvXTKBSswdkB5i0b/3LMjGh8oJ390RUkjnpYJnuOMlDxHyc7YOhjeB7 cd3KdyC/DVyZyaeA2gPnEk1hx5MD4XH66BZYEzYdokXVfB32D/jUiZzNlLpSCWV87wVe wgQQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=Mr90LVFmE9gifcOMDGIP+CURvzz5o3WovfUcqTtHWl0=; b=ifUocWSNcoyO/GlVHEH7Lj7PPQjHr54azU7hQ6EXZwfCUkvSsDX6O24rIipoc7bKMK +kQpBQFaHhdt5ksjkezRRHwNVXaCGEawnq+XQlx/S2AT85g+tpBMkyFfSHH3XVGUTxjw uvUmgOHiVZPqdd3/V+dHAvDxQw1GFGY+L2BBjPxec0hUyejZvX3CHEsJJLW6XqI7v19I h/hBj6ejUlRuLQWExkLB9c3kU2R/9yXC79N4AeF0lBSbn2oNUp6lcGMVJg8uk7LPPhDy 4vRGtk2NiKFmVQkFdiw4src59+UQDnx33JDaU1664dln+xSaCpIn9iwlUjSRpx5d6a7f bDYw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=nejJyKbK; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id q28-20020a635c1c000000b0050b8a7635dfsi6259866pgb.295.2023.03.18.18.20.11; Sat, 18 Mar 2023 18:20:26 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=nejJyKbK; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229971AbjCSATR (ORCPT + 99 others); Sat, 18 Mar 2023 20:19:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37390 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229981AbjCSASc (ORCPT ); Sat, 18 Mar 2023 20:18:32 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E2CD82A985; Sat, 18 Mar 2023 17:17:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185041; x=1710721041; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=zgMYcW7jhLowf3LtAX7K40lEhH7jOSIjySbX8LofR4w=; b=nejJyKbK8OTQ4nhv1fyYtK6FJRFFBzex9bWu7+CL1dlXDPTOuDZTxo/E J/UeUbegClGQZYB2iDLR/g4RlrLfw4nmual8vI3+2Eu337CtWZEq13FLC 9p88uiKab8xjfGS0NJIo/RHTspn39UONismVf2sbd4iEBo3YQ/a7X/Y6z dJv5t5PuP+hzQGta3q+zOsdPPpaxX9G9EuBrAwWM9Cfi2viwHJUTXYPQ4 nbMj5Hl7IaLHyGwgNxrx994lCktni20kshku3EaSXA1mFU2Yb01ZFNquj IR4xQyUY4sm2BrKnpgpKWArN2ZtobhKVHuNQYCfFCEHMZvswzvoxTy2cd Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491117" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491117" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:25 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672855" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672855" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:23 -0700 From: Rick 
Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 18/40] mm: Introduce VM_SHADOW_STACK for shadow stack memory Date: Sat, 18 Mar 2023 17:15:13 -0700 Message-Id: <20230319001535.23210-19-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757103163107962?= X-GMAIL-MSGID: =?utf-8?q?1760757103163107962?= From: Yu-cheng Yu New hardware extensions implement support for shadow stack memory, such as x86 Control-flow Enforcement Technology (CET). Add a new VM flag to identify these areas, for example, to be used to properly indicate shadow stack PTEs to the hardware. Shadow stack VMA creation will be tightly controlled and limited to anonymous memory to make the implementation simpler and since that is all that is required. The solution will rely on pte_mkwrite() to create the shadow stack PTEs, so it will not be required for vm_get_page_prot() to learn how to create shadow stack memory. For this reason document that VM_SHADOW_STACK should not be mixed with VM_SHARED. Co-developed-by: Rick Edgecombe Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Reviewed-by: Kirill A. Shutemov Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook Acked-by: David Hildenbrand --- v7: - Use lightly edited commit log verbiage from (David Hildenbrand) - Add explanation for VM_SHARED limitation (David Hildenbrand) v6: - Add comment about VM_SHADOW_STACK not being allowed with VM_SHARED (David Hildenbrand) v3: - Drop arch specific change in arch_vma_name(). The memory can show as anonymous (Kirill) - Change CONFIG_ARCH_HAS_SHADOW_STACK to CONFIG_X86_USER_SHADOW_STACK in show_smap_vma_flags() (Boris) --- Documentation/filesystems/proc.rst | 1 + fs/proc/task_mmu.c | 3 +++ include/linux/mm.h | 8 ++++++++ 3 files changed, 12 insertions(+) diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst index 9d5fd9424e8b..8b314df7ccdf 100644 --- a/Documentation/filesystems/proc.rst +++ b/Documentation/filesystems/proc.rst @@ -564,6 +564,7 @@ encoded manner. 
The codes are the following: mt arm64 MTE allocation tags are enabled um userfaultfd missing tracking uw userfaultfd wr-protect tracking + ss shadow stack page == ======================================= Note that there is no guarantee that every flag and associated mnemonic will diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 6a96e1713fd5..324b092c2ac9 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -711,6 +711,9 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma) #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR [ilog2(VM_UFFD_MINOR)] = "ui", #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */ +#ifdef CONFIG_X86_USER_SHADOW_STACK + [ilog2(VM_SHADOW_STACK)] = "ss", +#endif }; size_t i; diff --git a/include/linux/mm.h b/include/linux/mm.h index a1b31caae013..097544afb1aa 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -326,11 +326,13 @@ extern unsigned int kobjsize(const void *objp); #define VM_HIGH_ARCH_BIT_2 34 /* bit only usable on 64-bit architectures */ #define VM_HIGH_ARCH_BIT_3 35 /* bit only usable on 64-bit architectures */ #define VM_HIGH_ARCH_BIT_4 36 /* bit only usable on 64-bit architectures */ +#define VM_HIGH_ARCH_BIT_5 37 /* bit only usable on 64-bit architectures */ #define VM_HIGH_ARCH_0 BIT(VM_HIGH_ARCH_BIT_0) #define VM_HIGH_ARCH_1 BIT(VM_HIGH_ARCH_BIT_1) #define VM_HIGH_ARCH_2 BIT(VM_HIGH_ARCH_BIT_2) #define VM_HIGH_ARCH_3 BIT(VM_HIGH_ARCH_BIT_3) #define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4) +#define VM_HIGH_ARCH_5 BIT(VM_HIGH_ARCH_BIT_5) #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */ #ifdef CONFIG_ARCH_HAS_PKEYS @@ -346,6 +348,12 @@ extern unsigned int kobjsize(const void *objp); #endif #endif /* CONFIG_ARCH_HAS_PKEYS */ +#ifdef CONFIG_X86_USER_SHADOW_STACK +# define VM_SHADOW_STACK VM_HIGH_ARCH_5 /* Should not be set with VM_SHARED */ +#else +# define VM_SHADOW_STACK VM_NONE +#endif + #if defined(CONFIG_X86) # define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */ #elif defined(CONFIG_PPC) From patchwork Sun Mar 19 00:15:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71689 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp532148wrt; Sat, 18 Mar 2023 18:34:54 -0700 (PDT) X-Google-Smtp-Source: AK7set8rv5PT/PTaCry1VBv6Xe5SccvOqdlVhdQqKSG2fwzx52JayTPz7bFhqicUENcL0WnoGA64 X-Received: by 2002:a05:6a20:5484:b0:cd:47dc:82b5 with SMTP id i4-20020a056a20548400b000cd47dc82b5mr15940872pzk.21.1679189694526; Sat, 18 Mar 2023 18:34:54 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189694; cv=none; d=google.com; s=arc-20160816; b=AOzWD8PoBnm1ARCEeGWwp2sGpwKhncQNbH/yTpNvLcYbUeaP7iDrebq6zNVL0Kpp6s YLuXR4EesGCvhOdLetkADLU5p/R/AFecvVZcXwFhcgffB2LDL0GogKDr7Wq90qN2HAyL iB9vKwMfkXgDoBlhoqr3kdiwK81AnN1H7/9paRnyahquciklrEj21+fvA6yX6onGBm0v ZlHHjeqeFuepHmQZCNE+VPmf1SEpUMiQVxXo8PDk7GJC1CZJ0n5QQGtdhCNMqQUBQhsr CWpGMSvg854stOKyOnk6D3fF+Vu8mdNfEuR3tCb7VTkD7F2q+eg89SkgMRRmVaKrvu2x 2UzQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=LspeC6VcDZY1g8y8TPAj26UpA/NokIsNWt2kToN9yyA=; b=kWK1xaKBVhyEb0YlfODOPa/nwVCg/1Oc1zwKdG+zRTXnXoVkZ16F43wg2LZwVl5yrF Jd+ekAPgRVCFiPk6eRvAFJnY46skMsc0DtT7vccB7tNUlCpT9k3ztBWR8/MyHj/Vf5L7 h9jw4SRNRyt1I2OmdN/CBRpd7LloRinjHuZndwSwTHvSHb7IDslztzq5aUa/RH0ZQuP/ 
38w+hh1bltVsw0xQq8b/cs4I8HUYgiYKYMWl8WjBG0LKQ5teWMeU+l7QeruOIOj9Gn2a LzA8rDhezwdKY9HiPByHC+/YA0eDSJioceISq4IgBOMHgY1o48U9z8XS/xVqIDAhU1Mr xCAQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=HrurZ5B7; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id o3-20020a634103000000b004fbc54ad3b6si6550344pga.470.2023.03.18.18.34.42; Sat, 18 Mar 2023 18:34:54 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=HrurZ5B7; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230192AbjCSATj (ORCPT + 99 others); Sat, 18 Mar 2023 20:19:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37424 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230089AbjCSAS7 (ORCPT ); Sat, 18 Mar 2023 20:18:59 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1CB6028E54; Sat, 18 Mar 2023 17:17:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185063; x=1710721063; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=dronpxQi47Q7lStj0Ifv5In98YSzpCoT6jYTioIlyo0=; b=HrurZ5B7TySOiEFTWCOm+h1oC23XGo+bAMb7afJfYzlhZNqmxuo0/je9 AHId82EnTLvmlibTD8zAIC2m7FbmEofvC+GyHUFTUbhz0k+9cIsNE1nZc lZv1JqP2YsN3l+2QLTsBu14O5rj9YHXO79CMF+RUcON0jwokAd/oHiqQT 1EJ18qBdPR+LEAjgMh4NkmGdCMa2oUjTDxoEs6TVQrFOtJF5/H4mHsmDr sEnSN6xnVagxFTz6m+aJNP1n+Vo0kl5zKCkC3eClALaHDS2KEPEDoq5fQ Pw9g8x1a07TWnv6InVe3pHEWI37O458EjD1o8/YaKOsV2uaKpbutgSXu8 w==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491139" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491139" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:26 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672858" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672858" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:25 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 19/40] x86/mm: Check shadow stack page fault errors Date: Sat, 18 Mar 2023 17:15:14 -0700 Message-Id: <20230319001535.23210-20-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760758013195879169?= X-GMAIL-MSGID: =?utf-8?q?1760758013195879169?= The CPU performs "shadow stack accesses" when it expects to encounter shadow stack mappings. These accesses can be implicit (via CALL/RET instructions) or explicit (instructions like WRSS). Shadow stack accesses to shadow-stack mappings can result in faults in normal, valid operation just like regular accesses to regular mappings. Shadow stacks need some of the same features like delayed allocation, swap and copy-on-write. The kernel needs to use faults to implement those features. The architecture has concepts of both shadow stack reads and shadow stack writes. Any shadow stack access to non-shadow stack memory will generate a fault with the shadow stack error code bit set. This means that, unlike normal write protection, the fault handler needs to create a type of memory that can be written to (with instructions that generate shadow stack writes), even to fulfill a read access. So in the case of COW memory, the COW needs to take place even with a shadow stack read. Otherwise the page will be left (shadow stack) writable in userspace. So to trigger the appropriate behavior, set FAULT_FLAG_WRITE for shadow stack accesses, even if the access was a shadow stack read. For the purpose of making this clearer, consider the following example. If a process has a shadow stack, and forks, the shadow stack PTEs will become read-only due to COW. If the CPU in one process performs a shadow stack read access to the shadow stack, for example executing a RET and causing the CPU to read the shadow stack copy of the return address, then in order for the fault to be resolved the PTE will need to be set with shadow stack permissions. But then the memory would be changeable from userspace (from CALL, RET, WRSS, etc). So this scenario needs to trigger COW, otherwise the shared page would be changeable from both processes. Shadow stack accesses can also result in errors, such as when a shadow stack overflows, or if a shadow stack access occurs to a non-shadow-stack mapping. Also, generate the errors for invalid shadow stack accesses. 
Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Further tweak commit log (dhansen, Boris) v7: - Update comment in fault handler (David Hildenbrand) v6: - Update comment due to rename of Cow bit to SavedDirty v5: - Add description of COW example (Boris) - Replace "permissioned" (Boris) - Remove capitalization of shadow stack (Boris) --- arch/x86/include/asm/trap_pf.h | 2 ++ arch/x86/mm/fault.c | 22 ++++++++++++++++++++++ 2 files changed, 24 insertions(+) diff --git a/arch/x86/include/asm/trap_pf.h b/arch/x86/include/asm/trap_pf.h index 10b1de500ab1..afa524325e55 100644 --- a/arch/x86/include/asm/trap_pf.h +++ b/arch/x86/include/asm/trap_pf.h @@ -11,6 +11,7 @@ * bit 3 == 1: use of reserved bit detected * bit 4 == 1: fault was an instruction fetch * bit 5 == 1: protection keys block access + * bit 6 == 1: shadow stack access fault * bit 15 == 1: SGX MMU page-fault */ enum x86_pf_error_code { @@ -20,6 +21,7 @@ enum x86_pf_error_code { X86_PF_RSVD = 1 << 3, X86_PF_INSTR = 1 << 4, X86_PF_PK = 1 << 5, + X86_PF_SHSTK = 1 << 6, X86_PF_SGX = 1 << 15, }; diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index a498ae1fbe66..7beb0ba6b2ec 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1117,8 +1117,22 @@ access_error(unsigned long error_code, struct vm_area_struct *vma) (error_code & X86_PF_INSTR), foreign)) return 1; + /* + * Shadow stack accesses (PF_SHSTK=1) are only permitted to + * shadow stack VMAs. All other accesses result in an error. + */ + if (error_code & X86_PF_SHSTK) { + if (unlikely(!(vma->vm_flags & VM_SHADOW_STACK))) + return 1; + if (unlikely(!(vma->vm_flags & VM_WRITE))) + return 1; + return 0; + } + if (error_code & X86_PF_WRITE) { /* write, present and write, not present: */ + if (unlikely(vma->vm_flags & VM_SHADOW_STACK)) + return 1; if (unlikely(!(vma->vm_flags & VM_WRITE))) return 1; return 0; @@ -1310,6 +1324,14 @@ void do_user_addr_fault(struct pt_regs *regs, perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); + /* + * Read-only permissions can not be expressed in shadow stack PTEs. + * Treat all shadow stack accesses as WRITE faults. This ensures + * that the MM will prepare everything (e.g., break COW) such that + * maybe_mkwrite() can create a proper shadow stack PTE. 
+ */ + if (error_code & X86_PF_SHSTK) + flags |= FAULT_FLAG_WRITE; if (error_code & X86_PF_WRITE) flags |= FAULT_FLAG_WRITE; if (error_code & X86_PF_INSTR) From patchwork Sun Mar 19 00:15:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71674 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp528598wrt; Sat, 18 Mar 2023 18:22:34 -0700 (PDT) X-Google-Smtp-Source: AK7set8Dmpaw/VPB2A2WGocaVyz6P/cuvVu3NBqik42Dz3ckNgAmTmVZDR9AP56O33XBqHweXEJo X-Received: by 2002:a05:6a20:3b0f:b0:d5:e5e2:36ac with SMTP id c15-20020a056a203b0f00b000d5e5e236acmr12191324pzh.19.1679188954418; Sat, 18 Mar 2023 18:22:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188954; cv=none; d=google.com; s=arc-20160816; b=pr7qaG7jEYFnwuneWG2xtShWRQc/uv5ppN4QyV1tcnJISr0W2BtyTxRxkjYP0b3zd+ f62FFdAZ28SOhPbTbDQyVLlXCOWFiytoL8f0BiKgOJYzQ7p7KyTIPNP6YCyeaswmlNiI 0sLJTjkcO42uiHbzHdgCwpk1K4NqfqJP+UQi+rrsMtQjiya2MrfoEPnGSFkRsOAyHjaE xoNUnSHRTDHOQIS8by9SdVsMzTf35WccsepsI4hS+2HLZ7Z5S9CDLTanUqNDzRr3kPzH 32hX0VPAu4c2R+mN1P6PR73ljxefGECRhVJHap13Lvt/MVidNf5TT6xOG8RA7SIkPnAy 6bow== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=ebrXWkOUB+pBTQx21r8Cu08K8bkARURBsW0wQIMrvgs=; b=HVO6+iiWgeO6pK+C+FS0hIwNXaXpq7CLPh5S2j2pacoDyV1J/xVYpZKMsSW5xTV3Ax hMGm46NotmrGdQpXdQnC1f8CEFB7QyzWPyjdcqjO7ZegpANqFO/4lEus81tIWiisf4JP ecZn4bAyXGdvHRh03LaOBiPGQVIRlxGK09Etlpfj1QSuc9kbYwBfIowB1194HCy8iyBk oHLkDWSf+DG+3qRuBmI6JvYNz8gmh0Iq+9unzAdK6xkSVZvXZsQfK12DpmJuyQFjMI/o rseg45FJ4GihLwni7KuFXVkgmRlXSbuxKhk1sYt7fDhCTLfRuuNpBzNuxx7wGA7p9qEg iG/A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=IpaEJuhL; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id q28-20020a635c1c000000b0050b8a7635dfsi6259866pgb.295.2023.03.18.18.22.21; Sat, 18 Mar 2023 18:22:34 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=IpaEJuhL; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230198AbjCSAT6 (ORCPT + 99 others); Sat, 18 Mar 2023 20:19:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36316 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230152AbjCSATM (ORCPT ); Sat, 18 Mar 2023 20:19:12 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 024AE298F6; Sat, 18 Mar 2023 17:17:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185073; x=1710721073; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=HJSW741k8aVu5mpNpwjQ9huZphqr/UOu519JqYv/Vlg=; b=IpaEJuhLf+pVg6obEAOlrCI/MUkoaMPFa6Ly6guupQgA9tuJjbG2oeO1 n/OvfgNpv0Sqe7joxxir5AlvFvyKTKw5aSElVqB0DekTyXcjQGHhnEvLv +nTYxIpF93ITmRbRErk0TuoZFu+rEySMqwaLvUfBf2B8OAYrXZK8WTP7p xvgIiC97+ly3Tavy2eyn1X+eBm5Ry2+4YIWC1Uh/HgLep6cH2ksFJrgJj ks7FSlR0BaxiaQ+iEaaU59uanIfZeqFO6di5G6YXbL+/5z/oIEuSPEUHW XukbzNx6jWqwXOtzINn94pSiUf5C47WgJO6qZNZdKMVr3ReGySFylQ7hA w==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491163" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491163" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:28 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672862" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672862" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:26 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 20/40] x86/mm: Teach pte_mkwrite() about stack memory Date: Sat, 18 Mar 2023 17:15:15 -0700 Message-Id: <20230319001535.23210-21-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757236916409613?= X-GMAIL-MSGID: =?utf-8?q?1760757236916409613?= If a VMA has the VM_SHADOW_STACK flag, it is shadow stack memory. So when it is made writable with pte_mkwrite(), it should create shadow stack memory, not conventionally writable memory. Now that all the places where shadow stack memory might be created pass a VMA into pte_mkwrite(), it can know when it should do this. So make pte_mkwrite() create shadow stack memory when the VMA has the VM_SHADOW_STACK flag. Do the same thing for pmd_mkwrite(). This requires referencing VM_SHADOW_STACK in these functions, which are currently defined in pgtable.h, however mm.h (where VM_SHADOW_STACK is located) can't be pulled in without causing problems for files that reference pgtable.h. So also move pte/pmd_mkwrite() into pgtable.c, where they can safely reference VM_SHADOW_STACK. 
Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Acked-by: Deepak Gupta Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) v6: - New patch --- arch/x86/include/asm/pgtable.h | 20 ++------------------ arch/x86/mm/pgtable.c | 26 ++++++++++++++++++++++++++ 2 files changed, 28 insertions(+), 18 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 05dfdbdf96b4..d81e7ec27507 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -456,15 +456,7 @@ static inline pte_t pte_mkwrite_kernel(pte_t pte) struct vm_area_struct; -static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) -{ - pte = pte_mkwrite_kernel(pte); - - if (pte_dirty(pte)) - pte = pte_clear_saveddirty(pte); - - return pte; -} +pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma); static inline pte_t pte_mkhuge(pte_t pte) { @@ -601,15 +593,7 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd) return pmd_set_flags(pmd, _PAGE_ACCESSED); } -static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) -{ - pmd = pmd_set_flags(pmd, _PAGE_RW); - - if (pmd_dirty(pmd)) - pmd = pmd_clear_saveddirty(pmd); - - return pmd; -} +pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma); static inline pud_t pud_set_flags(pud_t pud, pudval_t set) { diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index e4f499eb0f29..98856bcc8102 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -880,3 +880,29 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) #endif /* CONFIG_X86_64 */ #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */ + +pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) +{ + if (vma->vm_flags & VM_SHADOW_STACK) + return pte_mkwrite_shstk(pte); + + pte = pte_mkwrite_kernel(pte); + + if (pte_dirty(pte)) + pte = pte_clear_saveddirty(pte); + + return pte; +} + +pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) +{ + if (vma->vm_flags & VM_SHADOW_STACK) + return pmd_mkwrite_shstk(pmd); + + pmd = pmd_set_flags(pmd, _PAGE_RW); + + if (pmd_dirty(pmd)) + pmd = pmd_clear_saveddirty(pmd); + + return pmd; +} From patchwork Sun Mar 19 00:15:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71679 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp530347wrt; Sat, 18 Mar 2023 18:28:58 -0700 (PDT) X-Google-Smtp-Source: AK7set/Fp53Rbmcu4A6vlv3TFVKij7BbwMru1szj7SsrXadw48FXHrKkcm5W5Oif9+B8QbTDKiUi X-Received: by 2002:a05:6a20:1bdd:b0:d5:7f0b:f2f with SMTP id cv29-20020a056a201bdd00b000d57f0b0f2fmr11198331pzb.26.1679189338038; Sat, 18 Mar 2023 18:28:58 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189338; cv=none; d=google.com; s=arc-20160816; b=SfRBuZ0n60HhcbpXiP3SMc4Y0BAqBjP7T2NoxhATWdXzwI9NrDRtRjaGr9gAw81YBv isjfT3YvgjryncQiphRd13FCHmzwQvpWU/t4fpM9lRwcEsEwZH1msRUg9ufo4ATW89Pj TOx0a75Da8xXWpkKlwOpKZkBPwfLMyDw9Dycnh44ppcn2xtgdLzbmKf8S03rqE8kh7lV JppdizNl0Sya8q+2/Vr2c/+l4WgLx2OPhfjQ7yYeBRRxug+BhV/PIPuIP+yXc2teQb9y iU8adxZ9RxVtwAm9FAAa2SYb1jV7LWxhRTTxUhWFLvd5GfmNxTzynEbm+ZQzyPuPmsk3 kHlQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=7b/QOAWeKJUMsVXOohTWsZBhj9LBeQOJng5A/JiJyYY=; b=uyH9sCTqMnMxrxppAPJUhS/6dDgvJHMuhBsQl8lTvED8oKEXPnqCJCGcvrwH4EQ/FW 
RxUGDMNEeTTIqEuk8xGDO/K5SdzyNBsyoUFFQtr8MtinG0zXNemVDzRRwmUDsttkBq9k FuLqi74Y0PE3Wa6hhKLBI5/HnuP43YqPhGJFMJYVAFwHufhAuf9j+CHnDdUKBHnjWV+y QESUfFkOqXSn22y0VJFyMoLOLuwbILUYjy+caOP7FhrtRWqIF5yNBG+bjOI5i7f2IWDp hk59c24Car/HiZH/fhCLL70sqHaSmL8b/VGPzaLE9RxB6JQufHdNAg58qbfQKd5lSTU6 Tc5w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=fipricKj; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id q28-20020a635c1c000000b0050b8a7635dfsi6259866pgb.295.2023.03.18.18.28.45; Sat, 18 Mar 2023 18:28:58 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=fipricKj; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229828AbjCSAUK (ORCPT + 99 others); Sat, 18 Mar 2023 20:20:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37492 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230070AbjCSATb (ORCPT ); Sat, 18 Mar 2023 20:19:31 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AE1FC29163; Sat, 18 Mar 2023 17:18:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185080; x=1710721080; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=u/rJkxTnxXm+UTYwj285Z8LlwIJixTQ+7DbH6nVmNI4=; b=fipricKjTrDO61A0MNC+r5bnaPod7K3uX4LX8QX8hhfE+IY22XBgZSFS M71sHWVJZRRpFzr32Nv/C0rkdDbW4E+U9qvqAAsZiLZZcEYoenHrYGz2L IBNHZZKAJ46v+RI47UMCzbz1zKiTL6Tnr0/o8mnzi0xTXb22CsLG6jqsu /8Aj4JEREB0xZXFaLYobQn1zS+KOg89VpCZ5w4+nMd/p/xlkSh2U6q0St 9xYu1fk1A8OEJxVYPI31k9TTUMMsRQcp7lkbB0wyGdBFUjEL81VKb3o3B +oPVzJxKXYUFgGkPXqLN3VBoa4UOKkX35XhmGawwXlc3ACPTnhWcE5O4W A==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491186" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491186" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:29 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672865" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672865" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:28 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . 
Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 21/40] mm: Add guard pages around a shadow stack. Date: Sat, 18 Mar 2023 17:15:16 -0700 Message-Id: <20230319001535.23210-22-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757639076826952?= X-GMAIL-MSGID: =?utf-8?q?1760757639076826952?= The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly. The architecture of shadow stack constrains the ability of userspace to move the shadow stack pointer (SSP) in order to prevent corrupting or switching to other shadow stacks. The RSTORSSP instruction can move the SSP to different shadow stacks, but it requires a specially placed token in order to do this. However, the architecture does not prevent incrementing the stack pointer to wander onto an adjacent shadow stack. To prevent this in software, enforce guard pages at the beginning of shadow stack VMAs, such that there will always be a gap between adjacent shadow stacks. Make the gap big enough so that no userspace SSP changing operations (besides RSTORSSP), can move the SSP from one stack to the next. The SSP can be incremented or decremented by CALL, RET and INCSSP. CALL and RET can move the SSP by a maximum of 8 bytes, at which point the shadow stack would be accessed. The INCSSP instruction can also increment the shadow stack pointer. It is the shadow stack analog of an instruction like: addq $0x80, %rsp However, there is one important difference between an ADD on %rsp and INCSSP. In addition to modifying SSP, INCSSP also reads from the memory of the first and last elements that were "popped". It can be thought of as acting like this: READ_ONCE(ssp); // read+discard top element on stack ssp += nr_to_pop * 8; // move the shadow stack READ_ONCE(ssp-8); // read+discard last popped stack element The maximum distance INCSSP can move the SSP is 2040 bytes, before it would read the memory. Therefore, a single page gap will be enough to prevent any operation from shifting the SSP to an adjacent stack, since it would have to land in the gap at least once, causing a fault. This could be accomplished by using VM_GROWSDOWN, but this has a downside. The behavior would allow shadow stacks to grow, which is unneeded and adds a strange difference to how most regular stacks work. 
Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) - Move and update comment (Boris, David Hildenbrand) v5: - Fix typo in commit log v4: - Drop references to 32 bit instructions - Switch to generic code to drop __weak (Peterz) v2: - Use __weak instead of #ifdef (Dave Hansen) - Only have start gap on shadow stack (Andy Luto) - Create stack_guard_start_gap() to not duplicate code in an arch version of vm_start_gap() (Dave Hansen) - Improve commit log partly with verbiage from (Dave Hansen) --- include/linux/mm.h | 52 ++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 46 insertions(+), 6 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 097544afb1aa..d09fbe9f43f8 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -349,7 +349,36 @@ extern unsigned int kobjsize(const void *objp); #endif /* CONFIG_ARCH_HAS_PKEYS */ #ifdef CONFIG_X86_USER_SHADOW_STACK -# define VM_SHADOW_STACK VM_HIGH_ARCH_5 /* Should not be set with VM_SHARED */ +/* + * This flag should not be set with VM_SHARED because of lack of support + * core mm. It will also get a guard page. This helps userspace protect + * itself from attacks. The reasoning is as follows: + * + * The shadow stack pointer(SSP) is moved by CALL, RET, and INCSSPQ. The + * INCSSP instruction can increment the shadow stack pointer. It is the + * shadow stack analog of an instruction like: + * + * addq $0x80, %rsp + * + * However, there is one important difference between an ADD on %rsp + * and INCSSP. In addition to modifying SSP, INCSSP also reads from the + * memory of the first and last elements that were "popped". It can be + * thought of as acting like this: + * + * READ_ONCE(ssp); // read+discard top element on stack + * ssp += nr_to_pop * 8; // move the shadow stack + * READ_ONCE(ssp-8); // read+discard last popped stack element + * + * The maximum distance INCSSP can move the SSP is 2040 bytes, before + * it would read the memory. Therefore a single page gap will be enough + * to prevent any operation from shifting the SSP to an adjacent stack, + * since it would have to land in the gap at least once, causing a + * fault. + * + * Prevent using INCSSP to move the SSP between shadow stacks by + * having a PAGE_SIZE guard gap. 
+ */ +# define VM_SHADOW_STACK VM_HIGH_ARCH_5 #else # define VM_SHADOW_STACK VM_NONE #endif @@ -3107,15 +3136,26 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr) return mtree_load(&mm->mm_mt, addr); } +static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma) +{ + if (vma->vm_flags & VM_GROWSDOWN) + return stack_guard_gap; + + /* See reasoning around the VM_SHADOW_STACK definition */ + if (vma->vm_flags & VM_SHADOW_STACK) + return PAGE_SIZE; + + return 0; +} + static inline unsigned long vm_start_gap(struct vm_area_struct *vma) { + unsigned long gap = stack_guard_start_gap(vma); unsigned long vm_start = vma->vm_start; - if (vma->vm_flags & VM_GROWSDOWN) { - vm_start -= stack_guard_gap; - if (vm_start > vma->vm_start) - vm_start = 0; - } + vm_start -= gap; + if (vm_start > vma->vm_start) + vm_start = 0; return vm_start; } From patchwork Sun Mar 19 00:15:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71659 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp515277wrt; Sat, 18 Mar 2023 17:34:03 -0700 (PDT) X-Google-Smtp-Source: AK7set+gTZUT/tv0pAnuTtFBl8RHen3MUSXlHSWqJvzlIKvaK0UpDeqBtnDzxl0zUBJsgTLE7rTY X-Received: by 2002:aa7:9a1a:0:b0:600:cc40:2589 with SMTP id w26-20020aa79a1a000000b00600cc402589mr10959990pfj.3.1679186043043; Sat, 18 Mar 2023 17:34:03 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679186043; cv=none; d=google.com; s=arc-20160816; b=I3xvsCm9S/Fhtm2WKyE/RELPOkQiH/Fgoum/b+ws2BCrQj3l1eDdB4feouud8N1qR2 w2/HlC+1/WBeqIF6T07oJ/GbKyPaaoVJgRUlnahXqwt44DBil9NSf6fpCOyc/TaoiDdg 5fR65guu86G9n/DBvdTxEfBIpFiwudVNXcaCdyYekaIq9IslEZmKwr5Alq5S8qc6DA12 MY/p5ZSczvqr0bkJmDr5bT515AlT+8deXofWCKJ8p3V7bAfkcLTygvKpvJlg7WdfVjBX ZTiHPxkcteQHsBkN1ZUD3kxqykhpzD9lAxdDw6k0+Ftu3ciRyRwxRvE4zHi40qrwMXhN Uutg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=85mBto0Sll7IRfEByhCC7vuZskZGVCPWl8XWWuDZvdA=; b=E1gBJ8DlTSh3dIYZWEzkCbM0IhP6PDB7DvcgPaz1iDW5kkpNqQXMTS+TsC6JGPQYmc ZzDOyg4NRDQeOEPqbeE6dDuqslVz4DQWNr7inwZ519fDRHdc4pqzog7HSSLwfeSOkouY VsMFA7Fx+BDe7PRFHGj6szVdCoTROOAljeII2qQNMbX303Dow9uaD4gCcorXm9CQcffn JlKVFxEvk5HSRnDl6RAGHNpTd8bUyNM/MdmVjGvXfyPjI+ltkZLQlbRrsBkCqZlXM/ns m3R4kAMEZjh1E3/tWTdU6jZomB7i9q5Yrky6Xvmiavfy163vtPQe/FzPNWa59ftN+xYb x7dQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=KjrAtLf1; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id 15-20020a63174f000000b004fb7e7ef77dsi6269928pgx.693.2023.03.18.17.33.49; Sat, 18 Mar 2023 17:34:03 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=KjrAtLf1; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230232AbjCSAU1 (ORCPT + 99 others); Sat, 18 Mar 2023 20:20:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230098AbjCSATj (ORCPT ); Sat, 18 Mar 2023 20:19:39 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4059FB776; Sat, 18 Mar 2023 17:18:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185100; x=1710721100; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=Y8Z54H+73n2LbfSOdI84l/ZPS4qK7Tl7I0RpDbdZ/iA=; b=KjrAtLf1PYOlA2PxSvyyY5JKF8NqnTZP1uejV/lWdRjrTnIJiRC26pea IUwdt1H8E9h9XiHa/Iymo3jKE8/ssk1LGJmXwe/6JaJAZrgkaIqi/OSay rfenCwPUjTgiIYDvdlcqwplgrQ3ObCi8otc5gFYi+HtqVJZdEKqyKqwnZ +3dtD7vKyuF3oC2vQImGHTFwW0BLd3cTWrtKsj/fPHjcIGxTAoxKCPvcU mkPybTK1cXH8FocJolb3Cn9B2T4G593HWcwXwyhOTt9UcP27YBcEd9EEM 8oREoYVCpFWV9AvCl33MRZpk0Fo0SAp8T5jwOA9ziKViKbHSWCqCZv110 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491209" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491209" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672869" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672869" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:30 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 22/40] mm/mmap: Add shadow stack pages to memory accounting Date: Sat, 18 Mar 2023 17:15:17 -0700 Message-Id: <20230319001535.23210-23-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760754184336877202?= X-GMAIL-MSGID: =?utf-8?q?1760754184336877202?= The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Acked-by: David Hildenbrand Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbaige (Boris) - Update comment around is_stack_mapping() (David Hildenbrand) v7: - Change is_stack_mapping() to know about VM_SHADOW_STACK so the additions in vm_stat_account() can be dropped. (David Hildenbrand) v3: - Remove unneeded VM_SHADOW_STACK check in accountable_mapping() (Kirill) v2: - Remove is_shadow_stack_mapping() and just change it to directly bitwise and VM_SHADOW_STACK. --- mm/internal.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 7920a8b7982e..2e9f313fcf67 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -491,14 +491,14 @@ static inline bool is_exec_mapping(vm_flags_t flags) } /* - * Stack area - automatically grows in one direction + * Stack area (including shadow stacks) * * VM_GROWSUP / VM_GROWSDOWN VMAs are always private anonymous: * do_mmap() forbids all other combinations. 
*/ static inline bool is_stack_mapping(vm_flags_t flags) { - return (flags & VM_STACK) == VM_STACK; + return ((flags & VM_STACK) == VM_STACK) || (flags & VM_SHADOW_STACK); } /* From patchwork Sun Mar 19 00:15:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71662 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp526372wrt; Sat, 18 Mar 2023 18:14:01 -0700 (PDT) X-Google-Smtp-Source: AK7set9cHM2WxD2++3al2SaO9yAwFZxFKdUmZa0azYWouLzKOE2EVkNWAUpPIbHJ5scdxxGTQflX X-Received: by 2002:a17:902:f291:b0:19f:2dff:2199 with SMTP id k17-20020a170902f29100b0019f2dff2199mr9550667plc.68.1679188441164; Sat, 18 Mar 2023 18:14:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188441; cv=none; d=google.com; s=arc-20160816; b=ncHTt6QZYrzug8FDCv3V0SxJAWuxUbLSynNNhOvbymTiSvUn1N5VND1YD/PegpamOV 1R38P9oMyPgCZ+RCCg3/zZeDUr1yCpbo+WZL7rZhLiiXuivxtis9c1au3UroUSfHpLzO uxp4/iyJfnXvJkz8NgMoB79ys+rq6enSIQAeyJl/qBk24TNg66XshScF//kXZtXxUpgU hrUia9wQ1zxfCdVWRzZsCzXnvkV53hGSlxAGWBrXSkFM7TuPo62AtQ75v1v6MVkZ9uCO oB+hyeMmkqAp17HoBoFn6XmcqUAYEU9mlo2/8i0f3wohJ5fK1Uc+GiotSoUYAblHOoNl ieZg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=AOIntr3VxGUPIG0WTgprd1+JG8OST+0GjG70hTrdOgU=; b=EYAzH4Gv25ovlRWT94ygIUmmN9qjtKCa5JThMftXC3251J7DvEVoGDMuH7szuwLRr7 bDIiTT8PC/rWOIatoBKRcAUZMxgv+Qg/KVYpMxWYmFqHEl+I+uWddwImkkSH8AR/q67o tE4aPNzgwKd/V47lpcdcTbK4V6rUaEkjHGHE4r2KCOSgZl/w2jSvNZkSKQv2ed7etH2j w4X37QEaShqZE3bdug5m742VAiLwx+2Uap3nO1OySVCu/GNwwrA2VdEoZmAEz53a1mt7 R+ZfURtMqDlWPScd/8ZNPTjA3H1qD5wr5v93yzM7+ouBByO0X4mjonMlDszikyt3a2h/ W0eQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="db/8MtrT"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id bg11-20020a1709028e8b00b0019ad44844b3si6253993plb.118.2023.03.18.18.13.49; Sat, 18 Mar 2023 18:14:01 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="db/8MtrT"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229923AbjCSAUo (ORCPT + 99 others); Sat, 18 Mar 2023 20:20:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38418 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230057AbjCSAUI (ORCPT ); Sat, 18 Mar 2023 20:20:08 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E91F28D07; Sat, 18 Mar 2023 17:18:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185114; x=1710721114; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=JiFIyLCYmEZEMe86X9SZ26iJG6/i80APvHoHhUeLipc=; b=db/8MtrTjUybRJ2TnHtj6gg3h8FJeeBZu96h2psRcf6xCOqvarwK78LC 9onxlGkqXz35mU0fOUk3BdR70C7ZKBbEFlInMMRotbi6yY7XcMyTIJoIG hLThi1mqVFoMOrsEfgHHs7cipD3Hqv7AczXkQycbX6dcbwEjql6BtJjgh uUwKmlr6qzbKrEk83SlBDVS4uqsOGsnBIjcXFTrVlbfutvAuHdyydiozR 9y1R8VveMnvKu4N0hMKiFzDUNplrtNZvwmYuxqqFvN2R+1iNIPqSJDkqe k4eulaq2VO94Z0A0l5H0JoIVl7M3pCCybex0GW3iArbQdn+Tyu8ffRfG+ g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491233" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491233" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:33 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672874" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672874" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:31 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 23/40] mm: Re-introduce vm_flags to do_mmap() Date: Sat, 18 Mar 2023 17:15:18 -0700 Message-Id: <20230319001535.23210-24-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760756699165025742?= X-GMAIL-MSGID: =?utf-8?q?1760756699165025742?= From: Yu-cheng Yu There was no more caller passing vm_flags to do_mmap(), and vm_flags was removed from the function's input by: commit 45e55300f114 ("mm: remove unnecessary wrapper function do_mmap_pgoff()"). There is a new user now. Shadow stack allocation passes VM_SHADOW_STACK to do_mmap(). Thus, re-introduce vm_flags to do_mmap(). Co-developed-by: Rick Edgecombe Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Peter Collingbourne Reviewed-by: Kees Cook Reviewed-by: Kirill A. Shutemov Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- fs/aio.c | 2 +- include/linux/mm.h | 3 ++- ipc/shm.c | 2 +- mm/mmap.c | 10 +++++----- mm/nommu.c | 4 ++-- mm/util.c | 2 +- 6 files changed, 12 insertions(+), 11 deletions(-) diff --git a/fs/aio.c b/fs/aio.c index b0b17bd098bb..4a7576989719 100644 --- a/fs/aio.c +++ b/fs/aio.c @@ -558,7 +558,7 @@ static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events) ctx->mmap_base = do_mmap(ctx->aio_ring_file, 0, ctx->mmap_size, PROT_READ | PROT_WRITE, - MAP_SHARED, 0, &unused, NULL); + MAP_SHARED, 0, 0, &unused, NULL); mmap_write_unlock(mm); if (IS_ERR((void *)ctx->mmap_base)) { ctx->mmap_size = 0; diff --git a/include/linux/mm.h b/include/linux/mm.h index d09fbe9f43f8..d389198e17c2 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3043,7 +3043,8 @@ extern unsigned long mmap_region(struct file *file, unsigned long addr, struct list_head *uf); extern unsigned long do_mmap(struct file *file, unsigned long addr, unsigned long len, unsigned long prot, unsigned long flags, - unsigned long pgoff, unsigned long *populate, struct list_head *uf); + vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate, + struct list_head *uf); extern int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm, unsigned long start, size_t len, struct list_head *uf, bool downgrade); diff --git a/ipc/shm.c b/ipc/shm.c index 60e45e7045d4..576a543b7cff 100644 --- a/ipc/shm.c +++ b/ipc/shm.c @@ -1662,7 +1662,7 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, goto invalid; } - addr = do_mmap(file, addr, size, prot, flags, 0, &populate, NULL); + addr = do_mmap(file, addr, size, prot, flags, 0, 0, &populate, NULL); *raddr = addr; err = 0; if (IS_ERR_VALUE(addr)) diff --git a/mm/mmap.c b/mm/mmap.c index 740b54be3ed4..e2c8e8e611d4 100644 --- 
a/mm/mmap.c +++ b/mm/mmap.c @@ -1191,11 +1191,11 @@ static inline bool file_mmap_ok(struct file *file, struct inode *inode, */ unsigned long do_mmap(struct file *file, unsigned long addr, unsigned long len, unsigned long prot, - unsigned long flags, unsigned long pgoff, - unsigned long *populate, struct list_head *uf) + unsigned long flags, vm_flags_t vm_flags, + unsigned long pgoff, unsigned long *populate, + struct list_head *uf) { struct mm_struct *mm = current->mm; - vm_flags_t vm_flags; int pkey = 0; validate_mm(mm); @@ -1256,7 +1256,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr, * to. we assume access permissions have been handled by the open * of the memory object, so we don't do any here. */ - vm_flags = calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) | + vm_flags |= calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) | mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC; if (flags & MAP_LOCKED) @@ -2829,7 +2829,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size, file = get_file(vma->vm_file); ret = do_mmap(vma->vm_file, start, size, - prot, flags, pgoff, &populate, NULL); + prot, flags, 0, pgoff, &populate, NULL); fput(file); out: mmap_write_unlock(mm); diff --git a/mm/nommu.c b/mm/nommu.c index 57ba243c6a37..f6ddd084671f 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -1002,6 +1002,7 @@ unsigned long do_mmap(struct file *file, unsigned long len, unsigned long prot, unsigned long flags, + vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate, struct list_head *uf) @@ -1009,7 +1010,6 @@ unsigned long do_mmap(struct file *file, struct vm_area_struct *vma; struct vm_region *region; struct rb_node *rb; - vm_flags_t vm_flags; unsigned long capabilities, result; int ret; VMA_ITERATOR(vmi, current->mm, 0); @@ -1029,7 +1029,7 @@ unsigned long do_mmap(struct file *file, /* we've determined that we can make the mapping, now translate what we * now know into VMA flags */ - vm_flags = determine_vm_flags(file, prot, flags, capabilities); + vm_flags |= determine_vm_flags(file, prot, flags, capabilities); /* we're going to need to record the mapping */ diff --git a/mm/util.c b/mm/util.c index b8ed9dbc7fd5..a93e832f4065 100644 --- a/mm/util.c +++ b/mm/util.c @@ -539,7 +539,7 @@ unsigned long vm_mmap_pgoff(struct file *file, unsigned long addr, if (!ret) { if (mmap_write_lock_killable(mm)) return -EINTR; - ret = do_mmap(file, addr, len, prot, flag, pgoff, &populate, + ret = do_mmap(file, addr, len, prot, flag, 0, pgoff, &populate, &uf); mmap_write_unlock(mm); userfaultfd_unmap_complete(mm, &uf); From patchwork Sun Mar 19 00:15:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71695 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp532615wrt; Sat, 18 Mar 2023 18:36:32 -0700 (PDT) X-Google-Smtp-Source: AK7set8YiuUYX5gquobquqs4uCXJKL2/VhqDVc1rKi5ekV+PQHs40YKT1a2vjIoiJBIghyDcK8Fc X-Received: by 2002:a17:902:c411:b0:1a0:7422:939a with SMTP id k17-20020a170902c41100b001a07422939amr18030727plk.4.1679189792441; Sat, 18 Mar 2023 18:36:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189792; cv=none; d=google.com; s=arc-20160816; b=s6wCh8WF7H1pz866oUTSfkBUDD7dca2IGwlkkw+D6QQxzh/Mm5Sh6CJbotfUcRXBH7 Ga0IjDDGZdcTford8iKe6xHStYx1fhtuic2KMBAgSOsYx4OIbz62A26acw/pTUNXqxDn 5i68EOPqvxAg+I5as5pIWzftJxCLN7XxveoqUB3lXF0uPzLbEShu99SH5uQ8P3hdKkGS 
lxC148hGZw2NyxQCoFxR3JCaiZCMZ/i6G+p6M8Pai/esL1qeijvDNkgqb+iFN57dqQJ6 wL0G/f1zs2TSdb3wrpy/yvhYeHHltwLkPCNO6/aG7mk/HuiF6E5mIVt9BS1K4aui+v8j JaCg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=SYx+P7Ic7HNiZnYw3nI9Ocezil5osIwuNxYJyZqEM2o=; b=frJjvclWX0u5oGULOPCImFlhOmtXGQfz11LxlitMbyoTjkWyySLBmp+1gHaXoMWAwl OoCiz5XDmkLBHxJ1sPzeX7WlIfZOgLTmFbvvdXY9AcbUObMHnImJG/6zq07kPAfCdIHE OsNq9OLI6Wcl1zcwt3K0mGQjrkzZRrGvMsfdR1hNMh0jfzzo1fhy11o4EzWb8PSzJBR7 oX7q3zaXX37LBmYXVNf/Sahh+5mmEGU/C9rips74ZAtTnjEJfRhZhN9qjtOwmis9+M8X 3mx6WNaI0oU749UwJMQIdtRJM7N4ts8gC3Eo4G+UU2IqX9ikiTWDsuoc+YAcvnTvCwJX 4Nmg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="Wk3BVtp/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id w4-20020a63fb44000000b005074cff89b9si6112115pgj.250.2023.03.18.18.36.17; Sat, 18 Mar 2023 18:36:32 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="Wk3BVtp/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230169AbjCSAUt (ORCPT + 99 others); Sat, 18 Mar 2023 20:20:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38440 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229985AbjCSAUQ (ORCPT ); Sat, 18 Mar 2023 20:20:16 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1A8232B28F; Sat, 18 Mar 2023 17:18:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185119; x=1710721119; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=opyjBN/VDdKGQFn13MiWP9WuORkQYDUhpnOpss/5QKc=; b=Wk3BVtp/wFugqnceAR/D/rFe7gPZ2ivYp4vi8ES2j+NYF15zgU2NKp7M u91ksz1EO/fm6I5FVgJhEiQvzma0h7oBXraKeoRR2St07awE7TKbIJwRu oxn9Fe9uiBqAT6M01ThElGes8FRr0pWt/xWXbMYdvJAvEKHSEL1BmDhuN c9+lE4eoq9viwwZlt1nwP7v11xpujO/dG6nbGmCbTv44N8NjlyTXPm4nt lMOXjygUz4Ah31Iv7cKif2zGrv9k2twoLMWjx42NpWspLWfycq2FyR5aa h/lDBdGEhdPUGQeadVcBcXhGdN/tDSNhDSUcMujtnAOhVubp5aU/LmWXy Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491256" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491256" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:34 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672879" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672879" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:33 -0700 From: Rick 
Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 24/40] mm: Don't allow write GUPs to shadow stack memory Date: Sat, 18 Mar 2023 17:15:19 -0700 Message-Id: <20230319001535.23210-25-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760758115862430646?= X-GMAIL-MSGID: =?utf-8?q?1760758115862430646?= The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly. In userspace, shadow stack memory is writable only in very specific, controlled ways. However, since userspace can, even in the limited ways, modify shadow stack contents, the kernel treats it as writable memory. As a result, without additional work there would remain many ways for userspace to trigger the kernel to write arbitrary data to shadow stacks via get_user_pages(, FOLL_WRITE) based operations. To help userspace protect their shadow stacks, make this a little less exposed by blocking writable get_user_pages() operations for shadow stack VMAs. Still allow FOLL_FORCE to write through shadow stack protections, as it does for read-only protections. This is required for debugging use cases. 
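
[Not part of the patch; a stand-alone sketch added for illustration.] The policy that the mm/gup.c hunk below implements can be seen in isolation with a small user-space C model: an ordinary FOLL_WRITE pin of a shadow stack VMA is refused with -EFAULT, while FOLL_FORCE (as used by ptrace and other debuggers) is still allowed through, just as it is for read-only VMAs. The flag values and the model_check_write() helper below are stand-ins invented for the example, not the kernel's definitions.

#include <stdio.h>

/* Illustrative stand-in values; the real definitions live in
 * include/linux/mm.h and mm/gup.c. */
#define VM_WRITE         0x2UL
#define VM_SHADOW_STACK  0x800000000UL   /* placeholder bit, not the real value */
#define FOLL_WRITE       0x1U
#define FOLL_FORCE       0x10U
#define EFAULT           14

/* Simplified model of the FOLL_WRITE policy added to check_vma_flags():
 * a shadow stack VMA is refused for ordinary write GUPs, but FOLL_FORCE
 * is still honored, as it already is for read-only VMAs. */
static int model_check_write(unsigned long vm_flags, unsigned int gup_flags)
{
        if (gup_flags & FOLL_WRITE) {
                if (!(vm_flags & VM_WRITE) || (vm_flags & VM_SHADOW_STACK)) {
                        if (!(gup_flags & FOLL_FORCE))
                                return -EFAULT;
                }
        }
        return 0;
}

int main(void)
{
        unsigned long shstk_vma = VM_WRITE | VM_SHADOW_STACK;

        printf("plain write GUP:  %d\n",
               model_check_write(shstk_vma, FOLL_WRITE));               /* -14 */
        printf("forced write GUP: %d\n",
               model_check_write(shstk_vma, FOLL_WRITE | FOLL_FORCE));  /* 0 */
        return 0;
}

The explicit VM_SHADOW_STACK test is needed because shadow stack VMAs do carry VM_WRITE (the kernel treats the memory as writable), so the existing !(vm_flags & VM_WRITE) check alone would not catch them.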
Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Acked-by: David Hildenbrand Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris, AndyL) v3: - Add comment in __pte_access_permitted() (Dave) - Remove unneeded shadow stack specific check in __pte_access_permitted() (Jann) --- arch/x86/include/asm/pgtable.h | 5 +++++ mm/gup.c | 2 +- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index d81e7ec27507..2e3d8cca1195 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1638,6 +1638,11 @@ static inline bool __pte_access_permitted(unsigned long pteval, bool write) { unsigned long need_pte_bits = _PAGE_PRESENT|_PAGE_USER; + /* + * Write=0,Dirty=1 PTEs are shadow stack, which the kernel + * shouldn't generally allow access to, but since they + * are already Write=0, the below logic covers both cases. + */ if (write) need_pte_bits |= _PAGE_RW; diff --git a/mm/gup.c b/mm/gup.c index eab18ba045db..e7c7bcc0e268 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -978,7 +978,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) return -EFAULT; if (write) { - if (!(vm_flags & VM_WRITE)) { + if (!(vm_flags & VM_WRITE) || (vm_flags & VM_SHADOW_STACK)) { if (!(gup_flags & FOLL_FORCE)) return -EFAULT; /* hugetlb does not support FOLL_FORCE|FOLL_WRITE. */ From patchwork Sun Mar 19 00:15:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71684 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp531525wrt; Sat, 18 Mar 2023 18:32:41 -0700 (PDT) X-Google-Smtp-Source: AK7set9SjPyAMfXc2YZ5gEcEfRVBHOV09n4m24LwVQFV63Q4CB1pNuCMxDG/+81pdLsapM8anU1Q X-Received: by 2002:a17:90a:ea05:b0:23f:6b62:803c with SMTP id w5-20020a17090aea0500b0023f6b62803cmr6051299pjy.40.1679189561199; Sat, 18 Mar 2023 18:32:41 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189561; cv=none; d=google.com; s=arc-20160816; b=EbnAwGYj0pjKLf9gRiwk/ECZrtlc51ccHd/YEcuA1MVNSGUmSMUfvnfaNrvO59QNKS xQ92pifKOl0/TcW3WnRoHFi7sHpRJDeSeGBnm83dUBBStBK9sopZ00aF2uRoBJH3DCCF xnQU67TY8qHWItY973bKQFevAkJSYmQoXUZi5zu+A3V7/9SgxcpJvJUpSxsXsOqWIxCL 3IFnTJljNN6OPYrTMFrm+VhLzjDMw8ibMcgiRyPTTxxtPo2IJcXk+zYUEk/xXRICMIko a7rkqbgsSwpXPIEDOzbYU0ssI7fbT/ZwCFWyinn1Tzdh0VMNnXxTFpy4zwdTOK9V2IWN q99g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=mwokoOpGillR2xIBR8l2lN8VfHeotneOlmH/z0FhJq4=; b=M70Us8qgS9VMmdSPamaNCG4mhqGcQxCKi03oVCgE5j+3DoJo7R1eB8TMsbPdrZ8QfQ kMUC9SciH/EScNVceT/1jtyLBdyN6s20egaqDMz3vpuoIWoFnFxAjdtrylhbBiTYvnf3 TSbR2l5FJ2ycaBBlmdT+UpLNDakU0ohgpmqn55Yo/2VXw5XizsGBtu92fC3cUEshRZus NytWa4YpBXakeRmzFQmwCLEOpK5MQc/hMvpLH+oNvq4/zY3eJz/femMd6vRpo5xwtmDc LAZ4AVia3bnibgOA3LHejj1CBiZCKcwgLluBYZbpDr33+vWJdeVI4BljMUX+sxpKR2sl AGFA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=lb1Od6fh; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h23-20020a17090a9c1700b0023b3a60296fsi11832835pjp.71.2023.03.18.18.32.29; Sat, 18 Mar 2023 18:32:41 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=lb1Od6fh; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230273AbjCSAVG (ORCPT + 99 others); Sat, 18 Mar 2023 20:21:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37556 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229964AbjCSAUV (ORCPT ); Sat, 18 Mar 2023 20:20:21 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D692F2B2AB; Sat, 18 Mar 2023 17:18:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185124; x=1710721124; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=m2G03CSTQRSNX4YXn65CryKHCN03SDYmmfcTPTkRd8E=; b=lb1Od6fhtL2DLrCK9c+USmxyHjHNckFz4EimAijArw9/WJ4flEY+hAmJ kbYbFTMuLrtNqrzf2WSuECwPndQ2Bgu3mgBdz1XeQIBaxNpMG7LQB6hUt FooqfXx+xvBZrpV+7HMhwsUcAHUu+nvPlm/62HQotWwKhesIKLeErd8HW 4e08tSW0xQJGruLVQWp9BiZzyTjIvccASPsCqwjZmtIxI33V02uiZzlZY q6TouUe16LxaRiVVjW4GsdtXWaA1B/RHSWAJ3cIa7zV0/g7yefoiuXv0A oArJ2d6q3fBoiNHeeH1qYB6qPKhno2qljRDQRV9OamBVNMw9UWcd1P9cO g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491282" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491282" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:36 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672886" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672886" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:34 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 25/40] x86/mm: Introduce MAP_ABOVE4G Date: Sat, 18 Mar 2023 17:15:20 -0700 Message-Id: <20230319001535.23210-26-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757873081540711?= X-GMAIL-MSGID: =?utf-8?q?1760757873081540711?= The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which require some core mm changes to function properly. One of the properties is that the shadow stack pointer (SSP), which is a CPU register that points to the shadow stack like the stack pointer points to the stack, can't be pointing outside of the 32 bit address space when the CPU is executing in 32 bit mode. It is desirable to prevent executing in 32 bit mode when shadow stack is enabled because the kernel can't easily support 32 bit signals. On x86 it is possible to transition to 32 bit mode without any special interaction with the kernel, by doing a "far call" to a 32 bit segment. So the shadow stack implementation can use this address space behavior as a feature, by enforcing that shadow stack memory is always mapped outside of the 32 bit address space. This way userspace will trigger a general protection fault which will in turn trigger a segfault if it tries to transition to 32 bit mode with shadow stack enabled. This provides a clean error generating border for the user if they try attempt to do 32 bit mode shadow stack, rather than leave the kernel in a half working state for userspace to be surprised by. So to allow future shadow stack enabling patches to map shadow stacks out of the 32 bit address space, introduce MAP_ABOVE4G. The behavior is pretty much like MAP_32BIT, except that it has the opposite address range. The are a few differences though. If both MAP_32BIT and MAP_ABOVE4G are provided, the kernel will use the MAP_ABOVE4G behavior. Like MAP_32BIT, MAP_ABOVE4G is ignored in a 32 bit syscall. Since the default search behavior is top down, the normal kaslr base can be used for MAP_ABOVE4G. This is unlike MAP_32BIT which has to add its own randomization in the bottom up case. For MAP_32BIT, only the bottom up search path is used. For MAP_ABOVE4G both are potentially valid, so both are used. In the bottomup search path, the default behavior is already consistent with MAP_ABOVE4G since mmap base should be above 4GB. Without MAP_ABOVE4G, the shadow stack will already normally be above 4GB. So without introducing MAP_ABOVE4G, trying to transition to 32 bit mode with shadow stack enabled would usually segfault anyway. This is already pretty decent guard rails. 
But the addition of MAP_ABOVE4G is some small complexity spent to make it make it more complete. Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) - Use SZ_4G (Boris) v5: - New patch --- arch/x86/include/uapi/asm/mman.h | 1 + arch/x86/kernel/sys_x86_64.c | 6 +++++- include/linux/mman.h | 4 ++++ 3 files changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h index 775dbd3aff73..5a0256e73f1e 100644 --- a/arch/x86/include/uapi/asm/mman.h +++ b/arch/x86/include/uapi/asm/mman.h @@ -3,6 +3,7 @@ #define _ASM_X86_MMAN_H #define MAP_32BIT 0x40 /* only give out 32bit addresses */ +#define MAP_ABOVE4G 0x80 /* only map above 4GB */ #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS #define arch_calc_vm_prot_bits(prot, key) ( \ diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c index 8cc653ffdccd..c783aeb37dce 100644 --- a/arch/x86/kernel/sys_x86_64.c +++ b/arch/x86/kernel/sys_x86_64.c @@ -193,7 +193,11 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0, info.flags = VM_UNMAPPED_AREA_TOPDOWN; info.length = len; - info.low_limit = PAGE_SIZE; + if (!in_32bit_syscall() && (flags & MAP_ABOVE4G)) + info.low_limit = SZ_4G; + else + info.low_limit = PAGE_SIZE; + info.high_limit = get_mmap_base(0); /* diff --git a/include/linux/mman.h b/include/linux/mman.h index cee1e4b566d8..40d94411d492 100644 --- a/include/linux/mman.h +++ b/include/linux/mman.h @@ -15,6 +15,9 @@ #ifndef MAP_32BIT #define MAP_32BIT 0 #endif +#ifndef MAP_ABOVE4G +#define MAP_ABOVE4G 0 +#endif #ifndef MAP_HUGE_2MB #define MAP_HUGE_2MB 0 #endif @@ -50,6 +53,7 @@ | MAP_STACK \ | MAP_HUGETLB \ | MAP_32BIT \ + | MAP_ABOVE4G \ | MAP_HUGE_2MB \ | MAP_HUGE_1GB) From patchwork Sun Mar 19 00:15:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71667 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp527536wrt; Sat, 18 Mar 2023 18:18:29 -0700 (PDT) X-Google-Smtp-Source: AK7set+EKpw8T4sQrJBV08+VlVu9Bp2YpbKg6Y4iuwoRLl7tnXmlP0hGeBmWd8Ize63aVrNODWB/ X-Received: by 2002:a05:6a20:1613:b0:d5:a476:b159 with SMTP id l19-20020a056a20161300b000d5a476b159mr15490444pzj.61.1679188709165; Sat, 18 Mar 2023 18:18:29 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188709; cv=none; d=google.com; s=arc-20160816; b=WRy/cU5D6wlUvC2J0sELtKsG6PmZvzkJSUM0aiLZAENI66dEwB/jnIHczN43einOxH yPyAFit9cSsMN1wb//auZjQnWuqYQ0P0gmgUE+y3k0WP/TW9CMP1GbetZUn28U+0YP34 N7L24ALEUwjk9ychOCHgojf0Tg4Q0obw4kUEcMj/ecLLz83lWTsPhupEOACw/xqq8cLn tTtjMLsMRHSphZW8nhprUy0ngLyELm3k2yZYQdzGWbetEpT+KsLwhcky+5634lasrY6n jqGrj7Zsui2MLgt7O7yFybkSR1XhHjLh8f6qwWqEmhgrFerUByUYOFJOveHbXdCGDi94 XXEw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=QkKdIBSM6+NW96X7S3iu9foVuWMjrTX9oD9LGR3hCIY=; b=ZNzYeNjg7TZwRtoLIojWfWrRalnkB8nUmYEuy9N3HX/GEb7EGZHo+gLh6A/g1SqYoX FIna9+UrOxqGWsMPEORlrQBfDSDPWhJWQiMIBV+Hx+/xE7ThlmfExEOMXF/5AoToe8Hs 3i/RkcwFUJCdBR0z9dezAwM5hj8jovXOdSBCk/KbO4Zenc0FWTlrctKhAQA//dRVGwNY xheN81uK2Ffby3cGZKGAAu/0XcV8+mZndn3+COU60Tkb8tX/Wcq4beZdOPt/n+eQmOVk VEF7iJRKg6vhCS5KBVoiZ48plNifu7lGL1owHV63GaS9teYiXR6KmSFo8Prf/BhKZSMR 5TgA== ARC-Authentication-Results: 
i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="PqFoSa7/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id q28-20020a635c1c000000b0050b8a7635dfsi6259866pgb.295.2023.03.18.18.18.16; Sat, 18 Mar 2023 18:18:29 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="PqFoSa7/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229981AbjCSAVS (ORCPT + 99 others); Sat, 18 Mar 2023 20:21:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37456 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230110AbjCSAUk (ORCPT ); Sat, 18 Mar 2023 20:20:40 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A2E6729164; Sat, 18 Mar 2023 17:19:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185140; x=1710721140; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=hsI3rH3pI5zk+4z/rTXM9tpk9smi32a1UvWDoOYm7ZQ=; b=PqFoSa7/6Z5cmCTiu11VWoCfxMrMU7vhg/2XObKVjJqnOgzg6jK7DhBE drxJnb8txN/5olsNU5b1HANo7xjSuEn5034CLGrQEjjsvSxS6dY7GbmCu moIpbTw19UQvIjcWwZSHc7zdwS2mjrPQ/0/wxXSn31wvyAbeSbFc1piZt s5+IRz4CCzE1ZHmwyuqrTniDsUgghhm1o49mysE8CxtIgHs1HsXorl7XZ p+zaePi92v1DvIxzlH0iiRIBvY+4rcqHnjRGc1Xk2ZQ7xB1Yvg5ncXUFH pipP52qAgr5KQKiPZ0QXF2Bwgw9Wz3JVDG8r+ERVUkcJMl6x1k0YUoJl3 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491306" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491306" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:38 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672895" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672895" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:36 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 26/40] mm: Warn on shadow stack memory in wrong vma Date: Sat, 18 Mar 2023 17:15:21 -0700 Message-Id: <20230319001535.23210-27-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760756979986015272?= X-GMAIL-MSGID: =?utf-8?q?1760756979986015272?= The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly. One sharp edge is that PTEs that are both Write=0 and Dirty=1 are treated as shadow by the CPU, but this combination used to be created by the kernel on x86. Previous patches have changed the kernel to now avoid creating these PTEs unless they are for shadow stack memory. In case any missed corners of the kernel are still creating PTEs like this for non-shadow stack memory, and to catch any re-introductions of the logic, warn if any shadow stack PTEs (Write=0, Dirty=1) are found in non-shadow stack VMAs when they are being zapped. This won't catch transient cases but should have decent coverage. It will be compiled out when shadow stack is not configured. In order to check if a PTE is shadow stack in core mm code, add two arch breakouts arch_check_zapped_pte/pmd(). This will allow shadow stack specific code to be kept in arch/x86. Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook Acked-by: David Hildenbrand --- v8: - Update commit log verbaige (Boris) v6: - Add arch breakout to remove shstk from core MM code. 
v5: - Fix typo in commit log v3: - New patch --- arch/x86/include/asm/pgtable.h | 6 ++++++ arch/x86/mm/pgtable.c | 12 ++++++++++++ include/linux/pgtable.h | 14 ++++++++++++++ mm/huge_memory.c | 1 + mm/memory.c | 1 + 5 files changed, 34 insertions(+) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 2e3d8cca1195..e5b3dce0d9fe 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1684,6 +1684,12 @@ static inline bool arch_has_hw_pte_young(void) return true; } +#define arch_check_zapped_pte arch_check_zapped_pte +void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte); + +#define arch_check_zapped_pmd arch_check_zapped_pmd +void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd); + #ifdef CONFIG_XEN_PV #define arch_has_hw_nonleaf_pmd_young arch_has_hw_nonleaf_pmd_young static inline bool arch_has_hw_nonleaf_pmd_young(void) diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 98856bcc8102..afab0bc7862b 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -906,3 +906,15 @@ pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) return pmd; } + +void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte) +{ + VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && + pte_shstk(pte)); +} + +void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd) +{ + VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && + pmd_shstk(pmd)); +} diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index c63cd44777ec..4a8970b9fb11 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -291,6 +291,20 @@ static inline bool arch_has_hw_pte_young(void) } #endif +#ifndef arch_check_zapped_pte +static inline void arch_check_zapped_pte(struct vm_area_struct *vma, + pte_t pte) +{ +} +#endif + +#ifndef arch_check_zapped_pmd +static inline void arch_check_zapped_pmd(struct vm_area_struct *vma, + pmd_t pmd) +{ +} +#endif + #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long address, diff --git a/mm/huge_memory.c b/mm/huge_memory.c index aaf815838144..24797be05fcb 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1689,6 +1689,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, */ orig_pmd = pmdp_huge_get_and_clear_full(vma, addr, pmd, tlb->fullmm); + arch_check_zapped_pmd(vma, orig_pmd); tlb_remove_pmd_tlb_entry(tlb, pmd, addr); if (vma_is_special_huge(vma)) { if (arch_needs_pgtable_deposit()) diff --git a/mm/memory.c b/mm/memory.c index d0972d2d6f36..c953c2c4588c 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1389,6 +1389,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, continue; ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm); + arch_check_zapped_pte(vma, ptent); tlb_remove_tlb_entry(tlb, pte, addr); zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent); From patchwork Sun Mar 19 00:15:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71683 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp531519wrt; Sat, 18 Mar 2023 18:32:39 -0700 (PDT) X-Google-Smtp-Source: AK7set+fs9T8wD9zbCP4VeNB1lTywZXJgS5auotgPhoqzZhnMMB2maOUosnyFBR7TIsCI2tC9u3g X-Received: by 2002:a05:6a20:12c2:b0:cb:ac6c:13d3 with SMTP id v2-20020a056a2012c200b000cbac6c13d3mr9448330pzg.21.1679189559303; Sat, 18 Mar 2023 18:32:39 -0700 (PDT) 
ARC-Seal: i=1; a=rsa-sha256; t=1679189559; cv=none; d=google.com; s=arc-20160816; b=iRBizrf7hKqwSrOzCYEPwoSjdocC9xjXYtXBTFlmRR7hWQ3F1k0Y3crKU0Okzu67eQ kvpLyERIzgcCeObuZBmVq0hX54IzBEzD+3FVRlWceKt3C6uan3qruZf6mJkeOo35iHJt TXouFfWrpgSoo3YraTw/oAjO6jJOc7iQqU9vis3mWsDY2cojVmAhGPa8GmQBXHBkxFLE UwqUNpi74FxECMQELKhZWkvQHFeixes6uZe2ta8d0Z+gOuPZLd42c+Un5yEbayuiDsRQ WZW3h1gna0VREDTFByCqHFgJoU5fva0PVacjtRkeb7UQfQkFRF2eC3DWD/mfHANdrXul Guxg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=hckJnny/Q5UIHPZn1YqPA8ZhgyJI/JYSq0aVaLRSpaE=; b=qW3s8uMniwCIMBVfS0zsUyfzTspEaOLBnLaUs8SXp+hk21uC+VI0crUbdYew9VROYr J/R+hX7XNGl3fb9lt3i7i+mC8d8vTml2Gi8xT55aIVLTcBxqpnNT0vMFyUajgOU94B4o OQDjZwG7jb6chJem1IulvJgfLLxAIbRYarq2j0sWqxOWjW8AWYf501/ZW0Cc5i4WMAj0 NSSZ2brpJzwQSIDKFD/k6DUQL+5Ts1Qm/KNlYeKD+vn5Jg//v6jauQaQzNhrgxh+JV/n NOw2DFiSbHn49NavQO57EvNcVhMQnxlzfe9jeiLxlf+qSZgNN+aaBw6iO4Rx1xhjfwAm /GAA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=M7Dx0Sn5; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id q28-20020a635c1c000000b0050b8a7635dfsi6259866pgb.295.2023.03.18.18.32.26; Sat, 18 Mar 2023 18:32:39 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=M7Dx0Sn5; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230297AbjCSAVb (ORCPT + 99 others); Sat, 18 Mar 2023 20:21:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37534 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230210AbjCSAUo (ORCPT ); Sat, 18 Mar 2023 20:20:44 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2AC8C2A6D7; Sat, 18 Mar 2023 17:19:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185151; x=1710721151; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=5Gp9o9+xrqpZdB7LgH7hV6GLcbn71BAvzGNiD7RQW5E=; b=M7Dx0Sn5yZVPFkuv7o9BMjFB+7cQnXcrQRPk8ZesaheNyllnlNyJCW86 T5hl9zQ9zgQ0w0B4v60HdL6amrpgN+ELpcxYJPRRH6KXZ9Ji8993a2Fty vHOdkb41O3Mbt8rNRWJa5wCYqtCJTeufusJHqgMLpoC4EpDkBBiBOyulK rtPQ2e/mxMLIueTX73jTPknwsljEKKopU79apFsG/4gYUz9XdamlcGQBn +8zYpOS+FA9ovdTzWQro7yz4/O3q54/CkJis3Kn5Fz3VeA/0sM+Tc+IQE txHvGU1Ctj+amJVFQoBTAD+EvZGH+W0RxDH5DYdiHfH/wJfvLzkmTpsjE Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491328" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491328" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672901" 
X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672901" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:38 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 27/40] x86/mm: Warn if create Write=0,Dirty=1 with raw prot Date: Sat, 18 Mar 2023 17:15:22 -0700 Message-Id: <20230319001535.23210-28-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757870806120861?= X-GMAIL-MSGID: =?utf-8?q?1760757870806120861?= When user shadow stack is in use, Write=0,Dirty=1 is treated by the CPU as shadow stack memory. So for shadow stack memory this bit combination is valid, but when Dirty=1,Write=1 (conventionally writable) memory is being write protected, the kernel has been taught to transition the Dirty=1 bit to SavedDirty=1, to avoid inadvertently creating shadow stack memory. It does this inside pte_wrprotect() because it knows the PTE is not intended to be a writable shadow stack entry, it is supposed to be write protected. However, when a PTE is created by a raw prot using mk_pte(), mk_pte() can't know whether to adjust Dirty=1 to SavedDirty=1. It can't distinguish between the caller intending to create a shadow stack PTE or needing the SavedDirty shift. The kernel has been updated to not do this, and so Write=0,Dirty=1 memory should only be created by the pte_mkfoo() helpers. Add a warning to make sure no new mk_pte() start doing this, like, for example, set_memory_rox() did. 
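
[Not part of the patch; a sketch for illustration.] The predicate behind the new WARN_ON_ONCE() in mk_pte() can be exercised on its own with a small stand-alone C program. The looks_like_shadow_stack_prot() helper is invented for the example, and the bit values are the architectural x86 _PAGE_RW/_PAGE_DIRTY positions, reproduced here only to make the test concrete; in the kernel the warning additionally depends on cpu_feature_enabled(X86_FEATURE_USER_SHSTK).

#include <stdio.h>
#include <stdbool.h>

/* x86 PTE permission bits used by the warning (architectural positions,
 * shown here only to make the predicate concrete). */
#define _PAGE_RW     0x002UL   /* bit 1: writable */
#define _PAGE_DIRTY  0x040UL   /* bit 6: dirty */

/* The condition the new WARN_ON_ONCE() tests: Dirty=1 together with
 * Write=0 is the shadow stack encoding, so a raw prot that already has
 * that combination is suspicious when user shadow stack is in use. */
static bool looks_like_shadow_stack_prot(unsigned long pgprot)
{
        return (pgprot & (_PAGE_DIRTY | _PAGE_RW)) == _PAGE_DIRTY;
}

int main(void)
{
        printf("Write=1,Dirty=1 -> warn? %d\n",
               looks_like_shadow_stack_prot(_PAGE_RW | _PAGE_DIRTY));  /* 0 */
        printf("Write=0,Dirty=1 -> warn? %d\n",
               looks_like_shadow_stack_prot(_PAGE_DIRTY));             /* 1 */
        printf("Write=0,Dirty=0 -> warn? %d\n",
               looks_like_shadow_stack_prot(0));                       /* 0 */
        return 0;
}

Only the Write=0,Dirty=1 combination trips the check, which is exactly the encoding the CPU interprets as shadow stack.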
Signed-off-by: Rick Edgecombe Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) v6: - New patch (Note, this has already been a useful warning, it caught the newly added set_memory_rox() doing this) --- arch/x86/include/asm/pgtable.h | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index e5b3dce0d9fe..7142f99d3fbb 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1032,7 +1032,15 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) * (Currently stuck as a macro because of indirect forward reference * to linux/mm.h:page_to_nid()) */ -#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot)) +#define mk_pte(page, pgprot) \ +({ \ + pgprot_t __pgprot = pgprot; \ + \ + WARN_ON_ONCE(cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && \ + (pgprot_val(__pgprot) & (_PAGE_DIRTY | _PAGE_RW)) == \ + _PAGE_DIRTY); \ + pfn_pte(page_to_pfn(page), __pgprot); \ +}) static inline int pmd_bad(pmd_t pmd) { From patchwork Sun Mar 19 00:15:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71670 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp527818wrt; Sat, 18 Mar 2023 18:19:42 -0700 (PDT) X-Google-Smtp-Source: AK7set+eUwu0dOmGljrL50iRDQOHNiqtlJXqH8ygJkGEIUiPJINpEoMy3H71pip5yiP4ksz5et/U X-Received: by 2002:a17:90a:51:b0:23d:4b01:b27 with SMTP id 17-20020a17090a005100b0023d4b010b27mr14431168pjb.10.1679188782623; Sat, 18 Mar 2023 18:19:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188782; cv=none; d=google.com; s=arc-20160816; b=z+5WJZijmqNj0K4sRKV9n8IsBHxdzItlcrABS9Fx2203waBvJkf1EaJFhSffm30Smz eGDEkNTAYjGyBND0CA+89olsq46SH74hw7+zAyNmbAwsGZYRPI75DH3xAZENB2xmZEvb yk+37Pjk1tKgHe6XUo8phd36IWm1doCQ7oQ8ys0nxukllghlIPpoHRITxd6MeaDHP3hA CAydF1llfsVoVBaax7Mczkl3Gt1QsYknNg1nmXMdP4Prz974qeXDo1xAzJQfUasmDgHn BWlPIEfZ7/mz1n2VveFK24SzxzpDs9SEY3+E+lnHsNnhmh5qajQHCI6mQ7vguXjqNFyo lt7A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=9CimsSkcnB1SixWm79FtbmcEJmTxe2uhP+S9LsdP8OM=; b=RrhLEUYIRgjcW/6BsnCsd+rVUhMDyvsrgIc9sU3i2MCwvrbIVgr6WuvTck4L+TTxKj juYMbHlGXDRFOxi/1fqpIswT+6oYrqV58RscmlCacQ4MAg0Z0sYSxSlMGsvTPP+CjkAZ 1IyEZPpN9zLwkTqbz5BUhMrAYxlQEIpqnkdDSSXh4M9YIyBWprEOyZotbBB4iwAQZc5o PML3/gn1zrQ47Us6nHwJOpu0DAC/lAPsCAHhIY5jBeRJQloN3ph0VE3DJ4d8aTP1MT4E 9paOoe4HDSq88W0o3kE0w+abSaH8vV6wUM+/cR4oyA/PhONOIkYTI3nwI+ASX1t5X2o+ zYeA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=cX7gWp2t; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id o8-20020a656148000000b00502fd16cd1csi6543680pgv.367.2023.03.18.18.19.29; Sat, 18 Mar 2023 18:19:42 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=cX7gWp2t; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230304AbjCSAVm (ORCPT + 99 others); Sat, 18 Mar 2023 20:21:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36286 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230266AbjCSAVE (ORCPT ); Sat, 18 Mar 2023 20:21:04 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5EE932A176; Sat, 18 Mar 2023 17:19:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185155; x=1710721155; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=0ExKgIRjskyaMqTzivet8rEqvydaMzUAlH3acHHtrn0=; b=cX7gWp2tsOdwp19pokmAOdbD+i5RbuMXPCYC7ldGPkihHPvvd8aW4LX0 Xf+gB9QxmxDzJ8RvJ3WshMG7xaXjWQ3Mxuo+l/lqhrnrGaoKVSjSCOXME 1LiCdAjNkEX1c1o+n36h6Kqz1VPdi+Ag+2t9GcE4MJ63r5ymlK7y1LBJf sHg55+MNRiSdGBAoB+DI+LdnOBg4N/C8zeMYsYxhQFKYq3jTfyPSKZN4O jZXaM2C7/JIkMKFfN1ckFJK4D3k491ImYyaLG8fTh8F4k2ZACc+pecwIA Ud6xFeAiWSvMVoToxerB410Rr+CQO2MurmAn0qQGviGEX4UJX9gQMjJyo A==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491351" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491351" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:41 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672914" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672914" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:39 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 28/40] x86: Introduce userspace API for shadow stack Date: Sat, 18 Mar 2023 17:15:23 -0700 Message-Id: <20230319001535.23210-29-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757057372423068?= X-GMAIL-MSGID: =?utf-8?q?1760757057372423068?= Add three new arch_prctl() handles: - ARCH_SHSTK_ENABLE/DISABLE enables or disables the specified feature. Returns 0 on success or a negative value on error. - ARCH_SHSTK_LOCK prevents future disabling or enabling of the specified feature. Returns 0 on success or a negative value on error. The features are handled per-thread and inherited over fork(2)/clone(2), but reset on exec(). Co-developed-by: Kirill A. Shutemov Signed-off-by: Kirill A. Shutemov Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- This is preparation patch. It does not implement any features. The {} are to make the diff in a future patch cleaner. v8: - Update commit log verbiage (Boris) v4: - Remove references to CET and replace with shadow stack (Peterz) v3: - Move shstk.c Makefile changes earlier (Kees) - Add #ifdef around features_locked and features (Kees) - Encapsulate features reset earlier in reset_thread_features() so features and features_locked are not referenced in code that would be compiled !CONFIG_X86_USER_SHADOW_STACK. 
(Kees) - Fix typo in commit log (Kees) - Switch arch_prctl() numbers to avoid conflict with LAM v2: - Only allow one enable/disable per call (tglx) - Return error code like a normal arch_prctl() (Alexander Potapenko) - Make CET only (tglx) --- arch/x86/include/asm/processor.h | 6 +++++ arch/x86/include/asm/shstk.h | 21 +++++++++++++++ arch/x86/include/uapi/asm/prctl.h | 6 +++++ arch/x86/kernel/Makefile | 2 ++ arch/x86/kernel/process_64.c | 7 ++++- arch/x86/kernel/shstk.c | 44 +++++++++++++++++++++++++++++++ 6 files changed, 85 insertions(+), 1 deletion(-) create mode 100644 arch/x86/include/asm/shstk.h create mode 100644 arch/x86/kernel/shstk.c diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 8d73004e4cac..bd16e012b3e9 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -28,6 +28,7 @@ struct vm86; #include #include #include +#include #include #include @@ -475,6 +476,11 @@ struct thread_struct { */ u32 pkru; +#ifdef CONFIG_X86_USER_SHADOW_STACK + unsigned long features; + unsigned long features_locked; +#endif + /* Floating point and extended processor state */ struct fpu fpu; /* diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h new file mode 100644 index 000000000000..ec753809f074 --- /dev/null +++ b/arch/x86/include/asm/shstk.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_X86_SHSTK_H +#define _ASM_X86_SHSTK_H + +#ifndef __ASSEMBLY__ +#include + +struct task_struct; + +#ifdef CONFIG_X86_USER_SHADOW_STACK +long shstk_prctl(struct task_struct *task, int option, unsigned long features); +void reset_thread_features(void); +#else +static inline long shstk_prctl(struct task_struct *task, int option, + unsigned long arg2) { return -EINVAL; } +static inline void reset_thread_features(void) {} +#endif /* CONFIG_X86_USER_SHADOW_STACK */ + +#endif /* __ASSEMBLY__ */ + +#endif /* _ASM_X86_SHSTK_H */ diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h index 500b96e71f18..b2b3b7200b2d 100644 --- a/arch/x86/include/uapi/asm/prctl.h +++ b/arch/x86/include/uapi/asm/prctl.h @@ -20,4 +20,10 @@ #define ARCH_MAP_VDSO_32 0x2002 #define ARCH_MAP_VDSO_64 0x2003 +/* Don't use 0x3001-0x3004 because of old glibcs */ + +#define ARCH_SHSTK_ENABLE 0x5001 +#define ARCH_SHSTK_DISABLE 0x5002 +#define ARCH_SHSTK_LOCK 0x5003 + #endif /* _ASM_X86_PRCTL_H */ diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index 92446f1dedd7..b366641703e3 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -146,6 +146,8 @@ obj-$(CONFIG_CALL_THUNKS) += callthunks.o obj-$(CONFIG_X86_CET) += cet.o +obj-$(CONFIG_X86_USER_SHADOW_STACK) += shstk.o + ### # 64 bit specific files ifeq ($(CONFIG_X86_64),y) diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index bb65a68b4b49..9bbad1763e33 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -514,6 +514,8 @@ start_thread_common(struct pt_regs *regs, unsigned long new_ip, load_gs_index(__USER_DS); } + reset_thread_features(); + loadsegment(fs, 0); loadsegment(es, _ds); loadsegment(ds, _ds); @@ -830,7 +832,10 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2) case ARCH_MAP_VDSO_64: return prctl_map_vdso(&vdso_image_64, arg2); #endif - + case ARCH_SHSTK_ENABLE: + case ARCH_SHSTK_DISABLE: + case ARCH_SHSTK_LOCK: + return shstk_prctl(task, option, arg2); default: ret = -EINVAL; break; diff --git a/arch/x86/kernel/shstk.c 
b/arch/x86/kernel/shstk.c new file mode 100644 index 000000000000..41ed6552e0a5 --- /dev/null +++ b/arch/x86/kernel/shstk.c @@ -0,0 +1,44 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * shstk.c - Intel shadow stack support + * + * Copyright (c) 2021, Intel Corporation. + * Yu-cheng Yu + */ + +#include +#include +#include + +void reset_thread_features(void) +{ + current->thread.features = 0; + current->thread.features_locked = 0; +} + +long shstk_prctl(struct task_struct *task, int option, unsigned long features) +{ + if (option == ARCH_SHSTK_LOCK) { + task->thread.features_locked |= features; + return 0; + } + + /* Don't allow via ptrace */ + if (task != current) + return -EINVAL; + + /* Do not allow to change locked features */ + if (features & task->thread.features_locked) + return -EPERM; + + /* Only support enabling/disabling one feature at a time. */ + if (hweight_long(features) > 1) + return -EINVAL; + + if (option == ARCH_SHSTK_DISABLE) { + return -EINVAL; + } + + /* Handle ARCH_SHSTK_ENABLE */ + return -EINVAL; +} From patchwork Sun Mar 19 00:15:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71690 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp532219wrt; Sat, 18 Mar 2023 18:35:09 -0700 (PDT) X-Google-Smtp-Source: AK7set/a3y1RIvMW+Fr/d0D1OnQj5ur+L4FZqvKQsFijp9Yo2Y2prBg1aBP8+soZycFaL5NbTZEf X-Received: by 2002:a17:90a:d486:b0:23f:5c65:8c72 with SMTP id s6-20020a17090ad48600b0023f5c658c72mr7004734pju.49.1679189709174; Sat, 18 Mar 2023 18:35:09 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189709; cv=none; d=google.com; s=arc-20160816; b=TZmR5CCM677PxDIjaDIP/7A789rv4Z9B4K3CMIpn/Ij1Lf1YnYAnq2GK9kfIoY1Yfj 8ZGGa/ZGfBeTg+VHHXbFG7MtopUpn+axfAqeJpraEhRwXLpNOKEiksnYxQZI4nv/TNUK 6Ou2Wm1udmyA4FKrrST2Z/za8O/UdBjBEod3+ozzm5IseCSxi4hVTlzsgCFM/4LXxdSy 01tx4OOZh33P3FBLjYqOLETyaS/D9BWeCm9YuqHJJIQc1jzf2sGnaUFIgpMHktbgPPZU 2OwZk3iyxlc6GXNHGly+mc5zPbK5gHv4JWiOns7Bd+jzX3c11fZJJ0mbzMEAxGa74r5Z 4VbA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=o3Rz70ck6nM+CTFRpD6v5ZMhHD7aV88XHo5GuS7VB1o=; b=RpbL3Sc5XAnjpZVC/zbwJZwmAdvD74/XcmXP/+kpKVBXDCmtbaxF2TydgijE3dc/kH o4+a1QWIJqw2rjfg9yQG7XbT97eECt2/fuwewxx7TSkXDeo3RvRK7Fs30Y3fk6qMb0dr ACVbtx0a446P4F862Zs5jBWjBAS4RSK5tf0ChPVBHTKBzAIeTo4cA0lbQ4Ft6AErW/ln oesNhTu5r3Sv2HsAi/MACj/3rI/Jk8xDYgaw0xA8I45tLHs/5LKYAYx10STCDLyealq3 TzvMfeCnb5AxEi/yo+52DFl06ORhjie+HC9tFsXbbyfCT217PQobb7a2tOKVgJlwEJ0Y 6sDg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=JiLVNFuo; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id fs3-20020a17090af28300b0023d53736026si6386775pjb.125.2023.03.18.18.34.56; Sat, 18 Mar 2023 18:35:09 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=JiLVNFuo; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229774AbjCSAV5 (ORCPT + 99 others); Sat, 18 Mar 2023 20:21:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230095AbjCSAVO (ORCPT ); Sat, 18 Mar 2023 20:21:14 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB2562A6C0; Sat, 18 Mar 2023 17:19:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185161; x=1710721161; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=qFnC8BjwG9ajYhpSUihMC4CLPCFBVSbSaq8Aw+3zJPY=; b=JiLVNFuoWfaTXh6i20fugnVZyH0hKLbI0J58B000jrDCfltGpFfo1s/e 1OKX2JZ58vwPth6nVn4teWqPi0eWtUawtKjoFi1WEFe7OtyrNih8Ub0/M bOh/X9f5IE5Ey8PKmPi1nQZjaLShkkQdGQsnfQ/3zKldGK6sY+JVV1WjE R8ZotepRpyFaOR3StpFY5e9TGjHPRV560QdAVlCOBUKdUAWOPxe3GeDk8 j+o3D654o12NvEV01jsL2VNJ8LgWpbrKC4fiKk2+8VdfDUyHPaegfKLt6 1epyVwdmVzwsuDtZR44AT5gTjHHvfK+qukD1vmhWhT3KerF3jkq9XEIgs w==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491375" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491375" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672927" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672927" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:41 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 29/40] x86/shstk: Add user-mode shadow stack support Date: Sat, 18 Mar 2023 17:15:24 -0700 Message-Id: <20230319001535.23210-30-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760758028316110215?= X-GMAIL-MSGID: =?utf-8?q?1760758028316110215?= Introduce basic shadow stack enabling/disabling/allocation routines. A task's shadow stack is allocated from memory with VM_SHADOW_STACK flag and has a fixed size of min(RLIMIT_STACK, 4GB). Keep the task's shadow stack address and size in thread_struct. This will be copied when cloning new threads, but needs to be cleared during exec, so add a function to do this. 32 bit shadow stack is not expected to have many users and it will complicate the signal implementation. So do not support IA32 emulation or x32. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v7: - Add explanation for not supporting 32 bit in commit log (Boris) v5: - Switch to EOPNOTSUPP - Use MAP_ABOVE4G - Move set_clr_bits_msrl() to patch where it is first used v4: - Just set MSR_IA32_U_CET when disabling shadow stack, since we don't have IBT yet. 
(Peterz) v3: - Use define for set_clr_bits_msrl() (Kees) - Make some functions static (Kees) - Change feature_foo() to features_foo() (Kees) - Centralize shadow stack size rlimit checks (Kees) - Disable x32 support --- arch/x86/include/asm/processor.h | 2 + arch/x86/include/asm/shstk.h | 7 ++ arch/x86/include/uapi/asm/prctl.h | 3 + arch/x86/kernel/shstk.c | 145 ++++++++++++++++++++++++++++++ 4 files changed, 157 insertions(+) diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index bd16e012b3e9..ff98cd6d5af2 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -479,6 +479,8 @@ struct thread_struct { #ifdef CONFIG_X86_USER_SHADOW_STACK unsigned long features; unsigned long features_locked; + + struct thread_shstk shstk; #endif /* Floating point and extended processor state */ diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h index ec753809f074..2b1f7c9b9995 100644 --- a/arch/x86/include/asm/shstk.h +++ b/arch/x86/include/asm/shstk.h @@ -8,12 +8,19 @@ struct task_struct; #ifdef CONFIG_X86_USER_SHADOW_STACK +struct thread_shstk { + u64 base; + u64 size; +}; + long shstk_prctl(struct task_struct *task, int option, unsigned long features); void reset_thread_features(void); +void shstk_free(struct task_struct *p); #else static inline long shstk_prctl(struct task_struct *task, int option, unsigned long arg2) { return -EINVAL; } static inline void reset_thread_features(void) {} +static inline void shstk_free(struct task_struct *p) {} #endif /* CONFIG_X86_USER_SHADOW_STACK */ #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h index b2b3b7200b2d..7dfd9dc00509 100644 --- a/arch/x86/include/uapi/asm/prctl.h +++ b/arch/x86/include/uapi/asm/prctl.h @@ -26,4 +26,7 @@ #define ARCH_SHSTK_DISABLE 0x5002 #define ARCH_SHSTK_LOCK 0x5003 +/* ARCH_SHSTK_ features bits */ +#define ARCH_SHSTK_SHSTK (1ULL << 0) + #endif /* _ASM_X86_PRCTL_H */ diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c index 41ed6552e0a5..3cb85224d856 100644 --- a/arch/x86/kernel/shstk.c +++ b/arch/x86/kernel/shstk.c @@ -8,14 +8,159 @@ #include #include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include #include +static bool features_enabled(unsigned long features) +{ + return current->thread.features & features; +} + +static void features_set(unsigned long features) +{ + current->thread.features |= features; +} + +static void features_clr(unsigned long features) +{ + current->thread.features &= ~features; +} + +static unsigned long alloc_shstk(unsigned long size) +{ + int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_ABOVE4G; + struct mm_struct *mm = current->mm; + unsigned long addr, unused; + + mmap_write_lock(mm); + addr = do_mmap(NULL, addr, size, PROT_READ, flags, + VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL); + + mmap_write_unlock(mm); + + return addr; +} + +static unsigned long adjust_shstk_size(unsigned long size) +{ + if (size) + return PAGE_ALIGN(size); + + return PAGE_ALIGN(min_t(unsigned long long, rlimit(RLIMIT_STACK), SZ_4G)); +} + +static void unmap_shadow_stack(u64 base, u64 size) +{ + while (1) { + int r; + + r = vm_munmap(base, size); + + /* + * vm_munmap() returns -EINTR when mmap_lock is held by + * something else, and that lock should not be held for a + * long time. Retry it for the case. 
+ */ + if (r == -EINTR) { + cond_resched(); + continue; + } + + /* + * For all other types of vm_munmap() failure, either the + * system is out of memory or there is bug. + */ + WARN_ON_ONCE(r); + break; + } +} + +static int shstk_setup(void) +{ + struct thread_shstk *shstk = ¤t->thread.shstk; + unsigned long addr, size; + + /* Already enabled */ + if (features_enabled(ARCH_SHSTK_SHSTK)) + return 0; + + /* Also not supported for 32 bit and x32 */ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) || in_32bit_syscall()) + return -EOPNOTSUPP; + + size = adjust_shstk_size(0); + addr = alloc_shstk(size); + if (IS_ERR_VALUE(addr)) + return PTR_ERR((void *)addr); + + fpregs_lock_and_load(); + wrmsrl(MSR_IA32_PL3_SSP, addr + size); + wrmsrl(MSR_IA32_U_CET, CET_SHSTK_EN); + fpregs_unlock(); + + shstk->base = addr; + shstk->size = size; + features_set(ARCH_SHSTK_SHSTK); + + return 0; +} + void reset_thread_features(void) { + memset(¤t->thread.shstk, 0, sizeof(struct thread_shstk)); current->thread.features = 0; current->thread.features_locked = 0; } +void shstk_free(struct task_struct *tsk) +{ + struct thread_shstk *shstk = &tsk->thread.shstk; + + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) || + !features_enabled(ARCH_SHSTK_SHSTK)) + return; + + if (!tsk->mm) + return; + + unmap_shadow_stack(shstk->base, shstk->size); +} + +static int shstk_disable(void) +{ + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return -EOPNOTSUPP; + + /* Already disabled? */ + if (!features_enabled(ARCH_SHSTK_SHSTK)) + return 0; + + fpregs_lock_and_load(); + /* Disable WRSS too when disabling shadow stack */ + wrmsrl(MSR_IA32_U_CET, 0); + wrmsrl(MSR_IA32_PL3_SSP, 0); + fpregs_unlock(); + + shstk_free(current); + features_clr(ARCH_SHSTK_SHSTK); + + return 0; +} + long shstk_prctl(struct task_struct *task, int option, unsigned long features) { if (option == ARCH_SHSTK_LOCK) { From patchwork Sun Mar 19 00:15:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71657 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp514684wrt; Sat, 18 Mar 2023 17:31:50 -0700 (PDT) X-Google-Smtp-Source: AK7set+30Gzisj5Q+pEgwCqkk1t0A0fdyQ8OsrGXFqQiHJ2FwPoQDT0W+V4abX9UspekvOQ/oR3b X-Received: by 2002:a17:903:32d1:b0:19f:a694:6d3c with SMTP id i17-20020a17090332d100b0019fa6946d3cmr13242752plr.55.1679185910107; Sat, 18 Mar 2023 17:31:50 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679185910; cv=none; d=google.com; s=arc-20160816; b=XHFf66yqTn8JHehAyNI15zhsUroR6ac2NXtIymLA9fnmdzHFz9fL76kXksx1zi6Yut OAWgeK1+95SSy4tJqYzep3nClHqMgyLZGMkfJXRHh/ZzmTPj2U0GgSqYaZ/YLqvEaypu Lk0ZPQlwGOs51vAzMQIbMjVf+XtyHYqhEO7KLiIpkMLFIWLqunC9Kwo/TUAgkbk66NDF nQ4t/EpiczgJshzldgz1vUa+fJO+6RLEY823nj0xYgxjRhJ/4TCUdETzatgfQEepwNsw paTCXZK+lElZEwNnozdy1vAqveu/elqFYk8iBmazB+XPwmAf81G5BtY4A5sRDfOP34X3 FEZw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=W0ftUG5PHbqga3JJcvdLhbXeuZS1VD+kWPs82boa/+k=; b=OiCGrX/opZcfy0Pun3XdPbSyGP7VbSFZ0XgeX912kCkeq1QXRwynnnqg6SVsu2iA9G 7U4Yc5ueAfrbOIMKS8/jWVPEmaVyLFLqP6xBX0IVczzC7lyilbNoWD2XP5Vhp64/r/Ps k9uyOFpU2Eg5uG84ZT717/mrUezyGEW0KtGTcTgzJ19cdOkzhIpJY5x47t24vsz0ejio 085Q9CQ7VxSA8NdDQOFk0VuIe7APMs+jYf6TzJwCuf+T9+ws1uTTPsp2d3JhMvfPbAnx w/DfD+0em+fXzaDwQ58cCc+Xyg6EDt9bq40jSlpbOSDdXv/i1AjKr1ZsaT3ggCv6iscp PaVQ== ARC-Authentication-Results: 
i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=lSoTA2is; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id j12-20020a170902da8c00b0019ec2a63408si6807713plx.477.2023.03.18.17.31.36; Sat, 18 Mar 2023 17:31:50 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=lSoTA2is; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230201AbjCSAWL (ORCPT + 99 others); Sat, 18 Mar 2023 20:22:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37498 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230052AbjCSAV1 (ORCPT ); Sat, 18 Mar 2023 20:21:27 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E18062A6F0; Sat, 18 Mar 2023 17:19:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185173; x=1710721173; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=/W3M7babRpwbvz/5H38tZ0gNNiq3szmBJJYgOrNIixQ=; b=lSoTA2isSaCSFDPMPF/rxuzDDo15XD4WfHGk4WV//g88hkGUftuAx0Tq doHfUcBXMZO7KWL3gpORN/Ow6jXO708H+QrEZhhJhnXxhwT6uwsDZKNz0 f2cP9ZS/0+T6msqZddL1VQvvmeToRkd05BezBIatoNI2xEib/XjEV6RDY 02WJRFo2ypwYj1SDYobecrrIzYdwHgH1ze4AFDNO8hHwkgA5QyeGYeWc1 /M4XxHxPVwqj6kd1+BijYcVBsevRYeRsjNFDaHp2C42faeliRIY3tzCM3 P2JbCbRHQEgxklpa9SbMzmx8vDZYn428QGE88xnf7xFuXIR5JEXak/kNH w==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491398" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491398" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672937" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672937" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:43 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 30/40] x86/shstk: Handle thread shadow stack Date: Sat, 18 Mar 2023 17:15:25 -0700 Message-Id: <20230319001535.23210-31-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760754045034816257?= X-GMAIL-MSGID: =?utf-8?q?1760754045034816257?= When a process is duplicated, but the child shares the address space with the parent, there is potential for the threads sharing a single stack to cause conflicts for each other. In the normal non-CET case this is handled in two ways. With regular CLONE_VM a new stack is provided by userspace such that the parent and child have different stacks. For vfork, the parent is suspended until the child exits. So as long as the child doesn't return from the vfork()/CLONE_VFORK calling function and sticks to a limited set of operations, the parent and child can share the same stack. For shadow stack, these scenarios present similar sharing problems. For the CLONE_VM case, the child and the parent must have separate shadow stacks. Instead of changing clone to take a shadow stack, have the kernel just allocate one and switch to it. Use stack_size passed from clone3() syscall for thread shadow stack size. A compat-mode thread shadow stack size is further reduced to 1/4. This allows more threads to run in a 32-bit address space. The clone() does not pass stack_size, which was added to clone3(). In that case, use RLIMIT_STACK size and cap to 4 GB. For shadow stack enabled vfork(), the parent and child can share the same shadow stack, like they can share a normal stack. Since the parent is suspended until the child terminates, the child will not interfere with the parent while executing as long as it doesn't return from the vfork() and overwrite up the shadow stack. The child can safely overwrite down the shadow stack, as the parent can just overwrite this later. So CET does not add any additional limitations for vfork(). Free the shadow stack on thread exit by doing it in mm_release(). Skip this when exiting a vfork() child since the stack is shared in the parent. During this operation, the shadow stack pointer of the new thread needs to be updated to point to the newly allocated shadow stack. Since the ability to do this is confined to the FPU subsystem, change fpu_clone() to take the new shadow stack pointer, and update it internally inside the FPU subsystem. This part was suggested by Thomas Gleixner. 
Suggested-by: Thomas Gleixner Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) - Move ifdef inside update_fpu_shstk() (Boris) - Change shstk_alloc_thread_stack() return value to simplify caller (Boris) - Remove extra info about glibc's vfork() implementation from log (Szabolcs Nagy) v3: - Fix update_fpu_shstk() stub (Mike Rapoport) - Fix chunks around alloc_shstk() in wrong patch (Kees) - Fix stack_size/flags swap (Kees) - Use centralized stack size logic (Kees) v2: - Have fpu_clone() take new shadow stack pointer and update SSP in xsave buffer for new task. (tglx) v1: - Expand commit log. - Add more comments. - Switch to xsave helpers. --- arch/x86/include/asm/fpu/sched.h | 3 ++- arch/x86/include/asm/mmu_context.h | 2 ++ arch/x86/include/asm/shstk.h | 5 ++++ arch/x86/kernel/fpu/core.c | 36 +++++++++++++++++++++++++- arch/x86/kernel/process.c | 21 ++++++++++++++- arch/x86/kernel/shstk.c | 41 ++++++++++++++++++++++++++++-- 6 files changed, 103 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/fpu/sched.h b/arch/x86/include/asm/fpu/sched.h index c2d6cd78ed0c..3c2903bbb456 100644 --- a/arch/x86/include/asm/fpu/sched.h +++ b/arch/x86/include/asm/fpu/sched.h @@ -11,7 +11,8 @@ extern void save_fpregs_to_fpstate(struct fpu *fpu); extern void fpu__drop(struct fpu *fpu); -extern int fpu_clone(struct task_struct *dst, unsigned long clone_flags, bool minimal); +extern int fpu_clone(struct task_struct *dst, unsigned long clone_flags, bool minimal, + unsigned long shstk_addr); extern void fpu_flush_thread(void); /* diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h index e01aa74a6de7..9714f08d941b 100644 --- a/arch/x86/include/asm/mmu_context.h +++ b/arch/x86/include/asm/mmu_context.h @@ -147,6 +147,8 @@ do { \ #else #define deactivate_mm(tsk, mm) \ do { \ + if (!tsk->vfork_done) \ + shstk_free(tsk); \ load_gs_index(0); \ loadsegment(fs, 0); \ } while (0) diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h index 2b1f7c9b9995..d4a5c7b10cb5 100644 --- a/arch/x86/include/asm/shstk.h +++ b/arch/x86/include/asm/shstk.h @@ -15,11 +15,16 @@ struct thread_shstk { long shstk_prctl(struct task_struct *task, int option, unsigned long features); void reset_thread_features(void); +unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags, + unsigned long stack_size); void shstk_free(struct task_struct *p); #else static inline long shstk_prctl(struct task_struct *task, int option, unsigned long arg2) { return -EINVAL; } static inline void reset_thread_features(void) {} +static inline unsigned long shstk_alloc_thread_stack(struct task_struct *p, + unsigned long clone_flags, + unsigned long stack_size) { return 0; } static inline void shstk_free(struct task_struct *p) {} #endif /* CONFIG_X86_USER_SHADOW_STACK */ diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index f851558b673f..aa4856b236b8 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -552,8 +552,36 @@ static inline void fpu_inherit_perms(struct fpu *dst_fpu) } } +/* A passed ssp of zero will not cause any update */ +static int update_fpu_shstk(struct task_struct *dst, unsigned long ssp) +{ +#ifdef CONFIG_X86_USER_SHADOW_STACK + struct cet_user_state *xstate; + + /* If ssp update is not needed. 
*/ + if (!ssp) + return 0; + + xstate = get_xsave_addr(&dst->thread.fpu.fpstate->regs.xsave, + XFEATURE_CET_USER); + + /* + * If there is a non-zero ssp, then 'dst' must be configured with a shadow + * stack and the fpu state should be up to date since it was just copied + * from the parent in fpu_clone(). So there must be a valid non-init CET + * state location in the buffer. + */ + if (WARN_ON_ONCE(!xstate)) + return 1; + + xstate->user_ssp = (u64)ssp; +#endif + return 0; +} + /* Clone current's FPU state on fork */ -int fpu_clone(struct task_struct *dst, unsigned long clone_flags, bool minimal) +int fpu_clone(struct task_struct *dst, unsigned long clone_flags, bool minimal, + unsigned long ssp) { struct fpu *src_fpu = ¤t->thread.fpu; struct fpu *dst_fpu = &dst->thread.fpu; @@ -613,6 +641,12 @@ int fpu_clone(struct task_struct *dst, unsigned long clone_flags, bool minimal) if (use_xsave()) dst_fpu->fpstate->regs.xsave.header.xfeatures &= ~XFEATURE_MASK_PASID; + /* + * Update shadow stack pointer, in case it changed during clone. + */ + if (update_fpu_shstk(dst, ssp)) + return 1; + trace_x86_fpu_copy_src(src_fpu); trace_x86_fpu_copy_dst(dst_fpu); diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index b650cde3f64d..8bf13cff0141 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -48,6 +48,7 @@ #include #include #include +#include #include "process.h" @@ -119,6 +120,7 @@ void exit_thread(struct task_struct *tsk) free_vm86(t); + shstk_free(tsk); fpu__drop(fpu); } @@ -140,6 +142,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) struct inactive_task_frame *frame; struct fork_frame *fork_frame; struct pt_regs *childregs; + unsigned long new_ssp; int ret = 0; childregs = task_pt_regs(p); @@ -174,7 +177,16 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) frame->flags = X86_EFLAGS_FIXED; #endif - fpu_clone(p, clone_flags, args->fn); + /* + * Allocate a new shadow stack for thread if needed. If shadow stack, + * is disabled, new_ssp will remain 0, and fpu_clone() will know not to + * update it. + */ + new_ssp = shstk_alloc_thread_stack(p, clone_flags, args->stack_size); + if (IS_ERR_VALUE(new_ssp)) + return PTR_ERR((void *)new_ssp); + + fpu_clone(p, clone_flags, args->fn, new_ssp); /* Kernel thread ? */ if (unlikely(p->flags & PF_KTHREAD)) { @@ -220,6 +232,13 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) if (!ret && unlikely(test_tsk_thread_flag(current, TIF_IO_BITMAP))) io_bitmap_share(p); + /* + * If copy_thread() if failing, don't leak the shadow stack possibly + * allocated in shstk_alloc_thread_stack() above. 
+ */ + if (ret) + shstk_free(p); + return ret; } diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c index 3cb85224d856..bd9cdc3a7338 100644 --- a/arch/x86/kernel/shstk.c +++ b/arch/x86/kernel/shstk.c @@ -47,7 +47,7 @@ static unsigned long alloc_shstk(unsigned long size) unsigned long addr, unused; mmap_write_lock(mm); - addr = do_mmap(NULL, addr, size, PROT_READ, flags, + addr = do_mmap(NULL, 0, size, PROT_READ, flags, VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL); mmap_write_unlock(mm); @@ -126,6 +126,37 @@ void reset_thread_features(void) current->thread.features_locked = 0; } +unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags, + unsigned long stack_size) +{ + struct thread_shstk *shstk = &tsk->thread.shstk; + unsigned long addr, size; + + /* + * If shadow stack is not enabled on the new thread, skip any + * switch to a new shadow stack. + */ + if (!features_enabled(ARCH_SHSTK_SHSTK)) + return 0; + + /* + * For CLONE_VM, except vfork, the child needs a separate shadow + * stack. + */ + if ((clone_flags & (CLONE_VFORK | CLONE_VM)) != CLONE_VM) + return 0; + + size = adjust_shstk_size(stack_size); + addr = alloc_shstk(size); + if (IS_ERR_VALUE(addr)) + return addr; + + shstk->base = addr; + shstk->size = size; + + return addr + size; +} + void shstk_free(struct task_struct *tsk) { struct thread_shstk *shstk = &tsk->thread.shstk; @@ -134,7 +165,13 @@ void shstk_free(struct task_struct *tsk) !features_enabled(ARCH_SHSTK_SHSTK)) return; - if (!tsk->mm) + /* + * When fork() with CLONE_VM fails, the child (tsk) already has a + * shadow stack allocated, and exit_thread() calls this function to + * free it. In this case the parent (current) and the child share + * the same mm struct. + */ + if (!tsk->mm || tsk->mm != current->mm) return; unmap_shadow_stack(shstk->base, shstk->size); From patchwork Sun Mar 19 00:15:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71666 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp527444wrt; Sat, 18 Mar 2023 18:18:02 -0700 (PDT) X-Google-Smtp-Source: AK7set9h7lb/RWuoixw25D5X+66Sm99TsqprIbH/x+XyjqXRl5yAyu/W0aXUKQxu3X27cKG6PXCi X-Received: by 2002:a05:6a20:8b83:b0:d4:c1ba:9f4e with SMTP id m3-20020a056a208b8300b000d4c1ba9f4emr12183461pzh.35.1679188682199; Sat, 18 Mar 2023 18:18:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188682; cv=none; d=google.com; s=arc-20160816; b=XEUv7Imm2Q73jbsFjp+8Rv9+jC4cv871XGJurd+IkurSd9fOoRsUVyZkOTcBCmO2CJ 0L6+vnSYW7eI46FR0MHlXxW2bVH1cRrTvD68c8WCt/0oXY4Lmi58dB6iM6nWmZDWOe/p HyY4wC2amlCYDqvxqSTgbF2q//cUhL8hTgDovkPoPrTaM9ii+rkeDWZfneZFDob73Gbu RU0YmpoX+eWC5o1c2k+MK/OVI5Uq/udtgN/wIeGQUrQdun80i3ax2QgNoLHUhDedPCPE DyypgNx/OrzrMuwJ0xZMw5YASuWKqPrzoiIh3pP8lTnlJw9XuuRdosdNjY0SVqRl0WdP w2/g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=UTD47oSwK8HDyM7Tg5IVAtcUOtNetxtvbzv1n8npU2M=; b=IDwPo6xldzNSDysGiTHsVFJ59/skQJOmrRYdlYW4QNCZJ2lL27cdAO4tfELaEvU/f1 hJ6iPDSEboc0YtZlayKCvv9nzwgIB1mlmNRf8pimzl0Sda+mNXhTL5HLNbkyivXQKRbf LuVUz2FrJTtA/ZH7OIpvuxJu8s6967K429fr+QN+SAXRFlQNKc1tJpRC3F+/ZqS3pPfc VfE4ZRA4Hd+ssahYm8rk4nTdSPMsFVKR6yyOV3xFq/wgI76Fbu4CzWqNUygyxJyqLlIU cGdSP9OGmeX0BxJ4WZCE/323+AgPNPtGll0ByCbTGXjM/vOBqgqoT1bubDHMzfcTiJI0 Uo5g== ARC-Authentication-Results: i=1; mx.google.com; 
dkim=pass header.i=@intel.com header.s=Intel header.b=IsdWT5sx; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id o8-20020a656148000000b00502fd16cd1csi6543680pgv.367.2023.03.18.18.17.49; Sat, 18 Mar 2023 18:18:02 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=IsdWT5sx; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230341AbjCSAWX (ORCPT + 99 others); Sat, 18 Mar 2023 20:22:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37528 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230288AbjCSAVa (ORCPT ); Sat, 18 Mar 2023 20:21:30 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1CD3E2B9D1; Sat, 18 Mar 2023 17:19:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185182; x=1710721182; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=dBIkMgtPKqQv5kVHDuakiGU5eUx6LQnshjLSaxY9/+4=; b=IsdWT5sxlFApwumXL8OAmtxb2UhFWs8+9Z/Jt8IBJcAOJz0eNRdvHiw2 NJcIdxmOKE6x7TCXY4s9vt8pTdHoyAdwFP6/FM/zcxjW5ZZNEyJm58JaH SgIy0oeN9s/O7ETdq/tdcHHKn+bjetTkD1a980J317yu1HYFx1ABsGI+a FSBrzDYpQEAEtNPah2/jNnn1C/2qa7YKdWbbmlVL8MEPT8UyBl1ZLJmq3 L5jUA99zdASzzCm8yzokO0jxV2Ev8DsNO3M8gU5zPGu5sp1NBb0V7zoeW SvUqmGRPaxkYGtVdILCIiki0mzQQPHjV08yzwoaD1wJdUl++E+CbwgiUC w==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491420" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491420" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672945" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672945" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:44 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 31/40] x86/shstk: Introduce routines modifying shstk Date: Sat, 18 Mar 2023 17:15:26 -0700 Message-Id: <20230319001535.23210-32-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760756951275600769?= X-GMAIL-MSGID: =?utf-8?q?1760756951275600769?= Shadow stacks are normally written to via CALL/RET or specific CET instructions like RSTORSSP/SAVEPREVSSP. However, sometimes the kernel will need to write to the shadow stack directly using the ring-0 only WRUSS instruction. A shadow stack restore token marks a restore point of the shadow stack, and the address in a token must point directly above the token, which is within the same shadow stack. This is distinctively different from other pointers on the shadow stack, since those pointers point to executable code area. Introduce token setup and verify routines. Also introduce WRUSS, which is a kernel-mode instruction but writes directly to user shadow stack. In future patches that enable shadow stack to work with signals, the kernel will need something to denote the point in the stack where sigreturn may be called. This will prevent attackers calling sigreturn at arbitrary places in the stack, in order to help prevent SROP attacks. To do this, something that can only be written by the kernel needs to be placed on the shadow stack. This can be accomplished by setting bit 63 in the frame written to the shadow stack. Userspace return addresses can't have this bit set as it is in the kernel range. It also can't be a valid restore token. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) - Use define instead of magic BIT(63) (Boris) v5: - Fix typo in commit log v3: - Drop shstk_check_rstor_token() - Fail put_shstk_data() if bit 63 is set in the data (Kees) - Add comment in create_rstor_token() (Kees) - Pull in create_rstor_token() changes from future patch (Kees) v2: - Add data helpers for writing to shadow stack. 
--- arch/x86/include/asm/special_insns.h | 13 +++++ arch/x86/kernel/shstk.c | 75 ++++++++++++++++++++++++++++ 2 files changed, 88 insertions(+) diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h index de48d1389936..d6cd9344f6c7 100644 --- a/arch/x86/include/asm/special_insns.h +++ b/arch/x86/include/asm/special_insns.h @@ -202,6 +202,19 @@ static inline void clwb(volatile void *__p) : [pax] "a" (p)); } +#ifdef CONFIG_X86_USER_SHADOW_STACK +static inline int write_user_shstk_64(u64 __user *addr, u64 val) +{ + asm_volatile_goto("1: wrussq %[val], (%[addr])\n" + _ASM_EXTABLE(1b, %l[fail]) + :: [addr] "r" (addr), [val] "r" (val) + :: fail); + return 0; +fail: + return -EFAULT; +} +#endif /* CONFIG_X86_USER_SHADOW_STACK */ + #define nop() asm volatile ("nop") static inline void serialize(void) diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c index bd9cdc3a7338..e22928c63ffc 100644 --- a/arch/x86/kernel/shstk.c +++ b/arch/x86/kernel/shstk.c @@ -25,6 +25,8 @@ #include #include +#define SS_FRAME_SIZE 8 + static bool features_enabled(unsigned long features) { return current->thread.features & features; @@ -40,6 +42,35 @@ static void features_clr(unsigned long features) current->thread.features &= ~features; } +/* + * Create a restore token on the shadow stack. A token is always 8-byte + * and aligned to 8. + */ +static int create_rstor_token(unsigned long ssp, unsigned long *token_addr) +{ + unsigned long addr; + + /* Token must be aligned */ + if (!IS_ALIGNED(ssp, 8)) + return -EINVAL; + + addr = ssp - SS_FRAME_SIZE; + + /* + * SSP is aligned, so reserved bits and mode bit are a zero, just mark + * the token 64-bit. + */ + ssp |= BIT(0); + + if (write_user_shstk_64((u64 __user *)addr, (u64)ssp)) + return -EFAULT; + + if (token_addr) + *token_addr = addr; + + return 0; +} + static unsigned long alloc_shstk(unsigned long size) { int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_ABOVE4G; @@ -157,6 +188,50 @@ unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long cl return addr + size; } +static unsigned long get_user_shstk_addr(void) +{ + unsigned long long ssp; + + fpregs_lock_and_load(); + + rdmsrl(MSR_IA32_PL3_SSP, ssp); + + fpregs_unlock(); + + return ssp; +} + +#define SHSTK_DATA_BIT BIT(63) + +static int put_shstk_data(u64 __user *addr, u64 data) +{ + if (WARN_ON_ONCE(data & SHSTK_DATA_BIT)) + return -EINVAL; + + /* + * Mark the high bit so that the sigframe can't be processed as a + * return address. 
+ */ + if (write_user_shstk_64(addr, data | SHSTK_DATA_BIT)) + return -EFAULT; + return 0; +} + +static int get_shstk_data(unsigned long *data, unsigned long __user *addr) +{ + unsigned long ldata; + + if (unlikely(get_user(ldata, addr))) + return -EFAULT; + + if (!(ldata & SHSTK_DATA_BIT)) + return -EINVAL; + + *data = ldata & ~SHSTK_DATA_BIT; + + return 0; +} + void shstk_free(struct task_struct *tsk) { struct thread_shstk *shstk = &tsk->thread.shstk; From patchwork Sun Mar 19 00:15:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71691 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp532352wrt; Sat, 18 Mar 2023 18:35:36 -0700 (PDT) X-Google-Smtp-Source: AK7set/fYek69u1ormDYwpf6rhZ2nyOsVuOvulrBnspoJEHeeIv/HP7LoWHh9QibJW/U1NBaP6UJ X-Received: by 2002:a17:90b:314e:b0:23a:177b:5bfa with SMTP id ip14-20020a17090b314e00b0023a177b5bfamr13350411pjb.22.1679189736471; Sat, 18 Mar 2023 18:35:36 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189736; cv=none; d=google.com; s=arc-20160816; b=sXOCNmguIEY6090vMrkD51BsY7hV2iw7F6Rrd+vOv2lOxAZNC+8Pk/4q15+tyWGjS/ P1/03gRkV7a+R+1axIgMNITEihE8+ZSAXwk0bWoX+xOCg7xAlZLBx3XJm5YZUtGkMjia x07tnNxHiUwJB6m4bDu5lWW04iPh6KFw6VmFe1p1NcJXoV93oXqu9T3dk3c97mKwgxyk YhU3Qn4+sUoLjRY0p5SMmy1TBzdLIwmSAco53Q6cQN3h3+Fq0RwGuDoi+Bj3P1AnpZtz 9aryRp7zpfAAgCEKILHa0reDsb7WBOr2yOzivnVOq5suxrgEyvHA4GKVdrOmuCEekpwB zdXA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=DMqkzk73IxHG1I4Q11s/ouCJSTSEx1hUtj8KnGYTYws=; b=UkwxF4fpA5z0RFlB+ceq1kcQlEee+GvThoHeADHc3M1HA02yWi033CjIdJMSq8wtoU IVnlh8CGSeLj5oV3eXovRuOEWDNzCibyIlnTQYQp+mWQg5gqu5Cf2EsyngzxcfSC2hXo jCN5WVdkkK1KHKFk717K6cXYCNw1BEkhPc/rxir9+QbJsqXveESAGaVLrPmzT0CkzhlU Wzj2+k/jvOmySYN9Ln3kuCyUiiTlRx0XQwLzVq2IgSuzybCj6aWGCLbT0+NgcJLwXp4h zMWTQUjcwyf9DbV7Ak1RAL4O4BM6eFN5/NTVwCErXA8aZc0KaBaxTbaVWc1pksfMowgc cTzg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="SE/tLHXD"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id fs3-20020a17090af28300b0023d53736026si6386775pjb.125.2023.03.18.18.35.24; Sat, 18 Mar 2023 18:35:36 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="SE/tLHXD"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230348AbjCSAWd (ORCPT + 99 others); Sat, 18 Mar 2023 20:22:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230306AbjCSAVm (ORCPT ); Sat, 18 Mar 2023 20:21:42 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3E37129427; Sat, 18 Mar 2023 17:19:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185189; x=1710721189; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=44qPir7RLXKA27y57yzk3u7KQrjWXmALrqhQJT9eSm4=; b=SE/tLHXDMUBAet5m0M4KZsPTs+/4If9d/zyoraZEh8Uzl4hcYrbBR84t vtyCi/Z/CBm8a5JHNKMDH2KHbbm2Uxb6C2PSwrFAcAKjwupc7RVw2XHKR nJzzGsvquzYkkBQwy8HknsDB0Amb2ULhJBQPnakppvKpujgMTjXanlobZ x2z22n9FfBfC73pRITTmvyVNzp7S6qrqTfl0LWZt2msGd7OYPJjty2o5n jU/TH8bOX7lnDo1DBjooDMJADk9u5gyRWfmWSxKP6/TbCwb/ORk4aQDqO OdOcTRp+/dZeu+HtvcERuh4FoIJjMhC5QvtMMAPkqO23/bf560URpgSxd w==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491443" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491443" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:48 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672951" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672951" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:46 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 32/40] x86/shstk: Handle signals for shadow stack Date: Sat, 18 Mar 2023 17:15:27 -0700 Message-Id: <20230319001535.23210-33-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760758057252916835?= X-GMAIL-MSGID: =?utf-8?q?1760758057252916835?= When a signal is handled, the context is pushed to the stack before handling it. For shadow stacks, since the shadow stack only tracks return addresses, there isn't any state that needs to be pushed. However, there are still a few things that need to be done. These things are visible to userspace and which will be kernel ABI for shadow stacks. One is to make sure the restorer address is written to shadow stack, since the signal handler (if not changing ucontext) returns to the restorer, and the restorer calls sigreturn. So add the restorer on the shadow stack before handling the signal, so there is not a conflict when the signal handler returns to the restorer. The other thing to do is to place some type of checkable token on the thread's shadow stack before handling the signal and check it during sigreturn. This is an extra layer of protection to hamper attackers calling sigreturn manually as in SROP-like attacks. For this token the shadow stack data format defined earlier can be used. Have the data pushed be the previous SSP. In the future the sigreturn might want to return back to a different stack. Storing the SSP (instead of a restore offset or something) allows for future functionality that may want to restore to a different stack. So, when handling a signal push - the SSP pointing in the shadow stack data format - the restorer address below the restore token. In sigreturn, verify SSP is stored in the data format and pop the shadow stack. Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) - Remove duplicate alignment check (Boris) v3: - Drop shstk_setup_rstor_token() (Kees) - Drop x32 signal support, since x32 support is dropped v2: - Switch to new shstk signal format v1: - Use xsave helpers. - Expand commit log. 
--- arch/x86/include/asm/shstk.h | 5 ++ arch/x86/kernel/shstk.c | 95 ++++++++++++++++++++++++++++++++++++ arch/x86/kernel/signal.c | 1 + arch/x86/kernel/signal_64.c | 6 +++ 4 files changed, 107 insertions(+) diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h index d4a5c7b10cb5..ecb23a8ca47d 100644 --- a/arch/x86/include/asm/shstk.h +++ b/arch/x86/include/asm/shstk.h @@ -6,6 +6,7 @@ #include struct task_struct; +struct ksignal; #ifdef CONFIG_X86_USER_SHADOW_STACK struct thread_shstk { @@ -18,6 +19,8 @@ void reset_thread_features(void); unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags, unsigned long stack_size); void shstk_free(struct task_struct *p); +int setup_signal_shadow_stack(struct ksignal *ksig); +int restore_signal_shadow_stack(void); #else static inline long shstk_prctl(struct task_struct *task, int option, unsigned long arg2) { return -EINVAL; } @@ -26,6 +29,8 @@ static inline unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags, unsigned long stack_size) { return 0; } static inline void shstk_free(struct task_struct *p) {} +static inline int setup_signal_shadow_stack(struct ksignal *ksig) { return 0; } +static inline int restore_signal_shadow_stack(void) { return 0; } #endif /* CONFIG_X86_USER_SHADOW_STACK */ #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c index e22928c63ffc..f02e8ea4f1b5 100644 --- a/arch/x86/kernel/shstk.c +++ b/arch/x86/kernel/shstk.c @@ -232,6 +232,101 @@ static int get_shstk_data(unsigned long *data, unsigned long __user *addr) return 0; } +static int shstk_push_sigframe(unsigned long *ssp) +{ + unsigned long target_ssp = *ssp; + + /* Token must be aligned */ + if (!IS_ALIGNED(target_ssp, 8)) + return -EINVAL; + + *ssp -= SS_FRAME_SIZE; + if (put_shstk_data((void *__user)*ssp, target_ssp)) + return -EFAULT; + + return 0; +} + +static int shstk_pop_sigframe(unsigned long *ssp) +{ + unsigned long token_addr; + int err; + + err = get_shstk_data(&token_addr, (unsigned long __user *)*ssp); + if (unlikely(err)) + return err; + + /* Restore SSP aligned? */ + if (unlikely(!IS_ALIGNED(token_addr, 8))) + return -EINVAL; + + /* SSP in userspace? 
*/ + if (unlikely(token_addr >= TASK_SIZE_MAX)) + return -EINVAL; + + *ssp = token_addr; + + return 0; +} + +int setup_signal_shadow_stack(struct ksignal *ksig) +{ + void __user *restorer = ksig->ka.sa.sa_restorer; + unsigned long ssp; + int err; + + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) || + !features_enabled(ARCH_SHSTK_SHSTK)) + return 0; + + if (!restorer) + return -EINVAL; + + ssp = get_user_shstk_addr(); + if (unlikely(!ssp)) + return -EINVAL; + + err = shstk_push_sigframe(&ssp); + if (unlikely(err)) + return err; + + /* Push restorer address */ + ssp -= SS_FRAME_SIZE; + err = write_user_shstk_64((u64 __user *)ssp, (u64)restorer); + if (unlikely(err)) + return -EFAULT; + + fpregs_lock_and_load(); + wrmsrl(MSR_IA32_PL3_SSP, ssp); + fpregs_unlock(); + + return 0; +} + +int restore_signal_shadow_stack(void) +{ + unsigned long ssp; + int err; + + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) || + !features_enabled(ARCH_SHSTK_SHSTK)) + return 0; + + ssp = get_user_shstk_addr(); + if (unlikely(!ssp)) + return -EINVAL; + + err = shstk_pop_sigframe(&ssp); + if (unlikely(err)) + return err; + + fpregs_lock_and_load(); + wrmsrl(MSR_IA32_PL3_SSP, ssp); + fpregs_unlock(); + + return 0; +} + void shstk_free(struct task_struct *tsk) { struct thread_shstk *shstk = &tsk->thread.shstk; diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c index 004cb30b7419..356253e85ce9 100644 --- a/arch/x86/kernel/signal.c +++ b/arch/x86/kernel/signal.c @@ -40,6 +40,7 @@ #include #include #include +#include static inline int is_ia32_compat_frame(struct ksignal *ksig) { diff --git a/arch/x86/kernel/signal_64.c b/arch/x86/kernel/signal_64.c index 0e808c72bf7e..cacf2ede6217 100644 --- a/arch/x86/kernel/signal_64.c +++ b/arch/x86/kernel/signal_64.c @@ -175,6 +175,9 @@ int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs) frame = get_sigframe(ksig, regs, sizeof(struct rt_sigframe), &fp); uc_flags = frame_uc_flags(regs); + if (setup_signal_shadow_stack(ksig)) + return -EFAULT; + if (!user_access_begin(frame, sizeof(*frame))) return -EFAULT; @@ -260,6 +263,9 @@ SYSCALL_DEFINE0(rt_sigreturn) if (!restore_sigcontext(regs, &frame->uc.uc_mcontext, uc_flags)) goto badframe; + if (restore_signal_shadow_stack()) + goto badframe; + if (restore_altstack(&frame->uc.uc_stack)) goto badframe; From patchwork Sun Mar 19 00:15:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71658 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp515088wrt; Sat, 18 Mar 2023 17:33:17 -0700 (PDT) X-Google-Smtp-Source: AK7set9MajF/pPTPJXoUY8RyFINamSDdD3ZpDmVRWn4wSHwg1PE17n6nmHDOz40FO5oYIPg/CstS X-Received: by 2002:a17:902:f94c:b0:19d:5fd:11fb with SMTP id kx12-20020a170902f94c00b0019d05fd11fbmr10684824plb.23.1679185997583; Sat, 18 Mar 2023 17:33:17 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679185997; cv=none; d=google.com; s=arc-20160816; b=cLP2E0hDeW1lpJimAPFQ2Vw6x2M0L7BrcvsqCAAJK0tB9qgalyXEYK4og2DbR5E1D6 SZvboXRK4JBPltyOMf51NXvxNu612+y/6+VY6Jv63FqVuu/+Y+BnafyJUSLll3oSAtm4 a8j/M6nnFv64A6PQVEj5WYfXKrjvWz5dftYfUAbx8swRobHyZ9zjZ3TvKQVXmGya4LTw zBqPiFbZaWR2PEYx8cCEPR/up9x9pRLR2X2i/dcIP6WtCIB9D54Ra8hXsc0EuKXpcqlt z22CeGgf5XYux36BMzDPlR55hOgHYmLCsSDEEitkTITWP13LakKr5OwFKAeLEO/d3aIE UkUw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject 
:cc:to:from:dkim-signature; bh=f2N7NeFbDpnW0PteYmflUApJP8SLzh237LLNBHrJXWg=; b=gX3t7/w254CTBfgBM7VhtAYeC5+UsZeGWeM53S1lfgX0F/lHIsxPHtlxWkd9ut1rT7 /dpsnPW85dqmDU6IDToK9CMtz51FIaakvv3jTh2DOW71qB0EDUDQ7Z4WqAeYkv8Z/tcE azOo8YFY1zc84RMTJDkvRvj0rYCOKi7t/zce8zPJ+W6o+KDBPXqWAgSKXC/RNq5xHGyr bDwggHwFc/0CeZjnI9+Tg3uxpTV8lYlM/b5ak8H+azaJ7FOISoLCViR9hyDYlkas9dcU jorH4ho2PBPOyY0Vll0Lse3HpPl1aSBD465h0rKWcViUKRaqRYG+XpFO7+G5fvf7pf8D oTbQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=a9el+Dz1; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id ky3-20020a170902f98300b001a068523000si5865390plb.231.2023.03.18.17.33.04; Sat, 18 Mar 2023 17:33:17 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=a9el+Dz1; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230281AbjCSAWp (ORCPT + 99 others); Sat, 18 Mar 2023 20:22:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38346 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230107AbjCSAWG (ORCPT ); Sat, 18 Mar 2023 20:22:06 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D38BD149AB; Sat, 18 Mar 2023 17:19:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185195; x=1710721195; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=yPUIYMNlxQJ7Nq6500igNTRtdiMTwrmDSrCHmUaqViI=; b=a9el+Dz18GXrg03YD0kmbGONJu6ywy1aMGHH101/S2oKPOj5IqDhAUqC 3nrFen5HGHTWmtmxPfQVivxwibknJ5CIKM34DRHXpcFLY9khWDtHbXTY+ 77URjdvBrPwQ05Moo5uk1nTCPklWfo4VcuE8hziFzEHGcJ0iVWFR7C5K/ vAAW80zBqRI0pi8XuyJIz3/puiZKehv2aqo0U9QGRV83dfHb+oXxjkUNB QTsqjI3fKdcMTKvzTMyYmeDSfNYmN7I4BlXm1dMcjRVvRFLe1dndSsK8x M/7BzXxQDsRVYbKzBkiHqTU/j2p+ReE/JGkaAiUZMZ0IchaFxOUD+wxS4 Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491466" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491466" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672961" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672961" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:48 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . 
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 33/40] x86/shstk: Introduce map_shadow_stack syscall Date: Sat, 18 Mar 2023 17:15:28 -0700 Message-Id: <20230319001535.23210-34-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760754136688672294?= X-GMAIL-MSGID: =?utf-8?q?1760754136688672294?= When operating with shadow stacks enabled, the kernel will automatically allocate shadow stacks for new threads, however in some cases userspace will need additional shadow stacks. The main example of this is the ucontext family of functions, which require userspace allocating and pivoting to userspace managed stacks. Unlike most other user memory permissions, shadow stacks need to be provisioned with special data in order to be useful. They need to be setup with a restore token so that userspace can pivot to them via the RSTORSSP instruction. But, the security design of shadow stacks is that they should not be written to except in limited circumstances. This presents a problem for userspace, as to how userspace can provision this special data, without allowing for the shadow stack to be generally writable. Previously, a new PROT_SHADOW_STACK was attempted, which could be mprotect()ed from RW permissions after the data was provisioned. This was found to not be secure enough, as other threads could write to the shadow stack during the writable window. The kernel can use a special instruction, WRUSS, to write directly to userspace shadow stacks. So the solution can be that memory can be mapped as shadow stack permissions from the beginning (never generally writable in userspace), and the kernel itself can write the restore token. First, a new madvise() flag was explored, which could operate on the PROT_SHADOW_STACK memory. This had a couple of downsides: 1. Extra checks were needed in mprotect() to prevent writable memory from ever becoming PROT_SHADOW_STACK. 2. Extra checks/vma state were needed in the new madvise() to prevent restore tokens being written into the middle of pre-used shadow stacks. It is ideal to prevent restore tokens being added at arbitrary locations, so the check was to make sure the shadow stack had never been written to. 3. 
It stood out from the rest of the madvise flags, as more of direct action than a hint at future desired behavior. So rather than repurpose two existing syscalls (mmap, madvise) that don't quite fit, just implement a new map_shadow_stack syscall to allow userspace to map and setup new shadow stacks in one step. While ucontext is the primary motivator, userspace may have other unforeseen reasons to setup its own shadow stacks using the WRSS instruction. Towards this provide a flag so that stacks can be optionally setup securely for the common case of ucontext without enabling WRSS. Or potentially have the kernel set up the shadow stack in some new way. The following example demonstrates how to create a new shadow stack with map_shadow_stack: void *shstk = map_shadow_stack(addr, stack_size, SHADOW_STACK_SET_TOKEN); Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) - Use SZ_4G (Boris) - Return different error codes for each reason (Boris) v5: - Fix addr/mapped_addr (Kees) - Switch to EOPNOTSUPP (Kees suggested ENOTSUPP, but checkpatch suggests this) - Return error for addresses below 4G v3: - Change syscall common -> 64 (Kees) - Use bit shift notation instead of 0x1 for uapi header (Kees) - Call do_mmap() with MAP_FIXED_NOREPLACE (Kees) - Block unsupported flags (Kees) - Require size >= 8 to set token (Kees) v2: - Change syscall to take address like mmap() for CRIU's usage --- arch/x86/entry/syscalls/syscall_64.tbl | 1 + arch/x86/include/uapi/asm/mman.h | 3 ++ arch/x86/kernel/shstk.c | 59 ++++++++++++++++++++++---- include/linux/syscalls.h | 1 + include/uapi/asm-generic/unistd.h | 2 +- kernel/sys_ni.c | 1 + 6 files changed, 58 insertions(+), 9 deletions(-) diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl index c84d12608cd2..f65c671ce3b1 100644 --- a/arch/x86/entry/syscalls/syscall_64.tbl +++ b/arch/x86/entry/syscalls/syscall_64.tbl @@ -372,6 +372,7 @@ 448 common process_mrelease sys_process_mrelease 449 common futex_waitv sys_futex_waitv 450 common set_mempolicy_home_node sys_set_mempolicy_home_node +451 64 map_shadow_stack sys_map_shadow_stack # # Due to a historical design error, certain syscalls are numbered differently diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h index 5a0256e73f1e..8148bdddbd2c 100644 --- a/arch/x86/include/uapi/asm/mman.h +++ b/arch/x86/include/uapi/asm/mman.h @@ -13,6 +13,9 @@ ((key) & 0x8 ? 
VM_PKEY_BIT3 : 0)) #endif +/* Flags for map_shadow_stack(2) */ +#define SHADOW_STACK_SET_TOKEN (1ULL << 0) /* Set up a restore token in the shadow stack */ + #include #endif /* _ASM_X86_MMAN_H */ diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c index f02e8ea4f1b5..6d2531ce661c 100644 --- a/arch/x86/kernel/shstk.c +++ b/arch/x86/kernel/shstk.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -71,19 +72,31 @@ static int create_rstor_token(unsigned long ssp, unsigned long *token_addr) return 0; } -static unsigned long alloc_shstk(unsigned long size) +static unsigned long alloc_shstk(unsigned long addr, unsigned long size, + unsigned long token_offset, bool set_res_tok) { int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_ABOVE4G; struct mm_struct *mm = current->mm; - unsigned long addr, unused; + unsigned long mapped_addr, unused; - mmap_write_lock(mm); - addr = do_mmap(NULL, 0, size, PROT_READ, flags, - VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL); + if (addr) + flags |= MAP_FIXED_NOREPLACE; + mmap_write_lock(mm); + mapped_addr = do_mmap(NULL, addr, size, PROT_READ, flags, + VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL); mmap_write_unlock(mm); - return addr; + if (!set_res_tok || IS_ERR_VALUE(mapped_addr)) + goto out; + + if (create_rstor_token(mapped_addr + token_offset, NULL)) { + vm_munmap(mapped_addr, size); + return -EINVAL; + } + +out: + return mapped_addr; } static unsigned long adjust_shstk_size(unsigned long size) @@ -134,7 +147,7 @@ static int shstk_setup(void) return -EOPNOTSUPP; size = adjust_shstk_size(0); - addr = alloc_shstk(size); + addr = alloc_shstk(0, size, 0, false); if (IS_ERR_VALUE(addr)) return PTR_ERR((void *)addr); @@ -178,7 +191,7 @@ unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long cl return 0; size = adjust_shstk_size(stack_size); - addr = alloc_shstk(size); + addr = alloc_shstk(0, size, 0, false); if (IS_ERR_VALUE(addr)) return addr; @@ -368,6 +381,36 @@ static int shstk_disable(void) return 0; } +SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags) +{ + bool set_tok = flags & SHADOW_STACK_SET_TOKEN; + unsigned long aligned_size; + + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return -EOPNOTSUPP; + + if (flags & ~SHADOW_STACK_SET_TOKEN) + return -EINVAL; + + /* If there isn't space for a token */ + if (set_tok && size < 8) + return -ENOSPC; + + if (addr && addr < SZ_4G) + return -ERANGE; + + /* + * An overflow would result in attempting to write the restore token + * to the wrong location. Not catastrophic, but just return the right + * error code and block it. 
+ */ + aligned_size = PAGE_ALIGN(size); + if (aligned_size < size) + return -EOVERFLOW; + + return alloc_shstk(addr, aligned_size, size, set_tok); +} + long shstk_prctl(struct task_struct *task, int option, unsigned long features) { if (option == ARCH_SHSTK_LOCK) { diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index 33a0ee3bcb2e..392dc11e3556 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -1058,6 +1058,7 @@ asmlinkage long sys_memfd_secret(unsigned int flags); asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len, unsigned long home_node, unsigned long flags); +asmlinkage long sys_map_shadow_stack(unsigned long addr, unsigned long size, unsigned int flags); /* * Architecture-specific system calls diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h index 45fa180cc56a..b12940ec5926 100644 --- a/include/uapi/asm-generic/unistd.h +++ b/include/uapi/asm-generic/unistd.h @@ -887,7 +887,7 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv) __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node) #undef __NR_syscalls -#define __NR_syscalls 451 +#define __NR_syscalls 452 /* * 32 bit systems traditionally used different diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index 860b2dcf3ac4..cb9aebd34646 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -381,6 +381,7 @@ COND_SYSCALL(vm86old); COND_SYSCALL(modify_ldt); COND_SYSCALL(vm86); COND_SYSCALL(kexec_file_load); +COND_SYSCALL(map_shadow_stack); /* s390 */ COND_SYSCALL(s390_pci_mmio_read); From patchwork Sun Mar 19 00:15:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71660 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp526089wrt; Sat, 18 Mar 2023 18:12:54 -0700 (PDT) X-Google-Smtp-Source: AK7set9nOy71FNBtjpvGuYuSaZ9Nu9w9NK9fN3uD/GB7APBh5Gyveg5+gjNDmreE4131Sa5BHPJX X-Received: by 2002:a05:6a20:c523:b0:d9:33a3:e7c5 with SMTP id gm35-20020a056a20c52300b000d933a3e7c5mr176180pzb.35.1679188374527; Sat, 18 Mar 2023 18:12:54 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679188374; cv=none; d=google.com; s=arc-20160816; b=i4N5qjpbVSk41KGY+3z1huShGnZzB1uaJLKTThS9XUOa5FK70O5t+g3eyOyPn831tL TKEiMDWC3F+jDeP/Na8zuLbe0e3omIn3StJNwMoFDIejcH4z8o8Lx+LWyJQxrWSN5lEp 4GRnRkXd7qRkvL2tQXZRurWcxAtW94SMOxUDqCxP4eDVWF+bIBayL0yZ0cHgjzYFIdbi 4D8y99JBr804eXwyLVxpKkDqa2i8/QEchthvpuemojl95TTJzNVrdV8dq647SIu1Ah+p Ttf3Xv2LgLLTN0puie/f+OkMDImlSXawnjT+2cEXFmLhSdP7v+714VC07UTASU7kF54k s2dA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=PdS9YQ0k1vCI/xLvcPjO4/eFlX4M1vAT9LvhSm6ce6U=; b=KBm6YryP3mJk3JruBPXBsJuX5hIgfh2vk9mNA+7OTqNabzgH+ZcdadqWIoA6DkP0qj qCnC2SJ4drcjaVs+acFkMqfhz7SwEJL02OGDQJPl0LL2B6UK7+t+ScFOBQLbdMNATlBi BteOj3dfOhRALNwCng7Yg4L0eoMKcGj261uGvuSIO80XiXxsvFU3kLZdUQ1SnDepSTu3 YMm2RuJhg0mu3z/Xx0RtwowtukgCNlmlY0IXGjTE3/xD98Np4f7uEvKWUnULVGOioGM+ sAPZv/X6U0mLi/QMYmVZ6pP0zTMOaQfDFdK93i/kjiSSYUtMd7IlgHIOlyxr2ELppnpV FnLw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="f8/9aQoj"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com 
Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id y191-20020a638ac8000000b0050bd71e861asi6822393pgd.432.2023.03.18.18.12.31; Sat, 18 Mar 2023 18:12:54 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="f8/9aQoj"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230371AbjCSAXE (ORCPT + 99 others); Sat, 18 Mar 2023 20:23:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47656 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230259AbjCSAW3 (ORCPT ); Sat, 18 Mar 2023 20:22:29 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9528CC9; Sat, 18 Mar 2023 17:20:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185210; x=1710721210; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=hhZNEhMG5Ur6pBP04hNf9KVjJtSiNiJCBm2RUWbtaSg=; b=f8/9aQoja0GcL6hEYim3TTpKEgvc0vJqpEXMwC2UzIakJBG+sqHwWHdj e34g3zAqHrSFr4AJESgspOusfeoG8xhYjoE2tpNSinjeiCrQLDtpbj/rM 9LfQPhpAJKr6Ee+ubz3KTxWkuj3nnyXPJ8g3pu/l+eryZaoqlm9fIl6E+ dh5DikuwehoW72dWYfoaQuzja4WZ9jwqQAV8+HtBII9yffS6uwFbcvPhz 8Fzs51rbc7jARH8vQfW2UlfwhzIP6cE346qOyrnnolpKJN6hC09kk6dX0 UromntTIFfSqva3Ub0aRyNxtSoPlRhashAozycD9viaohJbqvuTozmkWG g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491490" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491490" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672971" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672971" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:50 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 34/40] x86/shstk: Support WRSS for userspace Date: Sat, 18 Mar 2023 17:15:29 -0700 Message-Id: <20230319001535.23210-35-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760756628783136528?= X-GMAIL-MSGID: =?utf-8?q?1760756628783136528?= For the current shadow stack implementation, shadow stacks contents can't easily be provisioned with arbitrary data. This property helps apps protect themselves better, but also restricts any potential apps that may want to do exotic things at the expense of a little security. The x86 shadow stack feature introduces a new instruction, WRSS, which can be enabled to write directly to shadow stack memory from userspace. Allow it to get enabled via the prctl interface. Only enable the userspace WRSS instruction, which allows writes to userspace shadow stacks from userspace. Do not allow it to be enabled independently of shadow stack, as HW does not support using WRSS when shadow stack is disabled. From a fault handler perspective, WRSS will behave very similar to WRUSS, which is treated like a user access from a #PF err code perspective. Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) - Drop set_clr_bits_msrl() (Boris) - Fix comments wrss->WRSS (Boris) v6: - Make set_clr_bits_msrl() avoid side effects in 'msr' v5: - Switch to EOPNOTSUPP - Move set_clr_bits_msrl() to patch where it is first used - Commit log formatting v3: - Make wrss_control() static - Fix verbiage in commit log (Kees) --- arch/x86/include/uapi/asm/prctl.h | 1 + arch/x86/kernel/shstk.c | 43 ++++++++++++++++++++++++++++++- 2 files changed, 43 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h index 7dfd9dc00509..e31495668056 100644 --- a/arch/x86/include/uapi/asm/prctl.h +++ b/arch/x86/include/uapi/asm/prctl.h @@ -28,5 +28,6 @@ /* ARCH_SHSTK_ features bits */ #define ARCH_SHSTK_SHSTK (1ULL << 0) +#define ARCH_SHSTK_WRSS (1ULL << 1) #endif /* _ASM_X86_PRCTL_H */ diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c index 6d2531ce661c..01b45666f1b6 100644 --- a/arch/x86/kernel/shstk.c +++ b/arch/x86/kernel/shstk.c @@ -360,6 +360,47 @@ void shstk_free(struct task_struct *tsk) unmap_shadow_stack(shstk->base, shstk->size); } +static int wrss_control(bool enable) +{ + u64 msrval; + + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return -EOPNOTSUPP; + + /* + * Only enable WRSS if shadow stack is enabled. 
If shadow stack is not + * enabled, WRSS will already be disabled, so don't bother clearing it + * when disabling. + */ + if (!features_enabled(ARCH_SHSTK_SHSTK)) + return -EPERM; + + /* Already enabled/disabled? */ + if (features_enabled(ARCH_SHSTK_WRSS) == enable) + return 0; + + fpregs_lock_and_load(); + rdmsrl(MSR_IA32_U_CET, msrval); + + if (enable) { + features_set(ARCH_SHSTK_WRSS); + msrval |= CET_WRSS_EN; + } else { + features_clr(ARCH_SHSTK_WRSS); + if (!(msrval & CET_WRSS_EN)) + goto unlock; + + msrval &= ~CET_WRSS_EN; + } + + wrmsrl(MSR_IA32_U_CET, msrval); + +unlock: + fpregs_unlock(); + + return 0; +} + static int shstk_disable(void) { if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) @@ -376,7 +417,7 @@ static int shstk_disable(void) fpregs_unlock(); shstk_free(current); - features_clr(ARCH_SHSTK_SHSTK); + features_clr(ARCH_SHSTK_SHSTK | ARCH_SHSTK_WRSS); return 0; } From patchwork Sun Mar 19 00:15:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71682 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp531260wrt; Sat, 18 Mar 2023 18:31:52 -0700 (PDT) X-Google-Smtp-Source: AK7set/CNLawLexUSuQhzxfeO4ef2SdqnZbcuqYT2+x0+Ki03/sFDINoK/2mpLQW5xUSM2h32/HL X-Received: by 2002:a17:90b:4c85:b0:237:b64c:6bb3 with SMTP id my5-20020a17090b4c8500b00237b64c6bb3mr14223072pjb.11.1679189511847; Sat, 18 Mar 2023 18:31:51 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189511; cv=none; d=google.com; s=arc-20160816; b=JHDprzM/NdWnkjn+FCYt2CGvNeAdJosZv2WvyAUfYVi7OKoUGgvI8a0dKnv2qKobFS lNtiQlS0is5arkl2FdIfvshTPkV9K0bOInf8NATU4yJPsGls5qEK/zMCWLaM0q+yQglm 22IUWDMJbRmnT3Amuj51d0Y/bxtW4AHvdkU6yJY82Iz1tUn2ToR0dC0wCbvARqPcS40Z yEUE9RIwt1ysJJ0egE2ZFobM7DeOYj06XFfSUh+ldZicAcWDufoQqJGyi4P2GBPJoOS8 rL3VV8wo0yC1EOGf0fxi0mzHUHneJDn/veIL2fBmEXd3VROTwx3361YrDDI4CLrn5LN6 N6yA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=NjfnKDB6b0HKeEX/PKHPZraQnlxUrPOXKdybnz7FfaU=; b=ZSMgaMY5rA6KfXD6jUlYoppHHWVKYgLNWqVbvei6ips9naIH8AXwKvzTjLRXPu6sTz rzBbpVrFCKEYbx/0niZhSIatRWn/+H+y6MQ79MaP6WlXbsmQn1IV3K1FB3HcmaZzJfBb JqC4KD6bIOJrXgrS9irqYwSRms+avbWyvi4ePSvVvNghZhR5iVjBX1CBiXucRHxmTHcm 1+ser4NvOQ43cMHeVHOmxvlcMzLxmqymg3cvnWSx9A73c9bjgLNqQrM8tJ00lyyvJPoi pq4hQ6fAuJDclEWqDQ3sVMemd1Da+Y93i6U1dMX5nHcyOqzlT37GBgYYjuio82iKt3m0 /+SQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=W0+umJOk; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
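To make the enablement ordering above concrete, a small hedged sketch follows (not part of the series; it assumes a kernel with the whole series applied, since the prctl dispatch is only wired up in a later patch, and it reuses the ARCH_SHSTK_* constants from prctl.h). Requesting WRSS while shadow stack is off is rejected, per the -EPERM return in wrss_control() above; the process never enables shadow stack here, so plain syscall(2) is safe to use.

/* Hypothetical sketch: WRSS can only be enabled on top of shadow stack. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ARCH_SHSTK_ENABLE	0x5001
#define ARCH_SHSTK_WRSS		(1ULL << 1)

int main(void)
{
	/*
	 * Shadow stack is not enabled for this process, so asking for WRSS
	 * alone is expected to fail (wrss_control(true) returns -EPERM).
	 */
	if (syscall(SYS_arch_prctl, ARCH_SHSTK_ENABLE, ARCH_SHSTK_WRSS) < 0)
		printf("WRSS without shadow stack rejected: %s\n", strerror(errno));

	/*
	 * Note: actually enabling shadow stack cannot go through a helper
	 * like syscall(), because the enabling call must not return via a
	 * normal function return; see the inline-asm ARCH_PRCTL() macro in
	 * the selftest later in this series.
	 */
	return 0;
}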
[2620:137:e000::1:20]) by mx.google.com with ESMTP id fs3-20020a17090af28300b0023d53736026si6386775pjb.125.2023.03.18.18.31.39; Sat, 18 Mar 2023 18:31:51 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=W0+umJOk; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230380AbjCSAXJ (ORCPT + 99 others); Sat, 18 Mar 2023 20:23:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45840 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230305AbjCSAWc (ORCPT ); Sat, 18 Mar 2023 20:22:32 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7FFB129E1F; Sat, 18 Mar 2023 17:20:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185217; x=1710721217; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=SZKEydO4/dqCRXo76Ax/xNMIP87QKuRYX84LlmCxycM=; b=W0+umJOkovW1kwBVkA3izhrfi/atcAsG43NBfyO4iSuWoYYa4D1TPV+y 0dchiP3KysTylwTDfqcq934Rcz6o3kwkCb0C4jr1pghW/KGMBmlh7OLvZ XGQuY1KE7Uvn+ZXO2YMQ9Ff8cRMX4WpxacOjxqaLNEItOxRYsTVr1kQo8 TTc9vEJbTa/3qtwFC4CPqVzjJIfyxdacwfxjKY6awCuZpmlJk5EQ5ituk V8I5JShJIO8n4S1PJzaFJSTBzuvZpBk0JsY3GkvG8LAcbzov76WsUbDpQ HcnZffIcX4xh+gJOVW1RQy8LvolPAfUpc5d7hToQ+iMUGwTzd7gUMcq4r g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491512" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491512" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:53 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672980" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672980" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:51 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 35/40] x86: Expose thread features in /proc/$PID/status Date: Sat, 18 Mar 2023 17:15:30 -0700 Message-Id: <20230319001535.23210-36-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757821872496287?= X-GMAIL-MSGID: =?utf-8?q?1760757821872496287?= Applications and loaders can have logic to decide whether to enable shadow stack. They usually don't report whether shadow stack has been enabled or not, so there is no way to verify whether an application actually is protected by shadow stack. Add two lines in /proc/$PID/status to report enabled and locked features. Since, this involves referring to arch specific defines in asm/prctl.h, implement an arch breakout to emit the feature lines. [Switched to CET, added to commit log] Co-developed-by: Kirill A. Shutemov Signed-off-by: Kirill A. Shutemov Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v4: - Remove "CET" references v3: - Move to /proc/pid/status (Kees) v2: - New patch --- arch/x86/kernel/cpu/proc.c | 23 +++++++++++++++++++++++ fs/proc/array.c | 6 ++++++ include/linux/proc_fs.h | 2 ++ 3 files changed, 31 insertions(+) diff --git a/arch/x86/kernel/cpu/proc.c b/arch/x86/kernel/cpu/proc.c index 099b6f0d96bd..31c0e68f6227 100644 --- a/arch/x86/kernel/cpu/proc.c +++ b/arch/x86/kernel/cpu/proc.c @@ -4,6 +4,8 @@ #include #include #include +#include +#include #include "cpu.h" @@ -175,3 +177,24 @@ const struct seq_operations cpuinfo_op = { .stop = c_stop, .show = show_cpuinfo, }; + +#ifdef CONFIG_X86_USER_SHADOW_STACK +static void dump_x86_features(struct seq_file *m, unsigned long features) +{ + if (features & ARCH_SHSTK_SHSTK) + seq_puts(m, "shstk "); + if (features & ARCH_SHSTK_WRSS) + seq_puts(m, "wrss "); +} + +void arch_proc_pid_thread_features(struct seq_file *m, struct task_struct *task) +{ + seq_puts(m, "x86_Thread_features:\t"); + dump_x86_features(m, task->thread.features); + seq_putc(m, '\n'); + + seq_puts(m, "x86_Thread_features_locked:\t"); + dump_x86_features(m, task->thread.features_locked); + seq_putc(m, '\n'); +} +#endif /* CONFIG_X86_USER_SHADOW_STACK */ diff --git a/fs/proc/array.c b/fs/proc/array.c index 9b0315d34c58..3e1a33dcd0d0 100644 --- a/fs/proc/array.c +++ b/fs/proc/array.c @@ -423,6 +423,11 @@ static inline void task_thp_status(struct seq_file *m, struct mm_struct *mm) seq_printf(m, "THP_enabled:\t%d\n", thp_enabled); } +__weak void arch_proc_pid_thread_features(struct seq_file *m, + struct task_struct *task) +{ +} + int proc_pid_status(struct seq_file *m, struct pid_namespace *ns, struct pid *pid, struct task_struct 
*task) { @@ -446,6 +451,7 @@ int proc_pid_status(struct seq_file *m, struct pid_namespace *ns, task_cpus_allowed(m, task); cpuset_task_status_allowed(m, task); task_context_switch_counts(m, task); + arch_proc_pid_thread_features(m, task); return 0; } diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h index 0260f5ea98fe..80ff8e533cbd 100644 --- a/include/linux/proc_fs.h +++ b/include/linux/proc_fs.h @@ -158,6 +158,8 @@ int proc_pid_arch_status(struct seq_file *m, struct pid_namespace *ns, struct pid *pid, struct task_struct *task); #endif /* CONFIG_PROC_PID_ARCH_STATUS */ +void arch_proc_pid_thread_features(struct seq_file *m, struct task_struct *task); + #else /* CONFIG_PROC_FS */ static inline void proc_root_init(void) From patchwork Sun Mar 19 00:15:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71694 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp532559wrt; Sat, 18 Mar 2023 18:36:17 -0700 (PDT) X-Google-Smtp-Source: AK7set/ba50oDyAvxU+T9OXv8bzqfalYR5d5VD5ljNvC1rw+wOBQjuwGvgumRgdYxVF9vIdkGk8C X-Received: by 2002:a05:6a20:6926:b0:c7:651c:1bae with SMTP id q38-20020a056a20692600b000c7651c1baemr13756897pzj.32.1679189777471; Sat, 18 Mar 2023 18:36:17 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189777; cv=none; d=google.com; s=arc-20160816; b=msnbB+w2C3B/DF5UB7zPY3augfKAUt8R/SgIy5CTS0EXcweXtd1lDZV5Kn2QSsSTGR +MllTNMfeXdKbKI0LlxW+QUbyl3DTJrQVwxyAbnE1NZKeYPpWEkKC29HlwXdxWwb4bk0 3EXthSvRk9NKRdivRUqCEb3mapystiaD69hSxr5rlZJduYhe1tRVZjt/ADKDQsAXCak4 Ga1yhSQlC17ah6KExFyHBrF304r440wCiKfGFWWPC4iFWmrht7gzvrX62Sb3sGN217tw YdWtz9b/HEkkjgQQnJ3WPTwehbexRTKHvfMJJWcFFkdgd0O6+chOFnZ5Nqt3Kpo0qTx7 yrlA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=gQxmIVCDyiXNijRz9GDbHj/9hmkObe8BeLOXxhdnjvY=; b=rIZArEJU6cs3lWY+lV0zzPEOLAwjpZGUkrYKaLlPH5+zk0A9VWUtooo++Vjjv3Ddy0 WIvvij49s8cwVKegqAmAWaa7EGxJXq8AluGIdHPh28X7vbn/ewpA5cKnk9T+Px1LcFm5 ZO0/SBZT3CvsB5i0aa1Z4nSVNwJhM5TzIT4uVWdYsLTtxdw/g6LhOYmhiWyAECwC+jFa x6t4I385KTJRaDNtR+XkpjEYfur4QX32GKu+w4PylW8woQNHL2xTEMDPFWVekKTkorjS IA9TmUraJRWhuy7bgkT9TZ33sCsCL23wH0mZoOn45Fg9TJsXFqKM+cWEQS0Csx/ABUp0 j0MQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=ZcHZ+Plb; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
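A hedged sketch of consuming the new lines follows (not part of the patch). The field names match the seq_puts() strings above, and the values are space-separated feature names ("shstk", "wrss") for the enabled and locked masks respectively, so a thread running with shadow stack enforced and locked shows "shstk" on both lines.

/* Hypothetical reader for the new /proc/$PID/status lines. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f)
		return 1;

	while (fgets(line, sizeof(line), f)) {
		/* Matches both x86_Thread_features and x86_Thread_features_locked */
		if (!strncmp(line, "x86_Thread_features", 19))
			fputs(line, stdout);
	}

	fclose(f);
	return 0;
}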
[2620:137:e000::1:20]) by mx.google.com with ESMTP id o3-20020a634103000000b004fbc54ad3b6si6550344pga.470.2023.03.18.18.36.04; Sat, 18 Mar 2023 18:36:17 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=ZcHZ+Plb; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230016AbjCSAX2 (ORCPT + 99 others); Sat, 18 Mar 2023 20:23:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46114 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230363AbjCSAWp (ORCPT ); Sat, 18 Mar 2023 20:22:45 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CA6B22A151; Sat, 18 Mar 2023 17:20:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185226; x=1710721226; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=XWhd33s80bJV+B2OfEZjFLW5DvuhEuOlL0wc6R5POX8=; b=ZcHZ+Plb6w4zATX7xLbuJTEy7zEqYic7LTMTMp0GDKAxESu6hMkFr8Uv RqxWssiHFBuRRv7MWEkfjEj/+9v3e5S459EDJieHPJ6rpGJi5qJ3Fr/tY 8MpeFbsGkYbCyHXi8JafURPKwmXKBEXeVL0yjDBYE7Yk4F9qWWSrBPvNK ttafLOyQ92KM7EQitCy43F1NI57QhEzWIJg4dkMtxw7tZeuEGsI7Sl4Qt b5EWaiWKEolmcB+IrvUi4Io6s/I76/S9Gos96oVkbDThhUzki6nqW/KOO GnOhoji4vpEbuLggASXy7s98d08kH/h+Em2khPam/FOX/er+kmvQvl0bz A==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491534" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491534" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672989" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672989" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:53 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 36/40] x86/shstk: Wire in shadow stack interface Date: Sat, 18 Mar 2023 17:15:31 -0700 Message-Id: <20230319001535.23210-37-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760758100278005430?= X-GMAIL-MSGID: =?utf-8?q?1760758100278005430?= The kernel now has the main shadow stack functionality to support applications. Wire in the WRSS and shadow stack enable/disable functions into the existing shadow stack API skeleton. Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v4: - Remove "CET" references v2: - Split from other patches --- arch/x86/kernel/shstk.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c index 01b45666f1b6..ee89c4206ac9 100644 --- a/arch/x86/kernel/shstk.c +++ b/arch/x86/kernel/shstk.c @@ -472,9 +472,17 @@ long shstk_prctl(struct task_struct *task, int option, unsigned long features) return -EINVAL; if (option == ARCH_SHSTK_DISABLE) { + if (features & ARCH_SHSTK_WRSS) + return wrss_control(false); + if (features & ARCH_SHSTK_SHSTK) + return shstk_disable(); return -EINVAL; } /* Handle ARCH_SHSTK_ENABLE */ + if (features & ARCH_SHSTK_SHSTK) + return shstk_setup(); + if (features & ARCH_SHSTK_WRSS) + return wrss_control(true); return -EINVAL; } From patchwork Sun Mar 19 00:15:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71675 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp528931wrt; Sat, 18 Mar 2023 18:23:56 -0700 (PDT) X-Google-Smtp-Source: AK7set+mkxCwHa5Br8aDr1TbQkPNc2T9bkdzQXe0lzWUoycf7UJ0bmlOrsAQAFnfbYqZ1vZmwWLo X-Received: by 2002:a17:90a:190f:b0:237:50b6:9838 with SMTP id 15-20020a17090a190f00b0023750b69838mr13282056pjg.45.1679189035966; Sat, 18 Mar 2023 18:23:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189035; cv=none; d=google.com; s=arc-20160816; b=cxY5FyIlDDwsCdvrQS7qqTFUnLHm0c2MQX6Gzbnkaip2Qg8JooehY7BEese48QUpE5 8a4gex7kxlHmIZoPAiRdtYCsdrPmFb0qY6iMP53xzPzGn+h6bWbcD861PX8Uh82qh1Fz ktNobMPx+s5AXbmIvgTlPsevrt84+Bz+0BHN1Co0/JAXjp/I2zQQj19da+9JjuJa0gLF bBTh89gCs+DTvqK5cYBpJodNfu9nnUshJCjaOiREQvxpHaVw0diBLfSZO7cz6zG8W8ay BgiucEkIg9qLED97zUkN3ka+eBmDJZVdwGRxAHLTPo7fAzV9po7ZU3aPL00jQarY8elY JPJw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=JpMzkk+jy0rJ6f1elZMSo7lNRiY2NQ3lwqoaaYq6es4=; 
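Since this patch is what makes the prctl interface usable end to end, a rough usage sketch may help (not part of the series). Each ARCH_SHSTK_ENABLE/DISABLE call acts on one feature bit, shadow stack has to come before WRSS, and disabling shadow stack also clears WRSS (see shstk_disable() in the WRSS patch above). The ARCH_PRCTL() macro is copied from the selftest added in the next patch, because the call that flips shadow stack on or off must not return through a regular function such as syscall().

/* Hypothetical enable/disable sequence for the wired-up interface; x86-64. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>

#define ARCH_SHSTK_ENABLE	0x5001
#define ARCH_SHSTK_DISABLE	0x5002
#define ARCH_SHSTK_SHSTK	(1ULL << 0)
#define ARCH_SHSTK_WRSS		(1ULL << 1)

/* Raw arch_prctl(), borrowed from the selftest below (based on nolibc.h) */
#define ARCH_PRCTL(arg1, arg2)					\
({								\
	long _ret;						\
	register long _num  asm("eax") = __NR_arch_prctl;	\
	register long _arg1 asm("rdi") = (long)(arg1);		\
	register long _arg2 asm("rsi") = (long)(arg2);		\
								\
	asm volatile (						\
		"syscall\n"					\
		: "=a"(_ret)					\
		: "r"(_arg1), "r"(_arg2), "0"(_num)		\
		: "rcx", "r11", "memory", "cc"			\
	);							\
	_ret;							\
})

int main(void)
{
	int ret = 0;

	/* One feature per call: shadow stack first... */
	if (ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK))
		return 1;	/* nothing was enabled */

	/* ...then WRSS on top of it */
	if (ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_WRSS))
		ret = 1;
	else
		printf("shadow stack + WRSS enabled\n");

	/*
	 * main() was entered before shadow stack was switched on, so turn it
	 * off again before returning; this also clears WRSS.
	 */
	if (ARCH_PRCTL(ARCH_SHSTK_DISABLE, ARCH_SHSTK_SHSTK))
		ret = 1;

	return ret;
}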
b=oUmtE2ep0lK31/maDDshiJuuYU3Jgj7minwnl+/dl92+CCyzggvYCg6v8sXHELjdYf tKKcEwA8xFyPZrcuQuJbjk9LFoDU9Imv9+IQwALFAKnq+FyjK6fxrl0/dKALWztQhgFU WfFBgiH5458z4p8geuBJs2RTvprKYUGr3DaL5HYr+3VhZROu4Umo3WDgnbbuPRa5IDdM 5XN5O8/tTnApMmAYz+m1MZ26jvH2wc7Z7mnWRrqh8aIg4t8H9jDbGggatp1wh1PHp4hX 17g1tGr8BPfbAu5eS+D3VjyDOmi5DtbwDSElOOscf1mVGmcTaEYK9TAa5GbjVqwpLQv4 oQ2w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=S8zcm2KO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id o8-20020a656148000000b00502fd16cd1csi6543680pgv.367.2023.03.18.18.23.43; Sat, 18 Mar 2023 18:23:55 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=S8zcm2KO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229898AbjCSAXj (ORCPT + 99 others); Sat, 18 Mar 2023 20:23:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48334 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230366AbjCSAW4 (ORCPT ); Sat, 18 Mar 2023 20:22:56 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1EDFE2C647; Sat, 18 Mar 2023 17:20:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185235; x=1710721235; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=vX8wXG9P6QSNMOt0GN0vBlDeSdgKUz2zkT7fbKn25E0=; b=S8zcm2KOSoiJUBo4TAH7rvf56AHvJDL6VSZvIhdeAeHeb4nDCie1bVtx oh946WWbtFdgJIPEJ/KXUBt/J0QQKMg18VC9PChHYjHLSeySL+GFmgvz4 zXoFhodAZ5B3XUAMcGo2tq7LZm39kO/Y20i4EX5WdxkNSJ2c/dTOo5qHw iwbDjNcwe0seN0okqAjuWwC2RR83kQYObWlGfcET+KE7QW5myZf6aUtNb kti9J556315aWJnU+7Jko854LsTzR3AL6twQWCh6OoKU1A18RoEjYznqW itZ/TqVGakeTDwqQ/cDN1xe34tpoVzyCczMB+t6gFX9h/UW5mNCotIRuz A==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491557" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491557" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749672998" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749672998" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:54 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . 
Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 37/40] selftests/x86: Add shadow stack test Date: Sat, 18 Mar 2023 17:15:32 -0700 Message-Id: <20230319001535.23210-38-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757322245061552?= X-GMAIL-MSGID: =?utf-8?q?1760757322245061552?= Add a simple selftest for exercising some shadow stack behavior: - map_shadow_stack syscall and pivot - Faulting in shadow stack memory - Handling shadow stack violations - GUP of shadow stack memory - mprotect() of shadow stack memory - Userfaultfd on shadow stack memory - 32 bit segmentation Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v7: - Remove KHDR_INCLUDES and just add a copy of the defines (Boris) v6: - Tweak mprotect test - Code style tweaks v5: - Update 32 bit signal test with new ABI and better asm v4: - Add test for 32 bit signal ABI blocking --- tools/testing/selftests/x86/Makefile | 2 +- .../testing/selftests/x86/test_shadow_stack.c | 695 ++++++++++++++++++ 2 files changed, 696 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/x86/test_shadow_stack.c diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile index ca9374b56ead..cfc8a26ad151 100644 --- a/tools/testing/selftests/x86/Makefile +++ b/tools/testing/selftests/x86/Makefile @@ -18,7 +18,7 @@ TARGETS_C_32BIT_ONLY := entry_from_vm86 test_syscall_vdso unwind_vdso \ test_FCMOV test_FCOMI test_FISTTP \ vdso_restorer TARGETS_C_64BIT_ONLY := fsgsbase sysret_rip syscall_numbering \ - corrupt_xstate_header amx + corrupt_xstate_header amx test_shadow_stack # Some selftests require 32bit support enabled also on 64bit systems TARGETS_C_32BIT_NEEDED := ldt_gdt ptrace_syscall diff --git a/tools/testing/selftests/x86/test_shadow_stack.c b/tools/testing/selftests/x86/test_shadow_stack.c new file mode 100644 index 000000000000..94eb223456f6 --- /dev/null +++ b/tools/testing/selftests/x86/test_shadow_stack.c @@ -0,0 +1,695 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * This program tests basic kernel shadow stack support. It enables shadow + * stack manually via arch_prctl(), instead of relying on glibc. Its + * Makefile doesn't compile with shadow stack support, so it doesn't rely on + * any particular glibc.
As a result it can't do any operations that require + * special glibc shadow stack support (longjmp(), swapcontext(), etc). Just + * stick to the basics and hope the compiler doesn't do anything strange. + */ + +#define _GNU_SOURCE + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * Define the ABI defines if needed, so people can run the tests + * without building the headers. + */ +#ifndef __NR_map_shadow_stack +#define __NR_map_shadow_stack 451 + +#define SHADOW_STACK_SET_TOKEN (1ULL << 0) + +#define ARCH_SHSTK_ENABLE 0x5001 +#define ARCH_SHSTK_DISABLE 0x5002 +#define ARCH_SHSTK_LOCK 0x5003 +#define ARCH_SHSTK_UNLOCK 0x5004 +#define ARCH_SHSTK_STATUS 0x5005 + +#define ARCH_SHSTK_SHSTK (1ULL << 0) +#define ARCH_SHSTK_WRSS (1ULL << 1) +#endif + +#define SS_SIZE 0x200000 + +#if (__GNUC__ < 8) || (__GNUC__ == 8 && __GNUC_MINOR__ < 5) +int main(int argc, char *argv[]) +{ + printf("[SKIP]\tCompiler does not support CET.\n"); + return 0; +} +#else +void write_shstk(unsigned long *addr, unsigned long val) +{ + asm volatile("wrssq %[val], (%[addr])\n" + : "=m" (addr) + : [addr] "r" (addr), [val] "r" (val)); +} + +static inline unsigned long __attribute__((always_inline)) get_ssp(void) +{ + unsigned long ret = 0; + + asm volatile("xor %0, %0; rdsspq %0" : "=r" (ret)); + return ret; +} + +/* + * For use in inline enablement of shadow stack. + * + * The program can't return from the point where shadow stack gets enabled + * because there will be no address on the shadow stack. So it can't use + * syscall() for enablement, since it is a function. + * + * Based on code from nolibc.h. Keep a copy here because this can't pull in all + * of nolibc.h. 
+ */ +#define ARCH_PRCTL(arg1, arg2) \ +({ \ + long _ret; \ + register long _num asm("eax") = __NR_arch_prctl; \ + register long _arg1 asm("rdi") = (long)(arg1); \ + register long _arg2 asm("rsi") = (long)(arg2); \ + \ + asm volatile ( \ + "syscall\n" \ + : "=a"(_ret) \ + : "r"(_arg1), "r"(_arg2), \ + "0"(_num) \ + : "rcx", "r11", "memory", "cc" \ + ); \ + _ret; \ +}) + +void *create_shstk(void *addr) +{ + return (void *)syscall(__NR_map_shadow_stack, addr, SS_SIZE, SHADOW_STACK_SET_TOKEN); +} + +void *create_normal_mem(void *addr) +{ + return mmap(addr, SS_SIZE, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, 0, 0); +} + +void free_shstk(void *shstk) +{ + munmap(shstk, SS_SIZE); +} + +int reset_shstk(void *shstk) +{ + return madvise(shstk, SS_SIZE, MADV_DONTNEED); +} + +void try_shstk(unsigned long new_ssp) +{ + unsigned long ssp; + + printf("[INFO]\tnew_ssp = %lx, *new_ssp = %lx\n", + new_ssp, *((unsigned long *)new_ssp)); + + ssp = get_ssp(); + printf("[INFO]\tchanging ssp from %lx to %lx\n", ssp, new_ssp); + + asm volatile("rstorssp (%0)\n":: "r" (new_ssp)); + asm volatile("saveprevssp"); + printf("[INFO]\tssp is now %lx\n", get_ssp()); + + /* Switch back to original shadow stack */ + ssp -= 8; + asm volatile("rstorssp (%0)\n":: "r" (ssp)); + asm volatile("saveprevssp"); +} + +int test_shstk_pivot(void) +{ + void *shstk = create_shstk(0); + + if (shstk == MAP_FAILED) { + printf("[FAIL]\tError creating shadow stack: %d\n", errno); + return 1; + } + try_shstk((unsigned long)shstk + SS_SIZE - 8); + free_shstk(shstk); + + printf("[OK]\tShadow stack pivot\n"); + return 0; +} + +int test_shstk_faults(void) +{ + unsigned long *shstk = create_shstk(0); + + /* Read shadow stack, test if it's zero to not get read optimized out */ + if (*shstk != 0) + goto err; + + /* Wrss memory that was already read. */ + write_shstk(shstk, 1); + if (*shstk != 1) + goto err; + + /* Page out memory, so we can wrss it again. */ + if (reset_shstk((void *)shstk)) + goto err; + + write_shstk(shstk, 1); + if (*shstk != 1) + goto err; + + printf("[OK]\tShadow stack faults\n"); + return 0; + +err: + return 1; +} + +unsigned long saved_ssp; +unsigned long saved_ssp_val; +volatile bool segv_triggered; + +void __attribute__((noinline)) violate_ss(void) +{ + saved_ssp = get_ssp(); + saved_ssp_val = *(unsigned long *)saved_ssp; + + /* Corrupt shadow stack */ + printf("[INFO]\tCorrupting shadow stack\n"); + write_shstk((void *)saved_ssp, 0); +} + +void segv_handler(int signum, siginfo_t *si, void *uc) +{ + printf("[INFO]\tGenerated shadow stack violation successfully\n"); + + segv_triggered = true; + + /* Fix shadow stack */ + write_shstk((void *)saved_ssp, saved_ssp_val); +} + +int test_shstk_violation(void) +{ + struct sigaction sa; + + sa.sa_sigaction = segv_handler; + if (sigaction(SIGSEGV, &sa, NULL)) + return 1; + sa.sa_flags = SA_SIGINFO; + + segv_triggered = false; + + /* Make sure segv_triggered is set before violate_ss() */ + asm volatile("" : : : "memory"); + + violate_ss(); + + signal(SIGSEGV, SIG_DFL); + + printf("[OK]\tShadow stack violation test\n"); + + return !segv_triggered; +} + +/* Gup test state */ +#define MAGIC_VAL 0x12345678 +bool is_shstk_access; +void *shstk_ptr; +int fd; + +void reset_test_shstk(void *addr) +{ + if (shstk_ptr) + free_shstk(shstk_ptr); + shstk_ptr = create_shstk(addr); +} + +void test_access_fix_handler(int signum, siginfo_t *si, void *uc) +{ + printf("[INFO]\tViolation from %s\n", is_shstk_access ? 
"shstk access" : "normal write"); + + segv_triggered = true; + + /* Fix shadow stack */ + if (is_shstk_access) { + reset_test_shstk(shstk_ptr); + return; + } + + free_shstk(shstk_ptr); + create_normal_mem(shstk_ptr); +} + +bool test_shstk_access(void *ptr) +{ + is_shstk_access = true; + segv_triggered = false; + write_shstk(ptr, MAGIC_VAL); + + asm volatile("" : : : "memory"); + + return segv_triggered; +} + +bool test_write_access(void *ptr) +{ + is_shstk_access = false; + segv_triggered = false; + *(unsigned long *)ptr = MAGIC_VAL; + + asm volatile("" : : : "memory"); + + return segv_triggered; +} + +bool gup_write(void *ptr) +{ + unsigned long val; + + lseek(fd, (unsigned long)ptr, SEEK_SET); + if (write(fd, &val, sizeof(val)) < 0) + return 1; + + return 0; +} + +bool gup_read(void *ptr) +{ + unsigned long val; + + lseek(fd, (unsigned long)ptr, SEEK_SET); + if (read(fd, &val, sizeof(val)) < 0) + return 1; + + return 0; +} + +int test_gup(void) +{ + struct sigaction sa; + int status; + pid_t pid; + + sa.sa_sigaction = test_access_fix_handler; + if (sigaction(SIGSEGV, &sa, NULL)) + return 1; + sa.sa_flags = SA_SIGINFO; + + segv_triggered = false; + + fd = open("/proc/self/mem", O_RDWR); + if (fd == -1) + return 1; + + reset_test_shstk(0); + if (gup_read(shstk_ptr)) + return 1; + if (test_shstk_access(shstk_ptr)) + return 1; + printf("[INFO]\tGup read -> shstk access success\n"); + + reset_test_shstk(0); + if (gup_write(shstk_ptr)) + return 1; + if (test_shstk_access(shstk_ptr)) + return 1; + printf("[INFO]\tGup write -> shstk access success\n"); + + reset_test_shstk(0); + if (gup_read(shstk_ptr)) + return 1; + if (!test_write_access(shstk_ptr)) + return 1; + printf("[INFO]\tGup read -> write access success\n"); + + reset_test_shstk(0); + if (gup_write(shstk_ptr)) + return 1; + if (!test_write_access(shstk_ptr)) + return 1; + printf("[INFO]\tGup write -> write access success\n"); + + close(fd); + + /* COW/gup test */ + reset_test_shstk(0); + pid = fork(); + if (!pid) { + fd = open("/proc/self/mem", O_RDWR); + if (fd == -1) + exit(1); + + if (gup_write(shstk_ptr)) { + close(fd); + exit(1); + } + close(fd); + exit(0); + } + waitpid(pid, &status, 0); + if (WEXITSTATUS(status)) { + printf("[FAIL]\tWrite in child failed\n"); + return 1; + } + if (*(unsigned long *)shstk_ptr == MAGIC_VAL) { + printf("[FAIL]\tWrite in child wrote through to shared memory\n"); + return 1; + } + + printf("[INFO]\tCow gup write -> write access success\n"); + + free_shstk(shstk_ptr); + + signal(SIGSEGV, SIG_DFL); + + printf("[OK]\tShadow gup test\n"); + + return 0; +} + +int test_mprotect(void) +{ + struct sigaction sa; + + sa.sa_sigaction = test_access_fix_handler; + if (sigaction(SIGSEGV, &sa, NULL)) + return 1; + sa.sa_flags = SA_SIGINFO; + + segv_triggered = false; + + /* mprotect a shadow stack as read only */ + reset_test_shstk(0); + if (mprotect(shstk_ptr, SS_SIZE, PROT_READ) < 0) { + printf("[FAIL]\tmprotect(PROT_READ) failed\n"); + return 1; + } + + /* try to wrss it and fail */ + if (!test_shstk_access(shstk_ptr)) { + printf("[FAIL]\tShadow stack access to read-only memory succeeded\n"); + return 1; + } + + /* + * The shadow stack was reset above to resolve the fault, make the new one + * read-only. 
+ */ + if (mprotect(shstk_ptr, SS_SIZE, PROT_READ) < 0) { + printf("[FAIL]\tmprotect(PROT_READ) failed\n"); + return 1; + } + + /* then back to writable */ + if (mprotect(shstk_ptr, SS_SIZE, PROT_WRITE | PROT_READ) < 0) { + printf("[FAIL]\tmprotect(PROT_WRITE) failed\n"); + return 1; + } + + /* then wrss to it and succeed */ + if (test_shstk_access(shstk_ptr)) { + printf("[FAIL]\tShadow stack access to mprotect() writable memory failed\n"); + return 1; + } + + free_shstk(shstk_ptr); + + signal(SIGSEGV, SIG_DFL); + + printf("[OK]\tmprotect() test\n"); + + return 0; +} + +char zero[4096]; + +static void *uffd_thread(void *arg) +{ + struct uffdio_copy req; + int uffd = *(int *)arg; + struct uffd_msg msg; + + if (read(uffd, &msg, sizeof(msg)) <= 0) + return (void *)1; + + req.dst = msg.arg.pagefault.address; + req.src = (__u64)zero; + req.len = 4096; + req.mode = 0; + + if (ioctl(uffd, UFFDIO_COPY, &req)) + return (void *)1; + + return (void *)0; +} + +int test_userfaultfd(void) +{ + struct uffdio_register uffdio_register; + struct uffdio_api uffdio_api; + struct sigaction sa; + pthread_t thread; + void *res; + int uffd; + + sa.sa_sigaction = test_access_fix_handler; + if (sigaction(SIGSEGV, &sa, NULL)) + return 1; + sa.sa_flags = SA_SIGINFO; + + uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); + if (uffd < 0) { + printf("[SKIP]\tUserfaultfd unavailable.\n"); + return 0; + } + + reset_test_shstk(0); + + uffdio_api.api = UFFD_API; + uffdio_api.features = 0; + if (ioctl(uffd, UFFDIO_API, &uffdio_api)) + goto err; + + uffdio_register.range.start = (__u64)shstk_ptr; + uffdio_register.range.len = 4096; + uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING; + if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register)) + goto err; + + if (pthread_create(&thread, NULL, &uffd_thread, &uffd)) + goto err; + + reset_shstk(shstk_ptr); + test_shstk_access(shstk_ptr); + + if (pthread_join(thread, &res)) + goto err; + + if (test_shstk_access(shstk_ptr)) + goto err; + + free_shstk(shstk_ptr); + + signal(SIGSEGV, SIG_DFL); + + if (!res) + printf("[OK]\tUserfaultfd test\n"); + return !!res; +err: + free_shstk(shstk_ptr); + close(uffd); + signal(SIGSEGV, SIG_DFL); + return 1; +} + +/* + * Too complicated to pull it out of the 32 bit header, but also get the + * 64 bit one needed above. Just define a copy here. + */ +#define __NR_compat_sigaction 67 + +/* + * Call 32 bit signal handler to get 32 bit signals ABI. Make sure + * to push the registers that will get clobbered. + */ +int sigaction32(int signum, const struct sigaction *restrict act, + struct sigaction *restrict oldact) +{ + register long syscall_reg asm("eax") = __NR_compat_sigaction; + register long signum_reg asm("ebx") = signum; + register long act_reg asm("ecx") = (long)act; + register long oldact_reg asm("edx") = (long)oldact; + int ret = 0; + + asm volatile ("int $0x80;" + : "=a"(ret), "=m"(oldact) + : "r"(syscall_reg), "r"(signum_reg), "r"(act_reg), + "r"(oldact_reg) + : "r8", "r9", "r10", "r11" + ); + + return ret; +} + +sigjmp_buf jmp_buffer; + +void segv_gp_handler(int signum, siginfo_t *si, void *uc) +{ + segv_triggered = true; + + /* + * To work with old glibc, this can't rely on siglongjmp working with + * shadow stack enabled, so disable shadow stack before siglongjmp(). + */ + ARCH_PRCTL(ARCH_SHSTK_DISABLE, ARCH_SHSTK_SHSTK); + siglongjmp(jmp_buffer, -1); +} + +/* + * Transition to 32 bit mode and check that a #GP triggers a segfault. 
+ */ +int test_32bit(void) +{ + struct sigaction sa; + struct sigaction *sa32; + + /* Create sigaction in 32 bit address range */ + sa32 = mmap(0, 4096, PROT_READ | PROT_WRITE, + MAP_32BIT | MAP_PRIVATE | MAP_ANONYMOUS, 0, 0); + sa32->sa_flags = SA_SIGINFO; + + sa.sa_sigaction = segv_gp_handler; + if (sigaction(SIGSEGV, &sa, NULL)) + return 1; + sa.sa_flags = SA_SIGINFO; + + segv_triggered = false; + + /* Make sure segv_triggered is set before triggering the #GP */ + asm volatile("" : : : "memory"); + + /* + * Set handler to somewhere in 32 bit address space + */ + sa32->sa_handler = (void *)sa32; + if (sigaction32(SIGUSR1, sa32, NULL)) + return 1; + + if (!sigsetjmp(jmp_buffer, 1)) + raise(SIGUSR1); + + if (segv_triggered) + printf("[OK]\t32 bit test\n"); + + return !segv_triggered; +} + +int main(int argc, char *argv[]) +{ + int ret = 0; + + if (ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK)) { + printf("[SKIP]\tCould not enable Shadow stack\n"); + return 1; + } + + if (ARCH_PRCTL(ARCH_SHSTK_DISABLE, ARCH_SHSTK_SHSTK)) { + ret = 1; + printf("[FAIL]\tDisabling shadow stack failed\n"); + } + + if (ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK)) { + printf("[SKIP]\tCould not re-enable Shadow stack\n"); + return 1; + } + + if (ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_WRSS)) { + printf("[SKIP]\tCould not enable WRSS\n"); + ret = 1; + goto out; + } + + /* Should have succeeded if here, but this is a test, so double check. */ + if (!get_ssp()) { + printf("[FAIL]\tShadow stack disabled\n"); + return 1; + } + + if (test_shstk_pivot()) { + ret = 1; + printf("[FAIL]\tShadow stack pivot\n"); + goto out; + } + + if (test_shstk_faults()) { + ret = 1; + printf("[FAIL]\tShadow stack fault test\n"); + goto out; + } + + if (test_shstk_violation()) { + ret = 1; + printf("[FAIL]\tShadow stack violation test\n"); + goto out; + } + + if (test_gup()) { + ret = 1; + printf("[FAIL]\tShadow shadow stack gup\n"); + goto out; + } + + if (test_mprotect()) { + ret = 1; + printf("[FAIL]\tShadow shadow mprotect test\n"); + goto out; + } + + if (test_userfaultfd()) { + ret = 1; + printf("[FAIL]\tUserfaultfd test\n"); + goto out; + } + + if (test_32bit()) { + ret = 1; + printf("[FAIL]\t32 bit test\n"); + } + + return ret; + +out: + /* + * Disable shadow stack before the function returns, or there will be a + * shadow stack violation. 
+ */ + if (ARCH_PRCTL(ARCH_SHSTK_DISABLE, ARCH_SHSTK_SHSTK)) { + ret = 1; + printf("[FAIL]\tDisabling shadow stack failed\n"); + } + + return ret; +} +#endif From patchwork Sun Mar 19 00:15:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Edgecombe, Rick P" X-Patchwork-Id: 71676 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:604a:0:0:0:0:0 with SMTP id j10csp529662wrt; Sat, 18 Mar 2023 18:26:39 -0700 (PDT) X-Google-Smtp-Source: AK7set9+nITCdstobQ9P7khO2YplyyeNXgUvvumoaIAzxYY/NRDHDcog+lQDgtMU7wp5XAh0V23f X-Received: by 2002:a05:6a20:3d92:b0:cc:ca9:4fde with SMTP id s18-20020a056a203d9200b000cc0ca94fdemr15988154pzi.33.1679189199432; Sat, 18 Mar 2023 18:26:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1679189199; cv=none; d=google.com; s=arc-20160816; b=Ys7muh0j/oWisIejuTs8Sg7DWZd2dBJhXayRo9nEQzVQlg0A4VogE7U8N/jpS7KTL+ C3SKwAiY02t5tfYaZcJJBVR7UGbpJw2Opm603uy4sNUPU6COVK7GOJjW6hUVB/K9Fo7Q ZPtT7UhkbwHb0crganANGWnizABpfj7Zlqt3DnruCRqH8mBNj53EeNYnKbABeI4TUDP3 NSRcGlKaVwts9nsnWUA2YMuNVGDtF/q40DRw1s0bN7ZQ0PNT5TeyYAkWpBHwrvP1VoC1 eCQaqOdFfoUNUhJDVeVTs/CCBOKEHigZsUxrjGtx+bYuI3HOhHUv4p36pTabp6Lp5Em0 Vaow== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=yHuJtH0ZX4t1G+OD4X4Fg0aKTiRaA6fz+P/1xCa+LZ0=; b=lcjHg1YTzsbKxi39XwVilD6viEN1ctFG/Pwu6eO8LuAFPdz2Ki3qjvaU8/jRZU1f6s /eF3bcuFK5+iseTIrh/goYnNtgt5eNbPf78W7sqird95hc4qyH4RLuOPjdAvz+JhHTef pYHvU1gax/g95dFgPqIm9+Oujs1Ks7c3QED1Bb4tJXxLdOPLf4l19iO+FYdjbhfxgxjW QpLKRLnkeoSzT0bl5zK1U9xup0ABD+wxJ0MBYfYbGQCRzU3npRF7ru7ITfswCeIP1oV0 vuOmLQX4zoVMpHbC3w8pW48CU3B3zM/Aba3saX2M5ceGZuL4m+WPhDdxUz4SEAaVDG+a eb8w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=NsGne9sC; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id q28-20020a635c1c000000b0050b8a7635dfsi6259866pgb.295.2023.03.18.18.26.26; Sat, 18 Mar 2023 18:26:39 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=NsGne9sC; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230420AbjCSAX7 (ORCPT + 99 others); Sat, 18 Mar 2023 20:23:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45914 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229958AbjCSAXV (ORCPT ); Sat, 18 Mar 2023 20:23:21 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 50BC42C644; Sat, 18 Mar 2023 17:20:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679185253; x=1710721253; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=vzjvLf4i3g21tZUx0No2irY3Y9uThaxbSSa7EUI0VaM=; b=NsGne9sC8MfVPTp/BxZNE4Q2u/+TcbeOBk8lOIzyV39w/L4O7adgjq3D lgRr6c7qiwRinTSWktoC8lGIRFRB8zbWwvZyq6sRCg2b6/lUutII3NcjN T92aE7MblV3it8nMwXoif42u3BSfW03GdiThxiBrV6KDbgBBdQUJ2urRx TBJ1J8wZ24a6sSvuBSrGZRNfD2NOqF2d6i/pCVeVOmU3egaZuMLq4ADYK 7VMB/vh3QXb33q5QzMLFCzZAi1mZMrYU3wlEy5IAiejUEk/y6AlVOD5c+ 3Kgc9i40VrROOeJp9IzxnfowG6ww/bZZConHYuwjlCQFkK6EXP+U/OHSB g==; X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="338491580" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="338491580" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:58 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10653"; a="749673008" X-IronPort-AV: E=Sophos;i="5.98,272,1673942400"; d="scan'208";a="749673008" Received: from bmahatwo-mobl1.gar.corp.intel.com (HELO rpedgeco-desk.amr.corp.intel.com) ([10.135.34.5]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Mar 2023 17:16:56 -0700 From: Rick Edgecombe To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . 
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v8 38/40] x86: Add PTRACE interface for shadow stack Date: Sat, 18 Mar 2023 17:15:33 -0700 Message-Id: <20230319001535.23210-39-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757494368819183?= X-GMAIL-MSGID: =?utf-8?q?1760757494368819183?= Some applications (like GDB) would like to tweak shadow stack state via ptrace. This allows for existing functionality to continue to work for seized shadow stack applications. Provide a regset interface for manipulating the shadow stack pointer (SSP). There is already ptrace functionality for accessing xstate, but this does not include supervisor xfeatures. So there is not a completely clear place for where to put the shadow stack state. Adding it to the user xfeatures regset would complicate that code, as it currently shares logic with signals which should not have supervisor features. Don't add a general supervisor xfeature regset like the user one, because it is better to maintain flexibility for other supervisor xfeatures to define their own interface. For example, an xfeature may decide not to expose all of it's state to userspace, as is actually the case for shadow stack ptrace functionality. A lot of enum values remain to be used, so just put it in dedicated shadow stack regset. The only downside to not having a generic supervisor xfeature regset, is that apps need to be enlightened of any new supervisor xfeature exposed this way (i.e. they can't try to have generic save/restore logic). But maybe that is a good thing, because they have to think through each new xfeature instead of encountering issues when a new supervisor xfeature was added. By adding a shadow stack regset, it also has the effect of including the shadow stack state in a core dump, which could be useful for debugging. The shadow stack specific xstate includes the SSP, and the shadow stack and WRSS enablement status. Enabling shadow stack or WRSS in the kernel involves more than just flipping the bit. The kernel is made aware that it has to do extra things when cloning or handling signals. That logic is triggered off of separate feature enablement state kept in the task struct. So the flipping on HW shadow stack enforcement without notifying the kernel to change its behavior would severely limit what an application could do without crashing, and the results would depend on kernel internal implementation details. There is also no known use for controlling this state via ptrace today. So only expose the SSP, which is something that userspace already has indirect control over. 
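To illustrate the regset interface described above, here is a hedged tracer-side sketch (not part of the series): it attaches to a PID given on the command line and reads that thread's SSP with PTRACE_GETREGSET. NT_X86_SHSTK (0x204) and the single-u64 payload come from the hunks below; a PTRACE_SETREGSET with the same iovec would write the SSP back, subject to the alignment and TASK_SIZE_MAX checks in ssp_set().

/* Hypothetical tracer sketch: read a stopped tracee's SSP via the new regset. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <elf.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/wait.h>

#ifndef NT_X86_SHSTK
#define NT_X86_SHSTK 0x204	/* from the elf.h hunk below */
#endif

int main(int argc, char *argv[])
{
	uint64_t ssp;
	struct iovec iov = { .iov_base = &ssp, .iov_len = sizeof(ssp) };
	pid_t pid;

	if (argc < 2)
		return 1;
	pid = atoi(argv[1]);

	if (ptrace(PTRACE_ATTACH, pid, 0, 0) < 0)
		return 1;
	waitpid(pid, NULL, 0);

	/* Fails (e.g. ENODEV) when the tracee is not running with shadow stack */
	if (ptrace(PTRACE_GETREGSET, pid, NT_X86_SHSTK, &iov) < 0)
		perror("PTRACE_GETREGSET(NT_X86_SHSTK)");
	else
		printf("tracee SSP = %#llx\n", (unsigned long long)ssp);

	ptrace(PTRACE_DETACH, pid, 0, 0);
	return 0;
}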
Co-developed-by: Yu-cheng Yu Signed-off-by: Yu-cheng Yu Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v8: - Update commit log verbiage (Boris) - Stop using init_xfeature() and just return an error if the init state is encountered, since it shouldn't be. (Boris) v5: - Check shadow stack enablement status for tracee (rppt) - Fix typo in comment v4: - Make shadow stack only. Reduce to only supporting SSP register, and remove CET references (peterz) - Add comment to not use 0x203, because binutils already looks for it in coredumps. (Christina Schimpe) v3: - Drop dependence on thread.shstk.size, and use thread.features bits - Drop 32 bit support --- arch/x86/include/asm/fpu/regset.h | 7 +-- arch/x86/kernel/fpu/regset.c | 78 +++++++++++++++++++++++++++++++ arch/x86/kernel/ptrace.c | 12 +++++ include/uapi/linux/elf.h | 2 + 4 files changed, 96 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/fpu/regset.h b/arch/x86/include/asm/fpu/regset.h index 4f928d6a367b..697b77e96025 100644 --- a/arch/x86/include/asm/fpu/regset.h +++ b/arch/x86/include/asm/fpu/regset.h @@ -7,11 +7,12 @@ #include -extern user_regset_active_fn regset_fpregs_active, regset_xregset_fpregs_active; +extern user_regset_active_fn regset_fpregs_active, regset_xregset_fpregs_active, + ssp_active; extern user_regset_get2_fn fpregs_get, xfpregs_get, fpregs_soft_get, - xstateregs_get; + xstateregs_get, ssp_get; extern user_regset_set_fn fpregs_set, xfpregs_set, fpregs_soft_set, - xstateregs_set; + xstateregs_set, ssp_set; /* * xstateregs_active == regset_fpregs_active. Please refer to the comment diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c index 6d056b68f4ed..f0a8eaf7c52e 100644 --- a/arch/x86/kernel/fpu/regset.c +++ b/arch/x86/kernel/fpu/regset.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "context.h" #include "internal.h" @@ -174,6 +175,83 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset, return ret; } +#ifdef CONFIG_X86_USER_SHADOW_STACK +int ssp_active(struct task_struct *target, const struct user_regset *regset) +{ + if (target->thread.features & ARCH_SHSTK_SHSTK) + return regset->n; + + return 0; +} + +int ssp_get(struct task_struct *target, const struct user_regset *regset, + struct membuf to) +{ + struct fpu *fpu = &target->thread.fpu; + struct cet_user_state *cetregs; + + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) + return -ENODEV; + + sync_fpstate(fpu); + cetregs = get_xsave_addr(&fpu->fpstate->regs.xsave, XFEATURE_CET_USER); + if (WARN_ON(!cetregs)) { + /* + * This shouldn't ever be NULL because shadow stack was + * verified to be enabled above. This means + * MSR_IA32_U_CET.CET_SHSTK_EN should be 1 and so + * XFEATURE_CET_USER should not be in the init state. 
+		 */
+		return -ENODEV;
+	}
+
+	return membuf_write(&to, (unsigned long *)&cetregs->user_ssp,
+			    sizeof(cetregs->user_ssp));
+}
+
+int ssp_set(struct task_struct *target, const struct user_regset *regset,
+	    unsigned int pos, unsigned int count,
+	    const void *kbuf, const void __user *ubuf)
+{
+	struct fpu *fpu = &target->thread.fpu;
+	struct xregs_state *xsave = &fpu->fpstate->regs.xsave;
+	struct cet_user_state *cetregs;
+	unsigned long user_ssp;
+	int r;
+
+	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) ||
+	    !ssp_active(target, regset))
+		return -ENODEV;
+
+	r = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &user_ssp, 0, -1);
+	if (r)
+		return r;
+
+	/*
+	 * Some kernel instructions (IRET, etc) can cause exceptions in the case
+	 * of disallowed CET register values. Just prevent invalid values.
+	 */
+	if (user_ssp >= TASK_SIZE_MAX || !IS_ALIGNED(user_ssp, 8))
+		return -EINVAL;
+
+	fpu_force_restore(fpu);
+
+	cetregs = get_xsave_addr(xsave, XFEATURE_CET_USER);
+	if (WARN_ON(!cetregs)) {
+		/*
+		 * This shouldn't ever be NULL because shadow stack was
+		 * verified to be enabled above. This means
+		 * MSR_IA32_U_CET.CET_SHSTK_EN should be 1 and so
+		 * XFEATURE_CET_USER should not be in the init state.
+		 */
+		return -ENODEV;
+	}
+
+	cetregs->user_ssp = user_ssp;
+	return 0;
+}
+#endif /* CONFIG_X86_USER_SHADOW_STACK */
+
 #if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
 
 /*
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index dfaa270a7cc9..095f04bdabdc 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -58,6 +58,7 @@ enum x86_regset_64 {
 	REGSET64_FP,
 	REGSET64_IOPERM,
 	REGSET64_XSTATE,
+	REGSET64_SSP,
 };
 
 #define REGSET_GENERAL \
@@ -1267,6 +1268,17 @@ static struct user_regset x86_64_regsets[] __ro_after_init = {
 		.active = ioperm_active, .regset_get = ioperm_get
 	},
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+	[REGSET64_SSP] = {
+		.core_note_type = NT_X86_SHSTK,
+		.n = 1,
+		.size = sizeof(u64),
+		.align = sizeof(u64),
+		.active = ssp_active,
+		.regset_get = ssp_get,
+		.set = ssp_set
+	},
+#endif
 };
 
 static const struct user_regset_view user_x86_64_view = {
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index ac3da855fb19..fa1ceeae2596 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -406,6 +406,8 @@ typedef struct elf64_shdr {
 #define NT_386_TLS	0x200		/* i386 TLS slots (struct user_desc) */
 #define NT_386_IOPERM	0x201		/* x86 io permission bitmap (1=deny) */
 #define NT_X86_XSTATE	0x202		/* x86 extended state using xsave */
+/* Old binutils treats 0x203 as a CET state */
+#define NT_X86_SHSTK	0x204		/* x86 SHSTK state */
 #define NT_S390_HIGH_GPRS	0x300	/* s390 upper register halves */
 #define NT_S390_TIMER	0x301		/* s390 timer register */
 #define NT_S390_TODCMP	0x302		/* s390 TOD clock comparator register */
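The write side is symmetrical. Below is a hedged companion sketch mirroring
the checks ssp_set() performs above (the new value must be 8-byte aligned and
below TASK_SIZE_MAX, otherwise the kernel returns -EINVAL); write_ssp() is an
illustrative name only, not an API from the series.

/* Hedged sketch only: write a new SSP into a stopped, attached tracee. */
#include <elf.h>
#include <errno.h>
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef NT_X86_SHSTK
#define NT_X86_SHSTK 0x204	/* value taken from this patch */
#endif

static int write_ssp(pid_t pid, uint64_t new_ssp)
{
	struct iovec iov = {
		.iov_base = &new_ssp,
		.iov_len  = sizeof(new_ssp),
	};

	/* ssp_set() rejects unaligned or kernel-range values with -EINVAL. */
	if (new_ssp & 7)
		return -EINVAL;

	if (ptrace(PTRACE_SETREGSET, pid, (void *)(uintptr_t)NT_X86_SHSTK, &iov) == -1)
		return -errno;

	return 0;
}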
From patchwork Sun Mar 19 00:15:34 2023
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com
Cc: rick.p.edgecombe@intel.com, Mike Rapoport
Subject: [PATCH v8 39/40] x86/shstk: Add ARCH_SHSTK_UNLOCK
Date: Sat, 18 Mar 2023 17:15:34 -0700
Message-Id: <20230319001535.23210-40-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com>
References: <20230319001535.23210-1-rick.p.edgecombe@intel.com>

From: Mike Rapoport

Userspace loaders may lock features before a CRIU restore operation has
the chance to set them to whatever state is required by the process being
restored. Allow a way for CRIU to unlock features. Add it as an
arch_prctl() like the other shadow stack operations, but restrict it to
being called only through the ptrace arch_prctl() interface.

[Merged into recent API changes, added commit log and docs]

Signed-off-by: Mike Rapoport
Signed-off-by: Rick Edgecombe
Reviewed-by: Kees Cook
Tested-by: Pengfei Xu
Tested-by: John Allen
Tested-by: Kees Cook
---
v8:
 - Remove Mike's ack from his own patch (Boris)

v4:
 - Add to docs that it is ptrace only.
 - Remove "CET" references

v3:
 - Depend on CONFIG_CHECKPOINT_RESTORE (Kees)
---
 Documentation/x86/shstk.rst       | 4 ++++
 arch/x86/include/uapi/asm/prctl.h | 1 +
 arch/x86/kernel/process_64.c      | 1 +
 arch/x86/kernel/shstk.c           | 9 +++++++--
 4 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/Documentation/x86/shstk.rst b/Documentation/x86/shstk.rst
index f09afa504ec0..f3553cc8c758 100644
--- a/Documentation/x86/shstk.rst
+++ b/Documentation/x86/shstk.rst
@@ -75,6 +75,10 @@ arch_prctl(ARCH_SHSTK_LOCK, unsigned long features)
     are ignored. The mask is ORed with the existing value. So any feature bits
     set here cannot be enabled or disabled afterwards.
 
+arch_prctl(ARCH_SHSTK_UNLOCK, unsigned long features)
+    Unlock features. 'features' is a mask of all features to unlock. All
+    bits set are processed, unset bits are ignored. Only works via ptrace.
+
 The return values are as follows. On success, return 0.
 On error, errno can be::
diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h
index e31495668056..200efbbe5809 100644
--- a/arch/x86/include/uapi/asm/prctl.h
+++ b/arch/x86/include/uapi/asm/prctl.h
@@ -25,6 +25,7 @@
 #define ARCH_SHSTK_ENABLE	0x5001
 #define ARCH_SHSTK_DISABLE	0x5002
 #define ARCH_SHSTK_LOCK		0x5003
+#define ARCH_SHSTK_UNLOCK	0x5004
 
 /* ARCH_SHSTK_ features bits */
 #define ARCH_SHSTK_SHSTK	(1ULL << 0)
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 9bbad1763e33..69d4ccaef56f 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -835,6 +835,7 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2)
 	case ARCH_SHSTK_ENABLE:
 	case ARCH_SHSTK_DISABLE:
 	case ARCH_SHSTK_LOCK:
+	case ARCH_SHSTK_UNLOCK:
 		return shstk_prctl(task, option, arg2);
 	default:
 		ret = -EINVAL;
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index ee89c4206ac9..ad336ab55ace 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -459,9 +459,14 @@ long shstk_prctl(struct task_struct *task, int option, unsigned long features)
 		return 0;
 	}
 
-	/* Don't allow via ptrace */
-	if (task != current)
+	/* Only allow via ptrace */
+	if (task != current) {
+		if (option == ARCH_SHSTK_UNLOCK && IS_ENABLED(CONFIG_CHECKPOINT_RESTORE)) {
+			task->thread.features_locked &= ~features;
+			return 0;
+		}
 		return -EINVAL;
+	}
 
 	/* Do not allow to change locked features */
 	if (features & task->thread.features_locked)
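For completeness, a checkpoint/restore style tool would issue this unlock on
the tracee's behalf through ptrace. The sketch below is an assumption-laden
illustration, not code from the series: it relies on x86-64's
PTRACE_ARCH_PRCTL request reaching do_arch_prctl_64() with ptrace's 'data'
argument as the option and 'addr' as the feature mask, and on a kernel built
with CONFIG_CHECKPOINT_RESTORE; the fallback constants mirror the uapi header
hunk above and should normally come from <asm/prctl.h> and <asm/ptrace-abi.h>.

/*
 * Hedged sketch only: unlock shadow stack features in a seized tracee,
 * roughly the way a CRIU-like restorer might. unlock_shstk_features() is a
 * hypothetical helper name.
 */
#include <errno.h>
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>

#ifndef PTRACE_ARCH_PRCTL
#define PTRACE_ARCH_PRCTL	30	/* x86-64 only */
#endif
#ifndef ARCH_SHSTK_UNLOCK
#define ARCH_SHSTK_UNLOCK	0x5004	/* from the uapi hunk above */
#define ARCH_SHSTK_SHSTK	(1ULL << 0)
#endif

static int unlock_shstk_features(pid_t pid, unsigned long features)
{
	/*
	 * Assumed argument order: for PTRACE_ARCH_PRCTL the kernel calls
	 * do_arch_prctl_64(child, data, addr), so 'data' carries the option
	 * and 'addr' carries the feature mask. Worth verifying against the
	 * target kernel.
	 */
	if (ptrace(PTRACE_ARCH_PRCTL, pid, (void *)features,
		   (void *)(uintptr_t)ARCH_SHSTK_UNLOCK) == -1)
		return -errno;

	return 0;
}

A restorer would typically call something like
unlock_shstk_features(pid, ARCH_SHSTK_SHSTK) before re-enabling the feature
state it is about to restore.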
From patchwork Sun Mar 19 00:15:35 2023
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H . J . Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , Weijiang Yang , "Kirill A . Shutemov" ,
Shutemov" , John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com Cc: rick.p.edgecombe@intel.com Subject: [PATCH v8 40/40] x86/shstk: Add ARCH_SHSTK_STATUS Date: Sat, 18 Mar 2023 17:15:35 -0700 Message-Id: <20230319001535.23210-41-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com> References: <20230319001535.23210-1-rick.p.edgecombe@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1760757058298657373?= X-GMAIL-MSGID: =?utf-8?q?1760757058298657373?= CRIU and GDB need to get the current shadow stack and WRSS enablement status. This information is already available via /proc/pid/status, but this is inconvenient for CRIU because it involves parsing the text output in an area of the code where this is difficult. Provide a status arch_prctl(), ARCH_SHSTK_STATUS for retrieving the status. Have arg2 be a userspace address, and make the new arch_prctl simply copy the features out to userspace. Suggested-by: Mike Rapoport Signed-off-by: Rick Edgecombe Reviewed-by: Kees Cook Acked-by: Mike Rapoport (IBM) Tested-by: Pengfei Xu Tested-by: John Allen Tested-by: Kees Cook --- v5: - Fix typo in commit log v4: - New patch --- Documentation/x86/shstk.rst | 6 ++++++ arch/x86/include/asm/shstk.h | 2 +- arch/x86/include/uapi/asm/prctl.h | 1 + arch/x86/kernel/process_64.c | 1 + arch/x86/kernel/shstk.c | 8 +++++++- 5 files changed, 16 insertions(+), 2 deletions(-) diff --git a/Documentation/x86/shstk.rst b/Documentation/x86/shstk.rst index f3553cc8c758..60260e809baf 100644 --- a/Documentation/x86/shstk.rst +++ b/Documentation/x86/shstk.rst @@ -79,6 +79,11 @@ arch_prctl(ARCH_SHSTK_UNLOCK, unsigned long features) Unlock features. 'features' is a mask of all features to unlock. All bits set are processed, unset bits are ignored. Only works via ptrace. +arch_prctl(ARCH_SHSTK_STATUS, unsigned long addr) + Copy the currently enabled features to the address passed in addr. The + features are described using the bits passed into the others in + 'features'. + The return values are as follows. On success, return 0. On error, errno can be:: @@ -86,6 +91,7 @@ be:: -ENOTSUPP if the feature is not supported by the hardware or kernel. 
 	-EINVAL arguments (non existing feature, etc)
+	-EFAULT if could not copy information back to userspace
 
 The feature's bits supported are::
diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h
index ecb23a8ca47d..42fee8959df7 100644
--- a/arch/x86/include/asm/shstk.h
+++ b/arch/x86/include/asm/shstk.h
@@ -14,7 +14,7 @@ struct thread_shstk {
 	u64 size;
 };
 
-long shstk_prctl(struct task_struct *task, int option, unsigned long features);
+long shstk_prctl(struct task_struct *task, int option, unsigned long arg2);
 void reset_thread_features(void);
 unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags,
 				       unsigned long stack_size);
diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h
index 200efbbe5809..1b85bc876c2d 100644
--- a/arch/x86/include/uapi/asm/prctl.h
+++ b/arch/x86/include/uapi/asm/prctl.h
@@ -26,6 +26,7 @@
 #define ARCH_SHSTK_DISABLE	0x5002
 #define ARCH_SHSTK_LOCK		0x5003
 #define ARCH_SHSTK_UNLOCK	0x5004
+#define ARCH_SHSTK_STATUS	0x5005
 
 /* ARCH_SHSTK_ features bits */
 #define ARCH_SHSTK_SHSTK	(1ULL << 0)
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 69d4ccaef56f..31241930b60c 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -836,6 +836,7 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2)
 	case ARCH_SHSTK_DISABLE:
 	case ARCH_SHSTK_LOCK:
 	case ARCH_SHSTK_UNLOCK:
+	case ARCH_SHSTK_STATUS:
 		return shstk_prctl(task, option, arg2);
 	default:
 		ret = -EINVAL;
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index ad336ab55ace..1f767c509ee9 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -452,8 +452,14 @@ SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsi
 	return alloc_shstk(addr, aligned_size, size, set_tok);
 }
 
-long shstk_prctl(struct task_struct *task, int option, unsigned long features)
+long shstk_prctl(struct task_struct *task, int option, unsigned long arg2)
 {
+	unsigned long features = arg2;
+
+	if (option == ARCH_SHSTK_STATUS) {
+		return put_user(task->thread.features, (unsigned long __user *)arg2);
+	}
+
 	if (option == ARCH_SHSTK_LOCK) {
 		task->thread.features_locked |= features;
 		return 0;
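Tying the series together, a process (or a restorer preparing one) could
query the enabled feature bits roughly as below. This is a hedged sketch
rather than text from the patch: glibc has no wrapper for these arch_prctl()
codes, so the raw syscall is used, and the fallback ARCH_SHSTK_* values
mirror the uapi hunks above.

/*
 * Hedged sketch only: query the calling thread's shadow stack feature bits
 * with the ARCH_SHSTK_STATUS arch_prctl() added above.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef ARCH_SHSTK_STATUS
#define ARCH_SHSTK_STATUS	0x5005	/* from the uapi hunk above */
#define ARCH_SHSTK_SHSTK	(1ULL << 0)
#endif

int main(void)
{
	unsigned long features = 0;

	/* Returns 0 on success; fails with EFAULT if the pointer is bad. */
	if (syscall(SYS_arch_prctl, ARCH_SHSTK_STATUS, &features) != 0) {
		perror("arch_prctl(ARCH_SHSTK_STATUS)");
		return 1;
	}

	printf("shadow stack %s\n",
	       (features & ARCH_SHSTK_SHSTK) ? "enabled" : "disabled");
	return 0;
}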