From patchwork Thu Jan 25 06:21:44 2024
X-Patchwork-Submitter: Deepak Gupta
X-Patchwork-Id: 191875
From: debug@rivosinc.com
To: rick.p.edgecombe@intel.com, broonie@kernel.org, Szabolcs.Nagy@arm.com, kito.cheng@sifive.com, keescook@chromium.org, ajones@ventanamicro.com, paul.walmsley@sifive.com, palmer@dabbelt.com, conor.dooley@microchip.com, cleger@rivosinc.com, atishp@atishpatra.org, alex@ghiti.fr, bjorn@rivosinc.com, alexghiti@rivosinc.com
Cc: corbet@lwn.net, aou@eecs.berkeley.edu, oleg@redhat.com, akpm@linux-foundation.org, arnd@arndb.de, ebiederm@xmission.com, shuah@kernel.org, brauner@kernel.org, debug@rivosinc.com, guoren@kernel.org, samitolvanen@google.com, evan@rivosinc.com, xiao.w.wang@intel.com, apatel@ventanamicro.com, mchitale@ventanamicro.com, waylingii@gmail.com, greentime.hu@sifive.com, heiko@sntech.de, jszhang@kernel.org, shikemeng@huaweicloud.com, david@redhat.com, charlie@rivosinc.com, panqinglin2020@iscas.ac.cn, willy@infradead.org, vincent.chen@sifive.com, andy.chiu@sifive.com, gerg@kernel.org, jeeheng.sia@starfivetech.com, mason.huo@starfivetech.com, ancientmodern4@gmail.com, mathis.salmen@matsal.de, cuiyunhui@bytedance.com, bhe@redhat.com, chenjiahao16@huawei.com, ruscur@russell.cc, bgray@linux.ibm.com, alx@kernel.org, baruch@tkos.co.il, zhangqing@loongson.cn, catalin.marinas@arm.com, revest@chromium.org, josh@joshtriplett.org, joey.gouly@arm.com, shr@devkernel.io, omosnace@redhat.com, ojeda@kernel.org, jhubbard@nvidia.com, linux-doc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [RFC PATCH v1 19/28] riscv: Implement arch agnostic shadow stack prctls
Date: Wed, 24 Jan 2024 22:21:44 -0800
Message-ID: <20240125062739.1339782-20-debug@rivosinc.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240125062739.1339782-1-debug@rivosinc.com>
References: <20240125062739.1339782-1-debug@rivosinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Deepak Gupta <debug@rivosinc.com>

Implement the architecture-agnostic prctl() interface for getting and setting
shadow stack status. The prctls implemented are PR_GET_SHADOW_STACK_STATUS,
PR_SET_SHADOW_STACK_STATUS and PR_LOCK_SHADOW_STACK_STATUS.

For PR_SET_SHADOW_STACK_STATUS / PR_GET_SHADOW_STACK_STATUS, only
PR_SHADOW_STACK_ENABLE is implemented, because RISC-V allows each mode to
write to its own shadow stack using `sspush` or `ssamoswap`.

PR_LOCK_SHADOW_STACK_STATUS locks the current shadow stack enable
configuration so that it can no longer be changed by the task.

The sequence "enable shadow stack, then disable it, then enable it again" is
not supported. It is unclear whether such semantics would be useful, so an
error code is returned when that situation arises.

Signed-off-by: Deepak Gupta <debug@rivosinc.com>
---
(A minimal userspace sketch of how these prctls might be exercised follows
after the diff.)

 arch/riscv/include/asm/usercfi.h |  12 +++-
 arch/riscv/kernel/usercfi.c      | 105 +++++++++++++++++++++++++++++++
 2 files changed, 116 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
index eb9a0905e72b..72bcfa773752 100644
--- a/arch/riscv/include/asm/usercfi.h
+++ b/arch/riscv/include/asm/usercfi.h
@@ -7,6 +7,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/types.h>
+#include <linux/prctl.h>
 
 struct task_struct;
 struct kernel_clone_args;
@@ -14,7 +15,8 @@ struct kernel_clone_args;
 #ifdef CONFIG_RISCV_USER_CFI
 struct cfi_status {
 	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
-	unsigned long rsvd : ((sizeof(unsigned long)*8) - 1);
+	unsigned long ubcfi_locked : 1;
+	unsigned long rsvd : ((sizeof(unsigned long)*8) - 2);
 	unsigned long user_shdw_stk; /* Current user shadow stack pointer */
 	unsigned long shdw_stk_base; /* Base address of shadow stack */
 	unsigned long shdw_stk_size; /* size of shadow stack */
@@ -26,6 +28,9 @@ void shstk_release(struct task_struct *tsk);
 void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size);
 void set_active_shstk(struct task_struct *task, unsigned long shstk_addr);
 bool is_shstk_enabled(struct task_struct *task);
+bool is_shstk_locked(struct task_struct *task);
+
+#define PR_SHADOW_STACK_SUPPORTED_STATUS_MASK (PR_SHADOW_STACK_ENABLE)
 
 #else
 
@@ -56,6 +61,11 @@ static inline bool is_shstk_enabled(struct task_struct *task)
 	return false;
 }
 
+static inline bool is_shstk_locked(struct task_struct *task)
+{
+	return false;
+}
+
 #endif /* CONFIG_RISCV_USER_CFI */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
index 36cac0d653f5..be3a071272d8 100644
--- a/arch/riscv/kernel/usercfi.c
+++ b/arch/riscv/kernel/usercfi.c
@@ -24,6 +24,16 @@ bool is_shstk_enabled(struct task_struct *task)
 	return task->thread_info.user_cfi_state.ubcfi_en ? true : false;
 }
 
+bool is_shstk_allocated(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.shdw_stk_base ? true : false;
+}
+
+bool is_shstk_locked(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.ubcfi_locked ? true : false;
+}
+
 void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size)
 {
 	task->thread_info.user_cfi_state.shdw_stk_base = shstk_addr;
@@ -42,6 +52,21 @@ void set_active_shstk(struct task_struct *task, unsigned long shstk_addr)
 	task->thread_info.user_cfi_state.user_shdw_stk = shstk_addr;
 }
 
+void set_shstk_status(struct task_struct *task, bool enable)
+{
+	task->thread_info.user_cfi_state.ubcfi_en = enable ? 1 : 0;
+
+	if (enable)
+		task->thread_info.envcfg |= ENVCFG_SSE;
+	else
+		task->thread_info.envcfg &= ~ENVCFG_SSE;
+}
+
+void set_shstk_lock(struct task_struct *task)
+{
+	task->thread_info.user_cfi_state.ubcfi_locked = 1;
+}
+
 /*
  * If size is 0, then to be compatible with regular stack we want it to be as big as
  * regular stack. Else PAGE_ALIGN it and return back
@@ -269,3 +294,83 @@ void shstk_release(struct task_struct *tsk)
 	vm_munmap(base, size);
 	set_shstk_base(tsk, 0, 0);
 }
+
+int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status)
+{
+	unsigned long bcfi_status = 0;
+
+	if (!cpu_supports_shadow_stack())
+		return -EINVAL;
+
+	/* this means shadow stack is enabled on the task */
+	bcfi_status |= (is_shstk_enabled(t) ? PR_SHADOW_STACK_ENABLE : 0);
+
+	return copy_to_user(status, &bcfi_status, sizeof(bcfi_status)) ? -EFAULT : 0;
+}
+
+int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status)
+{
+	unsigned long size = 0, addr = 0;
+	bool enable_shstk = false;
+
+	if (!cpu_supports_shadow_stack())
+		return -EINVAL;
+
+	/* Reject unknown flags */
+	if (status & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
+		return -EINVAL;
+
+	/* bcfi status is locked and further can't be modified by user */
+	if (is_shstk_locked(t))
+		return -EINVAL;
+
+	enable_shstk = status & PR_SHADOW_STACK_ENABLE;
+	/* Request is to enable shadow stack and shadow stack is not enabled already */
+	if (enable_shstk && !is_shstk_enabled(t)) {
+		/* shadow stack was allocated and enable request again
+		 * no need to support such usecase and return EINVAL.
+		 */
+		if (is_shstk_allocated(t))
+			return -EINVAL;
+
+		size = calc_shstk_size(0);
+		addr = allocate_shadow_stack(0, size, 0, false);
+		if (IS_ERR_VALUE(addr))
+			return -ENOMEM;
+		set_shstk_base(t, addr, size);
+		set_active_shstk(t, addr + size);
+	}
+
+	/*
+	 * If a request to disable shadow stack happens, let's go ahead and release it
+	 * Although, if CLONE_VFORKed child did this, then in that case we will end up
+	 * not releasing the shadow stack (because it might be needed in parent). Although
+	 * we will disable it for VFORKed child. And if VFORKed child tries to enable again
+	 * then in that case, it'll get entirely new shadow stack because following condition
+	 * are true
+	 * - shadow stack was not enabled for vforked child
+	 * - shadow stack base was anyways pointing to 0
+	 * This shouldn't be a big issue because we want parent to have availability of shadow
+	 * stack whenever VFORKed child releases resources via exit or exec but at the same
+	 * time we want VFORKed child to break away and establish new shadow stack if it desires
+	 *
+	 */
+	if (!enable_shstk)
+		shstk_release(t);
+
+	set_shstk_status(t, enable_shstk);
+	return 0;
+}
+
+int arch_lock_shadow_stack_status(struct task_struct *task,
+				  unsigned long arg)
+{
+	/* If shtstk not supported or not enabled on task, nothing to lock here */
+	if (!cpu_supports_shadow_stack() ||
+	    !is_shstk_enabled(task))
+		return -EINVAL;
+
+	set_shstk_lock(task);
+
+	return 0;
+}
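
For illustration only (not part of the patch): the sketch below shows how userspace
might exercise the three prctls once this series is applied. It assumes that
PR_GET_SHADOW_STACK_STATUS, PR_SET_SHADOW_STACK_STATUS, PR_LOCK_SHADOW_STACK_STATUS
and PR_SHADOW_STACK_ENABLE are exported through <linux/prctl.h> by the series, that
arg2 carries the status word (a pointer to it for the GET case), and that the
remaining arguments are zero; the lock prctl's argument is ignored by this
implementation. Enabling the shadow stack mid-execution is shown only to demonstrate
the API shape: frames entered before enablement (including the libc prctl() wrapper
itself) have no shadow stack entries, so returning through them would mismatch, and
real enablement is expected to happen at program start, e.g. driven by the
toolchain/loader.

/*
 * Hypothetical example, not part of this patch.  The PR_* constants are
 * assumed to come from <linux/prctl.h> with this series applied.
 */
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	unsigned long status = 0;

	/* Read the current shadow stack status for this task into 'status'. */
	if (prctl(PR_GET_SHADOW_STACK_STATUS, &status, 0, 0, 0))
		perror("PR_GET_SHADOW_STACK_STATUS");
	printf("shadow stack status: 0x%lx\n", status);

	/* Ask the kernel to allocate a shadow stack and enable it. */
	if (prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE, 0, 0, 0))
		perror("PR_SET_SHADOW_STACK_STATUS");

	/* Lock the configuration; later SET attempts now fail with EINVAL. */
	if (prctl(PR_LOCK_SHADOW_STACK_STATUS, 0, 0, 0, 0))
		perror("PR_LOCK_SHADOW_STACK_STATUS");

	return 0;
}

Once the lock is applied, a later prctl(PR_SET_SHADOW_STACK_STATUS, 0, ...) is
rejected instead of releasing the shadow stack, which is the behaviour
arch_lock_shadow_stack_status() above provides.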