Message ID | 20240125062739.1339782-10-debug@rivosinc.com |
---|---|
State | New |
Headers |
From: Deepak Gupta <debug@rivosinc.com>
To: rick.p.edgecombe@intel.com, broonie@kernel.org, Szabolcs.Nagy@arm.com, kito.cheng@sifive.com, keescook@chromium.org, ajones@ventanamicro.com, paul.walmsley@sifive.com, palmer@dabbelt.com, conor.dooley@microchip.com, cleger@rivosinc.com, atishp@atishpatra.org, alex@ghiti.fr, bjorn@rivosinc.com, alexghiti@rivosinc.com
Cc: corbet@lwn.net, aou@eecs.berkeley.edu, oleg@redhat.com, akpm@linux-foundation.org, arnd@arndb.de, ebiederm@xmission.com, shuah@kernel.org, brauner@kernel.org, debug@rivosinc.com, guoren@kernel.org, samitolvanen@google.com, evan@rivosinc.com, xiao.w.wang@intel.com, apatel@ventanamicro.com, mchitale@ventanamicro.com, waylingii@gmail.com, greentime.hu@sifive.com, heiko@sntech.de, jszhang@kernel.org, shikemeng@huaweicloud.com, david@redhat.com, charlie@rivosinc.com, panqinglin2020@iscas.ac.cn, willy@infradead.org, vincent.chen@sifive.com, andy.chiu@sifive.com, gerg@kernel.org, jeeheng.sia@starfivetech.com, mason.huo@starfivetech.com, ancientmodern4@gmail.com, mathis.salmen@matsal.de, cuiyunhui@bytedance.com, bhe@redhat.com, chenjiahao16@huawei.com, ruscur@russell.cc, bgray@linux.ibm.com, alx@kernel.org, baruch@tkos.co.il, zhangqing@loongson.cn, catalin.marinas@arm.com, revest@chromium.org, josh@joshtriplett.org, joey.gouly@arm.com, shr@devkernel.io, omosnace@redhat.com, ojeda@kernel.org, jhubbard@nvidia.com, linux-doc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [RFC PATCH v1 09/28] mm: abstract shadow stack vma behind `arch_is_shadow_stack`
Date: Wed, 24 Jan 2024 22:21:34 -0800
Message-ID: <20240125062739.1339782-10-debug@rivosinc.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240125062739.1339782-1-debug@rivosinc.com>
References: <20240125062739.1339782-1-debug@rivosinc.com>
List-Id: <linux-kernel.vger.kernel.org>
Series | riscv control-flow integrity for usermode |
Commit Message
Deepak Gupta
Jan. 25, 2024, 6:21 a.m. UTC
From: Deepak Gupta <debug@rivosinc.com>

x86 uses VM_SHADOW_STACK (an alias of VM_HIGH_ARCH_5) to encode a shadow
stack VMA. VM_SHADOW_STACK is thus not available on 32-bit. Some arches may
need a way to encode shadow stack on both 32-bit and 64-bit, and they may
encode this information differently in their VMAs.

This patch changes checks of the VM_SHADOW_STACK flag in generic code into
calls to a function `arch_is_shadow_stack`, which returns true if the arch
supports shadow stacks and the vma is a shadow stack; otherwise the stub
returns false.

There was a suggestion to name it `vma_is_shadow_stack`. I preferred to keep
the `arch` prefix because the encoding is arch-specific.

Signed-off-by: Deepak Gupta <debug@rivosinc.com>
---
 include/linux/mm.h | 18 +++++++++++++++++-
 mm/gup.c           |  5 +++--
 mm/internal.h      |  2 +-
 3 files changed, 21 insertions(+), 4 deletions(-)
Comments
On 25.01.24 07:21, debug@rivosinc.com wrote:
> From: Deepak Gupta <debug@rivosinc.com>
>
> x86 has used VM_SHADOW_STACK (alias to VM_HIGH_ARCH_5) to encode shadow
> stack VMA. VM_SHADOW_STACK is thus not possible on 32bit. Some arches may
> need a way to encode shadow stack on 32bit and 64bit both and they may
> encode this information differently in VMAs.
>
> [...]
>
> @@ -362,10 +366,22 @@ extern unsigned int kobjsize(const void *objp);
>   * with VM_SHARED.
>   */
>  #define VM_SHADOW_STACK VM_WRITE
> +
> +static inline bool arch_is_shadow_stack(vm_flags_t vm_flags)
> +{
> +	return ((vm_flags & (VM_WRITE | VM_READ | VM_EXEC)) == VM_WRITE);
> +}
> +

Please no such hacks just to work around the 32bit vmflags limitation.
On Thu, Jan 25, 2024 at 09:18:07AM +0100, David Hildenbrand wrote:
> On 25.01.24 07:21, debug@rivosinc.com wrote:
>> [...]
>>
>> +static inline bool arch_is_shadow_stack(vm_flags_t vm_flags)
>> +{
>> +	return ((vm_flags & (VM_WRITE | VM_READ | VM_EXEC)) == VM_WRITE);
>> +}
>> +
>
> Please no such hacks just to work around the 32bit vmflags limitation.

As I said in another response, noted. And if there are no takers for 32-bit
on riscv (which is highly likely the case), this will go away in the next
version of the patchset.
On 25.01.24 18:07, Deepak Gupta wrote:
> On Thu, Jan 25, 2024 at 09:18:07AM +0100, David Hildenbrand wrote:
>> [...]
>>
>> Please no such hacks just to work around the 32bit vmflags limitation.
>
> As I said in another response. Noted.
> And if there're no takers for 32bit on riscv (which highly likely is the case)
> This will go away in next version of patchsets.

Sorry for the (unusually for me) late reply. Simplifying to riscv64 sounds
great.

Alternatively, maybe VM_SHADOW_STACK is not even required at all on riscv
if we can teach all code to only stare at arch_is_shadow_stack() instead.

... but just using the same VM_SHADOW_STACK will make it all much cleaner.
Eventually, we can stop playing arch-specific games with
arch_is_shadow_stack and VM_SHADOW_STACK.
On Tue, Feb 13, 2024 at 11:34:59AM +0100, David Hildenbrand wrote:
> On 25.01.24 18:07, Deepak Gupta wrote:
>> [...]
>
> Sorry for the (unusually for me) late reply. Simplifying to riscv64
> sounds great.
>
> Alternatively, maybe VM_SHADOW_STACK is not even required at all on
> riscv if we can teach all code to only stare at arch_is_shadow_stack()
> instead.

Sorry for the late reply.

I think for risc-v this can be done, if done in the following way:

static inline bool arch_is_shadow_stack(vm_flags_t vm_flags)
{
	return (vm_get_page_prot(vm_flags) == PAGE_SHADOWSTACK);
}

But doing the above would require the following:

- Inventing a new PROT_XXX type (let's call it PROT_SHADOWSTACK) that is
  only exposed to the kernel. The PROT_SHADOWSTACK protection flag would
  allow `do_mmap` to do the right thing and set up the appropriate
  protection perms. We wouldn't want to expose this protection type to
  user mode (because `map_shadow_stack` already exists for that). The
  current patch series uses PROT_SHADOWSTACK because VM_SHADOW_STACK was
  aliased to VM_WRITE.

- As you said, teaching all the generic code as well to use
  arch_is_shadow_stack, which might become complicated (can't say for
  sure).

> ... but just using the same VM_SHADOW_STACK will make it all much
> cleaner. Eventually, we can stop playing arch-specific games with
> arch_is_shadow_stack and VM_SHADOW_STACK.

Yeah, for now that looks like the easier thing to do.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dfe0e8118669..15c70fc677a3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -352,6 +352,10 @@ extern unsigned int kobjsize(const void *objp);
  * for more details on the guard size.
  */
 # define VM_SHADOW_STACK VM_HIGH_ARCH_5
+static inline bool arch_is_shadow_stack(vm_flags_t vm_flags)
+{
+	return (vm_flags & VM_SHADOW_STACK);
+}
 #endif
 
 #ifdef CONFIG_RISCV_USER_CFI
@@ -362,10 +366,22 @@ extern unsigned int kobjsize(const void *objp);
  * with VM_SHARED.
  */
 #define VM_SHADOW_STACK VM_WRITE
+
+static inline bool arch_is_shadow_stack(vm_flags_t vm_flags)
+{
+	return ((vm_flags & (VM_WRITE | VM_READ | VM_EXEC)) == VM_WRITE);
+}
+
 #endif
 
 #ifndef VM_SHADOW_STACK
 # define VM_SHADOW_STACK VM_NONE
+
+static inline bool arch_is_shadow_stack(vm_flags_t vm_flags)
+{
+	return false;
+}
+
 #endif
 
 #if defined(CONFIG_X86)
@@ -3464,7 +3480,7 @@ static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
 		return stack_guard_gap;
 
 	/* See reasoning around the VM_SHADOW_STACK definition */
-	if (vma->vm_flags & VM_SHADOW_STACK)
+	if (vma->vm_flags && arch_is_shadow_stack(vma->vm_flags))
 		return PAGE_SIZE;
 
 	return 0;
diff --git a/mm/gup.c b/mm/gup.c
index 231711efa390..45798782ed2c 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1051,7 +1051,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 		    !writable_file_mapping_allowed(vma, gup_flags))
 			return -EFAULT;
 
-		if (!(vm_flags & VM_WRITE) || (vm_flags & VM_SHADOW_STACK)) {
+		if (!(vm_flags & VM_WRITE) || arch_is_shadow_stack(vm_flags)) {
 			if (!(gup_flags & FOLL_FORCE))
 				return -EFAULT;
 			/* hugetlb does not support FOLL_FORCE|FOLL_WRITE. */
@@ -1069,7 +1069,8 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 			if (!is_cow_mapping(vm_flags))
 				return -EFAULT;
 		}
-	} else if (!(vm_flags & VM_READ)) {
+	} else if (!(vm_flags & VM_READ) && !arch_is_shadow_stack(vm_flags)) {
+		/* reads allowed if its shadow stack vma */
 		if (!(gup_flags & FOLL_FORCE))
 			return -EFAULT;
 		/*
diff --git a/mm/internal.h b/mm/internal.h
index b61034bd50f5..0abf00c93fe1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -572,7 +572,7 @@ static inline bool is_exec_mapping(vm_flags_t flags)
  */
 static inline bool is_stack_mapping(vm_flags_t flags)
 {
-	return ((flags & VM_STACK) == VM_STACK) || (flags & VM_SHADOW_STACK);
+	return ((flags & VM_STACK) == VM_STACK) || arch_is_shadow_stack(flags);
 }
 
 /*