Message ID | 20231124113803.165431-1-cleger@rivosinc.com |
---|---|
State | New |
Series | [v2] riscv: fix incorrect use of __user pointer |
Commit Message
Clément Léger
Nov. 24, 2023, 11:38 a.m. UTC
These warnings were reported by sparse and were due to a lack of __user
annotation as well as dereferencing such pointers:
arch/riscv/kernel/traps_misaligned.c:361:21: warning: dereference of noderef expression
arch/riscv/kernel/traps_misaligned.c:373:21: warning: dereference of noderef expression
arch/riscv/kernel/traps_misaligned.c:381:21: warning: dereference of noderef expression
arch/riscv/kernel/traps_misaligned.c:322:24: warning: incorrect type in initializer (different address spaces)
arch/riscv/kernel/traps_misaligned.c:322:24: expected unsigned char const [noderef] __user *__gu_ptr
arch/riscv/kernel/traps_misaligned.c:322:24: got unsigned char const [usertype] *addr
arch/riscv/kernel/traps_misaligned.c:361:21: warning: dereference of noderef expression
arch/riscv/kernel/traps_misaligned.c:373:21: warning: dereference of noderef expression
arch/riscv/kernel/traps_misaligned.c:381:21: warning: dereference of noderef expression
arch/riscv/kernel/traps_misaligned.c:332:24: warning: incorrect type in initializer (different address spaces)
arch/riscv/kernel/traps_misaligned.c:332:24: expected unsigned char [noderef] __user *__gu_ptr
arch/riscv/kernel/traps_misaligned.c:332:24: got unsigned char [usertype] *addr
As suggested by Christoph Hellwig, casting pointers from one address
space to another is not a good idea and we should rather cast the
untyped unsigned long to its final address space. Fix the ones in
load_u8()/store_u8()/__read_insn() by passing an unsigned long and then
casting it to the appropriate type (__user or not) depending on whether
it is used in kernel or user mode. Also remove the unneeded else
constructs in store_u8()/load_u8().
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202311160606.obGOOwB3-lkp@intel.com/
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
arch/riscv/kernel/traps_misaligned.c | 55 +++++++++++++---------------
1 file changed, 25 insertions(+), 30 deletions(-)
Comments
On Fri, Nov 24, 2023 at 12:38:03PM +0100, Clément Léger wrote:
>
>  #ifdef CONFIG_RISCV_M_MODE
> -static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
> +static inline int load_u8(struct pt_regs *regs, const unsigned long addr, u8 *r_val)

Please avoid the overly long line here.  Or just drop the const as that
is rather pointless for a scalar (unlike the pointer).

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
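For reference, one way this feedback could be folded into the M-mode helper
(const dropped on the scalar, prototype wrapped to stay within 80 columns);
a sketch only, the final form is up to the author:

/* Sketch: same body as the patch, only the signature changes. */
static inline int load_u8(struct pt_regs *regs, unsigned long addr,
			  u8 *r_val)
{
	u8 val;

	asm volatile("lbu %0, %1" : "=&r" (val) : "m" (*(const u8 *)addr));
	*r_val = val;

	return 0;
}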
...
> @@ -491,7 +486,7 @@ int handle_misaligned_load(struct pt_regs *regs)
>
>  	val.data_u64 = 0;
>  	for (i = 0; i < len; i++) {
> -		if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i]))
> +		if (load_u8(regs, addr + i, &val.data_bytes[i]))
>  			return -1;
>  	}

I'd really have thought that you'd want to pull the kernel/user
check way outside the loop?

In any case, for a misaligned read why not just read (addr & ~7)[0]
and (if needed) (addr & ~7)[1] and then shift and or together?

clang will do it for misaligned structure members with known
misalignment, but it is almost certainly also better for reads
with unknown misalignment.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
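For illustration, a minimal sketch of the aligned-load-and-shift idea for a
kernel-mode address on little-endian RV64; the helper name, masking details
and the len <= 8 assumption are illustrative, not part of this patch:

#include <linux/types.h>

/*
 * Sketch only: at most two naturally aligned 64-bit loads around the
 * misaligned address, spliced together with shifts, instead of one byte
 * load per byte of the access. Kernel addresses only, 1 <= len <= 8.
 */
static u64 misaligned_load_kernel(unsigned long addr, unsigned int len)
{
	unsigned long base = addr & ~7UL;
	unsigned int shift = (addr & 7) * 8;
	u64 val = *(u64 *)base >> shift;

	/* The access crosses an 8-byte boundary: take the rest from the next word. */
	if (shift && (addr & 7) + len > 8)
		val |= *(u64 *)(base + 8) << (64 - shift);

	/* Keep only the low len bytes. */
	if (len < 8)
		val &= (1ULL << (len * 8)) - 1;

	return val;
}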
On 25/11/2023 16:37, David Laight wrote:
> ...
>> @@ -491,7 +486,7 @@ int handle_misaligned_load(struct pt_regs *regs)
>>
>>  	val.data_u64 = 0;
>>  	for (i = 0; i < len; i++) {
>> -		if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i]))
>> +		if (load_u8(regs, addr + i, &val.data_bytes[i]))
>>  			return -1;
>>  	}
>
> I'd really have thought that you'd want to pull the kernel/user
> check way outside the loop?

Hi David,

I hope the compiler is able to extract that 'if' out of the loop since
regs isn't modified in the loop. Nevertheless, that could be more
"clear" if put outside indeed.

> In any case, for a misaligned read why not just read (addr & ~7)[0]
> and (if needed) (addr & ~7)[1] and then shift and or together?

Makes sense, the original code was like that but was probably
copy/pasted from openSBI, I guess. Let's keep that for the moment (this
patch is about fixing the wrong __user address space). I will try to
submit another series using what you proposed.

Regards,

Clément

> clang will do it for misaligned structure members with known
> misalignment, but it is almost certainly also better for reads
> with unknown misalignment.
>
> 	David
From: Clément Léger
> Sent: 27 November 2023 10:24
>
> On 25/11/2023 16:37, David Laight wrote:
> > ...
> >> @@ -491,7 +486,7 @@ int handle_misaligned_load(struct pt_regs *regs)
> >>
> >>  	val.data_u64 = 0;
> >>  	for (i = 0; i < len; i++) {
> >> -		if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i]))
> >> +		if (load_u8(regs, addr + i, &val.data_bytes[i]))
> >>  			return -1;
> >>  	}
> >
> > I'd really have thought that you'd want to pull the kernel/user
> > check way outside the loop?
>
> Hi David,
>
> I hope the compiler is able to extract that 'if' out of the loop since
> regs isn't modified in the loop. Nevertheless, that could be more
> "clear" if put outside indeed.

If it has to access regs->xxx then the compiler can't do so because it
must assume that the assignment might alias into 'regs'.
That is even true for byte writes if 'strict-aliasing' is enabled
- which it isn't for linux kernel builds.

It might do so if 'regs' were 'const'; it tends to assume that if
it can't change something nothing can - although that isn't true.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
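A sketch of what pulling the user/kernel check out of the per-byte loop could
look like; the helper name and the per-byte __get_user() are assumptions for
illustration, not code from this patch:

#include <linux/uaccess.h>

/*
 * Illustrative sketch only: evaluate user_mode(regs) once per emulated
 * access rather than once per byte, so the compiler does not have to
 * worry about the stores aliasing into *regs.
 */
static int load_bytes(struct pt_regs *regs, unsigned long addr,
		      u8 *dst, unsigned int len)
{
	unsigned int i;

	if (user_mode(regs)) {
		for (i = 0; i < len; i++)
			if (__get_user(dst[i], (u8 __user *)(addr + i)))
				return -1;
		return 0;
	}

	for (i = 0; i < len; i++)
		dst[i] = *(const u8 *)(addr + i);

	return 0;
}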
On 27/11/2023 11:35, David Laight wrote:
> From: Clément Léger
>> Sent: 27 November 2023 10:24
>>
>> On 25/11/2023 16:37, David Laight wrote:
>>> ...
[...]
>>> I'd really have thought that you'd want to pull the kernel/user
>>> check way outside the loop?
>>
>> Hi David,
>>
>> I hope the compiler is able to extract that 'if' out of the loop since
>> regs isn't modified in the loop. Nevertheless, that could be more
>> "clear" if put outside indeed.
>
> If it has to access regs->xxx then the compiler can't do so because it
> must assume that the assignment might alias into 'regs'.
> That is even true for byte writes if 'strict-aliasing' is enabled
> - which it isn't for linux kernel builds.
>
> It might do so if 'regs' were 'const'; it tends to assume that if
> it can't change something nothing can - although that isn't true.

Ok, good to know! As I said, I'll modify that in a subsequent patch.

Thanks,

Clément
From: Clément Léger
> Sent: 27 November 2023 10:37
>
> On 27/11/2023 11:35, David Laight wrote:
[...]
> > It might do so if 'regs' were 'const'; it tends to assume that if
> > it can't change something nothing can - although that isn't true.
>
> Ok, good to know! As I said, I'll modify that in a subsequent patch.

Actually the following loops will (probably) generate much better code:

	// Read kernel
	val = 0;
	for (i = 0; i < len; i++)
		val |= addr[i] << (i * 8);

	// Write kernel
	for (i = 0; i < len; i++, val >>= 8)
		addr[i] = val;

For user access use __get/put_user() as appropriate.
I think there is a 'goto' variant of the user access functions that
probably makes the code clearer.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
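The 'goto' variant David mentions is presumably the user_access_begin() /
unsafe_get_user() family; a rough sketch of how the user-side read loop
might use it (the helper name and shape are illustrative, not proposed code):

#include <linux/uaccess.h>

/*
 * Sketch only: user-space side of the byte-assembly loop using the
 * "goto"-style accessors. Assumes 1 <= len <= sizeof(unsigned long).
 */
static int load_user_bytes(unsigned long addr, unsigned long *r_val,
			   unsigned int len)
{
	unsigned long val = 0;
	unsigned int i;
	u8 byte;

	if (!user_access_begin((void __user *)addr, len))
		return -EFAULT;

	for (i = 0; i < len; i++) {
		unsafe_get_user(byte, (u8 __user *)(addr + i), efault);
		val |= (unsigned long)byte << (i * 8);
	}
	user_access_end();

	*r_val = val;
	return 0;

efault:
	user_access_end();
	return -EFAULT;
}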
On 27/11/2023 13:02, Ben Dooks wrote:
> On 24/11/2023 11:38, Clément Léger wrote:
>> These warnings were reported by sparse and were due to a lack of __user
>> annotation as well as dereferencing such pointers:
>>
[...]
>> @@ -589,7 +584,7 @@ int handle_misaligned_store(struct pt_regs *regs)
>>  		return -EOPNOTSUPP;
>>
>>  	for (i = 0; i < len; i++) {
>> -		if (store_u8(regs, (void *)(addr + i), val.data_bytes[i]))
>> +		if (store_u8(regs, addr + i, val.data_bytes[i]))
>>  			return -1;
>>  	}
>
> Would it not be easier to have a switch here for memcpy or copy_to_user?

Most probably yes. I'll update the V3 with these modifications. We'll
get rid of store_u8()/load_u8() entirely.

Thanks,

Clément
On 27/11/2023 13:46, Clément Léger wrote:
[...]
>> Would it not be easier to have a switch here for memcpy or copy_to_user?
>
> Most probably yes. I'll update the V3 with these modifications. We'll
> get rid of store_u8()/load_u8() entirely.

While memcpy()/copy_from_user() will be slower than the solution David
Laight proposed (read two 4/8-byte words at once and shift the contents
together), it is more maintainable, and this path of code will not
suffer from a few lost cycles (emulating misaligned accesses is already
horribly slow...).

BTW, I need to check, but maybe we can also get rid of all the
M_MODE-specific code (__read_insn) entirely.

Clément
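For illustration, the kind of bulk-copy helpers the V3 direction discussed
here could end up with; the names and exact shape are guesses about a future
revision, not code from this patch:

#include <linux/string.h>
#include <linux/uaccess.h>

/*
 * Sketch only: switch once on user_mode(regs) and do a single
 * memcpy()/copy_{from,to}_user() per emulated access instead of the
 * per-byte load_u8()/store_u8() loops. Helper names are hypothetical.
 */
static int copy_mem_in(struct pt_regs *regs, unsigned long addr,
		       void *buf, unsigned int len)
{
	if (user_mode(regs))
		return copy_from_user(buf, (const void __user *)addr, len) ?
		       -EFAULT : 0;

	memcpy(buf, (const void *)addr, len);
	return 0;
}

static int copy_mem_out(struct pt_regs *regs, unsigned long addr,
			const void *buf, unsigned int len)
{
	if (user_mode(regs))
		return copy_to_user((void __user *)addr, buf, len) ?
		       -EFAULT : 0;

	memcpy((void *)addr, buf, len);
	return 0;
}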
diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 5eba37147caa..a92b88af855a 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -265,19 +265,19 @@ static unsigned long get_f32_rs(unsigned long insn, u8 fp_reg_offset,
 #define GET_F32_RS2S(insn, regs) (get_f32_rs(RVC_RS2S(insn), 0, regs))
 
 #ifdef CONFIG_RISCV_M_MODE
-static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
+static inline int load_u8(struct pt_regs *regs, const unsigned long addr, u8 *r_val)
 {
 	u8 val;
 
-	asm volatile("lbu %0, %1" : "=&r" (val) : "m" (*addr));
+	asm volatile("lbu %0, %1" : "=&r" (val) : "m" (*(const u8 *)addr));
 	*r_val = val;
 
 	return 0;
 }
 
-static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
+static inline int store_u8(struct pt_regs *regs, unsigned long addr, u8 val)
 {
-	asm volatile ("sb %0, %1\n" : : "r" (val), "m" (*addr));
+	asm volatile ("sb %0, %1\n" : : "r" (val), "m" (*(u8 *)addr));
 
 	return 0;
 }
@@ -316,34 +316,32 @@ static inline int get_insn(struct pt_regs *regs, ulong mepc, ulong *r_insn)
 	return 0;
 }
 #else
-static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
+static inline int load_u8(struct pt_regs *regs, const unsigned long addr, u8 *r_val)
 {
-	if (user_mode(regs)) {
-		return __get_user(*r_val, addr);
-	} else {
-		*r_val = *addr;
-		return 0;
-	}
+	if (user_mode(regs))
+		return __get_user(*r_val, (u8 __user *)addr);
+
+	*r_val = *(const u8 *)addr;
+	return 0;
 }
 
-static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
+static inline int store_u8(struct pt_regs *regs, unsigned long addr, u8 val)
 {
-	if (user_mode(regs)) {
-		return __put_user(val, addr);
-	} else {
-		*addr = val;
-		return 0;
-	}
+	if (user_mode(regs))
+		return __put_user(val, (u8 __user *)addr);
+
+	*(u8 *)addr = val;
+	return 0;
 }
 
-#define __read_insn(regs, insn, insn_addr)		\
+#define __read_insn(regs, insn, insn_addr, type)	\
 ({							\
 	int __ret;					\
 							\
 	if (user_mode(regs)) {				\
-		__ret = __get_user(insn, insn_addr);	\
+		__ret = __get_user(insn, (type __user *) insn_addr);	\
 	} else {					\
-		insn = *insn_addr;			\
+		insn = *(type *)insn_addr;		\
 		__ret = 0;				\
 	}						\
 							\
@@ -356,9 +354,8 @@ static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
 
 	if (epc & 0x2) {
 		ulong tmp = 0;
-		u16 __user *insn_addr = (u16 __user *)epc;
 
-		if (__read_insn(regs, insn, insn_addr))
+		if (__read_insn(regs, insn, epc, u16))
 			return -EFAULT;
 		/* __get_user() uses regular "lw" which sign extend the loaded
 		 * value make sure to clear higher order bits in case we "or" it
@@ -369,16 +366,14 @@ static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
 			*r_insn = insn;
 			return 0;
 		}
-		insn_addr++;
-		if (__read_insn(regs, tmp, insn_addr))
+		epc += sizeof(u16);
+		if (__read_insn(regs, tmp, epc, u16))
 			return -EFAULT;
 		*r_insn = (tmp << 16) | insn;
 
 		return 0;
 	} else {
-		u32 __user *insn_addr = (u32 __user *)epc;
-
-		if (__read_insn(regs, insn, insn_addr))
+		if (__read_insn(regs, insn, epc, u32))
 			return -EFAULT;
 		if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) {
 			*r_insn = insn;
@@ -491,7 +486,7 @@ int handle_misaligned_load(struct pt_regs *regs)
 
 	val.data_u64 = 0;
 	for (i = 0; i < len; i++) {
-		if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i]))
+		if (load_u8(regs, addr + i, &val.data_bytes[i]))
 			return -1;
 	}
 
@@ -589,7 +584,7 @@ int handle_misaligned_store(struct pt_regs *regs)
 		return -EOPNOTSUPP;
 
 	for (i = 0; i < len; i++) {
-		if (store_u8(regs, (void *)(addr + i), val.data_bytes[i]))
+		if (store_u8(regs, addr + i, val.data_bytes[i]))
 			return -1;
 	}
 