Message ID | 20230505035127.195387-1-mpe@ellerman.id.au |
---|---|
State | New |
Headers |
From: Michael Ellerman <mpe@ellerman.id.au>
To: glider@google.com, elver@google.com, akpm@linux-foundation.org, zhangpeng.00@bytedance.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH] mm: kfence: Fix false positives on big endian
Date: Fri, 5 May 2023 13:51:27 +1000
Message-Id: <20230505035127.195387-1-mpe@ellerman.id.au>
X-Mailer: git-send-email 2.40.1 |
Series | mm: kfence: Fix false positives on big endian |
Commit Message
Michael Ellerman
May 5, 2023, 3:51 a.m. UTC
Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
__kfence_alloc() and __kfence_free()"), kfence reports failures in
random places at boot on big endian machines.
The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
address of each byte in its value, so it needs to be byte swapped on big
endian machines.
The compiler is smart enough to do the le64_to_cpu() at compile time, so
there is no runtime overhead.
Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
mm/kfence/kfence.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Comments
On Fri, 5 May 2023 at 05:51, Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
>
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
>
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
>
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Reviewed-by: Marco Elver <elver@google.com>

Andrew, is the Fixes enough to make it to stable as well or do we also
need Cc: stable?

Thanks,
-- Marco

> ---
>  mm/kfence/kfence.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 2aafc46a4aaf..392fb273e7bd 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,7 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>
>  /* Maximum stack depth for reports. */
>  #define KFENCE_STACK_DEPTH 64
> --
> 2.40.1
>
Marco Elver <elver@google.com> writes:
> On Fri, 5 May 2023 at 05:51, Michael Ellerman <mpe@ellerman.id.au> wrote:
>>
>> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>> __kfence_alloc() and __kfence_free()"), kfence reports failures in
>> random places at boot on big endian machines.
>>
>> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>> address of each byte in its value, so it needs to be byte swapped on big
>> endian machines.
>>
>> The compiler is smart enough to do the le64_to_cpu() at compile time, so
>> there is no runtime overhead.
>>
>> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>
> Reviewed-by: Marco Elver <elver@google.com>

Thanks.

> Andrew, is the Fixes enough to make it to stable as well or do we also
> need Cc: stable?

That commit is not in any releases yet (or even an rc), so as long as it
gets picked up before v6.4 then it won't need to go to stable.

cheers
From: Michael Ellerman
> Sent: 05 May 2023 04:51
>
> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
>
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
>
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
>
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  mm/kfence/kfence.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 2aafc46a4aaf..392fb273e7bd 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,7 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))

What are the (u64) casts for?
The constants should probably have a ul (or ull) suffix.

	David
On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:

> From: Michael Ellerman
> > Sent: 05 May 2023 04:51
> >
> > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> > __kfence_alloc() and __kfence_free()"), kfence reports failures in
> > random places at boot on big endian machines.
> >
> > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> > address of each byte in its value, so it needs to be byte swapped on big
> > endian machines.
> >
> > The compiler is smart enough to do the le64_to_cpu() at compile time, so
> > there is no runtime overhead.
> >
> > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> > ---
> >  mm/kfence/kfence.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> > index 2aafc46a4aaf..392fb273e7bd 100644
> > --- a/mm/kfence/kfence.h
> > +++ b/mm/kfence/kfence.h
> > @@ -29,7 +29,7 @@
> >   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
> >   * at a time instead of byte by byte to improve performance.
> >   */
> > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>
> What are the (u64) casts for?
> The constants should probably have a ul (or ull) suffix.
>

I tried that, didn't fix the sparse warnings described at
https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.

Michael, have you looked into this?

I'll merge it upstream - I guess we can live with the warnings for a while.
Andrew Morton <akpm@linux-foundation.org> writes:
> On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:
>
>> From: Michael Ellerman
>> > Sent: 05 May 2023 04:51
>> >
>> > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>> > __kfence_alloc() and __kfence_free()"), kfence reports failures in
>> > random places at boot on big endian machines.
>> >
>> > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>> > address of each byte in its value, so it needs to be byte swapped on big
>> > endian machines.
>> >
>> > The compiler is smart enough to do the le64_to_cpu() at compile time, so
>> > there is no runtime overhead.
>> >
>> > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>> > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>> > ---
>> >  mm/kfence/kfence.h | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
>> > index 2aafc46a4aaf..392fb273e7bd 100644
>> > --- a/mm/kfence/kfence.h
>> > +++ b/mm/kfence/kfence.h
>> > @@ -29,7 +29,7 @@
>> >   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>> >   * at a time instead of byte by byte to improve performance.
>> >   */
>> > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
>> > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>>
>> What are the (u64) casts for?
>> The constants should probably have a ul (or ull) suffix.
>>
>
> I tried that, didn't fix the sparse warnings described at
> https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
>
> Michael, have you looked into this?

I haven't sorry, been chasing other bugs.

> I'll merge it upstream - I guess we can live with the warnings for a while.

Thanks, yeah spurious WARNs are more of a pain than some sparse warnings.

Maybe using le64_to_cpu() is too fancy, could just do it with an ifdef? eg.

diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 392fb273e7bd..510355a5382b 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -29,7 +29,11 @@
  * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
  * at a time instead of byte by byte to improve performance.
  */
-#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
+#ifdef __LITTLE_ENDIAN__
+#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0706050403020100ULL)
+#else
+#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0001020304050607ULL)
+#endif

 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64

cheers
On 18/05/2023 at 00:20, Andrew Morton wrote:
> On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:
>
>> From: Michael Ellerman
>>> Sent: 05 May 2023 04:51
>>>
>>> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>>> __kfence_alloc() and __kfence_free()"), kfence reports failures in
>>> random places at boot on big endian machines.
>>>
>>> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>>> address of each byte in its value, so it needs to be byte swapped on big
>>> endian machines.
>>>
>>> The compiler is smart enough to do the le64_to_cpu() at compile time, so
>>> there is no runtime overhead.
>>>
>>> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>>> ---
>>>  mm/kfence/kfence.h | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
>>> index 2aafc46a4aaf..392fb273e7bd 100644
>>> --- a/mm/kfence/kfence.h
>>> +++ b/mm/kfence/kfence.h
>>> @@ -29,7 +29,7 @@
>>>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>>>   * at a time instead of byte by byte to improve performance.
>>>   */
>>> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
>>> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>>
>> What are the (u64) casts for?
>> The constants should probably have a ul (or ull) suffix.
>>
>
> I tried that, didn't fix the sparse warnings described at
> https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
>
> Michael, have you looked into this?
>
> I'll merge it upstream - I guess we can live with the warnings for a while.
>

sparse warning goes away with:

#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ le64_to_cpu((__force __le64)0x0706050403020100))

Christophe
On Fri, 2023-05-19 at 15:14 +1000, Michael Ellerman wrote:
> Andrew Morton <akpm@linux-foundation.org> writes:
> > On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:
> >
> > > From: Michael Ellerman
> > > > Sent: 05 May 2023 04:51
> > > >
> > > > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> > > > __kfence_alloc() and __kfence_free()"), kfence reports failures in
> > > > random places at boot on big endian machines.
> > > >
> > > > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> > > > address of each byte in its value, so it needs to be byte swapped on big
> > > > endian machines.
> > > >
> > > > The compiler is smart enough to do the le64_to_cpu() at compile time, so
> > > > there is no runtime overhead.
> > > >
> > > > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> > > > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> > > > ---
> > > >  mm/kfence/kfence.h | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> > > > index 2aafc46a4aaf..392fb273e7bd 100644
> > > > --- a/mm/kfence/kfence.h
> > > > +++ b/mm/kfence/kfence.h
> > > > @@ -29,7 +29,7 @@
> > > >   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
> > > >   * at a time instead of byte by byte to improve performance.
> > > >   */
> > > > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> > > > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
> > >
> > > What are the (u64) casts for?
> > > The constants should probably have a ul (or ull) suffix.
> > >
> >
> > I tried that, didn't fix the sparse warnings described at
> > https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
> >
> > Michael, have you looked into this?
>
> I haven't sorry, been chasing other bugs.
>
> > I'll merge it upstream - I guess we can live with the warnings for a while.
>
> Thanks, yeah spurious WARNs are more of a pain than some sparse warnings.
>
> Maybe using le64_to_cpu() is too fancy, could just do it with an ifdef? eg.
>
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 392fb273e7bd..510355a5382b 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,11 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
> +#ifdef __LITTLE_ENDIAN__
> +#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0706050403020100ULL)
> +#else
> +#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0001020304050607ULL)
> +#endif
>
>  /* Maximum stack depth for reports. */
>  #define KFENCE_STACK_DEPTH 64
>
> cheers

(for the sparse errors)

As I understand, we require memory to look like "00 01 02 03 04 05 06 07"
such that iterating byte-by-byte gives 00, 01, etc. (with everything XORed
with aaa...)

I think it would be most semantically correct to use cpu_to_le64 on
KFENCE_CANARY_PATTERN_U64 and annotate the values being compared against
it as __le64. This is because we want the integer literal
0x0706050403020100 to be stored as "00 01 02 03 04 05 06 07", which is
the definition of little endian.

Masking this with an #ifdef leaves the type as cpu endian, which could
result in future issues.

(or I've just misunderstood and can disregard this)
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 2aafc46a4aaf..392fb273e7bd 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -29,7 +29,7 @@
  * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
  * at a time instead of byte by byte to improve performance.
  */
-#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
+#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))

 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64