Message ID | 20230309101752.2025459-1-elver@google.com |
---|---|
State | New |
Series | kcsan: Avoid READ_ONCE() in read_instrumented_memory() |
Commit Message
Marco Elver
March 9, 2023, 10:17 a.m. UTC
Haibo Li reported:

| Unable to handle kernel paging request at virtual address
| ffffff802a0d8d71
| Mem abort info:
|   ESR = 0x96000021
|   EC = 0x25: DABT (current EL), IL = 32 bits
|   SET = 0, FnV = 0
|   EA = 0, S1PTW = 0
|   FSC = 0x21: alignment fault
| Data abort info:
|   ISV = 0, ISS = 0x00000021
|   CM = 0, WnR = 0
| swapper pgtable: 4k pages, 39-bit VAs, pgdp=000000002835200000
| [ffffff802a0d8d71] pgd=180000005fbf9003, p4d=180000005fbf9003,
| pud=180000005fbf9003, pmd=180000005fbe8003, pte=006800002a0d8707
| Internal error: Oops: 96000021 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 2 PID: 45 Comm: kworker/u8:2 Not tainted
| 5.15.78-android13-8-g63561175bbda-dirty #1
| ...
| pc : kcsan_setup_watchpoint+0x26c/0x6bc
| lr : kcsan_setup_watchpoint+0x88/0x6bc
| sp : ffffffc00ab4b7f0
| x29: ffffffc00ab4b800 x28: ffffff80294fe588 x27: 0000000000000001
| x26: 0000000000000019 x25: 0000000000000001 x24: ffffff80294fdb80
| x23: 0000000000000000 x22: ffffffc00a70fb68 x21: ffffff802a0d8d71
| x20: 0000000000000002 x19: 0000000000000000 x18: ffffffc00a9bd060
| x17: 0000000000000001 x16: 0000000000000000 x15: ffffffc00a59f000
| x14: 0000000000000001 x13: 0000000000000000 x12: ffffffc00a70faa0
| x11: 00000000aaaaaaab x10: 0000000000000054 x9 : ffffffc00839adf8
| x8 : ffffffc009b4cf00 x7 : 0000000000000000 x6 : 0000000000000007
| x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffffc00a70fb70
| x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
| Call trace:
|  kcsan_setup_watchpoint+0x26c/0x6bc
|  __tsan_read2+0x1f0/0x234
|  inflate_fast+0x498/0x750
|  zlib_inflate+0x1304/0x2384
|  __gunzip+0x3a0/0x45c
|  gunzip+0x20/0x30
|  unpack_to_rootfs+0x2a8/0x3fc
|  do_populate_rootfs+0xe8/0x11c
|  async_run_entry_fn+0x58/0x1bc
|  process_one_work+0x3ec/0x738
|  worker_thread+0x4c4/0x838
|  kthread+0x20c/0x258
|  ret_from_fork+0x10/0x20
| Code: b8bfc2a8 2a0803f7 14000007 d503249f (78bfc2a8)
| ---[ end trace 613a943cb0a572b6 ]---

The reason for this is that on certain arm64 configurations, since
e35123d83ee3 ("arm64: lto: Strengthen READ_ONCE() to acquire when
CONFIG_LTO=y"), READ_ONCE() may be promoted to a full atomic acquire
instruction which cannot be used on unaligned addresses.

Fix it by avoiding READ_ONCE() in read_instrumented_memory(), and simply
forcing the compiler to do the required access by casting to the
appropriate volatile type. In terms of generated code this currently
only affects architectures that do not use the default READ_ONCE()
implementation.

The only downside is that we are not guaranteed atomicity of the access
itself, although on most architectures a plain load up to machine word
size should still be atomic (a fact the default READ_ONCE()
implementation itself relies on).

Reported-by: Haibo Li <haibo.li@mediatek.com>
Tested-by: Haibo Li <haibo.li@mediatek.com>
Cc: <stable@vger.kernel.org> # 5.17+
Signed-off-by: Marco Elver <elver@google.com>
---
 kernel/kcsan/core.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)
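For context, the value returned by read_instrumented_memory() feeds only
KCSAN's value-change detection: the watched location is read once before
and once after a delay, and observing two different values is enough to
flag a race. A minimal sketch of that pattern (heavily simplified from
kcsan_setup_watchpoint(); report_race() is a hypothetical stand-in for
KCSAN's actual reporting path, and watchpoint handling is omitted):

	u64 old, new;

	old = read_instrumented_memory(ptr, size);	/* snapshot before delay */
	delay_access(type);				/* stall, let racing accesses run */
	new = read_instrumented_memory(ptr, size);	/* snapshot after delay */

	if (old != new)
		report_race();				/* value changed under us */

Neither read needs to be atomic or ordered for this comparison, which is
why the plain volatile load in the patch suffices.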
Comments
On Thu, Mar 09, 2023 at 11:17:52AM +0100, Marco Elver wrote:
> Haibo Li reported:
>
> [...]
>
> The reason for this is that on certain arm64 configurations, since
> e35123d83ee3 ("arm64: lto: Strengthen READ_ONCE() to acquire when
> CONFIG_LTO=y"), READ_ONCE() may be promoted to a full atomic acquire
> instruction which cannot be used on unaligned addresses.
>
> [...]
>
> Reported-by: Haibo Li <haibo.li@mediatek.com>
> Tested-by: Haibo Li <haibo.li@mediatek.com>
> Cc: <stable@vger.kernel.org> # 5.17+
> Signed-off-by: Marco Elver <elver@google.com>

Queued, thank you!

This one looks like it might want to go into v6.4 rather than later.

							Thanx, Paul
On Thu, 9 Mar 2023 at 23:08, Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Thu, Mar 09, 2023 at 11:17:52AM +0100, Marco Elver wrote:
> > Haibo Li reported:
> >
> > [...]
>
> Queued, thank you!
>
> This one looks like it might want to go into v6.4 rather than later.

Yes, I think that'd be appropriate - thank you!

Thanks,
-- Marco
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 54d077e1a2dc..5a60cc52adc0 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -337,11 +337,20 @@ static void delay_access(int type)
  */
 static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
 {
+	/*
+	 * In the below we don't necessarily need the read of the location to
+	 * be atomic, and we don't use READ_ONCE(), since all we need for race
+	 * detection is to observe 2 different values.
+	 *
+	 * Furthermore, on certain architectures (such as arm64), READ_ONCE()
+	 * may turn into more complex instructions than a plain load that cannot
+	 * do unaligned accesses.
+	 */
 	switch (size) {
-	case 1: return READ_ONCE(*(const u8 *)ptr);
-	case 2: return READ_ONCE(*(const u16 *)ptr);
-	case 4: return READ_ONCE(*(const u32 *)ptr);
-	case 8: return READ_ONCE(*(const u64 *)ptr);
+	case 1: return *(const volatile u8 *)ptr;
+	case 2: return *(const volatile u16 *)ptr;
+	case 4: return *(const volatile u32 *)ptr;
+	case 8: return *(const volatile u64 *)ptr;
 	default: return 0; /* Ignore; we do not diff the values. */
 	}
 }
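An illustration of the codegen difference behind the oops (assumed
output; the exact instructions depend on compiler version and CPU
features): on arm64 with CONFIG_LTO=y, for the 2-byte case reached via
__tsan_read2 in the trace above, the two forms may compile roughly to:

	u16 a = READ_ONCE(*(const u16 *)ptr);
		/* may emit: ldarh w0, [x0] (or ldaprh with FEAT_LRCPC);
		 * load-acquire instructions require a naturally aligned
		 * address and otherwise raise the alignment fault
		 * (FSC = 0x21) seen in the report above. */

	u16 b = *(const volatile u16 *)ptr;
		/* emits a plain ldrh w0, [x0], which is permitted on
		 * unaligned addresses for normal cacheable memory. */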