Message ID: 20230628094938.2318171-1-arnd@kernel.org
State: New
Headers:
From: Arnd Bergmann <arnd@kernel.org>
To: "David S. Miller" <davem@davemloft.net>, Kees Cook <keescook@chromium.org>, Mark Rutland <mark.rutland@arm.com>, "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>, Guenter Roeck <linux@roeck-us.net>, Geert Uytterhoeven <geert@linux-m68k.org>, Ingo Molnar <mingo@kernel.org>, Andi Shyti <andi.shyti@linux.intel.com>, Andrzej Hajda <andrzej.hajda@intel.com>, Palmer Dabbelt <palmer@rivosinc.com>, sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] sparc: mark __arch_xchg() as __always_inline
Date: Wed, 28 Jun 2023 11:49:18 +0200
Message-Id: <20230628094938.2318171-1-arnd@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
Series: sparc: mark __arch_xchg() as __always_inline
Commit Message
Arnd Bergmann
June 28, 2023, 9:49 a.m. UTC
From: Arnd Bergmann <arnd@arndb.de>

An otherwise correct change to the atomic operations uncovered an existing bug in the sparc __arch_xchg() function, which calls __xchg_called_with_bad_pointer() when its arguments are unknown at compile time:

ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined!

This now happens because gcc determines that it's better to not inline the function. Avoid this by just marking the function as __always_inline to force the compiler to do the right thing here.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/
Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/sparc/include/asm/cmpxchg_32.h | 2 +-
 arch/sparc/include/asm/cmpxchg_64.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
Comments
On Wed, Jun 28, 2023 at 11:49:18AM +0200, Arnd Bergmann wrote: > From: Arnd Bergmann <arnd@arndb.de> > > An otherwise correct change to the atomic operations uncovered an > existing bug in the sparc __arch_xchg() function, which is calls > __xchg_called_with_bad_pointer() when its arguments are unknown at > compile time: > > ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined! > > This now happens because gcc determines that it's better to not inline the > function. Avoid this by just marking the function as __always_inline > to force the compiler to do the right thing here. > > Reported-by: Guenter Roeck <linux@roeck-us.net> > Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/ > Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional") > Signed-off-by: Arnd Bergmann <arnd@arndb.de> Aha; you saved me writing a patch! :) We should probably do likewise for all the other bits like __cmpxchg(), but either way: Acked-by: Mark Rutland <mark.rutland@arm.com> Mark. 
> --- > arch/sparc/include/asm/cmpxchg_32.h | 2 +- > arch/sparc/include/asm/cmpxchg_64.h | 2 +- > 2 files changed, 2 insertions(+), 2 deletions(-) > > diff --git a/arch/sparc/include/asm/cmpxchg_32.h b/arch/sparc/include/asm/cmpxchg_32.h > index 7a1339533d1d7..d0af82c240b73 100644 > --- a/arch/sparc/include/asm/cmpxchg_32.h > +++ b/arch/sparc/include/asm/cmpxchg_32.h > @@ -15,7 +15,7 @@ > unsigned long __xchg_u32(volatile u32 *m, u32 new); > void __xchg_called_with_bad_pointer(void); > > -static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) > +static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) > { > switch (size) { > case 4: > diff --git a/arch/sparc/include/asm/cmpxchg_64.h b/arch/sparc/include/asm/cmpxchg_64.h > index 66cd61dde9ec1..3de25262c4118 100644 > --- a/arch/sparc/include/asm/cmpxchg_64.h > +++ b/arch/sparc/include/asm/cmpxchg_64.h > @@ -87,7 +87,7 @@ xchg16(__volatile__ unsigned short *m, unsigned short val) > return (load32 & mask) >> bit_shift; > } > > -static inline unsigned long > +static __always_inline unsigned long > __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) > { > switch (size) { > -- > 2.39.2 >
On 6/28/23 02:49, Arnd Bergmann wrote: > From: Arnd Bergmann <arnd@arndb.de> > > An otherwise correct change to the atomic operations uncovered an > existing bug in the sparc __arch_xchg() function, which is calls > __xchg_called_with_bad_pointer() when its arguments are unknown at > compile time: > > ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined! > > This now happens because gcc determines that it's better to not inline the > function. Avoid this by just marking the function as __always_inline > to force the compiler to do the right thing here. > > Reported-by: Guenter Roeck <linux@roeck-us.net> > Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/ > Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional") > Signed-off-by: Arnd Bergmann <arnd@arndb.de> Nice catch. Acked-by: Guenter Roeck <linux@roeck-us.net> > --- > arch/sparc/include/asm/cmpxchg_32.h | 2 +- > arch/sparc/include/asm/cmpxchg_64.h | 2 +- > 2 files changed, 2 insertions(+), 2 deletions(-) > > diff --git a/arch/sparc/include/asm/cmpxchg_32.h b/arch/sparc/include/asm/cmpxchg_32.h > index 7a1339533d1d7..d0af82c240b73 100644 > --- a/arch/sparc/include/asm/cmpxchg_32.h > +++ b/arch/sparc/include/asm/cmpxchg_32.h > @@ -15,7 +15,7 @@ > unsigned long __xchg_u32(volatile u32 *m, u32 new); > void __xchg_called_with_bad_pointer(void); > > -static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) > +static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) > { > switch (size) { > case 4: > diff --git a/arch/sparc/include/asm/cmpxchg_64.h b/arch/sparc/include/asm/cmpxchg_64.h > index 66cd61dde9ec1..3de25262c4118 100644 > --- a/arch/sparc/include/asm/cmpxchg_64.h > +++ b/arch/sparc/include/asm/cmpxchg_64.h > @@ -87,7 +87,7 @@ xchg16(__volatile__ unsigned short *m, unsigned short val) > return (load32 & mask) >> bit_shift; > } > > -static inline unsigned long > +static __always_inline unsigned long > __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) > { > switch (size) {
On Wed, Jun 28, 2023 at 11:49:18AM +0200, Arnd Bergmann wrote: > From: Arnd Bergmann <arnd@arndb.de> > > An otherwise correct change to the atomic operations uncovered an > existing bug in the sparc __arch_xchg() function, which is calls > __xchg_called_with_bad_pointer() when its arguments are unknown at > compile time: > > ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined! > > This now happens because gcc determines that it's better to not inline the > function. Avoid this by just marking the function as __always_inline > to force the compiler to do the right thing here. > > Reported-by: Guenter Roeck <linux@roeck-us.net> > Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/ > Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional") > Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Sam Ravnborg <sam@ravnborg.org> I assume you will find a way to apply the patch. Sam
Hi Arnd, > An otherwise correct change to the atomic operations uncovered an > existing bug in the sparc __arch_xchg() function, which is calls > __xchg_called_with_bad_pointer() when its arguments are unknown at > compile time: > > ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined! > > This now happens because gcc determines that it's better to not inline the > function. Avoid this by just marking the function as __always_inline > to force the compiler to do the right thing here. > > Reported-by: Guenter Roeck <linux@roeck-us.net> > Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/ > Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional") > Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Andi Shyti <andi.shyti@linux.intel.com> Thanks, Andi
Linux regression tracking (Thorsten Leemhuis)
July 13, 2023, 1:47 p.m. UTC
On 28.06.23 17:51, Sam Ravnborg wrote: > On Wed, Jun 28, 2023 at 11:49:18AM +0200, Arnd Bergmann wrote: >> From: Arnd Bergmann <arnd@arndb.de> >> >> An otherwise correct change to the atomic operations uncovered an >> existing bug in the sparc __arch_xchg() function, which is calls >> __xchg_called_with_bad_pointer() when its arguments are unknown at >> compile time: >> >> ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined! >> >> This now happens because gcc determines that it's better to not inline the >> function. Avoid this by just marking the function as __always_inline >> to force the compiler to do the right thing here. >> >> Reported-by: Guenter Roeck <linux@roeck-us.net> >> Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/ >> Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional") >> Signed-off-by: Arnd Bergmann <arnd@arndb.de> > Reviewed-by: Sam Ravnborg <sam@ravnborg.org> > > I assume you will find a way to apply the patch. Hmmm, looks to me like this patch is sitting here for two weeks now without having made any progress. From a quick search on lore it also looks like Dave is not very active currently. Hence: Arnd, is that maybe something that is worth sending straight to Linus? Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat) -- Everything you wanna know about Linux kernel regression tracking: https://linux-regtracking.leemhuis.info/about/#tldr If I did something stupid, please tell me, as explained on that page.
On Wed, 28 Jun 2023 04:45:43 PDT (-0700), Mark Rutland wrote: > On Wed, Jun 28, 2023 at 11:49:18AM +0200, Arnd Bergmann wrote: >> From: Arnd Bergmann <arnd@arndb.de> >> >> An otherwise correct change to the atomic operations uncovered an >> existing bug in the sparc __arch_xchg() function, which is calls >> __xchg_called_with_bad_pointer() when its arguments are unknown at >> compile time: >> >> ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined! >> >> This now happens because gcc determines that it's better to not inline the >> function. Avoid this by just marking the function as __always_inline >> to force the compiler to do the right thing here. >> >> Reported-by: Guenter Roeck <linux@roeck-us.net> >> Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/ >> Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional") >> Signed-off-by: Arnd Bergmann <arnd@arndb.de> > > Aha; you saved me writing a patch! :) > > We should probably do likewise for all the other bits like __cmpxchg(), but > either way: > > Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Palmer Dabbelt <palmer@rivosinc.com> Though I'm not sure that means a whole lot over here ;) > Mark. 
> >> --- >> arch/sparc/include/asm/cmpxchg_32.h | 2 +- >> arch/sparc/include/asm/cmpxchg_64.h | 2 +- >> 2 files changed, 2 insertions(+), 2 deletions(-) >> >> diff --git a/arch/sparc/include/asm/cmpxchg_32.h b/arch/sparc/include/asm/cmpxchg_32.h >> index 7a1339533d1d7..d0af82c240b73 100644 >> --- a/arch/sparc/include/asm/cmpxchg_32.h >> +++ b/arch/sparc/include/asm/cmpxchg_32.h >> @@ -15,7 +15,7 @@ >> unsigned long __xchg_u32(volatile u32 *m, u32 new); >> void __xchg_called_with_bad_pointer(void); >> >> -static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) >> +static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) >> { >> switch (size) { >> case 4: >> diff --git a/arch/sparc/include/asm/cmpxchg_64.h b/arch/sparc/include/asm/cmpxchg_64.h >> index 66cd61dde9ec1..3de25262c4118 100644 >> --- a/arch/sparc/include/asm/cmpxchg_64.h >> +++ b/arch/sparc/include/asm/cmpxchg_64.h >> @@ -87,7 +87,7 @@ xchg16(__volatile__ unsigned short *m, unsigned short val) >> return (load32 & mask) >> bit_shift; >> } >> >> -static inline unsigned long >> +static __always_inline unsigned long >> __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) >> { >> switch (size) { >> -- >> 2.39.2 >>
On Thu, Jul 13, 2023 at 07:00:37AM -0700, Palmer Dabbelt wrote: > On Wed, 28 Jun 2023 04:45:43 PDT (-0700), Mark Rutland wrote: > > On Wed, Jun 28, 2023 at 11:49:18AM +0200, Arnd Bergmann wrote: > > > From: Arnd Bergmann <arnd@arndb.de> > > > > > > An otherwise correct change to the atomic operations uncovered an > > > existing bug in the sparc __arch_xchg() function, which is calls > > > __xchg_called_with_bad_pointer() when its arguments are unknown at > > > compile time: > > > > > > ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined! > > > > > > This now happens because gcc determines that it's better to not inline the > > > function. Avoid this by just marking the function as __always_inline > > > to force the compiler to do the right thing here. > > > > > > Reported-by: Guenter Roeck <linux@roeck-us.net> > > > Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/ > > > Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional") > > > Signed-off-by: Arnd Bergmann <arnd@arndb.de> > > > > Aha; you saved me writing a patch! :) > > > > We should probably do likewise for all the other bits like __cmpxchg(), but > > either way: > > > > Acked-by: Mark Rutland <mark.rutland@arm.com> > > Acked-by: Palmer Dabbelt <palmer@rivosinc.com> > > Though I'm not sure that means a whole lot over here ;) I've carried some other sparc stuff before. I can send this to Linus with other fixes.
On Wed, 28 Jun 2023 11:49:18 +0200, Arnd Bergmann wrote: > An otherwise correct change to the atomic operations uncovered an > existing bug in the sparc __arch_xchg() function, which is calls > __xchg_called_with_bad_pointer() when its arguments are unknown at > compile time: > > ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined! > > [...] Applied, thanks! [1/1] sparc: mark __arch_xchg() as __always_inline https://git.kernel.org/kees/c/ec7633de404e Best regards,
diff --git a/arch/sparc/include/asm/cmpxchg_32.h b/arch/sparc/include/asm/cmpxchg_32.h
index 7a1339533d1d7..d0af82c240b73 100644
--- a/arch/sparc/include/asm/cmpxchg_32.h
+++ b/arch/sparc/include/asm/cmpxchg_32.h
@@ -15,7 +15,7 @@
 unsigned long __xchg_u32(volatile u32 *m, u32 new);
 void __xchg_called_with_bad_pointer(void);
 
-static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
+static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
 {
 	switch (size) {
 	case 4:
diff --git a/arch/sparc/include/asm/cmpxchg_64.h b/arch/sparc/include/asm/cmpxchg_64.h
index 66cd61dde9ec1..3de25262c4118 100644
--- a/arch/sparc/include/asm/cmpxchg_64.h
+++ b/arch/sparc/include/asm/cmpxchg_64.h
@@ -87,7 +87,7 @@ xchg16(__volatile__ unsigned short *m, unsigned short val)
 	return (load32 & mask) >> bit_shift;
 }
 
-static inline unsigned long
+static __always_inline unsigned long
 __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
 {
 	switch (size) {
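[Editor's note: the failure mode this patch addresses can be reproduced outside the kernel. The sketch below (the file name `demo.c` and symbols `bad`/`f`/`g` are invented for illustration) shows that without optimization gcc does not inline a plain `static inline` function, so the call to the intentionally undefined function survives into the object file as an undefined symbol; with -O2 the function is inlined, the dead branch is folded away, and the reference disappears.]

```shell
# Write a miniature version of the __arch_xchg() pattern: an undefined
# function guarded by a branch that is dead at every real call site.
cat > demo.c <<'EOF'
void bad(void);                       /* declared, never defined */

static inline unsigned long f(int size)
{
        if (size == 3)
                bad();                /* dead when size is a constant != 3 */
        return 0;
}

unsigned long g(void)
{
        return f(4);                  /* constant size, branch never taken */
}
EOF

# Without optimization, f() is not inlined and the call to bad() remains:
gcc -O0 -c demo.c -o demo_O0.o
nm demo_O0.o | grep bad               # shows an undefined ("U") reference

# With -O2, f() is inlined into g(), the branch folds away, and no
# reference to bad() survives:
gcc -O2 -c demo.c -o demo_O2.o
nm demo_O2.o | grep bad || echo "no reference to bad"
```

This is why `__always_inline` fixes the modpost error: it forces the inlining (and with it the constant-propagation and dead-code elimination) regardless of what the compiler's heuristics decide.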