From patchwork Mon May 22 12:24:04 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97414
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 01/26] locking/atomic: arm: fix sync ops
Date: Mon, 22 May 2023 13:24:04 +0100
Message-Id: <20230522122429.1915021-2-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.

Fix this by defining sync ops with the required barriers.

Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.
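As an illustration of the breakage (a hypothetical sketch, not part of
the patch; the ring structure and bit index are invented names), consider
a UP guest publishing data to a consumer running on another physical CPU:

/* Illustrative only: demo_ring and DEMO_READY are hypothetical names. */
#define DEMO_READY	0

struct demo_ring {
	unsigned long	flags;
	u32		payload;
};

static void demo_publish(struct demo_ring *ring, u32 data)
{
	ring->payload = data;		/* plain store */

	/*
	 * Needs to be fully ordered: with the old mapping to
	 * _test_and_set_bit(), a CONFIG_SMP=n kernel emits no barrier
	 * here, so the consumer can observe the ready bit before the
	 * payload store.
	 */
	sync_test_and_set_bit(DEMO_READY, &ring->flags);
}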
Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/include/asm/assembler.h   | 17 +++++++++++++++++
 arch/arm/include/asm/sync_bitops.h | 29 +++++++++++++++++++++++++----
 arch/arm/lib/bitops.h              | 14 +++++++++++---
 arch/arm/lib/testchangebit.S       |  4 ++++
 arch/arm/lib/testclearbit.S        |  4 ++++
 arch/arm/lib/testsetbit.S          |  4 ++++
 6 files changed, 65 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 505a306e0271a..aebe2c8f6a686 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -394,6 +394,23 @@ ALT_UP_B(.L0_\@)
 #endif
 	.endm

+/*
+ * Raw SMP data memory barrier
+ */
+	.macro	__smp_dmb mode
+#if __LINUX_ARM_ARCH__ >= 7
+	.ifeqs "\mode","arm"
+	dmb	ish
+	.else
+	W(dmb)	ish
+	.endif
+#elif __LINUX_ARM_ARCH__ == 6
+	mcr	p15, 0, r0, c7, c10, 5	@ dmb
+#else
+	.error "Incompatible SMP platform"
+#endif
+	.endm
+
 #if defined(CONFIG_CPU_V7M)
 /*
  * setmode is used to assert to be in svc mode during boot. For v7-M
diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
index 6f5d627c44a3c..f46b3c570f92e 100644
--- a/arch/arm/include/asm/sync_bitops.h
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -14,14 +14,35 @@
  * ops which are SMP safe even on a UP kernel.
  */

+/*
+ * Unordered
+ */
+
 #define sync_set_bit(nr, p)		_set_bit(nr, p)
 #define sync_clear_bit(nr, p)		_clear_bit(nr, p)
 #define sync_change_bit(nr, p)		_change_bit(nr, p)
-#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
-#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
-#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
 #define sync_test_bit(nr, addr)		test_bit(nr, addr)
-#define arch_sync_cmpxchg		arch_cmpxchg

+/*
+ * Fully ordered
+ */
+
+int _sync_test_and_set_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_set_bit(nr, p)	_sync_test_and_set_bit(nr, p)
+
+int _sync_test_and_clear_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_clear_bit(nr, p)	_sync_test_and_clear_bit(nr, p)
+
+int _sync_test_and_change_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_change_bit(nr, p)	_sync_test_and_change_bit(nr, p)
+
+#define arch_sync_cmpxchg(ptr, old, new)				\
+({									\
+	__typeof__(*(ptr)) __ret;					\
+	__smp_mb__before_atomic();					\
+	__ret = arch_cmpxchg_relaxed((ptr), (old), (new));		\
+	__smp_mb__after_atomic();					\
+	__ret;								\
+})

 #endif
diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
index 95bd359912889..f069d1b2318e6 100644
--- a/arch/arm/lib/bitops.h
+++ b/arch/arm/lib/bitops.h
@@ -28,7 +28,7 @@ UNWIND(	.fnend		)
 ENDPROC(\name		)
 	.endm

-	.macro	testop, name, instr, store
+	.macro	__testop, name, instr, store, barrier
 ENTRY(	\name		)
 UNWIND(	.fnstart	)
 	ands	ip, r1, #3
@@ -38,7 +38,7 @@ UNWIND(	.fnstart	)
 	mov	r0, r0, lsr #5
 	add	r1, r1, r0, lsl #2	@ Get word offset
 	mov	r3, r2, lsl r3		@ create mask
-	smp_dmb
+	\barrier
 #if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
 	.arch_extension	mp
 	ALT_SMP(W(pldw)	[r1])
@@ -50,13 +50,21 @@ UNWIND(	.fnstart	)
 	strex	ip, r2, [r1]
 	cmp	ip, #0
 	bne	1b
-	smp_dmb
+	\barrier
 	cmp	r0, #0
 	movne	r0, #1
 2:	bx	lr
 UNWIND(	.fnend		)
 ENDPROC(\name		)
 	.endm
+
+	.macro	testop, name, instr, store
+	__testop \name, \instr, \store, smp_dmb
+	.endm
+
+	.macro	sync_testop, name, instr, store
+	__testop \name, \instr, \store, __smp_dmb
+	.endm
 #else
 	.macro	bitop, name, instr
 ENTRY(	\name		)
diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
index 4ebecc67e6e04..f13fe9bc2399a 100644
--- a/arch/arm/lib/testchangebit.S
+++ b/arch/arm/lib/testchangebit.S
@@ -10,3 +10,7 @@
                 .text

 testop	_test_and_change_bit, eor, str
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_change_bit, eor, str
+#endif
diff --git a/arch/arm/lib/testclearbit.S b/arch/arm/lib/testclearbit.S
index 009afa0f5b4a7..4d2c5ca620ebf 100644
--- a/arch/arm/lib/testclearbit.S
+++ b/arch/arm/lib/testclearbit.S
@@ -10,3 +10,7 @@
                 .text

 testop	_test_and_clear_bit, bicne, strne
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_clear_bit, bicne, strne
+#endif
diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
index f3192e55acc87..649dbab65d8d0 100644
--- a/arch/arm/lib/testsetbit.S
+++ b/arch/arm/lib/testsetbit.S
@@ -10,3 +10,7 @@
                 .text

 testop	_test_and_set_bit, orreq, streq
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_set_bit, orreq, streq
+#endif
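As a usage sketch of the reworked arch_sync_cmpxchg() (a hypothetical
helper; the slot protocol is invented for the example), the macro
brackets a relaxed cmpxchg with __smp_mb__before_atomic() and
__smp_mb__after_atomic(), so the barriers survive even on CONFIG_SMP=n
builds:

/* Illustrative only: claim a slot shared with the hypervisor (0 = free). */
static bool demo_claim(unsigned int *slot, unsigned int self)
{
	/* Fully ordered regardless of CONFIG_SMP. */
	return arch_sync_cmpxchg(slot, 0u, self) == 0;
}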
From patchwork Mon May 22 12:24:05 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97412
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 02/26] locking/atomic: remove fallback comments
Date: Mon, 22 May 2023 13:24:05 +0100
Message-Id: <20230522122429.1915021-3-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Currently a subset of the fallback templates have kerneldoc comments,
resulting in a haphazard set of generated kerneldoc comments as only
some operations have fallback templates to begin with.

We'd like to generate more consistent kerneldoc comments, and to do so
we'll need to restructure the way the fallback code is generated.

To minimize churn and to make it easier to restructure the fallback
code, this patch removes the existing kerneldoc comments from the
fallback templates. We can add new kerneldoc comments in subsequent
patches.

Aside from generated documentation being reduced, there should be no
functional change as a result of this patch.
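For reference, the fallbacks follow an #ifndef pattern, so a template's
output only takes effect where an architecture has not provided the op
itself. A condensed sketch of one generated fallback (paraphrased, not
verbatim from the generated header):

#ifndef arch_atomic_inc_not_zero
static __always_inline bool
arch_atomic_inc_not_zero(atomic_t *v)
{
	/* Generated fallback: build inc_not_zero from add_unless(). */
	return arch_atomic_add_unless(v, 1, 0);
}
#define arch_atomic_inc_not_zero arch_atomic_inc_not_zero
#endif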
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/atomic/atomic-arch-fallback.h | 166 +-------------------
 scripts/atomic/fallbacks/add_negative       |   8 -
 scripts/atomic/fallbacks/add_unless         |   9 --
 scripts/atomic/fallbacks/dec_and_test       |   8 -
 scripts/atomic/fallbacks/fetch_add_unless   |   9 --
 scripts/atomic/fallbacks/inc_and_test       |   8 -
 scripts/atomic/fallbacks/inc_not_zero       |   7 -
 scripts/atomic/fallbacks/sub_and_test       |   9 --
 8 files changed, 1 insertion(+), 223 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1722ddb6f17e0..3ce4cb5e790c5 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1272,15 +1272,6 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 #endif /* arch_atomic_try_cmpxchg_relaxed */

 #ifndef arch_atomic_sub_and_test
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic_sub_and_test(int i, atomic_t *v)
 {
@@ -1290,14 +1281,6 @@ arch_atomic_sub_and_test(int i, atomic_t *v)
 #endif

 #ifndef arch_atomic_dec_and_test
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_atomic_dec_and_test(atomic_t *v)
 {
@@ -1307,14 +1290,6 @@ arch_atomic_dec_and_test(atomic_t *v)
 #endif

 #ifndef arch_atomic_inc_and_test
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic_inc_and_test(atomic_t *v)
 {
@@ -1331,14 +1306,6 @@ arch_atomic_inc_and_test(atomic_t *v)
 #endif /* arch_atomic_add_negative */

 #ifndef arch_atomic_add_negative
-/**
- * arch_atomic_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative(int i, atomic_t *v)
 {
@@ -1348,14 +1315,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
 #endif

 #ifndef arch_atomic_add_negative_acquire
-/**
- * arch_atomic_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_acquire(int i, atomic_t *v)
 {
@@ -1365,14 +1324,6 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
 #endif

 #ifndef arch_atomic_add_negative_release
-/**
- * arch_atomic_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_release(int i, atomic_t *v)
 {
@@ -1382,14 +1333,6 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
 #endif

 #ifndef arch_atomic_add_negative_relaxed
-/**
- * arch_atomic_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_relaxed(int i, atomic_t *v)
 {
@@ -1437,15 +1380,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
 #endif /* arch_atomic_add_negative_relaxed */

 #ifndef arch_atomic_fetch_add_unless
-/**
- * arch_atomic_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline int
 arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
@@ -1462,15 +1396,6 @@ arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #endif

 #ifndef arch_atomic_add_unless
-/**
- * arch_atomic_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_atomic_add_unless(atomic_t *v, int a, int u)
 {
@@ -1480,13 +1405,6 @@ arch_atomic_add_unless(atomic_t *v, int a, int u)
 #endif

 #ifndef arch_atomic_inc_not_zero
-/**
- * arch_atomic_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_atomic_inc_not_zero(atomic_t *v)
 {
@@ -2488,15 +2406,6 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 #endif /* arch_atomic64_try_cmpxchg_relaxed */

 #ifndef arch_atomic64_sub_and_test
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
@@ -2506,14 +2415,6 @@ arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 #endif

 #ifndef arch_atomic64_dec_and_test
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer of type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_atomic64_dec_and_test(atomic64_t *v)
 {
@@ -2523,14 +2424,6 @@ arch_atomic64_dec_and_test(atomic64_t *v)
 #endif

 #ifndef arch_atomic64_inc_and_test
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic64_inc_and_test(atomic64_t *v)
 {
@@ -2547,14 +2440,6 @@ arch_atomic64_inc_and_test(atomic64_t *v)
 #endif /* arch_atomic64_add_negative */

 #ifndef arch_atomic64_add_negative
-/**
- * arch_atomic64_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
@@ -2564,14 +2449,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
 #endif

 #ifndef arch_atomic64_add_negative_acquire
-/**
- * arch_atomic64_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 {
@@ -2581,14 +2458,6 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 #endif

 #ifndef arch_atomic64_add_negative_release
-/**
- * arch_atomic64_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 {
@@ -2598,14 +2467,6 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 #endif

 #ifndef arch_atomic64_add_negative_relaxed
-/**
- * arch_atomic64_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
 {
@@ -2653,15 +2514,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_add_negative_relaxed */

 #ifndef arch_atomic64_fetch_add_unless
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline s64
 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
@@ -2678,15 +2530,6 @@ arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 #endif

 #ifndef arch_atomic64_add_unless
-/**
- * arch_atomic64_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
@@ -2696,13 +2539,6 @@ arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 #endif

 #ifndef arch_atomic64_inc_not_zero
-/**
- * arch_atomic64_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_atomic64_inc_not_zero(atomic64_t *v)
 {
@@ -2761,4 +2597,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif

 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 52dfc6fe4a2e7234bbd2aa3e16a377c1db793a53
+// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e5980abf5904e..d0bd2dfbb244c 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,12 +1,4 @@
 cat <<EOF
-/**
- * arch_${atomic}_add_negative${order} - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type ${atomic}_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
 {
From patchwork Mon May 22 12:24:06 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97423
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 03/26] locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
Date: Mon, 22 May 2023 13:24:06 +0100
Message-Id: <20230522122429.1915021-4-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Hexagon's implementation of arch_atomic_cmpxchg() is identical to its
implementation of arch_cmpxchg(). Have it define arch_atomic_cmpxchg()
in terms of arch_cmpxchg(), matching what it does for arch_atomic_xchg()
and arch_xchg().

At the same time, remove the kerneldoc comments for hexagon's
arch_atomic_xchg() and arch_atomic_cmpxchg(). The arch_atomic_*()
namespace is shared by all architectures, so the API should be
documented centrally; the comments aren't all that helpful as-is.

There should be no functional change as a result of this patch.
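cmpxchg() really is, as the removed comment put it, the lynchpin: any
other atomic RMW can be built from it with a retry loop. A hypothetical
sketch (demo helper, not kernel code):

/* Illustrative only: atomic OR built purely from arch_atomic_cmpxchg(). */
static inline int demo_fetch_or(atomic_t *v, int mask)
{
	int old = arch_atomic_read(v);

	for (;;) {
		int prev = arch_atomic_cmpxchg(v, old, old | mask);
		if (prev == old)
			return old;	/* we installed old | mask */
		old = prev;		/* lost a race: retry with the new value */
	}
}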
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/hexagon/include/asm/atomic.h | 46 ++++----------------------------
 1 file changed, 4 insertions(+), 42 deletions(-)

diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 6e94f8d04146f..738857e10d6ec 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,49 +36,11 @@ static inline void arch_atomic_set(atomic_t *v, int new)
  */
 #define arch_atomic_read(v)		READ_ONCE((v)->counter)

-/**
- * arch_atomic_xchg - atomic
- * @v: pointer to memory to change
- * @new: new value (technically passed in a register -- see xchg)
- */
-#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), (new)))
-
-
-/**
- * arch_atomic_cmpxchg - atomic compare-and-exchange values
- * @v: pointer to value to change
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Parameters are then pointer, value-in-register, value-in-register,
- * and the output is the old value.
- *
- * Apparently this is complicated for archs that don't support
- * the memw_locked like we do (or it's broken or whatever).
- *
- * Kind of the lynchpin of the rest of the generically defined routines.
- * Remember V2 had that bug with dotnew predicate set by memw_locked.
- *
- * "old" is "expected" old val, __oldval is actual old value
- */
-static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
-	int __oldval;
+#define arch_atomic_xchg(v, new)					\
+	(arch_xchg(&((v)->counter), (new)))

-	asm volatile(
-		"1:	%0 = memw_locked(%1);\n"
-		"	{ P0 = cmp.eq(%0,%2);\n"
-		"	  if (!P0.new) jump:nt 2f; }\n"
-		"	memw_locked(%1,P0) = %3;\n"
-		"	if (!P0) jump 1b;\n"
-		"2:\n"
-		: "=&r" (__oldval)
-		: "r" (&v->counter), "r" (old), "r" (new)
-		: "memory", "p0"
-	);
-
-	return __oldval;
-}
+#define arch_atomic_cmpxchg(v, old, new)				\
+	(arch_cmpxchg(&((v)->counter), (old), (new)))

 #define ATOMIC_OP(op)							\
 static inline void arch_atomic_##op(int i, atomic_t *v)		\
From patchwork Mon May 22 12:24:07 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97417
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 04/26] locking/atomic: make atomic*_{cmp,}xchg optional
Date: Mon, 22 May 2023 13:24:07 +0100
Message-Id: <20230522122429.1915021-5-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Most architectures define the atomic/atomic64 xchg and cmpxchg
operations in terms of arch_xchg and arch_cmpxchg respectively. Add
fallbacks for these cases and remove the trivial cases from arch code.
On some architectures the existing definitions are kept, as they are
used to build other arch_atomic*() operations.
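The resulting precedence for each op, condensed (a sketch of the
generated header's structure, not verbatim): if the architecture
supplies only a fully-ordered op it is reused for every ordering; if it
supplies nothing, the op falls back to the plain xchg()/cmpxchg() family
on v->counter.

/* Condensed sketch for xchg; the generated code guards each variant separately. */
#ifndef arch_atomic_xchg_relaxed	/* no relaxed form from the arch... */
# ifdef arch_atomic_xchg
   /* ...but a fully-ordered op exists: reuse it for all orderings. */
#  define arch_atomic_xchg_acquire arch_atomic_xchg
#  define arch_atomic_xchg_release arch_atomic_xchg
#  define arch_atomic_xchg_relaxed arch_atomic_xchg
# else
   /* ...and no op at all: fall back to arch_xchg() on the counter. */
static __always_inline int arch_atomic_xchg(atomic_t *v, int new)
{
	return arch_xchg(&v->counter, new);
}
#  define arch_atomic_xchg arch_atomic_xchg
# endif
#endif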
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/alpha/include/asm/atomic.h             |  10 --
 arch/arc/include/asm/atomic.h               |  24 ---
 arch/arc/include/asm/atomic64-arcv2.h       |   2 +
 arch/arm/include/asm/atomic.h               |   3 +-
 arch/arm64/include/asm/atomic.h             |  28 ----
 arch/csky/include/asm/atomic.h              |  35 -----
 arch/hexagon/include/asm/atomic.h           |   6 -
 arch/ia64/include/asm/atomic.h              |   7 -
 arch/loongarch/include/asm/atomic.h         |   7 -
 arch/m68k/include/asm/atomic.h              |   9 +-
 arch/mips/include/asm/atomic.h              |  11 --
 arch/openrisc/include/asm/atomic.h          |   3 -
 arch/parisc/include/asm/atomic.h            |   9 --
 arch/powerpc/include/asm/atomic.h           |  24 ---
 arch/riscv/include/asm/atomic.h             |  72 ---------
 arch/sh/include/asm/atomic.h                |   3 -
 arch/sparc/include/asm/atomic_32.h          |   2 +
 arch/sparc/include/asm/atomic_64.h          |  11 --
 arch/xtensa/include/asm/atomic.h            |   3 -
 include/asm-generic/atomic.h                |   3 -
 include/linux/atomic/atomic-arch-fallback.h | 158 +++++++++++++++++++-
 scripts/atomic/fallbacks/cmpxchg            |   7 +
 scripts/atomic/fallbacks/xchg               |   7 +
 23 files changed, 179 insertions(+), 265 deletions(-)
 create mode 100755 scripts/atomic/fallbacks/cmpxchg
 create mode 100755 scripts/atomic/fallbacks/xchg

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index f2861a43a61ef..ec8ab552c527a 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -200,16 +200,6 @@ ATOMIC_OPS(xor, xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP

-#define arch_atomic64_cmpxchg(v, old, new) \
-	(arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic64_xchg(v, new) \
-	(arch_xchg(&((v)->counter), new))
-
-#define arch_atomic_cmpxchg(v, old, new) \
-	(arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic_xchg(v, new) \
-	(arch_xchg(&((v)->counter), new))
-
 /**
  * arch_atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 52ee51e1ff7c2..592d7fffc223c 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -22,30 +22,6 @@
 #include <asm/atomic-spinlock.h>
 #endif

-#define arch_atomic_cmpxchg(v, o, n)					\
-({									\
-	arch_cmpxchg(&((v)->counter), (o), (n));			\
-})
-
-#ifdef arch_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_relaxed(v, o, n)				\
-({									\
-	arch_cmpxchg_relaxed(&((v)->counter), (o), (n));		\
-})
-#endif
-
-#define arch_atomic_xchg(v, n)						\
-({									\
-	arch_xchg(&((v)->counter), (n));				\
-})
-
-#ifdef arch_xchg_relaxed
-#define arch_atomic_xchg_relaxed(v, n)					\
-({									\
-	arch_xchg_relaxed(&((v)->counter), (n));			\
-})
-#endif
-
 /*
  * 64-bit atomics
  */
diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h
index c5a8010fdc97d..2b7c9e61a2947 100644
--- a/arch/arc/include/asm/atomic64-arcv2.h
+++ b/arch/arc/include/asm/atomic64-arcv2.h
@@ -159,6 +159,7 @@ arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)

 	return prev;
 }
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg

 static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
 {
@@ -179,6 +180,7 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)

 	return prev;
 }
+#define arch_atomic64_xchg arch_atomic64_xchg

 /**
  * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index db8512d9a918d..9458d47ff209c 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -210,6 +210,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)

 	return ret;
 }
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg

 #define arch_atomic_fetch_andnot		arch_atomic_fetch_andnot

@@ -240,8 +241,6 @@ ATOMIC_OPS(xor, ^=, eor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP

-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 #ifndef CONFIG_GENERIC_ATOMIC64
 typedef struct {
 	s64 counter;
diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index c9979273d3898..400d279e0f8d0 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -142,24 +142,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
 #define arch_atomic_fetch_xor_release		arch_atomic_fetch_xor_release
 #define arch_atomic_fetch_xor			arch_atomic_fetch_xor

-#define arch_atomic_xchg_relaxed(v, new) \
-	arch_xchg_relaxed(&((v)->counter), (new))
-#define arch_atomic_xchg_acquire(v, new) \
-	arch_xchg_acquire(&((v)->counter), (new))
-#define arch_atomic_xchg_release(v, new) \
-	arch_xchg_release(&((v)->counter), (new))
-#define arch_atomic_xchg(v, new) \
-	arch_xchg(&((v)->counter), (new))
-
-#define arch_atomic_cmpxchg_relaxed(v, old, new) \
-	arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_acquire(v, old, new) \
-	arch_cmpxchg_acquire(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_release(v, old, new) \
-	arch_cmpxchg_release(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg(v, old, new) \
-	arch_cmpxchg(&((v)->counter), (old), (new))
-
 #define arch_atomic_andnot			arch_atomic_andnot

 /*
@@ -209,16 +191,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
 #define arch_atomic64_fetch_xor_release		arch_atomic64_fetch_xor_release
 #define arch_atomic64_fetch_xor		arch_atomic64_fetch_xor

-#define arch_atomic64_xchg_relaxed		arch_atomic_xchg_relaxed
-#define arch_atomic64_xchg_acquire		arch_atomic_xchg_acquire
-#define arch_atomic64_xchg_release		arch_atomic_xchg_release
-#define arch_atomic64_xchg			arch_atomic_xchg
-
-#define arch_atomic64_cmpxchg_relaxed		arch_atomic_cmpxchg_relaxed
-#define arch_atomic64_cmpxchg_acquire		arch_atomic_cmpxchg_acquire
-#define arch_atomic64_cmpxchg_release		arch_atomic_cmpxchg_release
-#define arch_atomic64_cmpxchg			arch_atomic_cmpxchg
-
 #define arch_atomic64_andnot			arch_atomic64_andnot

 #define arch_atomic64_dec_if_positive		arch_atomic64_dec_if_positive
diff --git a/arch/csky/include/asm/atomic.h b/arch/csky/include/asm/atomic.h
index 60406ef9c2bbc..4dab44f6143a5 100644
--- a/arch/csky/include/asm/atomic.h
+++ b/arch/csky/include/asm/atomic.h
@@ -195,41 +195,6 @@ arch_atomic_dec_if_positive(atomic_t *v)
 }
 #define arch_atomic_dec_if_positive arch_atomic_dec_if_positive

-#define ATOMIC_OP()							\
-static __always_inline							\
-int arch_atomic_xchg_relaxed(atomic_t *v, int n)			\
-{									\
-	return __xchg_relaxed(n, &(v->counter), 4);			\
-}									\
-static __always_inline							\
-int arch_atomic_cmpxchg_relaxed(atomic_t *v, int o, int n)		\
-{									\
-	return __cmpxchg_relaxed(&(v->counter), o, n, 4);		\
-}									\
-static __always_inline							\
-int arch_atomic_cmpxchg_acquire(atomic_t *v, int o, int n)		\
-{									\
-	return __cmpxchg_acquire(&(v->counter), o, n, 4);		\
-}									\
-static __always_inline							\
-int arch_atomic_cmpxchg(atomic_t *v, int o, int n)			\
-{									\
-	return __cmpxchg(&(v->counter), o, n, 4);			\
-}
-
-#define ATOMIC_OPS()							\
-	ATOMIC_OP()
-
-ATOMIC_OPS()
-
-#define arch_atomic_xchg_relaxed	arch_atomic_xchg_relaxed
-#define arch_atomic_cmpxchg_relaxed	arch_atomic_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_acquire	arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg		arch_atomic_cmpxchg
-
-#undef ATOMIC_OPS
-#undef ATOMIC_OP
-
 #else
 #include <asm-generic/atomic.h>
 #endif
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 738857e10d6ec..ad6c111e9c10f 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,12 +36,6 @@ static inline void arch_atomic_set(atomic_t *v, int new)
  */
 #define arch_atomic_read(v)		READ_ONCE((v)->counter)

-#define arch_atomic_xchg(v, new)					\
-	(arch_xchg(&((v)->counter), (new)))
-
-#define arch_atomic_cmpxchg(v, old, new)				\
-	(arch_cmpxchg(&((v)->counter), (old), (new)))
-
 #define ATOMIC_OP(op)							\
 static inline void arch_atomic_##op(int i, atomic_t *v)		\
 {									\
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 266c429b91372..6540a628d2573 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -207,13 +207,6 @@ ATOMIC64_FETCH_OP(xor, ^)
 #undef ATOMIC64_FETCH_OP
 #undef ATOMIC64_OP

-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
-#define arch_atomic64_cmpxchg(v, old, new) \
-	(arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 #define arch_atomic_add(i,v)		(void)arch_atomic_add_return((i), (v))
 #define arch_atomic_sub(i,v)		(void)arch_atomic_sub_return((i), (v))
diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 6b9aca9ab6e9f..8d73c85911b08 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -181,9 +181,6 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 	return result;
 }

-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
 /*
  * arch_atomic_dec_if_positive - decrement by 1 if old value positive
  * @v: pointer of type atomic_t
@@ -342,10 +339,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 	return result;
 }

-#define arch_atomic64_cmpxchg(v, o, n) \
-	((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
 /*
  * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
  * @v: pointer of type atomic64_t
diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index cfba83d230fde..190a032f19be7 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -158,12 +158,7 @@ static inline int arch_atomic_inc_and_test(atomic_t *v)
 }
 #define arch_atomic_inc_and_test arch_atomic_inc_and_test

-#ifdef CONFIG_RMW_INSNS
-
-#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
-#else /* !CONFIG_RMW_INSNS */
+#ifndef CONFIG_RMW_INSNS

 static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
@@ -177,6 +172,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 	local_irq_restore(flags);
 	return prev;
 }
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg

 static inline int arch_atomic_xchg(atomic_t *v, int new)
 {
@@ -189,6 +185,7 @@ static inline int arch_atomic_xchg(atomic_t *v, int new)
 	local_irq_restore(flags);
 	return prev;
 }
+#define arch_atomic_xchg arch_atomic_xchg

 #endif /* !CONFIG_RMW_INSNS */
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 712fb5a6a5682..ba188e77768b2 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -33,17 +33,6 @@ static __always_inline void arch_##pfx##_set(pfx##_t *v, type i)	\
 {									\
 	WRITE_ONCE(v->counter, i);					\
 }									\
-									\
-static __always_inline type						\
-arch_##pfx##_cmpxchg(pfx##_t *v, type o, type n)			\
-{									\
-	return arch_cmpxchg(&v->counter, o, n);				\
-}									\
-									\
-static __always_inline type arch_##pfx##_xchg(pfx##_t *v, type n)	\
-{									\
-	return arch_xchg(&v->counter, n);				\
-}

 ATOMIC_OPS(atomic, int)
diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
index 326167e4783a9..8ce67ec7c9a30 100644
--- a/arch/openrisc/include/asm/atomic.h
+++ b/arch/openrisc/include/asm/atomic.h
@@ -130,7 +130,4 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)

 #include <asm-generic/atomic.h>

-#define arch_atomic_xchg(ptr, v)		(arch_xchg(&(ptr)->counter, (v)))
-#define arch_atomic_cmpxchg(v, old, new)	(arch_cmpxchg(&((v)->counter), (old), (new)))
-
 #endif /* __ASM_OPENRISC_ATOMIC_H */
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index dd5a299ada695..0b3f64c92e3c0 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -73,10 +73,6 @@ static __inline__ int arch_atomic_read(const atomic_t *v)
 	return READ_ONCE((v)->counter);
 }

-/* exported interface */
-#define arch_atomic_cmpxchg(v, o, n)	(arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), new))
-
 #define ATOMIC_OP(op, c_op)						\
 static __inline__ void arch_atomic_##op(int i, atomic_t *v)		\
 {									\
@@ -218,11 +214,6 @@ arch_atomic64_read(const atomic64_t *v)
 	return READ_ONCE((v)->counter);
 }

-/* exported interface */
-#define arch_atomic64_cmpxchg(v, o, n) \
-	((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new)	(arch_xchg(&((v)->counter), new))
-
 #endif /* !CONFIG_64BIT */
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 47228b1774781..5bf6a4d49268c 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -126,18 +126,6 @@ ATOMIC_OPS(xor, xor, "", K)
 #undef ATOMIC_OP_RETURN_RELAXED
 #undef ATOMIC_OP

-#define arch_atomic_cmpxchg(v, o, n) \
-	(arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_cmpxchg_relaxed(v, o, n) \
-	arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define arch_atomic_cmpxchg_acquire(v, o, n) \
-	arch_cmpxchg_acquire(&((v)->counter), (o), (n))
-
-#define arch_atomic_xchg(v, new) \
-	(arch_xchg(&((v)->counter), new))
-#define arch_atomic_xchg_relaxed(v, new) \
-	arch_xchg_relaxed(&((v)->counter), (new))
-
 /**
  * atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t
@@ -396,18 +384,6 @@ static __inline__ s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 }
 #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive

-#define arch_atomic64_cmpxchg(v, o, n) \
-	(arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_cmpxchg_relaxed(v, o, n) \
-	arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define arch_atomic64_cmpxchg_acquire(v, o, n) \
-	arch_cmpxchg_acquire(&((v)->counter), (o), (n))
-
-#define arch_atomic64_xchg(v, new) \
-	(arch_xchg(&((v)->counter), new))
-#define arch_atomic64_xchg_relaxed(v, new) \
-	arch_xchg_relaxed(&((v)->counter), (new))
-
 /**
  * atomic64_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index bba472928b539..f5dfef6c2153f 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -238,78 +238,6 @@ static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a,
 #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 #endif

-/*
- * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
- * {cmp,}xchg and the operations that return, so they need a full barrier.
- */
-#define ATOMIC_OP(c_t, prefix, size)					\
-static __always_inline							\
-c_t arch_atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n)	\
-{									\
-	return __xchg_relaxed(&(v->counter), n, size);			\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n)	\
-{									\
-	return __xchg_acquire(&(v->counter), n, size);			\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n)	\
-{									\
-	return __xchg_release(&(v->counter), n, size);			\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n)		\
-{									\
-	return __arch_xchg(&(v->counter), n, size);			\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v,	\
-				     c_t o, c_t n)			\
-{									\
-	return __cmpxchg_relaxed(&(v->counter), o, n, size);		\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v,	\
-				     c_t o, c_t n)			\
-{									\
-	return __cmpxchg_acquire(&(v->counter), o, n, size);		\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v,	\
-				     c_t o, c_t n)			\
-{									\
-	return __cmpxchg_release(&(v->counter), o, n, size);		\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
-{									\
-	return __cmpxchg(&(v->counter), o, n, size);			\
-}
-
-#ifdef CONFIG_GENERIC_ATOMIC64
-#define ATOMIC_OPS()							\
-	ATOMIC_OP(int,   , 4)
-#else
-#define ATOMIC_OPS()							\
-	ATOMIC_OP(int,   , 4)						\
-	ATOMIC_OP(s64, 64, 8)
-#endif
-
-ATOMIC_OPS()
-
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
-#define arch_atomic_xchg_release arch_atomic_xchg_release
-#define arch_atomic_xchg arch_atomic_xchg
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-
-#undef ATOMIC_OPS
-#undef ATOMIC_OP
-
 static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v)
 {
 	int prev, rc;
diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h
index 528bfeda78f56..7a18cb2a1c1ac 100644
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -30,9 +30,6 @@
 #include <asm/atomic-irq.h>
 #endif

-#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), new))
-#define arch_atomic_cmpxchg(v, o, n)	(arch_cmpxchg(&((v)->counter), (o), (n)))
-
 #endif /* CONFIG_CPU_J2 */

 #endif /* __ASM_SH_ATOMIC_H */
diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index d775daa83d129..1c9e6c7366e41 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -24,7 +24,9 @@ int arch_atomic_fetch_and(int, atomic_t *);
 int arch_atomic_fetch_or(int, atomic_t *);
 int arch_atomic_fetch_xor(int, atomic_t *);
 int arch_atomic_cmpxchg(atomic_t *, int, int);
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
 int arch_atomic_xchg(atomic_t *, int);
+#define arch_atomic_xchg arch_atomic_xchg
 int arch_atomic_fetch_add_unless(atomic_t *, int, int);
 void arch_atomic_set(atomic_t *, int);
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 077891686715a..df6a8b07d7e63 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -49,17 +49,6 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP

-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-
-static inline int arch_atomic_xchg(atomic_t *v, int new)
-{
-	return arch_xchg(&v->counter, new);
-}
-
-#define arch_atomic64_cmpxchg(v, o, n) \
-	((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 s64 arch_atomic64_dec_if_positive(atomic64_t *v);
 #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 52da614f953ce..1d323a864002c 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -257,7 +257,4 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP

-#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 #endif /* _XTENSA_ATOMIC_H */
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index e271d6708c876..22142c71d35a1 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -130,7 +130,4 @@ ATOMIC_OP(xor, ^)
 #define arch_atomic_read(v)			READ_ONCE((v)->counter)
 #define arch_atomic_set(v, i)			WRITE_ONCE(((v)->counter), (i))

-#define arch_atomic_xchg(ptr, v)		(arch_xchg(&(ptr)->counter, (u32)(v)))
-#define arch_atomic_cmpxchg(v, old, new)	(arch_cmpxchg(&((v)->counter), (u32)(old), (u32)(new)))
-
 #endif /* __ASM_GENERIC_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 3ce4cb5e790c5..1a2d81dbc2e48 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1091,9 +1091,48 @@ arch_atomic_fetch_xor(int i, atomic_t *v)
 #endif /* arch_atomic_fetch_xor_relaxed */

 #ifndef arch_atomic_xchg_relaxed
+#ifdef arch_atomic_xchg
 #define arch_atomic_xchg_acquire arch_atomic_xchg
 #define arch_atomic_xchg_release arch_atomic_xchg
 #define arch_atomic_xchg_relaxed arch_atomic_xchg
+#endif /* arch_atomic_xchg */
+
+#ifndef arch_atomic_xchg
+static __always_inline int
+arch_atomic_xchg(atomic_t *v, int new)
+{
+	return arch_xchg(&v->counter, new);
+}
+#define arch_atomic_xchg arch_atomic_xchg
+#endif
+
+#ifndef arch_atomic_xchg_acquire
+static __always_inline int
+arch_atomic_xchg_acquire(atomic_t *v, int new)
+{
+	return arch_xchg_acquire(&v->counter, new);
+}
+#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
+#endif
+
+#ifndef arch_atomic_xchg_release
+static __always_inline int
+arch_atomic_xchg_release(atomic_t *v, int new)
+{
+	return arch_xchg_release(&v->counter, new);
+}
+#define arch_atomic_xchg_release arch_atomic_xchg_release
+#endif
+
+#ifndef arch_atomic_xchg_relaxed
+static __always_inline int
+arch_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+	return arch_xchg_relaxed(&v->counter, new);
+}
+#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
+#endif
+
 #else /* arch_atomic_xchg_relaxed */

 #ifndef arch_atomic_xchg_acquire
@@ -1133,9 +1172,48 @@ arch_atomic_xchg(atomic_t *v, int i)
 #endif /* arch_atomic_xchg_relaxed */

 #ifndef arch_atomic_cmpxchg_relaxed
arch_atomic_cmpxchg_relaxed +#ifdef arch_atomic_cmpxchg #define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg #define arch_atomic_cmpxchg_release arch_atomic_cmpxchg #define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg +#endif /* arch_atomic_cmpxchg */ + +#ifndef arch_atomic_cmpxchg +static __always_inline int +arch_atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return arch_cmpxchg(&v->counter, old, new); +} +#define arch_atomic_cmpxchg arch_atomic_cmpxchg +#endif + +#ifndef arch_atomic_cmpxchg_acquire +static __always_inline int +arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + return arch_cmpxchg_acquire(&v->counter, old, new); +} +#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire +#endif + +#ifndef arch_atomic_cmpxchg_release +static __always_inline int +arch_atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + return arch_cmpxchg_release(&v->counter, old, new); +} +#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release +#endif + +#ifndef arch_atomic_cmpxchg_relaxed +static __always_inline int +arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + return arch_cmpxchg_relaxed(&v->counter, old, new); +} +#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed +#endif + #else /* arch_atomic_cmpxchg_relaxed */ #ifndef arch_atomic_cmpxchg_acquire @@ -2225,9 +2303,48 @@ arch_atomic64_fetch_xor(s64 i, atomic64_t *v) #endif /* arch_atomic64_fetch_xor_relaxed */ #ifndef arch_atomic64_xchg_relaxed +#ifdef arch_atomic64_xchg #define arch_atomic64_xchg_acquire arch_atomic64_xchg #define arch_atomic64_xchg_release arch_atomic64_xchg #define arch_atomic64_xchg_relaxed arch_atomic64_xchg +#endif /* arch_atomic64_xchg */ + +#ifndef arch_atomic64_xchg +static __always_inline s64 +arch_atomic64_xchg(atomic64_t *v, s64 new) +{ + return arch_xchg(&v->counter, new); +} +#define arch_atomic64_xchg arch_atomic64_xchg +#endif + +#ifndef arch_atomic64_xchg_acquire +static __always_inline s64 +arch_atomic64_xchg_acquire(atomic64_t *v, s64 new) +{ + return arch_xchg_acquire(&v->counter, new); +} +#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire +#endif + +#ifndef arch_atomic64_xchg_release +static __always_inline s64 +arch_atomic64_xchg_release(atomic64_t *v, s64 new) +{ + return arch_xchg_release(&v->counter, new); +} +#define arch_atomic64_xchg_release arch_atomic64_xchg_release +#endif + +#ifndef arch_atomic64_xchg_relaxed +static __always_inline s64 +arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new) +{ + return arch_xchg_relaxed(&v->counter, new); +} +#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed +#endif + #else /* arch_atomic64_xchg_relaxed */ #ifndef arch_atomic64_xchg_acquire @@ -2267,9 +2384,48 @@ arch_atomic64_xchg(atomic64_t *v, s64 i) #endif /* arch_atomic64_xchg_relaxed */ #ifndef arch_atomic64_cmpxchg_relaxed +#ifdef arch_atomic64_cmpxchg #define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg #define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg #define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg +#endif /* arch_atomic64_cmpxchg */ + +#ifndef arch_atomic64_cmpxchg +static __always_inline s64 +arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + return arch_cmpxchg(&v->counter, old, new); +} +#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg +#endif + +#ifndef arch_atomic64_cmpxchg_acquire +static __always_inline s64 +arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + return arch_cmpxchg_acquire(&v->counter, old, new); +} +#define 
arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire +#endif + +#ifndef arch_atomic64_cmpxchg_release +static __always_inline s64 +arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + return arch_cmpxchg_release(&v->counter, old, new); +} +#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release +#endif + +#ifndef arch_atomic64_cmpxchg_relaxed +static __always_inline s64 +arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + return arch_cmpxchg_relaxed(&v->counter, old, new); +} +#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed +#endif + #else /* arch_atomic64_cmpxchg_relaxed */ #ifndef arch_atomic64_cmpxchg_acquire @@ -2597,4 +2753,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v) #endif #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277 +// e1cee558cc61cae887890db30fcdf93baca9f498 diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg new file mode 100755 index 0000000000000..87cd010f98d58 --- /dev/null +++ b/scripts/atomic/fallbacks/cmpxchg @@ -0,0 +1,7 @@ +cat <counter, old, new); +} +EOF diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg new file mode 100755 index 0000000000000..733b8980b2f3b --- /dev/null +++ b/scripts/atomic/fallbacks/xchg @@ -0,0 +1,7 @@ +cat <counter, new); +} +EOF From patchwork Mon May 22 12:24:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 97416 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1425291vqo; Mon, 22 May 2023 05:55:23 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4HHY/lekUxsiX9Cow+qW3Zrn9UNTy1Unb1ydH6RcZykdutDNjG5y2XloYQ8O3qLHNeGQ6c X-Received: by 2002:a05:6a20:1612:b0:103:bad9:1254 with SMTP id l18-20020a056a20161200b00103bad91254mr13034083pzj.6.1684760123029; Mon, 22 May 2023 05:55:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684760123; cv=none; d=google.com; s=arc-20160816; b=y696e54wkB091xwtUIlUcoYRf5L+WG0eTx0GrR0Wesx/2ec2eMvvV9bJ3K/Zv+gWap a+pB1JloeqV4KxQve2LNi+RkhREBAexpHM5UDcGpuxHaGz8mt62KRuFN9Axhl8t2PQ8T UiiwB4D3R6SsUFL6P3gZh7VrfGRJH2Oob3k7yc5EwGa3GHOupgxeK5sHQwVRrEmGoyom Wp3xH9u4XpOFh4FJ0MpqEeXiZphmgjGJzTRUM8q03Nl289ZvlMZz0Nzk4hA6vnAuBtF/ K62Wm4NF4gbhcQd5IEpJt76PVJGgXZ4Bh9pn0WU3M5IDVlFUztwMEZ1VzJ59Fz5gd7Hg w2IA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=RcF1z8B17guRFnV1x41ZkhRVronsetw65N32rSEXxZ0=; b=d1u/tx2hgvcLVdb2uHfELG8p181hb6c/dW4CxH6raO6kvt0ebb2TjiATIium3q2pxB YnQU1qum/Jl3PPDV2d0/r6adot78CsoomXBYOLCec0hIjswEpEcqdDMIZL25j+Y6wgoM UO2BEaFmXpdKf+w7jbmuVzr0qPODbLFuvDW6QdDRQlbOpREn69XFkMPpf/sclgi++Ii9 zCqNtLAQx/jPODyMNhvPjwW/yg06c2IxOf93TAR/l2bKemiB1crHzUoAO2yBC5t42e6+ 1nLLc0bLuwNwRm5zSPlBJ0IMKGuAberfGUoXticrBJYezt+4DNrDB9/it6bEULGgaikT bxyg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
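The net effect of the rework above is that an architecture only has to supply
whichever orderings it can implement efficiently, advertised with the matching
preprocessor symbol; every missing variant is generated. A minimal sketch of a
hypothetical architecture header follows (not from this series; the compiler
builtin stands in for real arch-specific code, and kernel context is assumed
for atomic_t and __always_inline):

/*
 * Hypothetical arch header: provide only the RELAXED xchg and advertise it.
 * atomic-arch-fallback.h then builds the ACQUIRE/RELEASE/FULL forms from it,
 * and had nothing at all been defined, it would instead fall back to
 * arch_xchg*(&v->counter, new) via the new scripts/atomic templates.
 */
static __always_inline int arch_atomic_xchg_relaxed(atomic_t *v, int new)
{
	/* stand-in body: a relaxed hardware exchange of v->counter */
	return __atomic_exchange_n(&v->counter, new, __ATOMIC_RELAXED);
}
#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed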
From patchwork Mon May 22 12:24:08 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97416
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
 keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk,
 linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org,
 peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 05/26] locking/atomic: arc: add preprocessor symbols
Date: Mon, 22 May 2023 13:24:08 +0100
Message-Id: <20230522122429.1915021-6-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of RELAXED
atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms of FULL
ordered atomics. Other atomics are optional, and don't exist in some
configurations (e.g. not all architectures implement the 128-bit cmpxchg
ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/arc.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arc/include/asm/atomic-spinlock.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arc/include/asm/atomic-spinlock.h b/arch/arc/include/asm/atomic-spinlock.h
index 2c830347bfb4e..89d12a60f84c0 100644
--- a/arch/arc/include/asm/atomic-spinlock.h
+++ b/arch/arc/include/asm/atomic-spinlock.h
@@ -81,6 +81,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)	\
 	ATOMIC_OP(op, c_op, asm_op)	\
@@ -92,7 +97,11 @@ ATOMIC_OPS(or, |=, or)
 ATOMIC_OPS(xor, ^=, xor)
 
 #define arch_atomic_andnot arch_atomic_andnot
+
+#define arch_atomic_fetch_and arch_atomic_fetch_and
 #define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
 
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
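The pattern used throughout these per-architecture patches is to define each
op's name as an object-like macro that expands to itself. Calls are unaffected
(a self-referential macro is not re-expanded), but the name becomes visible to
#ifdef. A standalone toy illustration (hypothetical names, not kernel code):

#include <stdio.h>

int my_op(int x) { return x + 1; }
#define my_op my_op	/* marker: expands to itself, so calls are unchanged */

int main(void)
{
#ifdef my_op		/* now detectable by the preprocessor */
	printf("my_op is provided: %d\n", my_op(41));
#else
	printf("a generic fallback would be used\n");
#endif
	return 0;
}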
From patchwork Mon May 22 12:24:09 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97394
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
 keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk,
 linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org,
 peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 06/26] locking/atomic: arm: add preprocessor symbols
Date: Mon, 22 May 2023 13:24:09 +0100
Message-Id: <20230522122429.1915021-7-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of RELAXED
atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms of FULL
ordered atomics. Other atomics are optional, and don't exist in some
configurations (e.g. not all architectures implement the 128-bit cmpxchg
ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/arm.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/include/asm/atomic.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 9458d47ff209c..f0e3b01afa746 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -197,6 +197,16 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 	return val;
 }
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
@@ -212,8 +222,6 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 }
 #define arch_atomic_cmpxchg arch_atomic_cmpxchg
 
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-
 #endif /* __LINUX_ARM_ARCH__ */
 
 #define ATOMIC_OPS(op, c_op, asm_op)	\
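For the pre-ARMv6 path above, which provides only the fully-ordered
arch_atomic_cmpxchg(), the generated fallback header collapses the ordering
variants onto it; this is correct because a fully-ordered op satisfies the
weaker ACQUIRE/RELEASE/RELAXED requirements. Roughly, the preprocessor ends
up seeing (sketch; the real text lives in the generated
include/linux/atomic/atomic-arch-fallback.h shown earlier):

/* arch gave no relaxed form, but did give the full form: */
#ifndef arch_atomic_cmpxchg_relaxed
#ifdef arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
#endif
#endif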
From patchwork Mon May 22 12:24:10 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97404
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
 keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk,
 linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org,
 peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 07/26] locking/atomic: hexagon: add preprocessor symbols
Date: Mon, 22 May 2023 13:24:10 +0100
Message-Id: <20230522122429.1915021-8-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of RELAXED
atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms of FULL
ordered atomics. Other atomics are optional, and don't exist in some
configurations (e.g. not all architectures implement the 128-bit cmpxchg
ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/hexagon.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/hexagon/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index ad6c111e9c10f..5c8440016c762 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -91,6 +91,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -98,6 +103,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
From patchwork Mon May 22 12:24:11 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97405
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
 keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk,
 linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org,
 peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 08/26] locking/atomic: m68k: add preprocessor symbols
Date: Mon, 22 May 2023 13:24:11 +0100
Message-Id: <20230522122429.1915021-9-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of RELAXED
atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms of FULL
ordered atomics. Other atomics are optional, and don't exist in some
configurations (e.g. not all architectures implement the 128-bit cmpxchg
ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/m68k.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/m68k/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index 190a032f19be7..4bfbc25f6ecf4 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -106,6 +106,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v)	\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)	\
 	ATOMIC_OP(op, c_op, asm_op)	\
@@ -115,6 +120,10 @@ ATOMIC_OPS(and, &=, and)
 ATOMIC_OPS(or, |=, or)
 ATOMIC_OPS(xor, ^=, eor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
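It is worth noting why explicit markers are needed for ops like these at all:
the functions are produced by ## token pasting inside ATOMIC_OPS(), so they
exist only as C functions and are invisible to the preprocessor until a macro
names them. A toy sketch (illustrative only; no real atomicity implied):

#define ATOMIC_OP(op, c_op)					\
static inline int arch_atomic_##op##_return(int i, int *v)	\
{								\
	return *v c_op i;					\
}

ATOMIC_OP(add, +)	/* defines arch_atomic_add_return()... */

/* ...yet "#ifdef arch_atomic_add_return" is still false here, */
#define arch_atomic_add_return arch_atomic_add_return
/* ...and only now can the preprocessor see that the op exists. */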
From patchwork Mon May 22 12:24:12 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97420
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
 keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk,
 linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org,
 peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 09/26] locking/atomic: parisc: add preprocessor symbols
Date: Mon, 22 May 2023 13:24:12 +0100
Message-Id: <20230522122429.1915021-10-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of RELAXED
atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms of FULL
ordered atomics. Other atomics are optional, and don't exist in some
configurations (e.g. not all architectures implement the 128-bit cmpxchg
ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/parisc.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/parisc/include/asm/atomic.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index 0b3f64c92e3c0..d4f023887ff87 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -118,6 +118,11 @@ static __inline__ int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op)	\
 	ATOMIC_OP(op, c_op)	\
@@ -127,6 +132,10 @@ ATOMIC_OPS(and, &=)
 ATOMIC_OPS(or, |=)
 ATOMIC_OPS(xor, ^=)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
@@ -181,6 +190,11 @@ static __inline__ s64 arch_atomic64_fetch_##op(s64 i, atomic64_t *v)	\
 ATOMIC64_OPS(add, +=)
 ATOMIC64_OPS(sub, -=)
 
+#define arch_atomic64_add_return arch_atomic64_add_return
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
 #undef ATOMIC64_OPS
 #define ATOMIC64_OPS(op, c_op)	\
 	ATOMIC64_OP(op, c_op)	\
@@ -190,6 +204,10 @@ ATOMIC64_OPS(and, &=)
 ATOMIC64_OPS(or, |=)
 ATOMIC64_OPS(xor, ^=)
 
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
 #undef ATOMIC64_OPS
 #undef ATOMIC64_FETCH_OP
 #undef ATOMIC64_OP_RETURN
From patchwork Mon May 22 12:24:13 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97427
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
 keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk,
 linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org,
 peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 10/26] locking/atomic: sh: add preprocessor symbols
Date: Mon, 22 May 2023 13:24:13 +0100
Message-Id: <20230522122429.1915021-11-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of RELAXED
atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms of FULL
ordered atomics. Other atomics are optional, and don't exist in some
configurations (e.g. not all architectures implement the 128-bit cmpxchg
ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/sh.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/sh/include/asm/atomic-grb.h  | 9 +++++++++
 arch/sh/include/asm/atomic-irq.h  | 9 +++++++++
 arch/sh/include/asm/atomic-llsc.h | 9 +++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/sh/include/asm/atomic-grb.h b/arch/sh/include/asm/atomic-grb.h
index 059791fd394fc..cf1c10f15528b 100644
--- a/arch/sh/include/asm/atomic-grb.h
+++ b/arch/sh/include/asm/atomic-grb.h
@@ -71,6 +71,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -78,6 +83,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-irq.h b/arch/sh/include/asm/atomic-irq.h
index 7665de9d00d0d..b4090cc354935 100644
--- a/arch/sh/include/asm/atomic-irq.h
+++ b/arch/sh/include/asm/atomic-irq.h
@@ -55,6 +55,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op)	\
 	ATOMIC_OP(op, c_op)	\
@@ -64,6 +69,10 @@ ATOMIC_OPS(and, &=)
 ATOMIC_OPS(or, |=)
 ATOMIC_OPS(xor, ^=)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-llsc.h b/arch/sh/include/asm/atomic-llsc.h
index b63dcfbfa14ef..9ef1fb1dd12ee 100644
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -73,6 +73,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -80,6 +85,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
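For context on why three sh headers are patched identically: they are mutually
exclusive implementations, and arch/sh/include/asm/atomic.h selects one at
build time roughly along the following lines (quoted from memory, so treat the
exact config symbols as approximate). Whichever file is included must export
the same symbol set, which is why each carries the same markers:

#if defined(CONFIG_GUSA_RB)
#include <asm/atomic-grb.h>
#elif defined(CONFIG_CPU_SH4A)
#include <asm/atomic-llsc.h>
#else
#include <asm/atomic-irq.h>
#endif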
From patchwork Mon May 22 12:24:14 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97406
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
 keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk,
 linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org,
 peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 11/26] locking/atomic: sparc: add preprocessor symbols
Date: Mon, 22 May 2023 13:24:14 +0100
Message-Id: <20230522122429.1915021-12-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of RELAXED
atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms of FULL
ordered atomics. Other atomics are optional, and don't exist in some
configurations (e.g. not all architectures implement the 128-bit cmpxchg
ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/sparc.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/sparc/include/asm/atomic_32.h | 16 ++++++++++++++--
 arch/sparc/include/asm/atomic_64.h | 18 ++++++++++++++++++
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index 1c9e6c7366e41..60ce2fe57fcd7 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -19,19 +19,31 @@
 #include <asm-generic/atomic64.h>
 
 int arch_atomic_add_return(int, atomic_t *);
+#define arch_atomic_add_return arch_atomic_add_return
+
 int arch_atomic_fetch_add(int, atomic_t *);
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+
 int arch_atomic_fetch_and(int, atomic_t *);
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+
 int arch_atomic_fetch_or(int, atomic_t *);
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+
 int arch_atomic_fetch_xor(int, atomic_t *);
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 int arch_atomic_cmpxchg(atomic_t *, int, int);
 #define arch_atomic_cmpxchg arch_atomic_cmpxchg
+
 int arch_atomic_xchg(atomic_t *, int);
 #define arch_atomic_xchg arch_atomic_xchg
 
-int arch_atomic_fetch_add_unless(atomic_t *, int, int);
-void arch_atomic_set(atomic_t *, int);
+int arch_atomic_fetch_add_unless(atomic_t *, int, int);
 #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
+void arch_atomic_set(atomic_t *, int);
+
 #define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 #define arch_atomic_read(v)		READ_ONCE((v)->counter)
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index df6a8b07d7e63..a5e9c37605a70 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -37,6 +37,16 @@ s64 arch_atomic64_fetch_##op(s64, atomic64_t *);
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic64_add_return arch_atomic64_add_return
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -44,6 +54,14 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
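sparc32 is a useful contrast to the inline implementations in the other
patches: its atomics are extern functions implemented out of line (in
arch/sparc/lib), and the marker idiom works unchanged because the object-like
macro merely names the symbol. A two-line sketch of the shape involved:

int arch_atomic_add_return(int, atomic_t *);	/* implemented out of line */
#define arch_atomic_add_return arch_atomic_add_return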
[2620:137:e000::1:20]) by mx.google.com with ESMTP id c17-20020a056a00009100b006436dfd441dsi4604630pfj.356.2023.05.22.06.01.17; Mon, 22 May 2023 06:01:41 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233642AbjEVM2H (ORCPT + 99 others); Mon, 22 May 2023 08:28:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57144 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233344AbjEVM06 (ORCPT ); Mon, 22 May 2023 08:26:58 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id AA55910EF; Mon, 22 May 2023 05:25:10 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CB65611FB; Mon, 22 May 2023 05:25:51 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 2C9983F59C; Mon, 22 May 2023 05:25:05 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH 12/26] locking/atomic: x86: add preprocessor symbols Date: Mon, 22 May 2023 13:24:15 +0100 Message-Id: <20230522122429.1915021-13-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com> References: <20230522122429.1915021-1-mark.rutland@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766599427622307662?= X-GMAIL-MSGID: =?utf-8?q?1766599427622307662?= Some atomics can be implemented in several different ways, e.g. FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms of FULL ordered atomics. Other atomics are optional, and don't exist in some configurations (e.g. not all architectures implement the 128-bit cmpxchg ops). Subsequent patches will require that architectures define a preprocessor symbol for any atomic (or ordering variant) which is optional. This will make the fallback ifdeffery more robust, and simplify future changes. Add the required definitions to arch/x86. Signed-off-by: Mark Rutland Cc: Boqun Feng Cc: Paul E. 
---
 arch/x86/include/asm/cmpxchg_64.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index 3e6e3eef701b3..44b08b53ab32f 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -45,11 +45,13 @@ static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 new)
 {
 	return __arch_cmpxchg128(ptr, old, new, LOCK_PREFIX);
 }
+#define arch_cmpxchg128 arch_cmpxchg128
 
 static __always_inline u128 arch_cmpxchg128_local(volatile u128 *ptr, u128 old, u128 new)
 {
 	return __arch_cmpxchg128(ptr, old, new,);
 }
+#define arch_cmpxchg128_local arch_cmpxchg128_local
 
 #define __arch_try_cmpxchg128(_ptr, _oldp, _new, _lock)	\
 ({	\
@@ -75,11 +77,13 @@ static __always_inline bool arch_try_cmpxchg128(volatile u128 *ptr, u128 *oldp, u128 new)
 {
 	return __arch_try_cmpxchg128(ptr, oldp, new, LOCK_PREFIX);
 }
+#define arch_try_cmpxchg128 arch_try_cmpxchg128
 
 static __always_inline bool arch_try_cmpxchg128_local(volatile u128 *ptr, u128 *oldp, u128 new)
 {
 	return __arch_try_cmpxchg128(ptr, oldp, new,);
 }
+#define arch_try_cmpxchg128_local arch_try_cmpxchg128_local
 
 #define system_has_cmpxchg128()	boot_cpu_has(X86_FEATURE_CX16)
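As a usage sketch (a hypothetical caller, not part of the patch,
assuming the usual kernel headers), the runtime feature check pairs
with the 128-bit ops like so:

  /* Hypothetical caller: use the native 128-bit cmpxchg only where
   * the CPU supports it (X86_FEATURE_CX16); otherwise report failure
   * so the caller can take a lock-based slow path. */
  static __always_inline bool try_update_pair(volatile u128 *slot, u128 *old, u128 new)
  {
  	if (!system_has_cmpxchg128())
  		return false;
  	return arch_try_cmpxchg128(slot, old, new);
  }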
From patchwork Mon May 22 12:24:16 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97395
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH 13/26] locking/atomic: xtensa: add preprocessor symbols
Date: Mon, 22 May 2023 13:24:16 +0100
Message-Id: <20230522122429.1915021-14-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/xtensa.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/xtensa/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 1d323a864002c..7308b7f777d79 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -245,6 +245,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v)	\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -252,6 +257,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
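The self-referential defines are the crux of the series' convention:
the preprocessor cannot see C functions, so each op that exists is
shadowed by a same-named macro purely so that #ifdef can detect it
later. A minimal illustration (simplified, not from the patch):

  static inline int arch_atomic_fetch_add(int i, atomic_t *v);

  #ifdef arch_atomic_fetch_add  /* false: functions are invisible here */
  #endif

  #define arch_atomic_fetch_add arch_atomic_fetch_add

  #ifdef arch_atomic_fetch_add  /* true: the fallback can now be skipped */
  #endif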
From patchwork Mon May 22 12:24:17 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97396
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH 14/26] locking/atomic: scripts: remove bogus order parameter
Date: Mon, 22 May 2023 13:24:17 +0100
Message-Id: <20230522122429.1915021-15-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>

At the start of gen_proto_order_variants(), the ${order} variable is
not yet defined, and will be substituted with an empty string. Replace
the current bogus use of ${order} with an empty string instead; the
two spellings expand identically, so this results in no change to the
generated headers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 scripts/atomic/gen-atomic-fallback.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index a70acd548fcd8..7a6bcea8f565b 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -81,7 +81,7 @@ gen_proto_order_variants()
 
 	local basename="arch_${atomic}_${pfx}${name}${sfx}"
 
-	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
+	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")"
 
 	# If we don't have relaxed atomics, then we don't bother with ordering fallbacks
 	# read_acquire and set_release need to be templated, though
From patchwork Mon May 22 12:24:18 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97397
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH 15/26] locking/atomic: scripts: remove leftover "${mult}"
Date: Mon, 22 May 2023 13:24:18 +0100
Message-Id: <20230522122429.1915021-16-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>

We removed cmpxchg_double() and variants in commit:

  b4cf83b2d1da40b2 ("arch: Remove cmpxchg_double")

which removed the need for "${mult}" in the instrumentation logic.
Unfortunately we missed an instance of "${mult}".

There is no change to the generated header.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 scripts/atomic/gen-atomic-instrumented.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index a2ef735be8ca9..68557bfbbdc5e 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -118,7 +118,7 @@ cat <<EOF
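For context (the body of this hunk did not survive in the archive),
gen-atomic-instrumented.sh emits xchg-style wrappers roughly like the
sketch below; "${mult}" existed so the now-removed cmpxchg_double()
wrappers could scale the instrumented access size, and with those ops
gone it always expanded to nothing:

  /* Sketch of an emitted wrapper (simplified, not the script's
   * verbatim output): instrument the access, then forward to the
   * arch_ op. The old ${mult} expansion would have prefixed the size
   * with "2 * " for the double-word ops. */
  #define cmpxchg(ptr, ...) \
  ({ \
  	typeof(ptr) __ai_ptr = (ptr); \
  	kcsan_mb(); \
  	instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
  	arch_cmpxchg(__ai_ptr, __VA_ARGS__); \
  })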
From patchwork Mon May 22 12:24:19 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97398
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH 16/26] locking/atomic: scripts: factor out order template generation
Date: Mon, 22 May 2023 13:24:19 +0100
Message-Id: <20230522122429.1915021-17-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>

Currently gen_proto_order_variants() hard-codes the path for the
templates used for order fallbacks. Factor this out into a helper so
that it can be reused elsewhere.

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 scripts/atomic/gen-atomic-fallback.sh | 34 +++++++++++++--------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 7a6bcea8f565b..337330865fa2e 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -32,6 +32,20 @@ gen_template_fallback()
 	fi
 }
 
+#gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
+gen_order_fallback()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+
+	local tmpl_order=${order#_}
+	local tmpl="${ATOMICDIR}/fallbacks/${tmpl_order:-fence}"
+	gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+}
+
 #gen_proto_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
 gen_proto_fallback()
 {
@@ -56,20 +70,6 @@ cat << EOF
 EOF
 }
 
-gen_proto_order_variant()
-{
-	local meta="$1"; shift
-	local pfx="$1"; shift
-	local name="$1"; shift
-	local sfx="$1"; shift
-	local order="$1"; shift
-	local atomic="$1"
-
-	local basename="arch_${atomic}_${pfx}${name}${sfx}"
-
-	printf "#define ${basename}${order} ${basename}${order}\n"
-}
-
 #gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
 gen_proto_order_variants()
 {
@@ -117,9 +117,9 @@ gen_proto_order_variants()
 
 	printf "#else /* ${basename}_relaxed */\n\n"
 
-	gen_template_fallback "${ATOMICDIR}/fallbacks/acquire" "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
-	gen_template_fallback "${ATOMICDIR}/fallbacks/release" "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
-	gen_template_fallback "${ATOMICDIR}/fallbacks/fence" "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
 
 	printf "#endif /* ${basename}_relaxed */\n\n"
 }
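The templates being wrapped here (fallbacks/acquire, fallbacks/release,
fallbacks/fence) expand to C of roughly this shape, assuming the
relaxed op exists (a sketch, not the scripts' verbatim output):

  #ifndef arch_atomic_add_return_acquire
  static __always_inline int
  arch_atomic_add_return_acquire(int i, atomic_t *v)
  {
  	int ret = arch_atomic_add_return_relaxed(i, v);
  	__atomic_acquire_fence();
  	return ret;
  }
  #endif

  #ifndef arch_atomic_add_return
  static __always_inline int
  arch_atomic_add_return(int i, atomic_t *v)
  {
  	int ret;
  	__atomic_pre_full_fence();
  	ret = arch_atomic_add_return_relaxed(i, v);
  	__atomic_post_full_fence();
  	return ret;
  }
  #endif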
From patchwork Mon May 22 12:24:20 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97399
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH 17/26] locking/atomic: scripts: add trivial raw_atomic*_<op>()
Date: Mon, 22 May 2023 13:24:20 +0100
Message-Id: <20230522122429.1915021-18-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>

Currently a number of arch_atomic*_<op>() functions are optional, and
where an arch does not provide a given arch_atomic*_<op>() we will
define an implementation of arch_atomic*_<op>() in
atomic-arch-fallback.h.

Filling in the missing ops requires special care as we want to select
the optimal definition of each op (e.g. preferentially defining ops in
terms of their relaxed form rather than their fully-ordered form). The
ifdeffery necessary for this requires us to group ordering variants
together, which can be a bit painful to read, and is painful for
kerneldoc generation.

It would be easier to handle this if we generated ops into a separate
namespace, as this would remove the need to take special care with the
ifdeffery, and allow each ordering variant to be generated separately.

This patch adds a new set of raw_atomic*_<op>() definitions, which are
currently trivial wrappers of their arch_atomic*_<op>() equivalents.
This will allow us to move treewide users of arch_atomic*_<op>() over
to the raw atomic ops before we rework the fallback generation to
generate raw_atomic*_<op>() directly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
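Concretely, each raw op starts out as a pure pass-through; a sketch of
the generated shape (one representative op):

  /* Sketch: a trivial raw_atomic_<op>() wrapper as generated at this
   * stage; no instrumentation and no extra ordering, just a rename
   * of the arch_ op. */
  static __always_inline int
  raw_atomic_fetch_add(int i, atomic_t *v)
  {
  	return arch_atomic_fetch_add(i, v);
  }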
---
 include/linux/atomic.h                     |    1 +
 include/linux/atomic/atomic-instrumented.h |  595 ++++---
 include/linux/atomic/atomic-raw.h          | 1645 ++++++++++++++++++++
 scripts/atomic/gen-atomic-instrumented.sh  |   19 +-
 scripts/atomic/gen-atomic-raw.sh           |   84 +
 scripts/atomic/gen-atomics.sh              |    1 +
 6 files changed, 2033 insertions(+), 312 deletions(-)
 create mode 100644 include/linux/atomic/atomic-raw.h
 create mode 100755 scripts/atomic/gen-atomic-raw.sh

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 8dd57c3a99e9b..127f5dc63a7df 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -79,6 +79,7 @@
 #include <linux/atomic/atomic-arch-fallback.h>
 #include <linux/atomic/atomic-long.h>
+#include <linux/atomic/atomic-raw.h>
 #include <linux/atomic/atomic-instrumented.h>
 
 #endif /* _LINUX_ATOMIC_H */

diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index a55b5b70a3e15..90ee2f55af770 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -4,15 +4,10 @@
 // DO NOT MODIFY THIS FILE DIRECTLY
 
 /*
- * This file provides wrappers with KASAN instrumentation for atomic operations.
- * To use this functionality an arch's atomic.h file needs to define all
- * atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
- * this file at the end. This file provides atomic_read() that forwards to
- * arch_atomic_read() for actual atomic operation.
- * Note: if an arch atomic operation is implemented by means of other atomic
- * operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
- * arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
- * double instrumentation.
+ * This file provides atomic operations with explicit instrumentation (e.g.
+ * KASAN, KCSAN), which should be used unless it is necessary to avoid
+ * instrumentation. Where it is necessary to avoid instrumentation, the
+ * raw_atomic*() operations should be used.
*/ #ifndef _LINUX_ATOMIC_INSTRUMENTED_H #define _LINUX_ATOMIC_INSTRUMENTED_H @@ -25,21 +20,21 @@ static __always_inline int atomic_read(const atomic_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic_read(v); + return raw_atomic_read(v); } static __always_inline int atomic_read_acquire(const atomic_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic_read_acquire(v); + return raw_atomic_read_acquire(v); } static __always_inline void atomic_set(atomic_t *v, int i) { instrument_atomic_write(v, sizeof(*v)); - arch_atomic_set(v, i); + raw_atomic_set(v, i); } static __always_inline void @@ -47,14 +42,14 @@ atomic_set_release(atomic_t *v, int i) { kcsan_release(); instrument_atomic_write(v, sizeof(*v)); - arch_atomic_set_release(v, i); + raw_atomic_set_release(v, i); } static __always_inline void atomic_add(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_add(i, v); + raw_atomic_add(i, v); } static __always_inline int @@ -62,14 +57,14 @@ atomic_add_return(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_return(i, v); + return raw_atomic_add_return(i, v); } static __always_inline int atomic_add_return_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_return_acquire(i, v); + return raw_atomic_add_return_acquire(i, v); } static __always_inline int @@ -77,14 +72,14 @@ atomic_add_return_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_return_release(i, v); + return raw_atomic_add_return_release(i, v); } static __always_inline int atomic_add_return_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_return_relaxed(i, v); + return raw_atomic_add_return_relaxed(i, v); } static __always_inline int @@ -92,14 +87,14 @@ atomic_fetch_add(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add(i, v); + return raw_atomic_fetch_add(i, v); } static __always_inline int atomic_fetch_add_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add_acquire(i, v); + return raw_atomic_fetch_add_acquire(i, v); } static __always_inline int @@ -107,21 +102,21 @@ atomic_fetch_add_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add_release(i, v); + return raw_atomic_fetch_add_release(i, v); } static __always_inline int atomic_fetch_add_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add_relaxed(i, v); + return raw_atomic_fetch_add_relaxed(i, v); } static __always_inline void atomic_sub(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_sub(i, v); + raw_atomic_sub(i, v); } static __always_inline int @@ -129,14 +124,14 @@ atomic_sub_return(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_sub_return(i, v); + return raw_atomic_sub_return(i, v); } static __always_inline int atomic_sub_return_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_sub_return_acquire(i, v); + return raw_atomic_sub_return_acquire(i, v); } static __always_inline int @@ -144,14 +139,14 @@ atomic_sub_return_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return 
arch_atomic_sub_return_release(i, v); + return raw_atomic_sub_return_release(i, v); } static __always_inline int atomic_sub_return_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_sub_return_relaxed(i, v); + return raw_atomic_sub_return_relaxed(i, v); } static __always_inline int @@ -159,14 +154,14 @@ atomic_fetch_sub(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_sub(i, v); + return raw_atomic_fetch_sub(i, v); } static __always_inline int atomic_fetch_sub_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_sub_acquire(i, v); + return raw_atomic_fetch_sub_acquire(i, v); } static __always_inline int @@ -174,21 +169,21 @@ atomic_fetch_sub_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_sub_release(i, v); + return raw_atomic_fetch_sub_release(i, v); } static __always_inline int atomic_fetch_sub_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_sub_relaxed(i, v); + return raw_atomic_fetch_sub_relaxed(i, v); } static __always_inline void atomic_inc(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_inc(v); + raw_atomic_inc(v); } static __always_inline int @@ -196,14 +191,14 @@ atomic_inc_return(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_return(v); + return raw_atomic_inc_return(v); } static __always_inline int atomic_inc_return_acquire(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_return_acquire(v); + return raw_atomic_inc_return_acquire(v); } static __always_inline int @@ -211,14 +206,14 @@ atomic_inc_return_release(atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_return_release(v); + return raw_atomic_inc_return_release(v); } static __always_inline int atomic_inc_return_relaxed(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_return_relaxed(v); + return raw_atomic_inc_return_relaxed(v); } static __always_inline int @@ -226,14 +221,14 @@ atomic_fetch_inc(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_inc(v); + return raw_atomic_fetch_inc(v); } static __always_inline int atomic_fetch_inc_acquire(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_inc_acquire(v); + return raw_atomic_fetch_inc_acquire(v); } static __always_inline int @@ -241,21 +236,21 @@ atomic_fetch_inc_release(atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_inc_release(v); + return raw_atomic_fetch_inc_release(v); } static __always_inline int atomic_fetch_inc_relaxed(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_inc_relaxed(v); + return raw_atomic_fetch_inc_relaxed(v); } static __always_inline void atomic_dec(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_dec(v); + raw_atomic_dec(v); } static __always_inline int @@ -263,14 +258,14 @@ atomic_dec_return(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_return(v); + return raw_atomic_dec_return(v); } static __always_inline int atomic_dec_return_acquire(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return 
arch_atomic_dec_return_acquire(v); + return raw_atomic_dec_return_acquire(v); } static __always_inline int @@ -278,14 +273,14 @@ atomic_dec_return_release(atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_return_release(v); + return raw_atomic_dec_return_release(v); } static __always_inline int atomic_dec_return_relaxed(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_return_relaxed(v); + return raw_atomic_dec_return_relaxed(v); } static __always_inline int @@ -293,14 +288,14 @@ atomic_fetch_dec(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_dec(v); + return raw_atomic_fetch_dec(v); } static __always_inline int atomic_fetch_dec_acquire(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_dec_acquire(v); + return raw_atomic_fetch_dec_acquire(v); } static __always_inline int @@ -308,21 +303,21 @@ atomic_fetch_dec_release(atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_dec_release(v); + return raw_atomic_fetch_dec_release(v); } static __always_inline int atomic_fetch_dec_relaxed(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_dec_relaxed(v); + return raw_atomic_fetch_dec_relaxed(v); } static __always_inline void atomic_and(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_and(i, v); + raw_atomic_and(i, v); } static __always_inline int @@ -330,14 +325,14 @@ atomic_fetch_and(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_and(i, v); + return raw_atomic_fetch_and(i, v); } static __always_inline int atomic_fetch_and_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_and_acquire(i, v); + return raw_atomic_fetch_and_acquire(i, v); } static __always_inline int @@ -345,21 +340,21 @@ atomic_fetch_and_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_and_release(i, v); + return raw_atomic_fetch_and_release(i, v); } static __always_inline int atomic_fetch_and_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_and_relaxed(i, v); + return raw_atomic_fetch_and_relaxed(i, v); } static __always_inline void atomic_andnot(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_andnot(i, v); + raw_atomic_andnot(i, v); } static __always_inline int @@ -367,14 +362,14 @@ atomic_fetch_andnot(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_andnot(i, v); + return raw_atomic_fetch_andnot(i, v); } static __always_inline int atomic_fetch_andnot_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_andnot_acquire(i, v); + return raw_atomic_fetch_andnot_acquire(i, v); } static __always_inline int @@ -382,21 +377,21 @@ atomic_fetch_andnot_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_andnot_release(i, v); + return raw_atomic_fetch_andnot_release(i, v); } static __always_inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_andnot_relaxed(i, v); + return raw_atomic_fetch_andnot_relaxed(i, v); } static __always_inline void 
atomic_or(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_or(i, v); + raw_atomic_or(i, v); } static __always_inline int @@ -404,14 +399,14 @@ atomic_fetch_or(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_or(i, v); + return raw_atomic_fetch_or(i, v); } static __always_inline int atomic_fetch_or_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_or_acquire(i, v); + return raw_atomic_fetch_or_acquire(i, v); } static __always_inline int @@ -419,21 +414,21 @@ atomic_fetch_or_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_or_release(i, v); + return raw_atomic_fetch_or_release(i, v); } static __always_inline int atomic_fetch_or_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_or_relaxed(i, v); + return raw_atomic_fetch_or_relaxed(i, v); } static __always_inline void atomic_xor(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_xor(i, v); + raw_atomic_xor(i, v); } static __always_inline int @@ -441,14 +436,14 @@ atomic_fetch_xor(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_xor(i, v); + return raw_atomic_fetch_xor(i, v); } static __always_inline int atomic_fetch_xor_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_xor_acquire(i, v); + return raw_atomic_fetch_xor_acquire(i, v); } static __always_inline int @@ -456,14 +451,14 @@ atomic_fetch_xor_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_xor_release(i, v); + return raw_atomic_fetch_xor_release(i, v); } static __always_inline int atomic_fetch_xor_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_xor_relaxed(i, v); + return raw_atomic_fetch_xor_relaxed(i, v); } static __always_inline int @@ -471,14 +466,14 @@ atomic_xchg(atomic_t *v, int i) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_xchg(v, i); + return raw_atomic_xchg(v, i); } static __always_inline int atomic_xchg_acquire(atomic_t *v, int i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_xchg_acquire(v, i); + return raw_atomic_xchg_acquire(v, i); } static __always_inline int @@ -486,14 +481,14 @@ atomic_xchg_release(atomic_t *v, int i) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_xchg_release(v, i); + return raw_atomic_xchg_release(v, i); } static __always_inline int atomic_xchg_relaxed(atomic_t *v, int i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_xchg_relaxed(v, i); + return raw_atomic_xchg_relaxed(v, i); } static __always_inline int @@ -501,14 +496,14 @@ atomic_cmpxchg(atomic_t *v, int old, int new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_cmpxchg(v, old, new); + return raw_atomic_cmpxchg(v, old, new); } static __always_inline int atomic_cmpxchg_acquire(atomic_t *v, int old, int new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_cmpxchg_acquire(v, old, new); + return raw_atomic_cmpxchg_acquire(v, old, new); } static __always_inline int @@ -516,14 +511,14 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return 
arch_atomic_cmpxchg_release(v, old, new); + return raw_atomic_cmpxchg_release(v, old, new); } static __always_inline int atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_cmpxchg_relaxed(v, old, new); + return raw_atomic_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -532,7 +527,7 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new) kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_try_cmpxchg(v, old, new); + return raw_atomic_try_cmpxchg(v, old, new); } static __always_inline bool @@ -540,7 +535,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_try_cmpxchg_acquire(v, old, new); + return raw_atomic_try_cmpxchg_acquire(v, old, new); } static __always_inline bool @@ -549,7 +544,7 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_try_cmpxchg_release(v, old, new); + return raw_atomic_try_cmpxchg_release(v, old, new); } static __always_inline bool @@ -557,7 +552,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_try_cmpxchg_relaxed(v, old, new); + return raw_atomic_try_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -565,7 +560,7 @@ atomic_sub_and_test(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_sub_and_test(i, v); + return raw_atomic_sub_and_test(i, v); } static __always_inline bool @@ -573,7 +568,7 @@ atomic_dec_and_test(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_and_test(v); + return raw_atomic_dec_and_test(v); } static __always_inline bool @@ -581,7 +576,7 @@ atomic_inc_and_test(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_and_test(v); + return raw_atomic_inc_and_test(v); } static __always_inline bool @@ -589,14 +584,14 @@ atomic_add_negative(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_negative(i, v); + return raw_atomic_add_negative(i, v); } static __always_inline bool atomic_add_negative_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_negative_acquire(i, v); + return raw_atomic_add_negative_acquire(i, v); } static __always_inline bool @@ -604,14 +599,14 @@ atomic_add_negative_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_negative_release(i, v); + return raw_atomic_add_negative_release(i, v); } static __always_inline bool atomic_add_negative_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_negative_relaxed(i, v); + return raw_atomic_add_negative_relaxed(i, v); } static __always_inline int @@ -619,7 +614,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add_unless(v, a, u); + return raw_atomic_fetch_add_unless(v, a, u); } static __always_inline bool @@ -627,7 +622,7 @@ atomic_add_unless(atomic_t *v, int a, int u) { kcsan_mb(); 
instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_unless(v, a, u); + return raw_atomic_add_unless(v, a, u); } static __always_inline bool @@ -635,7 +630,7 @@ atomic_inc_not_zero(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_not_zero(v); + return raw_atomic_inc_not_zero(v); } static __always_inline bool @@ -643,7 +638,7 @@ atomic_inc_unless_negative(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_unless_negative(v); + return raw_atomic_inc_unless_negative(v); } static __always_inline bool @@ -651,7 +646,7 @@ atomic_dec_unless_positive(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_unless_positive(v); + return raw_atomic_dec_unless_positive(v); } static __always_inline int @@ -659,28 +654,28 @@ atomic_dec_if_positive(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_if_positive(v); + return raw_atomic_dec_if_positive(v); } static __always_inline s64 atomic64_read(const atomic64_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic64_read(v); + return raw_atomic64_read(v); } static __always_inline s64 atomic64_read_acquire(const atomic64_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic64_read_acquire(v); + return raw_atomic64_read_acquire(v); } static __always_inline void atomic64_set(atomic64_t *v, s64 i) { instrument_atomic_write(v, sizeof(*v)); - arch_atomic64_set(v, i); + raw_atomic64_set(v, i); } static __always_inline void @@ -688,14 +683,14 @@ atomic64_set_release(atomic64_t *v, s64 i) { kcsan_release(); instrument_atomic_write(v, sizeof(*v)); - arch_atomic64_set_release(v, i); + raw_atomic64_set_release(v, i); } static __always_inline void atomic64_add(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_add(i, v); + raw_atomic64_add(i, v); } static __always_inline s64 @@ -703,14 +698,14 @@ atomic64_add_return(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_return(i, v); + return raw_atomic64_add_return(i, v); } static __always_inline s64 atomic64_add_return_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_return_acquire(i, v); + return raw_atomic64_add_return_acquire(i, v); } static __always_inline s64 @@ -718,14 +713,14 @@ atomic64_add_return_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_return_release(i, v); + return raw_atomic64_add_return_release(i, v); } static __always_inline s64 atomic64_add_return_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_return_relaxed(i, v); + return raw_atomic64_add_return_relaxed(i, v); } static __always_inline s64 @@ -733,14 +728,14 @@ atomic64_fetch_add(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add(i, v); + return raw_atomic64_fetch_add(i, v); } static __always_inline s64 atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_acquire(i, v); + return raw_atomic64_fetch_add_acquire(i, v); } static __always_inline s64 @@ -748,21 +743,21 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_release(i, 
v); + return raw_atomic64_fetch_add_release(i, v); } static __always_inline s64 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_relaxed(i, v); + return raw_atomic64_fetch_add_relaxed(i, v); } static __always_inline void atomic64_sub(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_sub(i, v); + raw_atomic64_sub(i, v); } static __always_inline s64 @@ -770,14 +765,14 @@ atomic64_sub_return(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_return(i, v); + return raw_atomic64_sub_return(i, v); } static __always_inline s64 atomic64_sub_return_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_return_acquire(i, v); + return raw_atomic64_sub_return_acquire(i, v); } static __always_inline s64 @@ -785,14 +780,14 @@ atomic64_sub_return_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_return_release(i, v); + return raw_atomic64_sub_return_release(i, v); } static __always_inline s64 atomic64_sub_return_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_return_relaxed(i, v); + return raw_atomic64_sub_return_relaxed(i, v); } static __always_inline s64 @@ -800,14 +795,14 @@ atomic64_fetch_sub(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub(i, v); + return raw_atomic64_fetch_sub(i, v); } static __always_inline s64 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub_acquire(i, v); + return raw_atomic64_fetch_sub_acquire(i, v); } static __always_inline s64 @@ -815,21 +810,21 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub_release(i, v); + return raw_atomic64_fetch_sub_release(i, v); } static __always_inline s64 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub_relaxed(i, v); + return raw_atomic64_fetch_sub_relaxed(i, v); } static __always_inline void atomic64_inc(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_inc(v); + raw_atomic64_inc(v); } static __always_inline s64 @@ -837,14 +832,14 @@ atomic64_inc_return(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_return(v); + return raw_atomic64_inc_return(v); } static __always_inline s64 atomic64_inc_return_acquire(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_return_acquire(v); + return raw_atomic64_inc_return_acquire(v); } static __always_inline s64 @@ -852,14 +847,14 @@ atomic64_inc_return_release(atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_return_release(v); + return raw_atomic64_inc_return_release(v); } static __always_inline s64 atomic64_inc_return_relaxed(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_return_relaxed(v); + return raw_atomic64_inc_return_relaxed(v); } static __always_inline s64 @@ -867,14 +862,14 @@ atomic64_fetch_inc(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_inc(v); + return 
raw_atomic64_fetch_inc(v); } static __always_inline s64 atomic64_fetch_inc_acquire(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_inc_acquire(v); + return raw_atomic64_fetch_inc_acquire(v); } static __always_inline s64 @@ -882,21 +877,21 @@ atomic64_fetch_inc_release(atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_inc_release(v); + return raw_atomic64_fetch_inc_release(v); } static __always_inline s64 atomic64_fetch_inc_relaxed(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_inc_relaxed(v); + return raw_atomic64_fetch_inc_relaxed(v); } static __always_inline void atomic64_dec(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_dec(v); + raw_atomic64_dec(v); } static __always_inline s64 @@ -904,14 +899,14 @@ atomic64_dec_return(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_return(v); + return raw_atomic64_dec_return(v); } static __always_inline s64 atomic64_dec_return_acquire(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_return_acquire(v); + return raw_atomic64_dec_return_acquire(v); } static __always_inline s64 @@ -919,14 +914,14 @@ atomic64_dec_return_release(atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_return_release(v); + return raw_atomic64_dec_return_release(v); } static __always_inline s64 atomic64_dec_return_relaxed(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_return_relaxed(v); + return raw_atomic64_dec_return_relaxed(v); } static __always_inline s64 @@ -934,14 +929,14 @@ atomic64_fetch_dec(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_dec(v); + return raw_atomic64_fetch_dec(v); } static __always_inline s64 atomic64_fetch_dec_acquire(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_dec_acquire(v); + return raw_atomic64_fetch_dec_acquire(v); } static __always_inline s64 @@ -949,21 +944,21 @@ atomic64_fetch_dec_release(atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_dec_release(v); + return raw_atomic64_fetch_dec_release(v); } static __always_inline s64 atomic64_fetch_dec_relaxed(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_dec_relaxed(v); + return raw_atomic64_fetch_dec_relaxed(v); } static __always_inline void atomic64_and(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_and(i, v); + raw_atomic64_and(i, v); } static __always_inline s64 @@ -971,14 +966,14 @@ atomic64_fetch_and(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_and(i, v); + return raw_atomic64_fetch_and(i, v); } static __always_inline s64 atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_and_acquire(i, v); + return raw_atomic64_fetch_and_acquire(i, v); } static __always_inline s64 @@ -986,21 +981,21 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_and_release(i, v); + return raw_atomic64_fetch_and_release(i, v); } static __always_inline s64 atomic64_fetch_and_relaxed(s64 i, atomic64_t 
*v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_and_relaxed(i, v); + return raw_atomic64_fetch_and_relaxed(i, v); } static __always_inline void atomic64_andnot(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_andnot(i, v); + raw_atomic64_andnot(i, v); } static __always_inline s64 @@ -1008,14 +1003,14 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_andnot(i, v); + return raw_atomic64_fetch_andnot(i, v); } static __always_inline s64 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_andnot_acquire(i, v); + return raw_atomic64_fetch_andnot_acquire(i, v); } static __always_inline s64 @@ -1023,21 +1018,21 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_andnot_release(i, v); + return raw_atomic64_fetch_andnot_release(i, v); } static __always_inline s64 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_andnot_relaxed(i, v); + return raw_atomic64_fetch_andnot_relaxed(i, v); } static __always_inline void atomic64_or(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_or(i, v); + raw_atomic64_or(i, v); } static __always_inline s64 @@ -1045,14 +1040,14 @@ atomic64_fetch_or(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_or(i, v); + return raw_atomic64_fetch_or(i, v); } static __always_inline s64 atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_or_acquire(i, v); + return raw_atomic64_fetch_or_acquire(i, v); } static __always_inline s64 @@ -1060,21 +1055,21 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_or_release(i, v); + return raw_atomic64_fetch_or_release(i, v); } static __always_inline s64 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_or_relaxed(i, v); + return raw_atomic64_fetch_or_relaxed(i, v); } static __always_inline void atomic64_xor(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_xor(i, v); + raw_atomic64_xor(i, v); } static __always_inline s64 @@ -1082,14 +1077,14 @@ atomic64_fetch_xor(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor(i, v); + return raw_atomic64_fetch_xor(i, v); } static __always_inline s64 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor_acquire(i, v); + return raw_atomic64_fetch_xor_acquire(i, v); } static __always_inline s64 @@ -1097,14 +1092,14 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor_release(i, v); + return raw_atomic64_fetch_xor_release(i, v); } static __always_inline s64 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor_relaxed(i, v); + return raw_atomic64_fetch_xor_relaxed(i, v); } static __always_inline s64 @@ -1112,14 +1107,14 @@ atomic64_xchg(atomic64_t *v, 
s64 i) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_xchg(v, i); + return raw_atomic64_xchg(v, i); } static __always_inline s64 atomic64_xchg_acquire(atomic64_t *v, s64 i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_xchg_acquire(v, i); + return raw_atomic64_xchg_acquire(v, i); } static __always_inline s64 @@ -1127,14 +1122,14 @@ atomic64_xchg_release(atomic64_t *v, s64 i) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_xchg_release(v, i); + return raw_atomic64_xchg_release(v, i); } static __always_inline s64 atomic64_xchg_relaxed(atomic64_t *v, s64 i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_xchg_relaxed(v, i); + return raw_atomic64_xchg_relaxed(v, i); } static __always_inline s64 @@ -1142,14 +1137,14 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg(v, old, new); + return raw_atomic64_cmpxchg(v, old, new); } static __always_inline s64 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg_acquire(v, old, new); + return raw_atomic64_cmpxchg_acquire(v, old, new); } static __always_inline s64 @@ -1157,14 +1152,14 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg_release(v, old, new); + return raw_atomic64_cmpxchg_release(v, old, new); } static __always_inline s64 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg_relaxed(v, old, new); + return raw_atomic64_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -1173,7 +1168,7 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg(v, old, new); + return raw_atomic64_try_cmpxchg(v, old, new); } static __always_inline bool @@ -1181,7 +1176,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg_acquire(v, old, new); + return raw_atomic64_try_cmpxchg_acquire(v, old, new); } static __always_inline bool @@ -1190,7 +1185,7 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg_release(v, old, new); + return raw_atomic64_try_cmpxchg_release(v, old, new); } static __always_inline bool @@ -1198,7 +1193,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg_relaxed(v, old, new); + return raw_atomic64_try_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -1206,7 +1201,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_and_test(i, v); + return raw_atomic64_sub_and_test(i, v); } static __always_inline bool @@ -1214,7 +1209,7 @@ atomic64_dec_and_test(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_and_test(v); + return 
raw_atomic64_dec_and_test(v); } static __always_inline bool @@ -1222,7 +1217,7 @@ atomic64_inc_and_test(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_and_test(v); + return raw_atomic64_inc_and_test(v); } static __always_inline bool @@ -1230,14 +1225,14 @@ atomic64_add_negative(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_negative(i, v); + return raw_atomic64_add_negative(i, v); } static __always_inline bool atomic64_add_negative_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_negative_acquire(i, v); + return raw_atomic64_add_negative_acquire(i, v); } static __always_inline bool @@ -1245,14 +1240,14 @@ atomic64_add_negative_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_negative_release(i, v); + return raw_atomic64_add_negative_release(i, v); } static __always_inline bool atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_negative_relaxed(i, v); + return raw_atomic64_add_negative_relaxed(i, v); } static __always_inline s64 @@ -1260,7 +1255,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_unless(v, a, u); + return raw_atomic64_fetch_add_unless(v, a, u); } static __always_inline bool @@ -1268,7 +1263,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_unless(v, a, u); + return raw_atomic64_add_unless(v, a, u); } static __always_inline bool @@ -1276,7 +1271,7 @@ atomic64_inc_not_zero(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_not_zero(v); + return raw_atomic64_inc_not_zero(v); } static __always_inline bool @@ -1284,7 +1279,7 @@ atomic64_inc_unless_negative(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_unless_negative(v); + return raw_atomic64_inc_unless_negative(v); } static __always_inline bool @@ -1292,7 +1287,7 @@ atomic64_dec_unless_positive(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_unless_positive(v); + return raw_atomic64_dec_unless_positive(v); } static __always_inline s64 @@ -1300,28 +1295,28 @@ atomic64_dec_if_positive(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_if_positive(v); + return raw_atomic64_dec_if_positive(v); } static __always_inline long atomic_long_read(const atomic_long_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic_long_read(v); + return raw_atomic_long_read(v); } static __always_inline long atomic_long_read_acquire(const atomic_long_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic_long_read_acquire(v); + return raw_atomic_long_read_acquire(v); } static __always_inline void atomic_long_set(atomic_long_t *v, long i) { instrument_atomic_write(v, sizeof(*v)); - arch_atomic_long_set(v, i); + raw_atomic_long_set(v, i); } static __always_inline void @@ -1329,14 +1324,14 @@ atomic_long_set_release(atomic_long_t *v, long i) { kcsan_release(); instrument_atomic_write(v, sizeof(*v)); - arch_atomic_long_set_release(v, i); + raw_atomic_long_set_release(v, i); } static __always_inline void atomic_long_add(long i, 
atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_add(i, v); + raw_atomic_long_add(i, v); } static __always_inline long @@ -1344,14 +1339,14 @@ atomic_long_add_return(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_return(i, v); + return raw_atomic_long_add_return(i, v); } static __always_inline long atomic_long_add_return_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_return_acquire(i, v); + return raw_atomic_long_add_return_acquire(i, v); } static __always_inline long @@ -1359,14 +1354,14 @@ atomic_long_add_return_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_return_release(i, v); + return raw_atomic_long_add_return_release(i, v); } static __always_inline long atomic_long_add_return_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_return_relaxed(i, v); + return raw_atomic_long_add_return_relaxed(i, v); } static __always_inline long @@ -1374,14 +1369,14 @@ atomic_long_fetch_add(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add(i, v); + return raw_atomic_long_fetch_add(i, v); } static __always_inline long atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add_acquire(i, v); + return raw_atomic_long_fetch_add_acquire(i, v); } static __always_inline long @@ -1389,21 +1384,21 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add_release(i, v); + return raw_atomic_long_fetch_add_release(i, v); } static __always_inline long atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add_relaxed(i, v); + return raw_atomic_long_fetch_add_relaxed(i, v); } static __always_inline void atomic_long_sub(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_sub(i, v); + raw_atomic_long_sub(i, v); } static __always_inline long @@ -1411,14 +1406,14 @@ atomic_long_sub_return(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_return(i, v); + return raw_atomic_long_sub_return(i, v); } static __always_inline long atomic_long_sub_return_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_return_acquire(i, v); + return raw_atomic_long_sub_return_acquire(i, v); } static __always_inline long @@ -1426,14 +1421,14 @@ atomic_long_sub_return_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_return_release(i, v); + return raw_atomic_long_sub_return_release(i, v); } static __always_inline long atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_return_relaxed(i, v); + return raw_atomic_long_sub_return_relaxed(i, v); } static __always_inline long @@ -1441,14 +1436,14 @@ atomic_long_fetch_sub(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_sub(i, v); + return raw_atomic_long_fetch_sub(i, v); } static 
__always_inline long atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_sub_acquire(i, v); + return raw_atomic_long_fetch_sub_acquire(i, v); } static __always_inline long @@ -1456,21 +1451,21 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_sub_release(i, v); + return raw_atomic_long_fetch_sub_release(i, v); } static __always_inline long atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_sub_relaxed(i, v); + return raw_atomic_long_fetch_sub_relaxed(i, v); } static __always_inline void atomic_long_inc(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_inc(v); + raw_atomic_long_inc(v); } static __always_inline long @@ -1478,14 +1473,14 @@ atomic_long_inc_return(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_return(v); + return raw_atomic_long_inc_return(v); } static __always_inline long atomic_long_inc_return_acquire(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_return_acquire(v); + return raw_atomic_long_inc_return_acquire(v); } static __always_inline long @@ -1493,14 +1488,14 @@ atomic_long_inc_return_release(atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_return_release(v); + return raw_atomic_long_inc_return_release(v); } static __always_inline long atomic_long_inc_return_relaxed(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_return_relaxed(v); + return raw_atomic_long_inc_return_relaxed(v); } static __always_inline long @@ -1508,14 +1503,14 @@ atomic_long_fetch_inc(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_inc(v); + return raw_atomic_long_fetch_inc(v); } static __always_inline long atomic_long_fetch_inc_acquire(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_inc_acquire(v); + return raw_atomic_long_fetch_inc_acquire(v); } static __always_inline long @@ -1523,21 +1518,21 @@ atomic_long_fetch_inc_release(atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_inc_release(v); + return raw_atomic_long_fetch_inc_release(v); } static __always_inline long atomic_long_fetch_inc_relaxed(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_inc_relaxed(v); + return raw_atomic_long_fetch_inc_relaxed(v); } static __always_inline void atomic_long_dec(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_dec(v); + raw_atomic_long_dec(v); } static __always_inline long @@ -1545,14 +1540,14 @@ atomic_long_dec_return(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_return(v); + return raw_atomic_long_dec_return(v); } static __always_inline long atomic_long_dec_return_acquire(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_return_acquire(v); + return raw_atomic_long_dec_return_acquire(v); } static __always_inline long @@ -1560,14 +1555,14 @@ atomic_long_dec_return_release(atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, 
sizeof(*v)); - return arch_atomic_long_dec_return_release(v); + return raw_atomic_long_dec_return_release(v); } static __always_inline long atomic_long_dec_return_relaxed(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_return_relaxed(v); + return raw_atomic_long_dec_return_relaxed(v); } static __always_inline long @@ -1575,14 +1570,14 @@ atomic_long_fetch_dec(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_dec(v); + return raw_atomic_long_fetch_dec(v); } static __always_inline long atomic_long_fetch_dec_acquire(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_dec_acquire(v); + return raw_atomic_long_fetch_dec_acquire(v); } static __always_inline long @@ -1590,21 +1585,21 @@ atomic_long_fetch_dec_release(atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_dec_release(v); + return raw_atomic_long_fetch_dec_release(v); } static __always_inline long atomic_long_fetch_dec_relaxed(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_dec_relaxed(v); + return raw_atomic_long_fetch_dec_relaxed(v); } static __always_inline void atomic_long_and(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_and(i, v); + raw_atomic_long_and(i, v); } static __always_inline long @@ -1612,14 +1607,14 @@ atomic_long_fetch_and(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_and(i, v); + return raw_atomic_long_fetch_and(i, v); } static __always_inline long atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_and_acquire(i, v); + return raw_atomic_long_fetch_and_acquire(i, v); } static __always_inline long @@ -1627,21 +1622,21 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_and_release(i, v); + return raw_atomic_long_fetch_and_release(i, v); } static __always_inline long atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_and_relaxed(i, v); + return raw_atomic_long_fetch_and_relaxed(i, v); } static __always_inline void atomic_long_andnot(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_andnot(i, v); + raw_atomic_long_andnot(i, v); } static __always_inline long @@ -1649,14 +1644,14 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_andnot(i, v); + return raw_atomic_long_fetch_andnot(i, v); } static __always_inline long atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_andnot_acquire(i, v); + return raw_atomic_long_fetch_andnot_acquire(i, v); } static __always_inline long @@ -1664,21 +1659,21 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_andnot_release(i, v); + return raw_atomic_long_fetch_andnot_release(i, v); } static __always_inline long atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return 
arch_atomic_long_fetch_andnot_relaxed(i, v); + return raw_atomic_long_fetch_andnot_relaxed(i, v); } static __always_inline void atomic_long_or(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_or(i, v); + raw_atomic_long_or(i, v); } static __always_inline long @@ -1686,14 +1681,14 @@ atomic_long_fetch_or(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_or(i, v); + return raw_atomic_long_fetch_or(i, v); } static __always_inline long atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_or_acquire(i, v); + return raw_atomic_long_fetch_or_acquire(i, v); } static __always_inline long @@ -1701,21 +1696,21 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_or_release(i, v); + return raw_atomic_long_fetch_or_release(i, v); } static __always_inline long atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_or_relaxed(i, v); + return raw_atomic_long_fetch_or_relaxed(i, v); } static __always_inline void atomic_long_xor(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_xor(i, v); + raw_atomic_long_xor(i, v); } static __always_inline long @@ -1723,14 +1718,14 @@ atomic_long_fetch_xor(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_xor(i, v); + return raw_atomic_long_fetch_xor(i, v); } static __always_inline long atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_xor_acquire(i, v); + return raw_atomic_long_fetch_xor_acquire(i, v); } static __always_inline long @@ -1738,14 +1733,14 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_xor_release(i, v); + return raw_atomic_long_fetch_xor_release(i, v); } static __always_inline long atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_xor_relaxed(i, v); + return raw_atomic_long_fetch_xor_relaxed(i, v); } static __always_inline long @@ -1753,14 +1748,14 @@ atomic_long_xchg(atomic_long_t *v, long i) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_xchg(v, i); + return raw_atomic_long_xchg(v, i); } static __always_inline long atomic_long_xchg_acquire(atomic_long_t *v, long i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_xchg_acquire(v, i); + return raw_atomic_long_xchg_acquire(v, i); } static __always_inline long @@ -1768,14 +1763,14 @@ atomic_long_xchg_release(atomic_long_t *v, long i) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_xchg_release(v, i); + return raw_atomic_long_xchg_release(v, i); } static __always_inline long atomic_long_xchg_relaxed(atomic_long_t *v, long i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_xchg_relaxed(v, i); + return raw_atomic_long_xchg_relaxed(v, i); } static __always_inline long @@ -1783,14 +1778,14 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_cmpxchg(v, 
old, new); + return raw_atomic_long_cmpxchg(v, old, new); } static __always_inline long atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_cmpxchg_acquire(v, old, new); + return raw_atomic_long_cmpxchg_acquire(v, old, new); } static __always_inline long @@ -1798,14 +1793,14 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_cmpxchg_release(v, old, new); + return raw_atomic_long_cmpxchg_release(v, old, new); } static __always_inline long atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_cmpxchg_relaxed(v, old, new); + return raw_atomic_long_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -1814,7 +1809,7 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_long_try_cmpxchg(v, old, new); + return raw_atomic_long_try_cmpxchg(v, old, new); } static __always_inline bool @@ -1822,7 +1817,7 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_long_try_cmpxchg_acquire(v, old, new); + return raw_atomic_long_try_cmpxchg_acquire(v, old, new); } static __always_inline bool @@ -1831,7 +1826,7 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_long_try_cmpxchg_release(v, old, new); + return raw_atomic_long_try_cmpxchg_release(v, old, new); } static __always_inline bool @@ -1839,7 +1834,7 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_long_try_cmpxchg_relaxed(v, old, new); + return raw_atomic_long_try_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -1847,7 +1842,7 @@ atomic_long_sub_and_test(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_and_test(i, v); + return raw_atomic_long_sub_and_test(i, v); } static __always_inline bool @@ -1855,7 +1850,7 @@ atomic_long_dec_and_test(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_and_test(v); + return raw_atomic_long_dec_and_test(v); } static __always_inline bool @@ -1863,7 +1858,7 @@ atomic_long_inc_and_test(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_and_test(v); + return raw_atomic_long_inc_and_test(v); } static __always_inline bool @@ -1871,14 +1866,14 @@ atomic_long_add_negative(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_negative(i, v); + return raw_atomic_long_add_negative(i, v); } static __always_inline bool atomic_long_add_negative_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_negative_acquire(i, v); + return raw_atomic_long_add_negative_acquire(i, v); } static __always_inline bool @@ -1886,14 +1881,14 @@ atomic_long_add_negative_release(long i, atomic_long_t 
*v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_negative_release(i, v); + return raw_atomic_long_add_negative_release(i, v); } static __always_inline bool atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_negative_relaxed(i, v); + return raw_atomic_long_add_negative_relaxed(i, v); } static __always_inline long @@ -1901,7 +1896,7 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add_unless(v, a, u); + return raw_atomic_long_fetch_add_unless(v, a, u); } static __always_inline bool @@ -1909,7 +1904,7 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_unless(v, a, u); + return raw_atomic_long_add_unless(v, a, u); } static __always_inline bool @@ -1917,7 +1912,7 @@ atomic_long_inc_not_zero(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_not_zero(v); + return raw_atomic_long_inc_not_zero(v); } static __always_inline bool @@ -1925,7 +1920,7 @@ atomic_long_inc_unless_negative(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_unless_negative(v); + return raw_atomic_long_inc_unless_negative(v); } static __always_inline bool @@ -1933,7 +1928,7 @@ atomic_long_dec_unless_positive(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_unless_positive(v); + return raw_atomic_long_dec_unless_positive(v); } static __always_inline long @@ -1941,7 +1936,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_if_positive(v); + return raw_atomic_long_dec_if_positive(v); } #define xchg(ptr, ...) \ @@ -1949,14 +1944,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg(__ai_ptr, __VA_ARGS__); \ + raw_xchg(__ai_ptr, __VA_ARGS__); \ }) #define xchg_acquire(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \ + raw_xchg_acquire(__ai_ptr, __VA_ARGS__); \ }) #define xchg_release(ptr, ...) \ @@ -1964,14 +1959,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_release(__ai_ptr, __VA_ARGS__); \ + raw_xchg_release(__ai_ptr, __VA_ARGS__); \ }) #define xchg_relaxed(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \ + raw_xchg_relaxed(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg(ptr, ...) \ @@ -1979,14 +1974,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg_acquire(ptr, ...) 
\ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg_release(ptr, ...) \ @@ -1994,14 +1989,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg_relaxed(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64(ptr, ...) \ @@ -2009,14 +2004,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64_acquire(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64_release(ptr, ...) \ @@ -2024,14 +2019,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64_relaxed(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128(ptr, ...) \ @@ -2039,14 +2034,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128_acquire(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128_acquire(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128_acquire(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128_release(ptr, ...) \ @@ -2054,14 +2049,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128_release(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128_release(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128_relaxed(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128_relaxed(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128_relaxed(__ai_ptr, __VA_ARGS__); \ }) #define try_cmpxchg(ptr, oldp, ...) \ @@ -2071,7 +2066,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg_acquire(ptr, oldp, ...) 
\ @@ -2080,7 +2075,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg_release(ptr, oldp, ...) \ @@ -2090,7 +2085,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg_relaxed(ptr, oldp, ...) \ @@ -2099,7 +2094,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64(ptr, oldp, ...) \ @@ -2109,7 +2104,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64_acquire(ptr, oldp, ...) \ @@ -2118,7 +2113,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64_release(ptr, oldp, ...) \ @@ -2128,7 +2123,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64_relaxed(ptr, oldp, ...) \ @@ -2137,7 +2132,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128(ptr, oldp, ...) \ @@ -2147,7 +2142,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128_acquire(ptr, oldp, ...) \ @@ -2156,7 +2151,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128_release(ptr, oldp, ...) 
\ @@ -2166,7 +2161,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128_relaxed(ptr, oldp, ...) \ @@ -2175,28 +2170,28 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define cmpxchg_local(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg_local(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64_local(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128_local(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128_local(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128_local(__ai_ptr, __VA_ARGS__); \ }) #define sync_cmpxchg(ptr, ...) \ @@ -2204,7 +2199,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ + raw_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) #define try_cmpxchg_local(ptr, oldp, ...) \ @@ -2213,7 +2208,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64_local(ptr, oldp, ...) \ @@ -2222,7 +2217,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128_local(ptr, oldp, ...) 
\ @@ -2231,9 +2226,9 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// 3611991b015450e119bcd7417a9431af7f3ba13c +// f6502977180430e61c1a7c4e5e665f04f501fb8d diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h new file mode 100644 index 0000000000000..83ff0269657e7 --- /dev/null +++ b/include/linux/atomic/atomic-raw.h @@ -0,0 +1,1645 @@ +// SPDX-License-Identifier: GPL-2.0 + +// Generated by scripts/atomic/gen-atomic-raw.sh +// DO NOT MODIFY THIS FILE DIRECTLY + +#ifndef _LINUX_ATOMIC_RAW_H +#define _LINUX_ATOMIC_RAW_H + +static __always_inline int +raw_atomic_read(const atomic_t *v) +{ + return arch_atomic_read(v); +} + +static __always_inline int +raw_atomic_read_acquire(const atomic_t *v) +{ + return arch_atomic_read_acquire(v); +} + +static __always_inline void +raw_atomic_set(atomic_t *v, int i) +{ + arch_atomic_set(v, i); +} + +static __always_inline void +raw_atomic_set_release(atomic_t *v, int i) +{ + arch_atomic_set_release(v, i); +} + +static __always_inline void +raw_atomic_add(int i, atomic_t *v) +{ + arch_atomic_add(i, v); +} + +static __always_inline int +raw_atomic_add_return(int i, atomic_t *v) +{ + return arch_atomic_add_return(i, v); +} + +static __always_inline int +raw_atomic_add_return_acquire(int i, atomic_t *v) +{ + return arch_atomic_add_return_acquire(i, v); +} + +static __always_inline int +raw_atomic_add_return_release(int i, atomic_t *v) +{ + return arch_atomic_add_return_release(i, v); +} + +static __always_inline int +raw_atomic_add_return_relaxed(int i, atomic_t *v) +{ + return arch_atomic_add_return_relaxed(i, v); +} + +static __always_inline int +raw_atomic_fetch_add(int i, atomic_t *v) +{ + return arch_atomic_fetch_add(i, v); +} + +static __always_inline int +raw_atomic_fetch_add_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_add_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_add_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_add_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_add_relaxed(i, v); +} + +static __always_inline void +raw_atomic_sub(int i, atomic_t *v) +{ + arch_atomic_sub(i, v); +} + +static __always_inline int +raw_atomic_sub_return(int i, atomic_t *v) +{ + return arch_atomic_sub_return(i, v); +} + +static __always_inline int +raw_atomic_sub_return_acquire(int i, atomic_t *v) +{ + return arch_atomic_sub_return_acquire(i, v); +} + +static __always_inline int +raw_atomic_sub_return_release(int i, atomic_t *v) +{ + return arch_atomic_sub_return_release(i, v); +} + +static __always_inline int +raw_atomic_sub_return_relaxed(int i, atomic_t *v) +{ + return arch_atomic_sub_return_relaxed(i, v); +} + +static __always_inline int +raw_atomic_fetch_sub(int i, atomic_t *v) +{ + return arch_atomic_fetch_sub(i, v); +} + +static __always_inline int +raw_atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_sub_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_sub_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_sub_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ + return 
arch_atomic_fetch_sub_relaxed(i, v); +} + +static __always_inline void +raw_atomic_inc(atomic_t *v) +{ + arch_atomic_inc(v); +} + +static __always_inline int +raw_atomic_inc_return(atomic_t *v) +{ + return arch_atomic_inc_return(v); +} + +static __always_inline int +raw_atomic_inc_return_acquire(atomic_t *v) +{ + return arch_atomic_inc_return_acquire(v); +} + +static __always_inline int +raw_atomic_inc_return_release(atomic_t *v) +{ + return arch_atomic_inc_return_release(v); +} + +static __always_inline int +raw_atomic_inc_return_relaxed(atomic_t *v) +{ + return arch_atomic_inc_return_relaxed(v); +} + +static __always_inline int +raw_atomic_fetch_inc(atomic_t *v) +{ + return arch_atomic_fetch_inc(v); +} + +static __always_inline int +raw_atomic_fetch_inc_acquire(atomic_t *v) +{ + return arch_atomic_fetch_inc_acquire(v); +} + +static __always_inline int +raw_atomic_fetch_inc_release(atomic_t *v) +{ + return arch_atomic_fetch_inc_release(v); +} + +static __always_inline int +raw_atomic_fetch_inc_relaxed(atomic_t *v) +{ + return arch_atomic_fetch_inc_relaxed(v); +} + +static __always_inline void +raw_atomic_dec(atomic_t *v) +{ + arch_atomic_dec(v); +} + +static __always_inline int +raw_atomic_dec_return(atomic_t *v) +{ + return arch_atomic_dec_return(v); +} + +static __always_inline int +raw_atomic_dec_return_acquire(atomic_t *v) +{ + return arch_atomic_dec_return_acquire(v); +} + +static __always_inline int +raw_atomic_dec_return_release(atomic_t *v) +{ + return arch_atomic_dec_return_release(v); +} + +static __always_inline int +raw_atomic_dec_return_relaxed(atomic_t *v) +{ + return arch_atomic_dec_return_relaxed(v); +} + +static __always_inline int +raw_atomic_fetch_dec(atomic_t *v) +{ + return arch_atomic_fetch_dec(v); +} + +static __always_inline int +raw_atomic_fetch_dec_acquire(atomic_t *v) +{ + return arch_atomic_fetch_dec_acquire(v); +} + +static __always_inline int +raw_atomic_fetch_dec_release(atomic_t *v) +{ + return arch_atomic_fetch_dec_release(v); +} + +static __always_inline int +raw_atomic_fetch_dec_relaxed(atomic_t *v) +{ + return arch_atomic_fetch_dec_relaxed(v); +} + +static __always_inline void +raw_atomic_and(int i, atomic_t *v) +{ + arch_atomic_and(i, v); +} + +static __always_inline int +raw_atomic_fetch_and(int i, atomic_t *v) +{ + return arch_atomic_fetch_and(i, v); +} + +static __always_inline int +raw_atomic_fetch_and_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_and_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_and_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_and_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_and_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_and_relaxed(i, v); +} + +static __always_inline void +raw_atomic_andnot(int i, atomic_t *v) +{ + arch_atomic_andnot(i, v); +} + +static __always_inline int +raw_atomic_fetch_andnot(int i, atomic_t *v) +{ + return arch_atomic_fetch_andnot(i, v); +} + +static __always_inline int +raw_atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_andnot_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_andnot_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_andnot_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_andnot_relaxed(i, v); +} + +static __always_inline void +raw_atomic_or(int i, atomic_t *v) +{ + arch_atomic_or(i, v); +} + +static __always_inline int +raw_atomic_fetch_or(int i, atomic_t *v) +{ + return 
arch_atomic_fetch_or(i, v); +} + +static __always_inline int +raw_atomic_fetch_or_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_or_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_or_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_or_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_or_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_or_relaxed(i, v); +} + +static __always_inline void +raw_atomic_xor(int i, atomic_t *v) +{ + arch_atomic_xor(i, v); +} + +static __always_inline int +raw_atomic_fetch_xor(int i, atomic_t *v) +{ + return arch_atomic_fetch_xor(i, v); +} + +static __always_inline int +raw_atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_xor_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_xor_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_xor_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_xor_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_xor_relaxed(i, v); +} + +static __always_inline int +raw_atomic_xchg(atomic_t *v, int i) +{ + return arch_atomic_xchg(v, i); +} + +static __always_inline int +raw_atomic_xchg_acquire(atomic_t *v, int i) +{ + return arch_atomic_xchg_acquire(v, i); +} + +static __always_inline int +raw_atomic_xchg_release(atomic_t *v, int i) +{ + return arch_atomic_xchg_release(v, i); +} + +static __always_inline int +raw_atomic_xchg_relaxed(atomic_t *v, int i) +{ + return arch_atomic_xchg_relaxed(v, i); +} + +static __always_inline int +raw_atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return arch_atomic_cmpxchg(v, old, new); +} + +static __always_inline int +raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + return arch_atomic_cmpxchg_acquire(v, old, new); +} + +static __always_inline int +raw_atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + return arch_atomic_cmpxchg_release(v, old, new); +} + +static __always_inline int +raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + return arch_atomic_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + return arch_atomic_try_cmpxchg(v, old, new); +} + +static __always_inline bool +raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + return arch_atomic_try_cmpxchg_acquire(v, old, new); +} + +static __always_inline bool +raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + return arch_atomic_try_cmpxchg_release(v, old, new); +} + +static __always_inline bool +raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + return arch_atomic_try_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic_sub_and_test(int i, atomic_t *v) +{ + return arch_atomic_sub_and_test(i, v); +} + +static __always_inline bool +raw_atomic_dec_and_test(atomic_t *v) +{ + return arch_atomic_dec_and_test(v); +} + +static __always_inline bool +raw_atomic_inc_and_test(atomic_t *v) +{ + return arch_atomic_inc_and_test(v); +} + +static __always_inline bool +raw_atomic_add_negative(int i, atomic_t *v) +{ + return arch_atomic_add_negative(i, v); +} + +static __always_inline bool +raw_atomic_add_negative_acquire(int i, atomic_t *v) +{ + return arch_atomic_add_negative_acquire(i, v); +} + +static __always_inline bool +raw_atomic_add_negative_release(int i, atomic_t *v) +{ + return arch_atomic_add_negative_release(i, v); +} + +static __always_inline bool +raw_atomic_add_negative_relaxed(int i, atomic_t *v) +{ + return 
arch_atomic_add_negative_relaxed(i, v); +} + +static __always_inline int +raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + return arch_atomic_fetch_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic_add_unless(atomic_t *v, int a, int u) +{ + return arch_atomic_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic_inc_not_zero(atomic_t *v) +{ + return arch_atomic_inc_not_zero(v); +} + +static __always_inline bool +raw_atomic_inc_unless_negative(atomic_t *v) +{ + return arch_atomic_inc_unless_negative(v); +} + +static __always_inline bool +raw_atomic_dec_unless_positive(atomic_t *v) +{ + return arch_atomic_dec_unless_positive(v); +} + +static __always_inline int +raw_atomic_dec_if_positive(atomic_t *v) +{ + return arch_atomic_dec_if_positive(v); +} + +static __always_inline s64 +raw_atomic64_read(const atomic64_t *v) +{ + return arch_atomic64_read(v); +} + +static __always_inline s64 +raw_atomic64_read_acquire(const atomic64_t *v) +{ + return arch_atomic64_read_acquire(v); +} + +static __always_inline void +raw_atomic64_set(atomic64_t *v, s64 i) +{ + arch_atomic64_set(v, i); +} + +static __always_inline void +raw_atomic64_set_release(atomic64_t *v, s64 i) +{ + arch_atomic64_set_release(v, i); +} + +static __always_inline void +raw_atomic64_add(s64 i, atomic64_t *v) +{ + arch_atomic64_add(i, v); +} + +static __always_inline s64 +raw_atomic64_add_return(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return(i, v); +} + +static __always_inline s64 +raw_atomic64_add_return_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_add_return_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_release(i, v); +} + +static __always_inline s64 +raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_relaxed(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_add(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_add_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_add_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_add_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_sub(s64 i, atomic64_t *v) +{ + arch_atomic64_sub(i, v); +} + +static __always_inline s64 +raw_atomic64_sub_return(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_return(i, v); +} + +static __always_inline s64 +raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_return_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_return_release(i, v); +} + +static __always_inline s64 +raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_return_relaxed(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_sub(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_sub_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_sub_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_sub_relaxed(s64 i, 
atomic64_t *v) +{ + return arch_atomic64_fetch_sub_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_inc(atomic64_t *v) +{ + arch_atomic64_inc(v); +} + +static __always_inline s64 +raw_atomic64_inc_return(atomic64_t *v) +{ + return arch_atomic64_inc_return(v); +} + +static __always_inline s64 +raw_atomic64_inc_return_acquire(atomic64_t *v) +{ + return arch_atomic64_inc_return_acquire(v); +} + +static __always_inline s64 +raw_atomic64_inc_return_release(atomic64_t *v) +{ + return arch_atomic64_inc_return_release(v); +} + +static __always_inline s64 +raw_atomic64_inc_return_relaxed(atomic64_t *v) +{ + return arch_atomic64_inc_return_relaxed(v); +} + +static __always_inline s64 +raw_atomic64_fetch_inc(atomic64_t *v) +{ + return arch_atomic64_fetch_inc(v); +} + +static __always_inline s64 +raw_atomic64_fetch_inc_acquire(atomic64_t *v) +{ + return arch_atomic64_fetch_inc_acquire(v); +} + +static __always_inline s64 +raw_atomic64_fetch_inc_release(atomic64_t *v) +{ + return arch_atomic64_fetch_inc_release(v); +} + +static __always_inline s64 +raw_atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + return arch_atomic64_fetch_inc_relaxed(v); +} + +static __always_inline void +raw_atomic64_dec(atomic64_t *v) +{ + arch_atomic64_dec(v); +} + +static __always_inline s64 +raw_atomic64_dec_return(atomic64_t *v) +{ + return arch_atomic64_dec_return(v); +} + +static __always_inline s64 +raw_atomic64_dec_return_acquire(atomic64_t *v) +{ + return arch_atomic64_dec_return_acquire(v); +} + +static __always_inline s64 +raw_atomic64_dec_return_release(atomic64_t *v) +{ + return arch_atomic64_dec_return_release(v); +} + +static __always_inline s64 +raw_atomic64_dec_return_relaxed(atomic64_t *v) +{ + return arch_atomic64_dec_return_relaxed(v); +} + +static __always_inline s64 +raw_atomic64_fetch_dec(atomic64_t *v) +{ + return arch_atomic64_fetch_dec(v); +} + +static __always_inline s64 +raw_atomic64_fetch_dec_acquire(atomic64_t *v) +{ + return arch_atomic64_fetch_dec_acquire(v); +} + +static __always_inline s64 +raw_atomic64_fetch_dec_release(atomic64_t *v) +{ + return arch_atomic64_fetch_dec_release(v); +} + +static __always_inline s64 +raw_atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + return arch_atomic64_fetch_dec_relaxed(v); +} + +static __always_inline void +raw_atomic64_and(s64 i, atomic64_t *v) +{ + arch_atomic64_and(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_and(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_and(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_and_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_and_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_and_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_andnot(s64 i, atomic64_t *v) +{ + arch_atomic64_andnot(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_andnot(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_andnot_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_andnot_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + return 
arch_atomic64_fetch_andnot_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_or(s64 i, atomic64_t *v) +{ + arch_atomic64_or(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_or(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_or(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_or_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_or_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_or_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_xor(s64 i, atomic64_t *v) +{ + arch_atomic64_xor(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_xor(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_xor_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_xor_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_xor_relaxed(i, v); +} + +static __always_inline s64 +raw_atomic64_xchg(atomic64_t *v, s64 i) +{ + return arch_atomic64_xchg(v, i); +} + +static __always_inline s64 +raw_atomic64_xchg_acquire(atomic64_t *v, s64 i) +{ + return arch_atomic64_xchg_acquire(v, i); +} + +static __always_inline s64 +raw_atomic64_xchg_release(atomic64_t *v, s64 i) +{ + return arch_atomic64_xchg_release(v, i); +} + +static __always_inline s64 +raw_atomic64_xchg_relaxed(atomic64_t *v, s64 i) +{ + return arch_atomic64_xchg_relaxed(v, i); +} + +static __always_inline s64 +raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + return arch_atomic64_cmpxchg(v, old, new); +} + +static __always_inline s64 +raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + return arch_atomic64_cmpxchg_acquire(v, old, new); +} + +static __always_inline s64 +raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + return arch_atomic64_cmpxchg_release(v, old, new); +} + +static __always_inline s64 +raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + return arch_atomic64_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + return arch_atomic64_try_cmpxchg(v, old, new); +} + +static __always_inline bool +raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + return arch_atomic64_try_cmpxchg_acquire(v, old, new); +} + +static __always_inline bool +raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + return arch_atomic64_try_cmpxchg_release(v, old, new); +} + +static __always_inline bool +raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) +{ + return arch_atomic64_try_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_and_test(i, v); +} + +static __always_inline bool +raw_atomic64_dec_and_test(atomic64_t *v) +{ + return arch_atomic64_dec_and_test(v); +} + +static __always_inline bool +raw_atomic64_inc_and_test(atomic64_t *v) +{ + return arch_atomic64_inc_and_test(v); +} + +static __always_inline bool +raw_atomic64_add_negative(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_negative(i, v); +} + +static 
__always_inline bool +raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_negative_acquire(i, v); +} + +static __always_inline bool +raw_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_negative_release(i, v); +} + +static __always_inline bool +raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_negative_relaxed(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return arch_atomic64_fetch_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return arch_atomic64_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic64_inc_not_zero(atomic64_t *v) +{ + return arch_atomic64_inc_not_zero(v); +} + +static __always_inline bool +raw_atomic64_inc_unless_negative(atomic64_t *v) +{ + return arch_atomic64_inc_unless_negative(v); +} + +static __always_inline bool +raw_atomic64_dec_unless_positive(atomic64_t *v) +{ + return arch_atomic64_dec_unless_positive(v); +} + +static __always_inline s64 +raw_atomic64_dec_if_positive(atomic64_t *v) +{ + return arch_atomic64_dec_if_positive(v); +} + +static __always_inline long +raw_atomic_long_read(const atomic_long_t *v) +{ + return arch_atomic_long_read(v); +} + +static __always_inline long +raw_atomic_long_read_acquire(const atomic_long_t *v) +{ + return arch_atomic_long_read_acquire(v); +} + +static __always_inline void +raw_atomic_long_set(atomic_long_t *v, long i) +{ + arch_atomic_long_set(v, i); +} + +static __always_inline void +raw_atomic_long_set_release(atomic_long_t *v, long i) +{ + arch_atomic_long_set_release(v, i); +} + +static __always_inline void +raw_atomic_long_add(long i, atomic_long_t *v) +{ + arch_atomic_long_add(i, v); +} + +static __always_inline long +raw_atomic_long_add_return(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_return(i, v); +} + +static __always_inline long +raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_return_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_add_return_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_return_release(i, v); +} + +static __always_inline long +raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_return_relaxed(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_add(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_add_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_add_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_add_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_sub(long i, atomic_long_t *v) +{ + arch_atomic_long_sub(i, v); +} + +static __always_inline long +raw_atomic_long_sub_return(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_return(i, v); +} + +static __always_inline long +raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_return_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_sub_return_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_return_release(i, v); +} + +static 
__always_inline long +raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_return_relaxed(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_sub(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_sub(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_sub_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_sub_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_sub_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_inc(atomic_long_t *v) +{ + arch_atomic_long_inc(v); +} + +static __always_inline long +raw_atomic_long_inc_return(atomic_long_t *v) +{ + return arch_atomic_long_inc_return(v); +} + +static __always_inline long +raw_atomic_long_inc_return_acquire(atomic_long_t *v) +{ + return arch_atomic_long_inc_return_acquire(v); +} + +static __always_inline long +raw_atomic_long_inc_return_release(atomic_long_t *v) +{ + return arch_atomic_long_inc_return_release(v); +} + +static __always_inline long +raw_atomic_long_inc_return_relaxed(atomic_long_t *v) +{ + return arch_atomic_long_inc_return_relaxed(v); +} + +static __always_inline long +raw_atomic_long_fetch_inc(atomic_long_t *v) +{ + return arch_atomic_long_fetch_inc(v); +} + +static __always_inline long +raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) +{ + return arch_atomic_long_fetch_inc_acquire(v); +} + +static __always_inline long +raw_atomic_long_fetch_inc_release(atomic_long_t *v) +{ + return arch_atomic_long_fetch_inc_release(v); +} + +static __always_inline long +raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) +{ + return arch_atomic_long_fetch_inc_relaxed(v); +} + +static __always_inline void +raw_atomic_long_dec(atomic_long_t *v) +{ + arch_atomic_long_dec(v); +} + +static __always_inline long +raw_atomic_long_dec_return(atomic_long_t *v) +{ + return arch_atomic_long_dec_return(v); +} + +static __always_inline long +raw_atomic_long_dec_return_acquire(atomic_long_t *v) +{ + return arch_atomic_long_dec_return_acquire(v); +} + +static __always_inline long +raw_atomic_long_dec_return_release(atomic_long_t *v) +{ + return arch_atomic_long_dec_return_release(v); +} + +static __always_inline long +raw_atomic_long_dec_return_relaxed(atomic_long_t *v) +{ + return arch_atomic_long_dec_return_relaxed(v); +} + +static __always_inline long +raw_atomic_long_fetch_dec(atomic_long_t *v) +{ + return arch_atomic_long_fetch_dec(v); +} + +static __always_inline long +raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) +{ + return arch_atomic_long_fetch_dec_acquire(v); +} + +static __always_inline long +raw_atomic_long_fetch_dec_release(atomic_long_t *v) +{ + return arch_atomic_long_fetch_dec_release(v); +} + +static __always_inline long +raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) +{ + return arch_atomic_long_fetch_dec_relaxed(v); +} + +static __always_inline void +raw_atomic_long_and(long i, atomic_long_t *v) +{ + arch_atomic_long_and(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_and(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_and(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_and_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_and_release(long 
i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_and_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_and_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_andnot(long i, atomic_long_t *v) +{ + arch_atomic_long_andnot(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_andnot(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_andnot_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_andnot_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_andnot_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_or(long i, atomic_long_t *v) +{ + arch_atomic_long_or(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_or(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_or(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_or_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_or_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_or_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_xor(long i, atomic_long_t *v) +{ + arch_atomic_long_xor(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_xor(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_xor(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_xor_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_xor_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_xor_relaxed(i, v); +} + +static __always_inline long +raw_atomic_long_xchg(atomic_long_t *v, long i) +{ + return arch_atomic_long_xchg(v, i); +} + +static __always_inline long +raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) +{ + return arch_atomic_long_xchg_acquire(v, i); +} + +static __always_inline long +raw_atomic_long_xchg_release(atomic_long_t *v, long i) +{ + return arch_atomic_long_xchg_release(v, i); +} + +static __always_inline long +raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) +{ + return arch_atomic_long_xchg_relaxed(v, i); +} + +static __always_inline long +raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) +{ + return arch_atomic_long_cmpxchg(v, old, new); +} + +static __always_inline long +raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) +{ + return arch_atomic_long_cmpxchg_acquire(v, old, new); +} + +static __always_inline long +raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) +{ + return arch_atomic_long_cmpxchg_release(v, old, new); +} + +static __always_inline long +raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) +{ + return arch_atomic_long_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool 
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) +{ + return arch_atomic_long_try_cmpxchg(v, old, new); +} + +static __always_inline bool +raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) +{ + return arch_atomic_long_try_cmpxchg_acquire(v, old, new); +} + +static __always_inline bool +raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) +{ + return arch_atomic_long_try_cmpxchg_release(v, old, new); +} + +static __always_inline bool +raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) +{ + return arch_atomic_long_try_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic_long_sub_and_test(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_and_test(i, v); +} + +static __always_inline bool +raw_atomic_long_dec_and_test(atomic_long_t *v) +{ + return arch_atomic_long_dec_and_test(v); +} + +static __always_inline bool +raw_atomic_long_inc_and_test(atomic_long_t *v) +{ + return arch_atomic_long_inc_and_test(v); +} + +static __always_inline bool +raw_atomic_long_add_negative(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_negative(i, v); +} + +static __always_inline bool +raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_negative_acquire(i, v); +} + +static __always_inline bool +raw_atomic_long_add_negative_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_negative_release(i, v); +} + +static __always_inline bool +raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_negative_relaxed(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) +{ + return arch_atomic_long_fetch_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) +{ + return arch_atomic_long_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic_long_inc_not_zero(atomic_long_t *v) +{ + return arch_atomic_long_inc_not_zero(v); +} + +static __always_inline bool +raw_atomic_long_inc_unless_negative(atomic_long_t *v) +{ + return arch_atomic_long_inc_unless_negative(v); +} + +static __always_inline bool +raw_atomic_long_dec_unless_positive(atomic_long_t *v) +{ + return arch_atomic_long_dec_unless_positive(v); +} + +static __always_inline long +raw_atomic_long_dec_if_positive(atomic_long_t *v) +{ + return arch_atomic_long_dec_if_positive(v); +} + +#define raw_xchg(...) \ + arch_xchg(__VA_ARGS__) + +#define raw_xchg_acquire(...) \ + arch_xchg_acquire(__VA_ARGS__) + +#define raw_xchg_release(...) \ + arch_xchg_release(__VA_ARGS__) + +#define raw_xchg_relaxed(...) \ + arch_xchg_relaxed(__VA_ARGS__) + +#define raw_cmpxchg(...) \ + arch_cmpxchg(__VA_ARGS__) + +#define raw_cmpxchg_acquire(...) \ + arch_cmpxchg_acquire(__VA_ARGS__) + +#define raw_cmpxchg_release(...) \ + arch_cmpxchg_release(__VA_ARGS__) + +#define raw_cmpxchg_relaxed(...) \ + arch_cmpxchg_relaxed(__VA_ARGS__) + +#define raw_cmpxchg64(...) \ + arch_cmpxchg64(__VA_ARGS__) + +#define raw_cmpxchg64_acquire(...) \ + arch_cmpxchg64_acquire(__VA_ARGS__) + +#define raw_cmpxchg64_release(...) \ + arch_cmpxchg64_release(__VA_ARGS__) + +#define raw_cmpxchg64_relaxed(...) \ + arch_cmpxchg64_relaxed(__VA_ARGS__) + +#define raw_cmpxchg128(...) \ + arch_cmpxchg128(__VA_ARGS__) + +#define raw_cmpxchg128_acquire(...) \ + arch_cmpxchg128_acquire(__VA_ARGS__) + +#define raw_cmpxchg128_release(...) 
\ + arch_cmpxchg128_release(__VA_ARGS__) + +#define raw_cmpxchg128_relaxed(...) \ + arch_cmpxchg128_relaxed(__VA_ARGS__) + +#define raw_try_cmpxchg(...) \ + arch_try_cmpxchg(__VA_ARGS__) + +#define raw_try_cmpxchg_acquire(...) \ + arch_try_cmpxchg_acquire(__VA_ARGS__) + +#define raw_try_cmpxchg_release(...) \ + arch_try_cmpxchg_release(__VA_ARGS__) + +#define raw_try_cmpxchg_relaxed(...) \ + arch_try_cmpxchg_relaxed(__VA_ARGS__) + +#define raw_try_cmpxchg64(...) \ + arch_try_cmpxchg64(__VA_ARGS__) + +#define raw_try_cmpxchg64_acquire(...) \ + arch_try_cmpxchg64_acquire(__VA_ARGS__) + +#define raw_try_cmpxchg64_release(...) \ + arch_try_cmpxchg64_release(__VA_ARGS__) + +#define raw_try_cmpxchg64_relaxed(...) \ + arch_try_cmpxchg64_relaxed(__VA_ARGS__) + +#define raw_try_cmpxchg128(...) \ + arch_try_cmpxchg128(__VA_ARGS__) + +#define raw_try_cmpxchg128_acquire(...) \ + arch_try_cmpxchg128_acquire(__VA_ARGS__) + +#define raw_try_cmpxchg128_release(...) \ + arch_try_cmpxchg128_release(__VA_ARGS__) + +#define raw_try_cmpxchg128_relaxed(...) \ + arch_try_cmpxchg128_relaxed(__VA_ARGS__) + +#define raw_cmpxchg_local(...) \ + arch_cmpxchg_local(__VA_ARGS__) + +#define raw_cmpxchg64_local(...) \ + arch_cmpxchg64_local(__VA_ARGS__) + +#define raw_cmpxchg128_local(...) \ + arch_cmpxchg128_local(__VA_ARGS__) + +#define raw_sync_cmpxchg(...) \ + arch_sync_cmpxchg(__VA_ARGS__) + +#define raw_try_cmpxchg_local(...) \ + arch_try_cmpxchg_local(__VA_ARGS__) + +#define raw_try_cmpxchg64_local(...) \ + arch_try_cmpxchg64_local(__VA_ARGS__) + +#define raw_try_cmpxchg128_local(...) \ + arch_try_cmpxchg128_local(__VA_ARGS__) + +#endif /* _LINUX_ATOMIC_RAW_H */ +// 01d54200571b3857755a07c10074a4fd58cef6b1 diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh index 68557bfbbdc5e..93c949aa9e544 100755 --- a/scripts/atomic/gen-atomic-instrumented.sh +++ b/scripts/atomic/gen-atomic-instrumented.sh @@ -73,7 +73,7 @@ static __always_inline ${ret} ${atomicname}(${params}) { ${checks} - ${retstmt}arch_${atomicname}(${args}); + ${retstmt}raw_${atomicname}(${args}); } EOF @@ -105,7 +105,7 @@ EOF cat < ${LINUXDIR}/include/${header} From patchwork Mon May 22 12:24:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 97415 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1424999vqo; Mon, 22 May 2023 05:54:39 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7Gk6UBihJGZYdHukcM5Fk1rl39vrvd9vZulf63dsif8NzPqkmrEcIicaxzkJKDHvWJhNBj X-Received: by 2002:a05:6a21:1514:b0:105:63b0:5bf8 with SMTP id nq20-20020a056a21151400b0010563b05bf8mr8179896pzb.18.1684760079428; Mon, 22 May 2023 05:54:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684760079; cv=none; d=google.com; s=arc-20160816; b=Fbn4+/bumsuWMg6EWi9iZAz6esClAWjb27r7t948SrQQ+RHvyfayF8bHLYRjPXikFs DQsCmh77+zN4vFBZu7C5L3scpYedDtYuJAR+DXqbt1AIRb3rEzxGC9FJi32Y3uxKFFPB CfbZlapF2F+c0C9iKQIi/Jr23GoMCdeYp59iFrbYdsjyVyzYcAsm4YohUu2ejxBwhZFr /vcnyiDH22gXYqe0saALFlaYqEc+7JN7Tzqd6Gw/g7P+JzYEYdHogR0q+tZY1/0N0mPu qnE5gbDwPHWGaBYRt61yEzm+7Kalf7saJuP8/OfuiVYRIT2Kmxfm1Bo8qfuXPyd773nT UNIg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=uh6m7Z1pa6rJt079Wn78dOeBJFsq1zoeOE77aCXv2gs=; 
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 18/26] locking/atomic: treewide: use raw_atomic*_()
Date: Mon, 22 May 2023 13:24:21 +0100
Message-Id: <20230522122429.1915021-19-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Now that we have raw_atomic*_() definitions, there's no need to use arch_atomic*_() definitions outside of the low-level atomic definitions.

Move treewide users of arch_atomic*_() over to the equivalent raw_atomic*_().
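To make the conversion pattern concrete, here is a minimal user-space sketch (illustrative only, not taken from the kernel tree: the scaffolding and the consume_flag() caller are invented for this example, and the GCC/Clang __atomic builtins stand in for a real arch_ implementation). Callers that previously invoked arch_atomic*_() directly now go through the raw_atomic*_() wrappers, which are trivial pass-throughs:

#include <stdbool.h>

typedef struct { int counter; } atomic_t;

/* arch_*(): the architecture's uninstrumented implementation. */
static inline int arch_atomic_read(const atomic_t *v)
{
	return __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
}

static inline void arch_atomic_set(atomic_t *v, int i)
{
	__atomic_store_n(&v->counter, i, __ATOMIC_RELAXED);
}

/* raw_*(): trivial wrappers with identical ordering semantics. */
static inline int raw_atomic_read(const atomic_t *v)
{
	return arch_atomic_read(v);
}

static inline void raw_atomic_set(atomic_t *v, int i)
{
	arch_atomic_set(v, i);
}

/*
 * A hypothetical caller after conversion: where it used to call
 * arch_atomic_read()/arch_atomic_set(), it now uses the raw_*()
 * equivalents.
 */
static bool consume_flag(atomic_t *flag)
{
	if (!raw_atomic_read(flag))
		return false;
	raw_atomic_set(flag, 0);
	return true;
}

Because the raw_*() layer adds no instrumentation, it remains safe to use from noinstr code such as the NMI, MCE, and context-tracking paths converted below.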
There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Cc: Paul E. McKenney --- arch/powerpc/kernel/smp.c | 12 ++++++------ arch/x86/kernel/alternative.c | 4 ++-- arch/x86/kernel/cpu/mce/core.c | 16 ++++++++-------- arch/x86/kernel/nmi.c | 2 +- arch/x86/kernel/pvclock.c | 4 ++-- arch/x86/kvm/x86.c | 2 +- include/asm-generic/bitops/atomic.h | 12 ++++++------ include/asm-generic/bitops/lock.h | 8 ++++---- include/linux/context_tracking.h | 4 ++-- include/linux/context_tracking_state.h | 2 +- include/linux/cpumask.h | 2 +- include/linux/jump_label.h | 2 +- kernel/context_tracking.c | 12 ++++++------ kernel/sched/clock.c | 2 +- 14 files changed, 42 insertions(+), 42 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 265801a3e94cf..e8965f18686f0 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -417,9 +417,9 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags) { raw_local_irq_save(*flags); hard_irq_disable(); - while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) { + while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) { raw_local_irq_restore(*flags); - spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0); + spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0); raw_local_irq_save(*flags); hard_irq_disable(); } @@ -427,15 +427,15 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags) noinstr static void nmi_ipi_lock(void) { - while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) - spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0); + while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) + spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0); } noinstr static void nmi_ipi_unlock(void) { smp_mb(); - WARN_ON(arch_atomic_read(&__nmi_ipi_lock) != 1); - arch_atomic_set(&__nmi_ipi_lock, 0); + WARN_ON(raw_atomic_read(&__nmi_ipi_lock) != 1); + raw_atomic_set(&__nmi_ipi_lock, 0); } noinstr static void nmi_ipi_unlock_end(unsigned long *flags) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index f615e0cb6d932..18f16e93838fe 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -1799,7 +1799,7 @@ struct bp_patching_desc *try_get_desc(void) { struct bp_patching_desc *desc = &bp_desc; - if (!arch_atomic_inc_not_zero(&desc->refs)) + if (!raw_atomic_inc_not_zero(&desc->refs)) return NULL; return desc; @@ -1810,7 +1810,7 @@ static __always_inline void put_desc(void) struct bp_patching_desc *desc = &bp_desc; smp_mb__before_atomic(); - arch_atomic_dec(&desc->refs); + raw_atomic_dec(&desc->refs); } static __always_inline void *text_poke_addr(struct text_poke_loc *tp) diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 2eec60f50057a..ab156e6e71208 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1022,12 +1022,12 @@ static noinstr int mce_start(int *no_way_out) if (!timeout) return ret; - arch_atomic_add(*no_way_out, &global_nwo); + raw_atomic_add(*no_way_out, &global_nwo); /* * Rely on the implied barrier below, such that global_nwo * is updated before mce_callin. */ - order = arch_atomic_inc_return(&mce_callin); + order = raw_atomic_inc_return(&mce_callin); arch_cpumask_clear_cpu(smp_processor_id(), &mce_missing_cpus); /* Enable instrumentation around calls to external facilities */ @@ -1036,10 +1036,10 @@ static noinstr int mce_start(int *no_way_out) /* * Wait for everyone. 
*/ - while (arch_atomic_read(&mce_callin) != num_online_cpus()) { + while (raw_atomic_read(&mce_callin) != num_online_cpus()) { if (mce_timed_out(&timeout, "Timeout: Not all CPUs entered broadcast exception handler")) { - arch_atomic_set(&global_nwo, 0); + raw_atomic_set(&global_nwo, 0); goto out; } ndelay(SPINUNIT); @@ -1054,7 +1054,7 @@ static noinstr int mce_start(int *no_way_out) /* * Monarch: Starts executing now, the others wait. */ - arch_atomic_set(&mce_executing, 1); + raw_atomic_set(&mce_executing, 1); } else { /* * Subject: Now start the scanning loop one by one in @@ -1062,10 +1062,10 @@ static noinstr int mce_start(int *no_way_out) * This way when there are any shared banks it will be * only seen by one CPU before cleared, avoiding duplicates. */ - while (arch_atomic_read(&mce_executing) < order) { + while (raw_atomic_read(&mce_executing) < order) { if (mce_timed_out(&timeout, "Timeout: Subject CPUs unable to finish machine check processing")) { - arch_atomic_set(&global_nwo, 0); + raw_atomic_set(&global_nwo, 0); goto out; } ndelay(SPINUNIT); @@ -1075,7 +1075,7 @@ static noinstr int mce_start(int *no_way_out) /* * Cache the global no_way_out state. */ - *no_way_out = arch_atomic_read(&global_nwo); + *no_way_out = raw_atomic_read(&global_nwo); ret = order; diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c index 776f4b1e395b5..a0c551846b35f 100644 --- a/arch/x86/kernel/nmi.c +++ b/arch/x86/kernel/nmi.c @@ -496,7 +496,7 @@ DEFINE_IDTENTRY_RAW(exc_nmi) */ sev_es_nmi_complete(); if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) - arch_atomic_long_inc(&nsp->idt_calls); + raw_atomic_long_inc(&nsp->idt_calls); if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id())) return; diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c index 56acf53a782ad..b3f81379c2fc0 100644 --- a/arch/x86/kernel/pvclock.c +++ b/arch/x86/kernel/pvclock.c @@ -101,11 +101,11 @@ u64 __pvclock_clocksource_read(struct pvclock_vcpu_time_info *src, bool dowd) * updating at the same time, and one of them could be slightly behind, * making the assumption that last_value always go forward fail to hold. 
*/ - last = arch_atomic64_read(&last_value); + last = raw_atomic64_read(&last_value); do { if (ret <= last) return last; - } while (!arch_atomic64_try_cmpxchg(&last_value, &last, ret)); + } while (!raw_atomic64_try_cmpxchg(&last_value, &last, ret)); return ret; } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ceb7c5e9cf9e9..ac6f609068106 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -13155,7 +13155,7 @@ EXPORT_SYMBOL_GPL(kvm_arch_end_assignment); bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm) { - return arch_atomic_read(&kvm->arch.assigned_device_count); + return raw_atomic_read(&kvm->arch.assigned_device_count); } EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device); diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h index 71ab4ba9c25d1..e076e079f6b2e 100644 --- a/include/asm-generic/bitops/atomic.h +++ b/include/asm-generic/bitops/atomic.h @@ -15,21 +15,21 @@ static __always_inline void arch_set_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); - arch_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p); + raw_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p); } static __always_inline void arch_clear_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); - arch_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p); + raw_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p); } static __always_inline void arch_change_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); - arch_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p); + raw_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p); } static __always_inline int @@ -39,7 +39,7 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p) unsigned long mask = BIT_MASK(nr); p += BIT_WORD(nr); - old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_or(mask, (atomic_long_t *)p); return !!(old & mask); } @@ -50,7 +50,7 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p) unsigned long mask = BIT_MASK(nr); p += BIT_WORD(nr); - old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_andnot(mask, (atomic_long_t *)p); return !!(old & mask); } @@ -61,7 +61,7 @@ arch_test_and_change_bit(unsigned int nr, volatile unsigned long *p) unsigned long mask = BIT_MASK(nr); p += BIT_WORD(nr); - old = arch_atomic_long_fetch_xor(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_xor(mask, (atomic_long_t *)p); return !!(old & mask); } diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h index 630f2f6b95956..40913516e654c 100644 --- a/include/asm-generic/bitops/lock.h +++ b/include/asm-generic/bitops/lock.h @@ -25,7 +25,7 @@ arch_test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) if (READ_ONCE(*p) & mask) return 1; - old = arch_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); return !!(old & mask); } @@ -41,7 +41,7 @@ static __always_inline void arch_clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); - arch_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p); + raw_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p); } /** @@ -63,7 +63,7 @@ arch___clear_bit_unlock(unsigned int nr, volatile unsigned long *p) p += BIT_WORD(nr); old = READ_ONCE(*p); old &= ~BIT_MASK(nr); - arch_atomic_long_set_release((atomic_long_t *)p, old); + raw_atomic_long_set_release((atomic_long_t *)p, 
old); } /** @@ -83,7 +83,7 @@ static inline bool arch_clear_bit_unlock_is_negative_byte(unsigned int nr, unsigned long mask = BIT_MASK(nr); p += BIT_WORD(nr); - old = arch_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p); return !!(old & BIT(7)); } #define arch_clear_bit_unlock_is_negative_byte arch_clear_bit_unlock_is_negative_byte diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index d3cbb6c16babf..6e76b9dba00e7 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -119,7 +119,7 @@ extern void ct_idle_exit(void); */ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void) { - return !(arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX); + return !(raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX); } /* @@ -128,7 +128,7 @@ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void) */ static __always_inline unsigned long ct_state_inc(int incby) { - return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state)); + return raw_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state)); } static __always_inline bool warn_rcu_enter(void) diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index fdd537ea513ff..bbff5f7f88030 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -51,7 +51,7 @@ DECLARE_PER_CPU(struct context_tracking, context_tracking); #ifdef CONFIG_CONTEXT_TRACKING_USER static __always_inline int __ct_state(void) { - return arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK; + return raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK; } #endif diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index ca736b05ec7b0..0d2e2a38b92d0 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h @@ -1071,7 +1071,7 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu) */ static __always_inline unsigned int num_online_cpus(void) { - return arch_atomic_read(&__num_online_cpus); + return raw_atomic_read(&__num_online_cpus); } #define num_possible_cpus() cpumask_weight(cpu_possible_mask) #define num_present_cpus() cpumask_weight(cpu_present_mask) diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h index 4e968ebadce60..f0a949b7c9733 100644 --- a/include/linux/jump_label.h +++ b/include/linux/jump_label.h @@ -257,7 +257,7 @@ extern enum jump_label_type jump_label_init_type(struct jump_entry *entry); static __always_inline int static_key_count(struct static_key *key) { - return arch_atomic_read(&key->enabled); + return raw_atomic_read(&key->enabled); } static __always_inline void jump_label_init(void) diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index a09f1c19336ae..6ef0b35fc28c5 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -510,7 +510,7 @@ void noinstr __ct_user_enter(enum ctx_state state) * In this we case we don't care about any concurrency/ordering. 
*/ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) - arch_atomic_set(&ct->state, state); + raw_atomic_set(&ct->state, state); } else { /* * Even if context tracking is disabled on this CPU, because it's outside @@ -527,7 +527,7 @@ void noinstr __ct_user_enter(enum ctx_state state) */ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) { /* Tracking for vtime only, no concurrent RCU EQS accounting */ - arch_atomic_set(&ct->state, state); + raw_atomic_set(&ct->state, state); } else { /* * Tracking for vtime and RCU EQS. Make sure we don't race @@ -535,7 +535,7 @@ void noinstr __ct_user_enter(enum ctx_state state) * RCU only requires RCU_DYNTICKS_IDX increments to be fully * ordered. */ - arch_atomic_add(state, &ct->state); + raw_atomic_add(state, &ct->state); } } } @@ -630,12 +630,12 @@ void noinstr __ct_user_exit(enum ctx_state state) * In this we case we don't care about any concurrency/ordering. */ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) - arch_atomic_set(&ct->state, CONTEXT_KERNEL); + raw_atomic_set(&ct->state, CONTEXT_KERNEL); } else { if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) { /* Tracking for vtime only, no concurrent RCU EQS accounting */ - arch_atomic_set(&ct->state, CONTEXT_KERNEL); + raw_atomic_set(&ct->state, CONTEXT_KERNEL); } else { /* * Tracking for vtime and RCU EQS. Make sure we don't race @@ -643,7 +643,7 @@ void noinstr __ct_user_exit(enum ctx_state state) * RCU only requires RCU_DYNTICKS_IDX increments to be fully * ordered. */ - arch_atomic_sub(state, &ct->state); + raw_atomic_sub(state, &ct->state); } } } diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c index b5cc2b53464de..71443cff31f0d 100644 --- a/kernel/sched/clock.c +++ b/kernel/sched/clock.c @@ -287,7 +287,7 @@ static __always_inline u64 sched_clock_local(struct sched_clock_data *scd) clock = wrap_max(clock, min_clock); clock = wrap_min(clock, max_clock); - if (!arch_try_cmpxchg64(&scd->clock, &old_clock, clock)) + if (!raw_try_cmpxchg64(&scd->clock, &old_clock, clock)) goto again; return clock; From patchwork Mon May 22 12:24:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 97413 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1424023vqo; Mon, 22 May 2023 05:52:56 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6UKsa9poUKlPg809gI6lFf7oSj4WC/RVkXut2ahOLzoeC1LMrsLDMQLzZaQ1vNHaNehkXR X-Received: by 2002:a17:90a:bf0e:b0:252:a2e5:4c3f with SMTP id c14-20020a17090abf0e00b00252a2e54c3fmr10840335pjs.25.1684759976187; Mon, 22 May 2023 05:52:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684759976; cv=none; d=google.com; s=arc-20160816; b=GlZENPpUaM5LbG2GoRqedaip5NLOcE7T4lY9PdUswb42XLDVC6T61adhhoQOha6Zzx zxhfDigQlFdH++qo+ULEK4T0WodUCXIs8c6xC4CBf1MZsWyh5KnYAHylz4IPPXMSV6uo mZZeW7NgTsFYiJGcqjOsbA/ONjt11IXl77hrdvatPVp9HS+riTyB5pYc7PkU+rOv4hSC IVFT6AvSPpAOl3dFmr/vaRe3wZOLLesbp2ZV9m26WFxFhYDIDWbtOOSAKCqjjkQW+Xqv cyEg4rGcWUSj5KrPTdC1cm5a0Z3waa+rkRy7vSmJ6TXVfh1WUaM2UWY7PbVfGlsVSqm+ 6DNw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=RfwS46g6DFHGHrlu4M1/dkDFT/FFOkNFYdx/jnPhWtw=; b=TEEdUiLWySFVMSOFZ9k/NLB4rEahEGZGHCtuHIKYVtaW0eX4JLJqhY/R3nWiv2PU9G UTK+7BaUa00R5ZSEhdcjr2anjzQ2foIS70h+dAo+OVX4Cr8wk7ZoAFBayrc6CQ4V67e+ wdong0ITRtxVJOfk3eJOWaaik3Vwr3lnhDWFMAFyVhHWdMtE/mKZrl/08vXmu3I4J6C0 
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 19/26] locking/atomic: scripts: build raw_atomic_long*() directly
Date: Mon, 22 May 2023 13:24:22 +0100
Message-Id: <20230522122429.1915021-20-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Now that arch_atomic*() usage is limited to the atomic headers, we no longer have any users of arch_atomic_long_*(), and can generate raw_atomic_long_*() directly.

Generate the raw_atomic_long_*() ops directly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Paul E.
McKenney --- include/linux/atomic.h | 2 +- include/linux/atomic/atomic-long.h | 682 ++++++++++++++--------------- include/linux/atomic/atomic-raw.h | 512 +--------------------- scripts/atomic/gen-atomic-long.sh | 4 +- scripts/atomic/gen-atomic-raw.sh | 4 - 5 files changed, 345 insertions(+), 859 deletions(-) diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 127f5dc63a7df..296cfae0389fe 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -78,8 +78,8 @@ }) #include -#include #include +#include #include #endif /* _LINUX_ATOMIC_H */ diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index 2fc51ba66bebd..92dc82ce1ce6d 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -24,1027 +24,1027 @@ typedef atomic_t atomic_long_t; #ifdef CONFIG_64BIT static __always_inline long -arch_atomic_long_read(const atomic_long_t *v) +raw_atomic_long_read(const atomic_long_t *v) { - return arch_atomic64_read(v); + return raw_atomic64_read(v); } static __always_inline long -arch_atomic_long_read_acquire(const atomic_long_t *v) +raw_atomic_long_read_acquire(const atomic_long_t *v) { - return arch_atomic64_read_acquire(v); + return raw_atomic64_read_acquire(v); } static __always_inline void -arch_atomic_long_set(atomic_long_t *v, long i) +raw_atomic_long_set(atomic_long_t *v, long i) { - arch_atomic64_set(v, i); + raw_atomic64_set(v, i); } static __always_inline void -arch_atomic_long_set_release(atomic_long_t *v, long i) +raw_atomic_long_set_release(atomic_long_t *v, long i) { - arch_atomic64_set_release(v, i); + raw_atomic64_set_release(v, i); } static __always_inline void -arch_atomic_long_add(long i, atomic_long_t *v) +raw_atomic_long_add(long i, atomic_long_t *v) { - arch_atomic64_add(i, v); + raw_atomic64_add(i, v); } static __always_inline long -arch_atomic_long_add_return(long i, atomic_long_t *v) +raw_atomic_long_add_return(long i, atomic_long_t *v) { - return arch_atomic64_add_return(i, v); + return raw_atomic64_add_return(i, v); } static __always_inline long -arch_atomic_long_add_return_acquire(long i, atomic_long_t *v) +raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { - return arch_atomic64_add_return_acquire(i, v); + return raw_atomic64_add_return_acquire(i, v); } static __always_inline long -arch_atomic_long_add_return_release(long i, atomic_long_t *v) +raw_atomic_long_add_return_release(long i, atomic_long_t *v) { - return arch_atomic64_add_return_release(i, v); + return raw_atomic64_add_return_release(i, v); } static __always_inline long -arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v) +raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_add_return_relaxed(i, v); + return raw_atomic64_add_return_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_add(long i, atomic_long_t *v) +raw_atomic_long_fetch_add(long i, atomic_long_t *v) { - return arch_atomic64_fetch_add(i, v); + return raw_atomic64_fetch_add(i, v); } static __always_inline long -arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_add_acquire(i, v); + return raw_atomic64_fetch_add_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_add_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_add_release(i, v); + return raw_atomic64_fetch_add_release(i, v); } static 
__always_inline long -arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_add_relaxed(i, v); + return raw_atomic64_fetch_add_relaxed(i, v); } static __always_inline void -arch_atomic_long_sub(long i, atomic_long_t *v) +raw_atomic_long_sub(long i, atomic_long_t *v) { - arch_atomic64_sub(i, v); + raw_atomic64_sub(i, v); } static __always_inline long -arch_atomic_long_sub_return(long i, atomic_long_t *v) +raw_atomic_long_sub_return(long i, atomic_long_t *v) { - return arch_atomic64_sub_return(i, v); + return raw_atomic64_sub_return(i, v); } static __always_inline long -arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v) +raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { - return arch_atomic64_sub_return_acquire(i, v); + return raw_atomic64_sub_return_acquire(i, v); } static __always_inline long -arch_atomic_long_sub_return_release(long i, atomic_long_t *v) +raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { - return arch_atomic64_sub_return_release(i, v); + return raw_atomic64_sub_return_release(i, v); } static __always_inline long -arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) +raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_sub_return_relaxed(i, v); + return raw_atomic64_sub_return_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_sub(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { - return arch_atomic64_fetch_sub(i, v); + return raw_atomic64_fetch_sub(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_sub_acquire(i, v); + return raw_atomic64_fetch_sub_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_sub_release(i, v); + return raw_atomic64_fetch_sub_release(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_sub_relaxed(i, v); + return raw_atomic64_fetch_sub_relaxed(i, v); } static __always_inline void -arch_atomic_long_inc(atomic_long_t *v) +raw_atomic_long_inc(atomic_long_t *v) { - arch_atomic64_inc(v); + raw_atomic64_inc(v); } static __always_inline long -arch_atomic_long_inc_return(atomic_long_t *v) +raw_atomic_long_inc_return(atomic_long_t *v) { - return arch_atomic64_inc_return(v); + return raw_atomic64_inc_return(v); } static __always_inline long -arch_atomic_long_inc_return_acquire(atomic_long_t *v) +raw_atomic_long_inc_return_acquire(atomic_long_t *v) { - return arch_atomic64_inc_return_acquire(v); + return raw_atomic64_inc_return_acquire(v); } static __always_inline long -arch_atomic_long_inc_return_release(atomic_long_t *v) +raw_atomic_long_inc_return_release(atomic_long_t *v) { - return arch_atomic64_inc_return_release(v); + return raw_atomic64_inc_return_release(v); } static __always_inline long -arch_atomic_long_inc_return_relaxed(atomic_long_t *v) +raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { - return arch_atomic64_inc_return_relaxed(v); + return raw_atomic64_inc_return_relaxed(v); } static __always_inline long -arch_atomic_long_fetch_inc(atomic_long_t *v) +raw_atomic_long_fetch_inc(atomic_long_t *v) { - 
return arch_atomic64_fetch_inc(v); + return raw_atomic64_fetch_inc(v); } static __always_inline long -arch_atomic_long_fetch_inc_acquire(atomic_long_t *v) +raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { - return arch_atomic64_fetch_inc_acquire(v); + return raw_atomic64_fetch_inc_acquire(v); } static __always_inline long -arch_atomic_long_fetch_inc_release(atomic_long_t *v) +raw_atomic_long_fetch_inc_release(atomic_long_t *v) { - return arch_atomic64_fetch_inc_release(v); + return raw_atomic64_fetch_inc_release(v); } static __always_inline long -arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v) +raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { - return arch_atomic64_fetch_inc_relaxed(v); + return raw_atomic64_fetch_inc_relaxed(v); } static __always_inline void -arch_atomic_long_dec(atomic_long_t *v) +raw_atomic_long_dec(atomic_long_t *v) { - arch_atomic64_dec(v); + raw_atomic64_dec(v); } static __always_inline long -arch_atomic_long_dec_return(atomic_long_t *v) +raw_atomic_long_dec_return(atomic_long_t *v) { - return arch_atomic64_dec_return(v); + return raw_atomic64_dec_return(v); } static __always_inline long -arch_atomic_long_dec_return_acquire(atomic_long_t *v) +raw_atomic_long_dec_return_acquire(atomic_long_t *v) { - return arch_atomic64_dec_return_acquire(v); + return raw_atomic64_dec_return_acquire(v); } static __always_inline long -arch_atomic_long_dec_return_release(atomic_long_t *v) +raw_atomic_long_dec_return_release(atomic_long_t *v) { - return arch_atomic64_dec_return_release(v); + return raw_atomic64_dec_return_release(v); } static __always_inline long -arch_atomic_long_dec_return_relaxed(atomic_long_t *v) +raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { - return arch_atomic64_dec_return_relaxed(v); + return raw_atomic64_dec_return_relaxed(v); } static __always_inline long -arch_atomic_long_fetch_dec(atomic_long_t *v) +raw_atomic_long_fetch_dec(atomic_long_t *v) { - return arch_atomic64_fetch_dec(v); + return raw_atomic64_fetch_dec(v); } static __always_inline long -arch_atomic_long_fetch_dec_acquire(atomic_long_t *v) +raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { - return arch_atomic64_fetch_dec_acquire(v); + return raw_atomic64_fetch_dec_acquire(v); } static __always_inline long -arch_atomic_long_fetch_dec_release(atomic_long_t *v) +raw_atomic_long_fetch_dec_release(atomic_long_t *v) { - return arch_atomic64_fetch_dec_release(v); + return raw_atomic64_fetch_dec_release(v); } static __always_inline long -arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v) +raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { - return arch_atomic64_fetch_dec_relaxed(v); + return raw_atomic64_fetch_dec_relaxed(v); } static __always_inline void -arch_atomic_long_and(long i, atomic_long_t *v) +raw_atomic_long_and(long i, atomic_long_t *v) { - arch_atomic64_and(i, v); + raw_atomic64_and(i, v); } static __always_inline long -arch_atomic_long_fetch_and(long i, atomic_long_t *v) +raw_atomic_long_fetch_and(long i, atomic_long_t *v) { - return arch_atomic64_fetch_and(i, v); + return raw_atomic64_fetch_and(i, v); } static __always_inline long -arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_and_acquire(i, v); + return raw_atomic64_fetch_and_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_and_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_and_release(i, v); + return 
raw_atomic64_fetch_and_release(i, v); } static __always_inline long -arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_and_relaxed(i, v); + return raw_atomic64_fetch_and_relaxed(i, v); } static __always_inline void -arch_atomic_long_andnot(long i, atomic_long_t *v) +raw_atomic_long_andnot(long i, atomic_long_t *v) { - arch_atomic64_andnot(i, v); + raw_atomic64_andnot(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { - return arch_atomic64_fetch_andnot(i, v); + return raw_atomic64_fetch_andnot(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_andnot_acquire(i, v); + return raw_atomic64_fetch_andnot_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_andnot_release(i, v); + return raw_atomic64_fetch_andnot_release(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_andnot_relaxed(i, v); + return raw_atomic64_fetch_andnot_relaxed(i, v); } static __always_inline void -arch_atomic_long_or(long i, atomic_long_t *v) +raw_atomic_long_or(long i, atomic_long_t *v) { - arch_atomic64_or(i, v); + raw_atomic64_or(i, v); } static __always_inline long -arch_atomic_long_fetch_or(long i, atomic_long_t *v) +raw_atomic_long_fetch_or(long i, atomic_long_t *v) { - return arch_atomic64_fetch_or(i, v); + return raw_atomic64_fetch_or(i, v); } static __always_inline long -arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_or_acquire(i, v); + return raw_atomic64_fetch_or_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_or_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_or_release(i, v); + return raw_atomic64_fetch_or_release(i, v); } static __always_inline long -arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_or_relaxed(i, v); + return raw_atomic64_fetch_or_relaxed(i, v); } static __always_inline void -arch_atomic_long_xor(long i, atomic_long_t *v) +raw_atomic_long_xor(long i, atomic_long_t *v) { - arch_atomic64_xor(i, v); + raw_atomic64_xor(i, v); } static __always_inline long -arch_atomic_long_fetch_xor(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { - return arch_atomic64_fetch_xor(i, v); + return raw_atomic64_fetch_xor(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_xor_acquire(i, v); + return raw_atomic64_fetch_xor_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_xor_release(i, v); + return raw_atomic64_fetch_xor_release(i, v); } static __always_inline long 
-arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_xor_relaxed(i, v); + return raw_atomic64_fetch_xor_relaxed(i, v); } static __always_inline long -arch_atomic_long_xchg(atomic_long_t *v, long i) +raw_atomic_long_xchg(atomic_long_t *v, long i) { - return arch_atomic64_xchg(v, i); + return raw_atomic64_xchg(v, i); } static __always_inline long -arch_atomic_long_xchg_acquire(atomic_long_t *v, long i) +raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) { - return arch_atomic64_xchg_acquire(v, i); + return raw_atomic64_xchg_acquire(v, i); } static __always_inline long -arch_atomic_long_xchg_release(atomic_long_t *v, long i) +raw_atomic_long_xchg_release(atomic_long_t *v, long i) { - return arch_atomic64_xchg_release(v, i); + return raw_atomic64_xchg_release(v, i); } static __always_inline long -arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i) +raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) { - return arch_atomic64_xchg_relaxed(v, i); + return raw_atomic64_xchg_relaxed(v, i); } static __always_inline long -arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { - return arch_atomic64_cmpxchg(v, old, new); + return raw_atomic64_cmpxchg(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { - return arch_atomic64_cmpxchg_acquire(v, old, new); + return raw_atomic64_cmpxchg_acquire(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { - return arch_atomic64_cmpxchg_release(v, old, new); + return raw_atomic64_cmpxchg_release(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { - return arch_atomic64_cmpxchg_relaxed(v, old, new); + return raw_atomic64_cmpxchg_relaxed(v, old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { - return arch_atomic64_try_cmpxchg(v, (s64 *)old, new); + return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { - return arch_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); + return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { - return arch_atomic64_try_cmpxchg_release(v, (s64 *)old, new); + return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { - return arch_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); + return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); } static __always_inline bool -arch_atomic_long_sub_and_test(long i, atomic_long_t *v) +raw_atomic_long_sub_and_test(long i, atomic_long_t *v) 
{ - return arch_atomic64_sub_and_test(i, v); + return raw_atomic64_sub_and_test(i, v); } static __always_inline bool -arch_atomic_long_dec_and_test(atomic_long_t *v) +raw_atomic_long_dec_and_test(atomic_long_t *v) { - return arch_atomic64_dec_and_test(v); + return raw_atomic64_dec_and_test(v); } static __always_inline bool -arch_atomic_long_inc_and_test(atomic_long_t *v) +raw_atomic_long_inc_and_test(atomic_long_t *v) { - return arch_atomic64_inc_and_test(v); + return raw_atomic64_inc_and_test(v); } static __always_inline bool -arch_atomic_long_add_negative(long i, atomic_long_t *v) +raw_atomic_long_add_negative(long i, atomic_long_t *v) { - return arch_atomic64_add_negative(i, v); + return raw_atomic64_add_negative(i, v); } static __always_inline bool -arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v) +raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { - return arch_atomic64_add_negative_acquire(i, v); + return raw_atomic64_add_negative_acquire(i, v); } static __always_inline bool -arch_atomic_long_add_negative_release(long i, atomic_long_t *v) +raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { - return arch_atomic64_add_negative_release(i, v); + return raw_atomic64_add_negative_release(i, v); } static __always_inline bool -arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_add_negative_relaxed(i, v); + return raw_atomic64_add_negative_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) +raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { - return arch_atomic64_fetch_add_unless(v, a, u); + return raw_atomic64_fetch_add_unless(v, a, u); } static __always_inline bool -arch_atomic_long_add_unless(atomic_long_t *v, long a, long u) +raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { - return arch_atomic64_add_unless(v, a, u); + return raw_atomic64_add_unless(v, a, u); } static __always_inline bool -arch_atomic_long_inc_not_zero(atomic_long_t *v) +raw_atomic_long_inc_not_zero(atomic_long_t *v) { - return arch_atomic64_inc_not_zero(v); + return raw_atomic64_inc_not_zero(v); } static __always_inline bool -arch_atomic_long_inc_unless_negative(atomic_long_t *v) +raw_atomic_long_inc_unless_negative(atomic_long_t *v) { - return arch_atomic64_inc_unless_negative(v); + return raw_atomic64_inc_unless_negative(v); } static __always_inline bool -arch_atomic_long_dec_unless_positive(atomic_long_t *v) +raw_atomic_long_dec_unless_positive(atomic_long_t *v) { - return arch_atomic64_dec_unless_positive(v); + return raw_atomic64_dec_unless_positive(v); } static __always_inline long -arch_atomic_long_dec_if_positive(atomic_long_t *v) +raw_atomic_long_dec_if_positive(atomic_long_t *v) { - return arch_atomic64_dec_if_positive(v); + return raw_atomic64_dec_if_positive(v); } #else /* CONFIG_64BIT */ static __always_inline long -arch_atomic_long_read(const atomic_long_t *v) +raw_atomic_long_read(const atomic_long_t *v) { - return arch_atomic_read(v); + return raw_atomic_read(v); } static __always_inline long -arch_atomic_long_read_acquire(const atomic_long_t *v) +raw_atomic_long_read_acquire(const atomic_long_t *v) { - return arch_atomic_read_acquire(v); + return raw_atomic_read_acquire(v); } static __always_inline void -arch_atomic_long_set(atomic_long_t *v, long i) +raw_atomic_long_set(atomic_long_t *v, long i) { - arch_atomic_set(v, i); + raw_atomic_set(v, i); } static 
__always_inline void -arch_atomic_long_set_release(atomic_long_t *v, long i) +raw_atomic_long_set_release(atomic_long_t *v, long i) { - arch_atomic_set_release(v, i); + raw_atomic_set_release(v, i); } static __always_inline void -arch_atomic_long_add(long i, atomic_long_t *v) +raw_atomic_long_add(long i, atomic_long_t *v) { - arch_atomic_add(i, v); + raw_atomic_add(i, v); } static __always_inline long -arch_atomic_long_add_return(long i, atomic_long_t *v) +raw_atomic_long_add_return(long i, atomic_long_t *v) { - return arch_atomic_add_return(i, v); + return raw_atomic_add_return(i, v); } static __always_inline long -arch_atomic_long_add_return_acquire(long i, atomic_long_t *v) +raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { - return arch_atomic_add_return_acquire(i, v); + return raw_atomic_add_return_acquire(i, v); } static __always_inline long -arch_atomic_long_add_return_release(long i, atomic_long_t *v) +raw_atomic_long_add_return_release(long i, atomic_long_t *v) { - return arch_atomic_add_return_release(i, v); + return raw_atomic_add_return_release(i, v); } static __always_inline long -arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v) +raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { - return arch_atomic_add_return_relaxed(i, v); + return raw_atomic_add_return_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_add(long i, atomic_long_t *v) +raw_atomic_long_fetch_add(long i, atomic_long_t *v) { - return arch_atomic_fetch_add(i, v); + return raw_atomic_fetch_add(i, v); } static __always_inline long -arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_add_acquire(i, v); + return raw_atomic_fetch_add_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_add_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_add_release(i, v); + return raw_atomic_fetch_add_release(i, v); } static __always_inline long -arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_add_relaxed(i, v); + return raw_atomic_fetch_add_relaxed(i, v); } static __always_inline void -arch_atomic_long_sub(long i, atomic_long_t *v) +raw_atomic_long_sub(long i, atomic_long_t *v) { - arch_atomic_sub(i, v); + raw_atomic_sub(i, v); } static __always_inline long -arch_atomic_long_sub_return(long i, atomic_long_t *v) +raw_atomic_long_sub_return(long i, atomic_long_t *v) { - return arch_atomic_sub_return(i, v); + return raw_atomic_sub_return(i, v); } static __always_inline long -arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v) +raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { - return arch_atomic_sub_return_acquire(i, v); + return raw_atomic_sub_return_acquire(i, v); } static __always_inline long -arch_atomic_long_sub_return_release(long i, atomic_long_t *v) +raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { - return arch_atomic_sub_return_release(i, v); + return raw_atomic_sub_return_release(i, v); } static __always_inline long -arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) +raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { - return arch_atomic_sub_return_relaxed(i, v); + return raw_atomic_sub_return_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_sub(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub(long 
i, atomic_long_t *v) { - return arch_atomic_fetch_sub(i, v); + return raw_atomic_fetch_sub(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_sub_acquire(i, v); + return raw_atomic_fetch_sub_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_sub_release(i, v); + return raw_atomic_fetch_sub_release(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_sub_relaxed(i, v); + return raw_atomic_fetch_sub_relaxed(i, v); } static __always_inline void -arch_atomic_long_inc(atomic_long_t *v) +raw_atomic_long_inc(atomic_long_t *v) { - arch_atomic_inc(v); + raw_atomic_inc(v); } static __always_inline long -arch_atomic_long_inc_return(atomic_long_t *v) +raw_atomic_long_inc_return(atomic_long_t *v) { - return arch_atomic_inc_return(v); + return raw_atomic_inc_return(v); } static __always_inline long -arch_atomic_long_inc_return_acquire(atomic_long_t *v) +raw_atomic_long_inc_return_acquire(atomic_long_t *v) { - return arch_atomic_inc_return_acquire(v); + return raw_atomic_inc_return_acquire(v); } static __always_inline long -arch_atomic_long_inc_return_release(atomic_long_t *v) +raw_atomic_long_inc_return_release(atomic_long_t *v) { - return arch_atomic_inc_return_release(v); + return raw_atomic_inc_return_release(v); } static __always_inline long -arch_atomic_long_inc_return_relaxed(atomic_long_t *v) +raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { - return arch_atomic_inc_return_relaxed(v); + return raw_atomic_inc_return_relaxed(v); } static __always_inline long -arch_atomic_long_fetch_inc(atomic_long_t *v) +raw_atomic_long_fetch_inc(atomic_long_t *v) { - return arch_atomic_fetch_inc(v); + return raw_atomic_fetch_inc(v); } static __always_inline long -arch_atomic_long_fetch_inc_acquire(atomic_long_t *v) +raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { - return arch_atomic_fetch_inc_acquire(v); + return raw_atomic_fetch_inc_acquire(v); } static __always_inline long -arch_atomic_long_fetch_inc_release(atomic_long_t *v) +raw_atomic_long_fetch_inc_release(atomic_long_t *v) { - return arch_atomic_fetch_inc_release(v); + return raw_atomic_fetch_inc_release(v); } static __always_inline long -arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v) +raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { - return arch_atomic_fetch_inc_relaxed(v); + return raw_atomic_fetch_inc_relaxed(v); } static __always_inline void -arch_atomic_long_dec(atomic_long_t *v) +raw_atomic_long_dec(atomic_long_t *v) { - arch_atomic_dec(v); + raw_atomic_dec(v); } static __always_inline long -arch_atomic_long_dec_return(atomic_long_t *v) +raw_atomic_long_dec_return(atomic_long_t *v) { - return arch_atomic_dec_return(v); + return raw_atomic_dec_return(v); } static __always_inline long -arch_atomic_long_dec_return_acquire(atomic_long_t *v) +raw_atomic_long_dec_return_acquire(atomic_long_t *v) { - return arch_atomic_dec_return_acquire(v); + return raw_atomic_dec_return_acquire(v); } static __always_inline long -arch_atomic_long_dec_return_release(atomic_long_t *v) +raw_atomic_long_dec_return_release(atomic_long_t *v) { - return arch_atomic_dec_return_release(v); + return raw_atomic_dec_return_release(v); } static 
__always_inline long -arch_atomic_long_dec_return_relaxed(atomic_long_t *v) +raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { - return arch_atomic_dec_return_relaxed(v); + return raw_atomic_dec_return_relaxed(v); } static __always_inline long -arch_atomic_long_fetch_dec(atomic_long_t *v) +raw_atomic_long_fetch_dec(atomic_long_t *v) { - return arch_atomic_fetch_dec(v); + return raw_atomic_fetch_dec(v); } static __always_inline long -arch_atomic_long_fetch_dec_acquire(atomic_long_t *v) +raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { - return arch_atomic_fetch_dec_acquire(v); + return raw_atomic_fetch_dec_acquire(v); } static __always_inline long -arch_atomic_long_fetch_dec_release(atomic_long_t *v) +raw_atomic_long_fetch_dec_release(atomic_long_t *v) { - return arch_atomic_fetch_dec_release(v); + return raw_atomic_fetch_dec_release(v); } static __always_inline long -arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v) +raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { - return arch_atomic_fetch_dec_relaxed(v); + return raw_atomic_fetch_dec_relaxed(v); } static __always_inline void -arch_atomic_long_and(long i, atomic_long_t *v) +raw_atomic_long_and(long i, atomic_long_t *v) { - arch_atomic_and(i, v); + raw_atomic_and(i, v); } static __always_inline long -arch_atomic_long_fetch_and(long i, atomic_long_t *v) +raw_atomic_long_fetch_and(long i, atomic_long_t *v) { - return arch_atomic_fetch_and(i, v); + return raw_atomic_fetch_and(i, v); } static __always_inline long -arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_and_acquire(i, v); + return raw_atomic_fetch_and_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_and_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_and_release(i, v); + return raw_atomic_fetch_and_release(i, v); } static __always_inline long -arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_and_relaxed(i, v); + return raw_atomic_fetch_and_relaxed(i, v); } static __always_inline void -arch_atomic_long_andnot(long i, atomic_long_t *v) +raw_atomic_long_andnot(long i, atomic_long_t *v) { - arch_atomic_andnot(i, v); + raw_atomic_andnot(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { - return arch_atomic_fetch_andnot(i, v); + return raw_atomic_fetch_andnot(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_andnot_acquire(i, v); + return raw_atomic_fetch_andnot_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_andnot_release(i, v); + return raw_atomic_fetch_andnot_release(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_andnot_relaxed(i, v); + return raw_atomic_fetch_andnot_relaxed(i, v); } static __always_inline void -arch_atomic_long_or(long i, atomic_long_t *v) +raw_atomic_long_or(long i, atomic_long_t *v) { - arch_atomic_or(i, v); + 
raw_atomic_or(i, v); } static __always_inline long -arch_atomic_long_fetch_or(long i, atomic_long_t *v) +raw_atomic_long_fetch_or(long i, atomic_long_t *v) { - return arch_atomic_fetch_or(i, v); + return raw_atomic_fetch_or(i, v); } static __always_inline long -arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_or_acquire(i, v); + return raw_atomic_fetch_or_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_or_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_or_release(i, v); + return raw_atomic_fetch_or_release(i, v); } static __always_inline long -arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_or_relaxed(i, v); + return raw_atomic_fetch_or_relaxed(i, v); } static __always_inline void -arch_atomic_long_xor(long i, atomic_long_t *v) +raw_atomic_long_xor(long i, atomic_long_t *v) { - arch_atomic_xor(i, v); + raw_atomic_xor(i, v); } static __always_inline long -arch_atomic_long_fetch_xor(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { - return arch_atomic_fetch_xor(i, v); + return raw_atomic_fetch_xor(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_xor_acquire(i, v); + return raw_atomic_fetch_xor_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_xor_release(i, v); + return raw_atomic_fetch_xor_release(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_xor_relaxed(i, v); + return raw_atomic_fetch_xor_relaxed(i, v); } static __always_inline long -arch_atomic_long_xchg(atomic_long_t *v, long i) +raw_atomic_long_xchg(atomic_long_t *v, long i) { - return arch_atomic_xchg(v, i); + return raw_atomic_xchg(v, i); } static __always_inline long -arch_atomic_long_xchg_acquire(atomic_long_t *v, long i) +raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) { - return arch_atomic_xchg_acquire(v, i); + return raw_atomic_xchg_acquire(v, i); } static __always_inline long -arch_atomic_long_xchg_release(atomic_long_t *v, long i) +raw_atomic_long_xchg_release(atomic_long_t *v, long i) { - return arch_atomic_xchg_release(v, i); + return raw_atomic_xchg_release(v, i); } static __always_inline long -arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i) +raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) { - return arch_atomic_xchg_relaxed(v, i); + return raw_atomic_xchg_relaxed(v, i); } static __always_inline long -arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { - return arch_atomic_cmpxchg(v, old, new); + return raw_atomic_cmpxchg(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { - return arch_atomic_cmpxchg_acquire(v, old, new); + return raw_atomic_cmpxchg_acquire(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_release(atomic_long_t 
*v, long old, long new) +raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { - return arch_atomic_cmpxchg_release(v, old, new); + return raw_atomic_cmpxchg_release(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { - return arch_atomic_cmpxchg_relaxed(v, old, new); + return raw_atomic_cmpxchg_relaxed(v, old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { - return arch_atomic_try_cmpxchg(v, (int *)old, new); + return raw_atomic_try_cmpxchg(v, (int *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { - return arch_atomic_try_cmpxchg_acquire(v, (int *)old, new); + return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { - return arch_atomic_try_cmpxchg_release(v, (int *)old, new); + return raw_atomic_try_cmpxchg_release(v, (int *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { - return arch_atomic_try_cmpxchg_relaxed(v, (int *)old, new); + return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new); } static __always_inline bool -arch_atomic_long_sub_and_test(long i, atomic_long_t *v) +raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { - return arch_atomic_sub_and_test(i, v); + return raw_atomic_sub_and_test(i, v); } static __always_inline bool -arch_atomic_long_dec_and_test(atomic_long_t *v) +raw_atomic_long_dec_and_test(atomic_long_t *v) { - return arch_atomic_dec_and_test(v); + return raw_atomic_dec_and_test(v); } static __always_inline bool -arch_atomic_long_inc_and_test(atomic_long_t *v) +raw_atomic_long_inc_and_test(atomic_long_t *v) { - return arch_atomic_inc_and_test(v); + return raw_atomic_inc_and_test(v); } static __always_inline bool -arch_atomic_long_add_negative(long i, atomic_long_t *v) +raw_atomic_long_add_negative(long i, atomic_long_t *v) { - return arch_atomic_add_negative(i, v); + return raw_atomic_add_negative(i, v); } static __always_inline bool -arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v) +raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { - return arch_atomic_add_negative_acquire(i, v); + return raw_atomic_add_negative_acquire(i, v); } static __always_inline bool -arch_atomic_long_add_negative_release(long i, atomic_long_t *v) +raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { - return arch_atomic_add_negative_release(i, v); + return raw_atomic_add_negative_release(i, v); } static __always_inline bool -arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { - return arch_atomic_add_negative_relaxed(i, v); + return raw_atomic_add_negative_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) +raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { - return arch_atomic_fetch_add_unless(v, a, u); + return 
raw_atomic_fetch_add_unless(v, a, u); } static __always_inline bool -arch_atomic_long_add_unless(atomic_long_t *v, long a, long u) +raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { - return arch_atomic_add_unless(v, a, u); + return raw_atomic_add_unless(v, a, u); } static __always_inline bool -arch_atomic_long_inc_not_zero(atomic_long_t *v) +raw_atomic_long_inc_not_zero(atomic_long_t *v) { - return arch_atomic_inc_not_zero(v); + return raw_atomic_inc_not_zero(v); } static __always_inline bool -arch_atomic_long_inc_unless_negative(atomic_long_t *v) +raw_atomic_long_inc_unless_negative(atomic_long_t *v) { - return arch_atomic_inc_unless_negative(v); + return raw_atomic_inc_unless_negative(v); } static __always_inline bool -arch_atomic_long_dec_unless_positive(atomic_long_t *v) +raw_atomic_long_dec_unless_positive(atomic_long_t *v) { - return arch_atomic_dec_unless_positive(v); + return raw_atomic_dec_unless_positive(v); } static __always_inline long -arch_atomic_long_dec_if_positive(atomic_long_t *v) +raw_atomic_long_dec_if_positive(atomic_long_t *v) { - return arch_atomic_dec_if_positive(v); + return raw_atomic_dec_if_positive(v); } #endif /* CONFIG_64BIT */ #endif /* _LINUX_ATOMIC_LONG_H */ -// a194c07d7d2f4b0e178d3c118c919775d5d65f50 +// 108784846d3bbbb201b8dabe621c5dc30b216206 diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h index 83ff0269657e7..8b2fc04cf8c54 100644 --- a/include/linux/atomic/atomic-raw.h +++ b/include/linux/atomic/atomic-raw.h @@ -1026,516 +1026,6 @@ raw_atomic64_dec_if_positive(atomic64_t *v) return arch_atomic64_dec_if_positive(v); } -static __always_inline long -raw_atomic_long_read(const atomic_long_t *v) -{ - return arch_atomic_long_read(v); -} - -static __always_inline long -raw_atomic_long_read_acquire(const atomic_long_t *v) -{ - return arch_atomic_long_read_acquire(v); -} - -static __always_inline void -raw_atomic_long_set(atomic_long_t *v, long i) -{ - arch_atomic_long_set(v, i); -} - -static __always_inline void -raw_atomic_long_set_release(atomic_long_t *v, long i) -{ - arch_atomic_long_set_release(v, i); -} - -static __always_inline void -raw_atomic_long_add(long i, atomic_long_t *v) -{ - arch_atomic_long_add(i, v); -} - -static __always_inline long -raw_atomic_long_add_return(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_return(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_return_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_return_release(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_return_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_add(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_add_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_add_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_add_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_sub(long i, atomic_long_t *v) -{ - arch_atomic_long_sub(i, v); -} - -static __always_inline long 
-raw_atomic_long_sub_return(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_return(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_return_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_return_release(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_return_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_sub(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_sub_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_sub_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_sub_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_inc(atomic_long_t *v) -{ - arch_atomic_long_inc(v); -} - -static __always_inline long -raw_atomic_long_inc_return(atomic_long_t *v) -{ - return arch_atomic_long_inc_return(v); -} - -static __always_inline long -raw_atomic_long_inc_return_acquire(atomic_long_t *v) -{ - return arch_atomic_long_inc_return_acquire(v); -} - -static __always_inline long -raw_atomic_long_inc_return_release(atomic_long_t *v) -{ - return arch_atomic_long_inc_return_release(v); -} - -static __always_inline long -raw_atomic_long_inc_return_relaxed(atomic_long_t *v) -{ - return arch_atomic_long_inc_return_relaxed(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc(atomic_long_t *v) -{ - return arch_atomic_long_fetch_inc(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) -{ - return arch_atomic_long_fetch_inc_acquire(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_release(atomic_long_t *v) -{ - return arch_atomic_long_fetch_inc_release(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) -{ - return arch_atomic_long_fetch_inc_relaxed(v); -} - -static __always_inline void -raw_atomic_long_dec(atomic_long_t *v) -{ - arch_atomic_long_dec(v); -} - -static __always_inline long -raw_atomic_long_dec_return(atomic_long_t *v) -{ - return arch_atomic_long_dec_return(v); -} - -static __always_inline long -raw_atomic_long_dec_return_acquire(atomic_long_t *v) -{ - return arch_atomic_long_dec_return_acquire(v); -} - -static __always_inline long -raw_atomic_long_dec_return_release(atomic_long_t *v) -{ - return arch_atomic_long_dec_return_release(v); -} - -static __always_inline long -raw_atomic_long_dec_return_relaxed(atomic_long_t *v) -{ - return arch_atomic_long_dec_return_relaxed(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec(atomic_long_t *v) -{ - return arch_atomic_long_fetch_dec(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) -{ - return arch_atomic_long_fetch_dec_acquire(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_release(atomic_long_t *v) -{ - return arch_atomic_long_fetch_dec_release(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) -{ - return arch_atomic_long_fetch_dec_relaxed(v); -} - -static __always_inline void 
-raw_atomic_long_and(long i, atomic_long_t *v) -{ - arch_atomic_long_and(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_and(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_and_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_and_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_and_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_andnot(long i, atomic_long_t *v) -{ - arch_atomic_long_andnot(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_andnot(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_andnot_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_andnot_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_andnot_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_or(long i, atomic_long_t *v) -{ - arch_atomic_long_or(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_or(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_or_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_or_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_or_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_xor(long i, atomic_long_t *v) -{ - arch_atomic_long_xor(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_xor(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_xor_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_xor_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_xor_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_xchg(atomic_long_t *v, long i) -{ - return arch_atomic_long_xchg(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) -{ - return arch_atomic_long_xchg_acquire(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_release(atomic_long_t *v, long i) -{ - return arch_atomic_long_xchg_release(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) -{ - return arch_atomic_long_xchg_relaxed(v, i); -} - -static __always_inline long -raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) -{ - return arch_atomic_long_cmpxchg(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) 
-{ - return arch_atomic_long_cmpxchg_acquire(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) -{ - return arch_atomic_long_cmpxchg_release(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) -{ - return arch_atomic_long_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) -{ - return arch_atomic_long_try_cmpxchg(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) -{ - return arch_atomic_long_try_cmpxchg_acquire(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) -{ - return arch_atomic_long_try_cmpxchg_release(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) -{ - return arch_atomic_long_try_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_long_sub_and_test(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_and_test(i, v); -} - -static __always_inline bool -raw_atomic_long_dec_and_test(atomic_long_t *v) -{ - return arch_atomic_long_dec_and_test(v); -} - -static __always_inline bool -raw_atomic_long_inc_and_test(atomic_long_t *v) -{ - return arch_atomic_long_inc_and_test(v); -} - -static __always_inline bool -raw_atomic_long_add_negative(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_negative(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_negative_acquire(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_negative_release(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_negative_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) -{ - return arch_atomic_long_fetch_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) -{ - return arch_atomic_long_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_long_inc_not_zero(atomic_long_t *v) -{ - return arch_atomic_long_inc_not_zero(v); -} - -static __always_inline bool -raw_atomic_long_inc_unless_negative(atomic_long_t *v) -{ - return arch_atomic_long_inc_unless_negative(v); -} - -static __always_inline bool -raw_atomic_long_dec_unless_positive(atomic_long_t *v) -{ - return arch_atomic_long_dec_unless_positive(v); -} - -static __always_inline long -raw_atomic_long_dec_if_positive(atomic_long_t *v) -{ - return arch_atomic_long_dec_if_positive(v); -} - #define raw_xchg(...) 
\
	arch_xchg(__VA_ARGS__)
@@ -1642,4 +1132,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
 	arch_try_cmpxchg128_local(__VA_ARGS__)
 
 #endif /* _LINUX_ATOMIC_RAW_H */
-// 01d54200571b3857755a07c10074a4fd58cef6b1
+// b23ed4424e85200e200ded094522e1d743b3a5b1
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index eda89cea6e1d1..75e91d6da30d3 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -47,9 +47,9 @@ gen_proto_order_variant()
 	cat <

X-Patchwork-Id: 97401
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 20/26] locking/atomic: scripts: restructure fallback ifdeffery
Date: Mon, 22 May 2023 13:24:23 +0100
Message-Id: <20230522122429.1915021-21-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Currently the various ordering variants of an atomic operation are
defined in groups of full/acquire/release/relaxed ordering variants
with some shared ifdeffery and several potential definitions of each
ordering variant in different branches of the shared ifdeffery.

As an ordering variant can have several potential definitions down
different branches of the shared ifdeffery, it can be painful for a
human to find a relevant definition, and we don't have a good location
to place anything common to all definitions of an ordering variant
(e.g. kerneldoc).

Historically the grouping of full/acquire/release/relaxed ordering
variants was necessary as we filled in the missing atomics in the same
namespace as the architecture used. It would be easy to accidentally
define one ordering fallback in terms of another ordering fallback with
redundant barriers, and avoiding that would otherwise require a lot of
baroque ifdeffery.
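To make that hazard concrete: the acquire/release fallbacks are
synthesized from the _relaxed form of an operation plus an explicit
fence. A minimal sketch of that construction, modelled on the
__atomic_op_acquire() helper in include/linux/atomic.h (illustrative,
not quoted verbatim from the kernel):

| #define __atomic_op_acquire(op, args...)				\
| ({									\
| 	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
| 	/* order all subsequent accesses after the relaxed op */	\
| 	__atomic_acquire_fence();					\
| 	__ret;								\
| })

If one ordering fallback were accidentally defined in terms of an op
which is itself such a fallback, two of these fences would be emitted
back-to-back.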
With recent changes we no longer need to fill in the missing atomics in
the arch_atomic*_() namespace, and only need to fill in the
raw_atomic*_() namespace. Due to this, there's no risk of a namespace
collision, and we can define each raw_atomic*_ ordering variant with
its own ifdeffery checking for the arch_atomic*_ ordering variants.

Restructure the fallbacks in this way, with each ordering variant
having its own ifdeffery of the form:

| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Note that where there's no relevant arch_atomic*_() ordering variant,
we'll define the operation in terms of a distinct raw_atomic*_(), as
this itself might have been filled in with a fallback.

As we now generate the raw_atomic*_() implementations directly, we no
longer need the trivial wrappers, so they are removed.

This makes the ifdeffery easier to follow, and will allow for further
improvements in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/atomic.h | 1 -
 include/linux/atomic/atomic-arch-fallback.h | 3178 +++++++++---------
 include/linux/atomic/atomic-raw.h | 1135 -------
 scripts/atomic/fallbacks/acquire | 2 +-
 scripts/atomic/fallbacks/add_negative | 4 +-
 scripts/atomic/fallbacks/add_unless | 4 +-
 scripts/atomic/fallbacks/andnot | 4 +-
 scripts/atomic/fallbacks/cmpxchg | 4 +-
 scripts/atomic/fallbacks/dec | 4 +-
 scripts/atomic/fallbacks/dec_and_test | 4 +-
 scripts/atomic/fallbacks/dec_if_positive | 6 +-
 scripts/atomic/fallbacks/dec_unless_positive | 6 +-
 scripts/atomic/fallbacks/fence | 2 +-
 scripts/atomic/fallbacks/fetch_add_unless | 6 +-
 scripts/atomic/fallbacks/inc | 4 +-
 scripts/atomic/fallbacks/inc_and_test | 4 +-
 scripts/atomic/fallbacks/inc_not_zero | 4 +-
 scripts/atomic/fallbacks/inc_unless_negative | 6 +-
 scripts/atomic/fallbacks/read_acquire | 4 +-
 scripts/atomic/fallbacks/release | 2 +-
 scripts/atomic/fallbacks/set_release | 4 +-
 scripts/atomic/fallbacks/sub_and_test | 4 +-
 scripts/atomic/fallbacks/try_cmpxchg | 4 +-
 scripts/atomic/fallbacks/xchg | 4 +-
 scripts/atomic/gen-atomic-fallback.sh | 236 +-
 scripts/atomic/gen-atomic-raw.sh | 80 -
 scripts/atomic/gen-atomics.sh | 1 -
 27 files changed, 1866 insertions(+), 2851 deletions(-)
 delete mode 100644 include/linux/atomic/atomic-raw.h
 delete mode 100755 scripts/atomic/gen-atomic-raw.sh

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 296cfae0389fe..8dd57c3a99e9b 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -78,7 +78,6 @@
 })
 
 #include <linux/atomic/atomic-arch-fallback.h>
-#include <linux/atomic/atomic-raw.h>
 #include <linux/atomic/atomic-long.h>
 #include <linux/atomic/atomic-instrumented.h>

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1a2d81dbc2e48..99bc1a871dc12 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -8,2749 +8,2911 @@
 
 #include <linux/compiler.h>
 
-#ifndef arch_xchg_relaxed
-#define arch_xchg_acquire arch_xchg
-#define arch_xchg_release arch_xchg
-#define arch_xchg_relaxed arch_xchg
-#else /* arch_xchg_relaxed */
-
-#ifndef arch_xchg_acquire
-#define arch_xchg_acquire(...) \
-	__atomic_op_acquire(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg)
+#define raw_xchg arch_xchg
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg(...) \
+	__atomic_op_fence(arch_xchg, __VA_ARGS__)
+#else
+extern void raw_xchg_not_implemented(void);
+#define raw_xchg(...) raw_xchg_not_implemented()
 #endif

-#ifndef arch_xchg_release
-#define arch_xchg_release(...) \
-	__atomic_op_release(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg_acquire)
+#define raw_xchg_acquire arch_xchg_acquire
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg_acquire(...) \
+	__atomic_op_acquire(arch_xchg, __VA_ARGS__)
+#elif defined(arch_xchg)
+#define raw_xchg_acquire arch_xchg
+#else
+extern void raw_xchg_acquire_not_implemented(void);
+#define raw_xchg_acquire(...) raw_xchg_acquire_not_implemented()
 #endif

-#ifndef arch_xchg
-#define arch_xchg(...) \
-	__atomic_op_fence(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg_release)
+#define raw_xchg_release arch_xchg_release
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg_release(...) \
+	__atomic_op_release(arch_xchg, __VA_ARGS__)
+#elif defined(arch_xchg)
+#define raw_xchg_release arch_xchg
+#else
+extern void raw_xchg_release_not_implemented(void);
+#define raw_xchg_release(...) raw_xchg_release_not_implemented()
+#endif
+
+#if defined(arch_xchg_relaxed)
+#define raw_xchg_relaxed arch_xchg_relaxed
+#elif defined(arch_xchg)
+#define raw_xchg_relaxed arch_xchg
+#else
+extern void raw_xchg_relaxed_not_implemented(void);
+#define raw_xchg_relaxed(...) raw_xchg_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg)
+#define raw_cmpxchg arch_cmpxchg
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg(...) \
+	__atomic_op_fence(arch_cmpxchg, __VA_ARGS__)
+#else
+extern void raw_cmpxchg_not_implemented(void);
+#define raw_cmpxchg(...) raw_cmpxchg_not_implemented()
 #endif

-#endif /* arch_xchg_relaxed */
-
-#ifndef arch_cmpxchg_relaxed
-#define arch_cmpxchg_acquire arch_cmpxchg
-#define arch_cmpxchg_release arch_cmpxchg
-#define arch_cmpxchg_relaxed arch_cmpxchg
-#else /* arch_cmpxchg_relaxed */
-
-#ifndef arch_cmpxchg_acquire
-#define arch_cmpxchg_acquire(...) \
+#if defined(arch_cmpxchg_acquire)
+#define raw_cmpxchg_acquire arch_cmpxchg_acquire
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_acquire(...) \
 	__atomic_op_acquire(arch_cmpxchg, __VA_ARGS__)
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_acquire arch_cmpxchg
+#else
+extern void raw_cmpxchg_acquire_not_implemented(void);
+#define raw_cmpxchg_acquire(...) raw_cmpxchg_acquire_not_implemented()
 #endif

-#ifndef arch_cmpxchg_release
-#define arch_cmpxchg_release(...) \
+#if defined(arch_cmpxchg_release)
+#define raw_cmpxchg_release arch_cmpxchg_release
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_release(...) \
 	__atomic_op_release(arch_cmpxchg, __VA_ARGS__)
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_release arch_cmpxchg
+#else
+extern void raw_cmpxchg_release_not_implemented(void);
+#define raw_cmpxchg_release(...) raw_cmpxchg_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_relaxed arch_cmpxchg_relaxed
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_relaxed arch_cmpxchg
+#else
+extern void raw_cmpxchg_relaxed_not_implemented(void);
+#define raw_cmpxchg_relaxed(...) raw_cmpxchg_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg64)
+#define raw_cmpxchg64 arch_cmpxchg64
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64(...) \
+	__atomic_op_fence(arch_cmpxchg64, __VA_ARGS__)
+#else
+extern void raw_cmpxchg64_not_implemented(void);
+#define raw_cmpxchg64(...) raw_cmpxchg64_not_implemented()
 #endif

-#ifndef arch_cmpxchg
-#define arch_cmpxchg(...) \
-	__atomic_op_fence(arch_cmpxchg, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg_relaxed */
-
-#ifndef arch_cmpxchg64_relaxed
-#define arch_cmpxchg64_acquire arch_cmpxchg64
-#define arch_cmpxchg64_release arch_cmpxchg64
-#define arch_cmpxchg64_relaxed arch_cmpxchg64
-#else /* arch_cmpxchg64_relaxed */
-
-#ifndef arch_cmpxchg64_acquire
-#define arch_cmpxchg64_acquire(...) \
+#if defined(arch_cmpxchg64_acquire)
+#define raw_cmpxchg64_acquire arch_cmpxchg64_acquire
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_acquire(...) \
 	__atomic_op_acquire(arch_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_acquire arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_acquire_not_implemented(void);
+#define raw_cmpxchg64_acquire(...) raw_cmpxchg64_acquire_not_implemented()
 #endif

-#ifndef arch_cmpxchg64_release
-#define arch_cmpxchg64_release(...) \
+#if defined(arch_cmpxchg64_release)
+#define raw_cmpxchg64_release arch_cmpxchg64_release
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_release(...) \
 	__atomic_op_release(arch_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_release arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_release_not_implemented(void);
+#define raw_cmpxchg64_release(...) raw_cmpxchg64_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_relaxed arch_cmpxchg64_relaxed
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_relaxed arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_relaxed_not_implemented(void);
+#define raw_cmpxchg64_relaxed(...) raw_cmpxchg64_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg128)
+#define raw_cmpxchg128 arch_cmpxchg128
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128(...) \
+	__atomic_op_fence(arch_cmpxchg128, __VA_ARGS__)
+#else
+extern void raw_cmpxchg128_not_implemented(void);
+#define raw_cmpxchg128(...) raw_cmpxchg128_not_implemented()
 #endif

-#ifndef arch_cmpxchg64
-#define arch_cmpxchg64(...) \
-	__atomic_op_fence(arch_cmpxchg64, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg64_relaxed */
-
-#ifndef arch_cmpxchg128_relaxed
-#define arch_cmpxchg128_acquire arch_cmpxchg128
-#define arch_cmpxchg128_release arch_cmpxchg128
-#define arch_cmpxchg128_relaxed arch_cmpxchg128
-#else /* arch_cmpxchg128_relaxed */
-
-#ifndef arch_cmpxchg128_acquire
-#define arch_cmpxchg128_acquire(...) \
+#if defined(arch_cmpxchg128_acquire)
+#define raw_cmpxchg128_acquire arch_cmpxchg128_acquire
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_acquire(...) \
 	__atomic_op_acquire(arch_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_acquire arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_acquire_not_implemented(void);
+#define raw_cmpxchg128_acquire(...) raw_cmpxchg128_acquire_not_implemented()
 #endif

-#ifndef arch_cmpxchg128_release
-#define arch_cmpxchg128_release(...) \
+#if defined(arch_cmpxchg128_release)
+#define raw_cmpxchg128_release arch_cmpxchg128_release
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_release(...) \
 	__atomic_op_release(arch_cmpxchg128, __VA_ARGS__)
-#endif
-
-#ifndef arch_cmpxchg128
-#define arch_cmpxchg128(...) \
-	__atomic_op_fence(arch_cmpxchg128, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg128_relaxed */
-
-#ifndef arch_try_cmpxchg_relaxed
-#ifdef arch_try_cmpxchg
-#define arch_try_cmpxchg_acquire arch_try_cmpxchg
-#define arch_try_cmpxchg_release arch_try_cmpxchg
-#define arch_try_cmpxchg_relaxed arch_try_cmpxchg
-#endif /* arch_try_cmpxchg */
-
-#ifndef arch_try_cmpxchg
-#define arch_try_cmpxchg(_ptr, _oldp, _new) \
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_release arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_release_not_implemented(void);
+#define raw_cmpxchg128_release(...) raw_cmpxchg128_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_relaxed arch_cmpxchg128_relaxed
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_relaxed arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_relaxed_not_implemented(void);
+#define raw_cmpxchg128_relaxed(...) raw_cmpxchg128_relaxed_not_implemented()
+#endif
+
+#if defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg arch_try_cmpxchg
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg(...) \
+	__atomic_op_fence(arch_try_cmpxchg, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg */
+#endif

-#ifndef arch_try_cmpxchg_acquire
-#define arch_try_cmpxchg_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_acquire)
+#define raw_try_cmpxchg_acquire arch_try_cmpxchg_acquire
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_acquire(...) \
+	__atomic_op_acquire(arch_try_cmpxchg, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_acquire arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_acquire(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg_acquire((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg_acquire((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg_acquire */
+#endif

-#ifndef arch_try_cmpxchg_release
-#define arch_try_cmpxchg_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_release)
+#define raw_try_cmpxchg_release arch_try_cmpxchg_release
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_release(...) \
+	__atomic_op_release(arch_try_cmpxchg, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_release arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_release(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg_release((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg_release((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg_release */
+#endif

-#ifndef arch_try_cmpxchg_relaxed
-#define arch_try_cmpxchg_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_relaxed arch_try_cmpxchg_relaxed
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_relaxed arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_relaxed(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg_relaxed((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg_relaxed((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg_relaxed */
-
-#else /* arch_try_cmpxchg_relaxed */
-
-#ifndef arch_try_cmpxchg_acquire
-#define arch_try_cmpxchg_acquire(...) \
-	__atomic_op_acquire(arch_try_cmpxchg, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg_release
-#define arch_try_cmpxchg_release(...) \
-	__atomic_op_release(arch_try_cmpxchg, __VA_ARGS__)
 #endif

-#ifndef arch_try_cmpxchg
-#define arch_try_cmpxchg(...) \
-	__atomic_op_fence(arch_try_cmpxchg, __VA_ARGS__)
-#endif
-
-#endif /* arch_try_cmpxchg_relaxed */
-
-#ifndef arch_try_cmpxchg64_relaxed
-#ifdef arch_try_cmpxchg64
-#define arch_try_cmpxchg64_acquire arch_try_cmpxchg64
-#define arch_try_cmpxchg64_release arch_try_cmpxchg64
-#define arch_try_cmpxchg64_relaxed arch_try_cmpxchg64
-#endif /* arch_try_cmpxchg64 */
-
-#ifndef arch_try_cmpxchg64
-#define arch_try_cmpxchg64(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64 arch_try_cmpxchg64
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64(...) \
+	__atomic_op_fence(arch_try_cmpxchg64, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg64(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64 */
+#endif

-#ifndef arch_try_cmpxchg64_acquire
-#define arch_try_cmpxchg64_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_acquire)
+#define raw_try_cmpxchg64_acquire arch_try_cmpxchg64_acquire
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_acquire(...) \
+	__atomic_op_acquire(arch_try_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_acquire arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_acquire(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64_acquire((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64_acquire((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64_acquire */
+#endif

-#ifndef arch_try_cmpxchg64_release
-#define arch_try_cmpxchg64_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_release)
+#define raw_try_cmpxchg64_release arch_try_cmpxchg64_release
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_release(...) \
+	__atomic_op_release(arch_try_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_release arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_release(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64_release((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64_release((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64_release */
+#endif

-#ifndef arch_try_cmpxchg64_relaxed
-#define arch_try_cmpxchg64_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_relaxed arch_try_cmpxchg64_relaxed
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_relaxed arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_relaxed(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64_relaxed((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64_relaxed((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64_relaxed */
-
-#else /* arch_try_cmpxchg64_relaxed */
-
-#ifndef arch_try_cmpxchg64_acquire
-#define arch_try_cmpxchg64_acquire(...) \
-	__atomic_op_acquire(arch_try_cmpxchg64, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg64_release
-#define arch_try_cmpxchg64_release(...) \
-	__atomic_op_release(arch_try_cmpxchg64, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg64
-#define arch_try_cmpxchg64(...) \
-	__atomic_op_fence(arch_try_cmpxchg64, __VA_ARGS__)
 #endif

-#endif /* arch_try_cmpxchg64_relaxed */
-
-#ifndef arch_try_cmpxchg128_relaxed
-#ifdef arch_try_cmpxchg128
-#define arch_try_cmpxchg128_acquire arch_try_cmpxchg128
-#define arch_try_cmpxchg128_release arch_try_cmpxchg128
-#define arch_try_cmpxchg128_relaxed arch_try_cmpxchg128
-#endif /* arch_try_cmpxchg128 */
-
-#ifndef arch_try_cmpxchg128
-#define arch_try_cmpxchg128(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128 arch_try_cmpxchg128
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128(...) \
+	__atomic_op_fence(arch_try_cmpxchg128, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg128(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg128((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg128 */
+#endif

-#ifndef arch_try_cmpxchg128_acquire
-#define arch_try_cmpxchg128_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_acquire)
+#define raw_try_cmpxchg128_acquire arch_try_cmpxchg128_acquire
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_acquire(...) \
+	__atomic_op_acquire(arch_try_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_acquire arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_acquire(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg128_acquire((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128_acquire((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg128_acquire */
+#endif

-#ifndef arch_try_cmpxchg128_release
-#define arch_try_cmpxchg128_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_release)
+#define raw_try_cmpxchg128_release arch_try_cmpxchg128_release
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_release(...) \
+	__atomic_op_release(arch_try_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_release arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_release(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg128_release((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128_release((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg128_release */
+#endif

-#ifndef arch_try_cmpxchg128_relaxed
-#define arch_try_cmpxchg128_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_relaxed arch_try_cmpxchg128_relaxed
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_relaxed arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_relaxed(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg128_relaxed((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128_relaxed((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg128_relaxed */
-
-#else /* arch_try_cmpxchg128_relaxed */
-
-#ifndef arch_try_cmpxchg128_acquire
-#define arch_try_cmpxchg128_acquire(...) \
-	__atomic_op_acquire(arch_try_cmpxchg128, __VA_ARGS__)
 #endif

-#ifndef arch_try_cmpxchg128_release
-#define arch_try_cmpxchg128_release(...) \
-	__atomic_op_release(arch_try_cmpxchg128, __VA_ARGS__)
-#endif
+#define raw_cmpxchg_local arch_cmpxchg_local

-#ifndef arch_try_cmpxchg128
-#define arch_try_cmpxchg128(...) \
-	__atomic_op_fence(arch_try_cmpxchg128, __VA_ARGS__)
+#ifdef arch_try_cmpxchg_local
+#define raw_try_cmpxchg_local arch_try_cmpxchg_local
+#else
+#define raw_try_cmpxchg_local(_ptr, _oldp, _new) \
+({ \
+	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
+	___r = raw_cmpxchg_local((_ptr), ___o, (_new)); \
+	if (unlikely(___r != ___o)) \
+		*___op = ___r; \
+	likely(___r == ___o); \
+})
 #endif

-#endif /* arch_try_cmpxchg128_relaxed */
+#define raw_cmpxchg64_local arch_cmpxchg64_local

-#ifndef arch_try_cmpxchg_local
-#define arch_try_cmpxchg_local(_ptr, _oldp, _new) \
+#ifdef arch_try_cmpxchg64_local
+#define raw_try_cmpxchg64_local arch_try_cmpxchg64_local
+#else
+#define raw_try_cmpxchg64_local(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg_local((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64_local((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg_local */
+#endif
+
+#define raw_cmpxchg128_local arch_cmpxchg128_local

-#ifndef arch_try_cmpxchg64_local
-#define arch_try_cmpxchg64_local(_ptr, _oldp, _new) \
+#ifdef arch_try_cmpxchg128_local
+#define raw_try_cmpxchg128_local arch_try_cmpxchg128_local
+#else
+#define raw_try_cmpxchg128_local(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64_local((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128_local((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64_local */
+#endif
+
+#define raw_sync_cmpxchg arch_sync_cmpxchg

-#ifndef arch_atomic_read_acquire
+#define raw_atomic_read arch_atomic_read
+
+#if defined(arch_atomic_read_acquire)
+#define raw_atomic_read_acquire arch_atomic_read_acquire
+#elif defined(arch_atomic_read)
+#define raw_atomic_read_acquire arch_atomic_read
+#else
 static __always_inline int
-arch_atomic_read_acquire(const atomic_t *v)
+raw_atomic_read_acquire(const atomic_t *v)
 {
 	int ret;

 	if (__native_word(atomic_t)) {
 		ret = smp_load_acquire(&(v)->counter);
 	} else {
-		ret = arch_atomic_read(v);
+		ret = raw_atomic_read(v);
 		__atomic_acquire_fence();
 	}

 	return ret;
 }
-#define arch_atomic_read_acquire arch_atomic_read_acquire
 #endif

-#ifndef arch_atomic_set_release
+#define raw_atomic_set arch_atomic_set
+
+#if defined(arch_atomic_set_release)
+#define raw_atomic_set_release arch_atomic_set_release
+#elif defined(arch_atomic_set)
+#define raw_atomic_set_release arch_atomic_set
+#else
 static __always_inline void
-arch_atomic_set_release(atomic_t *v, int i)
+raw_atomic_set_release(atomic_t *v, int i)
 {
 	if (__native_word(atomic_t)) {
 		smp_store_release(&(v)->counter, i);
 	} else {
 		__atomic_release_fence();
-		arch_atomic_set(v, i);
+		raw_atomic_set(v, i);
 	}
 }
-#define arch_atomic_set_release arch_atomic_set_release
 #endif

-#ifndef arch_atomic_add_return_relaxed
-#define arch_atomic_add_return_acquire arch_atomic_add_return
-#define arch_atomic_add_return_release arch_atomic_add_return
-#define arch_atomic_add_return_relaxed arch_atomic_add_return
-#else /* arch_atomic_add_return_relaxed */
+#define raw_atomic_add arch_atomic_add
+
+#if defined(arch_atomic_add_return)
+#define raw_atomic_add_return arch_atomic_add_return
+#elif defined(arch_atomic_add_return_relaxed)
+static __always_inline int
+raw_atomic_add_return(int i, atomic_t *v)
+{
+	int ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic_add_return_relaxed(i, v);
+	__atomic_post_full_fence();
+	return ret;
+}
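// The raw_try_cmpxchg*() fallbacks above all expand to the same shape;
// written out as a plain function the logic is easier to follow. A
// minimal sketch, assuming a hypothetical int-typed location (the real
// macros are type-generic, and "try_cmpxchg_sketch" is not a generated
// name):
//
//	static __always_inline bool
//	try_cmpxchg_sketch(int *ptr, int *oldp, int new)
//	{
//		int old = *oldp;
//		int cur = raw_cmpxchg(ptr, old, new);
//
//		if (unlikely(cur != old))
//			*oldp = cur;	   // report the value actually found
//		return likely(cur == old); // true iff the swap happened
//	}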
+#else
+#error "Unable to define raw_atomic_add_return"
+#endif

-#ifndef arch_atomic_add_return_acquire
+#if defined(arch_atomic_add_return_acquire)
+#define raw_atomic_add_return_acquire arch_atomic_add_return_acquire
+#elif defined(arch_atomic_add_return_relaxed)
 static __always_inline int
-arch_atomic_add_return_acquire(int i, atomic_t *v)
+raw_atomic_add_return_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_add_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_acquire arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_acquire"
 #endif

-#ifndef arch_atomic_add_return_release
+#if defined(arch_atomic_add_return_release)
+#define raw_atomic_add_return_release arch_atomic_add_return_release
+#elif defined(arch_atomic_add_return_relaxed)
 static __always_inline int
-arch_atomic_add_return_release(int i, atomic_t *v)
+raw_atomic_add_return_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_add_return_relaxed(i, v);
 }
-#define arch_atomic_add_return_release arch_atomic_add_return_release
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_release arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_release"
 #endif

-#ifndef arch_atomic_add_return
+#if defined(arch_atomic_add_return_relaxed)
+#define raw_atomic_add_return_relaxed arch_atomic_add_return_relaxed
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_relaxed arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_relaxed"
+#endif
+
+#if defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add arch_atomic_fetch_add
+#elif defined(arch_atomic_fetch_add_relaxed)
 static __always_inline int
-arch_atomic_add_return(int i, atomic_t *v)
+raw_atomic_fetch_add(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_add_return_relaxed(i, v);
+	ret = arch_atomic_fetch_add_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_add_return arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_fetch_add"
 #endif

-#endif /* arch_atomic_add_return_relaxed */
-
-#ifndef arch_atomic_fetch_add_relaxed
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add
-#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add
-#else /* arch_atomic_fetch_add_relaxed */
-
-#ifndef arch_atomic_fetch_add_acquire
+#if defined(arch_atomic_fetch_add_acquire)
+#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#elif defined(arch_atomic_fetch_add_relaxed)
 static __always_inline int
-arch_atomic_fetch_add_acquire(int i, atomic_t *v)
+raw_atomic_fetch_add_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_add_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_acquire"
 #endif

-#ifndef arch_atomic_fetch_add_release
+#if defined(arch_atomic_fetch_add_release)
+#define raw_atomic_fetch_add_release arch_atomic_fetch_add_release
+#elif defined(arch_atomic_fetch_add_relaxed)
 static __always_inline int
-arch_atomic_fetch_add_release(int i, atomic_t *v)
+raw_atomic_fetch_add_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_add_relaxed(i, v);
 }
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_release arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_release"
+#endif
+
+#if defined(arch_atomic_fetch_add_relaxed)
+#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_relaxed"
 #endif

-#ifndef arch_atomic_fetch_add
+#define raw_atomic_sub arch_atomic_sub
+
+#if defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return arch_atomic_sub_return
+#elif defined(arch_atomic_sub_return_relaxed)
 static __always_inline int
-arch_atomic_fetch_add(int i, atomic_t *v)
+raw_atomic_sub_return(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_add_relaxed(i, v);
+	ret = arch_atomic_sub_return_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_add arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_sub_return"
 #endif

-#endif /* arch_atomic_fetch_add_relaxed */
-
-#ifndef arch_atomic_sub_return_relaxed
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return
-#define arch_atomic_sub_return_release arch_atomic_sub_return
-#define arch_atomic_sub_return_relaxed arch_atomic_sub_return
-#else /* arch_atomic_sub_return_relaxed */
-
-#ifndef arch_atomic_sub_return_acquire
+#if defined(arch_atomic_sub_return_acquire)
+#define raw_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#elif defined(arch_atomic_sub_return_relaxed)
 static __always_inline int
-arch_atomic_sub_return_acquire(int i, atomic_t *v)
+raw_atomic_sub_return_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_sub_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_acquire arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_acquire"
 #endif

-#ifndef arch_atomic_sub_return_release
+#if defined(arch_atomic_sub_return_release)
+#define raw_atomic_sub_return_release arch_atomic_sub_return_release
+#elif defined(arch_atomic_sub_return_relaxed)
 static __always_inline int
-arch_atomic_sub_return_release(int i, atomic_t *v)
+raw_atomic_sub_return_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_sub_return_relaxed(i, v);
 }
-#define arch_atomic_sub_return_release arch_atomic_sub_return_release
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_release arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_release"
+#endif
+
+#if defined(arch_atomic_sub_return_relaxed)
+#define raw_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_relaxed arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_relaxed"
 #endif

-#ifndef arch_atomic_sub_return
+#if defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub arch_atomic_fetch_sub
+#elif defined(arch_atomic_fetch_sub_relaxed)
 static __always_inline int
-arch_atomic_sub_return(int i, atomic_t *v)
+raw_atomic_fetch_sub(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_sub_return_relaxed(i, v);
+	ret = arch_atomic_fetch_sub_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
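// Each ordering variant derived from a _relaxed op follows one of two
// shapes, as seen throughout this file. A sketch for a hypothetical
// raw_op()/arch_op_relaxed() pair (not one of the generated functions):
//
//	static __always_inline int
//	raw_op_acquire(int i, atomic_t *v)
//	{
//		int ret = arch_op_relaxed(i, v);
//		__atomic_acquire_fence();  // order the op before later accesses
//		return ret;
//	}
//
//	static __always_inline int
//	raw_op_release(int i, atomic_t *v)
//	{
//		__atomic_release_fence();  // order earlier accesses before the op
//		return arch_op_relaxed(i, v);
//	}
//
// The fully-ordered form brackets the relaxed op with
// __atomic_pre_full_fence()/__atomic_post_full_fence(), as in the
// functions above.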
-#define arch_atomic_sub_return arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_fetch_sub"
 #endif

-#endif /* arch_atomic_sub_return_relaxed */
-
-#ifndef arch_atomic_fetch_sub_relaxed
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub
-#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
-#else /* arch_atomic_fetch_sub_relaxed */
-
-#ifndef arch_atomic_fetch_sub_acquire
+#if defined(arch_atomic_fetch_sub_acquire)
+#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#elif defined(arch_atomic_fetch_sub_relaxed)
 static __always_inline int
-arch_atomic_fetch_sub_acquire(int i, atomic_t *v)
+raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_sub_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_acquire"
 #endif

-#ifndef arch_atomic_fetch_sub_release
+#if defined(arch_atomic_fetch_sub_release)
+#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#elif defined(arch_atomic_fetch_sub_relaxed)
 static __always_inline int
-arch_atomic_fetch_sub_release(int i, atomic_t *v)
+raw_atomic_fetch_sub_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_sub_relaxed(i, v);
 }
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_release"
 #endif

-#ifndef arch_atomic_fetch_sub
-static __always_inline int
-arch_atomic_fetch_sub(int i, atomic_t *v)
-{
-	int ret;
-	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_sub_relaxed(i, v);
-	__atomic_post_full_fence();
-	return ret;
-}
-#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#if defined(arch_atomic_fetch_sub_relaxed)
+#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_relaxed"
 #endif

-#endif /* arch_atomic_fetch_sub_relaxed */
-
-#ifndef arch_atomic_inc
+#if defined(arch_atomic_inc)
+#define raw_atomic_inc arch_atomic_inc
+#else
 static __always_inline void
-arch_atomic_inc(atomic_t *v)
+raw_atomic_inc(atomic_t *v)
 {
-	arch_atomic_add(1, v);
+	raw_atomic_add(1, v);
 }
-#define arch_atomic_inc arch_atomic_inc
 #endif

-#ifndef arch_atomic_inc_return_relaxed
-#ifdef arch_atomic_inc_return
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return
-#define arch_atomic_inc_return_release arch_atomic_inc_return
-#define arch_atomic_inc_return_relaxed arch_atomic_inc_return
-#endif /* arch_atomic_inc_return */
-
-#ifndef arch_atomic_inc_return
+#if defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return arch_atomic_inc_return
+#elif defined(arch_atomic_inc_return_relaxed)
+static __always_inline int
+raw_atomic_inc_return(atomic_t *v)
+{
+	int ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic_inc_return_relaxed(v);
+	__atomic_post_full_fence();
+	return ret;
+}
+#else
 static __always_inline int
-arch_atomic_inc_return(atomic_t *v)
+raw_atomic_inc_return(atomic_t *v)
 {
-	return arch_atomic_add_return(1, v);
+	return raw_atomic_add_return(1, v);
 }
-#define arch_atomic_inc_return arch_atomic_inc_return
 #endif

-#ifndef arch_atomic_inc_return_acquire
+#if defined(arch_atomic_inc_return_acquire)
+#define raw_atomic_inc_return_acquire arch_atomic_inc_return_acquire
+#elif defined(arch_atomic_inc_return_relaxed)
 static __always_inline int
-arch_atomic_inc_return_acquire(atomic_t *v)
+raw_atomic_inc_return_acquire(atomic_t *v)
 {
-	return arch_atomic_add_return_acquire(1, v);
+	int ret = arch_atomic_inc_return_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return_acquire
-#endif
-
-#ifndef arch_atomic_inc_return_release
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_acquire arch_atomic_inc_return
+#else
 static __always_inline int
-arch_atomic_inc_return_release(atomic_t *v)
+raw_atomic_inc_return_acquire(atomic_t *v)
 {
-	return arch_atomic_add_return_release(1, v);
+	return raw_atomic_add_return_acquire(1, v);
 }
-#define arch_atomic_inc_return_release arch_atomic_inc_return_release
 #endif

-#ifndef arch_atomic_inc_return_relaxed
+#if defined(arch_atomic_inc_return_release)
+#define raw_atomic_inc_return_release arch_atomic_inc_return_release
+#elif defined(arch_atomic_inc_return_relaxed)
 static __always_inline int
-arch_atomic_inc_return_relaxed(atomic_t *v)
+raw_atomic_inc_return_release(atomic_t *v)
 {
-	return arch_atomic_add_return_relaxed(1, v);
+	__atomic_release_fence();
+	return arch_atomic_inc_return_relaxed(v);
 }
-#define arch_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
-#endif
-
-#else /* arch_atomic_inc_return_relaxed */
-
-#ifndef arch_atomic_inc_return_acquire
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_release arch_atomic_inc_return
+#else
 static __always_inline int
-arch_atomic_inc_return_acquire(atomic_t *v)
+raw_atomic_inc_return_release(atomic_t *v)
 {
-	int ret = arch_atomic_inc_return_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic_add_return_release(1, v);
 }
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return_acquire
 #endif

-#ifndef arch_atomic_inc_return_release
+#if defined(arch_atomic_inc_return_relaxed)
+#define raw_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_relaxed arch_atomic_inc_return
+#else
 static __always_inline int
-arch_atomic_inc_return_release(atomic_t *v)
+raw_atomic_inc_return_relaxed(atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_inc_return_relaxed(v);
+	return raw_atomic_add_return_relaxed(1, v);
 }
-#define arch_atomic_inc_return_release arch_atomic_inc_return_release
 #endif

-#ifndef arch_atomic_inc_return
+#if defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc arch_atomic_fetch_inc
+#elif defined(arch_atomic_fetch_inc_relaxed)
 static __always_inline int
-arch_atomic_inc_return(atomic_t *v)
+raw_atomic_fetch_inc(atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_inc_return_relaxed(v);
+	ret = arch_atomic_fetch_inc_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_inc_return arch_atomic_inc_return
-#endif
-
-#endif /* arch_atomic_inc_return_relaxed */
-
-#ifndef arch_atomic_fetch_inc_relaxed
-#ifdef arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
-#endif /* arch_atomic_fetch_inc */
-
-#ifndef arch_atomic_fetch_inc
+#else
 static __always_inline int
-arch_atomic_fetch_inc(atomic_t *v)
+raw_atomic_fetch_inc(atomic_t *v)
 {
-	return arch_atomic_fetch_add(1, v);
+	return raw_atomic_fetch_add(1, v);
 }
-#define arch_atomic_fetch_inc arch_atomic_fetch_inc
 #endif

-#ifndef arch_atomic_fetch_inc_acquire
+#if defined(arch_atomic_fetch_inc_acquire)
+#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
+#elif defined(arch_atomic_fetch_inc_relaxed)
 static __always_inline int
-arch_atomic_fetch_inc_acquire(atomic_t *v)
+raw_atomic_fetch_inc_acquire(atomic_t *v)
 {
-	return arch_atomic_fetch_add_acquire(1, v);
+	int ret = arch_atomic_fetch_inc_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
-#endif
-
-#ifndef arch_atomic_fetch_inc_release
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc
+#else
 static __always_inline int
-arch_atomic_fetch_inc_release(atomic_t *v)
+raw_atomic_fetch_inc_acquire(atomic_t *v)
 {
-	return arch_atomic_fetch_add_release(1, v);
+	return raw_atomic_fetch_add_acquire(1, v);
 }
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc_release
 #endif

-#ifndef arch_atomic_fetch_inc_relaxed
+#if defined(arch_atomic_fetch_inc_release)
+#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc_release
+#elif defined(arch_atomic_fetch_inc_relaxed)
+static __always_inline int
+raw_atomic_fetch_inc_release(atomic_t *v)
+{
+	__atomic_release_fence();
+	return arch_atomic_fetch_inc_relaxed(v);
+}
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc
+#else
 static __always_inline int
-arch_atomic_fetch_inc_relaxed(atomic_t *v)
+raw_atomic_fetch_inc_release(atomic_t *v)
 {
-	return arch_atomic_fetch_add_relaxed(1, v);
+	return raw_atomic_fetch_add_release(1, v);
 }
-#define arch_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
 #endif

-#else /* arch_atomic_fetch_inc_relaxed */
-
-#ifndef arch_atomic_fetch_inc_acquire
+#if defined(arch_atomic_fetch_inc_relaxed)
+#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
+#else
 static __always_inline int
-arch_atomic_fetch_inc_acquire(atomic_t *v)
+raw_atomic_fetch_inc_relaxed(atomic_t *v)
 {
-	int ret = arch_atomic_fetch_inc_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic_fetch_add_relaxed(1, v);
 }
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
 #endif

-#ifndef arch_atomic_fetch_inc_release
-static __always_inline int
-arch_atomic_fetch_inc_release(atomic_t *v)
+#if defined(arch_atomic_dec)
+#define raw_atomic_dec arch_atomic_dec
+#else
+static __always_inline void
+raw_atomic_dec(atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_fetch_inc_relaxed(v);
+	raw_atomic_sub(1, v);
 }
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc_release
 #endif

-#ifndef arch_atomic_fetch_inc
+#if defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return arch_atomic_dec_return
+#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
-arch_atomic_fetch_inc(atomic_t *v)
+raw_atomic_dec_return(atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_inc_relaxed(v);
+	ret = arch_atomic_dec_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_inc arch_atomic_fetch_inc
-#endif
-
-#endif /* arch_atomic_fetch_inc_relaxed */
-
-#ifndef arch_atomic_dec
-static __always_inline void
-arch_atomic_dec(atomic_t *v)
-{
-	arch_atomic_sub(1, v);
-}
-#define arch_atomic_dec arch_atomic_dec
-#endif
-
-#ifndef arch_atomic_dec_return_relaxed
-#ifdef arch_atomic_dec_return
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return
-#define arch_atomic_dec_return_release arch_atomic_dec_return
-#define arch_atomic_dec_return_relaxed arch_atomic_dec_return
-#endif /* arch_atomic_dec_return */
-
-#ifndef arch_atomic_dec_return
+#else
 static __always_inline int
-arch_atomic_dec_return(atomic_t *v)
+raw_atomic_dec_return(atomic_t *v)
 {
-	return arch_atomic_sub_return(1, v);
+	return raw_atomic_sub_return(1, v);
 }
-#define arch_atomic_dec_return arch_atomic_dec_return
 #endif

-#ifndef arch_atomic_dec_return_acquire
+#if defined(arch_atomic_dec_return_acquire)
+#define raw_atomic_dec_return_acquire arch_atomic_dec_return_acquire
+#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
-arch_atomic_dec_return_acquire(atomic_t *v)
+raw_atomic_dec_return_acquire(atomic_t *v)
 {
-	return arch_atomic_sub_return_acquire(1, v);
+	int ret = arch_atomic_dec_return_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return_acquire
-#endif
-
-#ifndef arch_atomic_dec_return_release
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_acquire arch_atomic_dec_return
+#else
 static __always_inline int
-arch_atomic_dec_return_release(atomic_t *v)
+raw_atomic_dec_return_acquire(atomic_t *v)
 {
-	return arch_atomic_sub_return_release(1, v);
+	return raw_atomic_sub_return_acquire(1, v);
 }
-#define arch_atomic_dec_return_release arch_atomic_dec_return_release
 #endif

-#ifndef arch_atomic_dec_return_relaxed
+#if defined(arch_atomic_dec_return_release)
+#define raw_atomic_dec_return_release arch_atomic_dec_return_release
+#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
-arch_atomic_dec_return_relaxed(atomic_t *v)
+raw_atomic_dec_return_release(atomic_t *v)
 {
-	return arch_atomic_sub_return_relaxed(1, v);
+	__atomic_release_fence();
+	return arch_atomic_dec_return_relaxed(v);
 }
-#define arch_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
-#endif
-
-#else /* arch_atomic_dec_return_relaxed */
-
-#ifndef arch_atomic_dec_return_acquire
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_release arch_atomic_dec_return
+#else
 static __always_inline int
-arch_atomic_dec_return_acquire(atomic_t *v)
+raw_atomic_dec_return_release(atomic_t *v)
 {
-	int ret = arch_atomic_dec_return_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic_sub_return_release(1, v);
 }
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return_acquire
 #endif

-#ifndef arch_atomic_dec_return_release
+#if defined(arch_atomic_dec_return_relaxed)
+#define raw_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_relaxed arch_atomic_dec_return
+#else
 static __always_inline int
-arch_atomic_dec_return_release(atomic_t *v)
+raw_atomic_dec_return_relaxed(atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_dec_return_relaxed(v);
+	return raw_atomic_sub_return_relaxed(1, v);
 }
-#define arch_atomic_dec_return_release arch_atomic_dec_return_release
 #endif

-#ifndef arch_atomic_dec_return
+#if defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec arch_atomic_fetch_dec
+#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
-arch_atomic_dec_return(atomic_t *v)
+raw_atomic_fetch_dec(atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_dec_return_relaxed(v);
+	ret = arch_atomic_fetch_dec_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_dec_return arch_atomic_dec_return
-#endif
-
-#endif /* arch_atomic_dec_return_relaxed */
-
-#ifndef arch_atomic_fetch_dec_relaxed
-#ifdef arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
-#endif /* arch_atomic_fetch_dec */
-
-#ifndef arch_atomic_fetch_dec
+#else
 static __always_inline int
-arch_atomic_fetch_dec(atomic_t *v)
+raw_atomic_fetch_dec(atomic_t *v)
 {
-	return arch_atomic_fetch_sub(1, v);
+	return raw_atomic_fetch_sub(1, v);
 }
-#define arch_atomic_fetch_dec arch_atomic_fetch_dec
 #endif

-#ifndef arch_atomic_fetch_dec_acquire
+#if defined(arch_atomic_fetch_dec_acquire)
+#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
+#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
-arch_atomic_fetch_dec_acquire(atomic_t *v)
+raw_atomic_fetch_dec_acquire(atomic_t *v)
 {
-	return arch_atomic_fetch_sub_acquire(1, v);
+	int ret = arch_atomic_fetch_dec_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
-#endif
-
-#ifndef arch_atomic_fetch_dec_release
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec
+#else
 static __always_inline int
-arch_atomic_fetch_dec_release(atomic_t *v)
+raw_atomic_fetch_dec_acquire(atomic_t *v)
 {
-	return arch_atomic_fetch_sub_release(1, v);
+	return raw_atomic_fetch_sub_acquire(1, v);
 }
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec_release
 #endif

-#ifndef arch_atomic_fetch_dec_relaxed
+#if defined(arch_atomic_fetch_dec_release)
+#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec_release
+#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
-arch_atomic_fetch_dec_relaxed(atomic_t *v)
+raw_atomic_fetch_dec_release(atomic_t *v)
 {
-	return arch_atomic_fetch_sub_relaxed(1, v);
+	__atomic_release_fence();
+	return arch_atomic_fetch_dec_relaxed(v);
 }
-#define arch_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
-#endif
-
-#else /* arch_atomic_fetch_dec_relaxed */
-
-#ifndef arch_atomic_fetch_dec_acquire
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec
+#else
 static __always_inline int
-arch_atomic_fetch_dec_acquire(atomic_t *v)
+raw_atomic_fetch_dec_release(atomic_t *v)
 {
-	int ret = arch_atomic_fetch_dec_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic_fetch_sub_release(1, v);
 }
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
 #endif

-#ifndef arch_atomic_fetch_dec_release
+#if defined(arch_atomic_fetch_dec_relaxed)
+#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
+#else
 static __always_inline int
-arch_atomic_fetch_dec_release(atomic_t *v)
+raw_atomic_fetch_dec_relaxed(atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_fetch_dec_relaxed(v);
+	return raw_atomic_fetch_sub_relaxed(1, v);
 }
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec_release
 #endif

-#ifndef arch_atomic_fetch_dec
+#define raw_atomic_and arch_atomic_and
+
+#if defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and arch_atomic_fetch_and
+#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
-arch_atomic_fetch_dec(atomic_t *v)
+raw_atomic_fetch_and(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_dec_relaxed(v);
+	ret = arch_atomic_fetch_and_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_dec arch_atomic_fetch_dec
+#else
+#error "Unable to define raw_atomic_fetch_and"
 #endif

-#endif /* arch_atomic_fetch_dec_relaxed */
-
-#ifndef arch_atomic_fetch_and_relaxed
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and
-#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and
-#else /* arch_atomic_fetch_and_relaxed */
-
-#ifndef arch_atomic_fetch_and_acquire
+#if defined(arch_atomic_fetch_and_acquire)
+#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
-arch_atomic_fetch_and_acquire(int i, atomic_t *v)
+raw_atomic_fetch_and_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_and_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_acquire"
 #endif

-#ifndef arch_atomic_fetch_and_release
+#if defined(arch_atomic_fetch_and_release)
+#define raw_atomic_fetch_and_release arch_atomic_fetch_and_release
+#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
-arch_atomic_fetch_and_release(int i, atomic_t *v)
+raw_atomic_fetch_and_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_and_relaxed(i, v);
 }
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_release arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_release"
 #endif

-#ifndef arch_atomic_fetch_and
+#if defined(arch_atomic_fetch_and_relaxed)
+#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_relaxed"
+#endif
+
+#if defined(arch_atomic_andnot)
+#define raw_atomic_andnot arch_atomic_andnot
+#else
+static __always_inline void
+raw_atomic_andnot(int i, atomic_t *v)
+{
+	raw_atomic_and(~i, v);
+}
+#endif
+
+#if defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot arch_atomic_fetch_andnot
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
-arch_atomic_fetch_and(int i, atomic_t *v)
+raw_atomic_fetch_andnot(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_and_relaxed(i, v);
+	ret = arch_atomic_fetch_andnot_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_and arch_atomic_fetch_and
-#endif
-
-#endif /* arch_atomic_fetch_and_relaxed */
-
-#ifndef arch_atomic_andnot
-static __always_inline void
-arch_atomic_andnot(int i, atomic_t *v)
+#else
+static __always_inline int
+raw_atomic_fetch_andnot(int i, atomic_t *v)
 {
-	arch_atomic_and(~i, v);
+	return raw_atomic_fetch_and(~i, v);
 }
-#define arch_atomic_andnot arch_atomic_andnot
 #endif

-#ifndef arch_atomic_fetch_andnot_relaxed
-#ifdef arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
-#endif /* arch_atomic_fetch_andnot */
-
-#ifndef arch_atomic_fetch_andnot
+#if defined(arch_atomic_fetch_andnot_acquire)
+#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#elif defined(arch_atomic_fetch_andnot_relaxed)
+static __always_inline int
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
+	__atomic_acquire_fence();
+	return ret;
+}
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
+#else
 static __always_inline int
-arch_atomic_fetch_andnot(int i, atomic_t *v)
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
-	return arch_atomic_fetch_and(~i, v);
+	return raw_atomic_fetch_and_acquire(~i, v);
 }
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
 #endif

-#ifndef arch_atomic_fetch_andnot_acquire
+#if defined(arch_atomic_fetch_andnot_release)
+#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
-arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
 {
-	return arch_atomic_fetch_and_acquire(~i, v);
+	__atomic_release_fence();
+	return arch_atomic_fetch_andnot_relaxed(i, v);
 }
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#endif
-
-#ifndef arch_atomic_fetch_andnot_release
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
+#else
 static __always_inline int
-arch_atomic_fetch_andnot_release(int i, atomic_t *v)
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
 {
-	return arch_atomic_fetch_and_release(~i, v);
+	return raw_atomic_fetch_and_release(~i, v);
 }
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
 #endif

-#ifndef arch_atomic_fetch_andnot_relaxed
+#if defined(arch_atomic_fetch_andnot_relaxed)
+#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
+#else
 static __always_inline int
-arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 {
-	return arch_atomic_fetch_and_relaxed(~i, v);
+	return raw_atomic_fetch_and_relaxed(~i, v);
 }
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
 #endif

-#else /* arch_atomic_fetch_andnot_relaxed */
+#define raw_atomic_or arch_atomic_or

-#ifndef arch_atomic_fetch_andnot_acquire
+#if defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or arch_atomic_fetch_or
+#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
-arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+raw_atomic_fetch_or(int i, atomic_t *v)
 {
-	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
-	__atomic_acquire_fence();
+	int ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic_fetch_or_relaxed(i, v);
+	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#else
+#error "Unable to define raw_atomic_fetch_or"
 #endif

-#ifndef arch_atomic_fetch_andnot_release
+#if defined(arch_atomic_fetch_or_acquire)
+#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
-arch_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
-	__atomic_release_fence();
-	return arch_atomic_fetch_andnot_relaxed(i, v);
-}
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#endif
-
-#ifndef arch_atomic_fetch_andnot
-static __always_inline int
-arch_atomic_fetch_andnot(int i, atomic_t *v)
-{
-	int ret;
-	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_andnot_relaxed(i, v);
-	__atomic_post_full_fence();
-	return ret;
-}
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-#endif
-
-#endif /* arch_atomic_fetch_andnot_relaxed */
-
-#ifndef arch_atomic_fetch_or_relaxed
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or
-#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or
-#else /* arch_atomic_fetch_or_relaxed */
-
-#ifndef arch_atomic_fetch_or_acquire
-static __always_inline int
-arch_atomic_fetch_or_acquire(int i, atomic_t *v)
+raw_atomic_fetch_or_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_or_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_acquire"
 #endif

-#ifndef arch_atomic_fetch_or_release
+#if defined(arch_atomic_fetch_or_release)
+#define raw_atomic_fetch_or_release arch_atomic_fetch_or_release
+#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
-arch_atomic_fetch_or_release(int i, atomic_t *v)
+raw_atomic_fetch_or_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_or_relaxed(i, v);
 }
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_release arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_release"
 #endif

-#ifndef arch_atomic_fetch_or
+#if defined(arch_atomic_fetch_or_relaxed)
+#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_relaxed"
+#endif
+
+#define raw_atomic_xor arch_atomic_xor
+
+#if defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor arch_atomic_fetch_xor
+#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
-arch_atomic_fetch_or(int i, atomic_t *v)
+raw_atomic_fetch_xor(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_or_relaxed(i, v);
+	ret = arch_atomic_fetch_xor_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_or arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_xor"
 #endif

-#endif /* arch_atomic_fetch_or_relaxed */
-
-#ifndef arch_atomic_fetch_xor_relaxed
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor
-#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
-#else /* arch_atomic_fetch_xor_relaxed */
-
-#ifndef arch_atomic_fetch_xor_acquire
+#if defined(arch_atomic_fetch_xor_acquire)
+#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
-arch_atomic_fetch_xor_acquire(int i, atomic_t *v)
+raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_xor_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_acquire"
 #endif

-#ifndef arch_atomic_fetch_xor_release
+#if defined(arch_atomic_fetch_xor_release)
+#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
-arch_atomic_fetch_xor_release(int i, atomic_t *v)
+raw_atomic_fetch_xor_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_xor_relaxed(i, v);
 }
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_release"
 #endif

-#ifndef arch_atomic_fetch_xor
+#if defined(arch_atomic_fetch_xor_relaxed)
+#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_relaxed"
+#endif
+
+#if defined(arch_atomic_xchg)
+#define raw_atomic_xchg arch_atomic_xchg
+#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-arch_atomic_fetch_xor(int i, atomic_t *v)
+raw_atomic_xchg(atomic_t *v, int i)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_xor_relaxed(i, v);
+	ret = arch_atomic_xchg_relaxed(v, i);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_xor arch_atomic_fetch_xor
-#endif
-
-#endif /* arch_atomic_fetch_xor_relaxed */
-
-#ifndef arch_atomic_xchg_relaxed
-#ifdef arch_atomic_xchg
-#define arch_atomic_xchg_acquire arch_atomic_xchg
-#define arch_atomic_xchg_release arch_atomic_xchg
-#define arch_atomic_xchg_relaxed arch_atomic_xchg
-#endif /* arch_atomic_xchg */
-
-#ifndef arch_atomic_xchg
+#else
 static __always_inline int
-arch_atomic_xchg(atomic_t *v, int new)
+raw_atomic_xchg(atomic_t *v, int new)
 {
-	return arch_xchg(&v->counter, new);
+	return raw_xchg(&v->counter, new);
 }
-#define arch_atomic_xchg arch_atomic_xchg
 #endif

-#ifndef arch_atomic_xchg_acquire
+#if defined(arch_atomic_xchg_acquire)
+#define raw_atomic_xchg_acquire arch_atomic_xchg_acquire
+#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-arch_atomic_xchg_acquire(atomic_t *v, int new)
+raw_atomic_xchg_acquire(atomic_t *v, int i)
 {
-	return arch_xchg_acquire(&v->counter, new);
+	int ret = arch_atomic_xchg_relaxed(v, i);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
-#endif
-
-#ifndef arch_atomic_xchg_release
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_acquire arch_atomic_xchg
+#else
 static __always_inline int
-arch_atomic_xchg_release(atomic_t *v, int new)
+raw_atomic_xchg_acquire(atomic_t *v, int new)
 {
-	return arch_xchg_release(&v->counter, new);
+	return raw_xchg_acquire(&v->counter, new);
 }
-#define arch_atomic_xchg_release arch_atomic_xchg_release
 #endif

-#ifndef arch_atomic_xchg_relaxed
+#if defined(arch_atomic_xchg_release)
+#define raw_atomic_xchg_release arch_atomic_xchg_release
+#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-arch_atomic_xchg_relaxed(atomic_t *v, int new)
+raw_atomic_xchg_release(atomic_t *v, int i)
 {
-	return arch_xchg_relaxed(&v->counter, new);
+	__atomic_release_fence();
+	return arch_atomic_xchg_relaxed(v, i);
 }
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#endif
-
-#else /* arch_atomic_xchg_relaxed */
-
-#ifndef arch_atomic_xchg_acquire
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_release arch_atomic_xchg
+#else
 static __always_inline int
-arch_atomic_xchg_acquire(atomic_t *v, int i)
+raw_atomic_xchg_release(atomic_t *v, int new)
 {
-	int ret = arch_atomic_xchg_relaxed(v, i);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_xchg_release(&v->counter, new);
 }
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
 #endif

-#ifndef arch_atomic_xchg_release
+#if defined(arch_atomic_xchg_relaxed)
+#define raw_atomic_xchg_relaxed arch_atomic_xchg_relaxed
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_relaxed arch_atomic_xchg
+#else
 static __always_inline int
-arch_atomic_xchg_release(atomic_t *v, int i)
+raw_atomic_xchg_relaxed(atomic_t *v, int new)
 {
-	__atomic_release_fence();
-	return arch_atomic_xchg_relaxed(v, i);
+	return raw_xchg_relaxed(&v->counter, new);
 }
-#define arch_atomic_xchg_release arch_atomic_xchg_release
 #endif

-#ifndef arch_atomic_xchg
+#if defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg arch_atomic_cmpxchg
+#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
-arch_atomic_xchg(atomic_t *v, int i)
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_xchg_relaxed(v, i);
+	ret = arch_atomic_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_xchg arch_atomic_xchg
-#endif
-
-#endif /* arch_atomic_xchg_relaxed */
-
-#ifndef arch_atomic_cmpxchg_relaxed
-#ifdef arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
-#endif /* arch_atomic_cmpxchg */
-
-#ifndef arch_atomic_cmpxchg
+#else
 static __always_inline int
-arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
-	return arch_cmpxchg(&v->counter, old, new);
+	return raw_cmpxchg(&v->counter, old, new);
 }
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
 #endif

-#ifndef arch_atomic_cmpxchg_acquire
+#if defined(arch_atomic_cmpxchg_acquire)
+#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
-arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
-	return arch_cmpxchg_acquire(&v->counter, old, new);
+	int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic_cmpxchg_release
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg
+#else
 static __always_inline int
-arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
-	return arch_cmpxchg_release(&v->counter, old, new);
+	return raw_cmpxchg_acquire(&v->counter, old, new);
 }
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
 #endif

-#ifndef arch_atomic_cmpxchg_relaxed
+#if defined(arch_atomic_cmpxchg_release)
+#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg_release
+#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
-arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
 {
-	return arch_cmpxchg_relaxed(&v->counter, old, new);
+	__atomic_release_fence();
+	return arch_atomic_cmpxchg_relaxed(v, old, new);
 }
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic_cmpxchg_relaxed */
-
-#ifndef arch_atomic_cmpxchg_acquire
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg
+#else
 static __always_inline int
-arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
 {
-	int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_cmpxchg_release(&v->counter, old, new);
 }
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
 #endif

-#ifndef arch_atomic_cmpxchg_release
+#if defined(arch_atomic_cmpxchg_relaxed)
+#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
+#else
 static __always_inline int
-arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
 {
-	__atomic_release_fence();
-	return arch_atomic_cmpxchg_relaxed(v, old, new);
+	return raw_cmpxchg_relaxed(&v->counter, old, new);
 }
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
 #endif

-#ifndef arch_atomic_cmpxchg
-static __always_inline int
-arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+#if defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg arch_atomic_try_cmpxchg
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
-	int ret;
+	bool ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_cmpxchg_relaxed(v, old, new);
+	ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-#endif
-
-#endif /* arch_atomic_cmpxchg_relaxed */
-
-#ifndef arch_atomic_try_cmpxchg_relaxed
-#ifdef arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
-#endif /* arch_atomic_try_cmpxchg */
-
-#ifndef arch_atomic_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
 	int r, o = *old;
-	r = arch_atomic_cmpxchg(v, o, new);
+	r = raw_atomic_cmpxchg(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
 #endif

-#ifndef arch_atomic_try_cmpxchg_acquire
+#if defined(arch_atomic_try_cmpxchg_acquire)
+#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+	bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
+	__atomic_acquire_fence();
+	return ret;
+}
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 {
 	int r, o = *old;
-	r = arch_atomic_cmpxchg_acquire(v, o, new);
+	r = raw_atomic_cmpxchg_acquire(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
 #endif

-#ifndef arch_atomic_try_cmpxchg_release
+#if defined(arch_atomic_try_cmpxchg_release)
+#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
 static __always_inline bool
-arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+	__atomic_release_fence();
+	return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+}
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
+#else
+static __always_inline bool
+raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 {
 	int r, o = *old;
-	r = arch_atomic_cmpxchg_release(v, o, new);
+	r = raw_atomic_cmpxchg_release(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
 #endif

-#ifndef arch_atomic_try_cmpxchg_relaxed
+#if defined(arch_atomic_try_cmpxchg_relaxed)
+#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 {
 	int r, o = *old;
-	r = arch_atomic_cmpxchg_relaxed(v, o, new);
+	r = raw_atomic_cmpxchg_relaxed(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic_try_cmpxchg_acquire
-static __always_inline bool
-arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
-	bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
-	__atomic_acquire_fence();
-	return ret;
-}
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic_try_cmpxchg_release
-static __always_inline bool
-arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
-	__atomic_release_fence();
-	return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
-#endif
-
-#ifndef arch_atomic_try_cmpxchg
-static __always_inline bool
-arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
-	bool ret;
-	__atomic_pre_full_fence();
-	ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
-	__atomic_post_full_fence();
-	return ret;
-}
-#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
 #endif

-#endif /* arch_atomic_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic_sub_and_test
+#if defined(arch_atomic_sub_and_test)
+#define raw_atomic_sub_and_test arch_atomic_sub_and_test
+#else
 static __always_inline bool
-arch_atomic_sub_and_test(int i, atomic_t *v)
+raw_atomic_sub_and_test(int i, atomic_t *v)
 {
-	return arch_atomic_sub_return(i, v) == 0;
+	return raw_atomic_sub_return(i, v) == 0;
 }
-#define arch_atomic_sub_and_test arch_atomic_sub_and_test
 #endif

-#ifndef arch_atomic_dec_and_test
+#if defined(arch_atomic_dec_and_test)
+#define raw_atomic_dec_and_test arch_atomic_dec_and_test
+#else
 static __always_inline bool
-arch_atomic_dec_and_test(atomic_t *v)
+raw_atomic_dec_and_test(atomic_t *v)
 {
-	return arch_atomic_dec_return(v) == 0;
+	return raw_atomic_dec_return(v) == 0;
 }
-#define arch_atomic_dec_and_test arch_atomic_dec_and_test
 #endif

-#ifndef arch_atomic_inc_and_test
+#if
defined(arch_atomic_inc_and_test) +#define raw_atomic_inc_and_test arch_atomic_inc_and_test +#else static __always_inline bool -arch_atomic_inc_and_test(atomic_t *v) +raw_atomic_inc_and_test(atomic_t *v) { - return arch_atomic_inc_return(v) == 0; + return raw_atomic_inc_return(v) == 0; } -#define arch_atomic_inc_and_test arch_atomic_inc_and_test #endif -#ifndef arch_atomic_add_negative_relaxed -#ifdef arch_atomic_add_negative -#define arch_atomic_add_negative_acquire arch_atomic_add_negative -#define arch_atomic_add_negative_release arch_atomic_add_negative -#define arch_atomic_add_negative_relaxed arch_atomic_add_negative -#endif /* arch_atomic_add_negative */ - -#ifndef arch_atomic_add_negative +#if defined(arch_atomic_add_negative) +#define raw_atomic_add_negative arch_atomic_add_negative +#elif defined(arch_atomic_add_negative_relaxed) static __always_inline bool -arch_atomic_add_negative(int i, atomic_t *v) +raw_atomic_add_negative(int i, atomic_t *v) { - return arch_atomic_add_return(i, v) < 0; + bool ret; + __atomic_pre_full_fence(); + ret = arch_atomic_add_negative_relaxed(i, v); + __atomic_post_full_fence(); + return ret; } -#define arch_atomic_add_negative arch_atomic_add_negative -#endif - -#ifndef arch_atomic_add_negative_acquire +#else static __always_inline bool -arch_atomic_add_negative_acquire(int i, atomic_t *v) +raw_atomic_add_negative(int i, atomic_t *v) { - return arch_atomic_add_return_acquire(i, v) < 0; + return raw_atomic_add_return(i, v) < 0; } -#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire #endif -#ifndef arch_atomic_add_negative_release +#if defined(arch_atomic_add_negative_acquire) +#define raw_atomic_add_negative_acquire arch_atomic_add_negative_acquire +#elif defined(arch_atomic_add_negative_relaxed) static __always_inline bool -arch_atomic_add_negative_release(int i, atomic_t *v) +raw_atomic_add_negative_acquire(int i, atomic_t *v) { - return arch_atomic_add_return_release(i, v) < 0; + bool ret = arch_atomic_add_negative_relaxed(i, v); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic_add_negative_release arch_atomic_add_negative_release -#endif - -#ifndef arch_atomic_add_negative_relaxed +#elif defined(arch_atomic_add_negative) +#define raw_atomic_add_negative_acquire arch_atomic_add_negative +#else static __always_inline bool -arch_atomic_add_negative_relaxed(int i, atomic_t *v) +raw_atomic_add_negative_acquire(int i, atomic_t *v) { - return arch_atomic_add_return_relaxed(i, v) < 0; + return raw_atomic_add_return_acquire(i, v) < 0; } -#define arch_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed #endif -#else /* arch_atomic_add_negative_relaxed */ - -#ifndef arch_atomic_add_negative_acquire +#if defined(arch_atomic_add_negative_release) +#define raw_atomic_add_negative_release arch_atomic_add_negative_release +#elif defined(arch_atomic_add_negative_relaxed) static __always_inline bool -arch_atomic_add_negative_acquire(int i, atomic_t *v) +raw_atomic_add_negative_release(int i, atomic_t *v) { - bool ret = arch_atomic_add_negative_relaxed(i, v); - __atomic_acquire_fence(); - return ret; + __atomic_release_fence(); + return arch_atomic_add_negative_relaxed(i, v); } -#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire -#endif - -#ifndef arch_atomic_add_negative_release +#elif defined(arch_atomic_add_negative) +#define raw_atomic_add_negative_release arch_atomic_add_negative +#else static __always_inline bool -arch_atomic_add_negative_release(int i, atomic_t *v) 
+raw_atomic_add_negative_release(int i, atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_add_negative_relaxed(i, v);
+	return raw_atomic_add_return_release(i, v) < 0;
 }
-#define arch_atomic_add_negative_release arch_atomic_add_negative_release
 #endif
 
-#ifndef arch_atomic_add_negative
+#if defined(arch_atomic_add_negative_relaxed)
+#define raw_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
+#elif defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative_relaxed arch_atomic_add_negative
+#else
 static __always_inline bool
-arch_atomic_add_negative(int i, atomic_t *v)
+raw_atomic_add_negative_relaxed(int i, atomic_t *v)
 {
-	bool ret;
-	__atomic_pre_full_fence();
-	ret = arch_atomic_add_negative_relaxed(i, v);
-	__atomic_post_full_fence();
-	return ret;
+	return raw_atomic_add_return_relaxed(i, v) < 0;
 }
-#define arch_atomic_add_negative arch_atomic_add_negative
 #endif
 
-#endif /* arch_atomic_add_negative_relaxed */
-
-#ifndef arch_atomic_fetch_add_unless
+#if defined(arch_atomic_fetch_add_unless)
+#define raw_atomic_fetch_add_unless arch_atomic_fetch_add_unless
+#else
 static __always_inline int
-arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
-	int c = arch_atomic_read(v);
+	int c = raw_atomic_read(v);
 
 	do {
 		if (unlikely(c == u))
 			break;
-	} while (!arch_atomic_try_cmpxchg(v, &c, c + a));
+	} while (!raw_atomic_try_cmpxchg(v, &c, c + a));
 
 	return c;
 }
-#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 #endif
 
-#ifndef arch_atomic_add_unless
+#if defined(arch_atomic_add_unless)
+#define raw_atomic_add_unless arch_atomic_add_unless
+#else
 static __always_inline bool
-arch_atomic_add_unless(atomic_t *v, int a, int u)
+raw_atomic_add_unless(atomic_t *v, int a, int u)
 {
-	return arch_atomic_fetch_add_unless(v, a, u) != u;
+	return raw_atomic_fetch_add_unless(v, a, u) != u;
 }
-#define arch_atomic_add_unless arch_atomic_add_unless
 #endif
 
-#ifndef arch_atomic_inc_not_zero
+#if defined(arch_atomic_inc_not_zero)
+#define raw_atomic_inc_not_zero arch_atomic_inc_not_zero
+#else
 static __always_inline bool
-arch_atomic_inc_not_zero(atomic_t *v)
+raw_atomic_inc_not_zero(atomic_t *v)
 {
-	return arch_atomic_add_unless(v, 1, 0);
+	return raw_atomic_add_unless(v, 1, 0);
 }
-#define arch_atomic_inc_not_zero arch_atomic_inc_not_zero
 #endif
 
-#ifndef arch_atomic_inc_unless_negative
+#if defined(arch_atomic_inc_unless_negative)
+#define raw_atomic_inc_unless_negative arch_atomic_inc_unless_negative
+#else
 static __always_inline bool
-arch_atomic_inc_unless_negative(atomic_t *v)
+raw_atomic_inc_unless_negative(atomic_t *v)
 {
-	int c = arch_atomic_read(v);
+	int c = raw_atomic_read(v);
 
 	do {
 		if (unlikely(c < 0))
 			return false;
-	} while (!arch_atomic_try_cmpxchg(v, &c, c + 1));
+	} while (!raw_atomic_try_cmpxchg(v, &c, c + 1));
 
 	return true;
 }
-#define arch_atomic_inc_unless_negative arch_atomic_inc_unless_negative
 #endif
 
-#ifndef arch_atomic_dec_unless_positive
+#if defined(arch_atomic_dec_unless_positive)
+#define raw_atomic_dec_unless_positive arch_atomic_dec_unless_positive
+#else
 static __always_inline bool
-arch_atomic_dec_unless_positive(atomic_t *v)
+raw_atomic_dec_unless_positive(atomic_t *v)
 {
-	int c = arch_atomic_read(v);
+	int c = raw_atomic_read(v);
 
 	do {
 		if (unlikely(c > 0))
 			return false;
-	} while (!arch_atomic_try_cmpxchg(v, &c, c - 1));
+	} while (!raw_atomic_try_cmpxchg(v, &c, c - 1));
 
 	return true;
 }
-#define arch_atomic_dec_unless_positive arch_atomic_dec_unless_positive
 #endif
 
-#ifndef arch_atomic_dec_if_positive
+#if defined(arch_atomic_dec_if_positive)
+#define raw_atomic_dec_if_positive arch_atomic_dec_if_positive
+#else
 static __always_inline int
-arch_atomic_dec_if_positive(atomic_t *v)
+raw_atomic_dec_if_positive(atomic_t *v)
 {
-	int dec, c = arch_atomic_read(v);
+	int dec, c = raw_atomic_read(v);
 
 	do {
 		dec = c - 1;
 		if (unlikely(dec < 0))
 			break;
-	} while (!arch_atomic_try_cmpxchg(v, &c, dec));
+	} while (!raw_atomic_try_cmpxchg(v, &c, dec));
 
 	return dec;
 }
-#define arch_atomic_dec_if_positive arch_atomic_dec_if_positive
 #endif
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #include <asm-generic/atomic64.h>
 #endif
 
-#ifndef arch_atomic64_read_acquire
+#define raw_atomic64_read arch_atomic64_read
+
+#if defined(arch_atomic64_read_acquire)
+#define raw_atomic64_read_acquire arch_atomic64_read_acquire
+#elif defined(arch_atomic64_read)
+#define raw_atomic64_read_acquire arch_atomic64_read
+#else
 static __always_inline s64
-arch_atomic64_read_acquire(const atomic64_t *v)
+raw_atomic64_read_acquire(const atomic64_t *v)
 {
 	s64 ret;
 
 	if (__native_word(atomic64_t)) {
 		ret = smp_load_acquire(&(v)->counter);
 	} else {
-		ret = arch_atomic64_read(v);
+		ret = raw_atomic64_read(v);
 		__atomic_acquire_fence();
 	}
 
 	return ret;
 }
-#define arch_atomic64_read_acquire arch_atomic64_read_acquire
 #endif
 
-#ifndef arch_atomic64_set_release
+#define raw_atomic64_set arch_atomic64_set
+
+#if defined(arch_atomic64_set_release)
+#define raw_atomic64_set_release arch_atomic64_set_release
+#elif defined(arch_atomic64_set)
+#define raw_atomic64_set_release arch_atomic64_set
+#else
 static __always_inline void
-arch_atomic64_set_release(atomic64_t *v, s64 i)
+raw_atomic64_set_release(atomic64_t *v, s64 i)
 {
 	if (__native_word(atomic64_t)) {
 		smp_store_release(&(v)->counter, i);
 	} else {
 		__atomic_release_fence();
-		arch_atomic64_set(v, i);
+		raw_atomic64_set(v, i);
 	}
 }
-#define arch_atomic64_set_release arch_atomic64_set_release
 #endif
 
-#ifndef arch_atomic64_add_return_relaxed
-#define arch_atomic64_add_return_acquire arch_atomic64_add_return
-#define arch_atomic64_add_return_release arch_atomic64_add_return
-#define arch_atomic64_add_return_relaxed arch_atomic64_add_return
-#else /* arch_atomic64_add_return_relaxed */
+#define raw_atomic64_add arch_atomic64_add
+
+#if defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return arch_atomic64_add_return
+#elif defined(arch_atomic64_add_return_relaxed)
+static __always_inline s64
+raw_atomic64_add_return(s64 i, atomic64_t *v)
+{
+	s64 ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic64_add_return_relaxed(i, v);
+	__atomic_post_full_fence();
+	return ret;
+}
+#else
+#error "Unable to define raw_atomic64_add_return"
+#endif
 
-#ifndef arch_atomic64_add_return_acquire
+#if defined(arch_atomic64_add_return_acquire)
+#define raw_atomic64_add_return_acquire arch_atomic64_add_return_acquire
+#elif defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
-arch_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
 {
 	s64 ret = arch_atomic64_add_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_acquire arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_acquire"
 #endif
 
-#ifndef arch_atomic64_add_return_release
+#if defined(arch_atomic64_add_return_release)
+#define raw_atomic64_add_return_release arch_atomic64_add_return_release
+#elif defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
-arch_atomic64_add_return_release(s64 i, atomic64_t *v)
+raw_atomic64_add_return_release(s64 i, atomic64_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic64_add_return_relaxed(i, v);
 }
-#define arch_atomic64_add_return_release arch_atomic64_add_return_release
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_release arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_release"
+#endif
+
+#if defined(arch_atomic64_add_return_relaxed)
+#define raw_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_relaxed arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_relaxed"
 #endif
 
-#ifndef arch_atomic64_add_return
+#if defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add arch_atomic64_fetch_add
+#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
-arch_atomic64_add_return(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_add_return_relaxed(i, v);
+	ret = arch_atomic64_fetch_add_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_add_return arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_fetch_add"
 #endif
 
-#endif /* arch_atomic64_add_return_relaxed */
-
-#ifndef arch_atomic64_fetch_add_relaxed
-#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add
-#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add
-#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
-#else /* arch_atomic64_fetch_add_relaxed */
-
-#ifndef arch_atomic64_fetch_add_acquire
+#if defined(arch_atomic64_fetch_add_acquire)
+#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
+#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
 {
 	s64 ret = arch_atomic64_fetch_add_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_acquire"
 #endif
 
-#ifndef arch_atomic64_fetch_add_release
+#if defined(arch_atomic64_fetch_add_release)
+#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add_release
+#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic64_fetch_add_relaxed(i, v);
 }
-#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_release"
+#endif
+
+#if defined(arch_atomic64_fetch_add_relaxed)
+#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_relaxed"
 #endif
 
-#ifndef arch_atomic64_fetch_add
+#define raw_atomic64_sub arch_atomic64_sub
+
+#if defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return arch_atomic64_sub_return
+#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_add(s64 i, atomic64_t *v)
+raw_atomic64_sub_return(s64 i, atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_fetch_add_relaxed(i, v);
+	ret = arch_atomic64_sub_return_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_sub_return"
 #endif
 
-#endif /* arch_atomic64_fetch_add_relaxed */
-
-#ifndef arch_atomic64_sub_return_relaxed
-#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return
-#define arch_atomic64_sub_return_release arch_atomic64_sub_return
-#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return
-#else /* arch_atomic64_sub_return_relaxed */
-
-#ifndef arch_atomic64_sub_return_acquire
+#if defined(arch_atomic64_sub_return_acquire)
+#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
+#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
-arch_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
 {
 	s64 ret = arch_atomic64_sub_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_acquire"
 #endif
 
-#ifndef arch_atomic64_sub_return_release
+#if defined(arch_atomic64_sub_return_release)
+#define raw_atomic64_sub_return_release arch_atomic64_sub_return_release
+#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
-arch_atomic64_sub_return_release(s64 i, atomic64_t *v)
+raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic64_sub_return_relaxed(i, v);
 }
-#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_release arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_release"
+#endif
+
+#if defined(arch_atomic64_sub_return_relaxed)
+#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_relaxed"
 #endif
 
-#ifndef arch_atomic64_sub_return
+#if defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub arch_atomic64_fetch_sub
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
-arch_atomic64_sub_return(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_sub_return_relaxed(i, v);
+	ret = arch_atomic64_fetch_sub_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_sub_return arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_fetch_sub"
 #endif
 
-#endif /* arch_atomic64_sub_return_relaxed */
-
-#ifndef arch_atomic64_fetch_sub_relaxed
-#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
-#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub
-#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
-#else /* arch_atomic64_fetch_sub_relaxed */
-
-#ifndef arch_atomic64_fetch_sub_acquire
+#if defined(arch_atomic64_fetch_sub_acquire)
+#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
 {
 	s64 ret = arch_atomic64_fetch_sub_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_acquire"
 #endif
 
-#ifndef arch_atomic64_fetch_sub_release
+#if defined(arch_atomic64_fetch_sub_release)
+#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic64_fetch_sub_relaxed(i, v);
 }
-#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_release"
 #endif
 
-#ifndef arch_atomic64_fetch_sub
-static __always_inline s64
-arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
-{
-	s64 ret;
-	__atomic_pre_full_fence();
-	ret = arch_atomic64_fetch_sub_relaxed(i, v);
-	__atomic_post_full_fence();
-	return ret;
-}
-#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+#if defined(arch_atomic64_fetch_sub_relaxed)
+#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_relaxed"
 #endif
 
-#endif /* arch_atomic64_fetch_sub_relaxed */
-
-#ifndef arch_atomic64_inc
+#if defined(arch_atomic64_inc)
+#define raw_atomic64_inc arch_atomic64_inc
+#else
 static __always_inline void
-arch_atomic64_inc(atomic64_t *v)
+raw_atomic64_inc(atomic64_t *v)
 {
-	arch_atomic64_add(1, v);
+	raw_atomic64_add(1, v);
 }
-#define arch_atomic64_inc arch_atomic64_inc
 #endif
 
-#ifndef arch_atomic64_inc_return_relaxed
-#ifdef arch_atomic64_inc_return
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return
-#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return
-#endif /* arch_atomic64_inc_return */
-
-#ifndef arch_atomic64_inc_return
+#if defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return arch_atomic64_inc_return
+#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
-arch_atomic64_inc_return(atomic64_t *v)
+raw_atomic64_inc_return(atomic64_t *v)
 {
-	return arch_atomic64_add_return(1, v);
+	s64 ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic64_inc_return_relaxed(v);
+	__atomic_post_full_fence();
+	return ret;
 }
-#define arch_atomic64_inc_return arch_atomic64_inc_return
-#endif
-
-#ifndef arch_atomic64_inc_return_acquire
+#else
 static __always_inline s64
-arch_atomic64_inc_return_acquire(atomic64_t *v)
+raw_atomic64_inc_return(atomic64_t *v)
 {
-	return arch_atomic64_add_return_acquire(1, v);
+	return raw_atomic64_add_return(1, v);
 }
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
 #endif
 
-#ifndef arch_atomic64_inc_return_release
+#if defined(arch_atomic64_inc_return_acquire)
+#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
+#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
-arch_atomic64_inc_return_release(atomic64_t *v)
+raw_atomic64_inc_return_acquire(atomic64_t *v)
 {
-	return arch_atomic64_add_return_release(1, v);
+	s64 ret = arch_atomic64_inc_return_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return_release
-#endif
-
-#ifndef arch_atomic64_inc_return_relaxed
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return
+#else
 static __always_inline s64
-arch_atomic64_inc_return_relaxed(atomic64_t *v)
+raw_atomic64_inc_return_acquire(atomic64_t *v)
 {
-	return arch_atomic64_add_return_relaxed(1, v);
+	return raw_atomic64_add_return_acquire(1, v);
 }
-#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
 #endif
 
-#else /* arch_atomic64_inc_return_relaxed */
-
-#ifndef arch_atomic64_inc_return_acquire
+#if defined(arch_atomic64_inc_return_release)
+#define raw_atomic64_inc_return_release arch_atomic64_inc_return_release
+#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
-arch_atomic64_inc_return_acquire(atomic64_t *v)
+raw_atomic64_inc_return_release(atomic64_t *v)
 {
-	s64 ret = arch_atomic64_inc_return_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	__atomic_release_fence();
+	return arch_atomic64_inc_return_relaxed(v);
+}
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_release arch_atomic64_inc_return
+#else
+static __always_inline s64
+raw_atomic64_inc_return_release(atomic64_t *v)
+{
+	return raw_atomic64_add_return_release(1, v);
 }
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
 #endif
 
-#ifndef arch_atomic64_inc_return_release
+#if defined(arch_atomic64_inc_return_relaxed)
+#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return
+#else
 static __always_inline s64
-arch_atomic64_inc_return_release(atomic64_t *v)
+raw_atomic64_inc_return_relaxed(atomic64_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic64_inc_return_relaxed(v);
+	return raw_atomic64_add_return_relaxed(1, v);
 }
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return_release
 #endif
 
-#ifndef arch_atomic64_inc_return
+#if defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc arch_atomic64_fetch_inc
+#elif defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
-arch_atomic64_inc_return(atomic64_t *v)
+raw_atomic64_fetch_inc(atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_inc_return_relaxed(v);
+	ret = arch_atomic64_fetch_inc_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_inc_return arch_atomic64_inc_return
-#endif
-
-#endif /* arch_atomic64_inc_return_relaxed */
-
-#ifndef arch_atomic64_fetch_inc_relaxed
-#ifdef arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
-#endif /* arch_atomic64_fetch_inc */
-
-#ifndef arch_atomic64_fetch_inc
+#else
 static __always_inline s64
-arch_atomic64_fetch_inc(atomic64_t *v)
+raw_atomic64_fetch_inc(atomic64_t *v)
 {
-	return arch_atomic64_fetch_add(1, v);
+	return raw_atomic64_fetch_add(1, v);
 }
-#define arch_atomic64_fetch_inc arch_atomic64_fetch_inc
 #endif
 
-#ifndef arch_atomic64_fetch_inc_acquire
+#if defined(arch_atomic64_fetch_inc_acquire)
+#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
+#elif defined(arch_atomic64_fetch_inc_relaxed)
+static __always_inline s64
+raw_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+	s64 ret = arch_atomic64_fetch_inc_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
+}
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
+#else
 static __always_inline s64
-arch_atomic64_fetch_inc_acquire(atomic64_t *v)
+raw_atomic64_fetch_inc_acquire(atomic64_t *v)
 {
-	return arch_atomic64_fetch_add_acquire(1, v);
+	return raw_atomic64_fetch_add_acquire(1, v);
 }
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
 #endif
 
-#ifndef arch_atomic64_fetch_inc_release
+#if defined(arch_atomic64_fetch_inc_release)
+#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
+#elif defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_inc_release(atomic64_t *v)
+raw_atomic64_fetch_inc_release(atomic64_t *v)
 {
-	return arch_atomic64_fetch_add_release(1, v);
+	__atomic_release_fence();
+	return arch_atomic64_fetch_inc_relaxed(v);
 }
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
-#endif
-
-#ifndef arch_atomic64_fetch_inc_relaxed
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc
+#else
 static __always_inline s64
-arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
+raw_atomic64_fetch_inc_release(atomic64_t *v)
 {
-	return arch_atomic64_fetch_add_relaxed(1, v);
+	return raw_atomic64_fetch_add_release(1, v);
 }
-#define arch_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
 #endif
 
-#else /* arch_atomic64_fetch_inc_relaxed */
-
-#ifndef arch_atomic64_fetch_inc_acquire
+#if defined(arch_atomic64_fetch_inc_relaxed)
+#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
+#else
 static __always_inline s64
-arch_atomic64_fetch_inc_acquire(atomic64_t *v)
+raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
 {
-	s64 ret = arch_atomic64_fetch_inc_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic64_fetch_add_relaxed(1, v);
 }
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
 #endif
 
-#ifndef arch_atomic64_fetch_inc_release
-static __always_inline s64
-arch_atomic64_fetch_inc_release(atomic64_t *v)
+#if defined(arch_atomic64_dec)
+#define raw_atomic64_dec arch_atomic64_dec
+#else
+static __always_inline void
+raw_atomic64_dec(atomic64_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic64_fetch_inc_relaxed(v);
+	raw_atomic64_sub(1, v);
 }
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
 #endif
 
-#ifndef arch_atomic64_fetch_inc
+#if defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return arch_atomic64_dec_return
+#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_inc(atomic64_t *v)
+raw_atomic64_dec_return(atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_fetch_inc_relaxed(v);
+	ret = arch_atomic64_dec_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_inc arch_atomic64_fetch_inc
-#endif
-
-#endif /* arch_atomic64_fetch_inc_relaxed */
-
-#ifndef arch_atomic64_dec
-static __always_inline void
-arch_atomic64_dec(atomic64_t *v)
-{
-	arch_atomic64_sub(1, v);
-}
-#define arch_atomic64_dec arch_atomic64_dec
-#endif
-
-#ifndef arch_atomic64_dec_return_relaxed
-#ifdef arch_atomic64_dec_return
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return
-#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return
-#endif /* arch_atomic64_dec_return */
-
-#ifndef arch_atomic64_dec_return
+#else
 static __always_inline s64
-arch_atomic64_dec_return(atomic64_t *v)
+raw_atomic64_dec_return(atomic64_t *v)
 {
-	return arch_atomic64_sub_return(1, v);
+	return raw_atomic64_sub_return(1, v);
 }
-#define arch_atomic64_dec_return arch_atomic64_dec_return
 #endif
 
-#ifndef arch_atomic64_dec_return_acquire
+#if defined(arch_atomic64_dec_return_acquire)
+#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
+#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
-arch_atomic64_dec_return_acquire(atomic64_t *v)
+raw_atomic64_dec_return_acquire(atomic64_t *v)
 {
-	return arch_atomic64_sub_return_acquire(1, v);
+	s64 ret = arch_atomic64_dec_return_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
}
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
-#endif
-
-#ifndef arch_atomic64_dec_return_release
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return
+#else
 static __always_inline s64
-arch_atomic64_dec_return_release(atomic64_t *v)
+raw_atomic64_dec_return_acquire(atomic64_t *v)
 {
-	return arch_atomic64_sub_return_release(1, v);
+	return raw_atomic64_sub_return_acquire(1, v);
 }
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return_release
 #endif
 
-#ifndef arch_atomic64_dec_return_relaxed
+#if defined(arch_atomic64_dec_return_release)
+#define raw_atomic64_dec_return_release arch_atomic64_dec_return_release
+#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
-arch_atomic64_dec_return_relaxed(atomic64_t *v)
+raw_atomic64_dec_return_release(atomic64_t *v)
 {
-	return arch_atomic64_sub_return_relaxed(1, v);
+	__atomic_release_fence();
+	return arch_atomic64_dec_return_relaxed(v);
 }
-#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
-#endif
-
-#else /* arch_atomic64_dec_return_relaxed */
-
-#ifndef arch_atomic64_dec_return_acquire
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_release arch_atomic64_dec_return
+#else
 static __always_inline s64
-arch_atomic64_dec_return_acquire(atomic64_t *v)
+raw_atomic64_dec_return_release(atomic64_t *v)
 {
-	s64 ret = arch_atomic64_dec_return_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic64_sub_return_release(1, v);
 }
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
 #endif
 
-#ifndef arch_atomic64_dec_return_release
+#if defined(arch_atomic64_dec_return_relaxed)
+#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return
+#else
 static __always_inline s64
-arch_atomic64_dec_return_release(atomic64_t *v)
+raw_atomic64_dec_return_relaxed(atomic64_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic64_dec_return_relaxed(v);
+	return raw_atomic64_sub_return_relaxed(1, v);
 }
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return_release
 #endif
 
-#ifndef arch_atomic64_dec_return
+#if defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec arch_atomic64_fetch_dec
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
-arch_atomic64_dec_return(atomic64_t *v)
+raw_atomic64_fetch_dec(atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_dec_return_relaxed(v);
+	ret = arch_atomic64_fetch_dec_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_dec_return arch_atomic64_dec_return
-#endif
-
-#endif /* arch_atomic64_dec_return_relaxed */
-
-#ifndef arch_atomic64_fetch_dec_relaxed
-#ifdef arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
-#endif /* arch_atomic64_fetch_dec */
-
-#ifndef arch_atomic64_fetch_dec
+#else
 static __always_inline s64
-arch_atomic64_fetch_dec(atomic64_t *v)
+raw_atomic64_fetch_dec(atomic64_t *v)
 {
-	return arch_atomic64_fetch_sub(1, v);
+	return raw_atomic64_fetch_sub(1, v);
 }
-#define arch_atomic64_fetch_dec arch_atomic64_fetch_dec
 #endif
 
-#ifndef arch_atomic64_fetch_dec_acquire
+#if defined(arch_atomic64_fetch_dec_acquire)
+#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_dec_acquire(atomic64_t *v)
+raw_atomic64_fetch_dec_acquire(atomic64_t *v)
 {
-	return arch_atomic64_fetch_sub_acquire(1, v);
+	s64 ret = arch_atomic64_fetch_dec_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
-#endif
-
-#ifndef arch_atomic64_fetch_dec_release
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
+#else
 static __always_inline s64
-arch_atomic64_fetch_dec_release(atomic64_t *v)
+raw_atomic64_fetch_dec_acquire(atomic64_t *v)
 {
-	return arch_atomic64_fetch_sub_release(1, v);
+	return raw_atomic64_fetch_sub_acquire(1, v);
 }
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
 #endif
 
-#ifndef arch_atomic64_fetch_dec_relaxed
+#if defined(arch_atomic64_fetch_dec_release)
+#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
+raw_atomic64_fetch_dec_release(atomic64_t *v)
 {
-	return arch_atomic64_fetch_sub_relaxed(1, v);
+	__atomic_release_fence();
+	return arch_atomic64_fetch_dec_relaxed(v);
 }
-#define arch_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
-#endif
-
-#else /* arch_atomic64_fetch_dec_relaxed */
-
-#ifndef arch_atomic64_fetch_dec_acquire
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec
+#else
 static __always_inline s64
-arch_atomic64_fetch_dec_acquire(atomic64_t *v)
+raw_atomic64_fetch_dec_release(atomic64_t *v)
 {
-	s64 ret = arch_atomic64_fetch_dec_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic64_fetch_sub_release(1, v);
 }
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
 #endif
 
-#ifndef arch_atomic64_fetch_dec_release
+#if defined(arch_atomic64_fetch_dec_relaxed)
+#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
+#else
 static __always_inline s64
-arch_atomic64_fetch_dec_release(atomic64_t *v)
+raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic64_fetch_dec_relaxed(v);
+	return raw_atomic64_fetch_sub_relaxed(1, v);
 }
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
 #endif
 
-#ifndef arch_atomic64_fetch_dec
+#define raw_atomic64_and arch_atomic64_and
+
+#if defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and arch_atomic64_fetch_and
+#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_dec(atomic64_t *v)
+raw_atomic64_fetch_and(s64 i, atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_fetch_dec_relaxed(v);
+	ret = arch_atomic64_fetch_and_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_dec arch_atomic64_fetch_dec
+#else
+#error "Unable to define raw_atomic64_fetch_and"
 #endif
 
-#endif /* arch_atomic64_fetch_dec_relaxed */
-
-#ifndef arch_atomic64_fetch_and_relaxed
-#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and
-#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and
-#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
-#else /* arch_atomic64_fetch_and_relaxed */
-
-#ifndef arch_atomic64_fetch_and_acquire
+#if defined(arch_atomic64_fetch_and_acquire)
+#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
+#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
 {
 	s64 ret = arch_atomic64_fetch_and_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_acquire"
 #endif
 
-#ifndef arch_atomic64_fetch_and_release
+#if defined(arch_atomic64_fetch_and_release)
+#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and_release
+#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic64_fetch_and_relaxed(i, v);
 }
-#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_release"
 #endif
 
-#ifndef arch_atomic64_fetch_and
-static __always_inline s64
-arch_atomic64_fetch_and(s64 i, atomic64_t *v)
-{
-	s64 ret;
-	__atomic_pre_full_fence();
-	ret = arch_atomic64_fetch_and_relaxed(i, v);
-	__atomic_post_full_fence();
-	return ret;
-}
-#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#if defined(arch_atomic64_fetch_and_relaxed)
+#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_relaxed"
 #endif
 
-#endif /* arch_atomic64_fetch_and_relaxed */
-
-#ifndef arch_atomic64_andnot
+#if defined(arch_atomic64_andnot)
+#define raw_atomic64_andnot arch_atomic64_andnot
+#else
 static __always_inline void
-arch_atomic64_andnot(s64 i, atomic64_t *v)
+raw_atomic64_andnot(s64 i, atomic64_t *v)
 {
-	arch_atomic64_and(~i, v);
+	raw_atomic64_and(~i, v);
 }
-#define arch_atomic64_andnot arch_atomic64_andnot
 #endif
 
-#ifndef arch_atomic64_fetch_andnot_relaxed
-#ifdef arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
-#endif /* arch_atomic64_fetch_andnot */
-
-#ifndef arch_atomic64_fetch_andnot
+#if defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot arch_atomic64_fetch_andnot
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
 {
-	return arch_atomic64_fetch_and(~i, v);
+	s64 ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+	__atomic_post_full_fence();
+	return ret;
 }
-#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
-#endif
-
-#ifndef arch_atomic64_fetch_andnot_acquire
+#else
 static __always_inline s64
-arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
 {
-	return arch_atomic64_fetch_and_acquire(~i, v);
+	return raw_atomic64_fetch_and(~i, v);
 }
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
 #endif
 
-#ifndef arch_atomic64_fetch_andnot_release
+#if defined(arch_atomic64_fetch_andnot_acquire)
+#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 {
-	return arch_atomic64_fetch_and_release(~i, v);
+	s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
-#endif
-
-#ifndef arch_atomic64_fetch_andnot_relaxed
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
+#else
 static __always_inline s64
-arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 {
-	return arch_atomic64_fetch_and_relaxed(~i, v);
+	return raw_atomic64_fetch_and_acquire(~i, v);
 }
-#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
 #endif
 
-#else /* arch_atomic64_fetch_andnot_relaxed */
-
-#ifndef arch_atomic64_fetch_andnot_acquire
+#if defined(arch_atomic64_fetch_andnot_release)
+#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 {
-	s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
-	__atomic_acquire_fence();
-	return ret;
+	__atomic_release_fence();
+	return arch_atomic64_fetch_andnot_relaxed(i, v);
+}
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
+#else
+static __always_inline s64
+raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+	return raw_atomic64_fetch_and_release(~i, v);
 }
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
 #endif
 
-#ifndef arch_atomic64_fetch_andnot_release
+#if defined(arch_atomic64_fetch_andnot_relaxed)
+#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
+#else
 static __always_inline s64
-arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic64_fetch_andnot_relaxed(i, v);
+	return raw_atomic64_fetch_and_relaxed(~i, v);
 }
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
 #endif
 
-#ifndef arch_atomic64_fetch_andnot
+#define raw_atomic64_or arch_atomic64_or
+
+#if defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or arch_atomic64_fetch_or
+#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or(s64 i, atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+	ret = arch_atomic64_fetch_or_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
+#else
+#error "Unable to define raw_atomic64_fetch_or"
 #endif
 
-#endif /* arch_atomic64_fetch_andnot_relaxed */
-
-#ifndef arch_atomic64_fetch_or_relaxed
-#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or
-#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or
-#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
-#else /* arch_atomic64_fetch_or_relaxed */
-
-#ifndef arch_atomic64_fetch_or_acquire
+#if defined(arch_atomic64_fetch_or_acquire)
+#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
+#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
 {
 	s64 ret = arch_atomic64_fetch_or_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_acquire"
 #endif
 
-#ifndef arch_atomic64_fetch_or_release
+#if defined(arch_atomic64_fetch_or_release)
+#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or_release
+#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic64_fetch_or_relaxed(i, v);
 }
-#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_release"
+#endif
+
+#if defined(arch_atomic64_fetch_or_relaxed)
+#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_relaxed"
 #endif
 
-#ifndef arch_atomic64_fetch_or
+#define raw_atomic64_xor arch_atomic64_xor
+
+#if defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor arch_atomic64_fetch_xor
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_or(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_fetch_or_relaxed(i, v);
+	ret = arch_atomic64_fetch_xor_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_xor"
 #endif
 
-#endif /* arch_atomic64_fetch_or_relaxed */
-
-#ifndef arch_atomic64_fetch_xor_relaxed
-#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
-#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor
-#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
-#else /* arch_atomic64_fetch_xor_relaxed */
-
-#ifndef arch_atomic64_fetch_xor_acquire
+#if defined(arch_atomic64_fetch_xor_acquire)
+#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
 {
 	s64 ret = arch_atomic64_fetch_xor_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_acquire"
 #endif
 
-#ifndef arch_atomic64_fetch_xor_release
+#if defined(arch_atomic64_fetch_xor_release)
+#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic64_fetch_xor_relaxed(i, v);
 }
-#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_release"
+#endif
+
+#if defined(arch_atomic64_fetch_xor_relaxed)
+#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_relaxed"
 #endif
 
-#ifndef arch_atomic64_fetch_xor
+#if defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg arch_atomic64_xchg
+#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
+raw_atomic64_xchg(atomic64_t *v, s64 i)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_fetch_xor_relaxed(i, v);
+	ret = arch_atomic64_xchg_relaxed(v, i);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
-#endif
-
-#endif /* arch_atomic64_fetch_xor_relaxed */
-
-#ifndef arch_atomic64_xchg_relaxed
-#ifdef arch_atomic64_xchg
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg
-#define arch_atomic64_xchg_release arch_atomic64_xchg
-#define arch_atomic64_xchg_relaxed arch_atomic64_xchg
-#endif /* arch_atomic64_xchg */
-
-#ifndef arch_atomic64_xchg
+#else
 static __always_inline s64
-arch_atomic64_xchg(atomic64_t *v, s64 new)
+raw_atomic64_xchg(atomic64_t *v, s64 new)
 {
-	return arch_xchg(&v->counter, new);
+	return raw_xchg(&v->counter, new);
 }
-#define arch_atomic64_xchg arch_atomic64_xchg
 #endif
 
-#ifndef arch_atomic64_xchg_acquire
+#if defined(arch_atomic64_xchg_acquire)
+#define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire
+#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-arch_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
 {
-	return arch_xchg_acquire(&v->counter, new);
+	s64 ret = arch_atomic64_xchg_relaxed(v, i);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
-#endif
-
-#ifndef arch_atomic64_xchg_release
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_acquire arch_atomic64_xchg
+#else
 static __always_inline s64
-arch_atomic64_xchg_release(atomic64_t *v, s64 new)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
 {
-	return arch_xchg_release(&v->counter, new);
+	return raw_xchg_acquire(&v->counter, new);
 }
-#define arch_atomic64_xchg_release arch_atomic64_xchg_release
 #endif
 
-#ifndef arch_atomic64_xchg_relaxed
+#if defined(arch_atomic64_xchg_release)
+#define raw_atomic64_xchg_release arch_atomic64_xchg_release
+#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+raw_atomic64_xchg_release(atomic64_t *v, s64 i)
 {
-	return arch_xchg_relaxed(&v->counter, new);
+	__atomic_release_fence();
+	return arch_atomic64_xchg_relaxed(v, i);
 }
-#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
-#endif
-
-#else /* arch_atomic64_xchg_relaxed */
-
-#ifndef arch_atomic64_xchg_acquire
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_release arch_atomic64_xchg
+#else
 static __always_inline s64
-arch_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+raw_atomic64_xchg_release(atomic64_t *v, s64 new)
 {
-	s64 ret = arch_atomic64_xchg_relaxed(v, i);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_xchg_release(&v->counter, new);
 }
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
 #endif
 
-#ifndef arch_atomic64_xchg_release
+#if defined(arch_atomic64_xchg_relaxed)
+#define raw_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_relaxed arch_atomic64_xchg
+#else
 static __always_inline s64
-arch_atomic64_xchg_release(atomic64_t *v, s64 i)
+raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
 {
-	__atomic_release_fence();
-	return arch_atomic64_xchg_relaxed(v, i);
+	return raw_xchg_relaxed(&v->counter, new);
 }
-#define arch_atomic64_xchg_release arch_atomic64_xchg_release
 #endif
 
-#ifndef arch_atomic64_xchg
+#if defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg arch_atomic64_cmpxchg
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
-arch_atomic64_xchg(atomic64_t *v, s64 i)
+raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_xchg_relaxed(v, i);
+	ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_xchg arch_atomic64_xchg
-#endif
-
-#endif /* arch_atomic64_xchg_relaxed */
-
-#ifndef arch_atomic64_cmpxchg_relaxed
-#ifdef arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
-#endif /* arch_atomic64_cmpxchg */
-
-#ifndef arch_atomic64_cmpxchg
+#else
 static __always_inline s64
-arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
-	return arch_cmpxchg(&v->counter, old, new);
+	return raw_cmpxchg(&v->counter, old, new);
 }
-#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
 #endif
 
-#ifndef arch_atomic64_cmpxchg_acquire
+#if defined(arch_atomic64_cmpxchg_acquire)
+#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
-arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
 {
-	return arch_cmpxchg_acquire(&v->counter, old, new);
+	s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic64_cmpxchg_release
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
+#else
 static __always_inline s64
-arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
 {
-	return arch_cmpxchg_release(&v->counter, old, new);
+	return raw_cmpxchg_acquire(&v->counter, old, new);
 }
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
 #endif
 
-#ifndef arch_atomic64_cmpxchg_relaxed
+#if defined(arch_atomic64_cmpxchg_release)
+#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
-arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
 {
-	return arch_cmpxchg_relaxed(&v->counter, old, new);
+	__atomic_release_fence();
+	return arch_atomic64_cmpxchg_relaxed(v, old, new);
 }
-#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic64_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_cmpxchg_acquire
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg
+#else
 static __always_inline s64
-arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
 {
-	s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_cmpxchg_release(&v->counter, old, new);
 }
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
 #endif
 
-#ifndef arch_atomic64_cmpxchg_release
+#if defined(arch_atomic64_cmpxchg_relaxed)
+#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
+#else
 static __always_inline s64
-arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
 {
-	__atomic_release_fence();
-	return arch_atomic64_cmpxchg_relaxed(v, old, new);
+	return raw_cmpxchg_relaxed(&v->counter, old, new);
 }
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
 #endif
 
-#ifndef arch_atomic64_cmpxchg
-static __always_inline s64
-arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+#if defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
-	s64 ret;
+	bool ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
+	ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
-#endif
-
-#endif /* arch_atomic64_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_try_cmpxchg_relaxed
-#ifdef arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
-#endif /* arch_atomic64_try_cmpxchg */
-
-#ifndef arch_atomic64_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
 	s64 r, o = *old;
-	r = arch_atomic64_cmpxchg(v, o, new);
+	r = raw_atomic64_cmpxchg(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
 #endif
 
-#ifndef arch_atomic64_try_cmpxchg_acquire
+#if defined(arch_atomic64_try_cmpxchg_acquire)
+#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+	bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+	__atomic_acquire_fence();
+	return ret;
+}
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 {
 	s64 r, o = *old;
-	r = arch_atomic64_cmpxchg_acquire(v, o, new);
+	r = raw_atomic64_cmpxchg_acquire(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
 #endif
 
-#ifndef arch_atomic64_try_cmpxchg_release
+#if defined(arch_atomic64_try_cmpxchg_release)
+#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+	__atomic_release_fence();
+	return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 {
 	s64 r, o = *old;
-	r = arch_atomic64_cmpxchg_release(v, o, new);
+	r = raw_atomic64_cmpxchg_release(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
 #endif
 
-#ifndef arch_atomic64_try_cmpxchg_relaxed
+#if defined(arch_atomic64_try_cmpxchg_relaxed)
+#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
 {
 	s64 r, o = *old;
-	r = arch_atomic64_cmpxchg_relaxed(v, o, new);
+	r = raw_atomic64_cmpxchg_relaxed(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
 #endif
 
-#else /* arch_atomic64_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_try_cmpxchg_acquire
-static __always_inline bool
-arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
-	bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-	__atomic_acquire_fence();
-	return ret;
-}
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic64_try_cmpxchg_release
-static __always_inline bool
-arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
-
__atomic_release_fence(); - return arch_atomic64_try_cmpxchg_relaxed(v, old, new); -} -#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release -#endif - -#ifndef arch_atomic64_try_cmpxchg -static __always_inline bool -arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) -{ - bool ret; - __atomic_pre_full_fence(); - ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new); - __atomic_post_full_fence(); - return ret; -} -#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg -#endif - -#endif /* arch_atomic64_try_cmpxchg_relaxed */ - -#ifndef arch_atomic64_sub_and_test +#if defined(arch_atomic64_sub_and_test) +#define raw_atomic64_sub_and_test arch_atomic64_sub_and_test +#else static __always_inline bool -arch_atomic64_sub_and_test(s64 i, atomic64_t *v) +raw_atomic64_sub_and_test(s64 i, atomic64_t *v) { - return arch_atomic64_sub_return(i, v) == 0; + return raw_atomic64_sub_return(i, v) == 0; } -#define arch_atomic64_sub_and_test arch_atomic64_sub_and_test #endif -#ifndef arch_atomic64_dec_and_test +#if defined(arch_atomic64_dec_and_test) +#define raw_atomic64_dec_and_test arch_atomic64_dec_and_test +#else static __always_inline bool -arch_atomic64_dec_and_test(atomic64_t *v) +raw_atomic64_dec_and_test(atomic64_t *v) { - return arch_atomic64_dec_return(v) == 0; + return raw_atomic64_dec_return(v) == 0; } -#define arch_atomic64_dec_and_test arch_atomic64_dec_and_test #endif -#ifndef arch_atomic64_inc_and_test +#if defined(arch_atomic64_inc_and_test) +#define raw_atomic64_inc_and_test arch_atomic64_inc_and_test +#else static __always_inline bool -arch_atomic64_inc_and_test(atomic64_t *v) +raw_atomic64_inc_and_test(atomic64_t *v) { - return arch_atomic64_inc_return(v) == 0; + return raw_atomic64_inc_return(v) == 0; } -#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test #endif -#ifndef arch_atomic64_add_negative_relaxed -#ifdef arch_atomic64_add_negative -#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative -#define arch_atomic64_add_negative_release arch_atomic64_add_negative -#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative -#endif /* arch_atomic64_add_negative */ - -#ifndef arch_atomic64_add_negative +#if defined(arch_atomic64_add_negative) +#define raw_atomic64_add_negative arch_atomic64_add_negative +#elif defined(arch_atomic64_add_negative_relaxed) static __always_inline bool -arch_atomic64_add_negative(s64 i, atomic64_t *v) +raw_atomic64_add_negative(s64 i, atomic64_t *v) { - return arch_atomic64_add_return(i, v) < 0; + bool ret; + __atomic_pre_full_fence(); + ret = arch_atomic64_add_negative_relaxed(i, v); + __atomic_post_full_fence(); + return ret; } -#define arch_atomic64_add_negative arch_atomic64_add_negative -#endif - -#ifndef arch_atomic64_add_negative_acquire +#else static __always_inline bool -arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +raw_atomic64_add_negative(s64 i, atomic64_t *v) { - return arch_atomic64_add_return_acquire(i, v) < 0; + return raw_atomic64_add_return(i, v) < 0; } -#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire #endif -#ifndef arch_atomic64_add_negative_release +#if defined(arch_atomic64_add_negative_acquire) +#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire +#elif defined(arch_atomic64_add_negative_relaxed) static __always_inline bool -arch_atomic64_add_negative_release(s64 i, atomic64_t *v) +raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) { - return arch_atomic64_add_return_release(i, v) < 0; + bool ret = 
arch_atomic64_add_negative_relaxed(i, v); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release -#endif - -#ifndef arch_atomic64_add_negative_relaxed +#elif defined(arch_atomic64_add_negative) +#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative +#else static __always_inline bool -arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) { - return arch_atomic64_add_return_relaxed(i, v) < 0; + return raw_atomic64_add_return_acquire(i, v) < 0; } -#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed #endif -#else /* arch_atomic64_add_negative_relaxed */ - -#ifndef arch_atomic64_add_negative_acquire +#if defined(arch_atomic64_add_negative_release) +#define raw_atomic64_add_negative_release arch_atomic64_add_negative_release +#elif defined(arch_atomic64_add_negative_relaxed) static __always_inline bool -arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +raw_atomic64_add_negative_release(s64 i, atomic64_t *v) { - bool ret = arch_atomic64_add_negative_relaxed(i, v); - __atomic_acquire_fence(); - return ret; + __atomic_release_fence(); + return arch_atomic64_add_negative_relaxed(i, v); } -#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire -#endif - -#ifndef arch_atomic64_add_negative_release +#elif defined(arch_atomic64_add_negative) +#define raw_atomic64_add_negative_release arch_atomic64_add_negative +#else static __always_inline bool -arch_atomic64_add_negative_release(s64 i, atomic64_t *v) +raw_atomic64_add_negative_release(s64 i, atomic64_t *v) { - __atomic_release_fence(); - return arch_atomic64_add_negative_relaxed(i, v); + return raw_atomic64_add_return_release(i, v) < 0; } -#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release #endif -#ifndef arch_atomic64_add_negative +#if defined(arch_atomic64_add_negative_relaxed) +#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed +#elif defined(arch_atomic64_add_negative) +#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative +#else static __always_inline bool -arch_atomic64_add_negative(s64 i, atomic64_t *v) +raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { - bool ret; - __atomic_pre_full_fence(); - ret = arch_atomic64_add_negative_relaxed(i, v); - __atomic_post_full_fence(); - return ret; + return raw_atomic64_add_return_relaxed(i, v) < 0; } -#define arch_atomic64_add_negative arch_atomic64_add_negative #endif -#endif /* arch_atomic64_add_negative_relaxed */ - -#ifndef arch_atomic64_fetch_add_unless +#if defined(arch_atomic64_fetch_add_unless) +#define raw_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless +#else static __always_inline s64 -arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - s64 c = arch_atomic64_read(v); + s64 c = raw_atomic64_read(v); do { if (unlikely(c == u)) break; - } while (!arch_atomic64_try_cmpxchg(v, &c, c + a)); + } while (!raw_atomic64_try_cmpxchg(v, &c, c + a)); return c; } -#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless #endif -#ifndef arch_atomic64_add_unless +#if defined(arch_atomic64_add_unless) +#define raw_atomic64_add_unless arch_atomic64_add_unless +#else static __always_inline bool -arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { - return arch_atomic64_fetch_add_unless(v, a, u) 
!= u; + return raw_atomic64_fetch_add_unless(v, a, u) != u; } -#define arch_atomic64_add_unless arch_atomic64_add_unless #endif -#ifndef arch_atomic64_inc_not_zero +#if defined(arch_atomic64_inc_not_zero) +#define raw_atomic64_inc_not_zero arch_atomic64_inc_not_zero +#else static __always_inline bool -arch_atomic64_inc_not_zero(atomic64_t *v) +raw_atomic64_inc_not_zero(atomic64_t *v) { - return arch_atomic64_add_unless(v, 1, 0); + return raw_atomic64_add_unless(v, 1, 0); } -#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero #endif -#ifndef arch_atomic64_inc_unless_negative +#if defined(arch_atomic64_inc_unless_negative) +#define raw_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative +#else static __always_inline bool -arch_atomic64_inc_unless_negative(atomic64_t *v) +raw_atomic64_inc_unless_negative(atomic64_t *v) { - s64 c = arch_atomic64_read(v); + s64 c = raw_atomic64_read(v); do { if (unlikely(c < 0)) return false; - } while (!arch_atomic64_try_cmpxchg(v, &c, c + 1)); + } while (!raw_atomic64_try_cmpxchg(v, &c, c + 1)); return true; } -#define arch_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative #endif -#ifndef arch_atomic64_dec_unless_positive +#if defined(arch_atomic64_dec_unless_positive) +#define raw_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive +#else static __always_inline bool -arch_atomic64_dec_unless_positive(atomic64_t *v) +raw_atomic64_dec_unless_positive(atomic64_t *v) { - s64 c = arch_atomic64_read(v); + s64 c = raw_atomic64_read(v); do { if (unlikely(c > 0)) return false; - } while (!arch_atomic64_try_cmpxchg(v, &c, c - 1)); + } while (!raw_atomic64_try_cmpxchg(v, &c, c - 1)); return true; } -#define arch_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive #endif -#ifndef arch_atomic64_dec_if_positive +#if defined(arch_atomic64_dec_if_positive) +#define raw_atomic64_dec_if_positive arch_atomic64_dec_if_positive +#else static __always_inline s64 -arch_atomic64_dec_if_positive(atomic64_t *v) +raw_atomic64_dec_if_positive(atomic64_t *v) { - s64 dec, c = arch_atomic64_read(v); + s64 dec, c = raw_atomic64_read(v); do { dec = c - 1; if (unlikely(dec < 0)) break; - } while (!arch_atomic64_try_cmpxchg(v, &c, dec)); + } while (!raw_atomic64_try_cmpxchg(v, &c, dec)); return dec; } -#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive #endif #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// e1cee558cc61cae887890db30fcdf93baca9f498 +// c2048fccede6fac923252290e2b303949d5dec83 diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h deleted file mode 100644 index 8b2fc04cf8c54..0000000000000 --- a/include/linux/atomic/atomic-raw.h +++ /dev/null @@ -1,1135 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 - -// Generated by scripts/atomic/gen-atomic-raw.sh -// DO NOT MODIFY THIS FILE DIRECTLY - -#ifndef _LINUX_ATOMIC_RAW_H -#define _LINUX_ATOMIC_RAW_H - -static __always_inline int -raw_atomic_read(const atomic_t *v) -{ - return arch_atomic_read(v); -} - -static __always_inline int -raw_atomic_read_acquire(const atomic_t *v) -{ - return arch_atomic_read_acquire(v); -} - -static __always_inline void -raw_atomic_set(atomic_t *v, int i) -{ - arch_atomic_set(v, i); -} - -static __always_inline void -raw_atomic_set_release(atomic_t *v, int i) -{ - arch_atomic_set_release(v, i); -} - -static __always_inline void -raw_atomic_add(int i, atomic_t *v) -{ - arch_atomic_add(i, v); -} - -static __always_inline int -raw_atomic_add_return(int i, atomic_t *v) -{ - return arch_atomic_add_return(i, v); -} - 
-static __always_inline int -raw_atomic_add_return_acquire(int i, atomic_t *v) -{ - return arch_atomic_add_return_acquire(i, v); -} - -static __always_inline int -raw_atomic_add_return_release(int i, atomic_t *v) -{ - return arch_atomic_add_return_release(i, v); -} - -static __always_inline int -raw_atomic_add_return_relaxed(int i, atomic_t *v) -{ - return arch_atomic_add_return_relaxed(i, v); -} - -static __always_inline int -raw_atomic_fetch_add(int i, atomic_t *v) -{ - return arch_atomic_fetch_add(i, v); -} - -static __always_inline int -raw_atomic_fetch_add_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_add_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_add_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_add_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_add_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_add_relaxed(i, v); -} - -static __always_inline void -raw_atomic_sub(int i, atomic_t *v) -{ - arch_atomic_sub(i, v); -} - -static __always_inline int -raw_atomic_sub_return(int i, atomic_t *v) -{ - return arch_atomic_sub_return(i, v); -} - -static __always_inline int -raw_atomic_sub_return_acquire(int i, atomic_t *v) -{ - return arch_atomic_sub_return_acquire(i, v); -} - -static __always_inline int -raw_atomic_sub_return_release(int i, atomic_t *v) -{ - return arch_atomic_sub_return_release(i, v); -} - -static __always_inline int -raw_atomic_sub_return_relaxed(int i, atomic_t *v) -{ - return arch_atomic_sub_return_relaxed(i, v); -} - -static __always_inline int -raw_atomic_fetch_sub(int i, atomic_t *v) -{ - return arch_atomic_fetch_sub(i, v); -} - -static __always_inline int -raw_atomic_fetch_sub_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_sub_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_sub_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_sub_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_sub_relaxed(i, v); -} - -static __always_inline void -raw_atomic_inc(atomic_t *v) -{ - arch_atomic_inc(v); -} - -static __always_inline int -raw_atomic_inc_return(atomic_t *v) -{ - return arch_atomic_inc_return(v); -} - -static __always_inline int -raw_atomic_inc_return_acquire(atomic_t *v) -{ - return arch_atomic_inc_return_acquire(v); -} - -static __always_inline int -raw_atomic_inc_return_release(atomic_t *v) -{ - return arch_atomic_inc_return_release(v); -} - -static __always_inline int -raw_atomic_inc_return_relaxed(atomic_t *v) -{ - return arch_atomic_inc_return_relaxed(v); -} - -static __always_inline int -raw_atomic_fetch_inc(atomic_t *v) -{ - return arch_atomic_fetch_inc(v); -} - -static __always_inline int -raw_atomic_fetch_inc_acquire(atomic_t *v) -{ - return arch_atomic_fetch_inc_acquire(v); -} - -static __always_inline int -raw_atomic_fetch_inc_release(atomic_t *v) -{ - return arch_atomic_fetch_inc_release(v); -} - -static __always_inline int -raw_atomic_fetch_inc_relaxed(atomic_t *v) -{ - return arch_atomic_fetch_inc_relaxed(v); -} - -static __always_inline void -raw_atomic_dec(atomic_t *v) -{ - arch_atomic_dec(v); -} - -static __always_inline int -raw_atomic_dec_return(atomic_t *v) -{ - return arch_atomic_dec_return(v); -} - -static __always_inline int -raw_atomic_dec_return_acquire(atomic_t *v) -{ - return arch_atomic_dec_return_acquire(v); -} - -static __always_inline int -raw_atomic_dec_return_release(atomic_t *v) -{ - return arch_atomic_dec_return_release(v); -} - -static __always_inline 
int -raw_atomic_dec_return_relaxed(atomic_t *v) -{ - return arch_atomic_dec_return_relaxed(v); -} - -static __always_inline int -raw_atomic_fetch_dec(atomic_t *v) -{ - return arch_atomic_fetch_dec(v); -} - -static __always_inline int -raw_atomic_fetch_dec_acquire(atomic_t *v) -{ - return arch_atomic_fetch_dec_acquire(v); -} - -static __always_inline int -raw_atomic_fetch_dec_release(atomic_t *v) -{ - return arch_atomic_fetch_dec_release(v); -} - -static __always_inline int -raw_atomic_fetch_dec_relaxed(atomic_t *v) -{ - return arch_atomic_fetch_dec_relaxed(v); -} - -static __always_inline void -raw_atomic_and(int i, atomic_t *v) -{ - arch_atomic_and(i, v); -} - -static __always_inline int -raw_atomic_fetch_and(int i, atomic_t *v) -{ - return arch_atomic_fetch_and(i, v); -} - -static __always_inline int -raw_atomic_fetch_and_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_and_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_and_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_and_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_and_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_and_relaxed(i, v); -} - -static __always_inline void -raw_atomic_andnot(int i, atomic_t *v) -{ - arch_atomic_andnot(i, v); -} - -static __always_inline int -raw_atomic_fetch_andnot(int i, atomic_t *v) -{ - return arch_atomic_fetch_andnot(i, v); -} - -static __always_inline int -raw_atomic_fetch_andnot_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_andnot_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_andnot_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_andnot_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_andnot_relaxed(i, v); -} - -static __always_inline void -raw_atomic_or(int i, atomic_t *v) -{ - arch_atomic_or(i, v); -} - -static __always_inline int -raw_atomic_fetch_or(int i, atomic_t *v) -{ - return arch_atomic_fetch_or(i, v); -} - -static __always_inline int -raw_atomic_fetch_or_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_or_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_or_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_or_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_or_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_or_relaxed(i, v); -} - -static __always_inline void -raw_atomic_xor(int i, atomic_t *v) -{ - arch_atomic_xor(i, v); -} - -static __always_inline int -raw_atomic_fetch_xor(int i, atomic_t *v) -{ - return arch_atomic_fetch_xor(i, v); -} - -static __always_inline int -raw_atomic_fetch_xor_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_xor_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_xor_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_xor_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_xor_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_xor_relaxed(i, v); -} - -static __always_inline int -raw_atomic_xchg(atomic_t *v, int i) -{ - return arch_atomic_xchg(v, i); -} - -static __always_inline int -raw_atomic_xchg_acquire(atomic_t *v, int i) -{ - return arch_atomic_xchg_acquire(v, i); -} - -static __always_inline int -raw_atomic_xchg_release(atomic_t *v, int i) -{ - return arch_atomic_xchg_release(v, i); -} - -static __always_inline int -raw_atomic_xchg_relaxed(atomic_t *v, int i) -{ - return arch_atomic_xchg_relaxed(v, i); -} - -static __always_inline int 
-raw_atomic_cmpxchg(atomic_t *v, int old, int new) -{ - return arch_atomic_cmpxchg(v, old, new); -} - -static __always_inline int -raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) -{ - return arch_atomic_cmpxchg_acquire(v, old, new); -} - -static __always_inline int -raw_atomic_cmpxchg_release(atomic_t *v, int old, int new) -{ - return arch_atomic_cmpxchg_release(v, old, new); -} - -static __always_inline int -raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) -{ - return arch_atomic_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) -{ - return arch_atomic_try_cmpxchg(v, old, new); -} - -static __always_inline bool -raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) -{ - return arch_atomic_try_cmpxchg_acquire(v, old, new); -} - -static __always_inline bool -raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) -{ - return arch_atomic_try_cmpxchg_release(v, old, new); -} - -static __always_inline bool -raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) -{ - return arch_atomic_try_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_sub_and_test(int i, atomic_t *v) -{ - return arch_atomic_sub_and_test(i, v); -} - -static __always_inline bool -raw_atomic_dec_and_test(atomic_t *v) -{ - return arch_atomic_dec_and_test(v); -} - -static __always_inline bool -raw_atomic_inc_and_test(atomic_t *v) -{ - return arch_atomic_inc_and_test(v); -} - -static __always_inline bool -raw_atomic_add_negative(int i, atomic_t *v) -{ - return arch_atomic_add_negative(i, v); -} - -static __always_inline bool -raw_atomic_add_negative_acquire(int i, atomic_t *v) -{ - return arch_atomic_add_negative_acquire(i, v); -} - -static __always_inline bool -raw_atomic_add_negative_release(int i, atomic_t *v) -{ - return arch_atomic_add_negative_release(i, v); -} - -static __always_inline bool -raw_atomic_add_negative_relaxed(int i, atomic_t *v) -{ - return arch_atomic_add_negative_relaxed(i, v); -} - -static __always_inline int -raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - return arch_atomic_fetch_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_add_unless(atomic_t *v, int a, int u) -{ - return arch_atomic_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_inc_not_zero(atomic_t *v) -{ - return arch_atomic_inc_not_zero(v); -} - -static __always_inline bool -raw_atomic_inc_unless_negative(atomic_t *v) -{ - return arch_atomic_inc_unless_negative(v); -} - -static __always_inline bool -raw_atomic_dec_unless_positive(atomic_t *v) -{ - return arch_atomic_dec_unless_positive(v); -} - -static __always_inline int -raw_atomic_dec_if_positive(atomic_t *v) -{ - return arch_atomic_dec_if_positive(v); -} - -static __always_inline s64 -raw_atomic64_read(const atomic64_t *v) -{ - return arch_atomic64_read(v); -} - -static __always_inline s64 -raw_atomic64_read_acquire(const atomic64_t *v) -{ - return arch_atomic64_read_acquire(v); -} - -static __always_inline void -raw_atomic64_set(atomic64_t *v, s64 i) -{ - arch_atomic64_set(v, i); -} - -static __always_inline void -raw_atomic64_set_release(atomic64_t *v, s64 i) -{ - arch_atomic64_set_release(v, i); -} - -static __always_inline void -raw_atomic64_add(s64 i, atomic64_t *v) -{ - arch_atomic64_add(i, v); -} - -static __always_inline s64 -raw_atomic64_add_return(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_return(i, v); -} - -static __always_inline s64 -raw_atomic64_add_return_acquire(s64 i, atomic64_t 
*v) -{ - return arch_atomic64_add_return_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_add_return_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_return_release(i, v); -} - -static __always_inline s64 -raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_return_relaxed(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_add(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_add_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_add_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_add_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_sub(s64 i, atomic64_t *v) -{ - arch_atomic64_sub(i, v); -} - -static __always_inline s64 -raw_atomic64_sub_return(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_return(i, v); -} - -static __always_inline s64 -raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_return_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_sub_return_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_return_release(i, v); -} - -static __always_inline s64 -raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_return_relaxed(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_sub(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_sub(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_sub_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_sub_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_sub_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_inc(atomic64_t *v) -{ - arch_atomic64_inc(v); -} - -static __always_inline s64 -raw_atomic64_inc_return(atomic64_t *v) -{ - return arch_atomic64_inc_return(v); -} - -static __always_inline s64 -raw_atomic64_inc_return_acquire(atomic64_t *v) -{ - return arch_atomic64_inc_return_acquire(v); -} - -static __always_inline s64 -raw_atomic64_inc_return_release(atomic64_t *v) -{ - return arch_atomic64_inc_return_release(v); -} - -static __always_inline s64 -raw_atomic64_inc_return_relaxed(atomic64_t *v) -{ - return arch_atomic64_inc_return_relaxed(v); -} - -static __always_inline s64 -raw_atomic64_fetch_inc(atomic64_t *v) -{ - return arch_atomic64_fetch_inc(v); -} - -static __always_inline s64 -raw_atomic64_fetch_inc_acquire(atomic64_t *v) -{ - return arch_atomic64_fetch_inc_acquire(v); -} - -static __always_inline s64 -raw_atomic64_fetch_inc_release(atomic64_t *v) -{ - return arch_atomic64_fetch_inc_release(v); -} - -static __always_inline s64 -raw_atomic64_fetch_inc_relaxed(atomic64_t *v) -{ - return arch_atomic64_fetch_inc_relaxed(v); -} - -static __always_inline void -raw_atomic64_dec(atomic64_t *v) -{ - arch_atomic64_dec(v); -} - -static __always_inline s64 -raw_atomic64_dec_return(atomic64_t *v) -{ - return arch_atomic64_dec_return(v); -} - -static __always_inline s64 -raw_atomic64_dec_return_acquire(atomic64_t *v) -{ - return arch_atomic64_dec_return_acquire(v); -} - -static __always_inline s64 
-raw_atomic64_dec_return_release(atomic64_t *v) -{ - return arch_atomic64_dec_return_release(v); -} - -static __always_inline s64 -raw_atomic64_dec_return_relaxed(atomic64_t *v) -{ - return arch_atomic64_dec_return_relaxed(v); -} - -static __always_inline s64 -raw_atomic64_fetch_dec(atomic64_t *v) -{ - return arch_atomic64_fetch_dec(v); -} - -static __always_inline s64 -raw_atomic64_fetch_dec_acquire(atomic64_t *v) -{ - return arch_atomic64_fetch_dec_acquire(v); -} - -static __always_inline s64 -raw_atomic64_fetch_dec_release(atomic64_t *v) -{ - return arch_atomic64_fetch_dec_release(v); -} - -static __always_inline s64 -raw_atomic64_fetch_dec_relaxed(atomic64_t *v) -{ - return arch_atomic64_fetch_dec_relaxed(v); -} - -static __always_inline void -raw_atomic64_and(s64 i, atomic64_t *v) -{ - arch_atomic64_and(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_and(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_and(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_and_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_and_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_and_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_andnot(s64 i, atomic64_t *v) -{ - arch_atomic64_andnot(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_andnot(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_andnot_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_andnot_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_andnot_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_or(s64 i, atomic64_t *v) -{ - arch_atomic64_or(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_or(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_or(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_or_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_or_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_or_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_xor(s64 i, atomic64_t *v) -{ - arch_atomic64_xor(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_xor(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_xor(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_xor_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_xor_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_xor_relaxed(i, v); -} - -static __always_inline s64 -raw_atomic64_xchg(atomic64_t *v, s64 i) -{ - return arch_atomic64_xchg(v, i); -} - -static __always_inline s64 -raw_atomic64_xchg_acquire(atomic64_t *v, s64 i) -{ - return arch_atomic64_xchg_acquire(v, i); -} - 
-static __always_inline s64 -raw_atomic64_xchg_release(atomic64_t *v, s64 i) -{ - return arch_atomic64_xchg_release(v, i); -} - -static __always_inline s64 -raw_atomic64_xchg_relaxed(atomic64_t *v, s64 i) -{ - return arch_atomic64_xchg_relaxed(v, i); -} - -static __always_inline s64 -raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) -{ - return arch_atomic64_cmpxchg(v, old, new); -} - -static __always_inline s64 -raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) -{ - return arch_atomic64_cmpxchg_acquire(v, old, new); -} - -static __always_inline s64 -raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) -{ - return arch_atomic64_cmpxchg_release(v, old, new); -} - -static __always_inline s64 -raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) -{ - return arch_atomic64_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) -{ - return arch_atomic64_try_cmpxchg(v, old, new); -} - -static __always_inline bool -raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) -{ - return arch_atomic64_try_cmpxchg_acquire(v, old, new); -} - -static __always_inline bool -raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) -{ - return arch_atomic64_try_cmpxchg_release(v, old, new); -} - -static __always_inline bool -raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) -{ - return arch_atomic64_try_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic64_sub_and_test(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_and_test(i, v); -} - -static __always_inline bool -raw_atomic64_dec_and_test(atomic64_t *v) -{ - return arch_atomic64_dec_and_test(v); -} - -static __always_inline bool -raw_atomic64_inc_and_test(atomic64_t *v) -{ - return arch_atomic64_inc_and_test(v); -} - -static __always_inline bool -raw_atomic64_add_negative(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_negative(i, v); -} - -static __always_inline bool -raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_negative_acquire(i, v); -} - -static __always_inline bool -raw_atomic64_add_negative_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_negative_release(i, v); -} - -static __always_inline bool -raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_negative_relaxed(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) -{ - return arch_atomic64_fetch_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) -{ - return arch_atomic64_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic64_inc_not_zero(atomic64_t *v) -{ - return arch_atomic64_inc_not_zero(v); -} - -static __always_inline bool -raw_atomic64_inc_unless_negative(atomic64_t *v) -{ - return arch_atomic64_inc_unless_negative(v); -} - -static __always_inline bool -raw_atomic64_dec_unless_positive(atomic64_t *v) -{ - return arch_atomic64_dec_unless_positive(v); -} - -static __always_inline s64 -raw_atomic64_dec_if_positive(atomic64_t *v) -{ - return arch_atomic64_dec_if_positive(v); -} - -#define raw_xchg(...) \ - arch_xchg(__VA_ARGS__) - -#define raw_xchg_acquire(...) \ - arch_xchg_acquire(__VA_ARGS__) - -#define raw_xchg_release(...) \ - arch_xchg_release(__VA_ARGS__) - -#define raw_xchg_relaxed(...) \ - arch_xchg_relaxed(__VA_ARGS__) - -#define raw_cmpxchg(...) \ - arch_cmpxchg(__VA_ARGS__) - -#define raw_cmpxchg_acquire(...) 
\ - arch_cmpxchg_acquire(__VA_ARGS__) - -#define raw_cmpxchg_release(...) \ - arch_cmpxchg_release(__VA_ARGS__) - -#define raw_cmpxchg_relaxed(...) \ - arch_cmpxchg_relaxed(__VA_ARGS__) - -#define raw_cmpxchg64(...) \ - arch_cmpxchg64(__VA_ARGS__) - -#define raw_cmpxchg64_acquire(...) \ - arch_cmpxchg64_acquire(__VA_ARGS__) - -#define raw_cmpxchg64_release(...) \ - arch_cmpxchg64_release(__VA_ARGS__) - -#define raw_cmpxchg64_relaxed(...) \ - arch_cmpxchg64_relaxed(__VA_ARGS__) - -#define raw_cmpxchg128(...) \ - arch_cmpxchg128(__VA_ARGS__) - -#define raw_cmpxchg128_acquire(...) \ - arch_cmpxchg128_acquire(__VA_ARGS__) - -#define raw_cmpxchg128_release(...) \ - arch_cmpxchg128_release(__VA_ARGS__) - -#define raw_cmpxchg128_relaxed(...) \ - arch_cmpxchg128_relaxed(__VA_ARGS__) - -#define raw_try_cmpxchg(...) \ - arch_try_cmpxchg(__VA_ARGS__) - -#define raw_try_cmpxchg_acquire(...) \ - arch_try_cmpxchg_acquire(__VA_ARGS__) - -#define raw_try_cmpxchg_release(...) \ - arch_try_cmpxchg_release(__VA_ARGS__) - -#define raw_try_cmpxchg_relaxed(...) \ - arch_try_cmpxchg_relaxed(__VA_ARGS__) - -#define raw_try_cmpxchg64(...) \ - arch_try_cmpxchg64(__VA_ARGS__) - -#define raw_try_cmpxchg64_acquire(...) \ - arch_try_cmpxchg64_acquire(__VA_ARGS__) - -#define raw_try_cmpxchg64_release(...) \ - arch_try_cmpxchg64_release(__VA_ARGS__) - -#define raw_try_cmpxchg64_relaxed(...) \ - arch_try_cmpxchg64_relaxed(__VA_ARGS__) - -#define raw_try_cmpxchg128(...) \ - arch_try_cmpxchg128(__VA_ARGS__) - -#define raw_try_cmpxchg128_acquire(...) \ - arch_try_cmpxchg128_acquire(__VA_ARGS__) - -#define raw_try_cmpxchg128_release(...) \ - arch_try_cmpxchg128_release(__VA_ARGS__) - -#define raw_try_cmpxchg128_relaxed(...) \ - arch_try_cmpxchg128_relaxed(__VA_ARGS__) - -#define raw_cmpxchg_local(...) \ - arch_cmpxchg_local(__VA_ARGS__) - -#define raw_cmpxchg64_local(...) \ - arch_cmpxchg64_local(__VA_ARGS__) - -#define raw_cmpxchg128_local(...) \ - arch_cmpxchg128_local(__VA_ARGS__) - -#define raw_sync_cmpxchg(...) \ - arch_sync_cmpxchg(__VA_ARGS__) - -#define raw_try_cmpxchg_local(...) \ - arch_try_cmpxchg_local(__VA_ARGS__) - -#define raw_try_cmpxchg64_local(...) \ - arch_try_cmpxchg64_local(__VA_ARGS__) - -#define raw_try_cmpxchg128_local(...) 
\ - arch_try_cmpxchg128_local(__VA_ARGS__) - -#endif /* _LINUX_ATOMIC_RAW_H */ -// b23ed4424e85200e200ded094522e1d743b3a5b1 diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire index ef764085c79aa..b0f732a5c46ef 100755 --- a/scripts/atomic/fallbacks/acquire +++ b/scripts/atomic/fallbacks/acquire @@ -1,6 +1,6 @@ cat <counter, old, new); + return raw_cmpxchg${order}(&v->counter, old, new); } EOF diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec index 8c144c818e9ed..a660ac65994bd 100755 --- a/scripts/atomic/fallbacks/dec +++ b/scripts/atomic/fallbacks/dec @@ -1,7 +1,7 @@ cat < 0)) return false; - } while (!arch_${atomic}_try_cmpxchg(v, &c, c - 1)); + } while (!raw_${atomic}_try_cmpxchg(v, &c, c - 1)); return true; } diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence index 07757d8e338ef..067eea553f5e0 100755 --- a/scripts/atomic/fallbacks/fence +++ b/scripts/atomic/fallbacks/fence @@ -1,6 +1,6 @@ cat <counter); } else { - ret = arch_${atomic}_read(v); + ret = raw_${atomic}_read(v); __atomic_acquire_fence(); } diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release index b46feb56d69ca..cbbff708129b8 100755 --- a/scripts/atomic/fallbacks/release +++ b/scripts/atomic/fallbacks/release @@ -1,6 +1,6 @@ cat <counter, i); } else { __atomic_release_fence(); - arch_${atomic}_set(v, i); + raw_${atomic}_set(v, i); } } EOF diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test index da8a049c9b02b..8975a496d495c 100755 --- a/scripts/atomic/fallbacks/sub_and_test +++ b/scripts/atomic/fallbacks/sub_and_test @@ -1,7 +1,7 @@ cat <counter, new); + return raw_xchg${order}(&v->counter, new); } EOF diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh index 337330865fa2e..86aca4f9f315a 100755 --- a/scripts/atomic/gen-atomic-fallback.sh +++ b/scripts/atomic/gen-atomic-fallback.sh @@ -17,19 +17,12 @@ gen_template_fallback() local atomic="$1"; shift local int="$1"; shift - local atomicname="arch_${atomic}_${pfx}${name}${sfx}${order}" - local ret="$(gen_ret_type "${meta}" "${int}")" local retstmt="$(gen_ret_stmt "${meta}")" local params="$(gen_params "${int}" "${atomic}" "$@")" local args="$(gen_args "$@")" - if [ ! -z "${template}" ]; then - printf "#ifndef ${atomicname}\n" - . ${template} - printf "#define ${atomicname} ${atomicname}\n" - printf "#endif\n\n" - fi + . ${template} } #gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...) @@ -59,69 +52,92 @@ gen_proto_fallback() gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" } -#gen_basic_fallbacks(basename) -gen_basic_fallbacks() -{ - local basename="$1"; shift -cat << EOF -#define ${basename}_acquire ${basename} -#define ${basename}_release ${basename} -#define ${basename}_relaxed ${basename} -EOF -} - -#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...) -gen_proto_order_variants() +#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, args...) 
+gen_proto_order_variant() { local meta="$1"; shift local pfx="$1"; shift local name="$1"; shift local sfx="$1"; shift + local order="$1"; shift local atomic="$1" - local basename="arch_${atomic}_${pfx}${name}${sfx}" - - local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")" + local atomicname="${atomic}_${pfx}${name}${sfx}${order}" + local basename="${atomic}_${pfx}${name}${sfx}" - # If we don't have relaxed atomics, then we don't bother with ordering fallbacks - # read_acquire and set_release need to be templated, though - if ! meta_has_relaxed "${meta}"; then - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@" + local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")" - if meta_has_acquire "${meta}"; then - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@" - fi + # Where there is no possible fallback, this order variant is mandatory + # and must be provided by arch code. Add a comment to the header to + # make this obvious. + # + # Ideally we'd error on a missing definition, but arch code might + # define this order variant as a C function without a preprocessor + # symbol. + if [ -z ${template} ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then + printf "#define raw_${atomicname} arch_${atomicname}\n\n" + return + fi - if meta_has_release "${meta}"; then - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@" - fi + printf "#if defined(arch_${atomicname})\n" + printf "#define raw_${atomicname} arch_${atomicname}\n" - return + # Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops + if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then + printf "#elif defined(arch_${basename}_relaxed)\n" + gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" fi - printf "#ifndef ${basename}_relaxed\n" + # Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops + if [ ! -z "${order}" ]; then + printf "#elif defined(arch_${basename})\n" + printf "#define raw_${atomicname} arch_${basename}\n" + fi + printf "#else\n" if [ ! -z "${template}" ]; then - printf "#ifdef ${basename}\n" + gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" + else + printf "#error \"Unable to define raw_${atomicname}\"\n" fi - gen_basic_fallbacks "${basename}" + printf "#endif\n\n" +} - if [ ! -z "${template}" ]; then - printf "#endif /* ${basename} */\n\n" - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@" - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@" - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@" - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@" + +#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...) 
+gen_proto_order_variants() +{ + local meta="$1"; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local atomic="$1" + + gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@" + + if meta_has_acquire "${meta}"; then + gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@" fi - printf "#else /* ${basename}_relaxed */\n\n" + if meta_has_release "${meta}"; then + gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@" fi - gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@" - gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@" - gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@" + if meta_has_relaxed "${meta}"; then + gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@" + fi +} - printf "#endif /* ${basename}_relaxed */\n\n" +#gen_basic_fallbacks(basename) +gen_basic_fallbacks() +{ + local basename="$1"; shift +cat << EOF +#define raw_${basename}_acquire arch_${basename} +#define raw_${basename}_release arch_${basename} +#define raw_${basename}_relaxed arch_${basename} +EOF } gen_order_fallbacks() @@ -130,36 +146,65 @@ gen_order_fallbacks() cat < ${LINUXDIR}/include/${header} From patchwork Mon May 22 12:24:24 2023 X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 97400
From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH 21/26] locking/atomic: scripts: split pfx/name/sfx/order Date: Mon, 22 May 2023 13:24:24 +0100 Message-Id: <20230522122429.1915021-22-mark.rutland@arm.com> In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com> References: <20230522122429.1915021-1-mark.rutland@arm.com> Currently gen-atomic-long.sh's gen_proto_order_variant() function combines the pfx/name/sfx/order variables immediately, unlike other functions in gen-atomic-*.sh. This is fine today, but subsequent patches will require the individual pfx/name/sfx/order variables within gen-atomic-long.sh's gen_proto_order_variant() function. In preparation for this, split the variables in the style of the other gen-atomic-*.sh scripts. This results in no change to the generated headers, so there should be no functional change as a result of this patch.
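As an illustrative sketch (the concrete values here are assumed for illustration, not spelled out by the patch): for an op such as atomic_long_fetch_add_acquire(), the split variables would be pfx="fetch_", name="add", sfx="", order="_acquire", and the diff's new | local atomicname="${pfx}${name}${sfx}${order}" line recombines them into "fetch_add_acquire" only at the point where the combined name is actually needed.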
Signed-off-by: Mark Rutland Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Cc: Paul E. McKenney --- scripts/atomic/gen-atomic-long.sh | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh index 75e91d6da30d3..13832171f7219 100755 --- a/scripts/atomic/gen-atomic-long.sh +++ b/scripts/atomic/gen-atomic-long.sh @@ -36,10 +36,15 @@ gen_args_cast() gen_proto_order_variant() { local meta="$1"; shift - local name="$1$2$3$4"; shift; shift; shift; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local order="$1"; shift local atomic="$1"; shift local int="$1"; shift + local atomicname="${pfx}${name}${sfx}${order}" + local ret="$(gen_ret_type "${meta}" "long")" local params="$(gen_params "long" "atomic_long" "$@")" local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")" @@ -47,9 +52,9 @@ gen_proto_order_variant() cat < X-Patchwork-Id: 97419
From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH 22/26] locking/atomic: scripts: simplify raw_atomic_long*() definitions Date: Mon, 22 May 2023 13:24:25 +0100 Message-Id: <20230522122429.1915021-23-mark.rutland@arm.com> In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com> References: <20230522122429.1915021-1-mark.rutland@arm.com> Currently, atomic-long is split into two sections, one defining the raw_atomic_long_*() ops for CONFIG_64BIT, and one defining the raw_atomic_long_*() ops for !CONFIG_64BIT. With many lines elided, this looks like: | #ifdef CONFIG_64BIT | ... | static __always_inline bool | raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) | { | return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); | } | ... | #else /* CONFIG_64BIT */ | ... | static __always_inline bool | raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) | { | return raw_atomic_try_cmpxchg(v, (int *)old, new); | } | ... | #endif The two definitions are spread far apart in the file, and duplicate the prototype, making it hard to have a legible set of kerneldoc comments. Make this simpler by defining the C prototype once, and writing the two definitions inline.
For example, the above becomes: | static __always_inline bool | raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) | { | #ifdef CONFIG_64BIT | return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); | #else | return raw_atomic_try_cmpxchg(v, (int *)old, new); | #endif | } As we now always have a single copy of the C prototype wrapping all the potential definitions, we now have an obvious single location for kerneldoc comments. As a bonus, both the script and the generated file are somewhat shorter. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Cc: Paul E. McKenney --- include/linux/atomic/atomic-long.h | 857 ++++++++++++----------------- scripts/atomic/gen-atomic-long.sh | 27 +- 2 files changed, 350 insertions(+), 534 deletions(-) diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index 92dc82ce1ce6d..63e0b4078ebd5 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -21,1030 +21,855 @@ typedef atomic_t atomic_long_t; #define atomic_long_cond_read_relaxed atomic_cond_read_relaxed #endif -#ifdef CONFIG_64BIT - -static __always_inline long -raw_atomic_long_read(const atomic_long_t *v) -{ - return raw_atomic64_read(v); -} - -static __always_inline long -raw_atomic_long_read_acquire(const atomic_long_t *v) -{ - return raw_atomic64_read_acquire(v); -} - -static __always_inline void -raw_atomic_long_set(atomic_long_t *v, long i) -{ - raw_atomic64_set(v, i); -} - -static __always_inline void -raw_atomic_long_set_release(atomic_long_t *v, long i) -{ - raw_atomic64_set_release(v, i); -} - -static __always_inline void -raw_atomic_long_add(long i, atomic_long_t *v) -{ - raw_atomic64_add(i, v); -} - -static __always_inline long -raw_atomic_long_add_return(long i, atomic_long_t *v) -{ - return raw_atomic64_add_return(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_add_return_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_release(long i, atomic_long_t *v) -{ - return raw_atomic64_add_return_release(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_add_return_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_add(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_add_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_add_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_add_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_sub(long i, atomic_long_t *v) -{ - raw_atomic64_sub(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_return(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_return_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_release(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_return_release(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_relaxed(long i, atomic_long_t 
*v) -{ - return raw_atomic64_sub_return_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_sub(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_sub_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_sub_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_sub_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_inc(atomic_long_t *v) -{ - raw_atomic64_inc(v); -} - -static __always_inline long -raw_atomic_long_inc_return(atomic_long_t *v) -{ - return raw_atomic64_inc_return(v); -} - -static __always_inline long -raw_atomic_long_inc_return_acquire(atomic_long_t *v) -{ - return raw_atomic64_inc_return_acquire(v); -} - -static __always_inline long -raw_atomic_long_inc_return_release(atomic_long_t *v) -{ - return raw_atomic64_inc_return_release(v); -} - -static __always_inline long -raw_atomic_long_inc_return_relaxed(atomic_long_t *v) -{ - return raw_atomic64_inc_return_relaxed(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc(atomic_long_t *v) -{ - return raw_atomic64_fetch_inc(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) -{ - return raw_atomic64_fetch_inc_acquire(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_release(atomic_long_t *v) -{ - return raw_atomic64_fetch_inc_release(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) -{ - return raw_atomic64_fetch_inc_relaxed(v); -} - -static __always_inline void -raw_atomic_long_dec(atomic_long_t *v) -{ - raw_atomic64_dec(v); -} - -static __always_inline long -raw_atomic_long_dec_return(atomic_long_t *v) -{ - return raw_atomic64_dec_return(v); -} - -static __always_inline long -raw_atomic_long_dec_return_acquire(atomic_long_t *v) -{ - return raw_atomic64_dec_return_acquire(v); -} - -static __always_inline long -raw_atomic_long_dec_return_release(atomic_long_t *v) -{ - return raw_atomic64_dec_return_release(v); -} - -static __always_inline long -raw_atomic_long_dec_return_relaxed(atomic_long_t *v) -{ - return raw_atomic64_dec_return_relaxed(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec(atomic_long_t *v) -{ - return raw_atomic64_fetch_dec(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) -{ - return raw_atomic64_fetch_dec_acquire(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_release(atomic_long_t *v) -{ - return raw_atomic64_fetch_dec_release(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) -{ - return raw_atomic64_fetch_dec_relaxed(v); -} - -static __always_inline void -raw_atomic_long_and(long i, atomic_long_t *v) -{ - raw_atomic64_and(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_and(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_and_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_and_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) -{ - return 
raw_atomic64_fetch_and_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_andnot(long i, atomic_long_t *v) -{ - raw_atomic64_andnot(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_andnot(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_andnot_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_andnot_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_andnot_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_or(long i, atomic_long_t *v) -{ - raw_atomic64_or(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_or(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_or_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_or_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_or_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_xor(long i, atomic_long_t *v) -{ - raw_atomic64_xor(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_xor(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_xor_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_xor_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_xor_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_xchg(atomic_long_t *v, long i) -{ - return raw_atomic64_xchg(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) -{ - return raw_atomic64_xchg_acquire(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_release(atomic_long_t *v, long i) -{ - return raw_atomic64_xchg_release(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) -{ - return raw_atomic64_xchg_relaxed(v, i); -} - -static __always_inline long -raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) -{ - return raw_atomic64_cmpxchg(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) -{ - return raw_atomic64_cmpxchg_acquire(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) -{ - return raw_atomic64_cmpxchg_release(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) -{ - return raw_atomic64_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) -{ - return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) -{ - return raw_atomic64_try_cmpxchg_acquire(v, (s64 
*)old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) -{ - return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) -{ - return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); -} - -static __always_inline bool -raw_atomic_long_sub_and_test(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_and_test(i, v); -} - -static __always_inline bool -raw_atomic_long_dec_and_test(atomic_long_t *v) -{ - return raw_atomic64_dec_and_test(v); -} - -static __always_inline bool -raw_atomic_long_inc_and_test(atomic_long_t *v) -{ - return raw_atomic64_inc_and_test(v); -} - -static __always_inline bool -raw_atomic_long_add_negative(long i, atomic_long_t *v) -{ - return raw_atomic64_add_negative(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_add_negative_acquire(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_release(long i, atomic_long_t *v) -{ - return raw_atomic64_add_negative_release(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_add_negative_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) -{ - return raw_atomic64_fetch_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) -{ - return raw_atomic64_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_long_inc_not_zero(atomic_long_t *v) -{ - return raw_atomic64_inc_not_zero(v); -} - -static __always_inline bool -raw_atomic_long_inc_unless_negative(atomic_long_t *v) -{ - return raw_atomic64_inc_unless_negative(v); -} - -static __always_inline bool -raw_atomic_long_dec_unless_positive(atomic_long_t *v) -{ - return raw_atomic64_dec_unless_positive(v); -} - -static __always_inline long -raw_atomic_long_dec_if_positive(atomic_long_t *v) -{ - return raw_atomic64_dec_if_positive(v); -} - -#else /* CONFIG_64BIT */ - static __always_inline long raw_atomic_long_read(const atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_read(v); +#else return raw_atomic_read(v); +#endif } static __always_inline long raw_atomic_long_read_acquire(const atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_read_acquire(v); +#else return raw_atomic_read_acquire(v); +#endif } static __always_inline void raw_atomic_long_set(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + raw_atomic64_set(v, i); +#else raw_atomic_set(v, i); +#endif } static __always_inline void raw_atomic_long_set_release(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + raw_atomic64_set_release(v, i); +#else raw_atomic_set_release(v, i); +#endif } static __always_inline void raw_atomic_long_add(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_add(i, v); +#else raw_atomic_add(i, v); +#endif } static __always_inline long raw_atomic_long_add_return(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_return(i, v); +#else return raw_atomic_add_return(i, v); +#endif } static __always_inline long raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_return_acquire(i, v); +#else return raw_atomic_add_return_acquire(i, v); +#endif } static __always_inline long 
raw_atomic_long_add_return_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_return_release(i, v); +#else return raw_atomic_add_return_release(i, v); +#endif } static __always_inline long raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_return_relaxed(i, v); +#else return raw_atomic_add_return_relaxed(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add(i, v); +#else return raw_atomic_fetch_add(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add_acquire(i, v); +#else return raw_atomic_fetch_add_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add_release(i, v); +#else return raw_atomic_fetch_add_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add_relaxed(i, v); +#else return raw_atomic_fetch_add_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_sub(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_sub(i, v); +#else raw_atomic_sub(i, v); +#endif } static __always_inline long raw_atomic_long_sub_return(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_return(i, v); +#else return raw_atomic_sub_return(i, v); +#endif } static __always_inline long raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_return_acquire(i, v); +#else return raw_atomic_sub_return_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_return_release(i, v); +#else return raw_atomic_sub_return_release(i, v); +#endif } static __always_inline long raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_return_relaxed(i, v); +#else return raw_atomic_sub_return_relaxed(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_sub(i, v); +#else return raw_atomic_fetch_sub(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_sub_acquire(i, v); +#else return raw_atomic_fetch_sub_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_sub_release(i, v); +#else return raw_atomic_fetch_sub_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_sub_relaxed(i, v); +#else return raw_atomic_fetch_sub_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_inc(atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_inc(v); +#else raw_atomic_inc(v); +#endif } static __always_inline long raw_atomic_long_inc_return(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_return(v); +#else return raw_atomic_inc_return(v); +#endif } static __always_inline long raw_atomic_long_inc_return_acquire(atomic_long_t *v) { +#ifdef 
CONFIG_64BIT + return raw_atomic64_inc_return_acquire(v); +#else return raw_atomic_inc_return_acquire(v); +#endif } static __always_inline long raw_atomic_long_inc_return_release(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_return_release(v); +#else return raw_atomic_inc_return_release(v); +#endif } static __always_inline long raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_return_relaxed(v); +#else return raw_atomic_inc_return_relaxed(v); +#endif } static __always_inline long raw_atomic_long_fetch_inc(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_inc(v); +#else return raw_atomic_fetch_inc(v); +#endif } static __always_inline long raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_inc_acquire(v); +#else return raw_atomic_fetch_inc_acquire(v); +#endif } static __always_inline long raw_atomic_long_fetch_inc_release(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_inc_release(v); +#else return raw_atomic_fetch_inc_release(v); +#endif } static __always_inline long raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_inc_relaxed(v); +#else return raw_atomic_fetch_inc_relaxed(v); +#endif } static __always_inline void raw_atomic_long_dec(atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_dec(v); +#else raw_atomic_dec(v); +#endif } static __always_inline long raw_atomic_long_dec_return(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_return(v); +#else return raw_atomic_dec_return(v); +#endif } static __always_inline long raw_atomic_long_dec_return_acquire(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_return_acquire(v); +#else return raw_atomic_dec_return_acquire(v); +#endif } static __always_inline long raw_atomic_long_dec_return_release(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_return_release(v); +#else return raw_atomic_dec_return_release(v); +#endif } static __always_inline long raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_return_relaxed(v); +#else return raw_atomic_dec_return_relaxed(v); +#endif } static __always_inline long raw_atomic_long_fetch_dec(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_dec(v); +#else return raw_atomic_fetch_dec(v); +#endif } static __always_inline long raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_dec_acquire(v); +#else return raw_atomic_fetch_dec_acquire(v); +#endif } static __always_inline long raw_atomic_long_fetch_dec_release(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_dec_release(v); +#else return raw_atomic_fetch_dec_release(v); +#endif } static __always_inline long raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_dec_relaxed(v); +#else return raw_atomic_fetch_dec_relaxed(v); +#endif } static __always_inline void raw_atomic_long_and(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_and(i, v); +#else raw_atomic_and(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_and(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_and(i, v); +#else return raw_atomic_fetch_and(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return 
raw_atomic64_fetch_and_acquire(i, v); +#else return raw_atomic_fetch_and_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_and_release(i, v); +#else return raw_atomic_fetch_and_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_and_relaxed(i, v); +#else return raw_atomic_fetch_and_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_andnot(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_andnot(i, v); +#else raw_atomic_andnot(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_andnot(i, v); +#else return raw_atomic_fetch_andnot(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_andnot_acquire(i, v); +#else return raw_atomic_fetch_andnot_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_andnot_release(i, v); +#else return raw_atomic_fetch_andnot_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_andnot_relaxed(i, v); +#else return raw_atomic_fetch_andnot_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_or(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_or(i, v); +#else raw_atomic_or(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_or(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_or(i, v); +#else return raw_atomic_fetch_or(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_or_acquire(i, v); +#else return raw_atomic_fetch_or_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_or_release(i, v); +#else return raw_atomic_fetch_or_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_or_relaxed(i, v); +#else return raw_atomic_fetch_or_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_xor(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_xor(i, v); +#else raw_atomic_xor(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_xor(i, v); +#else return raw_atomic_fetch_xor(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_xor_acquire(i, v); +#else return raw_atomic_fetch_xor_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_xor_release(i, v); +#else return raw_atomic_fetch_xor_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_xor_relaxed(i, v); +#else return 
raw_atomic_fetch_xor_relaxed(i, v); +#endif } static __always_inline long raw_atomic_long_xchg(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + return raw_atomic64_xchg(v, i); +#else return raw_atomic_xchg(v, i); +#endif } static __always_inline long raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + return raw_atomic64_xchg_acquire(v, i); +#else return raw_atomic_xchg_acquire(v, i); +#endif } static __always_inline long raw_atomic_long_xchg_release(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + return raw_atomic64_xchg_release(v, i); +#else return raw_atomic_xchg_release(v, i); +#endif } static __always_inline long raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + return raw_atomic64_xchg_relaxed(v, i); +#else return raw_atomic_xchg_relaxed(v, i); +#endif } static __always_inline long raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_cmpxchg(v, old, new); +#else return raw_atomic_cmpxchg(v, old, new); +#endif } static __always_inline long raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_cmpxchg_acquire(v, old, new); +#else return raw_atomic_cmpxchg_acquire(v, old, new); +#endif } static __always_inline long raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_cmpxchg_release(v, old, new); +#else return raw_atomic_cmpxchg_release(v, old, new); +#endif } static __always_inline long raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_cmpxchg_relaxed(v, old, new); +#else return raw_atomic_cmpxchg_relaxed(v, old, new); +#endif } static __always_inline bool raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); +#else return raw_atomic_try_cmpxchg(v, (int *)old, new); +#endif } static __always_inline bool raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); +#else return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new); +#endif } static __always_inline bool raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); +#else return raw_atomic_try_cmpxchg_release(v, (int *)old, new); +#endif } static __always_inline bool raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); +#else return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new); +#endif } static __always_inline bool raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_and_test(i, v); +#else return raw_atomic_sub_and_test(i, v); +#endif } static __always_inline bool raw_atomic_long_dec_and_test(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_and_test(v); +#else return raw_atomic_dec_and_test(v); +#endif } static __always_inline bool raw_atomic_long_inc_and_test(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_and_test(v); +#else return raw_atomic_inc_and_test(v); +#endif } static __always_inline bool raw_atomic_long_add_negative(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_negative(i, v); +#else return 
raw_atomic_add_negative(i, v); +#endif } static __always_inline bool raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_negative_acquire(i, v); +#else return raw_atomic_add_negative_acquire(i, v); +#endif } static __always_inline bool raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_negative_release(i, v); +#else return raw_atomic_add_negative_release(i, v); +#endif } static __always_inline bool raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_negative_relaxed(i, v); +#else return raw_atomic_add_negative_relaxed(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add_unless(v, a, u); +#else return raw_atomic_fetch_add_unless(v, a, u); +#endif } static __always_inline bool raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_unless(v, a, u); +#else return raw_atomic_add_unless(v, a, u); +#endif } static __always_inline bool raw_atomic_long_inc_not_zero(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_not_zero(v); +#else return raw_atomic_inc_not_zero(v); +#endif } static __always_inline bool raw_atomic_long_inc_unless_negative(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_unless_negative(v); +#else return raw_atomic_inc_unless_negative(v); +#endif } static __always_inline bool raw_atomic_long_dec_unless_positive(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_unless_positive(v); +#else return raw_atomic_dec_unless_positive(v); +#endif } static __always_inline long raw_atomic_long_dec_if_positive(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_if_positive(v); +#else return raw_atomic_dec_if_positive(v); +#endif } -#endif /* CONFIG_64BIT */ #endif /* _LINUX_ATOMIC_LONG_H */ -// 108784846d3bbbb201b8dabe621c5dc30b216206 +// ad09f849db0db5b30c82e497eeb9056a394c5f22 diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh index 13832171f7219..af27a71b37ef1 100755 --- a/scripts/atomic/gen-atomic-long.sh +++ b/scripts/atomic/gen-atomic-long.sh @@ -32,7 +32,7 @@ gen_args_cast() done } -#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...) +#gen_proto_order_variant(meta, pfx, name, sfx, order, arg...) 
gen_proto_order_variant() { local meta="$1"; shift @@ -40,21 +40,24 @@ gen_proto_order_variant() local name="$1"; shift local sfx="$1"; shift local order="$1"; shift - local atomic="$1"; shift - local int="$1"; shift local atomicname="${pfx}${name}${sfx}${order}" local ret="$(gen_ret_type "${meta}" "long")" local params="$(gen_params "long" "atomic_long" "$@")" - local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")" + local argscast_32="$(gen_args_cast "int" "atomic" "$@")" + local argscast_64="$(gen_args_cast "s64" "atomic64" "$@")" local retstmt="$(gen_ret_stmt "${meta}")" cat <<EOF
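The point of the single-prototype form generated above is that callers always see one ordinary C function, with the 32-bit/64-bit dispatch resolved inside it. A minimal usage sketch (a hypothetical caller, not part of the patch):

| /* Hypothetical example: builds unchanged on 32-bit and 64-bit kernels. */
| static __always_inline long
| counter_add_sample(atomic_long_t *ctr, long delta)
| {
| 	/* Fully-ordered RMW; resolves to raw_atomic64_add_return() or
| 	 * raw_atomic_add_return() depending on CONFIG_64BIT. */
| 	return raw_atomic_long_add_return(delta, ctr);
| }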
From patchwork Mon May 22 12:24:26 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 97424
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Subject: [PATCH 23/26] locking/atomic: scripts: simplify raw_atomic*() definitions
Date: Mon, 22 May 2023 13:24:26 +0100
Message-Id: <20230522122429.1915021-24-mark.rutland@arm.com>
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>

Currently each ordering variant has several potential definitions, with a mixture of preprocessor and C definitions, including several copies of its C prototype, e.g.
| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Make this a bit simpler by defining the C prototype once, and writing the various potential definitions as plain C code guarded by ifdeffery. For example, the above becomes:

| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_andnot_acquire)
| 	return arch_atomic_fetch_andnot_acquire(i, v);
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| #elif defined(arch_atomic_fetch_andnot)
| 	return arch_atomic_fetch_andnot(i, v);
| #else
| 	return raw_atomic_fetch_and_acquire(~i, v);
| #endif
| }

Which is far easier to read. As we now always have a single copy of the C prototype wrapping all the potential definitions, we now have an obvious single location for kerneldoc comments.

At the same time, the fallbacks for raw_atomic*_xchg() are made to use 'new' rather than 'i' as the name of the new value. This is what the existing fallback template used, and is more consistent with the raw_atomic_{try_,}cmpxchg() fallbacks.

There should be no functional change as a result of this patch.
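As an abbreviated sketch of the resulting shape for xchg (following the same pattern as above; the final fallback branch is elided here, and the branch set is illustrative rather than the exact generated text), note the 'new' parameter name:

| static __always_inline int
| raw_atomic_xchg(atomic_t *v, int new)
| {
| #if defined(arch_atomic_xchg)
| 	return arch_atomic_xchg(v, new);
| #elif defined(arch_atomic_xchg_relaxed)
| 	/* Fully-ordered variant composed from the relaxed op plus fences. */
| 	int ret;
| 	__atomic_pre_full_fence();
| 	ret = arch_atomic_xchg_relaxed(v, new);
| 	__atomic_post_full_fence();
| 	return ret;
| #else
| 	...
| #endif
| }

Signed-off-by: Mark Rutland
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Paul E.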
McKenney --- include/linux/atomic/atomic-arch-fallback.h | 1790 +++++++++--------- include/linux/atomic/atomic-instrumented.h | 50 +- include/linux/atomic/atomic-long.h | 26 +- scripts/atomic/atomics.tbl | 2 +- scripts/atomic/fallbacks/acquire | 4 - scripts/atomic/fallbacks/add_negative | 4 - scripts/atomic/fallbacks/add_unless | 4 - scripts/atomic/fallbacks/andnot | 4 - scripts/atomic/fallbacks/cmpxchg | 4 - scripts/atomic/fallbacks/dec | 4 - scripts/atomic/fallbacks/dec_and_test | 4 - scripts/atomic/fallbacks/dec_if_positive | 4 - scripts/atomic/fallbacks/dec_unless_positive | 4 - scripts/atomic/fallbacks/fence | 4 - scripts/atomic/fallbacks/fetch_add_unless | 4 - scripts/atomic/fallbacks/inc | 4 - scripts/atomic/fallbacks/inc_and_test | 4 - scripts/atomic/fallbacks/inc_not_zero | 4 - scripts/atomic/fallbacks/inc_unless_negative | 4 - scripts/atomic/fallbacks/read_acquire | 4 - scripts/atomic/fallbacks/release | 4 - scripts/atomic/fallbacks/set_release | 4 - scripts/atomic/fallbacks/sub_and_test | 4 - scripts/atomic/fallbacks/try_cmpxchg | 4 - scripts/atomic/fallbacks/xchg | 4 - scripts/atomic/gen-atomic-fallback.sh | 26 +- 26 files changed, 901 insertions(+), 1077 deletions(-) diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h index 99bc1a871dc12..470c2890ab8d6 100644 --- a/include/linux/atomic/atomic-arch-fallback.h +++ b/include/linux/atomic/atomic-arch-fallback.h @@ -428,16 +428,20 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void); #define raw_sync_cmpxchg arch_sync_cmpxchg -#define raw_atomic_read arch_atomic_read +static __always_inline int +raw_atomic_read(const atomic_t *v) +{ + return arch_atomic_read(v); +} -#if defined(arch_atomic_read_acquire) -#define raw_atomic_read_acquire arch_atomic_read_acquire -#elif defined(arch_atomic_read) -#define raw_atomic_read_acquire arch_atomic_read -#else static __always_inline int raw_atomic_read_acquire(const atomic_t *v) { +#if defined(arch_atomic_read_acquire) + return arch_atomic_read_acquire(v); +#elif defined(arch_atomic_read) + return arch_atomic_read(v); +#else int ret; if (__native_word(atomic_t)) { @@ -448,1144 +452,1088 @@ raw_atomic_read_acquire(const atomic_t *v) } return ret; -} #endif +} -#define raw_atomic_set arch_atomic_set +static __always_inline void +raw_atomic_set(atomic_t *v, int i) +{ + arch_atomic_set(v, i); +} -#if defined(arch_atomic_set_release) -#define raw_atomic_set_release arch_atomic_set_release -#elif defined(arch_atomic_set) -#define raw_atomic_set_release arch_atomic_set -#else static __always_inline void raw_atomic_set_release(atomic_t *v, int i) { +#if defined(arch_atomic_set_release) + arch_atomic_set_release(v, i); +#elif defined(arch_atomic_set) + arch_atomic_set(v, i); +#else if (__native_word(atomic_t)) { smp_store_release(&(v)->counter, i); } else { __atomic_release_fence(); raw_atomic_set(v, i); } -} #endif +} -#define raw_atomic_add arch_atomic_add +static __always_inline void +raw_atomic_add(int i, atomic_t *v) +{ + arch_atomic_add(i, v); +} -#if defined(arch_atomic_add_return) -#define raw_atomic_add_return arch_atomic_add_return -#elif defined(arch_atomic_add_return_relaxed) static __always_inline int raw_atomic_add_return(int i, atomic_t *v) { +#if defined(arch_atomic_add_return) + return arch_atomic_add_return(i, v); +#elif defined(arch_atomic_add_return_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_add_return_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else #error "Unable to define 
raw_atomic_add_return" #endif +} -#if defined(arch_atomic_add_return_acquire) -#define raw_atomic_add_return_acquire arch_atomic_add_return_acquire -#elif defined(arch_atomic_add_return_relaxed) static __always_inline int raw_atomic_add_return_acquire(int i, atomic_t *v) { +#if defined(arch_atomic_add_return_acquire) + return arch_atomic_add_return_acquire(i, v); +#elif defined(arch_atomic_add_return_relaxed) int ret = arch_atomic_add_return_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_add_return) -#define raw_atomic_add_return_acquire arch_atomic_add_return + return arch_atomic_add_return(i, v); #else #error "Unable to define raw_atomic_add_return_acquire" #endif +} -#if defined(arch_atomic_add_return_release) -#define raw_atomic_add_return_release arch_atomic_add_return_release -#elif defined(arch_atomic_add_return_relaxed) static __always_inline int raw_atomic_add_return_release(int i, atomic_t *v) { +#if defined(arch_atomic_add_return_release) + return arch_atomic_add_return_release(i, v); +#elif defined(arch_atomic_add_return_relaxed) __atomic_release_fence(); return arch_atomic_add_return_relaxed(i, v); -} #elif defined(arch_atomic_add_return) -#define raw_atomic_add_return_release arch_atomic_add_return + return arch_atomic_add_return(i, v); #else #error "Unable to define raw_atomic_add_return_release" #endif +} +static __always_inline int +raw_atomic_add_return_relaxed(int i, atomic_t *v) +{ #if defined(arch_atomic_add_return_relaxed) -#define raw_atomic_add_return_relaxed arch_atomic_add_return_relaxed + return arch_atomic_add_return_relaxed(i, v); #elif defined(arch_atomic_add_return) -#define raw_atomic_add_return_relaxed arch_atomic_add_return + return arch_atomic_add_return(i, v); #else #error "Unable to define raw_atomic_add_return_relaxed" #endif +} -#if defined(arch_atomic_fetch_add) -#define raw_atomic_fetch_add arch_atomic_fetch_add -#elif defined(arch_atomic_fetch_add_relaxed) static __always_inline int raw_atomic_fetch_add(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_add) + return arch_atomic_fetch_add(i, v); +#elif defined(arch_atomic_fetch_add_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_fetch_add_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else #error "Unable to define raw_atomic_fetch_add" #endif +} -#if defined(arch_atomic_fetch_add_acquire) -#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire -#elif defined(arch_atomic_fetch_add_relaxed) static __always_inline int raw_atomic_fetch_add_acquire(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_add_acquire) + return arch_atomic_fetch_add_acquire(i, v); +#elif defined(arch_atomic_fetch_add_relaxed) int ret = arch_atomic_fetch_add_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_fetch_add) -#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add + return arch_atomic_fetch_add(i, v); #else #error "Unable to define raw_atomic_fetch_add_acquire" #endif +} -#if defined(arch_atomic_fetch_add_release) -#define raw_atomic_fetch_add_release arch_atomic_fetch_add_release -#elif defined(arch_atomic_fetch_add_relaxed) static __always_inline int raw_atomic_fetch_add_release(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_add_release) + return arch_atomic_fetch_add_release(i, v); +#elif defined(arch_atomic_fetch_add_relaxed) __atomic_release_fence(); return arch_atomic_fetch_add_relaxed(i, v); -} #elif defined(arch_atomic_fetch_add) -#define raw_atomic_fetch_add_release arch_atomic_fetch_add + 
return arch_atomic_fetch_add(i, v); #else #error "Unable to define raw_atomic_fetch_add_release" #endif +} +static __always_inline int +raw_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ #if defined(arch_atomic_fetch_add_relaxed) -#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed + return arch_atomic_fetch_add_relaxed(i, v); #elif defined(arch_atomic_fetch_add) -#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add + return arch_atomic_fetch_add(i, v); #else #error "Unable to define raw_atomic_fetch_add_relaxed" #endif +} -#define raw_atomic_sub arch_atomic_sub +static __always_inline void +raw_atomic_sub(int i, atomic_t *v) +{ + arch_atomic_sub(i, v); +} -#if defined(arch_atomic_sub_return) -#define raw_atomic_sub_return arch_atomic_sub_return -#elif defined(arch_atomic_sub_return_relaxed) static __always_inline int raw_atomic_sub_return(int i, atomic_t *v) { +#if defined(arch_atomic_sub_return) + return arch_atomic_sub_return(i, v); +#elif defined(arch_atomic_sub_return_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_sub_return_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else #error "Unable to define raw_atomic_sub_return" #endif +} -#if defined(arch_atomic_sub_return_acquire) -#define raw_atomic_sub_return_acquire arch_atomic_sub_return_acquire -#elif defined(arch_atomic_sub_return_relaxed) static __always_inline int raw_atomic_sub_return_acquire(int i, atomic_t *v) { +#if defined(arch_atomic_sub_return_acquire) + return arch_atomic_sub_return_acquire(i, v); +#elif defined(arch_atomic_sub_return_relaxed) int ret = arch_atomic_sub_return_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_sub_return) -#define raw_atomic_sub_return_acquire arch_atomic_sub_return + return arch_atomic_sub_return(i, v); #else #error "Unable to define raw_atomic_sub_return_acquire" #endif +} -#if defined(arch_atomic_sub_return_release) -#define raw_atomic_sub_return_release arch_atomic_sub_return_release -#elif defined(arch_atomic_sub_return_relaxed) static __always_inline int raw_atomic_sub_return_release(int i, atomic_t *v) { +#if defined(arch_atomic_sub_return_release) + return arch_atomic_sub_return_release(i, v); +#elif defined(arch_atomic_sub_return_relaxed) __atomic_release_fence(); return arch_atomic_sub_return_relaxed(i, v); -} #elif defined(arch_atomic_sub_return) -#define raw_atomic_sub_return_release arch_atomic_sub_return + return arch_atomic_sub_return(i, v); #else #error "Unable to define raw_atomic_sub_return_release" #endif +} +static __always_inline int +raw_atomic_sub_return_relaxed(int i, atomic_t *v) +{ #if defined(arch_atomic_sub_return_relaxed) -#define raw_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed + return arch_atomic_sub_return_relaxed(i, v); #elif defined(arch_atomic_sub_return) -#define raw_atomic_sub_return_relaxed arch_atomic_sub_return + return arch_atomic_sub_return(i, v); #else #error "Unable to define raw_atomic_sub_return_relaxed" #endif +} -#if defined(arch_atomic_fetch_sub) -#define raw_atomic_fetch_sub arch_atomic_fetch_sub -#elif defined(arch_atomic_fetch_sub_relaxed) static __always_inline int raw_atomic_fetch_sub(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_sub) + return arch_atomic_fetch_sub(i, v); +#elif defined(arch_atomic_fetch_sub_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_fetch_sub_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else #error "Unable to define raw_atomic_fetch_sub" #endif +} -#if 
defined(arch_atomic_fetch_sub_acquire) -#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire -#elif defined(arch_atomic_fetch_sub_relaxed) static __always_inline int raw_atomic_fetch_sub_acquire(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_sub_acquire) + return arch_atomic_fetch_sub_acquire(i, v); +#elif defined(arch_atomic_fetch_sub_relaxed) int ret = arch_atomic_fetch_sub_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_fetch_sub) -#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub + return arch_atomic_fetch_sub(i, v); #else #error "Unable to define raw_atomic_fetch_sub_acquire" #endif +} -#if defined(arch_atomic_fetch_sub_release) -#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub_release -#elif defined(arch_atomic_fetch_sub_relaxed) static __always_inline int raw_atomic_fetch_sub_release(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_sub_release) + return arch_atomic_fetch_sub_release(i, v); +#elif defined(arch_atomic_fetch_sub_relaxed) __atomic_release_fence(); return arch_atomic_fetch_sub_relaxed(i, v); -} #elif defined(arch_atomic_fetch_sub) -#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub + return arch_atomic_fetch_sub(i, v); #else #error "Unable to define raw_atomic_fetch_sub_release" #endif +} +static __always_inline int +raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ #if defined(arch_atomic_fetch_sub_relaxed) -#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed + return arch_atomic_fetch_sub_relaxed(i, v); #elif defined(arch_atomic_fetch_sub) -#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub + return arch_atomic_fetch_sub(i, v); #else #error "Unable to define raw_atomic_fetch_sub_relaxed" #endif +} -#if defined(arch_atomic_inc) -#define raw_atomic_inc arch_atomic_inc -#else static __always_inline void raw_atomic_inc(atomic_t *v) { +#if defined(arch_atomic_inc) + arch_atomic_inc(v); +#else raw_atomic_add(1, v); -} #endif +} -#if defined(arch_atomic_inc_return) -#define raw_atomic_inc_return arch_atomic_inc_return -#elif defined(arch_atomic_inc_return_relaxed) static __always_inline int raw_atomic_inc_return(atomic_t *v) { +#if defined(arch_atomic_inc_return) + return arch_atomic_inc_return(v); +#elif defined(arch_atomic_inc_return_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_inc_return_relaxed(v); __atomic_post_full_fence(); return ret; -} #else -static __always_inline int -raw_atomic_inc_return(atomic_t *v) -{ return raw_atomic_add_return(1, v); -} #endif +} -#if defined(arch_atomic_inc_return_acquire) -#define raw_atomic_inc_return_acquire arch_atomic_inc_return_acquire -#elif defined(arch_atomic_inc_return_relaxed) static __always_inline int raw_atomic_inc_return_acquire(atomic_t *v) { +#if defined(arch_atomic_inc_return_acquire) + return arch_atomic_inc_return_acquire(v); +#elif defined(arch_atomic_inc_return_relaxed) int ret = arch_atomic_inc_return_relaxed(v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_inc_return) -#define raw_atomic_inc_return_acquire arch_atomic_inc_return + return arch_atomic_inc_return(v); #else -static __always_inline int -raw_atomic_inc_return_acquire(atomic_t *v) -{ return raw_atomic_add_return_acquire(1, v); -} #endif +} -#if defined(arch_atomic_inc_return_release) -#define raw_atomic_inc_return_release arch_atomic_inc_return_release -#elif defined(arch_atomic_inc_return_relaxed) static __always_inline int raw_atomic_inc_return_release(atomic_t *v) { +#if 
defined(arch_atomic_inc_return_release)
+	return arch_atomic_inc_return_release(v);
+#elif defined(arch_atomic_inc_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_inc_return_relaxed(v);
-}
 #elif defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return_release arch_atomic_inc_return
+	return arch_atomic_inc_return(v);
 #else
-static __always_inline int
-raw_atomic_inc_return_release(atomic_t *v)
-{
 	return raw_atomic_add_return_release(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_inc_return_relaxed)
-#define raw_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
-#elif defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return_relaxed arch_atomic_inc_return
-#else
 static __always_inline int
 raw_atomic_inc_return_relaxed(atomic_t *v)
 {
+#if defined(arch_atomic_inc_return_relaxed)
+	return arch_atomic_inc_return_relaxed(v);
+#elif defined(arch_atomic_inc_return)
+	return arch_atomic_inc_return(v);
+#else
 	return raw_atomic_add_return_relaxed(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc arch_atomic_fetch_inc
-#elif defined(arch_atomic_fetch_inc_relaxed)
 static __always_inline int
 raw_atomic_fetch_inc(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_inc)
+	return arch_atomic_fetch_inc(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_inc_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_fetch_inc(atomic_t *v)
-{
 	return raw_atomic_fetch_add(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_inc_acquire)
-#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
-#elif defined(arch_atomic_fetch_inc_relaxed)
 static __always_inline int
 raw_atomic_fetch_inc_acquire(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_inc_acquire)
+	return arch_atomic_fetch_inc_acquire(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
 	int ret = arch_atomic_fetch_inc_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc
+	return arch_atomic_fetch_inc(v);
 #else
-static __always_inline int
-raw_atomic_fetch_inc_acquire(atomic_t *v)
-{
 	return raw_atomic_fetch_add_acquire(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_inc_release)
-#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc_release
-#elif defined(arch_atomic_fetch_inc_relaxed)
 static __always_inline int
 raw_atomic_fetch_inc_release(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_inc_release)
+	return arch_atomic_fetch_inc_release(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_inc_relaxed(v);
-}
 #elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc
+	return arch_atomic_fetch_inc(v);
 #else
-static __always_inline int
-raw_atomic_fetch_inc_release(atomic_t *v)
-{
 	return raw_atomic_fetch_add_release(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_inc_relaxed)
-#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
-#elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
-#else
 static __always_inline int
 raw_atomic_fetch_inc_relaxed(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_inc_relaxed)
+	return arch_atomic_fetch_inc_relaxed(v);
+#elif defined(arch_atomic_fetch_inc)
+	return arch_atomic_fetch_inc(v);
+#else
 	return raw_atomic_fetch_add_relaxed(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_dec)
-#define raw_atomic_dec arch_atomic_dec
-#else
 static __always_inline void
 raw_atomic_dec(atomic_t *v)
 {
+#if defined(arch_atomic_dec)
+	arch_atomic_dec(v);
+#else
 	raw_atomic_sub(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return arch_atomic_dec_return
-#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
 raw_atomic_dec_return(atomic_t *v)
 {
+#if defined(arch_atomic_dec_return)
+	return arch_atomic_dec_return(v);
+#elif defined(arch_atomic_dec_return_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_dec_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_dec_return(atomic_t *v)
-{
 	return raw_atomic_sub_return(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_dec_return_acquire)
-#define raw_atomic_dec_return_acquire arch_atomic_dec_return_acquire
-#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
 raw_atomic_dec_return_acquire(atomic_t *v)
 {
+#if defined(arch_atomic_dec_return_acquire)
+	return arch_atomic_dec_return_acquire(v);
+#elif defined(arch_atomic_dec_return_relaxed)
 	int ret = arch_atomic_dec_return_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_acquire arch_atomic_dec_return
+	return arch_atomic_dec_return(v);
 #else
-static __always_inline int
-raw_atomic_dec_return_acquire(atomic_t *v)
-{
 	return raw_atomic_sub_return_acquire(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_dec_return_release)
-#define raw_atomic_dec_return_release arch_atomic_dec_return_release
-#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
 raw_atomic_dec_return_release(atomic_t *v)
 {
+#if defined(arch_atomic_dec_return_release)
+	return arch_atomic_dec_return_release(v);
+#elif defined(arch_atomic_dec_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_dec_return_relaxed(v);
-}
 #elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_release arch_atomic_dec_return
+	return arch_atomic_dec_return(v);
 #else
-static __always_inline int
-raw_atomic_dec_return_release(atomic_t *v)
-{
 	return raw_atomic_sub_return_release(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_dec_return_relaxed)
-#define raw_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
-#elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_relaxed arch_atomic_dec_return
-#else
 static __always_inline int
 raw_atomic_dec_return_relaxed(atomic_t *v)
 {
+#if defined(arch_atomic_dec_return_relaxed)
+	return arch_atomic_dec_return_relaxed(v);
+#elif defined(arch_atomic_dec_return)
+	return arch_atomic_dec_return(v);
+#else
 	return raw_atomic_sub_return_relaxed(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec arch_atomic_fetch_dec
-#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
 raw_atomic_fetch_dec(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_dec)
+	return arch_atomic_fetch_dec(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_dec_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_fetch_dec(atomic_t *v)
-{
 	return raw_atomic_fetch_sub(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_dec_acquire)
-#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
-#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
 raw_atomic_fetch_dec_acquire(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_dec_acquire)
+	return arch_atomic_fetch_dec_acquire(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
 	int ret = arch_atomic_fetch_dec_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec
+	return arch_atomic_fetch_dec(v);
 #else
-static __always_inline int
-raw_atomic_fetch_dec_acquire(atomic_t *v)
-{
 	return raw_atomic_fetch_sub_acquire(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_dec_release)
-#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec_release
-#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
 raw_atomic_fetch_dec_release(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_dec_release)
+	return arch_atomic_fetch_dec_release(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_dec_relaxed(v);
-}
 #elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec
+	return arch_atomic_fetch_dec(v);
 #else
-static __always_inline int
-raw_atomic_fetch_dec_release(atomic_t *v)
-{
 	return raw_atomic_fetch_sub_release(1, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_dec_relaxed)
-#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
-#elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
-#else
 static __always_inline int
 raw_atomic_fetch_dec_relaxed(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_dec_relaxed)
+	return arch_atomic_fetch_dec_relaxed(v);
+#elif defined(arch_atomic_fetch_dec)
+	return arch_atomic_fetch_dec(v);
+#else
 	return raw_atomic_fetch_sub_relaxed(1, v);
-}
 #endif
+}

-#define raw_atomic_and arch_atomic_and
+static __always_inline void
+raw_atomic_and(int i, atomic_t *v)
+{
+	arch_atomic_and(i, v);
+}

-#if defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and arch_atomic_fetch_and
-#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
 raw_atomic_fetch_and(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_and)
+	return arch_atomic_fetch_and(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_and_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic_fetch_and"
 #endif
+}

-#if defined(arch_atomic_fetch_and_acquire)
-#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
-#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
 raw_atomic_fetch_and_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_and_acquire)
+	return arch_atomic_fetch_and_acquire(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
 	int ret = arch_atomic_fetch_and_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and
+	return arch_atomic_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_and_acquire"
 #endif
+}

-#if defined(arch_atomic_fetch_and_release)
-#define raw_atomic_fetch_and_release arch_atomic_fetch_and_release
-#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
 raw_atomic_fetch_and_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_and_release)
+	return arch_atomic_fetch_and_release(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_and_relaxed(i, v);
-}
 #elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_release arch_atomic_fetch_and
+	return arch_atomic_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_and_release"
 #endif
+}

+static __always_inline int
+raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
 #if defined(arch_atomic_fetch_and_relaxed)
-#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+	return arch_atomic_fetch_and_relaxed(i, v);
 #elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and
+	return arch_atomic_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_and_relaxed"
 #endif
+}

-#if defined(arch_atomic_andnot)
-#define raw_atomic_andnot arch_atomic_andnot
-#else
 static __always_inline void
 raw_atomic_andnot(int i, atomic_t *v)
 {
+#if defined(arch_atomic_andnot)
+	arch_atomic_andnot(i, v);
+#else
 	raw_atomic_and(~i, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot arch_atomic_fetch_andnot
-#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
 raw_atomic_fetch_andnot(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_andnot)
+	return arch_atomic_fetch_andnot(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_andnot_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_fetch_andnot(int i, atomic_t *v)
-{
 	return raw_atomic_fetch_and(~i, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_andnot_acquire)
-#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
 raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_andnot_acquire)
+	return arch_atomic_fetch_andnot_acquire(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
+	return arch_atomic_fetch_andnot(i, v);
 #else
-static __always_inline int
-raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
 	return raw_atomic_fetch_and_acquire(~i, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_andnot_release)
-#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
 raw_atomic_fetch_andnot_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_andnot_release)
+	return arch_atomic_fetch_andnot_release(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_andnot_relaxed(i, v);
-}
 #elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
+	return arch_atomic_fetch_andnot(i, v);
 #else
-static __always_inline int
-raw_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
 	return raw_atomic_fetch_and_release(~i, v);
-}
 #endif
+}

-#if defined(arch_atomic_fetch_andnot_relaxed)
-#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
-#elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
-#else
 static __always_inline int
 raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_andnot_relaxed)
+	return arch_atomic_fetch_andnot_relaxed(i, v);
+#elif defined(arch_atomic_fetch_andnot)
+	return arch_atomic_fetch_andnot(i, v);
+#else
 	return raw_atomic_fetch_and_relaxed(~i, v);
-}
 #endif
+}

-#define raw_atomic_or arch_atomic_or
+static __always_inline void
+raw_atomic_or(int i, atomic_t *v)
+{
+	arch_atomic_or(i, v);
+}

-#if defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or arch_atomic_fetch_or
-#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
 raw_atomic_fetch_or(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_or)
+	return arch_atomic_fetch_or(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_or_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic_fetch_or"
 #endif
+}

-#if defined(arch_atomic_fetch_or_acquire)
-#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
-#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
 raw_atomic_fetch_or_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_or_acquire)
+	return arch_atomic_fetch_or_acquire(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
 	int ret = arch_atomic_fetch_or_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or
+	return arch_atomic_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_or_acquire"
 #endif
+}

-#if defined(arch_atomic_fetch_or_release)
-#define raw_atomic_fetch_or_release arch_atomic_fetch_or_release
-#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
 raw_atomic_fetch_or_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_or_release)
+	return arch_atomic_fetch_or_release(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_or_relaxed(i, v);
-}
 #elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_release arch_atomic_fetch_or
+	return arch_atomic_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_or_release"
 #endif
+}

+static __always_inline int
+raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
 #if defined(arch_atomic_fetch_or_relaxed)
-#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+	return arch_atomic_fetch_or_relaxed(i, v);
 #elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or
+	return arch_atomic_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_or_relaxed"
 #endif
+}

-#define raw_atomic_xor arch_atomic_xor
+static __always_inline void
+raw_atomic_xor(int i, atomic_t *v)
+{
+	arch_atomic_xor(i, v);
+}

-#if defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor arch_atomic_fetch_xor
-#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
 raw_atomic_fetch_xor(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_xor)
+	return arch_atomic_fetch_xor(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_xor_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic_fetch_xor"
 #endif
+}

-#if defined(arch_atomic_fetch_xor_acquire)
-#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
-#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
 raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_xor_acquire)
+	return arch_atomic_fetch_xor_acquire(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
 	int ret = arch_atomic_fetch_xor_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor
+	return arch_atomic_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_xor_acquire"
 #endif
+}

-#if defined(arch_atomic_fetch_xor_release)
-#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor_release
-#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
 raw_atomic_fetch_xor_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_xor_release)
+	return arch_atomic_fetch_xor_release(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_xor_relaxed(i, v);
-}
 #elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor
+	return arch_atomic_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_xor_release"
 #endif
+}

+static __always_inline int
+raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
 #if defined(arch_atomic_fetch_xor_relaxed)
-#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+	return arch_atomic_fetch_xor_relaxed(i, v);
 #elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
+	return arch_atomic_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_xor_relaxed"
 #endif
+}

-#if defined(arch_atomic_xchg)
-#define raw_atomic_xchg arch_atomic_xchg
-#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-raw_atomic_xchg(atomic_t *v, int i)
+raw_atomic_xchg(atomic_t *v, int new)
 {
+#if defined(arch_atomic_xchg)
+	return arch_atomic_xchg(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_xchg_relaxed(v, i);
+	ret = arch_atomic_xchg_relaxed(v, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_xchg(atomic_t *v, int new)
-{
 	return raw_xchg(&v->counter, new);
-}
 #endif
+}

-#if defined(arch_atomic_xchg_acquire)
-#define raw_atomic_xchg_acquire arch_atomic_xchg_acquire
-#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int i)
+raw_atomic_xchg_acquire(atomic_t *v, int new)
 {
-	int ret = arch_atomic_xchg_relaxed(v, i);
+#if defined(arch_atomic_xchg_acquire)
+	return arch_atomic_xchg_acquire(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
+	int ret = arch_atomic_xchg_relaxed(v, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_acquire arch_atomic_xchg
+	return arch_atomic_xchg(v, new);
 #else
-static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int new)
-{
 	return raw_xchg_acquire(&v->counter, new);
-}
 #endif
+}

-#if defined(arch_atomic_xchg_release)
-#define raw_atomic_xchg_release arch_atomic_xchg_release
-#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int i)
+raw_atomic_xchg_release(atomic_t *v, int new)
 {
+#if defined(arch_atomic_xchg_release)
+	return arch_atomic_xchg_release(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
 	__atomic_release_fence();
-	return arch_atomic_xchg_relaxed(v, i);
-}
+	return arch_atomic_xchg_relaxed(v, new);
 #elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_release arch_atomic_xchg
+	return arch_atomic_xchg(v, new);
 #else
-static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int new)
-{
 	return raw_xchg_release(&v->counter, new);
-}
 #endif
+}

-#if defined(arch_atomic_xchg_relaxed)
-#define raw_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_relaxed arch_atomic_xchg
-#else
 static __always_inline int
 raw_atomic_xchg_relaxed(atomic_t *v, int new)
 {
+#if defined(arch_atomic_xchg_relaxed)
+	return arch_atomic_xchg_relaxed(v, new);
+#elif defined(arch_atomic_xchg)
+	return arch_atomic_xchg(v, new);
+#else
 	return raw_xchg_relaxed(&v->counter, new);
-}
 #endif
+}

-#if defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg arch_atomic_cmpxchg
-#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
 raw_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
+#if defined(arch_atomic_cmpxchg)
+	return arch_atomic_cmpxchg(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
 	return raw_cmpxchg(&v->counter, old, new);
-}
 #endif
+}

-#if defined(arch_atomic_cmpxchg_acquire)
-#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
 raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
+#if defined(arch_atomic_cmpxchg_acquire)
+	return arch_atomic_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
 	int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg
+	return arch_atomic_cmpxchg(v, old, new);
 #else
-static __always_inline int
-raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
-{
 	return raw_cmpxchg_acquire(&v->counter, old, new);
-}
 #endif
+}

-#if defined(arch_atomic_cmpxchg_release)
-#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg_release
-#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
 raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
 {
+#if defined(arch_atomic_cmpxchg_release)
+	return arch_atomic_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_cmpxchg_relaxed(v, old, new);
-}
 #elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg
+	return arch_atomic_cmpxchg(v, old, new);
 #else
-static __always_inline int
-raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
-{
 	return raw_cmpxchg_release(&v->counter, old, new);
-}
 #endif
+}

-#if defined(arch_atomic_cmpxchg_relaxed)
-#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
-#else
 static __always_inline int
 raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
 {
+#if defined(arch_atomic_cmpxchg_relaxed)
+	return arch_atomic_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic_cmpxchg)
+	return arch_atomic_cmpxchg(v, old, new);
+#else
 	return raw_cmpxchg_relaxed(&v->counter, old, new);
-}
 #endif
+}

-#if defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg arch_atomic_try_cmpxchg
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
+#if defined(arch_atomic_try_cmpxchg)
+	return arch_atomic_try_cmpxchg(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
 	bool ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline bool
-raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
 	int r, o = *old;
 	r = raw_atomic_cmpxchg(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}

-#if defined(arch_atomic_try_cmpxchg_acquire)
-#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 {
+#if defined(arch_atomic_try_cmpxchg_acquire)
+	return arch_atomic_try_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
 	bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
+	return arch_atomic_try_cmpxchg(v, old, new);
 #else
-static __always_inline bool
-raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
 	int r, o = *old;
 	r = raw_atomic_cmpxchg_acquire(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}

-#if defined(arch_atomic_try_cmpxchg_release)
-#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 {
+#if defined(arch_atomic_try_cmpxchg_release)
+	return arch_atomic_try_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
 #elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
+	return arch_atomic_try_cmpxchg(v, old, new);
 #else
-static __always_inline bool
-raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
 	int r, o = *old;
 	r = raw_atomic_cmpxchg_release(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}

-#if defined(arch_atomic_try_cmpxchg_relaxed)
-#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
-#elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
-#else
 static __always_inline bool
 raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 {
+#if defined(arch_atomic_try_cmpxchg_relaxed)
+	return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg)
+	return arch_atomic_try_cmpxchg(v, old, new);
+#else
 	int r, o = *old;
 	r = raw_atomic_cmpxchg_relaxed(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}

-#if defined(arch_atomic_sub_and_test)
-#define raw_atomic_sub_and_test arch_atomic_sub_and_test
-#else
 static __always_inline bool
 raw_atomic_sub_and_test(int i, atomic_t *v)
 {
+#if defined(arch_atomic_sub_and_test)
+	return arch_atomic_sub_and_test(i, v);
+#else
 	return raw_atomic_sub_return(i, v) == 0;
-}
 #endif
+}

-#if defined(arch_atomic_dec_and_test)
-#define raw_atomic_dec_and_test arch_atomic_dec_and_test
-#else
 static __always_inline bool
 raw_atomic_dec_and_test(atomic_t *v)
 {
+#if defined(arch_atomic_dec_and_test)
+	return arch_atomic_dec_and_test(v);
+#else
 	return raw_atomic_dec_return(v) == 0;
-}
 #endif
+}

-#if defined(arch_atomic_inc_and_test)
-#define raw_atomic_inc_and_test arch_atomic_inc_and_test
-#else
 static __always_inline bool
 raw_atomic_inc_and_test(atomic_t *v)
 {
+#if defined(arch_atomic_inc_and_test)
+	return arch_atomic_inc_and_test(v);
+#else
 	return raw_atomic_inc_return(v) == 0;
-}
 #endif
+}

-#if defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative arch_atomic_add_negative
-#elif defined(arch_atomic_add_negative_relaxed)
 static __always_inline bool
 raw_atomic_add_negative(int i, atomic_t *v)
 {
+#if defined(arch_atomic_add_negative)
+	return arch_atomic_add_negative(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
 	bool ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_add_negative_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline bool
-raw_atomic_add_negative(int i, atomic_t *v)
-{
 	return raw_atomic_add_return(i, v) < 0;
-}
 #endif
+}

-#if defined(arch_atomic_add_negative_acquire)
-#define raw_atomic_add_negative_acquire arch_atomic_add_negative_acquire
-#elif defined(arch_atomic_add_negative_relaxed)
 static __always_inline bool
 raw_atomic_add_negative_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_add_negative_acquire)
+	return arch_atomic_add_negative_acquire(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
 	bool ret = arch_atomic_add_negative_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_acquire arch_atomic_add_negative
+	return arch_atomic_add_negative(i, v);
 #else
-static __always_inline bool
-raw_atomic_add_negative_acquire(int i, atomic_t *v)
-{
 	return raw_atomic_add_return_acquire(i, v) < 0;
-}
 #endif
+}

-#if defined(arch_atomic_add_negative_release)
-#define raw_atomic_add_negative_release arch_atomic_add_negative_release
-#elif defined(arch_atomic_add_negative_relaxed)
 static __always_inline bool
 raw_atomic_add_negative_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_add_negative_release)
+	return arch_atomic_add_negative_release(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_add_negative_relaxed(i, v);
-}
 #elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_release arch_atomic_add_negative
+	return arch_atomic_add_negative(i, v);
 #else
-static __always_inline bool
-raw_atomic_add_negative_release(int i, atomic_t *v)
-{
 	return raw_atomic_add_return_release(i, v) < 0;
-}
 #endif
+}

-#if defined(arch_atomic_add_negative_relaxed)
-#define raw_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
-#elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_relaxed arch_atomic_add_negative
-#else
 static __always_inline bool
 raw_atomic_add_negative_relaxed(int i, atomic_t *v)
 {
+#if defined(arch_atomic_add_negative_relaxed)
+	return arch_atomic_add_negative_relaxed(i, v);
+#elif defined(arch_atomic_add_negative)
+	return arch_atomic_add_negative(i, v);
+#else
 	return raw_atomic_add_return_relaxed(i, v) < 0;
-}
 #endif
+}

-#if defined(arch_atomic_fetch_add_unless)
-#define raw_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-#else
 static __always_inline int
 raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
+#if defined(arch_atomic_fetch_add_unless)
+	return arch_atomic_fetch_add_unless(v, a, u);
+#else
 	int c = raw_atomic_read(v);

 	do {
@@ -1594,35 +1542,35 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 	} while (!raw_atomic_try_cmpxchg(v, &c, c + a));

 	return c;
-}
 #endif
+}

-#if defined(arch_atomic_add_unless)
-#define raw_atomic_add_unless arch_atomic_add_unless
-#else
 static __always_inline bool
 raw_atomic_add_unless(atomic_t *v, int a, int u)
 {
+#if defined(arch_atomic_add_unless)
+	return arch_atomic_add_unless(v, a, u);
+#else
 	return raw_atomic_fetch_add_unless(v, a, u) != u;
-}
 #endif
+}

-#if defined(arch_atomic_inc_not_zero)
-#define raw_atomic_inc_not_zero arch_atomic_inc_not_zero
-#else
 static __always_inline bool
 raw_atomic_inc_not_zero(atomic_t *v)
 {
+#if defined(arch_atomic_inc_not_zero)
+	return arch_atomic_inc_not_zero(v);
+#else
 	return raw_atomic_add_unless(v, 1, 0);
-}
 #endif
+}

-#if defined(arch_atomic_inc_unless_negative)
-#define raw_atomic_inc_unless_negative arch_atomic_inc_unless_negative
-#else
 static __always_inline bool
 raw_atomic_inc_unless_negative(atomic_t *v)
 {
+#if defined(arch_atomic_inc_unless_negative)
+	return arch_atomic_inc_unless_negative(v);
+#else
 	int c = raw_atomic_read(v);

 	do {
@@ -1631,15 +1579,15 @@ raw_atomic_inc_unless_negative(atomic_t *v)
 	} while (!raw_atomic_try_cmpxchg(v, &c, c + 1));

 	return true;
-}
 #endif
+}

-#if defined(arch_atomic_dec_unless_positive)
-#define raw_atomic_dec_unless_positive arch_atomic_dec_unless_positive
-#else
 static __always_inline bool
 raw_atomic_dec_unless_positive(atomic_t *v)
 {
+#if defined(arch_atomic_dec_unless_positive)
+	return arch_atomic_dec_unless_positive(v);
+#else
 	int c = raw_atomic_read(v);

 	do {
@@ -1648,15 +1596,15 @@ raw_atomic_dec_unless_positive(atomic_t *v)
 	} while (!raw_atomic_try_cmpxchg(v, &c, c - 1));

 	return true;
-}
 #endif
+}

-#if defined(arch_atomic_dec_if_positive)
-#define raw_atomic_dec_if_positive arch_atomic_dec_if_positive
-#else
 static __always_inline int
 raw_atomic_dec_if_positive(atomic_t *v)
 {
+#if defined(arch_atomic_dec_if_positive)
+	return arch_atomic_dec_if_positive(v);
+#else
 	int dec, c = raw_atomic_read(v);

 	do {
@@ -1666,23 +1614,27 @@ raw_atomic_dec_if_positive(atomic_t *v)
 	} while (!raw_atomic_try_cmpxchg(v, &c, dec));

 	return dec;
-}
 #endif
+}

 #ifdef CONFIG_GENERIC_ATOMIC64
 #include <asm-generic/atomic64.h>
 #endif

-#define raw_atomic64_read arch_atomic64_read
+static __always_inline s64
+raw_atomic64_read(const atomic64_t *v)
+{
+	return arch_atomic64_read(v);
+}

-#if defined(arch_atomic64_read_acquire)
-#define raw_atomic64_read_acquire arch_atomic64_read_acquire
-#elif defined(arch_atomic64_read)
-#define raw_atomic64_read_acquire arch_atomic64_read
-#else
 static __always_inline s64
 raw_atomic64_read_acquire(const atomic64_t *v)
 {
+#if defined(arch_atomic64_read_acquire)
+	return arch_atomic64_read_acquire(v);
+#elif defined(arch_atomic64_read)
+	return arch_atomic64_read(v);
+#else
 	s64 ret;

 	if (__native_word(atomic64_t)) {
@@ -1693,1144 +1645,1088 @@ raw_atomic64_read_acquire(const atomic64_t *v)
 	}

 	return ret;
-}
 #endif
+}

-#define raw_atomic64_set arch_atomic64_set
+static __always_inline void
+raw_atomic64_set(atomic64_t *v, s64 i)
+{
+	arch_atomic64_set(v, i);
+}

-#if defined(arch_atomic64_set_release)
-#define raw_atomic64_set_release arch_atomic64_set_release
-#elif defined(arch_atomic64_set)
-#define raw_atomic64_set_release arch_atomic64_set
-#else
 static __always_inline void
 raw_atomic64_set_release(atomic64_t *v, s64 i)
 {
+#if defined(arch_atomic64_set_release)
+	arch_atomic64_set_release(v, i);
+#elif defined(arch_atomic64_set)
+	arch_atomic64_set(v, i);
+#else
 	if (__native_word(atomic64_t)) {
 		smp_store_release(&(v)->counter, i);
 	} else {
 		__atomic_release_fence();
 		raw_atomic64_set(v, i);
 	}
-}
 #endif
+}

-#define raw_atomic64_add arch_atomic64_add
+static __always_inline void
+raw_atomic64_add(s64 i, atomic64_t *v)
+{
+	arch_atomic64_add(i, v);
+}

-#if defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return arch_atomic64_add_return
-#elif defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
 raw_atomic64_add_return(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_return)
+	return arch_atomic64_add_return(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_add_return_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_add_return"
 #endif
+}

-#if defined(arch_atomic64_add_return_acquire)
-#define raw_atomic64_add_return_acquire arch_atomic64_add_return_acquire
-#elif defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
 raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_return_acquire)
+	return arch_atomic64_add_return_acquire(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
 	s64 ret = arch_atomic64_add_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_acquire arch_atomic64_add_return
+	return arch_atomic64_add_return(i, v);
 #else
 #error "Unable to define raw_atomic64_add_return_acquire"
 #endif
+}

-#if defined(arch_atomic64_add_return_release)
-#define raw_atomic64_add_return_release arch_atomic64_add_return_release
-#elif defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
 raw_atomic64_add_return_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_return_release)
+	return arch_atomic64_add_return_release(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_add_return_relaxed(i, v);
-}
 #elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_release arch_atomic64_add_return
+	return arch_atomic64_add_return(i, v);
 #else
 #error "Unable to define raw_atomic64_add_return_release"
 #endif
+}

+static __always_inline s64
+raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_add_return_relaxed)
-#define raw_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+	return arch_atomic64_add_return_relaxed(i, v);
 #elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_relaxed arch_atomic64_add_return
+	return arch_atomic64_add_return(i, v);
 #else
 #error "Unable to define raw_atomic64_add_return_relaxed"
 #endif
+}

-#if defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add arch_atomic64_fetch_add
-#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_add)
+	return arch_atomic64_fetch_add(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_add_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_add"
 #endif
+}

-#if defined(arch_atomic64_fetch_add_acquire)
-#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
-#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_add_acquire)
+	return arch_atomic64_fetch_add_acquire(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
 	s64 ret = arch_atomic64_fetch_add_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add
+	return arch_atomic64_fetch_add(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_add_acquire"
 #endif
+}

-#if defined(arch_atomic64_fetch_add_release)
-#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add_release
-#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_add_release)
+	return arch_atomic64_fetch_add_release(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_add_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add
+	return arch_atomic64_fetch_add(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_add_release"
 #endif
+}

+static __always_inline s64
+raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_add_relaxed)
-#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+	return arch_atomic64_fetch_add_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
+	return arch_atomic64_fetch_add(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_add_relaxed"
 #endif
+}

-#define raw_atomic64_sub arch_atomic64_sub
+static __always_inline void
+raw_atomic64_sub(s64 i, atomic64_t *v)
+{
+	arch_atomic64_sub(i, v);
+}

-#if defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return arch_atomic64_sub_return
-#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
 raw_atomic64_sub_return(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_sub_return)
+	return arch_atomic64_sub_return(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_sub_return_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_sub_return"
 #endif
+}

-#if defined(arch_atomic64_sub_return_acquire)
-#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
-#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
 raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_sub_return_acquire)
+	return arch_atomic64_sub_return_acquire(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
 	s64 ret = arch_atomic64_sub_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return
+	return arch_atomic64_sub_return(i, v);
 #else
 #error "Unable to define raw_atomic64_sub_return_acquire"
 #endif
+}

-#if defined(arch_atomic64_sub_return_release)
-#define raw_atomic64_sub_return_release arch_atomic64_sub_return_release
-#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
 raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_sub_return_release)
+	return arch_atomic64_sub_return_release(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_sub_return_relaxed(i, v);
-}
 #elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_release arch_atomic64_sub_return
+	return arch_atomic64_sub_return(i, v);
 #else
 #error "Unable to define raw_atomic64_sub_return_release"
 #endif
+}

+static __always_inline s64
+raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_sub_return_relaxed)
-#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
+	return arch_atomic64_sub_return_relaxed(i, v);
 #elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return
+	return arch_atomic64_sub_return(i, v);
 #else
 #error "Unable to define raw_atomic64_sub_return_relaxed"
 #endif
+}

-#if defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub arch_atomic64_fetch_sub
-#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_sub)
+	return arch_atomic64_fetch_sub(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_sub_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_sub"
 #endif
+}

-#if defined(arch_atomic64_fetch_sub_acquire)
-#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
-#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_sub_acquire)
+	return arch_atomic64_fetch_sub_acquire(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 	s64 ret = arch_atomic64_fetch_sub_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
+	return arch_atomic64_fetch_sub(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_sub_acquire"
 #endif
+}

-#if defined(arch_atomic64_fetch_sub_release)
-#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
-#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_sub_release)
+	return arch_atomic64_fetch_sub_release(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_sub_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub
+	return arch_atomic64_fetch_sub(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_sub_release"
 #endif
+}

+static __always_inline s64
+raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_sub_relaxed)
-#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
+	return arch_atomic64_fetch_sub_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
+	return arch_atomic64_fetch_sub(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_sub_relaxed"
 #endif
+}

-#if defined(arch_atomic64_inc)
-#define raw_atomic64_inc arch_atomic64_inc
-#else
 static __always_inline void
 raw_atomic64_inc(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc)
+	arch_atomic64_inc(v);
+#else
 	raw_atomic64_add(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return arch_atomic64_inc_return
-#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
 raw_atomic64_inc_return(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_return)
+	return arch_atomic64_inc_return(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_inc_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_inc_return(atomic64_t *v)
-{
 	return raw_atomic64_add_return(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_inc_return_acquire)
-#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
-#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
 raw_atomic64_inc_return_acquire(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_return_acquire)
+	return arch_atomic64_inc_return_acquire(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
 	s64 ret = arch_atomic64_inc_return_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return
+	return arch_atomic64_inc_return(v);
 #else
-static __always_inline s64
-raw_atomic64_inc_return_acquire(atomic64_t *v)
-{
 	return raw_atomic64_add_return_acquire(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_inc_return_release)
-#define raw_atomic64_inc_return_release arch_atomic64_inc_return_release
-#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
 raw_atomic64_inc_return_release(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_return_release)
+	return arch_atomic64_inc_return_release(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_inc_return_relaxed(v);
-}
 #elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_release arch_atomic64_inc_return
+	return arch_atomic64_inc_return(v);
 #else
-static __always_inline s64
-raw_atomic64_inc_return_release(atomic64_t *v)
-{
 	return raw_atomic64_add_return_release(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_inc_return_relaxed)
-#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
-#elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return
-#else
 static __always_inline s64
 raw_atomic64_inc_return_relaxed(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_return_relaxed)
+	return arch_atomic64_inc_return_relaxed(v);
+#elif defined(arch_atomic64_inc_return)
+	return arch_atomic64_inc_return(v);
+#else
 	return raw_atomic64_add_return_relaxed(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc arch_atomic64_fetch_inc
-#elif defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_inc(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_inc)
+	return arch_atomic64_fetch_inc(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_inc_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_fetch_inc(atomic64_t *v)
-{
 	return raw_atomic64_fetch_add(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_inc_acquire)
-#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
-#elif defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_inc_acquire(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_inc_acquire)
+	return arch_atomic64_fetch_inc_acquire(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
 	s64 ret = arch_atomic64_fetch_inc_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
+	return arch_atomic64_fetch_inc(v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_inc_acquire(atomic64_t *v)
-{
 	return raw_atomic64_fetch_add_acquire(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_inc_release)
-#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
-#elif defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_inc_release(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_inc_release)
+	return arch_atomic64_fetch_inc_release(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_inc_relaxed(v);
-}
 #elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc
+	return arch_atomic64_fetch_inc(v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_inc_release(atomic64_t *v)
-{
 	return raw_atomic64_fetch_add_release(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_inc_relaxed)
-#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
-#elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
-#else
 static __always_inline s64
 raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_inc_relaxed)
+	return arch_atomic64_fetch_inc_relaxed(v);
+#elif defined(arch_atomic64_fetch_inc)
+	return arch_atomic64_fetch_inc(v);
+#else
 	return raw_atomic64_fetch_add_relaxed(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_dec)
-#define raw_atomic64_dec arch_atomic64_dec
-#else
 static __always_inline void
 raw_atomic64_dec(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec)
+	arch_atomic64_dec(v);
+#else
 	raw_atomic64_sub(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return arch_atomic64_dec_return
-#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
 raw_atomic64_dec_return(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_return)
+	return arch_atomic64_dec_return(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_dec_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_dec_return(atomic64_t *v)
-{
 	return raw_atomic64_sub_return(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_dec_return_acquire)
-#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
-#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
 raw_atomic64_dec_return_acquire(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_return_acquire)
+	return arch_atomic64_dec_return_acquire(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
 	s64 ret = arch_atomic64_dec_return_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return
+	return arch_atomic64_dec_return(v);
 #else
-static __always_inline s64
-raw_atomic64_dec_return_acquire(atomic64_t *v)
-{
 	return raw_atomic64_sub_return_acquire(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_dec_return_release)
-#define raw_atomic64_dec_return_release arch_atomic64_dec_return_release
-#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
 raw_atomic64_dec_return_release(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_return_release)
+	return arch_atomic64_dec_return_release(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_dec_return_relaxed(v);
-}
 #elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_release arch_atomic64_dec_return
+	return arch_atomic64_dec_return(v);
 #else
-static __always_inline s64
-raw_atomic64_dec_return_release(atomic64_t *v)
-{
 	return raw_atomic64_sub_return_release(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_dec_return_relaxed)
-#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
-#elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return
-#else
 static __always_inline s64
 raw_atomic64_dec_return_relaxed(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_return_relaxed)
+	return arch_atomic64_dec_return_relaxed(v);
+#elif defined(arch_atomic64_dec_return)
+	return arch_atomic64_dec_return(v);
+#else
 	return raw_atomic64_sub_return_relaxed(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec arch_atomic64_fetch_dec
-#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_dec(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_dec)
+	return arch_atomic64_fetch_dec(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_dec_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_fetch_dec(atomic64_t *v)
-{
 	return raw_atomic64_fetch_sub(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_dec_acquire)
-#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
-#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_dec_acquire(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_dec_acquire)
+	return arch_atomic64_fetch_dec_acquire(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 	s64 ret = arch_atomic64_fetch_dec_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
+	return arch_atomic64_fetch_dec(v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_dec_acquire(atomic64_t *v)
-{
 	return raw_atomic64_fetch_sub_acquire(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_dec_release)
-#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
-#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_dec_release(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_dec_release)
+	return arch_atomic64_fetch_dec_release(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_dec_relaxed(v);
-}
 #elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec
+	return arch_atomic64_fetch_dec(v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_dec_release(atomic64_t *v)
-{
 	return raw_atomic64_fetch_sub_release(1, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_dec_relaxed)
-#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
-#elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
-#else
 static __always_inline s64
 raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_dec_relaxed)
+	return arch_atomic64_fetch_dec_relaxed(v);
+#elif defined(arch_atomic64_fetch_dec)
+	return arch_atomic64_fetch_dec(v);
+#else
 	return raw_atomic64_fetch_sub_relaxed(1, v);
-}
 #endif
+}

-#define raw_atomic64_and arch_atomic64_and
+static __always_inline void
+raw_atomic64_and(s64 i, atomic64_t *v)
+{
+	arch_atomic64_and(i, v);
+}

-#if defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and arch_atomic64_fetch_and
-#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_and(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_and)
+	return arch_atomic64_fetch_and(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_and_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_and"
 #endif
+}

-#if defined(arch_atomic64_fetch_and_acquire)
-#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
-#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_and_acquire)
+	return arch_atomic64_fetch_and_acquire(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
 	s64 ret = arch_atomic64_fetch_and_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and
+	return arch_atomic64_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_and_acquire"
 #endif
+}

-#if defined(arch_atomic64_fetch_and_release)
-#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and_release
-#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_and_release)
+	return arch_atomic64_fetch_and_release(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_and_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and
+	return arch_atomic64_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_and_release"
 #endif
+}

+static __always_inline s64
+raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_and_relaxed)
-#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+	return arch_atomic64_fetch_and_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
+	return arch_atomic64_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_and_relaxed"
 #endif
+}

-#if defined(arch_atomic64_andnot)
-#define raw_atomic64_andnot arch_atomic64_andnot
-#else
 static __always_inline void
 raw_atomic64_andnot(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_andnot)
+	arch_atomic64_andnot(i, v);
+#else
 	raw_atomic64_and(~i, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot arch_atomic64_fetch_andnot
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_andnot)
+	return arch_atomic64_fetch_andnot(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_andnot_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
-{
 	return raw_atomic64_fetch_and(~i, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_andnot_acquire)
-#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_andnot_acquire)
+	return arch_atomic64_fetch_andnot_acquire(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 	s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
+	return arch_atomic64_fetch_andnot(i, v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
-{
 	return raw_atomic64_fetch_and_acquire(~i, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_andnot_release)
-#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_andnot_release)
+	return arch_atomic64_fetch_andnot_release(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_andnot_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
+	return arch_atomic64_fetch_andnot(i, v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
-{
 	return raw_atomic64_fetch_and_release(~i, v);
-}
 #endif
+}

-#if defined(arch_atomic64_fetch_andnot_relaxed)
-#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
-#elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
-#else
 static __always_inline s64
 raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_andnot_relaxed)
+	return arch_atomic64_fetch_andnot_relaxed(i, v);
+#elif defined(arch_atomic64_fetch_andnot)
+	return arch_atomic64_fetch_andnot(i, v);
+#else
 	return raw_atomic64_fetch_and_relaxed(~i, v);
-}
 #endif
+}

-#define raw_atomic64_or arch_atomic64_or
+static __always_inline void
+raw_atomic64_or(s64 i, atomic64_t *v)
+{
+	arch_atomic64_or(i, v);
+}

-#if defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or arch_atomic64_fetch_or
-#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_or(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_or)
+	return arch_atomic64_fetch_or(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_or_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_or"
 #endif
+}

-#if defined(arch_atomic64_fetch_or_acquire)
-#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
-#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_or_acquire)
+	return arch_atomic64_fetch_or_acquire(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
 	s64 ret = arch_atomic64_fetch_or_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or
+	return arch_atomic64_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_or_acquire"
 #endif
+}

-#if defined(arch_atomic64_fetch_or_release)
-#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or_release
-#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_or_release)
+	return arch_atomic64_fetch_or_release(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_or_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or
+	return arch_atomic64_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_or_release"
 #endif
+}

+static __always_inline s64
+raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_or_relaxed)
-#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
+	return arch_atomic64_fetch_or_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
+	return arch_atomic64_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_or_relaxed"
 #endif
+}

-#define raw_atomic64_xor arch_atomic64_xor
+static __always_inline void
+raw_atomic64_xor(s64 i, atomic64_t *v)
+{
+	arch_atomic64_xor(i, v);
+}

-#if defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor arch_atomic64_fetch_xor
-#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_xor)
+	return arch_atomic64_fetch_xor(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_xor_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_xor"
 #endif
+}

-#if defined(arch_atomic64_fetch_xor_acquire)
-#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
-#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_xor_acquire)
+	return arch_atomic64_fetch_xor_acquire(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 	s64 ret = arch_atomic64_fetch_xor_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
+	return arch_atomic64_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_xor_acquire"
 #endif
+}

-#if defined(arch_atomic64_fetch_xor_release)
-#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
-#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_xor_release)
+	return arch_atomic64_fetch_xor_release(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_xor_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor
+	return arch_atomic64_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_xor_release"
 #endif
+}

+static __always_inline s64
+raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_xor_relaxed)
-#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
+	return arch_atomic64_fetch_xor_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
+	return arch_atomic64_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_xor_relaxed"
 #endif
+}

-#if defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg arch_atomic64_xchg
-#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 i)
+raw_atomic64_xchg(atomic64_t *v, s64 new)
 {
+#if defined(arch_atomic64_xchg)
+	return arch_atomic64_xchg(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_xchg_relaxed(v, i);
+	ret = arch_atomic64_xchg_relaxed(v, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 new)
-{
 	return raw_xchg(&v->counter, new);
-}
 #endif
+}

-#if defined(arch_atomic64_xchg_acquire)
-#define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire
-#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
 {
-	s64 ret = arch_atomic64_xchg_relaxed(v, i);
+#if defined(arch_atomic64_xchg_acquire)
+	return arch_atomic64_xchg_acquire(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
+	s64 ret = arch_atomic64_xchg_relaxed(v, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_acquire arch_atomic64_xchg
+	return arch_atomic64_xchg(v, new);
 #else
-static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
-{
 	return raw_xchg_acquire(&v->counter, new);
-}
 #endif
+}

-#if defined(arch_atomic64_xchg_release)
-#define raw_atomic64_xchg_release arch_atomic64_xchg_release
-#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 i)
+raw_atomic64_xchg_release(atomic64_t *v, s64 new)
 {
+#if defined(arch_atomic64_xchg_release)
+	return arch_atomic64_xchg_release(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
 	__atomic_release_fence();
-	return arch_atomic64_xchg_relaxed(v, i);
-}
+	return arch_atomic64_xchg_relaxed(v, new);
 #elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_release arch_atomic64_xchg
+	return arch_atomic64_xchg(v, new);
 #else
-static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 new)
-{
 	return raw_xchg_release(&v->counter, new);
-}
 #endif
+}

-#if defined(arch_atomic64_xchg_relaxed)
-#define raw_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
-#elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_relaxed arch_atomic64_xchg
-#else
 static __always_inline s64
 raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
 {
+#if defined(arch_atomic64_xchg_relaxed)
+	return arch_atomic64_xchg_relaxed(v, new);
+#elif defined(arch_atomic64_xchg)
+	return arch_atomic64_xchg(v, new);
+#else
 	return raw_xchg_relaxed(&v->counter, new);
-}
 #endif
+}

-#if defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg arch_atomic64_cmpxchg
-#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
 raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
+#if defined(arch_atomic64_cmpxchg)
+	return arch_atomic64_cmpxchg(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
-{
 	return raw_cmpxchg(&v->counter, old, new);
-}
 #endif
+}

-#if defined(arch_atomic64_cmpxchg_acquire)
-#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
-#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
 raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
 {
+#if defined(arch_atomic64_cmpxchg_acquire)
+	return arch_atomic64_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 	s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
+	return arch_atomic64_cmpxchg(v, old, new);
 #else
-static __always_inline s64
-raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
-{
 	return raw_cmpxchg_acquire(&v->counter, old, new);
-}
 #endif
+}

-#if defined(arch_atomic64_cmpxchg_release)
-#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
-#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
 raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
 {
+#if defined(arch_atomic64_cmpxchg_release)
+	return arch_atomic64_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_cmpxchg_relaxed(v, old, new);
-}
 #elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg
+	return arch_atomic64_cmpxchg(v, old, new);
 #else
-static __always_inline s64
-raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
-{
 	return raw_cmpxchg_release(&v->counter, old, new);
-}
 #endif
+}

-#if defined(arch_atomic64_cmpxchg_relaxed)
-#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
-#elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
-#else
 static __always_inline s64
 raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
 {
+#if defined(arch_atomic64_cmpxchg_relaxed)
+	return arch_atomic64_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic64_cmpxchg)
+	return arch_atomic64_cmpxchg(v, old, new);
+#else
 	return raw_cmpxchg_relaxed(&v->counter, old, new);
-}
 #endif
+}

-#if defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
+#if defined(arch_atomic64_try_cmpxchg)
+	return arch_atomic64_try_cmpxchg(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 	bool ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline bool
-raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
-{
 	s64 r, o = *old;
 	r = raw_atomic64_cmpxchg(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}

-#if defined(arch_atomic64_try_cmpxchg_acquire)
-#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 {
+#if defined(arch_atomic64_try_cmpxchg_acquire)
+	return arch_atomic64_try_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 	bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
+	return arch_atomic64_try_cmpxchg(v, old, new);
 #else
-static __always_inline bool
-raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
 	s64 r, o = *old;
 	r = raw_atomic64_cmpxchg_acquire(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}

-#if defined(arch_atomic64_try_cmpxchg_release)
-#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 {
+#if defined(arch_atomic64_try_cmpxchg_release)
+	return arch_atomic64_try_cmpxchg_release(v, old, new);
++#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-}
 #elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
+	return arch_atomic64_try_cmpxchg(v, old, new);
 #else
-static __always_inline bool
-raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
 	s64 r, o = *old;
 	r = raw_atomic64_cmpxchg_release(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}

-#if
defined(arch_atomic64_try_cmpxchg_relaxed) -#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed -#elif defined(arch_atomic64_try_cmpxchg) -#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg -#else static __always_inline bool raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { +#if defined(arch_atomic64_try_cmpxchg_relaxed) + return arch_atomic64_try_cmpxchg_relaxed(v, old, new); +#elif defined(arch_atomic64_try_cmpxchg) + return arch_atomic64_try_cmpxchg(v, old, new); +#else s64 r, o = *old; r = raw_atomic64_cmpxchg_relaxed(v, o, new); if (unlikely(r != o)) *old = r; return likely(r == o); -} #endif +} -#if defined(arch_atomic64_sub_and_test) -#define raw_atomic64_sub_and_test arch_atomic64_sub_and_test -#else static __always_inline bool raw_atomic64_sub_and_test(s64 i, atomic64_t *v) { +#if defined(arch_atomic64_sub_and_test) + return arch_atomic64_sub_and_test(i, v); +#else return raw_atomic64_sub_return(i, v) == 0; -} #endif +} -#if defined(arch_atomic64_dec_and_test) -#define raw_atomic64_dec_and_test arch_atomic64_dec_and_test -#else static __always_inline bool raw_atomic64_dec_and_test(atomic64_t *v) { +#if defined(arch_atomic64_dec_and_test) + return arch_atomic64_dec_and_test(v); +#else return raw_atomic64_dec_return(v) == 0; -} #endif +} -#if defined(arch_atomic64_inc_and_test) -#define raw_atomic64_inc_and_test arch_atomic64_inc_and_test -#else static __always_inline bool raw_atomic64_inc_and_test(atomic64_t *v) { +#if defined(arch_atomic64_inc_and_test) + return arch_atomic64_inc_and_test(v); +#else return raw_atomic64_inc_return(v) == 0; -} #endif +} -#if defined(arch_atomic64_add_negative) -#define raw_atomic64_add_negative arch_atomic64_add_negative -#elif defined(arch_atomic64_add_negative_relaxed) static __always_inline bool raw_atomic64_add_negative(s64 i, atomic64_t *v) { +#if defined(arch_atomic64_add_negative) + return arch_atomic64_add_negative(i, v); +#elif defined(arch_atomic64_add_negative_relaxed) bool ret; __atomic_pre_full_fence(); ret = arch_atomic64_add_negative_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else -static __always_inline bool -raw_atomic64_add_negative(s64 i, atomic64_t *v) -{ return raw_atomic64_add_return(i, v) < 0; -} #endif +} -#if defined(arch_atomic64_add_negative_acquire) -#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire -#elif defined(arch_atomic64_add_negative_relaxed) static __always_inline bool raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) { +#if defined(arch_atomic64_add_negative_acquire) + return arch_atomic64_add_negative_acquire(i, v); +#elif defined(arch_atomic64_add_negative_relaxed) bool ret = arch_atomic64_add_negative_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic64_add_negative) -#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative + return arch_atomic64_add_negative(i, v); #else -static __always_inline bool -raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) -{ return raw_atomic64_add_return_acquire(i, v) < 0; -} #endif +} -#if defined(arch_atomic64_add_negative_release) -#define raw_atomic64_add_negative_release arch_atomic64_add_negative_release -#elif defined(arch_atomic64_add_negative_relaxed) static __always_inline bool raw_atomic64_add_negative_release(s64 i, atomic64_t *v) { +#if defined(arch_atomic64_add_negative_release) + return arch_atomic64_add_negative_release(i, v); +#elif defined(arch_atomic64_add_negative_relaxed) __atomic_release_fence(); return 
arch_atomic64_add_negative_relaxed(i, v); -} #elif defined(arch_atomic64_add_negative) -#define raw_atomic64_add_negative_release arch_atomic64_add_negative + return arch_atomic64_add_negative(i, v); #else -static __always_inline bool -raw_atomic64_add_negative_release(s64 i, atomic64_t *v) -{ return raw_atomic64_add_return_release(i, v) < 0; -} #endif +} -#if defined(arch_atomic64_add_negative_relaxed) -#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed -#elif defined(arch_atomic64_add_negative) -#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative -#else static __always_inline bool raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { +#if defined(arch_atomic64_add_negative_relaxed) + return arch_atomic64_add_negative_relaxed(i, v); +#elif defined(arch_atomic64_add_negative) + return arch_atomic64_add_negative(i, v); +#else return raw_atomic64_add_return_relaxed(i, v) < 0; -} #endif +} -#if defined(arch_atomic64_fetch_add_unless) -#define raw_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless -#else static __always_inline s64 raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { +#if defined(arch_atomic64_fetch_add_unless) + return arch_atomic64_fetch_add_unless(v, a, u); +#else s64 c = raw_atomic64_read(v); do { @@ -2839,35 +2735,35 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) } while (!raw_atomic64_try_cmpxchg(v, &c, c + a)); return c; -} #endif +} -#if defined(arch_atomic64_add_unless) -#define raw_atomic64_add_unless arch_atomic64_add_unless -#else static __always_inline bool raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { +#if defined(arch_atomic64_add_unless) + return arch_atomic64_add_unless(v, a, u); +#else return raw_atomic64_fetch_add_unless(v, a, u) != u; -} #endif +} -#if defined(arch_atomic64_inc_not_zero) -#define raw_atomic64_inc_not_zero arch_atomic64_inc_not_zero -#else static __always_inline bool raw_atomic64_inc_not_zero(atomic64_t *v) { +#if defined(arch_atomic64_inc_not_zero) + return arch_atomic64_inc_not_zero(v); +#else return raw_atomic64_add_unless(v, 1, 0); -} #endif +} -#if defined(arch_atomic64_inc_unless_negative) -#define raw_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative -#else static __always_inline bool raw_atomic64_inc_unless_negative(atomic64_t *v) { +#if defined(arch_atomic64_inc_unless_negative) + return arch_atomic64_inc_unless_negative(v); +#else s64 c = raw_atomic64_read(v); do { @@ -2876,15 +2772,15 @@ raw_atomic64_inc_unless_negative(atomic64_t *v) } while (!raw_atomic64_try_cmpxchg(v, &c, c + 1)); return true; -} #endif +} -#if defined(arch_atomic64_dec_unless_positive) -#define raw_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive -#else static __always_inline bool raw_atomic64_dec_unless_positive(atomic64_t *v) { +#if defined(arch_atomic64_dec_unless_positive) + return arch_atomic64_dec_unless_positive(v); +#else s64 c = raw_atomic64_read(v); do { @@ -2893,15 +2789,15 @@ raw_atomic64_dec_unless_positive(atomic64_t *v) } while (!raw_atomic64_try_cmpxchg(v, &c, c - 1)); return true; -} #endif +} -#if defined(arch_atomic64_dec_if_positive) -#define raw_atomic64_dec_if_positive arch_atomic64_dec_if_positive -#else static __always_inline s64 raw_atomic64_dec_if_positive(atomic64_t *v) { +#if defined(arch_atomic64_dec_if_positive) + return arch_atomic64_dec_if_positive(v); +#else s64 dec, c = raw_atomic64_read(v); do { @@ -2911,8 +2807,8 @@ raw_atomic64_dec_if_positive(atomic64_t *v) } while (!raw_atomic64_try_cmpxchg(v, &c, dec)); return 
dec; -} #endif +} #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// c2048fccede6fac923252290e2b303949d5dec83 +// 205e090382132f1fc85e48b46e722865f9c81309 diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h index 90ee2f55af770..5491c89dc03a0 100644 --- a/include/linux/atomic/atomic-instrumented.h +++ b/include/linux/atomic/atomic-instrumented.h @@ -462,33 +462,33 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v) } static __always_inline int -atomic_xchg(atomic_t *v, int i) +atomic_xchg(atomic_t *v, int new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_xchg(v, i); + return raw_atomic_xchg(v, new); } static __always_inline int -atomic_xchg_acquire(atomic_t *v, int i) +atomic_xchg_acquire(atomic_t *v, int new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_xchg_acquire(v, i); + return raw_atomic_xchg_acquire(v, new); } static __always_inline int -atomic_xchg_release(atomic_t *v, int i) +atomic_xchg_release(atomic_t *v, int new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_xchg_release(v, i); + return raw_atomic_xchg_release(v, new); } static __always_inline int -atomic_xchg_relaxed(atomic_t *v, int i) +atomic_xchg_relaxed(atomic_t *v, int new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_xchg_relaxed(v, i); + return raw_atomic_xchg_relaxed(v, new); } static __always_inline int @@ -1103,33 +1103,33 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) } static __always_inline s64 -atomic64_xchg(atomic64_t *v, s64 i) +atomic64_xchg(atomic64_t *v, s64 new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic64_xchg(v, i); + return raw_atomic64_xchg(v, new); } static __always_inline s64 -atomic64_xchg_acquire(atomic64_t *v, s64 i) +atomic64_xchg_acquire(atomic64_t *v, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic64_xchg_acquire(v, i); + return raw_atomic64_xchg_acquire(v, new); } static __always_inline s64 -atomic64_xchg_release(atomic64_t *v, s64 i) +atomic64_xchg_release(atomic64_t *v, s64 new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic64_xchg_release(v, i); + return raw_atomic64_xchg_release(v, new); } static __always_inline s64 -atomic64_xchg_relaxed(atomic64_t *v, s64 i) +atomic64_xchg_relaxed(atomic64_t *v, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic64_xchg_relaxed(v, i); + return raw_atomic64_xchg_relaxed(v, new); } static __always_inline s64 @@ -1744,33 +1744,33 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) } static __always_inline long -atomic_long_xchg(atomic_long_t *v, long i) +atomic_long_xchg(atomic_long_t *v, long new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_long_xchg(v, i); + return raw_atomic_long_xchg(v, new); } static __always_inline long -atomic_long_xchg_acquire(atomic_long_t *v, long i) +atomic_long_xchg_acquire(atomic_long_t *v, long new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_long_xchg_acquire(v, i); + return raw_atomic_long_xchg_acquire(v, new); } static __always_inline long -atomic_long_xchg_release(atomic_long_t *v, long i) +atomic_long_xchg_release(atomic_long_t *v, long new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_long_xchg_release(v, i); + return raw_atomic_long_xchg_release(v, new); } static __always_inline long -atomic_long_xchg_relaxed(atomic_long_t *v, long i) 
+atomic_long_xchg_relaxed(atomic_long_t *v, long new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_long_xchg_relaxed(v, i); + return raw_atomic_long_xchg_relaxed(v, new); } static __always_inline long @@ -2231,4 +2231,4 @@ atomic_long_dec_if_positive(atomic_long_t *v) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// f6502977180430e61c1a7c4e5e665f04f501fb8d +// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index 63e0b4078ebd5..f564f71ff8afc 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -622,42 +622,42 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) } static __always_inline long -raw_atomic_long_xchg(atomic_long_t *v, long i) +raw_atomic_long_xchg(atomic_long_t *v, long new) { #ifdef CONFIG_64BIT - return raw_atomic64_xchg(v, i); + return raw_atomic64_xchg(v, new); #else - return raw_atomic_xchg(v, i); + return raw_atomic_xchg(v, new); #endif } static __always_inline long -raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) +raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) { #ifdef CONFIG_64BIT - return raw_atomic64_xchg_acquire(v, i); + return raw_atomic64_xchg_acquire(v, new); #else - return raw_atomic_xchg_acquire(v, i); + return raw_atomic_xchg_acquire(v, new); #endif } static __always_inline long -raw_atomic_long_xchg_release(atomic_long_t *v, long i) +raw_atomic_long_xchg_release(atomic_long_t *v, long new) { #ifdef CONFIG_64BIT - return raw_atomic64_xchg_release(v, i); + return raw_atomic64_xchg_release(v, new); #else - return raw_atomic_xchg_release(v, i); + return raw_atomic_xchg_release(v, new); #endif } static __always_inline long -raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) +raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) { #ifdef CONFIG_64BIT - return raw_atomic64_xchg_relaxed(v, i); + return raw_atomic64_xchg_relaxed(v, new); #else - return raw_atomic_xchg_relaxed(v, i); + return raw_atomic_xchg_relaxed(v, new); #endif } @@ -872,4 +872,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v) } #endif /* _LINUX_ATOMIC_LONG_H */ -// ad09f849db0db5b30c82e497eeb9056a394c5f22 +// e785d25cc3f220b7d473d36aac9da85dd7eb13a8 diff --git a/scripts/atomic/atomics.tbl b/scripts/atomic/atomics.tbl index 85ca8d9b5c279..903946cbf1b3e 100644 --- a/scripts/atomic/atomics.tbl +++ b/scripts/atomic/atomics.tbl @@ -27,7 +27,7 @@ and vF i v andnot vF i v or vF i v xor vF i v -xchg I v i +xchg I v i:new cmpxchg I v i:old i:new try_cmpxchg B v p:old i:new sub_and_test b i v diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire index b0f732a5c46ef..4da0cab3604e2 100755 --- a/scripts/atomic/fallbacks/acquire +++ b/scripts/atomic/fallbacks/acquire @@ -1,9 +1,5 @@ cat <counter, old, new); -} EOF diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec index a660ac65994bd..60d286d40300f 100755 --- a/scripts/atomic/fallbacks/dec +++ b/scripts/atomic/fallbacks/dec @@ -1,7 +1,3 @@ cat <counter, i); } else { __atomic_release_fence(); raw_${atomic}_set(v, i); } -} EOF diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test index 8975a496d495c..d1f746fe0ca4d 100755 --- a/scripts/atomic/fallbacks/sub_and_test +++ b/scripts/atomic/fallbacks/sub_and_test @@ -1,7 +1,3 @@ cat <counter, new); -} EOF diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh index 86aca4f9f315a..2b470d31e3539 100755 --- 
a/scripts/atomic/gen-atomic-fallback.sh +++ b/scripts/atomic/gen-atomic-fallback.sh @@ -60,13 +60,23 @@ gen_proto_order_variant() local name="$1"; shift local sfx="$1"; shift local order="$1"; shift - local atomic="$1" + local atomic="$1"; shift + local int="$1"; shift local atomicname="${atomic}_${pfx}${name}${sfx}${order}" local basename="${atomic}_${pfx}${name}${sfx}" local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")" + local ret="$(gen_ret_type "${meta}" "${int}")" + local retstmt="$(gen_ret_stmt "${meta}")" + local params="$(gen_params "${int}" "${atomic}" "$@")" + local args="$(gen_args "$@")" + + printf "static __always_inline ${ret}\n" + printf "raw_${atomicname}(${params})\n" + printf "{\n" + # Where there is no possible fallback, this order variant is mandatory # and must be provided by arch code. Add a comment to the header to # make this obvious. @@ -75,33 +85,35 @@ gen_proto_order_variant() # define this order variant as a C function without a preprocessor # symbol. if [ -z ${template} ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then - printf "#define raw_${atomicname} arch_${atomicname}\n\n" + printf "\t${retstmt}arch_${atomicname}(${args});\n" + printf "}\n\n" return fi printf "#if defined(arch_${atomicname})\n" - printf "#define raw_${atomicname} arch_${atomicname}\n" + printf "\t${retstmt}arch_${atomicname}(${args});\n" # Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then printf "#elif defined(arch_${basename}_relaxed)\n" - gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" + gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@" fi # Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops if [ ! -z "${order}" ]; then printf "#elif defined(arch_${basename})\n" - printf "#define raw_${atomicname} arch_${basename}\n" + printf "\t${retstmt}arch_${basename}(${args});\n" fi printf "#else\n" if [ ! 
-z "${template}" ]; then - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" + gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@" else printf "#error \"Unable to define raw_${atomicname}\"\n" fi - printf "#endif\n\n" + printf "#endif\n" + printf "}\n\n" } From patchwork Mon May 22 12:24:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 97402 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1412895vqo; Mon, 22 May 2023 05:33:39 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4SCvtxOSc3qpRuWu5XnnsT/SmFr4QIpLcqgiFRDB6N5xi+1Ptsvt4D4/DnlIkxVbixSupA X-Received: by 2002:a17:903:1d1:b0:1ad:bccc:af77 with SMTP id e17-20020a17090301d100b001adbcccaf77mr16385777plh.18.1684758818539; Mon, 22 May 2023 05:33:38 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684758818; cv=none; d=google.com; s=arc-20160816; b=an5kcTLdL8oRhh0vEb7bxhZzBejkaXEPbAO8W04SIuxugpD5Z+CfdOaVFq60e8EHGK mnHLGwzrMw58m6NtD5aRslDxZQJNphGkvQrwxUzpbCwijXkod9wgidMEgZGN1xBg4lSK wi/UIjQYExCLQxmspw8l09Zduoi7s3Laj4Jza6D7AzfSRbTXWMTVCgSnLpjbKfuenbvp MdMp8XRocW76xojbKwSf4ccTp3MCZbGxnnI9FfntltwmNaIJy6v9+HZJ9YaMyks2F/yO iirfH+kZlOBLz6rw3uu1fqFVnmv+IvunnEp9wazoFitDJBw85nS4/eXVY+km5C7I5vGT jU7A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=7M/leV3EPZac2wKt/Jrbk0RIpgoDO204KDKT6WkmKvE=; b=a6REUUt82UHe6gf7AHXsvteBShx6Bm1186vD/VeKE1D2/EXLXft90ChmpkH+HEqOZz 02BUxfYwymMiVA1GTrACx+x74cIgSJgKjkrDdO1Pzww+Gt+QAw1IzSVDmuCgGKmCdloC Gm2HAUXCMogj9wmaM3VUCWWgGkDeyiznSYIrcCl2fhU/UVZGH+HBr7ubC0KmFkvvwyOL PEd2AywnMZ55bj0ANqBa16PZECwh5cDdPwwoPo4/8NQpEzOagVb+bJPqb24c+UYqGeYo dxyrBtU1wEkxl+Mlfy+kV1w/2gMiOzbdseioVt+BxmGgThsq1TVwdSViyfna3lVqEP9L 8Ovg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
Date: Mon, 22 May 2023 13:24:27 +0100
Message-Id: <20230522122429.1915021-25-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com>
References: <20230522122429.1915021-1-mark.rutland@arm.com>
MIME-Version: 1.0

Currently the atomics are documented in Documentation/atomic_t.txt, and have no kerneldoc comments. There are a sufficient number of gotchas (e.g. semantics, noinstr-safety) that it would be nice to have comments to call these out, and it would be nice to have kerneldoc comments such that these can be collated.

While it's possible to derive the semantics from the code, this can be painful given the amount of indirection we currently have (e.g. fallback paths), and it's easy to be misled by naming, e.g.

* The unconditional void-returning ops *only* have relaxed variants without a _relaxed suffix, and can easily be mistaken for being fully ordered. It would be nice to give these a _relaxed() suffix, but this would result in significant churn throughout the kernel.
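[Editor's note - not part of the patch: a minimal C sketch of the point above, using the raw_*() forms generated earlier in this series; the function name ordering_example() is hypothetical, and <linux/atomic.h> is assumed to provide the generated ops.]

#include <linux/atomic.h>

static void ordering_example(int i, atomic_t *v)
{
	int old;

	/* void-returning op: relaxed ordering, despite the bare name */
	raw_atomic_or(i, v);

	/* value-returning ops spell their ordering out explicitly */
	old = raw_atomic_fetch_or(i, v);		/* fully ordered */
	old = raw_atomic_fetch_or_relaxed(i, v);	/* relaxed */
	(void)old;
}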
* Our naming of conditional and unconditional+test ops is rather inconsistent, and it can be difficult to derive the name of an operation, or to identify whether an op is conditional or unconditional+test.

  Some ops are clearly conditional:

  - dec_if_positive
  - add_unless
  - dec_unless_positive
  - inc_unless_negative

  Some ops are clearly unconditional+test:

  - sub_and_test
  - dec_and_test
  - inc_and_test

  However, what exactly those ops test is not obvious. A _test_zero suffix might be clearer.

  Others could be read ambiguously:

  - inc_not_zero	// conditional
  - add_negative	// unconditional+test

  It would probably be worth renaming these, e.g. to inc_unless_zero and add_test_negative.

As a step towards making this more consistent and easier to understand, this patch adds kerneldoc comments for all generated *atomic*_*() functions. These are generated from templates, with some common text shared, making it easy to extend these in future if necessary.

I've tried to make these as consistent and clear as possible, and I've deliberately ensured:

* All ops have their ordering explicitly mentioned in the short and long description.

* All test ops have "test" in their short description.

* All ops are described as an expression using their usual C operator. For example:

  andnot: "Atomically updates @v to (@v & ~@i)"
  inc: "Atomically updates @v to (@v + 1)"

  Which may be clearer to non-native English speakers, and allows all the operations to be described in the same style.

* All conditional ops have their condition described as an expression using the usual C operators. For example:

  add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
  cmpxchg: "If (@v == @old), atomically updates @v to @new"

  Which may be clearer to non-native English speakers, and allows all the operations to be described in the same style.

* All bitwise ops (and, andnot, or, xor) explicitly mention that they are bitwise in their short description, so that they are not mistaken for performing their logical equivalents.

* The noinstr safety of each op is explicitly described, with a description of whether or not to use the raw_ form of the op.

There should be no functional change as a result of this patch.

Reported-by: Paul E.
McKenney Signed-off-by: Mark Rutland Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Reviewed-by: Akira Yokosawa --- include/linux/atomic/atomic-arch-fallback.h | 1848 +++++++++++- include/linux/atomic/atomic-instrumented.h | 2771 +++++++++++++++++- include/linux/atomic/atomic-long.h | 925 +++++- scripts/atomic/atomic-tbl.sh | 112 +- scripts/atomic/gen-atomic-fallback.sh | 2 + scripts/atomic/gen-atomic-instrumented.sh | 2 + scripts/atomic/gen-atomic-long.sh | 2 + scripts/atomic/kerneldoc/add | 13 + scripts/atomic/kerneldoc/add_negative | 13 + scripts/atomic/kerneldoc/add_unless | 18 + scripts/atomic/kerneldoc/and | 13 + scripts/atomic/kerneldoc/andnot | 13 + scripts/atomic/kerneldoc/cmpxchg | 14 + scripts/atomic/kerneldoc/dec | 12 + scripts/atomic/kerneldoc/dec_and_test | 12 + scripts/atomic/kerneldoc/dec_if_positive | 12 + scripts/atomic/kerneldoc/dec_unless_positive | 12 + scripts/atomic/kerneldoc/inc | 12 + scripts/atomic/kerneldoc/inc_and_test | 12 + scripts/atomic/kerneldoc/inc_not_zero | 12 + scripts/atomic/kerneldoc/inc_unless_negative | 12 + scripts/atomic/kerneldoc/or | 13 + scripts/atomic/kerneldoc/read | 12 + scripts/atomic/kerneldoc/set | 13 + scripts/atomic/kerneldoc/sub | 13 + scripts/atomic/kerneldoc/sub_and_test | 13 + scripts/atomic/kerneldoc/try_cmpxchg | 15 + scripts/atomic/kerneldoc/xchg | 13 + scripts/atomic/kerneldoc/xor | 13 + 29 files changed, 5940 insertions(+), 7 deletions(-) create mode 100644 scripts/atomic/kerneldoc/add create mode 100644 scripts/atomic/kerneldoc/add_negative create mode 100644 scripts/atomic/kerneldoc/add_unless create mode 100644 scripts/atomic/kerneldoc/and create mode 100644 scripts/atomic/kerneldoc/andnot create mode 100644 scripts/atomic/kerneldoc/cmpxchg create mode 100644 scripts/atomic/kerneldoc/dec create mode 100644 scripts/atomic/kerneldoc/dec_and_test create mode 100644 scripts/atomic/kerneldoc/dec_if_positive create mode 100644 scripts/atomic/kerneldoc/dec_unless_positive create mode 100644 scripts/atomic/kerneldoc/inc create mode 100644 scripts/atomic/kerneldoc/inc_and_test create mode 100644 scripts/atomic/kerneldoc/inc_not_zero create mode 100644 scripts/atomic/kerneldoc/inc_unless_negative create mode 100644 scripts/atomic/kerneldoc/or create mode 100644 scripts/atomic/kerneldoc/read create mode 100644 scripts/atomic/kerneldoc/set create mode 100644 scripts/atomic/kerneldoc/sub create mode 100644 scripts/atomic/kerneldoc/sub_and_test create mode 100644 scripts/atomic/kerneldoc/try_cmpxchg create mode 100644 scripts/atomic/kerneldoc/xchg create mode 100644 scripts/atomic/kerneldoc/xor diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h index 470c2890ab8d6..fa676565453c0 100644 --- a/include/linux/atomic/atomic-arch-fallback.h +++ b/include/linux/atomic/atomic-arch-fallback.h @@ -428,12 +428,32 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void); #define raw_sync_cmpxchg arch_sync_cmpxchg +/** + * raw_atomic_read() - atomic load with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_read() elsewhere. + * + * Return: the value loaded from @v + */ static __always_inline int raw_atomic_read(const atomic_t *v) { return arch_atomic_read(v); } +/** + * raw_atomic_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic_t + * + * Atomically loads the value of @v with acquire ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_read_acquire() elsewhere. + * + * Return: the value loaded from @v + */ static __always_inline int raw_atomic_read_acquire(const atomic_t *v) { @@ -455,12 +475,34 @@ raw_atomic_read_acquire(const atomic_t *v) #endif } +/** + * raw_atomic_set() - atomic set with relaxed ordering + * @v: pointer to atomic_t + * @i: int value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_set() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_set(atomic_t *v, int i) { arch_atomic_set(v, i); } +/** + * raw_atomic_set_release() - atomic set with release ordering + * @v: pointer to atomic_t + * @i: int value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Safe to use in noinstr code; prefer atomic_set_release() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_set_release(atomic_t *v, int i) { @@ -478,12 +520,34 @@ raw_atomic_set_release(atomic_t *v, int i) #endif } +/** + * raw_atomic_add() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_add() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_add(int i, atomic_t *v) { arch_atomic_add(i, v); } +/** + * raw_atomic_add_return() - atomic add with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_add_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_add_return(int i, atomic_t *v) { @@ -500,6 +564,17 @@ raw_atomic_add_return(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_return_acquire() - atomic add with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_add_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_add_return_acquire(int i, atomic_t *v) { @@ -516,6 +591,17 @@ raw_atomic_add_return_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_return_release() - atomic add with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_add_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_add_return_release(int i, atomic_t *v) { @@ -531,6 +617,17 @@ raw_atomic_add_return_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_return_relaxed() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_add_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_add_return_relaxed(int i, atomic_t *v) { @@ -543,6 +640,17 @@ raw_atomic_add_return_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add() - atomic add with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_fetch_add() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_add(int i, atomic_t *v) { @@ -559,6 +667,17 @@ raw_atomic_fetch_add(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add_acquire() - atomic add with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_add_acquire(int i, atomic_t *v) { @@ -575,6 +694,17 @@ raw_atomic_fetch_add_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add_release() - atomic add with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_add_release(int i, atomic_t *v) { @@ -590,6 +720,17 @@ raw_atomic_fetch_add_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_add_relaxed(int i, atomic_t *v) { @@ -602,12 +743,34 @@ raw_atomic_fetch_add_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_sub() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_sub() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_sub(int i, atomic_t *v) { arch_atomic_sub(i, v); } +/** + * raw_atomic_sub_return() - atomic subtract with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_sub_return(int i, atomic_t *v) { @@ -624,6 +787,17 @@ raw_atomic_sub_return(int i, atomic_t *v) #endif } +/** + * raw_atomic_sub_return_acquire() - atomic subtract with acquire ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_sub_return_acquire(int i, atomic_t *v) { @@ -640,6 +814,17 @@ raw_atomic_sub_return_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_sub_return_release() - atomic subtract with release ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_return_release() elsewhere. + * + * Return: the new value of @v. 
+ */ static __always_inline int raw_atomic_sub_return_release(int i, atomic_t *v) { @@ -655,6 +840,17 @@ raw_atomic_sub_return_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_sub_return_relaxed(int i, atomic_t *v) { @@ -667,6 +863,17 @@ raw_atomic_sub_return_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_sub() - atomic subtract with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_sub() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_sub(int i, atomic_t *v) { @@ -683,6 +890,17 @@ raw_atomic_fetch_sub(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_sub_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_sub_acquire(int i, atomic_t *v) { @@ -699,6 +917,17 @@ raw_atomic_fetch_sub_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_sub_release() - atomic subtract with release ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_sub_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_sub_release(int i, atomic_t *v) { @@ -714,6 +943,17 @@ raw_atomic_fetch_sub_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_sub_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) { @@ -726,6 +966,16 @@ raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_inc() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_inc(atomic_t *v) { @@ -736,6 +986,16 @@ raw_atomic_inc(atomic_t *v) #endif } +/** + * raw_atomic_inc_return() - atomic increment with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_inc_return(atomic_t *v) { @@ -752,6 +1012,16 @@ raw_atomic_inc_return(atomic_t *v) #endif } +/** + * raw_atomic_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_inc_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_inc_return_acquire(atomic_t *v) { @@ -768,6 +1038,16 @@ raw_atomic_inc_return_acquire(atomic_t *v) #endif } +/** + * raw_atomic_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_inc_return_release(atomic_t *v) { @@ -783,6 +1063,16 @@ raw_atomic_inc_return_release(atomic_t *v) #endif } +/** + * raw_atomic_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_inc_return_relaxed(atomic_t *v) { @@ -795,6 +1085,16 @@ raw_atomic_inc_return_relaxed(atomic_t *v) #endif } +/** + * raw_atomic_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_inc() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_inc(atomic_t *v) { @@ -811,6 +1111,16 @@ raw_atomic_fetch_inc(atomic_t *v) #endif } +/** + * raw_atomic_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_inc_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_inc_acquire(atomic_t *v) { @@ -827,6 +1137,16 @@ raw_atomic_fetch_inc_acquire(atomic_t *v) #endif } +/** + * raw_atomic_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_inc_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_inc_release(atomic_t *v) { @@ -842,6 +1162,16 @@ raw_atomic_fetch_inc_release(atomic_t *v) #endif } +/** + * raw_atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_inc_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_inc_relaxed(atomic_t *v) { @@ -854,6 +1184,16 @@ raw_atomic_fetch_inc_relaxed(atomic_t *v) #endif } +/** + * raw_atomic_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_dec() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_dec(atomic_t *v) { @@ -864,6 +1204,16 @@ raw_atomic_dec(atomic_t *v) #endif } +/** + * raw_atomic_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_return() elsewhere. + * + * Return: the new value of @v. 
+ */ static __always_inline int raw_atomic_dec_return(atomic_t *v) { @@ -880,6 +1230,16 @@ raw_atomic_dec_return(atomic_t *v) #endif } +/** + * raw_atomic_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_dec_return_acquire(atomic_t *v) { @@ -896,6 +1256,16 @@ raw_atomic_dec_return_acquire(atomic_t *v) #endif } +/** + * raw_atomic_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_dec_return_release(atomic_t *v) { @@ -911,6 +1281,16 @@ raw_atomic_dec_return_release(atomic_t *v) #endif } +/** + * raw_atomic_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline int raw_atomic_dec_return_relaxed(atomic_t *v) { @@ -923,6 +1303,16 @@ raw_atomic_dec_return_relaxed(atomic_t *v) #endif } +/** + * raw_atomic_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_dec() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_dec(atomic_t *v) { @@ -939,6 +1329,16 @@ raw_atomic_fetch_dec(atomic_t *v) #endif } +/** + * raw_atomic_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_dec_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_dec_acquire(atomic_t *v) { @@ -955,6 +1355,16 @@ raw_atomic_fetch_dec_acquire(atomic_t *v) #endif } +/** + * raw_atomic_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_dec_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_dec_release(atomic_t *v) { @@ -970,6 +1380,16 @@ raw_atomic_fetch_dec_release(atomic_t *v) #endif } +/** + * raw_atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_dec_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_dec_relaxed(atomic_t *v) { @@ -982,12 +1402,34 @@ raw_atomic_fetch_dec_relaxed(atomic_t *v) #endif } +/** + * raw_atomic_and() - atomic bitwise AND with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_and() elsewhere. + * + * Return: nothing. 
+ */ static __always_inline void raw_atomic_and(int i, atomic_t *v) { arch_atomic_and(i, v); } +/** + * raw_atomic_fetch_and() - atomic bitwise AND with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_and() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_and(int i, atomic_t *v) { @@ -1004,6 +1446,17 @@ raw_atomic_fetch_and(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_and_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_and_acquire(int i, atomic_t *v) { @@ -1020,6 +1473,17 @@ raw_atomic_fetch_and_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_and_release() - atomic bitwise AND with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_and_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_and_release(int i, atomic_t *v) { @@ -1035,6 +1499,17 @@ raw_atomic_fetch_and_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_and_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_and_relaxed(int i, atomic_t *v) { @@ -1047,6 +1522,17 @@ raw_atomic_fetch_and_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_andnot() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_andnot(int i, atomic_t *v) { @@ -1057,6 +1543,17 @@ raw_atomic_andnot(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_andnot() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_andnot(int i, atomic_t *v) { @@ -1073,6 +1570,17 @@ raw_atomic_fetch_andnot(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_andnot_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_andnot_acquire(int i, atomic_t *v) { @@ -1089,6 +1597,17 @@ raw_atomic_fetch_andnot_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_fetch_andnot_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_andnot_release(int i, atomic_t *v) { @@ -1104,6 +1623,17 @@ raw_atomic_fetch_andnot_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_andnot_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v) { @@ -1116,12 +1646,34 @@ raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_or() - atomic bitwise OR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_or() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_or(int i, atomic_t *v) { arch_atomic_or(i, v); } +/** + * raw_atomic_fetch_or() - atomic bitwise OR with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_or() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_or(int i, atomic_t *v) { @@ -1138,6 +1690,17 @@ raw_atomic_fetch_or(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_or_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_or_acquire(int i, atomic_t *v) { @@ -1154,6 +1717,17 @@ raw_atomic_fetch_or_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_or_release() - atomic bitwise OR with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_or_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_or_release(int i, atomic_t *v) { @@ -1169,6 +1743,17 @@ raw_atomic_fetch_or_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_or_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_or_relaxed(int i, atomic_t *v) { @@ -1181,12 +1766,34 @@ raw_atomic_fetch_or_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_xor() - atomic bitwise XOR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_xor() elsewhere. + * + * Return: nothing. 
+ */ static __always_inline void raw_atomic_xor(int i, atomic_t *v) { arch_atomic_xor(i, v); } +/** + * raw_atomic_fetch_xor() - atomic bitwise XOR with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_xor() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_xor(int i, atomic_t *v) { @@ -1203,6 +1810,17 @@ raw_atomic_fetch_xor(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_xor_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_xor_acquire(int i, atomic_t *v) { @@ -1219,6 +1837,17 @@ raw_atomic_fetch_xor_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_xor_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_xor_release(int i, atomic_t *v) { @@ -1234,6 +1863,17 @@ raw_atomic_fetch_xor_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_xor_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_xor_relaxed(int i, atomic_t *v) { @@ -1246,6 +1886,17 @@ raw_atomic_fetch_xor_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_xchg() - atomic exchange with full ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic_xchg() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline int raw_atomic_xchg(atomic_t *v, int new) { @@ -1262,6 +1913,17 @@ raw_atomic_xchg(atomic_t *v, int new) #endif } +/** + * raw_atomic_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_xchg_acquire() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline int raw_atomic_xchg_acquire(atomic_t *v, int new) { @@ -1278,6 +1940,17 @@ raw_atomic_xchg_acquire(atomic_t *v, int new) #endif } +/** + * raw_atomic_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic_xchg_release() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline int raw_atomic_xchg_release(atomic_t *v, int new) { @@ -1293,6 +1966,17 @@ raw_atomic_xchg_release(atomic_t *v, int new) #endif } +/** + * raw_atomic_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with relaxed ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_xchg_relaxed() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline int raw_atomic_xchg_relaxed(atomic_t *v, int new) { @@ -1305,6 +1989,18 @@ raw_atomic_xchg_relaxed(atomic_t *v, int new) #endif } +/** + * raw_atomic_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic_cmpxchg() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline int raw_atomic_cmpxchg(atomic_t *v, int old, int new) { @@ -1321,6 +2017,18 @@ raw_atomic_cmpxchg(atomic_t *v, int old, int new) #endif } +/** + * raw_atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_cmpxchg_acquire() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline int raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) { @@ -1337,6 +2045,18 @@ raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) #endif } +/** + * raw_atomic_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic_cmpxchg_release() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline int raw_atomic_cmpxchg_release(atomic_t *v, int old, int new) { @@ -1352,6 +2072,18 @@ raw_atomic_cmpxchg_release(atomic_t *v, int old, int new) #endif } +/** + * raw_atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_cmpxchg_relaxed() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline int raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) { @@ -1364,6 +2096,19 @@ raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) #endif } +/** + * raw_atomic_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_try_cmpxchg() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) { @@ -1384,6 +2129,19 @@ raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) #endif } +/** + * raw_atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_acquire() elsewhere.
+ * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) { @@ -1404,6 +2162,19 @@ raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) #endif } +/** + * raw_atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_release() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { @@ -1423,6 +2194,19 @@ raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) #endif } +/** + * raw_atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_relaxed() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { @@ -1439,6 +2223,17 @@ raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) #endif } +/** + * raw_atomic_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_sub_and_test(int i, atomic_t *v) { @@ -1449,6 +2244,16 @@ raw_atomic_sub_and_test(int i, atomic_t *v) #endif } +/** + * raw_atomic_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_dec_and_test(atomic_t *v) { @@ -1459,6 +2264,16 @@ raw_atomic_dec_and_test(atomic_t *v) #endif } +/** + * raw_atomic_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_inc_and_test(atomic_t *v) { @@ -1469,6 +2284,17 @@ raw_atomic_inc_and_test(atomic_t *v) #endif } +/** + * raw_atomic_add_negative() - atomic add and test if negative with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_add_negative() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise.
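For context, the try_cmpxchg() family above is intended to replace open-coded cmpxchg() retry loops: on failure it hands the current value back through @old, so the caller does not need a separate re-read. A minimal sketch of such a loop, built only on the ops documented above and assuming <linux/atomic.h>; inc_bounded() is a hypothetical helper, not part of this patch:

static inline bool inc_bounded(atomic_t *v, int max)
{
	int old = raw_atomic_read(v);

	do {
		if (old >= max)
			return false;
		/* on failure, @old is refreshed from @v and we retry */
	} while (!raw_atomic_try_cmpxchg(v, &old, old + 1));

	return true;
}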
+ */ static __always_inline bool raw_atomic_add_negative(int i, atomic_t *v) { @@ -1485,6 +2311,17 @@ raw_atomic_add_negative(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_add_negative_acquire() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_add_negative_acquire(int i, atomic_t *v) { @@ -1501,6 +2338,17 @@ raw_atomic_add_negative_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_negative_release() - atomic add and test if negative with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_add_negative_release() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_add_negative_release(int i, atomic_t *v) { @@ -1516,6 +2364,17 @@ raw_atomic_add_negative_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_add_negative_relaxed() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_add_negative_relaxed(int i, atomic_t *v) { @@ -1528,6 +2387,18 @@ raw_atomic_add_negative_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_t + * @a: int value to add + * @u: int value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add_unless() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline int raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) { @@ -1545,6 +2416,18 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) #endif } +/** + * raw_atomic_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_t + * @a: int value to add + * @u: int value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_add_unless() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_add_unless(atomic_t *v, int a, int u) { @@ -1555,6 +2438,16 @@ raw_atomic_add_unless(atomic_t *v, int a, int u) #endif } +/** + * raw_atomic_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_not_zero() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. 
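The conditional ops above (fetch_add_unless(), add_unless(), inc_not_zero()) are the building blocks for "take a reference only if the object is still live" patterns. A sketch, where struct obj and obj_try_get() are hypothetical names for illustration:

struct obj {
	atomic_t refcount;
};

static inline bool obj_try_get(struct obj *o)
{
	/* fails iff the refcount has already dropped to zero */
	return raw_atomic_inc_not_zero(&o->refcount);
}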
+ */ static __always_inline bool raw_atomic_inc_not_zero(atomic_t *v) { @@ -1565,6 +2458,16 @@ raw_atomic_inc_not_zero(atomic_t *v) #endif } +/** + * raw_atomic_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_unless_negative() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_inc_unless_negative(atomic_t *v) { @@ -1582,6 +2485,16 @@ raw_atomic_inc_unless_negative(atomic_t *v) #endif } +/** + * raw_atomic_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_unless_positive() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_dec_unless_positive(atomic_t *v) { @@ -1599,6 +2512,16 @@ raw_atomic_dec_unless_positive(atomic_t *v) #endif } +/** + * raw_atomic_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_if_positive() elsewhere. + * + * Return: the old value of @v minus one, regardless of whether @v was updated. + */ static __always_inline int raw_atomic_dec_if_positive(atomic_t *v) { @@ -1621,12 +2544,32 @@ raw_atomic_dec_if_positive(atomic_t *v) #include <asm-generic/atomic64.h> #endif +/** + * raw_atomic64_read() - atomic load with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_read() elsewhere. + * + * Return: the value loaded from @v. + */ static __always_inline s64 raw_atomic64_read(const atomic64_t *v) { return arch_atomic64_read(v); } +/** + * raw_atomic64_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_read_acquire() elsewhere. + * + * Return: the value loaded from @v. + */ static __always_inline s64 raw_atomic64_read_acquire(const atomic64_t *v) { @@ -1648,12 +2591,34 @@ raw_atomic64_read_acquire(const atomic64_t *v) #endif } +/** + * raw_atomic64_set() - atomic set with relaxed ordering + * @v: pointer to atomic64_t + * @i: s64 value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_set() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_set(atomic64_t *v, s64 i) { arch_atomic64_set(v, i); } +/** + * raw_atomic64_set_release() - atomic set with release ordering + * @v: pointer to atomic64_t + * @i: s64 value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_set_release() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_set_release(atomic64_t *v, s64 i) { @@ -1671,12 +2636,34 @@ raw_atomic64_set_release(atomic64_t *v, s64 i) #endif } +/** + * raw_atomic64_add() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_add() elsewhere. + * + * Return: nothing.
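Note that dec_if_positive() returns a value rather than a success flag, which callers can use directly. A sketch assuming a hypothetical take_token() helper:

static inline bool take_token(atomic_t *tokens)
{
	/*
	 * raw_atomic_dec_if_positive() returns the old value minus one,
	 * so a non-negative result means a token was consumed.
	 */
	return raw_atomic_dec_if_positive(tokens) >= 0;
}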
+ */ static __always_inline void raw_atomic64_add(s64 i, atomic64_t *v) { arch_atomic64_add(i, v); } +/** + * raw_atomic64_add_return() - atomic add with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_add_return(s64 i, atomic64_t *v) { @@ -1693,6 +2680,17 @@ raw_atomic64_add_return(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_return_acquire() - atomic add with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_add_return_acquire(s64 i, atomic64_t *v) { @@ -1709,6 +2707,17 @@ raw_atomic64_add_return_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_return_release() - atomic add with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_add_return_release(s64 i, atomic64_t *v) { @@ -1724,6 +2733,17 @@ raw_atomic64_add_return_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_return_relaxed() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v) { @@ -1736,6 +2756,17 @@ raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add() - atomic add with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_add(s64 i, atomic64_t *v) { @@ -1752,6 +2783,17 @@ raw_atomic64_fetch_add(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add_acquire() - atomic add with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { @@ -1768,6 +2810,17 @@ raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add_release() - atomic add with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add_release() elsewhere. + * + * Return: The old value of @v. 
+ */ static __always_inline s64 raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) { @@ -1783,6 +2836,17 @@ raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) { @@ -1795,12 +2859,34 @@ raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_sub() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_sub(s64 i, atomic64_t *v) { arch_atomic64_sub(i, v); } +/** + * raw_atomic64_sub_return() - atomic subtract with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_sub_return(s64 i, atomic64_t *v) { @@ -1817,6 +2903,17 @@ raw_atomic64_sub_return(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_sub_return_acquire() - atomic subtract with acquire ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) { @@ -1833,6 +2930,17 @@ raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_sub_return_release() - atomic subtract with release ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_sub_return_release(s64 i, atomic64_t *v) { @@ -1848,6 +2956,17 @@ raw_atomic64_sub_return_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) { @@ -1860,6 +2979,17 @@ raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_sub() - atomic subtract with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_sub() elsewhere. + * + * Return: The old value of @v. 
+ */ static __always_inline s64 raw_atomic64_fetch_sub(s64 i, atomic64_t *v) { @@ -1876,6 +3006,17 @@ raw_atomic64_fetch_sub(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_sub_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { @@ -1892,6 +3033,17 @@ raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_sub_release() - atomic subtract with release ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_sub_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) { @@ -1907,6 +3059,17 @@ raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_sub_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) { @@ -1919,6 +3082,16 @@ raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_inc(atomic64_t *v) { @@ -1929,6 +3102,16 @@ raw_atomic64_inc(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_return() - atomic increment with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_inc_return(atomic64_t *v) { @@ -1945,6 +3128,16 @@ raw_atomic64_inc_return(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_inc_return_acquire(atomic64_t *v) { @@ -1961,6 +3154,16 @@ raw_atomic64_inc_return_acquire(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_return_release() elsewhere. + * + * Return: the new value of @v. 
+ */ static __always_inline s64 raw_atomic64_inc_return_release(atomic64_t *v) { @@ -1976,6 +3179,16 @@ raw_atomic64_inc_return_release(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_inc_return_relaxed(atomic64_t *v) { @@ -1988,6 +3201,16 @@ raw_atomic64_inc_return_relaxed(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_inc() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_inc(atomic64_t *v) { @@ -2004,6 +3227,16 @@ raw_atomic64_fetch_inc(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_inc_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_inc_acquire(atomic64_t *v) { @@ -2020,6 +3253,16 @@ raw_atomic64_fetch_inc_acquire(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_inc_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_inc_release(atomic64_t *v) { @@ -2035,6 +3278,16 @@ raw_atomic64_fetch_inc_release(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_inc_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_inc_relaxed(atomic64_t *v) { @@ -2047,6 +3300,16 @@ raw_atomic64_fetch_inc_relaxed(atomic64_t *v) #endif } +/** + * raw_atomic64_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_dec(atomic64_t *v) { @@ -2057,6 +3320,16 @@ raw_atomic64_dec(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_dec_return(atomic64_t *v) { @@ -2073,6 +3346,16 @@ raw_atomic64_dec_return(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_return_acquire() elsewhere. + * + * Return: the new value of @v. 
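For plain statistics, the relaxed-by-default inc()/dec() ops above are all that is needed; per the kernel-doc, ordinary (non-noinstr) code should use the instrumented atomic64_*() forms. A sketch with hypothetical names:

static atomic64_t nr_events = ATOMIC64_INIT(0);

static inline void count_event(void)
{
	atomic64_inc(&nr_events);	/* relaxed; no ordering implied */
}

static inline s64 snapshot_events(void)
{
	return atomic64_read(&nr_events);
}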
+ */ static __always_inline s64 raw_atomic64_dec_return_acquire(atomic64_t *v) { @@ -2089,6 +3372,16 @@ raw_atomic64_dec_return_acquire(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_dec_return_release(atomic64_t *v) { @@ -2104,6 +3397,16 @@ raw_atomic64_dec_return_release(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline s64 raw_atomic64_dec_return_relaxed(atomic64_t *v) { @@ -2116,6 +3419,16 @@ raw_atomic64_dec_return_relaxed(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_dec() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_dec(atomic64_t *v) { @@ -2132,6 +3445,16 @@ raw_atomic64_fetch_dec(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_dec_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_dec_acquire(atomic64_t *v) { @@ -2148,6 +3471,16 @@ raw_atomic64_fetch_dec_acquire(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_dec_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_dec_release(atomic64_t *v) { @@ -2163,6 +3496,16 @@ raw_atomic64_fetch_dec_release(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_dec_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_dec_relaxed(atomic64_t *v) { @@ -2175,12 +3518,34 @@ raw_atomic64_fetch_dec_relaxed(atomic64_t *v) #endif } +/** + * raw_atomic64_and() - atomic bitwise AND with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_and() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_and(s64 i, atomic64_t *v) { arch_atomic64_and(i, v); } +/** + * raw_atomic64_fetch_and() - atomic bitwise AND with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_and() elsewhere. + * + * Return: The old value of @v. 
+ */ static __always_inline s64 raw_atomic64_fetch_and(s64 i, atomic64_t *v) { @@ -2197,6 +3562,17 @@ raw_atomic64_fetch_and(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_and_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { @@ -2213,6 +3589,17 @@ raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_and_release() - atomic bitwise AND with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_and_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) { @@ -2228,6 +3615,17 @@ raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_and_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) { @@ -2240,6 +3638,17 @@ raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_andnot() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_andnot(s64 i, atomic64_t *v) { @@ -2250,6 +3659,17 @@ raw_atomic64_andnot(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_andnot() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) { @@ -2266,6 +3686,17 @@ raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { @@ -2282,6 +3713,17 @@ raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_release() elsewhere. + * + * Return: The old value of @v. 
+ */ static __always_inline s64 raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { @@ -2297,6 +3739,17 @@ raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { @@ -2309,12 +3762,34 @@ raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_or() - atomic bitwise OR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_or() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_or(s64 i, atomic64_t *v) { arch_atomic64_or(i, v); } +/** + * raw_atomic64_fetch_or() - atomic bitwise OR with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_or() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_or(s64 i, atomic64_t *v) { @@ -2331,6 +3806,17 @@ raw_atomic64_fetch_or(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_or_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { @@ -2347,6 +3833,17 @@ raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_or_release() - atomic bitwise OR with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_or_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) { @@ -2362,6 +3859,17 @@ raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_or_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) { @@ -2374,12 +3882,34 @@ raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_xor() - atomic bitwise XOR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_xor() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic64_xor(s64 i, atomic64_t *v) { arch_atomic64_xor(i, v); } +/** + * raw_atomic64_fetch_xor() - atomic bitwise XOR with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. 
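The fetch_or()/fetch_andnot() pairs above make test-and-set and test-and-clear idioms on flag words straightforward. A sketch assuming <linux/bits.h>; OBJ_PENDING and both helpers are hypothetical:

#define OBJ_PENDING	BIT_ULL(0)

/* mark work pending; returns true if it was already pending */
static inline bool obj_mark_pending(atomic64_t *state)
{
	return raw_atomic64_fetch_or(OBJ_PENDING, state) & OBJ_PENDING;
}

/* clear the pending bit; returns true if it was set */
static inline bool obj_clear_pending(atomic64_t *state)
{
	return raw_atomic64_fetch_andnot(OBJ_PENDING, state) & OBJ_PENDING;
}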
+ * + * Safe to use in noinstr code; prefer atomic64_fetch_xor() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_xor(s64 i, atomic64_t *v) { @@ -2396,6 +3926,17 @@ raw_atomic64_fetch_xor(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_xor_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { @@ -2412,6 +3953,17 @@ raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_xor_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) { @@ -2427,6 +3979,17 @@ raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_xor_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) { @@ -2439,6 +4002,17 @@ raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_xchg() - atomic exchange with full ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_xchg() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline s64 raw_atomic64_xchg(atomic64_t *v, s64 new) { @@ -2455,6 +4029,17 @@ raw_atomic64_xchg(atomic64_t *v, s64 new) #endif } +/** + * raw_atomic64_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_xchg_acquire() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline s64 raw_atomic64_xchg_acquire(atomic64_t *v, s64 new) { @@ -2471,6 +4056,17 @@ raw_atomic64_xchg_acquire(atomic64_t *v, s64 new) #endif } +/** + * raw_atomic64_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_xchg_release() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline s64 raw_atomic64_xchg_release(atomic64_t *v, s64 new) { @@ -2486,6 +4082,17 @@ raw_atomic64_xchg_release(atomic64_t *v, s64 new) #endif } +/** + * raw_atomic64_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_xchg_relaxed() elsewhere. + * + * Return: the old value of @v. 
+ */ static __always_inline s64 raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new) { @@ -2498,6 +4105,18 @@ raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new) #endif } +/** + * raw_atomic64_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_cmpxchg() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline s64 raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { @@ -2514,6 +4133,18 @@ raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) #endif } +/** + * raw_atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_cmpxchg_acquire() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline s64 raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { @@ -2530,6 +4161,18 @@ raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) #endif } +/** + * raw_atomic64_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_cmpxchg_release() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline s64 raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { @@ -2545,6 +4188,18 @@ raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) #endif } +/** + * raw_atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_cmpxchg_relaxed() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline s64 raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { @@ -2557,6 +4212,19 @@ raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) #endif } +/** + * raw_atomic64_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { @@ -2577,6 +4245,19 @@ raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) #endif } +/** + * raw_atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_acquire() elsewhere.
+ * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { @@ -2597,6 +4278,19 @@ raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) #endif } +/** + * raw_atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_release() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { @@ -2616,6 +4310,19 @@ raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) #endif } +/** + * raw_atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_relaxed() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { @@ -2632,6 +4339,17 @@ raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) #endif } +/** + * raw_atomic64_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic64_sub_and_test(s64 i, atomic64_t *v) { @@ -2642,6 +4360,16 @@ raw_atomic64_sub_and_test(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic64_dec_and_test(atomic64_t *v) { @@ -2652,6 +4380,16 @@ raw_atomic64_dec_and_test(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic64_inc_and_test(atomic64_t *v) { @@ -2662,6 +4400,17 @@ raw_atomic64_inc_and_test(atomic64_t *v) #endif } +/** + * raw_atomic64_add_negative() - atomic add and test if negative with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_negative() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise.
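A common use of the 64-bit try_cmpxchg() loop documented above is maintaining a high-water mark; a sketch, where update_max() is a hypothetical helper:

static inline void update_max(atomic64_t *max, s64 new)
{
	s64 old = raw_atomic64_read(max);

	do {
		if (new <= old)
			return;
		/* on failure, @old is refreshed from @max and we retry */
	} while (!raw_atomic64_try_cmpxchg(max, &old, new));
}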
+ */ static __always_inline bool raw_atomic64_add_negative(s64 i, atomic64_t *v) { @@ -2678,6 +4427,17 @@ raw_atomic64_add_negative(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_negative_acquire() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) { @@ -2694,6 +4454,17 @@ raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_negative_release() - atomic add and test if negative with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_negative_release() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic64_add_negative_release(s64 i, atomic64_t *v) { @@ -2709,6 +4480,17 @@ raw_atomic64_add_negative_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_negative_relaxed() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { @@ -2721,6 +4503,18 @@ raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic64_t + * @a: s64 value to add + * @u: s64 value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add_unless() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline s64 raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -2738,6 +4532,18 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) #endif } +/** + * raw_atomic64_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic64_t + * @a: s64 value to add + * @u: s64 value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_unless() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -2748,6 +4554,16 @@ raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) #endif } +/** + * raw_atomic64_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic64_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_not_zero() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. 
+ */ static __always_inline bool raw_atomic64_inc_not_zero(atomic64_t *v) { @@ -2758,6 +4574,16 @@ raw_atomic64_inc_not_zero(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic64_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_unless_negative() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic64_inc_unless_negative(atomic64_t *v) { @@ -2775,6 +4601,16 @@ raw_atomic64_inc_unless_negative(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic64_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_unless_positive() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic64_dec_unless_positive(atomic64_t *v) { @@ -2792,6 +4628,16 @@ raw_atomic64_dec_unless_positive(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic64_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_if_positive() elsewhere. + * + * Return: the old value of @v minus one, regardless of whether @v was updated. + */ static __always_inline s64 raw_atomic64_dec_if_positive(atomic64_t *v) { @@ -2811,4 +4657,4 @@ raw_atomic64_dec_if_positive(atomic64_t *v) } #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// 205e090382132f1fc85e48b46e722865f9c81309 +// 05af058ad6cbb042b0729969eb13ac6586f0fda7 diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h index 5491c89dc03a0..248fa54f56265 100644 --- a/include/linux/atomic/atomic-instrumented.h +++ b/include/linux/atomic/atomic-instrumented.h @@ -16,6 +16,16 @@ #include #include +/** + * atomic_read() - atomic load with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_read() there. + * + * Return: the value loaded from @v. + */ static __always_inline int atomic_read(const atomic_t *v) { @@ -23,6 +33,16 @@ atomic_read(const atomic_t *v) return raw_atomic_read(v); } +/** + * atomic_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_read_acquire() there. + * + * Return: the value loaded from @v. + */ static __always_inline int atomic_read_acquire(const atomic_t *v) { @@ -30,6 +50,17 @@ atomic_read_acquire(const atomic_t *v) return raw_atomic_read_acquire(v); } +/** + * atomic_set() - atomic set with relaxed ordering + * @v: pointer to atomic_t + * @i: int value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_set() there. + * + * Return: nothing. + */ static __always_inline void atomic_set(atomic_t *v, int i) { @@ -37,6 +68,17 @@ atomic_set(atomic_t *v, int i) raw_atomic_set(v, i); } +/** + * atomic_set_release() - atomic set with release ordering + * @v: pointer to atomic_t + * @i: int value to assign + * + * Atomically sets @v to @i with release ordering.
+ * + * Unsafe to use in noinstr code; use raw_atomic_set_release() there. + * + * Return: nothing. + */ static __always_inline void atomic_set_release(atomic_t *v, int i) { @@ -45,6 +87,17 @@ atomic_set_release(atomic_t *v, int i) raw_atomic_set_release(v, i); } +/** + * atomic_add() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add() there. + * + * Return: nothing. + */ static __always_inline void atomic_add(int i, atomic_t *v) { @@ -52,6 +105,17 @@ atomic_add(int i, atomic_t *v) raw_atomic_add(i, v); } +/** + * atomic_add_return() - atomic add with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_return() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_add_return(int i, atomic_t *v) { @@ -60,6 +124,17 @@ atomic_add_return(int i, atomic_t *v) return raw_atomic_add_return(i, v); } +/** + * atomic_add_return_acquire() - atomic add with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_add_return_acquire(int i, atomic_t *v) { @@ -67,6 +142,17 @@ atomic_add_return_acquire(int i, atomic_t *v) return raw_atomic_add_return_acquire(i, v); } +/** + * atomic_add_return_release() - atomic add with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_add_return_release(int i, atomic_t *v) { @@ -75,6 +161,17 @@ atomic_add_return_release(int i, atomic_t *v) return raw_atomic_add_return_release(i, v); } +/** + * atomic_add_return_relaxed() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_add_return_relaxed(int i, atomic_t *v) { @@ -82,6 +179,17 @@ atomic_add_return_relaxed(int i, atomic_t *v) return raw_atomic_add_return_relaxed(i, v); } +/** + * atomic_fetch_add() - atomic add with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_add(int i, atomic_t *v) { @@ -90,6 +198,17 @@ atomic_fetch_add(int i, atomic_t *v) return raw_atomic_fetch_add(i, v); } +/** + * atomic_fetch_add_acquire() - atomic add with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_acquire() there. + * + * Return: The old value of @v. 
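+ *
+ * Unlike atomic_add_return_acquire(), this returns the value @v held
+ * before the addition, e.g. to claim a slot index (a sketch; @next_slot
+ * is hypothetical):
+ *
+ *	slot = atomic_fetch_add_acquire(1, &next_slot);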
+ */ static __always_inline int atomic_fetch_add_acquire(int i, atomic_t *v) { @@ -97,6 +216,17 @@ atomic_fetch_add_acquire(int i, atomic_t *v) return raw_atomic_fetch_add_acquire(i, v); } +/** + * atomic_fetch_add_release() - atomic add with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_release() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_add_release(int i, atomic_t *v) { @@ -105,6 +235,17 @@ atomic_fetch_add_release(int i, atomic_t *v) return raw_atomic_fetch_add_release(i, v); } +/** + * atomic_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_add_relaxed(int i, atomic_t *v) { @@ -112,6 +253,17 @@ atomic_fetch_add_relaxed(int i, atomic_t *v) return raw_atomic_fetch_add_relaxed(i, v); } +/** + * atomic_sub() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub() there. + * + * Return: nothing. + */ static __always_inline void atomic_sub(int i, atomic_t *v) { @@ -119,6 +271,17 @@ atomic_sub(int i, atomic_t *v) raw_atomic_sub(i, v); } +/** + * atomic_sub_return() - atomic subtract with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_return() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_sub_return(int i, atomic_t *v) { @@ -127,6 +290,17 @@ atomic_sub_return(int i, atomic_t *v) return raw_atomic_sub_return(i, v); } +/** + * atomic_sub_return_acquire() - atomic subtract with acquire ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_sub_return_acquire(int i, atomic_t *v) { @@ -134,6 +308,17 @@ atomic_sub_return_acquire(int i, atomic_t *v) return raw_atomic_sub_return_acquire(i, v); } +/** + * atomic_sub_return_release() - atomic subtract with release ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_sub_return_release(int i, atomic_t *v) { @@ -142,6 +327,17 @@ atomic_sub_return_release(int i, atomic_t *v) return raw_atomic_sub_return_release(i, v); } +/** + * atomic_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_return_relaxed() there. + * + * Return: the new value of @v. 
+ */ static __always_inline int atomic_sub_return_relaxed(int i, atomic_t *v) { @@ -149,6 +345,17 @@ atomic_sub_return_relaxed(int i, atomic_t *v) return raw_atomic_sub_return_relaxed(i, v); } +/** + * atomic_fetch_sub() - atomic subtract with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_sub(int i, atomic_t *v) { @@ -157,6 +364,17 @@ atomic_fetch_sub(int i, atomic_t *v) return raw_atomic_fetch_sub(i, v); } +/** + * atomic_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_sub_acquire(int i, atomic_t *v) { @@ -164,6 +382,17 @@ atomic_fetch_sub_acquire(int i, atomic_t *v) return raw_atomic_fetch_sub_acquire(i, v); } +/** + * atomic_fetch_sub_release() - atomic subtract with release ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_release() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_sub_release(int i, atomic_t *v) { @@ -172,6 +401,17 @@ atomic_fetch_sub_release(int i, atomic_t *v) return raw_atomic_fetch_sub_release(i, v); } +/** + * atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_sub_relaxed(int i, atomic_t *v) { @@ -179,6 +419,16 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v) return raw_atomic_fetch_sub_relaxed(i, v); } +/** + * atomic_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc() there. + * + * Return: nothing. + */ static __always_inline void atomic_inc(atomic_t *v) { @@ -186,6 +436,16 @@ atomic_inc(atomic_t *v) raw_atomic_inc(v); } +/** + * atomic_inc_return() - atomic increment with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_return() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_inc_return(atomic_t *v) { @@ -194,6 +454,16 @@ atomic_inc_return(atomic_t *v) return raw_atomic_inc_return(v); } +/** + * atomic_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_return_acquire() there. + * + * Return: the new value of @v. 
+ */ static __always_inline int atomic_inc_return_acquire(atomic_t *v) { @@ -201,6 +471,16 @@ atomic_inc_return_acquire(atomic_t *v) return raw_atomic_inc_return_acquire(v); } +/** + * atomic_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_inc_return_release(atomic_t *v) { @@ -209,6 +489,16 @@ atomic_inc_return_release(atomic_t *v) return raw_atomic_inc_return_release(v); } +/** + * atomic_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_inc_return_relaxed(atomic_t *v) { @@ -216,6 +506,16 @@ atomic_inc_return_relaxed(atomic_t *v) return raw_atomic_inc_return_relaxed(v); } +/** + * atomic_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_inc(atomic_t *v) { @@ -224,6 +524,16 @@ atomic_fetch_inc(atomic_t *v) return raw_atomic_fetch_inc(v); } +/** + * atomic_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_inc_acquire(atomic_t *v) { @@ -231,6 +541,16 @@ atomic_fetch_inc_acquire(atomic_t *v) return raw_atomic_fetch_inc_acquire(v); } +/** + * atomic_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_release() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_inc_release(atomic_t *v) { @@ -239,6 +559,16 @@ atomic_fetch_inc_release(atomic_t *v) return raw_atomic_fetch_inc_release(v); } +/** + * atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_inc_relaxed(atomic_t *v) { @@ -246,6 +576,16 @@ atomic_fetch_inc_relaxed(atomic_t *v) return raw_atomic_fetch_inc_relaxed(v); } +/** + * atomic_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec() there. + * + * Return: nothing. + */ static __always_inline void atomic_dec(atomic_t *v) { @@ -253,6 +593,16 @@ atomic_dec(atomic_t *v) raw_atomic_dec(v); } +/** + * atomic_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_return() there. + * + * Return: the new value of @v. 
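+ *
+ * The decremented value is returned, so a caller can act on the final
+ * drop (a sketch; @obj and free_obj() are hypothetical, and a dedicated
+ * refcount_t should be preferred for reference counts):
+ *
+ *	if (atomic_dec_return(&obj->users) == 0)
+ *		free_obj(obj);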
+ */ static __always_inline int atomic_dec_return(atomic_t *v) { @@ -261,6 +611,16 @@ atomic_dec_return(atomic_t *v) return raw_atomic_dec_return(v); } +/** + * atomic_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_dec_return_acquire(atomic_t *v) { @@ -268,6 +628,16 @@ atomic_dec_return_acquire(atomic_t *v) return raw_atomic_dec_return_acquire(v); } +/** + * atomic_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_dec_return_release(atomic_t *v) { @@ -276,6 +646,16 @@ atomic_dec_return_release(atomic_t *v) return raw_atomic_dec_return_release(v); } +/** + * atomic_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline int atomic_dec_return_relaxed(atomic_t *v) { @@ -283,6 +663,16 @@ atomic_dec_return_relaxed(atomic_t *v) return raw_atomic_dec_return_relaxed(v); } +/** + * atomic_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_dec(atomic_t *v) { @@ -291,6 +681,16 @@ atomic_fetch_dec(atomic_t *v) return raw_atomic_fetch_dec(v); } +/** + * atomic_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_dec_acquire(atomic_t *v) { @@ -298,6 +698,16 @@ atomic_fetch_dec_acquire(atomic_t *v) return raw_atomic_fetch_dec_acquire(v); } +/** + * atomic_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_release() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_dec_release(atomic_t *v) { @@ -306,6 +716,16 @@ atomic_fetch_dec_release(atomic_t *v) return raw_atomic_fetch_dec_release(v); } +/** + * atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_dec_relaxed(atomic_t *v) { @@ -313,6 +733,17 @@ atomic_fetch_dec_relaxed(atomic_t *v) return raw_atomic_fetch_dec_relaxed(v); } +/** + * atomic_and() - atomic bitwise AND with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. 
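+ *
+ * For example, clearing a mask of flag bits (MY_MASK and @flags are
+ * hypothetical):
+ *
+ *	atomic_and(~MY_MASK, &flags);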
+ * + * Unsafe to use in noinstr code; use raw_atomic_and() there. + * + * Return: nothing. + */ static __always_inline void atomic_and(int i, atomic_t *v) { @@ -320,6 +751,17 @@ atomic_and(int i, atomic_t *v) raw_atomic_and(i, v); } +/** + * atomic_fetch_and() - atomic bitwise AND with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_and() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_and(int i, atomic_t *v) { @@ -328,6 +770,17 @@ atomic_fetch_and(int i, atomic_t *v) return raw_atomic_fetch_and(i, v); } +/** + * atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_and_acquire(int i, atomic_t *v) { @@ -335,6 +788,17 @@ atomic_fetch_and_acquire(int i, atomic_t *v) return raw_atomic_fetch_and_acquire(i, v); } +/** + * atomic_fetch_and_release() - atomic bitwise AND with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_release() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_and_release(int i, atomic_t *v) { @@ -343,6 +807,17 @@ atomic_fetch_and_release(int i, atomic_t *v) return raw_atomic_fetch_and_release(i, v); } +/** + * atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_and_relaxed(int i, atomic_t *v) { @@ -350,6 +825,17 @@ atomic_fetch_and_relaxed(int i, atomic_t *v) return raw_atomic_fetch_and_relaxed(i, v); } +/** + * atomic_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_andnot() there. + * + * Return: nothing. + */ static __always_inline void atomic_andnot(int i, atomic_t *v) { @@ -357,6 +843,17 @@ atomic_andnot(int i, atomic_t *v) raw_atomic_andnot(i, v); } +/** + * atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_andnot(int i, atomic_t *v) { @@ -365,6 +862,17 @@ atomic_fetch_andnot(int i, atomic_t *v) return raw_atomic_fetch_andnot(i, v); } +/** + * atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_acquire() there. + * + * Return: The old value of @v. 
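+ *
+ * The old value lets callers test and clear bits in a single atomic
+ * step (a sketch; PENDING, @flags and process_pending() are
+ * hypothetical):
+ *
+ *	if (atomic_fetch_andnot_acquire(PENDING, &flags) & PENDING)
+ *		process_pending();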
+ */ static __always_inline int atomic_fetch_andnot_acquire(int i, atomic_t *v) { @@ -372,6 +880,17 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v) return raw_atomic_fetch_andnot_acquire(i, v); } +/** + * atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_release() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_andnot_release(int i, atomic_t *v) { @@ -380,6 +899,17 @@ atomic_fetch_andnot_release(int i, atomic_t *v) return raw_atomic_fetch_andnot_release(i, v); } +/** + * atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v) { @@ -387,6 +917,17 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v) return raw_atomic_fetch_andnot_relaxed(i, v); } +/** + * atomic_or() - atomic bitwise OR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_or() there. + * + * Return: nothing. + */ static __always_inline void atomic_or(int i, atomic_t *v) { @@ -394,6 +935,17 @@ atomic_or(int i, atomic_t *v) raw_atomic_or(i, v); } +/** + * atomic_fetch_or() - atomic bitwise OR with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_or() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_or(int i, atomic_t *v) { @@ -402,6 +954,17 @@ atomic_fetch_or(int i, atomic_t *v) return raw_atomic_fetch_or(i, v); } +/** + * atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_or_acquire(int i, atomic_t *v) { @@ -409,6 +972,17 @@ atomic_fetch_or_acquire(int i, atomic_t *v) return raw_atomic_fetch_or_acquire(i, v); } +/** + * atomic_fetch_or_release() - atomic bitwise OR with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_release() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_or_release(int i, atomic_t *v) { @@ -417,6 +991,17 @@ atomic_fetch_or_release(int i, atomic_t *v) return raw_atomic_fetch_or_release(i, v); } +/** + * atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_relaxed() there. + * + * Return: The old value of @v. 
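+ *
+ * Checking the returned value tells the caller whether the bit was
+ * already set, e.g. to kick a worker only once (a sketch; QUEUED,
+ * @state and kick_worker() are hypothetical):
+ *
+ *	if (!(atomic_fetch_or_relaxed(QUEUED, &state) & QUEUED))
+ *		kick_worker();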
+ */ static __always_inline int atomic_fetch_or_relaxed(int i, atomic_t *v) { @@ -424,6 +1009,17 @@ atomic_fetch_or_relaxed(int i, atomic_t *v) return raw_atomic_fetch_or_relaxed(i, v); } +/** + * atomic_xor() - atomic bitwise XOR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xor() there. + * + * Return: nothing. + */ static __always_inline void atomic_xor(int i, atomic_t *v) { @@ -431,6 +1027,17 @@ atomic_xor(int i, atomic_t *v) raw_atomic_xor(i, v); } +/** + * atomic_fetch_xor() - atomic bitwise XOR with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_xor(int i, atomic_t *v) { @@ -439,6 +1046,17 @@ atomic_fetch_xor(int i, atomic_t *v) return raw_atomic_fetch_xor(i, v); } +/** + * atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_xor_acquire(int i, atomic_t *v) { @@ -446,6 +1064,17 @@ atomic_fetch_xor_acquire(int i, atomic_t *v) return raw_atomic_fetch_xor_acquire(i, v); } +/** + * atomic_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_release() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_xor_release(int i, atomic_t *v) { @@ -454,6 +1083,17 @@ atomic_fetch_xor_release(int i, atomic_t *v) return raw_atomic_fetch_xor_release(i, v); } +/** + * atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_xor_relaxed(int i, atomic_t *v) { @@ -461,6 +1101,17 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v) return raw_atomic_fetch_xor_relaxed(i, v); } +/** + * atomic_xchg() - atomic exchange with full ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xchg() there. + * + * Return: the old value of @v. + */ static __always_inline int atomic_xchg(atomic_t *v, int new) { @@ -469,6 +1120,17 @@ atomic_xchg(atomic_t *v, int new) return raw_atomic_xchg(v, new); } +/** + * atomic_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xchg_acquire() there. + * + * Return: the old value of @v. 
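+ *
+ * Acquire ordering makes this usable for lock-style acquisition, e.g. a
+ * minimal test-and-set sketch (@locked is hypothetical; real code
+ * should use the kernel's locking primitives instead):
+ *
+ *	while (atomic_xchg_acquire(&locked, 1))
+ *		cpu_relax();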
+ */ static __always_inline int atomic_xchg_acquire(atomic_t *v, int new) { @@ -476,6 +1138,17 @@ atomic_xchg_acquire(atomic_t *v, int new) return raw_atomic_xchg_acquire(v, new); } +/** + * atomic_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xchg_release() there. + * + * Return: the old value of @v. + */ static __always_inline int atomic_xchg_release(atomic_t *v, int new) { @@ -484,6 +1157,17 @@ atomic_xchg_release(atomic_t *v, int new) return raw_atomic_xchg_release(v, new); } +/** + * atomic_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xchg_relaxed() there. + * + * Return: the old value of @v. + */ static __always_inline int atomic_xchg_relaxed(atomic_t *v, int new) { @@ -491,6 +1175,18 @@ atomic_xchg_relaxed(atomic_t *v, int new) return raw_atomic_xchg_relaxed(v, new); } +/** + * atomic_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg() there. + * + * Return: the old value of @v. + */ static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) { @@ -499,6 +1195,18 @@ atomic_cmpxchg(atomic_t *v, int old, int new) return raw_atomic_cmpxchg(v, old, new); } +/** + * atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_acquire() there. + * + * Return: the old value of @v. + */ static __always_inline int atomic_cmpxchg_acquire(atomic_t *v, int old, int new) { @@ -506,6 +1214,18 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new) return raw_atomic_cmpxchg_acquire(v, old, new); } +/** + * atomic_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_release() there. + * + * Return: the old value of @v. + */ static __always_inline int atomic_cmpxchg_release(atomic_t *v, int old, int new) { @@ -514,6 +1234,18 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new) return raw_atomic_cmpxchg_release(v, old, new); } +/** + * atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_relaxed() there. + * + * Return: the old value of @v. 
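+ *
+ * Comparing the returned value with @old tells the caller whether the
+ * exchange happened, giving the classic update loop (a sketch; @v here
+ * is a hypothetical atomic_t variable and new_val() is hypothetical):
+ *
+ *	int old = atomic_read(&v);
+ *	for (;;) {
+ *		int prev = atomic_cmpxchg_relaxed(&v, old, new_val(old));
+ *		if (prev == old)
+ *			break;
+ *		old = prev;
+ *	}
+ *
+ * atomic_try_cmpxchg() and friends below fold the reload of @old into
+ * the operation itself.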
+ */ static __always_inline int atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) { @@ -521,6 +1253,19 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) return raw_atomic_cmpxchg_relaxed(v, old, new); } +/** + * atomic_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) { @@ -530,6 +1275,19 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new) return raw_atomic_try_cmpxchg(v, old, new); } +/** + * atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_acquire() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) { @@ -538,6 +1296,19 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) return raw_atomic_try_cmpxchg_acquire(v, old, new); } +/** + * atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_release() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { @@ -547,6 +1318,19 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) return raw_atomic_try_cmpxchg_release(v, old, new); } +/** + * atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_relaxed() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { @@ -555,6 +1339,17 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) return raw_atomic_try_cmpxchg_relaxed(v, old, new); } +/** + * atomic_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise.
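+ *
+ * For example, dropping several references at once and detecting the
+ * final put (a sketch; @nr, @obj and release_obj() are hypothetical):
+ *
+ *	if (atomic_sub_and_test(nr, &obj->users))
+ *		release_obj(obj);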
+ */ static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) { @@ -563,6 +1358,16 @@ atomic_sub_and_test(int i, atomic_t *v) return raw_atomic_sub_and_test(i, v); } +/** + * atomic_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_dec_and_test(atomic_t *v) { @@ -571,6 +1376,16 @@ atomic_dec_and_test(atomic_t *v) return raw_atomic_dec_and_test(v); } +/** + * atomic_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_inc_and_test(atomic_t *v) { @@ -579,6 +1394,17 @@ atomic_inc_and_test(atomic_t *v) return raw_atomic_inc_and_test(v); } +/** + * atomic_add_negative() - atomic add and test if negative with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_negative() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_add_negative(int i, atomic_t *v) { @@ -587,6 +1413,17 @@ atomic_add_negative(int i, atomic_t *v) return raw_atomic_add_negative(i, v); } +/** + * atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_negative_acquire() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_add_negative_acquire(int i, atomic_t *v) { @@ -594,6 +1431,17 @@ atomic_add_negative_acquire(int i, atomic_t *v) return raw_atomic_add_negative_acquire(i, v); } +/** + * atomic_add_negative_release() - atomic add and test if negative with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_negative_release() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_add_negative_release(int i, atomic_t *v) { @@ -602,6 +1450,17 @@ atomic_add_negative_release(int i, atomic_t *v) return raw_atomic_add_negative_release(i, v); } +/** + * atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_negative_relaxed() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. 
+ */ static __always_inline bool atomic_add_negative_relaxed(int i, atomic_t *v) { @@ -609,6 +1468,18 @@ atomic_add_negative_relaxed(int i, atomic_t *v) return raw_atomic_add_negative_relaxed(i, v); } +/** + * atomic_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_t + * @a: int value to add + * @u: int value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_unless() there. + * + * Return: The old value of @v. + */ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { @@ -617,6 +1488,18 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u) return raw_atomic_fetch_add_unless(v, a, u); } +/** + * atomic_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_t + * @a: int value to add + * @u: int value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_unless() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_add_unless(atomic_t *v, int a, int u) { @@ -625,6 +1508,16 @@ atomic_add_unless(atomic_t *v, int a, int u) return raw_atomic_add_unless(v, a, u); } +/** + * atomic_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_not_zero() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_inc_not_zero(atomic_t *v) { @@ -633,6 +1526,16 @@ atomic_inc_not_zero(atomic_t *v) return raw_atomic_inc_not_zero(v); } +/** + * atomic_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_unless_negative() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_inc_unless_negative(atomic_t *v) { @@ -641,6 +1544,16 @@ atomic_inc_unless_negative(atomic_t *v) return raw_atomic_inc_unless_negative(v); } +/** + * atomic_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_unless_positive() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_dec_unless_positive(atomic_t *v) { @@ -649,6 +1562,16 @@ atomic_dec_unless_positive(atomic_t *v) return raw_atomic_dec_unless_positive(v); } +/** + * atomic_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_if_positive() there. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ static __always_inline int atomic_dec_if_positive(atomic_t *v) { @@ -657,6 +1580,16 @@ atomic_dec_if_positive(atomic_t *v) return raw_atomic_dec_if_positive(v); } +/** + * atomic64_read() - atomic load with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically loads the value of @v with relaxed ordering.
+ * + * Unsafe to use in noinstr code; use raw_atomic64_read() there. + * + * Return: the value loaded from @v + */ static __always_inline s64 atomic64_read(const atomic64_t *v) { @@ -664,6 +1597,16 @@ atomic64_read(const atomic64_t *v) return raw_atomic64_read(v); } +/** + * atomic64_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_read_acquire() there. + * + * Return: the value loaded from @v + */ static __always_inline s64 atomic64_read_acquire(const atomic64_t *v) { @@ -671,6 +1614,17 @@ atomic64_read_acquire(const atomic64_t *v) return raw_atomic64_read_acquire(v); } +/** + * atomic64_set() - atomic set with relaxed ordering + * @v: pointer to atomic64_t + * @i: s64 value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_set() there. + * + * Return: nothing. + */ static __always_inline void atomic64_set(atomic64_t *v, s64 i) { @@ -678,6 +1632,17 @@ atomic64_set(atomic64_t *v, s64 i) raw_atomic64_set(v, i); } +/** + * atomic64_set_release() - atomic set with release ordering + * @v: pointer to atomic64_t + * @i: s64 value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_set_release() there. + * + * Return: nothing. + */ static __always_inline void atomic64_set_release(atomic64_t *v, s64 i) { @@ -686,6 +1651,17 @@ atomic64_set_release(atomic64_t *v, s64 i) raw_atomic64_set_release(v, i); } +/** + * atomic64_add() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add() there. + * + * Return: nothing. + */ static __always_inline void atomic64_add(s64 i, atomic64_t *v) { @@ -693,6 +1669,17 @@ atomic64_add(s64 i, atomic64_t *v) raw_atomic64_add(i, v); } +/** + * atomic64_add_return() - atomic add with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_return() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_add_return(s64 i, atomic64_t *v) { @@ -701,6 +1688,17 @@ atomic64_add_return(s64 i, atomic64_t *v) return raw_atomic64_add_return(i, v); } +/** + * atomic64_add_return_acquire() - atomic add with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_add_return_acquire(s64 i, atomic64_t *v) { @@ -708,6 +1706,17 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v) return raw_atomic64_add_return_acquire(i, v); } +/** + * atomic64_add_return_release() - atomic add with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_return_release() there. + * + * Return: the new value of @v. 
+ */ static __always_inline s64 atomic64_add_return_release(s64 i, atomic64_t *v) { @@ -716,6 +1725,17 @@ atomic64_add_return_release(s64 i, atomic64_t *v) return raw_atomic64_add_return_release(i, v); } +/** + * atomic64_add_return_relaxed() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_add_return_relaxed(s64 i, atomic64_t *v) { @@ -723,6 +1743,17 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v) return raw_atomic64_add_return_relaxed(i, v); } +/** + * atomic64_fetch_add() - atomic add with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { @@ -731,6 +1762,17 @@ atomic64_fetch_add(s64 i, atomic64_t *v) return raw_atomic64_fetch_add(i, v); } +/** + * atomic64_fetch_add_acquire() - atomic add with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { @@ -738,6 +1780,17 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_add_acquire(i, v); } +/** + * atomic64_fetch_add_release() - atomic add with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_release() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_add_release(s64 i, atomic64_t *v) { @@ -746,6 +1799,17 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_add_release(i, v); } +/** + * atomic64_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) { @@ -753,6 +1817,17 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_add_relaxed(i, v); } +/** + * atomic64_sub() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub() there. + * + * Return: nothing. + */ static __always_inline void atomic64_sub(s64 i, atomic64_t *v) { @@ -760,6 +1835,17 @@ atomic64_sub(s64 i, atomic64_t *v) raw_atomic64_sub(i, v); } +/** + * atomic64_sub_return() - atomic subtract with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_return() there. + * + * Return: the new value of @v. 
+ */ static __always_inline s64 atomic64_sub_return(s64 i, atomic64_t *v) { @@ -768,6 +1854,17 @@ atomic64_sub_return(s64 i, atomic64_t *v) return raw_atomic64_sub_return(i, v); } +/** + * atomic64_sub_return_acquire() - atomic subtract with acquire ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_sub_return_acquire(s64 i, atomic64_t *v) { @@ -775,6 +1872,17 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v) return raw_atomic64_sub_return_acquire(i, v); } +/** + * atomic64_sub_return_release() - atomic subtract with release ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_sub_return_release(s64 i, atomic64_t *v) { @@ -783,6 +1891,17 @@ atomic64_sub_return_release(s64 i, atomic64_t *v) return raw_atomic64_sub_return_release(i, v); } +/** + * atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_sub_return_relaxed(s64 i, atomic64_t *v) { @@ -790,6 +1909,17 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v) return raw_atomic64_sub_return_relaxed(i, v); } +/** + * atomic64_fetch_sub() - atomic subtract with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) { @@ -798,6 +1928,17 @@ atomic64_fetch_sub(s64 i, atomic64_t *v) return raw_atomic64_fetch_sub(i, v); } +/** + * atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { @@ -805,6 +1946,17 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_sub_acquire(i, v); } +/** + * atomic64_fetch_sub_release() - atomic subtract with release ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_release() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_sub_release(s64 i, atomic64_t *v) { @@ -813,6 +1965,17 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_sub_release(i, v); } +/** + * atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. 
+ * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) { @@ -820,6 +1983,16 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_sub_relaxed(i, v); } +/** + * atomic64_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc() there. + * + * Return: nothing. + */ static __always_inline void atomic64_inc(atomic64_t *v) { @@ -827,6 +2000,16 @@ atomic64_inc(atomic64_t *v) raw_atomic64_inc(v); } +/** + * atomic64_inc_return() - atomic increment with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_return() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_inc_return(atomic64_t *v) { @@ -835,6 +2018,16 @@ atomic64_inc_return(atomic64_t *v) return raw_atomic64_inc_return(v); } +/** + * atomic64_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_inc_return_acquire(atomic64_t *v) { @@ -842,6 +2035,16 @@ atomic64_inc_return_acquire(atomic64_t *v) return raw_atomic64_inc_return_acquire(v); } +/** + * atomic64_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_inc_return_release(atomic64_t *v) { @@ -850,6 +2053,16 @@ atomic64_inc_return_release(atomic64_t *v) return raw_atomic64_inc_return_release(v); } +/** + * atomic64_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_inc_return_relaxed(atomic64_t *v) { @@ -857,6 +2070,16 @@ atomic64_inc_return_relaxed(atomic64_t *v) return raw_atomic64_inc_return_relaxed(v); } +/** + * atomic64_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_inc(atomic64_t *v) { @@ -865,6 +2088,16 @@ atomic64_fetch_inc(atomic64_t *v) return raw_atomic64_fetch_inc(v); } +/** + * atomic64_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_acquire() there. + * + * Return: The old value of @v. 
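+ *
+ * The 64-bit variant suits sequence or ticket numbers that must not
+ * wrap in practice (a sketch; @seq is hypothetical):
+ *
+ *	s64 ticket = atomic64_fetch_inc_acquire(&seq);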
+ */ static __always_inline s64 atomic64_fetch_inc_acquire(atomic64_t *v) { @@ -872,6 +2105,16 @@ atomic64_fetch_inc_acquire(atomic64_t *v) return raw_atomic64_fetch_inc_acquire(v); } +/** + * atomic64_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_release() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_inc_release(atomic64_t *v) { @@ -880,6 +2123,16 @@ atomic64_fetch_inc_release(atomic64_t *v) return raw_atomic64_fetch_inc_release(v); } +/** + * atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_inc_relaxed(atomic64_t *v) { @@ -887,6 +2140,16 @@ atomic64_fetch_inc_relaxed(atomic64_t *v) return raw_atomic64_fetch_inc_relaxed(v); } +/** + * atomic64_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec() there. + * + * Return: nothing. + */ static __always_inline void atomic64_dec(atomic64_t *v) { @@ -894,6 +2157,16 @@ atomic64_dec(atomic64_t *v) raw_atomic64_dec(v); } +/** + * atomic64_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_return() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_dec_return(atomic64_t *v) { @@ -902,6 +2175,16 @@ atomic64_dec_return(atomic64_t *v) return raw_atomic64_dec_return(v); } +/** + * atomic64_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_dec_return_acquire(atomic64_t *v) { @@ -909,6 +2192,16 @@ atomic64_dec_return_acquire(atomic64_t *v) return raw_atomic64_dec_return_acquire(v); } +/** + * atomic64_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline s64 atomic64_dec_return_release(atomic64_t *v) { @@ -917,6 +2210,16 @@ atomic64_dec_return_release(atomic64_t *v) return raw_atomic64_dec_return_release(v); } +/** + * atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_relaxed() there. + * + * Return: the new value of @v. 
+ */ static __always_inline s64 atomic64_dec_return_relaxed(atomic64_t *v) { @@ -924,6 +2227,16 @@ atomic64_dec_return_relaxed(atomic64_t *v) return raw_atomic64_dec_return_relaxed(v); } +/** + * atomic64_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_dec(atomic64_t *v) { @@ -932,6 +2245,16 @@ atomic64_fetch_dec(atomic64_t *v) return raw_atomic64_fetch_dec(v); } +/** + * atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_dec_acquire(atomic64_t *v) { @@ -939,6 +2262,16 @@ atomic64_fetch_dec_acquire(atomic64_t *v) return raw_atomic64_fetch_dec_acquire(v); } +/** + * atomic64_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_release() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_dec_release(atomic64_t *v) { @@ -947,6 +2280,16 @@ atomic64_fetch_dec_release(atomic64_t *v) return raw_atomic64_fetch_dec_release(v); } +/** + * atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_dec_relaxed(atomic64_t *v) { @@ -954,6 +2297,17 @@ atomic64_fetch_dec_relaxed(atomic64_t *v) return raw_atomic64_fetch_dec_relaxed(v); } +/** + * atomic64_and() - atomic bitwise AND with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_and() there. + * + * Return: nothing. + */ static __always_inline void atomic64_and(s64 i, atomic64_t *v) { @@ -961,6 +2315,17 @@ atomic64_and(s64 i, atomic64_t *v) raw_atomic64_and(i, v); } +/** + * atomic64_fetch_and() - atomic bitwise AND with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_and(s64 i, atomic64_t *v) { @@ -969,6 +2334,17 @@ atomic64_fetch_and(s64 i, atomic64_t *v) return raw_atomic64_fetch_and(i, v); } +/** + * atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_acquire() there. + * + * Return: The old value of @v. 
+ */ static __always_inline s64 atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { @@ -976,6 +2352,17 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_and_acquire(i, v); } +/** + * atomic64_fetch_and_release() - atomic bitwise AND with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_release() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_and_release(s64 i, atomic64_t *v) { @@ -984,6 +2371,17 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_and_release(i, v); } +/** + * atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) { @@ -991,6 +2389,17 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_and_relaxed(i, v); } +/** + * atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_andnot() there. + * + * Return: nothing. + */ static __always_inline void atomic64_andnot(s64 i, atomic64_t *v) { @@ -998,6 +2407,17 @@ atomic64_andnot(s64 i, atomic64_t *v) raw_atomic64_andnot(i, v); } +/** + * atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_andnot(s64 i, atomic64_t *v) { @@ -1006,6 +2426,17 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v) return raw_atomic64_fetch_andnot(i, v); } +/** + * atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { @@ -1013,6 +2444,17 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_andnot_acquire(i, v); } +/** + * atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_release() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { @@ -1021,6 +2463,17 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_andnot_release(i, v); } +/** + * atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_relaxed() there. 
+ * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { @@ -1028,6 +2481,17 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_andnot_relaxed(i, v); } +/** + * atomic64_or() - atomic bitwise OR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_or() there. + * + * Return: nothing. + */ static __always_inline void atomic64_or(s64 i, atomic64_t *v) { @@ -1035,6 +2499,17 @@ atomic64_or(s64 i, atomic64_t *v) raw_atomic64_or(i, v); } +/** + * atomic64_fetch_or() - atomic bitwise OR with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) { @@ -1043,6 +2518,17 @@ atomic64_fetch_or(s64 i, atomic64_t *v) return raw_atomic64_fetch_or(i, v); } +/** + * atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { @@ -1050,6 +2536,17 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_or_acquire(i, v); } +/** + * atomic64_fetch_or_release() - atomic bitwise OR with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_release() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_or_release(s64 i, atomic64_t *v) { @@ -1058,6 +2555,17 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_or_release(i, v); } +/** + * atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) { @@ -1065,6 +2573,17 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_or_relaxed(i, v); } +/** + * atomic64_xor() - atomic bitwise XOR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xor() there. + * + * Return: nothing. + */ static __always_inline void atomic64_xor(s64 i, atomic64_t *v) { @@ -1072,6 +2591,17 @@ atomic64_xor(s64 i, atomic64_t *v) raw_atomic64_xor(i, v); } +/** + * atomic64_fetch_xor() - atomic bitwise XOR with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor() there. + * + * Return: The old value of @v. 
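The bitwise ops follow the same pattern: the void forms only update @v, while the fetch_*() forms also return the previous bits. A sketch against a hypothetical flag word (names invented for illustration):

	#include <linux/atomic.h>
	#include <linux/bits.h>

	static atomic64_t demo_flags = ATOMIC64_INIT(0);

	static void demo_flag_ops(void)
	{
		s64 old;

		atomic64_or(BIT(0) | BIT(3), &demo_flags);	/* set bits 0 and 3 */
		atomic64_andnot(BIT(3), &demo_flags);		/* clear bit 3 again */

		/* old holds the flags as they were before bit 1 was OR-ed in. */
		old = atomic64_fetch_or(BIT(1), &demo_flags);
	}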
+ */ static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) { @@ -1080,6 +2610,17 @@ atomic64_fetch_xor(s64 i, atomic64_t *v) return raw_atomic64_fetch_xor(i, v); } +/** + * atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { @@ -1087,6 +2628,17 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_xor_acquire(i, v); } +/** + * atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_release() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_xor_release(s64 i, atomic64_t *v) { @@ -1095,6 +2647,17 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_xor_release(i, v); } +/** + * atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) { @@ -1102,6 +2665,17 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_xor_relaxed(i, v); } +/** + * atomic64_xchg() - atomic exchange with full ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xchg() there. + * + * Return: the old value of @v. + */ static __always_inline s64 atomic64_xchg(atomic64_t *v, s64 new) { @@ -1110,6 +2684,17 @@ atomic64_xchg(atomic64_t *v, s64 new) return raw_atomic64_xchg(v, new); } +/** + * atomic64_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xchg_acquire() there. + * + * Return: the old value of @v. + */ static __always_inline s64 atomic64_xchg_acquire(atomic64_t *v, s64 new) { @@ -1117,6 +2702,17 @@ atomic64_xchg_acquire(atomic64_t *v, s64 new) return raw_atomic64_xchg_acquire(v, new); } +/** + * atomic64_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xchg_release() there. + * + * Return: the old value of @v. + */ static __always_inline s64 atomic64_xchg_release(atomic64_t *v, s64 new) { @@ -1125,6 +2721,17 @@ atomic64_xchg_release(atomic64_t *v, s64 new) return raw_atomic64_xchg_release(v, new); } +/** + * atomic64_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xchg_relaxed() there. + * + * Return: the old value of @v. 
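xchg() is the unconditional exchange: it always stores @new and reports what it displaced, which makes it handy for draining a value in one shot. A hypothetical sketch:

	#include <linux/atomic.h>

	/* Reset a pending-work counter to zero and return what had accumulated. */
	static s64 demo_drain_pending(atomic64_t *pending)
	{
		return atomic64_xchg(pending, 0);
	}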
+ */ static __always_inline s64 atomic64_xchg_relaxed(atomic64_t *v, s64 new) { @@ -1132,6 +2739,18 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 new) return raw_atomic64_xchg_relaxed(v, new); } +/** + * atomic64_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg() there. + * + * Return: the old value of @v. + */ static __always_inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { @@ -1140,6 +2759,18 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) return raw_atomic64_cmpxchg(v, old, new); } +/** + * atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_acquire() there. + * + * Return: the old value of @v. + */ static __always_inline s64 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { @@ -1147,6 +2778,18 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) return raw_atomic64_cmpxchg_acquire(v, old, new); } +/** + * atomic64_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_release() there. + * + * Return: the old value of @v. + */ static __always_inline s64 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { @@ -1155,6 +2798,18 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) return raw_atomic64_cmpxchg_release(v, old, new); } +/** + * atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_relaxed() there. + * + * Return: the old value of @v. + */ static __always_inline s64 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { @@ -1162,6 +2817,19 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) return raw_atomic64_cmpxchg_relaxed(v, old, new); } +/** + * atomic64_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { @@ -1171,6 +2839,19 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) return raw_atomic64_try_cmpxchg(v, old, new); } +/** + * atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_acquire() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { @@ -1179,6 +2860,19 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) return raw_atomic64_try_cmpxchg_acquire(v, old, new); } +/** + * atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_release() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { @@ -1188,6 +2882,19 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) return raw_atomic64_try_cmpxchg_release(v, old, new); } +/** + * atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_relaxed() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { @@ -1196,6 +2903,17 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) return raw_atomic64_try_cmpxchg_relaxed(v, old, new); } +/** + * atomic64_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic64_sub_and_test(s64 i, atomic64_t *v) { @@ -1204,6 +2922,16 @@ atomic64_sub_and_test(s64 i, atomic64_t *v) return raw_atomic64_sub_and_test(i, v); } +/** + * atomic64_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic64_dec_and_test(atomic64_t *v) { @@ -1212,6 +2940,16 @@ atomic64_dec_and_test(atomic64_t *v) return raw_atomic64_dec_and_test(v); } +/** + * atomic64_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise.
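The try_cmpxchg() forms above are built for compare-and-swap loops: on failure they write the current value back through @old, so the loop needs no separate re-read. A minimal sketch of the canonical pattern (function name invented):

	#include <linux/atomic.h>

	/* Atomically double @v. */
	static void demo_double(atomic64_t *v)
	{
		s64 old = atomic64_read(v);

		do {
			/* On failure, @old was updated to the current value of @v. */
		} while (!atomic64_try_cmpxchg(v, &old, old * 2));
	}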
+ */ static __always_inline bool atomic64_inc_and_test(atomic64_t *v) { @@ -1220,6 +2958,17 @@ atomic64_inc_and_test(atomic64_t *v) return raw_atomic64_inc_and_test(v); } +/** + * atomic64_add_negative() - atomic add and test if negative with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_negative() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v) { @@ -1228,6 +2977,17 @@ atomic64_add_negative(s64 i, atomic64_t *v) return raw_atomic64_add_negative(i, v); } +/** + * atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_acquire() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic64_add_negative_acquire(s64 i, atomic64_t *v) { @@ -1235,6 +2995,17 @@ atomic64_add_negative_acquire(s64 i, atomic64_t *v) return raw_atomic64_add_negative_acquire(i, v); } +/** + * atomic64_add_negative_release() - atomic add and test if negative with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_release() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic64_add_negative_release(s64 i, atomic64_t *v) { @@ -1243,6 +3014,17 @@ atomic64_add_negative_release(s64 i, atomic64_t *v) return raw_atomic64_add_negative_release(i, v); } +/** + * atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_relaxed() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { @@ -1250,6 +3032,18 @@ atomic64_add_negative_relaxed(s64 i, atomic64_t *v) return raw_atomic64_add_negative_relaxed(i, v); } +/** + * atomic64_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic64_t + * @a: s64 value to add + * @u: s64 value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_unless() there. + * + * Return: The old value of @v. + */ static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -1258,6 +3052,18 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) return raw_atomic64_fetch_add_unless(v, a, u); } +/** + * atomic64_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic64_t + * @a: s64 value to add + * @u: s64 value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_unless() there. + * + * Return: @true if @v was updated, @false otherwise. 
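add_unless() and dec_and_test() above are the classic building blocks of open-coded reference counting; real kernel code should normally use refcount_t instead, but a sketch shows the semantics (all names hypothetical):

	#include <linux/atomic.h>
	#include <linux/slab.h>

	struct demo_obj {
		atomic64_t refs;
	};

	static bool demo_obj_tryget(struct demo_obj *o)
	{
		/* Add 1 unless the count is already 0, i.e. the object is dying. */
		return atomic64_add_unless(&o->refs, 1, 0);
	}

	static void demo_obj_put(struct demo_obj *o)
	{
		/* Whoever drops the count to zero frees the object. */
		if (atomic64_dec_and_test(&o->refs))
			kfree(o);
	}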
+ */ static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -1266,6 +3072,16 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u) return raw_atomic64_add_unless(v, a, u); } +/** + * atomic64_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic64_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_not_zero() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic64_inc_not_zero(atomic64_t *v) { @@ -1274,6 +3090,16 @@ atomic64_inc_not_zero(atomic64_t *v) return raw_atomic64_inc_not_zero(v); } +/** + * atomic64_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic64_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_unless_negative() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic64_inc_unless_negative(atomic64_t *v) { @@ -1282,6 +3108,16 @@ atomic64_inc_unless_negative(atomic64_t *v) return raw_atomic64_inc_unless_negative(v); } +/** + * atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic64_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_unless_positive() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic64_dec_unless_positive(atomic64_t *v) { @@ -1290,6 +3126,16 @@ atomic64_dec_unless_positive(atomic64_t *v) return raw_atomic64_dec_unless_positive(v); } +/** + * atomic64_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic64_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_if_positive() there. + * + * Return: the old value of @v minus one, regardless of whether @v was updated. + */ static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) { @@ -1298,6 +3144,16 @@ atomic64_dec_if_positive(atomic64_t *v) return raw_atomic64_dec_if_positive(v); } +/** + * atomic_long_read() - atomic load with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_read() there. + * + * Return: the value loaded from @v. + */ static __always_inline long atomic_long_read(const atomic_long_t *v) { @@ -1305,6 +3161,16 @@ atomic_long_read(const atomic_long_t *v) return raw_atomic_long_read(v); } +/** + * atomic_long_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_read_acquire() there. + * + * Return: the value loaded from @v. + */ static __always_inline long atomic_long_read_acquire(const atomic_long_t *v) { @@ -1312,6 +3178,17 @@ atomic_long_read_acquire(const atomic_long_t *v) return raw_atomic_long_read_acquire(v); } +/** + * atomic_long_set() - atomic set with relaxed ordering + * @v: pointer to atomic_long_t + * @i: long value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_set() there. + * + * Return: nothing.
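Unlike the boolean tests above, dec_if_positive() returns the decremented value itself, which is negative exactly when no update took place. That makes consuming from a counted resource a one-liner (hypothetical token pool):

	#include <linux/atomic.h>

	/* Claim one token; returns false if the pool was already empty. */
	static bool demo_take_token(atomic64_t *tokens)
	{
		return atomic64_dec_if_positive(tokens) >= 0;
	}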
+ */ static __always_inline void atomic_long_set(atomic_long_t *v, long i) { @@ -1319,6 +3196,17 @@ atomic_long_set(atomic_long_t *v, long i) raw_atomic_long_set(v, i); } +/** + * atomic_long_set_release() - atomic set with release ordering + * @v: pointer to atomic_long_t + * @i: long value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_set_release() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_set_release(atomic_long_t *v, long i) { @@ -1327,6 +3215,17 @@ atomic_long_set_release(atomic_long_t *v, long i) raw_atomic_long_set_release(v, i); } +/** + * atomic_long_add() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_add(long i, atomic_long_t *v) { @@ -1334,6 +3233,17 @@ atomic_long_add(long i, atomic_long_t *v) raw_atomic_long_add(i, v); } +/** + * atomic_long_add_return() - atomic add with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_return() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_add_return(long i, atomic_long_t *v) { @@ -1342,6 +3252,17 @@ atomic_long_add_return(long i, atomic_long_t *v) return raw_atomic_long_add_return(i, v); } +/** + * atomic_long_add_return_acquire() - atomic add with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_add_return_acquire(long i, atomic_long_t *v) { @@ -1349,6 +3270,17 @@ atomic_long_add_return_acquire(long i, atomic_long_t *v) return raw_atomic_long_add_return_acquire(i, v); } +/** + * atomic_long_add_return_release() - atomic add with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_add_return_release(long i, atomic_long_t *v) { @@ -1357,6 +3289,17 @@ atomic_long_add_return_release(long i, atomic_long_t *v) return raw_atomic_long_add_return_release(i, v); } +/** + * atomic_long_add_return_relaxed() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_add_return_relaxed(long i, atomic_long_t *v) { @@ -1364,6 +3307,17 @@ atomic_long_add_return_relaxed(long i, atomic_long_t *v) return raw_atomic_long_add_return_relaxed(i, v); } +/** + * atomic_long_fetch_add() - atomic add with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add() there. 
+ * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_add(long i, atomic_long_t *v) { @@ -1372,6 +3326,17 @@ atomic_long_fetch_add(long i, atomic_long_t *v) return raw_atomic_long_fetch_add(i, v); } +/** + * atomic_long_fetch_add_acquire() - atomic add with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { @@ -1379,6 +3344,17 @@ atomic_long_fetch_add_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_add_acquire(i, v); } +/** + * atomic_long_fetch_add_release() - atomic add with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_release() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_add_release(long i, atomic_long_t *v) { @@ -1387,6 +3363,17 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_add_release(i, v); } +/** + * atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { @@ -1394,6 +3381,17 @@ atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_add_relaxed(i, v); } +/** + * atomic_long_sub() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_sub(long i, atomic_long_t *v) { @@ -1401,6 +3399,17 @@ atomic_long_sub(long i, atomic_long_t *v) raw_atomic_long_sub(i, v); } +/** + * atomic_long_sub_return() - atomic subtract with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_sub_return(long i, atomic_long_t *v) { @@ -1409,6 +3418,17 @@ atomic_long_sub_return(long i, atomic_long_t *v) return raw_atomic_long_sub_return(i, v); } +/** + * atomic_long_sub_return_acquire() - atomic subtract with acquire ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_acquire() there. + * + * Return: the new value of @v. 
+ */ static __always_inline long atomic_long_sub_return_acquire(long i, atomic_long_t *v) { @@ -1416,6 +3436,17 @@ atomic_long_sub_return_acquire(long i, atomic_long_t *v) return raw_atomic_long_sub_return_acquire(i, v); } +/** + * atomic_long_sub_return_release() - atomic subtract with release ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_sub_return_release(long i, atomic_long_t *v) { @@ -1424,6 +3455,17 @@ atomic_long_sub_return_release(long i, atomic_long_t *v) return raw_atomic_long_sub_return_release(i, v); } +/** + * atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { @@ -1431,6 +3473,17 @@ atomic_long_sub_return_relaxed(long i, atomic_long_t *v) return raw_atomic_long_sub_return_relaxed(i, v); } +/** + * atomic_long_fetch_sub() - atomic subtract with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_sub(long i, atomic_long_t *v) { @@ -1439,6 +3492,17 @@ atomic_long_fetch_sub(long i, atomic_long_t *v) return raw_atomic_long_fetch_sub(i, v); } +/** + * atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { @@ -1446,6 +3510,17 @@ atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_sub_acquire(i, v); } +/** + * atomic_long_fetch_sub_release() - atomic subtract with release ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_release() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_sub_release(long i, atomic_long_t *v) { @@ -1454,6 +3529,17 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_sub_release(i, v); } +/** + * atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_relaxed() there. + * + * Return: The old value of @v. 
+ */ static __always_inline long atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { @@ -1461,6 +3547,16 @@ atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_sub_relaxed(i, v); } +/** + * atomic_long_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_inc(atomic_long_t *v) { @@ -1468,6 +3564,16 @@ atomic_long_inc(atomic_long_t *v) raw_atomic_long_inc(v); } +/** + * atomic_long_inc_return() - atomic increment with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_inc_return(atomic_long_t *v) { @@ -1476,6 +3582,16 @@ atomic_long_inc_return(atomic_long_t *v) return raw_atomic_long_inc_return(v); } +/** + * atomic_long_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_inc_return_acquire(atomic_long_t *v) { @@ -1483,6 +3599,16 @@ atomic_long_inc_return_acquire(atomic_long_t *v) return raw_atomic_long_inc_return_acquire(v); } +/** + * atomic_long_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_inc_return_release(atomic_long_t *v) { @@ -1491,6 +3617,16 @@ atomic_long_inc_return_release(atomic_long_t *v) return raw_atomic_long_inc_return_release(v); } +/** + * atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_relaxed() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_inc_return_relaxed(atomic_long_t *v) { @@ -1498,6 +3634,16 @@ atomic_long_inc_return_relaxed(atomic_long_t *v) return raw_atomic_long_inc_return_relaxed(v); } +/** + * atomic_long_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_inc(atomic_long_t *v) { @@ -1506,6 +3652,16 @@ atomic_long_fetch_inc(atomic_long_t *v) return raw_atomic_long_fetch_inc(v); } +/** + * atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_acquire() there. + * + * Return: The old value of @v. 
+ */ static __always_inline long atomic_long_fetch_inc_acquire(atomic_long_t *v) { @@ -1513,6 +3669,16 @@ atomic_long_fetch_inc_acquire(atomic_long_t *v) return raw_atomic_long_fetch_inc_acquire(v); } +/** + * atomic_long_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_release() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_inc_release(atomic_long_t *v) { @@ -1521,6 +3687,16 @@ atomic_long_fetch_inc_release(atomic_long_t *v) return raw_atomic_long_fetch_inc_release(v); } +/** + * atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_inc_relaxed(atomic_long_t *v) { @@ -1528,6 +3704,16 @@ atomic_long_fetch_inc_relaxed(atomic_long_t *v) return raw_atomic_long_fetch_inc_relaxed(v); } +/** + * atomic_long_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_dec(atomic_long_t *v) { @@ -1535,6 +3721,16 @@ atomic_long_dec(atomic_long_t *v) raw_atomic_long_dec(v); } +/** + * atomic_long_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_dec_return(atomic_long_t *v) { @@ -1543,6 +3739,16 @@ atomic_long_dec_return(atomic_long_t *v) return raw_atomic_long_dec_return(v); } +/** + * atomic_long_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_acquire() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_dec_return_acquire(atomic_long_t *v) { @@ -1550,6 +3756,16 @@ atomic_long_dec_return_acquire(atomic_long_t *v) return raw_atomic_long_dec_return_acquire(v); } +/** + * atomic_long_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_release() there. + * + * Return: the new value of @v. + */ static __always_inline long atomic_long_dec_return_release(atomic_long_t *v) { @@ -1558,6 +3774,16 @@ atomic_long_dec_return_release(atomic_long_t *v) return raw_atomic_long_dec_return_release(v); } +/** + * atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_relaxed() there. + * + * Return: the new value of @v. 
+ */ static __always_inline long atomic_long_dec_return_relaxed(atomic_long_t *v) { @@ -1565,6 +3791,16 @@ atomic_long_dec_return_relaxed(atomic_long_t *v) return raw_atomic_long_dec_return_relaxed(v); } +/** + * atomic_long_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_dec(atomic_long_t *v) { @@ -1573,6 +3809,16 @@ atomic_long_fetch_dec(atomic_long_t *v) return raw_atomic_long_fetch_dec(v); } +/** + * atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_dec_acquire(atomic_long_t *v) { @@ -1580,6 +3826,16 @@ atomic_long_fetch_dec_acquire(atomic_long_t *v) return raw_atomic_long_fetch_dec_acquire(v); } +/** + * atomic_long_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_release() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_dec_release(atomic_long_t *v) { @@ -1588,6 +3844,16 @@ atomic_long_fetch_dec_release(atomic_long_t *v) return raw_atomic_long_fetch_dec_release(v); } +/** + * atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_dec_relaxed(atomic_long_t *v) { @@ -1595,6 +3861,17 @@ atomic_long_fetch_dec_relaxed(atomic_long_t *v) return raw_atomic_long_fetch_dec_relaxed(v); } +/** + * atomic_long_and() - atomic bitwise AND with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_and() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_and(long i, atomic_long_t *v) { @@ -1602,6 +3879,17 @@ atomic_long_and(long i, atomic_long_t *v) raw_atomic_long_and(i, v); } +/** + * atomic_long_fetch_and() - atomic bitwise AND with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_and(long i, atomic_long_t *v) { @@ -1610,6 +3898,17 @@ atomic_long_fetch_and(long i, atomic_long_t *v) return raw_atomic_long_fetch_and(i, v); } +/** + * atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_acquire() there. + * + * Return: The old value of @v. 
+ */ static __always_inline long atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { @@ -1617,6 +3916,17 @@ atomic_long_fetch_and_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_and_acquire(i, v); } +/** + * atomic_long_fetch_and_release() - atomic bitwise AND with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_release() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_and_release(long i, atomic_long_t *v) { @@ -1625,6 +3935,17 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_and_release(i, v); } +/** + * atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { @@ -1632,6 +3953,17 @@ atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_and_relaxed(i, v); } +/** + * atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_andnot() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_andnot(long i, atomic_long_t *v) { @@ -1639,6 +3971,17 @@ atomic_long_andnot(long i, atomic_long_t *v) raw_atomic_long_andnot(i, v); } +/** + * atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_andnot(long i, atomic_long_t *v) { @@ -1647,6 +3990,17 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v) return raw_atomic_long_fetch_andnot(i, v); } +/** + * atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { @@ -1654,6 +4008,17 @@ atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_andnot_acquire(i, v); } +/** + * atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_release() there. + * + * Return: The old value of @v. 
+ */ static __always_inline long atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { @@ -1662,6 +4027,17 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_andnot_release(i, v); } +/** + * atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { @@ -1669,6 +4045,17 @@ atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_andnot_relaxed(i, v); } +/** + * atomic_long_or() - atomic bitwise OR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_or() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_or(long i, atomic_long_t *v) { @@ -1676,6 +4063,17 @@ atomic_long_or(long i, atomic_long_t *v) raw_atomic_long_or(i, v); } +/** + * atomic_long_fetch_or() - atomic bitwise OR with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_or(long i, atomic_long_t *v) { @@ -1684,6 +4082,17 @@ atomic_long_fetch_or(long i, atomic_long_t *v) return raw_atomic_long_fetch_or(i, v); } +/** + * atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { @@ -1691,6 +4100,17 @@ atomic_long_fetch_or_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_or_acquire(i, v); } +/** + * atomic_long_fetch_or_release() - atomic bitwise OR with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_release() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_or_release(long i, atomic_long_t *v) { @@ -1699,6 +4119,17 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_or_release(i, v); } +/** + * atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { @@ -1706,6 +4137,17 @@ atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_or_relaxed(i, v); } +/** + * atomic_long_xor() - atomic bitwise XOR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. 
+ * + * Unsafe to use in noinstr code; use raw_atomic_long_xor() there. + * + * Return: nothing. + */ static __always_inline void atomic_long_xor(long i, atomic_long_t *v) { @@ -1713,6 +4155,17 @@ atomic_long_xor(long i, atomic_long_t *v) raw_atomic_long_xor(i, v); } +/** + * atomic_long_fetch_xor() - atomic bitwise XOR with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_xor(long i, atomic_long_t *v) { @@ -1721,6 +4174,17 @@ atomic_long_fetch_xor(long i, atomic_long_t *v) return raw_atomic_long_fetch_xor(i, v); } +/** + * atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_acquire() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { @@ -1728,6 +4192,17 @@ atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_xor_acquire(i, v); } +/** + * atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_release() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_xor_release(long i, atomic_long_t *v) { @@ -1736,6 +4211,17 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_xor_release(i, v); } +/** + * atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_relaxed() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { @@ -1743,6 +4229,17 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_xor_relaxed(i, v); } +/** + * atomic_long_xchg() - atomic exchange with full ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xchg() there. + * + * Return: the old value of @v. + */ static __always_inline long atomic_long_xchg(atomic_long_t *v, long new) { @@ -1751,6 +4248,17 @@ atomic_long_xchg(atomic_long_t *v, long new) return raw_atomic_long_xchg(v, new); } +/** + * atomic_long_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_acquire() there. + * + * Return: the old value of @v. 
+ */ static __always_inline long atomic_long_xchg_acquire(atomic_long_t *v, long new) { @@ -1758,6 +4266,17 @@ atomic_long_xchg_acquire(atomic_long_t *v, long new) return raw_atomic_long_xchg_acquire(v, new); } +/** + * atomic_long_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_release() there. + * + * Return: the old value of @v. + */ static __always_inline long atomic_long_xchg_release(atomic_long_t *v, long new) { @@ -1766,6 +4285,17 @@ atomic_long_xchg_release(atomic_long_t *v, long new) return raw_atomic_long_xchg_release(v, new); } +/** + * atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_relaxed() there. + * + * Return: the old value of @v. + */ static __always_inline long atomic_long_xchg_relaxed(atomic_long_t *v, long new) { @@ -1773,6 +4303,18 @@ atomic_long_xchg_relaxed(atomic_long_t *v, long new) return raw_atomic_long_xchg_relaxed(v, new); } +/** + * atomic_long_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg() there. + * + * Return: the old value of @v. + */ static __always_inline long atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { @@ -1781,6 +4323,18 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new) return raw_atomic_long_cmpxchg(v, old, new); } +/** + * atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_acquire() there. + * + * Return: the old value of @v. + */ static __always_inline long atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { @@ -1788,6 +4342,18 @@ atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) return raw_atomic_long_cmpxchg_acquire(v, old, new); } +/** + * atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_release() there. + * + * Return: the old value of @v. + */ static __always_inline long atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { @@ -1796,6 +4362,18 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) return raw_atomic_long_cmpxchg_release(v, old, new); } +/** + * atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_relaxed() there. + * + * Return: the old value of @v. 
+ */ static __always_inline long atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { @@ -1803,6 +4381,19 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) return raw_atomic_long_cmpxchg_relaxed(v, old, new); } +/** + * atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { @@ -1812,6 +4403,19 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) return raw_atomic_long_try_cmpxchg(v, old, new); } +/** + * atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_acquire() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { @@ -1820,6 +4424,19 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) return raw_atomic_long_try_cmpxchg_acquire(v, old, new); } +/** + * atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_release() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { @@ -1829,6 +4446,19 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) return raw_atomic_long_try_cmpxchg_release(v, old, new); } +/** + * atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_relaxed() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { @@ -1837,6 +4467,17 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) return raw_atomic_long_try_cmpxchg_relaxed(v, old, new); } +/** + * atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_and_test() there.
+ * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_long_sub_and_test(long i, atomic_long_t *v) { @@ -1845,6 +4486,16 @@ atomic_long_sub_and_test(long i, atomic_long_t *v) return raw_atomic_long_sub_and_test(i, v); } +/** + * atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_long_dec_and_test(atomic_long_t *v) { @@ -1853,6 +4504,16 @@ atomic_long_dec_and_test(atomic_long_t *v) return raw_atomic_long_dec_and_test(v); } +/** + * atomic_long_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_long_inc_and_test(atomic_long_t *v) { @@ -1861,6 +4522,17 @@ atomic_long_inc_and_test(atomic_long_t *v) return raw_atomic_long_inc_and_test(v); } +/** + * atomic_long_add_negative() - atomic add and test if negative with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_long_add_negative(long i, atomic_long_t *v) { @@ -1869,6 +4541,17 @@ atomic_long_add_negative(long i, atomic_long_t *v) return raw_atomic_long_add_negative(i, v); } +/** + * atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_acquire() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_long_add_negative_acquire(long i, atomic_long_t *v) { @@ -1876,6 +4559,17 @@ atomic_long_add_negative_acquire(long i, atomic_long_t *v) return raw_atomic_long_add_negative_acquire(i, v); } +/** + * atomic_long_add_negative_release() - atomic add and test if negative with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_release() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_long_add_negative_release(long i, atomic_long_t *v) { @@ -1884,6 +4578,17 @@ atomic_long_add_negative_release(long i, atomic_long_t *v) return raw_atomic_long_add_negative_release(i, v); } +/** + * atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_relaxed() there. 
+ * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { @@ -1891,6 +4596,18 @@ atomic_long_add_negative_relaxed(long i, atomic_long_t *v) return raw_atomic_long_add_negative_relaxed(i, v); } +/** + * atomic_long_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_long_t + * @a: long value to add + * @u: long value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_unless() there. + * + * Return: The old value of @v. + */ static __always_inline long atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { @@ -1899,6 +4616,18 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) return raw_atomic_long_fetch_add_unless(v, a, u); } +/** + * atomic_long_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_long_t + * @a: long value to add + * @u: long value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_unless() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_long_add_unless(atomic_long_t *v, long a, long u) { @@ -1907,6 +4636,16 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u) return raw_atomic_long_add_unless(v, a, u); } +/** + * atomic_long_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic_long_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_not_zero() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_long_inc_not_zero(atomic_long_t *v) { @@ -1915,6 +4654,16 @@ atomic_long_inc_not_zero(atomic_long_t *v) return raw_atomic_long_inc_not_zero(v); } +/** + * atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic_long_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_unless_negative() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_long_inc_unless_negative(atomic_long_t *v) { @@ -1923,6 +4672,16 @@ atomic_long_inc_unless_negative(atomic_long_t *v) return raw_atomic_long_inc_unless_negative(v); } +/** + * atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_unless_positive() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_long_dec_unless_positive(atomic_long_t *v) { @@ -1931,6 +4690,16 @@ atomic_long_dec_unless_positive(atomic_long_t *v) return raw_atomic_long_dec_unless_positive(v); } +/** + * atomic_long_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_if_positive() there. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */ static __always_inline long atomic_long_dec_if_positive(atomic_long_t *v) { @@ -2231,4 +5000,4 @@ atomic_long_dec_if_positive(atomic_long_t *v) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c +// 92b07cc6336f94f5511e9e16f184d28f9d97be95 diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index f564f71ff8afc..c1dc7a6a0f85f 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -21,6 +21,16 @@ typedef atomic_t atomic_long_t; #define atomic_long_cond_read_relaxed atomic_cond_read_relaxed #endif +/** + * raw_atomic_long_read() - atomic load with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_read() elsewhere. + * + * Return: the value loaded from @v + */ static __always_inline long raw_atomic_long_read(const atomic_long_t *v) { @@ -31,6 +41,16 @@ raw_atomic_long_read(const atomic_long_t *v) #endif } +/** + * raw_atomic_long_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_read_acquire() elsewhere. + * + * Return: the value loaded from @v + */ static __always_inline long raw_atomic_long_read_acquire(const atomic_long_t *v) { @@ -41,6 +61,17 @@ raw_atomic_long_read_acquire(const atomic_long_t *v) #endif } +/** + * raw_atomic_long_set() - atomic set with relaxed ordering + * @v: pointer to atomic_long_t + * @i: long value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_set() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_set(atomic_long_t *v, long i) { @@ -51,6 +82,17 @@ raw_atomic_long_set(atomic_long_t *v, long i) #endif } +/** + * raw_atomic_long_set_release() - atomic set with release ordering + * @v: pointer to atomic_long_t + * @i: long value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_set_release() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_set_release(atomic_long_t *v, long i) { @@ -61,6 +103,17 @@ raw_atomic_long_set_release(atomic_long_t *v, long i) #endif } +/** + * raw_atomic_long_add() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_add(long i, atomic_long_t *v) { @@ -71,6 +124,17 @@ raw_atomic_long_add(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_return() - atomic add with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_add_return(long i, atomic_long_t *v) { @@ -81,6 +145,17 @@ raw_atomic_long_add_return(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_return_acquire() - atomic add with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_long_add_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { @@ -91,6 +166,17 @@ raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_return_release() - atomic add with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_add_return_release(long i, atomic_long_t *v) { @@ -101,6 +187,17 @@ raw_atomic_long_add_return_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_return_relaxed() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { @@ -111,6 +208,17 @@ raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add() - atomic add with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_add(long i, atomic_long_t *v) { @@ -121,6 +229,17 @@ raw_atomic_long_fetch_add(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add_acquire() - atomic add with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { @@ -131,6 +250,17 @@ raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add_release() - atomic add with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { @@ -141,6 +271,17 @@ raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add_relaxed() elsewhere. + * + * Return: The old value of @v. 
+ */ static __always_inline long raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { @@ -151,6 +292,17 @@ raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_sub(long i, atomic_long_t *v) { @@ -161,6 +313,17 @@ raw_atomic_long_sub(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub_return() - atomic subtract with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_sub_return(long i, atomic_long_t *v) { @@ -171,6 +334,17 @@ raw_atomic_long_sub_return(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub_return_acquire() - atomic subtract with acquire ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { @@ -181,6 +355,17 @@ raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub_return_release() - atomic subtract with release ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { @@ -191,6 +376,17 @@ raw_atomic_long_sub_return_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { @@ -201,6 +397,17 @@ raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_sub() - atomic subtract with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_sub() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { @@ -211,6 +418,17 @@ raw_atomic_long_fetch_sub(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_acquire() elsewhere. + * + * Return: The old value of @v. 
+ */ static __always_inline long raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { @@ -221,6 +439,17 @@ raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_sub_release() - atomic subtract with release ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { @@ -231,6 +460,17 @@ raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { @@ -241,6 +481,16 @@ raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_inc(atomic_long_t *v) { @@ -251,6 +501,16 @@ raw_atomic_long_inc(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_return() - atomic increment with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_inc_return(atomic_long_t *v) { @@ -261,6 +521,16 @@ raw_atomic_long_inc_return(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_return_acquire() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_inc_return_acquire(atomic_long_t *v) { @@ -271,6 +541,16 @@ raw_atomic_long_inc_return_acquire(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_inc_return_release(atomic_long_t *v) { @@ -281,6 +561,16 @@ raw_atomic_long_inc_return_release(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_return_relaxed() elsewhere. + * + * Return: the new value of @v. 
+ */ static __always_inline long raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { @@ -291,6 +581,16 @@ raw_atomic_long_inc_return_relaxed(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_inc() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_inc(atomic_long_t *v) { @@ -301,6 +601,16 @@ raw_atomic_long_fetch_inc(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { @@ -311,6 +621,16 @@ raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_inc_release(atomic_long_t *v) { @@ -321,6 +641,16 @@ raw_atomic_long_fetch_inc_release(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { @@ -331,6 +661,16 @@ raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_dec(atomic_long_t *v) { @@ -341,6 +681,16 @@ raw_atomic_long_dec(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_return() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_dec_return(atomic_long_t *v) { @@ -351,6 +701,16 @@ raw_atomic_long_dec_return(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_return_acquire() elsewhere. + * + * Return: the new value of @v. 
+ */ static __always_inline long raw_atomic_long_dec_return_acquire(atomic_long_t *v) { @@ -361,6 +721,16 @@ raw_atomic_long_dec_return_acquire(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_return_release() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_dec_return_release(atomic_long_t *v) { @@ -371,6 +741,16 @@ raw_atomic_long_dec_return_release(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_return_relaxed() elsewhere. + * + * Return: the new value of @v. + */ static __always_inline long raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { @@ -381,6 +761,16 @@ raw_atomic_long_dec_return_relaxed(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_dec() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_dec(atomic_long_t *v) { @@ -391,6 +781,16 @@ raw_atomic_long_fetch_dec(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { @@ -401,6 +801,16 @@ raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_dec_release(atomic_long_t *v) { @@ -411,6 +821,16 @@ raw_atomic_long_fetch_dec_release(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { @@ -421,6 +841,17 @@ raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) #endif } +/** + * raw_atomic_long_and() - atomic bitwise AND with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_and() elsewhere. + * + * Return: nothing. 
+ */ static __always_inline void raw_atomic_long_and(long i, atomic_long_t *v) { @@ -431,6 +862,17 @@ raw_atomic_long_and(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_and() - atomic bitwise AND with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_and() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_and(long i, atomic_long_t *v) { @@ -441,6 +883,17 @@ raw_atomic_long_fetch_and(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_and_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { @@ -451,6 +904,17 @@ raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_and_release() - atomic bitwise AND with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_and_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { @@ -461,6 +925,17 @@ raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_and_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { @@ -471,6 +946,17 @@ raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_andnot() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_andnot(long i, atomic_long_t *v) { @@ -481,6 +967,17 @@ raw_atomic_long_andnot(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { @@ -491,6 +988,17 @@ raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_acquire() elsewhere. + * + * Return: The old value of @v. 
+ */ static __always_inline long raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { @@ -501,6 +1009,17 @@ raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { @@ -511,6 +1030,17 @@ raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { @@ -521,6 +1051,17 @@ raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_or() - atomic bitwise OR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_or() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_or(long i, atomic_long_t *v) { @@ -531,6 +1072,17 @@ raw_atomic_long_or(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_or() - atomic bitwise OR with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_or() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_or(long i, atomic_long_t *v) { @@ -541,6 +1093,17 @@ raw_atomic_long_fetch_or(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_or_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { @@ -551,6 +1114,17 @@ raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_or_release() - atomic bitwise OR with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_or_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { @@ -561,6 +1135,17 @@ raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_or_relaxed() elsewhere. + * + * Return: The old value of @v. 
+ */ static __always_inline long raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { @@ -571,6 +1156,17 @@ raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_xor() - atomic bitwise XOR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xor() elsewhere. + * + * Return: nothing. + */ static __always_inline void raw_atomic_long_xor(long i, atomic_long_t *v) { @@ -581,6 +1177,17 @@ raw_atomic_long_xor(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_xor() - atomic bitwise XOR with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_xor() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { @@ -591,6 +1198,17 @@ raw_atomic_long_fetch_xor(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_acquire() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { @@ -601,6 +1219,17 @@ raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_release() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { @@ -611,6 +1240,17 @@ raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_relaxed() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { @@ -621,6 +1261,17 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_xchg() - atomic exchange with full ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xchg() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline long raw_atomic_long_xchg(atomic_long_t *v, long new) { @@ -631,6 +1282,17 @@ raw_atomic_long_xchg(atomic_long_t *v, long new) #endif } +/** + * raw_atomic_long_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xchg_acquire() elsewhere. + * + * Return: the old value of @v. 
+ */ static __always_inline long raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) { @@ -641,6 +1303,17 @@ raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) #endif } +/** + * raw_atomic_long_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xchg_release() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline long raw_atomic_long_xchg_release(atomic_long_t *v, long new) { @@ -651,6 +1324,17 @@ raw_atomic_long_xchg_release(atomic_long_t *v, long new) #endif } +/** + * raw_atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xchg_relaxed() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline long raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) { @@ -661,6 +1345,18 @@ raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) #endif } +/** + * raw_atomic_long_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_cmpxchg() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline long raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { @@ -671,6 +1367,18 @@ raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) #endif } +/** + * raw_atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_acquire() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline long raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { @@ -681,6 +1389,18 @@ raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) #endif } +/** + * raw_atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_release() elsewhere. + * + * Return: the old value of @v. + */ static __always_inline long raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { @@ -691,6 +1411,18 @@ raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) #endif } +/** + * raw_atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_relaxed() elsewhere. + * + * Return: the old value of @v. 
+ */ static __always_inline long raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { @@ -701,6 +1433,19 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) #endif } +/** + * raw_atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { @@ -711,6 +1456,19 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) #endif } +/** + * raw_atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_acquire() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { @@ -721,6 +1479,19 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) #endif } +/** + * raw_atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_release() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { @@ -731,6 +1502,19 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) #endif } +/** + * raw_atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_relaxed() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { @@ -741,6 +1525,17 @@ raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) #endif } +/** + * raw_atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */ static __always_inline bool raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { @@ -751,6 +1546,16 @@ raw_atomic_long_sub_and_test(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_long_dec_and_test(atomic_long_t *v) { @@ -761,6 +1566,16 @@ raw_atomic_long_dec_and_test(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_long_inc_and_test(atomic_long_t *v) { @@ -771,6 +1586,17 @@ raw_atomic_long_inc_and_test(atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_negative() - atomic add and test if negative with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_negative(long i, atomic_long_t *v) { @@ -781,6 +1607,17 @@ raw_atomic_long_add_negative(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative_acquire() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { @@ -791,6 +1628,17 @@ raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_negative_release() - atomic add and test if negative with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative_release() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { @@ -801,6 +1649,17 @@ raw_atomic_long_add_negative_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative_relaxed() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. 
+ */ static __always_inline bool raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { @@ -811,6 +1670,18 @@ raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_long_t + * @a: long value to add + * @u: long value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add_unless() elsewhere. + * + * Return: The old value of @v. + */ static __always_inline long raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { @@ -821,6 +1692,18 @@ raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) #endif } +/** + * raw_atomic_long_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_long_t + * @a: long value to add + * @u: long value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_unless() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { @@ -831,6 +1714,16 @@ raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) #endif } +/** + * raw_atomic_long_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic_long_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_not_zero() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_long_inc_not_zero(atomic_long_t *v) { @@ -841,6 +1734,16 @@ raw_atomic_long_inc_not_zero(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic_long_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_unless_negative() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_long_inc_unless_negative(atomic_long_t *v) { @@ -851,6 +1754,16 @@ raw_atomic_long_inc_unless_negative(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_unless_positive() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_long_dec_unless_positive(atomic_long_t *v) { @@ -861,6 +1774,16 @@ raw_atomic_long_dec_unless_positive(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_if_positive() elsewhere. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated.
+/** + * raw_atomic_long_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_if_positive() elsewhere. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ static __always_inline long raw_atomic_long_dec_if_positive(atomic_long_t *v) { @@ -872,4 +1795,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v) } #endif /* _LINUX_ATOMIC_LONG_H */ -// e785d25cc3f220b7d473d36aac9da85dd7eb13a8 +// ac6d232f716fae95b5f018bb1188e7edfcf8c48a diff --git a/scripts/atomic/atomic-tbl.sh b/scripts/atomic/atomic-tbl.sh index 81d5c32039dd4..9a42647719598 100755 --- a/scripts/atomic/atomic-tbl.sh +++ b/scripts/atomic/atomic-tbl.sh @@ -36,9 +36,16 @@ meta_has_relaxed() meta_in "$1" "BFIR" } -#find_fallback_template(pfx, name, sfx, order) -find_fallback_template() +#meta_is_implicitly_relaxed(meta) +meta_is_implicitly_relaxed() +{ + meta_in "$1" "vls" +} + +#find_template(tmpltype, pfx, name, sfx, order) +find_template() { + local tmpltype="$1"; shift local pfx="$1"; shift local name="$1"; shift local sfx="$1"; shift @@ -52,8 +59,8 @@ find_fallback_template() # # Start at the most specific, and fall back to the most general. Once # we find a specific fallback, don't bother looking for more. - for base in "${pfx}${name}${sfx}${order}" "${name}"; do - file="${ATOMICDIR}/fallbacks/${base}" + for base in "${pfx}${name}${sfx}${order}" "${pfx}${name}${sfx}" "${name}"; do + file="${ATOMICDIR}/${tmpltype}/${base}" if [ -f "${file}" ]; then printf "${file}" @@ -62,6 +69,18 @@ find_fallback_template() done } +#find_fallback_template(pfx, name, sfx, order) +find_fallback_template() +{ + find_template "fallbacks" "$@" +} + +#find_kerneldoc_template(pfx, name, sfx, order) +find_kerneldoc_template() +{ + find_template "kerneldoc" "$@" +} + #gen_ret_type(meta, int) gen_ret_type() { local meta="$1"; shift @@ -142,6 +161,91 @@ gen_args() done } +#gen_desc_return(meta) +gen_desc_return() +{ + local meta="$1"; shift + + case "${meta}" in + [v]) + printf "Return: Nothing." + ;; + [Ff]) + printf "Return: The old value of @v." + ;; + [R]) + printf "Return: The new value of @v." + ;; + [l]) + printf "Return: The value of @v." + ;; + esac } + +#gen_template_kerneldoc(template, class, meta, pfx, name, sfx, order, atomic, int, args...) +gen_template_kerneldoc() +{ + local template="$1"; shift + local class="$1"; shift + local meta="$1"; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local order="$1"; shift + local atomic="$1"; shift + local int="$1"; shift + + local atomicname="${atomic}_${pfx}${name}${sfx}${order}" + + local ret="$(gen_ret_type "${meta}" "${int}")" + local retstmt="$(gen_ret_stmt "${meta}")" + local params="$(gen_params "${int}" "${atomic}" "$@")" + local args="$(gen_args "$@")" + local desc_order="" + local desc_noinstr="" + local desc_return="" + + if [ ! -z "${order}" ]; then + desc_order="${order##_}" + elif meta_is_implicitly_relaxed "${meta}"; then + desc_order="relaxed" + else + desc_order="full" + fi + + if [ -z "${class}" ]; then + desc_noinstr="Unsafe to use in noinstr code; use raw_${atomicname}() there." + else + desc_noinstr="Safe to use in noinstr code; prefer ${atomicname}() elsewhere." + fi + + desc_return="$(gen_desc_return "${meta}")" + + . ${template} +}
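The probe order in find_template() is what lets one shared template serve a whole family of variants: the acquire form of fetch_add is looked up as fetch_add_acquire, then fetch_add, then plain add. A standalone sketch of that lookup, assuming a hypothetical ATOMICDIR layout:

#!/bin/sh
# Sketch of find_template()'s most-specific-first probe order.
ATOMICDIR=./scripts/atomic

lookup()
{
	pfx="$1"; name="$2"; sfx="$3"; order="$4"

	for base in "${pfx}${name}${sfx}${order}" "${pfx}${name}${sfx}" "${name}"; do
		if [ -f "${ATOMICDIR}/kerneldoc/${base}" ]; then
			printf 'found template: %s\n' "${base}"
			return 0
		fi
	done
	printf 'no template for %s\n' "${pfx}${name}${sfx}${order}"
	return 1
}

# fetch_add_acquire probes fetch_add_acquire, fetch_add, then add, so a
# single shared fetch_add template can document all four orderings.
lookup "fetch_" "add" "" "_acquire"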
+ +#gen_kerneldoc(class, meta, pfx, name, sfx, order, atomic, int, args...) +gen_kerneldoc() +{ + local class="$1"; shift + local meta="$1"; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local order="$1"; shift + + local atomicname="${atomic}_${pfx}${name}${sfx}${order}" + + local tmpl="$(find_kerneldoc_template "${pfx}" "${name}" "${sfx}" "${order}")" + if [ -z "${tmpl}" ]; then + printf "/*\n" + printf " * No kerneldoc available for ${class}${atomicname}\n" + printf " */\n" + else + gen_template_kerneldoc "${tmpl}" "${class}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" + fi +} + #gen_proto_order_variants(meta, pfx, name, sfx, ...) gen_proto_order_variants() { diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh index 2b470d31e3539..c0c8a85d7c81b 100755 --- a/scripts/atomic/gen-atomic-fallback.sh +++ b/scripts/atomic/gen-atomic-fallback.sh @@ -73,6 +73,8 @@ gen_proto_order_variant() local params="$(gen_params "${int}" "${atomic}" "$@")" local args="$(gen_args "$@")" + + gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@" + printf "static __always_inline ${ret}\n" printf "raw_${atomicname}(${params})\n" printf "{\n" diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh index 93c949aa9e544..9d3863ceb4d48 100755 --- a/scripts/atomic/gen-atomic-instrumented.sh +++ b/scripts/atomic/gen-atomic-instrumented.sh @@ -67,6 +67,8 @@ gen_proto_order_variant() local checks="$(gen_params_checks "${meta}" "${order}" "$@")" local args="$(gen_args "$@")" local retstmt="$(gen_ret_stmt "${meta}")" + + gen_kerneldoc "" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@" + cat <<EOF diff --git a/scripts/atomic/kerneldoc/dec_if_positive b/scripts/atomic/kerneldoc/dec_if_positive new file mode 100644 --- /dev/null +++ b/scripts/atomic/kerneldoc/dec_if_positive @@ -0,0 +1,12 @@ +cat <<EOF +/** + * ${class}${atomicname}() - atomic decrement if positive with ${desc_order} ordering + * @v: pointer to ${atomic}_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with ${desc_order} ordering. + * + * ${desc_noinstr} + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ +EOF diff --git a/scripts/atomic/kerneldoc/dec_unless_positive b/scripts/atomic/kerneldoc/dec_unless_positive new file mode 100644 index 0000000000000..ee73612f03547 --- /dev/null +++ b/scripts/atomic/kerneldoc/dec_unless_positive @@ -0,0 +1,12 @@ +cat <<EOF +/** + * ${class}${atomicname}() - atomic decrement unless positive with ${desc_order} ordering + * @v: pointer to ${atomic}_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with ${desc_order} ordering. + * + * ${desc_noinstr} + * + * Return: @true if @v was updated, @false otherwise. + */ +EOF
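Note that each kerneldoc template is itself shell: gen_template_kerneldoc() computes the desc_* variables and then sources the file with '. ${template}', so the template's heredoc expands in the caller's scope. A toy version of the mechanism (the /tmp path and the abbreviated one-line template are illustrative only):

#!/bin/sh
# Write a stand-in template: a script that cats a heredoc whose
# variables are supplied by whoever sources it.
cat > /tmp/kdoc-tmpl <<'OUTER'
cat <<EOF
/**
 * ${class}${atomicname}() - atomic decrement unless positive with ${desc_order} ordering
 */
EOF
OUTER

# The generator sets the variables, then sources the template to emit
# the expanded kerneldoc comment.
class="raw_"
atomicname="atomic_long_dec_unless_positive"
desc_order="full"
. /tmp/kdoc-tmpl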
diff --git a/scripts/atomic/kerneldoc/inc_unless_negative b/scripts/atomic/kerneldoc/inc_unless_negative new file mode 100644 --- /dev/null +++ b/scripts/atomic/kerneldoc/inc_unless_negative @@ -0,0 +1,12 @@ +cat <<EOF +/** + * ${class}${atomicname}() - atomic increment unless negative with ${desc_order} ordering + * @v: pointer to ${atomic}_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with ${desc_order} ordering. + * + * ${desc_noinstr} + * + * Return: @true if @v was updated, @false otherwise. + */ +EOF diff --git a/scripts/atomic/kerneldoc/or b/scripts/atomic/kerneldoc/or new file mode 100644 index 0000000000000..55b33de504165 --- /dev/null +++ b/scripts/atomic/kerneldoc/or @@ -0,0 +1,13 @@ +cat <<EOF +/** + * ${class}${atomicname}() - atomic bitwise OR with ${desc_order} ordering + * @i: ${int} value + * @v: pointer to ${atomic}_t + * + * Atomically updates @v to (@v | @i) with ${desc_order} ordering. + * + * ${desc_noinstr} + * + * ${desc_return} + */ +EOF From patchwork Mon May 22 12:24:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 97418
From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH 25/26] locking/atomic: docs: Add atomic operations to the driver basic API documentation Date: Mon, 22 May 2023 13:24:28 +0100 Message-Id: <20230522122429.1915021-26-mark.rutland@arm.com> In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com> References: <20230522122429.1915021-1-mark.rutland@arm.com> From: "Paul E. McKenney" Add the generated atomic headers to driver-api/basics.rst in order to provide documentation for the Linux kernel's atomic operations. Signed-off-by: Paul E. McKenney Cc: Jonathan Corbet Cc: Kees Cook Cc: Akira Yokosawa Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Cc: Mark Rutland Reviewed-by: Kees Cook [Mark: add atomic-long.h] Signed-off-by: Mark Rutland --- Documentation/driver-api/basics.rst | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst index 4b4d8e28d3be4..a1fbd97fb79fb 100644 --- a/Documentation/driver-api/basics.rst +++ b/Documentation/driver-api/basics.rst @@ -87,6 +87,12 @@ Atomics .. kernel-doc:: arch/x86/include/asm/atomic.h :internal: +.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h + :internal: + +.. kernel-doc:: include/linux/atomic/atomic-long.h + :internal: + Kernel objects manipulation ---------------------------
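With both generated headers pulled in via kernel-doc directives, the comments land in the driver-api book. To rebuild just that book from a configured tree (standard kbuild doc targets; the output path follows the in-tree Sphinx setup):

$ make htmldocs SPHINXDIRS=driver-api
$ xdg-open Documentation/output/driver-api/basics.html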
From patchwork Mon May 22 12:24:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 97403
From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux-arch@vger.kernel.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, paulmck@kernel.org, peterz@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH 26/26] locking/atomic: treewide: delete arch_atomic_*() kerneldoc Date: Mon, 22 May 2023 13:24:29 +0100 Message-Id: <20230522122429.1915021-27-mark.rutland@arm.com> In-Reply-To: <20230522122429.1915021-1-mark.rutland@arm.com> References: <20230522122429.1915021-1-mark.rutland@arm.com> Currently several architectures have kerneldoc comments for arch_atomic_*(), which is unhelpful as these live in a shared namespace where they clash, and the arch_atomic_*() ops are now an implementation detail of the raw_atomic_*() ops, which no-one should use directly. Delete the kerneldoc comments for arch_atomic_*(), along with pseudo-kerneldoc comments which are in the correct style but are missing the leading '/**' necessary to be true kerneldoc comments. Drop x86's asm/atomic.h from Documentation/driver-api/basics.rst as it no longer contains any relevant kerneldoc comments. Signed-off-by: Mark Rutland Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Cc: Paul E. McKenney
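The layering this relies on, in brief: ordinary code calls atomic_*(), which adds instrumentation and forwards to raw_atomic_*(), which in turn resolves to an arch_atomic_*() op or a generic fallback; only the outer two are API. A sketch of the intended call pattern, with a hypothetical event counter and noinstr entry hook:

#include <linux/atomic.h>

static atomic_t events = ATOMIC_INIT(0);

/* Ordinary kernel code: use the instrumented API. */
void record_event(void)
{
	atomic_inc(&events);
}

/* noinstr code must avoid instrumentation: use the raw_ API. */
noinstr void record_event_from_entry(void)
{
	raw_atomic_inc(&events);
}

/* arch_atomic_inc() is now an implementation detail: never call it. */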
--- Documentation/driver-api/basics.rst | 3 - arch/alpha/include/asm/atomic.h | 25 -------- arch/arc/include/asm/atomic64-arcv2.h | 17 ------ arch/hexagon/include/asm/atomic.h | 16 ----- arch/loongarch/include/asm/atomic.h | 49 --------------- arch/x86/include/asm/atomic.h | 87 --------------------------- arch/x86/include/asm/atomic64_32.h | 76 ----------------------- arch/x86/include/asm/atomic64_64.h | 81 ------------------------- 8 files changed, 354 deletions(-) diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst index a1fbd97fb79fb..d8e4e5c82bcf0 100644 --- a/Documentation/driver-api/basics.rst +++ b/Documentation/driver-api/basics.rst @@ -84,9 +84,6 @@ Reference counting Atomics ------- -.. kernel-doc:: arch/x86/include/asm/atomic.h - :internal: - .. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h :internal: diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index ec8ab552c527a..cbd9244571af0 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -200,15 +200,6 @@ ATOMIC_OPS(xor, xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -/** - * arch_atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, new, old; @@ -232,15 +223,6 @@ static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) } #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless -/** - * arch_atomic64_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { s64 c, new, old; @@ -264,13 +246,6 @@ static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u } #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless -/* - * arch_atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic_t - * - * The function returns the old value of *v minus 1, even if - * the atomic variable, v, was not decremented. - */ static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) { s64 old, tmp; diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h index 2b7c9e61a2947..6b6db981967ae 100644 --- a/arch/arc/include/asm/atomic64-arcv2.h +++ b/arch/arc/include/asm/atomic64-arcv2.h @@ -182,14 +182,6 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new) } #define arch_atomic64_xchg arch_atomic64_xchg -/** - * arch_atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic64_t - * - * The function returns the old value of *v minus 1, even if - * the atomic variable, v, was not decremented. - */ - static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) { s64 val; @@ -214,15 +206,6 @@ static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) } #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive -/** - * arch_atomic64_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v...
- * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, if it was not @u. - * Returns the old value of @v - */ static inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { s64 old, temp; diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index 5c8440016c762..2447d083c432f 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -28,12 +28,6 @@ static inline void arch_atomic_set(atomic_t *v, int new) #define arch_atomic_set_release(v, i) arch_atomic_set((v), (i)) -/** - * arch_atomic_read - reads a word, atomically - * @v: pointer to atomic value - * - * Assumes all word reads on our architecture are atomic. - */ #define arch_atomic_read(v) READ_ONCE((v)->counter) #define ATOMIC_OP(op) \ @@ -112,16 +106,6 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -/** - * arch_atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer to value - * @a: amount to add - * @u: unless value is equal to u - * - * Returns old value. - * - */ - static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) { int __oldval; diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h index 8d73c85911b08..e27f0c72d3242 100644 --- a/arch/loongarch/include/asm/atomic.h +++ b/arch/loongarch/include/asm/atomic.h @@ -29,21 +29,7 @@ #define ATOMIC_INIT(i) { (i) } -/* - * arch_atomic_read - read atomic variable - * @v: pointer of type atomic_t - * - * Atomically reads the value of @v. - */ #define arch_atomic_read(v) READ_ONCE((v)->counter) - -/* - * arch_atomic_set - set atomic variable - * @v: pointer of type atomic_t - * @i: required value - * - * Atomically sets the value of @v to @i. - */ #define arch_atomic_set(v, i) WRITE_ONCE((v)->counter, (i)) #define ATOMIC_OP(op, I, asm_op) \ @@ -139,14 +125,6 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) } #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless -/* - * arch_atomic_sub_if_positive - conditionally subtract integer from atomic variable - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically test @v and subtract @i if @v is greater or equal than @i. - * The function returns the old value of @v minus @i. - */ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v) { int result; @@ -181,28 +159,13 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v) return result; } -/* - * arch_atomic_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic_t - */ #define arch_atomic_dec_if_positive(v) arch_atomic_sub_if_positive(1, v) #ifdef CONFIG_64BIT #define ATOMIC64_INIT(i) { (i) } -/* - * arch_atomic64_read - read atomic variable - * @v: pointer of type atomic64_t - * - */ #define arch_atomic64_read(v) READ_ONCE((v)->counter) - -/* - * arch_atomic64_set - set atomic variable - * @v: pointer of type atomic64_t - * @i: required value - */ #define arch_atomic64_set(v, i) WRITE_ONCE((v)->counter, (i)) #define ATOMIC64_OP(op, I, asm_op) \ @@ -297,14 +260,6 @@ static inline long arch_atomic64_fetch_add_unless(atomic64_t *v, long a, long u) } #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless -/* - * arch_atomic64_sub_if_positive - conditionally subtract integer from atomic variable - * @i: integer value to subtract - * @v: pointer of type atomic64_t - * - * Atomically test @v and subtract @i if @v is greater or equal than @i. 
- * The function returns the old value of @v minus @i. - */ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v) { long result; @@ -339,10 +294,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v) return result; } -/* - * arch_atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic64_t - */ #define arch_atomic64_dec_if_positive(v) arch_atomic64_sub_if_positive(1, v) #endif /* CONFIG_64BIT */ diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index 5e754e8957671..55a55ec043502 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -14,12 +14,6 @@ * resource counting etc.. */ -/** - * arch_atomic_read - read atomic variable - * @v: pointer of type atomic_t - * - * Atomically reads the value of @v. - */ static __always_inline int arch_atomic_read(const atomic_t *v) { /* @@ -29,25 +23,11 @@ static __always_inline int arch_atomic_read(const atomic_t *v) return __READ_ONCE((v)->counter); } -/** - * arch_atomic_set - set atomic variable - * @v: pointer of type atomic_t - * @i: required value - * - * Atomically sets the value of @v to @i. - */ static __always_inline void arch_atomic_set(atomic_t *v, int i) { __WRITE_ONCE(v->counter, i); } -/** - * arch_atomic_add - add integer to atomic variable - * @i: integer value to add - * @v: pointer of type atomic_t - * - * Atomically adds @i to @v. - */ static __always_inline void arch_atomic_add(int i, atomic_t *v) { asm volatile(LOCK_PREFIX "addl %1,%0" @@ -55,13 +35,6 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v) : "ir" (i) : "memory"); } -/** - * arch_atomic_sub - subtract integer from atomic variable - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v. - */ static __always_inline void arch_atomic_sub(int i, atomic_t *v) { asm volatile(LOCK_PREFIX "subl %1,%0" @@ -69,27 +42,12 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v) : "ir" (i) : "memory"); } -/** - * arch_atomic_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, e, "er", i); } #define arch_atomic_sub_and_test arch_atomic_sub_and_test -/** - * arch_atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. - */ static __always_inline void arch_atomic_inc(atomic_t *v) { asm volatile(LOCK_PREFIX "incl %0" @@ -97,12 +55,6 @@ static __always_inline void arch_atomic_inc(atomic_t *v) } #define arch_atomic_inc arch_atomic_inc -/** - * arch_atomic_dec - decrement atomic variable - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ static __always_inline void arch_atomic_dec(atomic_t *v) { asm volatile(LOCK_PREFIX "decl %0" @@ -110,69 +62,30 @@ static __always_inline void arch_atomic_dec(atomic_t *v) } #define arch_atomic_dec arch_atomic_dec -/** - * arch_atomic_dec_and_test - decrement and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. 
- */ static __always_inline bool arch_atomic_dec_and_test(atomic_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, e); } #define arch_atomic_dec_and_test arch_atomic_dec_and_test -/** - * arch_atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic_inc_and_test(atomic_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, e); } #define arch_atomic_inc_and_test arch_atomic_inc_and_test -/** - * arch_atomic_add_negative - add and test if negative - * @i: integer value to add - * @v: pointer of type atomic_t - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. - */ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, s, "er", i); } #define arch_atomic_add_negative arch_atomic_add_negative -/** - * arch_atomic_add_return - add integer and return - * @i: integer value to add - * @v: pointer of type atomic_t - * - * Atomically adds @i to @v and returns @i + @v - */ static __always_inline int arch_atomic_add_return(int i, atomic_t *v) { return i + xadd(&v->counter, i); } #define arch_atomic_add_return arch_atomic_add_return -/** - * arch_atomic_sub_return - subtract integer and return - * @v: pointer of type atomic_t - * @i: integer value to subtract - * - * Atomically subtracts @i from @v and returns @v - @i - */ static __always_inline int arch_atomic_sub_return(int i, atomic_t *v) { return arch_atomic_add_return(-i, v); diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 808b4eece251e..3486d91b8595f 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -61,30 +61,12 @@ ATOMIC64_DECL(add_unless); #undef __ATOMIC64_DECL #undef ATOMIC64_EXPORT -/** - * arch_atomic64_cmpxchg - cmpxchg atomic64 variable - * @v: pointer to type atomic64_t - * @o: expected value - * @n: new value - * - * Atomically sets @v to @n if it was equal to @o and returns - * the old value. - */ - static __always_inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n) { return arch_cmpxchg64(&v->counter, o, n); } #define arch_atomic64_cmpxchg arch_atomic64_cmpxchg -/** - * arch_atomic64_xchg - xchg atomic64 variable - * @v: pointer to type atomic64_t - * @n: value to assign - * - * Atomically xchgs the value of @v to @n and returns - * the old value. - */ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n) { s64 o; @@ -97,13 +79,6 @@ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n) } #define arch_atomic64_xchg arch_atomic64_xchg -/** - * arch_atomic64_set - set atomic64 variable - * @v: pointer to type atomic64_t - * @i: value to assign - * - * Atomically sets the value of @v to @n. - */ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i) { unsigned high = (unsigned)(i >> 32); @@ -113,12 +88,6 @@ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i) : "eax", "edx", "memory"); } -/** - * arch_atomic64_read - read atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically reads the value of @v and returns it. 
- */ static __always_inline s64 arch_atomic64_read(const atomic64_t *v) { s64 r; @@ -126,13 +95,6 @@ static __always_inline s64 arch_atomic64_read(const atomic64_t *v) return r; } -/** - * arch_atomic64_add_return - add and return - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v and returns @i + *@v - */ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) { alternative_atomic64(add_return, @@ -142,9 +104,6 @@ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) } #define arch_atomic64_add_return arch_atomic64_add_return -/* - * Other variants with different arithmetic operators: - */ static __always_inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v) { alternative_atomic64(sub_return, @@ -172,13 +131,6 @@ static __always_inline s64 arch_atomic64_dec_return(atomic64_t *v) } #define arch_atomic64_dec_return arch_atomic64_dec_return -/** - * arch_atomic64_add - add integer to atomic64 variable - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v. - */ static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v) { __alternative_atomic64(add, add_return, @@ -187,13 +139,6 @@ static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v) return i; } -/** - * arch_atomic64_sub - subtract the atomic64 variable - * @i: integer value to subtract - * @v: pointer to type atomic64_t - * - * Atomically subtracts @i from @v. - */ static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v) { __alternative_atomic64(sub, sub_return, @@ -202,12 +147,6 @@ static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v) return i; } -/** - * arch_atomic64_inc - increment atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically increments @v by 1. - */ static __always_inline void arch_atomic64_inc(atomic64_t *v) { __alternative_atomic64(inc, inc_return, /* no output */, @@ -215,12 +154,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v) } #define arch_atomic64_inc arch_atomic64_inc -/** - * arch_atomic64_dec - decrement atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically decrements @v by 1. - */ static __always_inline void arch_atomic64_dec(atomic64_t *v) { __alternative_atomic64(dec, dec_return, /* no output */, @@ -228,15 +161,6 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v) } #define arch_atomic64_dec arch_atomic64_dec -/** - * arch_atomic64_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns non-zero if the add was done, zero otherwise. - */ static __always_inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { unsigned low = (unsigned)u; diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index c496595bf6012..3165c0feedf74 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -10,37 +10,16 @@ #define ATOMIC64_INIT(i) { (i) } -/** - * arch_atomic64_read - read atomic64 variable - * @v: pointer of type atomic64_t - * - * Atomically reads the value of @v. - * Doesn't imply a read memory barrier. 
- */ static __always_inline s64 arch_atomic64_read(const atomic64_t *v) { return __READ_ONCE((v)->counter); } -/** - * arch_atomic64_set - set atomic64 variable - * @v: pointer to type atomic64_t - * @i: required value - * - * Atomically sets the value of @v to @i. - */ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i) { __WRITE_ONCE(v->counter, i); } -/** - * arch_atomic64_add - add integer to atomic64 variable - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v. - */ static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "addq %1,%0" @@ -48,13 +27,6 @@ static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v) : "er" (i), "m" (v->counter) : "memory"); } -/** - * arch_atomic64_sub - subtract the atomic64 variable - * @i: integer value to subtract - * @v: pointer to type atomic64_t - * - * Atomically subtracts @i from @v. - */ static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "subq %1,%0" @@ -62,27 +34,12 @@ static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v) : "er" (i), "m" (v->counter) : "memory"); } -/** - * arch_atomic64_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer to type atomic64_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i); } #define arch_atomic64_sub_and_test arch_atomic64_sub_and_test -/** - * arch_atomic64_inc - increment atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically increments @v by 1. - */ static __always_inline void arch_atomic64_inc(atomic64_t *v) { asm volatile(LOCK_PREFIX "incq %0" @@ -91,12 +48,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v) } #define arch_atomic64_inc arch_atomic64_inc -/** - * arch_atomic64_dec - decrement atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically decrements @v by 1. - */ static __always_inline void arch_atomic64_dec(atomic64_t *v) { asm volatile(LOCK_PREFIX "decq %0" @@ -105,56 +56,24 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v) } #define arch_atomic64_dec arch_atomic64_dec -/** - * arch_atomic64_dec_and_test - decrement and test - * @v: pointer to type atomic64_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ static __always_inline bool arch_atomic64_dec_and_test(atomic64_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, e); } #define arch_atomic64_dec_and_test arch_atomic64_dec_and_test -/** - * arch_atomic64_inc_and_test - increment and test - * @v: pointer to type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic64_inc_and_test(atomic64_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, e); } #define arch_atomic64_inc_and_test arch_atomic64_inc_and_test -/** - * arch_atomic64_add_negative - add and test if negative - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. 
- */ static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i); } #define arch_atomic64_add_negative arch_atomic64_add_negative -/** - * arch_atomic64_add_return - add and return - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v and returns @i + @v - */ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) { return i + xadd(&v->counter, i);