From patchwork Mon Jun 5 07:00:58 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 103106
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 01/27] locking/atomic: arm: fix sync ops
Date: Mon, 5 Jun 2023 08:00:58 +0100
Message-Id: <20230605070124.3741859-2-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.

Fix this by defining sync ops with the required barriers.

Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.

Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Russell King
Cc: Stefano Stabellini
Cc: Will Deacon
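To make the ordering requirement concrete, here is a sketch in portable
C11 (illustrative only; sync_cmpxchg_sketch() is a made-up name, not a
kernel interface) of the fully-ordered shape the patch uses for
arch_sync_cmpxchg(): a full barrier, a relaxed RMW, and another full
barrier, so the ordering survives even when the RMW itself carries no
barriers:

#include <stdatomic.h>

/* Sketch: a fully-ordered compare-and-exchange built from a relaxed
 * RMW plus explicit fences, mirroring arch_sync_cmpxchg() below. */
static inline int sync_cmpxchg_sketch(atomic_int *p, int old, int new)
{
        int expected = old;

        atomic_thread_fence(memory_order_seq_cst);      /* __smp_mb__before_atomic() */
        atomic_compare_exchange_strong_explicit(p, &expected, new,
                                                memory_order_relaxed,
                                                memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);      /* __smp_mb__after_atomic() */

        return expected;        /* the observed old value, as cmpxchg returns */
}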
---
 arch/arm/include/asm/assembler.h   | 17 +++++++++++++++++
 arch/arm/include/asm/sync_bitops.h | 29 +++++++++++++++++++++++++----
 arch/arm/lib/bitops.h              | 14 +++++++++++---
 arch/arm/lib/testchangebit.S       |  4 ++++
 arch/arm/lib/testclearbit.S        |  4 ++++
 arch/arm/lib/testsetbit.S          |  4 ++++
 6 files changed, 65 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 505a306e0271a..aebe2c8f6a686 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -394,6 +394,23 @@ ALT_UP_B(.L0_\@)
 #endif
         .endm

+/*
+ * Raw SMP data memory barrier
+ */
+        .macro  __smp_dmb mode
+#if __LINUX_ARM_ARCH__ >= 7
+        .ifeqs "\mode","arm"
+        dmb     ish
+        .else
+        W(dmb)  ish
+        .endif
+#elif __LINUX_ARM_ARCH__ == 6
+        mcr     p15, 0, r0, c7, c10, 5  @ dmb
+#else
+        .error "Incompatible SMP platform"
+#endif
+        .endm
+
 #if defined(CONFIG_CPU_V7M)
 /*
  * setmode is used to assert to be in svc mode during boot. For v7-M
diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
index 6f5d627c44a3c..f46b3c570f92e 100644
--- a/arch/arm/include/asm/sync_bitops.h
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -14,14 +14,35 @@
  * ops which are SMP safe even on a UP kernel.
  */

+/*
+ * Unordered
+ */
+
 #define sync_set_bit(nr, p)             _set_bit(nr, p)
 #define sync_clear_bit(nr, p)           _clear_bit(nr, p)
 #define sync_change_bit(nr, p)          _change_bit(nr, p)
-#define sync_test_and_set_bit(nr, p)    _test_and_set_bit(nr, p)
-#define sync_test_and_clear_bit(nr, p)  _test_and_clear_bit(nr, p)
-#define sync_test_and_change_bit(nr, p) _test_and_change_bit(nr, p)
 #define sync_test_bit(nr, addr)         test_bit(nr, addr)
-#define arch_sync_cmpxchg               arch_cmpxchg

+/*
+ * Fully ordered
+ */
+
+int _sync_test_and_set_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_set_bit(nr, p)    _sync_test_and_set_bit(nr, p)
+
+int _sync_test_and_clear_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_clear_bit(nr, p)  _sync_test_and_clear_bit(nr, p)
+
+int _sync_test_and_change_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_change_bit(nr, p) _sync_test_and_change_bit(nr, p)
+
+#define arch_sync_cmpxchg(ptr, old, new)                                \
+({                                                                      \
+        __typeof__(*(ptr)) __ret;                                       \
+        __smp_mb__before_atomic();                                      \
+        __ret = arch_cmpxchg_relaxed((ptr), (old), (new));              \
+        __smp_mb__after_atomic();                                       \
+        __ret;                                                          \
+})

 #endif
diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
index 95bd359912889..f069d1b2318e6 100644
--- a/arch/arm/lib/bitops.h
+++ b/arch/arm/lib/bitops.h
@@ -28,7 +28,7 @@ UNWIND( .fnend          )
 ENDPROC(\name           )
         .endm

-        .macro  testop, name, instr, store
+        .macro  __testop, name, instr, store, barrier
 ENTRY(  \name           )
 UNWIND( .fnstart        )
         ands    ip, r1, #3
@@ -38,7 +38,7 @@ UNWIND( .fnstart        )
         mov     r0, r0, lsr #5
         add     r1, r1, r0, lsl #2      @ Get word offset
         mov     r3, r2, lsl r3          @ create mask
-        smp_dmb
+        \barrier
 #if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
         .arch_extension mp
 ALT_SMP(W(pldw) [r1])
@@ -50,13 +50,21 @@ UNWIND( .fnstart        )
         strex   ip, r2, [r1]
         cmp     ip, #0
         bne     1b
-        smp_dmb
+        \barrier
         cmp     r0, #0
         movne   r0, #1
2:      bx      lr
UNWIND( .fnend          )
ENDPROC(\name           )
        .endm
+
+        .macro  testop, name, instr, store
+        __testop \name, \instr, \store, smp_dmb
+        .endm
+
+        .macro  sync_testop, name, instr, store
+        __testop \name, \instr, \store, __smp_dmb
+        .endm
#else
        .macro  bitop, name, instr
ENTRY(  \name           )
diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
index 4ebecc67e6e04..f13fe9bc2399a 100644
--- a/arch/arm/lib/testchangebit.S
+++ b/arch/arm/lib/testchangebit.S
@@ -10,3 +10,7 @@
                 .text

 testop  _test_and_change_bit, eor, str
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop     _sync_test_and_change_bit, eor, str
+#endif
diff --git a/arch/arm/lib/testclearbit.S b/arch/arm/lib/testclearbit.S
index 009afa0f5b4a7..4d2c5ca620ebf 100644
--- a/arch/arm/lib/testclearbit.S
+++ b/arch/arm/lib/testclearbit.S
@@ -10,3 +10,7 @@
                 .text

 testop  _test_and_clear_bit, bicne, strne
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop     _sync_test_and_clear_bit, bicne, strne
+#endif
diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
index f3192e55acc87..649dbab65d8d0 100644
--- a/arch/arm/lib/testsetbit.S
+++ b/arch/arm/lib/testsetbit.S
@@ -10,3 +10,7 @@
                 .text

 testop  _test_and_set_bit, orreq, streq
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop     _sync_test_and_set_bit, orreq, streq
+#endif

From patchwork Mon Jun 5 07:00:59 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 103105
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 02/27] locking/atomic: remove fallback comments
Date: Mon, 5 Jun 2023 08:00:59 +0100
Message-Id: <20230605070124.3741859-3-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Currently a subset of the fallback templates have kerneldoc comments,
resulting in a haphazard set of generated kerneldoc comments, as only
some operations have fallback templates to begin with.

We'd like to generate more consistent kerneldoc comments, and to do so
we'll need to restructure the way the fallback code is generated.

To minimize churn and to make it easier to restructure the fallback
code, this patch removes the existing kerneldoc comments from the
fallback templates. We can add new kerneldoc comments in subsequent
patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
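For reference, each template in scripts/atomic/fallbacks/ expands into a
plain generated function in atomic-arch-fallback.h; the diff below only
shows the comments being deleted, so the following is a sketch of the
assumed shape of one generated fallback (the exact body is not quoted
in this patch):

/* Assumed shape of a generated fallback; the kerneldoc block this
 * patch removes sat immediately above the function definition. */
#ifndef arch_atomic_sub_and_test
static __always_inline bool
arch_atomic_sub_and_test(int i, atomic_t *v)
{
        return arch_atomic_sub_return(i, v) == 0;
}
#define arch_atomic_sub_and_test arch_atomic_sub_and_test
#endif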
---
 include/linux/atomic/atomic-arch-fallback.h | 166 +-------------------
 scripts/atomic/fallbacks/add_negative       |   8 -
 scripts/atomic/fallbacks/add_unless         |   9 --
 scripts/atomic/fallbacks/dec_and_test       |   8 -
 scripts/atomic/fallbacks/fetch_add_unless   |   9 --
 scripts/atomic/fallbacks/inc_and_test       |   8 -
 scripts/atomic/fallbacks/inc_not_zero       |   7 -
 scripts/atomic/fallbacks/sub_and_test       |   9 --
 8 files changed, 1 insertion(+), 223 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1722ddb6f17e0..3ce4cb5e790c5 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1272,15 +1272,6 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 #endif /* arch_atomic_try_cmpxchg_relaxed */

 #ifndef arch_atomic_sub_and_test
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic_sub_and_test(int i, atomic_t *v)
 {
@@ -1290,14 +1281,6 @@ arch_atomic_sub_and_test(int i, atomic_t *v)
 #endif

 #ifndef arch_atomic_dec_and_test
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_atomic_dec_and_test(atomic_t *v)
 {
@@ -1307,14 +1290,6 @@ arch_atomic_dec_and_test(atomic_t *v)
 #endif

 #ifndef arch_atomic_inc_and_test
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic_inc_and_test(atomic_t *v)
 {
@@ -1331,14 +1306,6 @@ arch_atomic_inc_and_test(atomic_t *v)
 #endif /* arch_atomic_add_negative */

 #ifndef arch_atomic_add_negative
-/**
- * arch_atomic_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative(int i, atomic_t *v)
 {
@@ -1348,14 +1315,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
 #endif

 #ifndef arch_atomic_add_negative_acquire
-/**
- * arch_atomic_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_acquire(int i, atomic_t *v)
 {
@@ -1365,14 +1324,6 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
 #endif

 #ifndef arch_atomic_add_negative_release
-/**
- * arch_atomic_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_release(int i, atomic_t *v)
 {
@@ -1382,14 +1333,6 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
 #endif

 #ifndef arch_atomic_add_negative_relaxed
-/**
- * arch_atomic_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_relaxed(int i, atomic_t *v)
 {
@@ -1437,15 +1380,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
 #endif /* arch_atomic_add_negative_relaxed */

 #ifndef arch_atomic_fetch_add_unless
-/**
- * arch_atomic_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline int
 arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
@@ -1462,15 +1396,6 @@ arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #endif

 #ifndef arch_atomic_add_unless
-/**
- * arch_atomic_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_atomic_add_unless(atomic_t *v, int a, int u)
 {
@@ -1480,13 +1405,6 @@ arch_atomic_add_unless(atomic_t *v, int a, int u)
 #endif

 #ifndef arch_atomic_inc_not_zero
-/**
- * arch_atomic_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_atomic_inc_not_zero(atomic_t *v)
 {
@@ -2488,15 +2406,6 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 #endif /* arch_atomic64_try_cmpxchg_relaxed */

 #ifndef arch_atomic64_sub_and_test
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
@@ -2506,14 +2415,6 @@ arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 #endif

 #ifndef arch_atomic64_dec_and_test
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer of type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_atomic64_dec_and_test(atomic64_t *v)
 {
@@ -2523,14 +2424,6 @@ arch_atomic64_dec_and_test(atomic64_t *v)
 #endif

 #ifndef arch_atomic64_inc_and_test
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic64_inc_and_test(atomic64_t *v)
 {
@@ -2547,14 +2440,6 @@ arch_atomic64_inc_and_test(atomic64_t *v)
 #endif /* arch_atomic64_add_negative */

 #ifndef arch_atomic64_add_negative
-/**
- * arch_atomic64_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
@@ -2564,14 +2449,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
 #endif

 #ifndef arch_atomic64_add_negative_acquire
-/**
- * arch_atomic64_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 {
@@ -2581,14 +2458,6 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 #endif

 #ifndef arch_atomic64_add_negative_release
-/**
- * arch_atomic64_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 {
@@ -2598,14 +2467,6 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 #endif

 #ifndef arch_atomic64_add_negative_relaxed
-/**
- * arch_atomic64_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
 {
@@ -2653,15 +2514,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_add_negative_relaxed */

 #ifndef arch_atomic64_fetch_add_unless
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline s64
 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
@@ -2678,15 +2530,6 @@ arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 #endif

 #ifndef arch_atomic64_add_unless
-/**
- * arch_atomic64_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
@@ -2696,13 +2539,6 @@ arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 #endif

 #ifndef arch_atomic64_inc_not_zero
-/**
- * arch_atomic64_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_atomic64_inc_not_zero(atomic64_t *v)
 {
@@ -2761,4 +2597,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif

 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 52dfc6fe4a2e7234bbd2aa3e16a377c1db793a53
+// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e5980abf5904e..d0bd2dfbb244c 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,12 +1,4 @@
 cat <

From patchwork Mon Jun 5 07:01:00 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 103127
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 03/27] locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
Date: Mon, 5 Jun 2023 08:01:00 +0100
Message-Id: <20230605070124.3741859-4-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Hexagon's implementation of arch_atomic_cmpxchg() is identical to its
implementation of arch_cmpxchg(). Have it define arch_atomic_cmpxchg()
in terms of arch_cmpxchg(), matching what it does for arch_atomic_xchg()
and arch_xchg().

At the same time, remove the kerneldoc comments for hexagon's
arch_atomic_xchg() and arch_atomic_cmpxchg(). The arch_atomic_*()
namespace is shared by all architectures and the API should be
documented centrally, and the comments aren't all that helpful as-is.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
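The semantics being preserved are those of cmpxchg: store the new value
only if the current value equals the expected old value, and return the
value actually observed. In portable C11 (a sketch only, not kernel
code; cmpxchg_sketch() is an illustrative name):

#include <stdatomic.h>

/* Sketch of cmpxchg semantics: returns the observed old value,
 * matching the role of __oldval in the asm being removed below. */
static int cmpxchg_sketch(atomic_int *p, int old, int new)
{
        int expected = old;

        atomic_compare_exchange_strong(p, &expected, new);
        return expected;
}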
---
 arch/hexagon/include/asm/atomic.h | 46 +++----------------------------
 1 file changed, 4 insertions(+), 42 deletions(-)

diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 6e94f8d04146f..738857e10d6ec 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,49 +36,11 @@ static inline void arch_atomic_set(atomic_t *v, int new)
  */
 #define arch_atomic_read(v)             READ_ONCE((v)->counter)

-/**
- * arch_atomic_xchg - atomic
- * @v: pointer to memory to change
- * @new: new value (technically passed in a register -- see xchg)
- */
-#define arch_atomic_xchg(v, new)        (arch_xchg(&((v)->counter), (new)))
-
-
-/**
- * arch_atomic_cmpxchg - atomic compare-and-exchange values
- * @v: pointer to value to change
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Parameters are then pointer, value-in-register, value-in-register,
- * and the output is the old value.
- *
- * Apparently this is complicated for archs that don't support
- * the memw_locked like we do (or it's broken or whatever).
- *
- * Kind of the lynchpin of the rest of the generically defined routines.
- * Remember V2 had that bug with dotnew predicate set by memw_locked.
- *
- * "old" is "expected" old val, __oldval is actual old value
- */
-static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
-        int __oldval;
+#define arch_atomic_xchg(v, new) \
+        (arch_xchg(&((v)->counter), (new)))

-        asm volatile(
-                "1:     %0 = memw_locked(%1);\n"
-                "       { P0 = cmp.eq(%0,%2);\n"
-                "         if (!P0.new) jump:nt 2f; }\n"
-                "       memw_locked(%1,P0) = %3;\n"
-                "       if (!P0) jump 1b;\n"
-                "2:\n"
-                : "=&r" (__oldval)
-                : "r" (&v->counter), "r" (old), "r" (new)
-                : "memory", "p0"
-        );
-
-        return __oldval;
-}
+#define arch_atomic_cmpxchg(v, old, new) \
+        (arch_cmpxchg(&((v)->counter), (old), (new)))

 #define ATOMIC_OP(op)                                                   \
 static inline void arch_atomic_##op(int i, atomic_t *v)                 \

From patchwork Mon Jun 5 07:01:01 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 103111
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 04/27] locking/atomic: make atomic*_{cmp,}xchg optional
Date: Mon, 5 Jun 2023 08:01:01 +0100
Message-Id: <20230605070124.3741859-5-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Most architectures define the atomic/atomic64 xchg and cmpxchg
operations in terms of arch_xchg and arch_cmpxchg respectively.

Add fallbacks for these cases and remove the trivial cases from arch
code. On some architectures the existing definitions are kept, as these
are used to build other arch_atomic*() operations.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
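Concretely, after this change an architecture that implements
arch_xchg()/arch_cmpxchg() no longer needs any per-arch
arch_atomic_{xchg,cmpxchg}() boilerplate; the generated header supplies
a fallback with the shape added by this patch, for example:

#ifndef arch_atomic_xchg
static __always_inline int
arch_atomic_xchg(atomic_t *v, int new)
{
        /* Fall back to the arch's plain xchg on the counter word. */
        return arch_xchg(&v->counter, new);
}
#define arch_atomic_xchg arch_atomic_xchg
#endif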
McKenney Cc: Peter Zijlstra Cc: Will Deacon --- arch/alpha/include/asm/atomic.h | 10 -- arch/arc/include/asm/atomic.h | 24 --- arch/arc/include/asm/atomic64-arcv2.h | 2 + arch/arm/include/asm/atomic.h | 3 +- arch/arm64/include/asm/atomic.h | 28 ---- arch/csky/include/asm/atomic.h | 35 ----- arch/hexagon/include/asm/atomic.h | 6 - arch/ia64/include/asm/atomic.h | 7 - arch/loongarch/include/asm/atomic.h | 7 - arch/m68k/include/asm/atomic.h | 9 +- arch/mips/include/asm/atomic.h | 11 -- arch/openrisc/include/asm/atomic.h | 3 - arch/parisc/include/asm/atomic.h | 9 -- arch/powerpc/include/asm/atomic.h | 24 --- arch/riscv/include/asm/atomic.h | 72 --------- arch/sh/include/asm/atomic.h | 3 - arch/sparc/include/asm/atomic_32.h | 2 + arch/sparc/include/asm/atomic_64.h | 11 -- arch/xtensa/include/asm/atomic.h | 3 - include/asm-generic/atomic.h | 3 - include/linux/atomic/atomic-arch-fallback.h | 158 +++++++++++++++++++- scripts/atomic/fallbacks/cmpxchg | 7 + scripts/atomic/fallbacks/xchg | 7 + 23 files changed, 179 insertions(+), 265 deletions(-) create mode 100755 scripts/atomic/fallbacks/cmpxchg create mode 100755 scripts/atomic/fallbacks/xchg diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index f2861a43a61ef..ec8ab552c527a 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -200,16 +200,6 @@ ATOMIC_OPS(xor, xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define arch_atomic64_cmpxchg(v, old, new) \ - (arch_cmpxchg(&((v)->counter), old, new)) -#define arch_atomic64_xchg(v, new) \ - (arch_xchg(&((v)->counter), new)) - -#define arch_atomic_cmpxchg(v, old, new) \ - (arch_cmpxchg(&((v)->counter), old, new)) -#define arch_atomic_xchg(v, new) \ - (arch_xchg(&((v)->counter), new)) - /** * arch_atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 52ee51e1ff7c2..592d7fffc223c 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -22,30 +22,6 @@ #include #endif -#define arch_atomic_cmpxchg(v, o, n) \ -({ \ - arch_cmpxchg(&((v)->counter), (o), (n)); \ -}) - -#ifdef arch_cmpxchg_relaxed -#define arch_atomic_cmpxchg_relaxed(v, o, n) \ -({ \ - arch_cmpxchg_relaxed(&((v)->counter), (o), (n)); \ -}) -#endif - -#define arch_atomic_xchg(v, n) \ -({ \ - arch_xchg(&((v)->counter), (n)); \ -}) - -#ifdef arch_xchg_relaxed -#define arch_atomic_xchg_relaxed(v, n) \ -({ \ - arch_xchg_relaxed(&((v)->counter), (n)); \ -}) -#endif - /* * 64-bit atomics */ diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h index c5a8010fdc97d..2b7c9e61a2947 100644 --- a/arch/arc/include/asm/atomic64-arcv2.h +++ b/arch/arc/include/asm/atomic64-arcv2.h @@ -159,6 +159,7 @@ arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new) return prev; } +#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new) { @@ -179,6 +180,7 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new) return prev; } +#define arch_atomic64_xchg arch_atomic64_xchg /** * arch_atomic64_dec_if_positive - decrement by 1 if old value positive diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index db8512d9a918d..9458d47ff209c 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -210,6 +210,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new) return ret; } +#define arch_atomic_cmpxchg 
arch_atomic_cmpxchg #define arch_atomic_fetch_andnot arch_atomic_fetch_andnot @@ -240,8 +241,6 @@ ATOMIC_OPS(xor, ^=, eor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new)) - #ifndef CONFIG_GENERIC_ATOMIC64 typedef struct { s64 counter; diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index c9979273d3898..400d279e0f8d0 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -142,24 +142,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v) #define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release #define arch_atomic_fetch_xor arch_atomic_fetch_xor -#define arch_atomic_xchg_relaxed(v, new) \ - arch_xchg_relaxed(&((v)->counter), (new)) -#define arch_atomic_xchg_acquire(v, new) \ - arch_xchg_acquire(&((v)->counter), (new)) -#define arch_atomic_xchg_release(v, new) \ - arch_xchg_release(&((v)->counter), (new)) -#define arch_atomic_xchg(v, new) \ - arch_xchg(&((v)->counter), (new)) - -#define arch_atomic_cmpxchg_relaxed(v, old, new) \ - arch_cmpxchg_relaxed(&((v)->counter), (old), (new)) -#define arch_atomic_cmpxchg_acquire(v, old, new) \ - arch_cmpxchg_acquire(&((v)->counter), (old), (new)) -#define arch_atomic_cmpxchg_release(v, old, new) \ - arch_cmpxchg_release(&((v)->counter), (old), (new)) -#define arch_atomic_cmpxchg(v, old, new) \ - arch_cmpxchg(&((v)->counter), (old), (new)) - #define arch_atomic_andnot arch_atomic_andnot /* @@ -209,16 +191,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v) #define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release #define arch_atomic64_fetch_xor arch_atomic64_fetch_xor -#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed -#define arch_atomic64_xchg_acquire arch_atomic_xchg_acquire -#define arch_atomic64_xchg_release arch_atomic_xchg_release -#define arch_atomic64_xchg arch_atomic_xchg - -#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed -#define arch_atomic64_cmpxchg_acquire arch_atomic_cmpxchg_acquire -#define arch_atomic64_cmpxchg_release arch_atomic_cmpxchg_release -#define arch_atomic64_cmpxchg arch_atomic_cmpxchg - #define arch_atomic64_andnot arch_atomic64_andnot #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive diff --git a/arch/csky/include/asm/atomic.h b/arch/csky/include/asm/atomic.h index 60406ef9c2bbc..4dab44f6143a5 100644 --- a/arch/csky/include/asm/atomic.h +++ b/arch/csky/include/asm/atomic.h @@ -195,41 +195,6 @@ arch_atomic_dec_if_positive(atomic_t *v) } #define arch_atomic_dec_if_positive arch_atomic_dec_if_positive -#define ATOMIC_OP() \ -static __always_inline \ -int arch_atomic_xchg_relaxed(atomic_t *v, int n) \ -{ \ - return __xchg_relaxed(n, &(v->counter), 4); \ -} \ -static __always_inline \ -int arch_atomic_cmpxchg_relaxed(atomic_t *v, int o, int n) \ -{ \ - return __cmpxchg_relaxed(&(v->counter), o, n, 4); \ -} \ -static __always_inline \ -int arch_atomic_cmpxchg_acquire(atomic_t *v, int o, int n) \ -{ \ - return __cmpxchg_acquire(&(v->counter), o, n, 4); \ -} \ -static __always_inline \ -int arch_atomic_cmpxchg(atomic_t *v, int o, int n) \ -{ \ - return __cmpxchg(&(v->counter), o, n, 4); \ -} - -#define ATOMIC_OPS() \ - ATOMIC_OP() - -ATOMIC_OPS() - -#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed -#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed -#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire -#define arch_atomic_cmpxchg arch_atomic_cmpxchg - -#undef 
ATOMIC_OPS -#undef ATOMIC_OP - #else #include #endif diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index 738857e10d6ec..ad6c111e9c10f 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -36,12 +36,6 @@ static inline void arch_atomic_set(atomic_t *v, int new) */ #define arch_atomic_read(v) READ_ONCE((v)->counter) -#define arch_atomic_xchg(v, new) \ - (arch_xchg(&((v)->counter), (new))) - -#define arch_atomic_cmpxchg(v, old, new) \ - (arch_cmpxchg(&((v)->counter), (old), (new))) - #define ATOMIC_OP(op) \ static inline void arch_atomic_##op(int i, atomic_t *v) \ { \ diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 266c429b91372..6540a628d2573 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -207,13 +207,6 @@ ATOMIC64_FETCH_OP(xor, ^) #undef ATOMIC64_FETCH_OP #undef ATOMIC64_OP -#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), old, new)) -#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new)) - -#define arch_atomic64_cmpxchg(v, old, new) \ - (arch_cmpxchg(&((v)->counter), old, new)) -#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new)) - #define arch_atomic_add(i,v) (void)arch_atomic_add_return((i), (v)) #define arch_atomic_sub(i,v) (void)arch_atomic_sub_return((i), (v)) diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h index 6b9aca9ab6e9f..8d73c85911b08 100644 --- a/arch/loongarch/include/asm/atomic.h +++ b/arch/loongarch/include/asm/atomic.h @@ -181,9 +181,6 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v) return result; } -#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), (new))) - /* * arch_atomic_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic_t @@ -342,10 +339,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v) return result; } -#define arch_atomic64_cmpxchg(v, o, n) \ - ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), (new))) - /* * arch_atomic64_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic64_t diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h index cfba83d230fde..190a032f19be7 100644 --- a/arch/m68k/include/asm/atomic.h +++ b/arch/m68k/include/asm/atomic.h @@ -158,12 +158,7 @@ static inline int arch_atomic_inc_and_test(atomic_t *v) } #define arch_atomic_inc_and_test arch_atomic_inc_and_test -#ifdef CONFIG_RMW_INSNS - -#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new)) - -#else /* !CONFIG_RMW_INSNS */ +#ifndef CONFIG_RMW_INSNS static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new) { @@ -177,6 +172,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new) local_irq_restore(flags); return prev; } +#define arch_atomic_cmpxchg arch_atomic_cmpxchg static inline int arch_atomic_xchg(atomic_t *v, int new) { @@ -189,6 +185,7 @@ static inline int arch_atomic_xchg(atomic_t *v, int new) local_irq_restore(flags); return prev; } +#define arch_atomic_xchg arch_atomic_xchg #endif /* !CONFIG_RMW_INSNS */ diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 712fb5a6a5682..ba188e77768b2 100644 --- 
a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -33,17 +33,6 @@ static __always_inline void arch_##pfx##_set(pfx##_t *v, type i) \ { \ WRITE_ONCE(v->counter, i); \ } \ - \ -static __always_inline type \ -arch_##pfx##_cmpxchg(pfx##_t *v, type o, type n) \ -{ \ - return arch_cmpxchg(&v->counter, o, n); \ -} \ - \ -static __always_inline type arch_##pfx##_xchg(pfx##_t *v, type n) \ -{ \ - return arch_xchg(&v->counter, n); \ -} ATOMIC_OPS(atomic, int) diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h index 326167e4783a9..8ce67ec7c9a30 100644 --- a/arch/openrisc/include/asm/atomic.h +++ b/arch/openrisc/include/asm/atomic.h @@ -130,7 +130,4 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) #include -#define arch_atomic_xchg(ptr, v) (arch_xchg(&(ptr)->counter, (v))) -#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (old), (new))) - #endif /* __ASM_OPENRISC_ATOMIC_H */ diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index dd5a299ada695..0b3f64c92e3c0 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -73,10 +73,6 @@ static __inline__ int arch_atomic_read(const atomic_t *v) return READ_ONCE((v)->counter); } -/* exported interface */ -#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new)) - #define ATOMIC_OP(op, c_op) \ static __inline__ void arch_atomic_##op(int i, atomic_t *v) \ { \ @@ -218,11 +214,6 @@ arch_atomic64_read(const atomic64_t *v) return READ_ONCE((v)->counter); } -/* exported interface */ -#define arch_atomic64_cmpxchg(v, o, n) \ - ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new)) - #endif /* !CONFIG_64BIT */ diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 47228b1774781..5bf6a4d49268c 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -126,18 +126,6 @@ ATOMIC_OPS(xor, xor, "", K) #undef ATOMIC_OP_RETURN_RELAXED #undef ATOMIC_OP -#define arch_atomic_cmpxchg(v, o, n) \ - (arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic_cmpxchg_relaxed(v, o, n) \ - arch_cmpxchg_relaxed(&((v)->counter), (o), (n)) -#define arch_atomic_cmpxchg_acquire(v, o, n) \ - arch_cmpxchg_acquire(&((v)->counter), (o), (n)) - -#define arch_atomic_xchg(v, new) \ - (arch_xchg(&((v)->counter), new)) -#define arch_atomic_xchg_relaxed(v, new) \ - arch_xchg_relaxed(&((v)->counter), (new)) - /** * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t @@ -396,18 +384,6 @@ static __inline__ s64 arch_atomic64_dec_if_positive(atomic64_t *v) } #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive -#define arch_atomic64_cmpxchg(v, o, n) \ - (arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic64_cmpxchg_relaxed(v, o, n) \ - arch_cmpxchg_relaxed(&((v)->counter), (o), (n)) -#define arch_atomic64_cmpxchg_acquire(v, o, n) \ - arch_cmpxchg_acquire(&((v)->counter), (o), (n)) - -#define arch_atomic64_xchg(v, new) \ - (arch_xchg(&((v)->counter), new)) -#define arch_atomic64_xchg_relaxed(v, new) \ - arch_xchg_relaxed(&((v)->counter), (new)) - /** * atomic64_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 
bba472928b539..f5dfef6c2153f 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -238,78 +238,6 @@ static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless #endif -/* - * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as - * {cmp,}xchg and the operations that return, so they need a full barrier. - */ -#define ATOMIC_OP(c_t, prefix, size) \ -static __always_inline \ -c_t arch_atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n) \ -{ \ - return __xchg_relaxed(&(v->counter), n, size); \ -} \ -static __always_inline \ -c_t arch_atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n) \ -{ \ - return __xchg_acquire(&(v->counter), n, size); \ -} \ -static __always_inline \ -c_t arch_atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n) \ -{ \ - return __xchg_release(&(v->counter), n, size); \ -} \ -static __always_inline \ -c_t arch_atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n) \ -{ \ - return __arch_xchg(&(v->counter), n, size); \ -} \ -static __always_inline \ -c_t arch_atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v, \ - c_t o, c_t n) \ -{ \ - return __cmpxchg_relaxed(&(v->counter), o, n, size); \ -} \ -static __always_inline \ -c_t arch_atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v, \ - c_t o, c_t n) \ -{ \ - return __cmpxchg_acquire(&(v->counter), o, n, size); \ -} \ -static __always_inline \ -c_t arch_atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v, \ - c_t o, c_t n) \ -{ \ - return __cmpxchg_release(&(v->counter), o, n, size); \ -} \ -static __always_inline \ -c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \ -{ \ - return __cmpxchg(&(v->counter), o, n, size); \ -} - -#ifdef CONFIG_GENERIC_ATOMIC64 -#define ATOMIC_OPS() \ - ATOMIC_OP(int, , 4) -#else -#define ATOMIC_OPS() \ - ATOMIC_OP(int, , 4) \ - ATOMIC_OP(s64, 64, 8) -#endif - -ATOMIC_OPS() - -#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed -#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire -#define arch_atomic_xchg_release arch_atomic_xchg_release -#define arch_atomic_xchg arch_atomic_xchg -#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed -#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire -#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release -#define arch_atomic_cmpxchg arch_atomic_cmpxchg - -#undef ATOMIC_OPS -#undef ATOMIC_OP - static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v) { int prev, rc; diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h index 528bfeda78f56..7a18cb2a1c1ac 100644 --- a/arch/sh/include/asm/atomic.h +++ b/arch/sh/include/asm/atomic.h @@ -30,9 +30,6 @@ #include #endif -#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new)) -#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n))) - #endif /* CONFIG_CPU_J2 */ #endif /* __ASM_SH_ATOMIC_H */ diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h index d775daa83d129..1c9e6c7366e41 100644 --- a/arch/sparc/include/asm/atomic_32.h +++ b/arch/sparc/include/asm/atomic_32.h @@ -24,7 +24,9 @@ int arch_atomic_fetch_and(int, atomic_t *); int arch_atomic_fetch_or(int, atomic_t *); int arch_atomic_fetch_xor(int, atomic_t *); int arch_atomic_cmpxchg(atomic_t *, int, int); +#define arch_atomic_cmpxchg arch_atomic_cmpxchg int arch_atomic_xchg(atomic_t *, int); +#define arch_atomic_xchg 
arch_atomic_xchg int arch_atomic_fetch_add_unless(atomic_t *, int, int); void arch_atomic_set(atomic_t *, int); diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 077891686715a..df6a8b07d7e63 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -49,17 +49,6 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n))) - -static inline int arch_atomic_xchg(atomic_t *v, int new) -{ - return arch_xchg(&v->counter, new); -} - -#define arch_atomic64_cmpxchg(v, o, n) \ - ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new)) - s64 arch_atomic64_dec_if_positive(atomic64_t *v); #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h index 52da614f953ce..1d323a864002c 100644 --- a/arch/xtensa/include/asm/atomic.h +++ b/arch/xtensa/include/asm/atomic.h @@ -257,7 +257,4 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n))) -#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new)) - #endif /* _XTENSA_ATOMIC_H */ diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h index e271d6708c876..22142c71d35a1 100644 --- a/include/asm-generic/atomic.h +++ b/include/asm-generic/atomic.h @@ -130,7 +130,4 @@ ATOMIC_OP(xor, ^) #define arch_atomic_read(v) READ_ONCE((v)->counter) #define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i)) -#define arch_atomic_xchg(ptr, v) (arch_xchg(&(ptr)->counter, (u32)(v))) -#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (u32)(old), (u32)(new))) - #endif /* __ASM_GENERIC_ATOMIC_H */ diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h index 3ce4cb5e790c5..1a2d81dbc2e48 100644 --- a/include/linux/atomic/atomic-arch-fallback.h +++ b/include/linux/atomic/atomic-arch-fallback.h @@ -1091,9 +1091,48 @@ arch_atomic_fetch_xor(int i, atomic_t *v) #endif /* arch_atomic_fetch_xor_relaxed */ #ifndef arch_atomic_xchg_relaxed +#ifdef arch_atomic_xchg #define arch_atomic_xchg_acquire arch_atomic_xchg #define arch_atomic_xchg_release arch_atomic_xchg #define arch_atomic_xchg_relaxed arch_atomic_xchg +#endif /* arch_atomic_xchg */ + +#ifndef arch_atomic_xchg +static __always_inline int +arch_atomic_xchg(atomic_t *v, int new) +{ + return arch_xchg(&v->counter, new); +} +#define arch_atomic_xchg arch_atomic_xchg +#endif + +#ifndef arch_atomic_xchg_acquire +static __always_inline int +arch_atomic_xchg_acquire(atomic_t *v, int new) +{ + return arch_xchg_acquire(&v->counter, new); +} +#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire +#endif + +#ifndef arch_atomic_xchg_release +static __always_inline int +arch_atomic_xchg_release(atomic_t *v, int new) +{ + return arch_xchg_release(&v->counter, new); +} +#define arch_atomic_xchg_release arch_atomic_xchg_release +#endif + +#ifndef arch_atomic_xchg_relaxed +static __always_inline int +arch_atomic_xchg_relaxed(atomic_t *v, int new) +{ + return arch_xchg_relaxed(&v->counter, new); +} +#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed +#endif + #else /* arch_atomic_xchg_relaxed */ #ifndef arch_atomic_xchg_acquire @@ -1133,9 +1172,48 @@ arch_atomic_xchg(atomic_t *v, int i) #endif /* arch_atomic_xchg_relaxed */ #ifndef 
+#ifdef arch_atomic_cmpxchg
 #define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
 #define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
 #define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
+#endif /* arch_atomic_cmpxchg */
+
+#ifndef arch_atomic_cmpxchg
+static __always_inline int
+arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return arch_cmpxchg(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
+#endif
+
+#ifndef arch_atomic_cmpxchg_acquire
+static __always_inline int
+arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+	return arch_cmpxchg_acquire(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#endif
+
+#ifndef arch_atomic_cmpxchg_release
+static __always_inline int
+arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+	return arch_cmpxchg_release(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
+#endif
+
+#ifndef arch_atomic_cmpxchg_relaxed
+static __always_inline int
+arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+	return arch_cmpxchg_relaxed(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#endif
+
 #else /* arch_atomic_cmpxchg_relaxed */
 
 #ifndef arch_atomic_cmpxchg_acquire
@@ -2225,9 +2303,48 @@ arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_fetch_xor_relaxed */
 
 #ifndef arch_atomic64_xchg_relaxed
+#ifdef arch_atomic64_xchg
 #define arch_atomic64_xchg_acquire arch_atomic64_xchg
 #define arch_atomic64_xchg_release arch_atomic64_xchg
 #define arch_atomic64_xchg_relaxed arch_atomic64_xchg
+#endif /* arch_atomic64_xchg */
+
+#ifndef arch_atomic64_xchg
+static __always_inline s64
+arch_atomic64_xchg(atomic64_t *v, s64 new)
+{
+	return arch_xchg(&v->counter, new);
+}
+#define arch_atomic64_xchg arch_atomic64_xchg
+#endif
+
+#ifndef arch_atomic64_xchg_acquire
+static __always_inline s64
+arch_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+	return arch_xchg_acquire(&v->counter, new);
+}
+#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
+#endif
+
+#ifndef arch_atomic64_xchg_release
+static __always_inline s64
+arch_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+	return arch_xchg_release(&v->counter, new);
+}
+#define arch_atomic64_xchg_release arch_atomic64_xchg_release
+#endif
+
+#ifndef arch_atomic64_xchg_relaxed
+static __always_inline s64
+arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+	return arch_xchg_relaxed(&v->counter, new);
+}
+#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
+#endif
+
 #else /* arch_atomic64_xchg_relaxed */
 
 #ifndef arch_atomic64_xchg_acquire
@@ -2267,9 +2384,48 @@ arch_atomic64_xchg(atomic64_t *v, s64 i)
 #endif /* arch_atomic64_xchg_relaxed */
 
 #ifndef arch_atomic64_cmpxchg_relaxed
+#ifdef arch_atomic64_cmpxchg
 #define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
 #define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg
 #define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
+#endif /* arch_atomic64_cmpxchg */
+
+#ifndef arch_atomic64_cmpxchg
+static __always_inline s64
+arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+	return arch_cmpxchg(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
+#endif
+
+#ifndef arch_atomic64_cmpxchg_acquire
+static __always_inline s64
+arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+	return arch_cmpxchg_acquire(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
+#endif
+
+#ifndef arch_atomic64_cmpxchg_release
+static __always_inline s64
+arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+	return arch_cmpxchg_release(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
+#endif
+
+#ifndef arch_atomic64_cmpxchg_relaxed
+static __always_inline s64
+arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+	return arch_cmpxchg_relaxed(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
+#endif
+
 #else /* arch_atomic64_cmpxchg_relaxed */
 
 #ifndef arch_atomic64_cmpxchg_acquire
@@ -2597,4 +2753,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
+// e1cee558cc61cae887890db30fcdf93baca9f498
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
new file mode 100755
index 0000000000000..87cd010f98d58
--- /dev/null
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
+{
+	return arch_cmpxchg${order}(&v->counter, old, new);
+}
+EOF
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
new file mode 100755
index 0000000000000..733b8980b2f3b
--- /dev/null
+++ b/scripts/atomic/fallbacks/xchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
+{
+	return arch_xchg${order}(&v->counter, new);
+}
+EOF
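
For orientation, the net effect of the generated ifdeffery above can be sketched in a few lines of standalone C. This is an illustrative toy, not kernel code: the toy_* names are invented, and the __atomic_exchange_n() compiler builtin stands in for the architecture's real xchg primitive.

#include <stdio.h>

typedef struct { int counter; } toy_atomic_t;

/* Pretend this "architecture" only provides a full-ordered xchg. */
static inline int toy_xchg(int *p, int new_val)
{
	return __atomic_exchange_n(p, new_val, __ATOMIC_SEQ_CST);
}

static inline int toy_atomic_xchg(toy_atomic_t *v, int new_val)
{
	return toy_xchg(&v->counter, new_val);
}
#define toy_atomic_xchg toy_atomic_xchg	/* the opt-in preprocessor symbol */

/*
 * "Fallback" layer: because toy_atomic_xchg is visible to #ifdef, the
 * ordering variants can be aliased to it (a full-ordered op is a legal
 * stand-in for the weaker orderings).
 */
#ifndef toy_atomic_xchg_relaxed
# ifdef toy_atomic_xchg
#  define toy_atomic_xchg_relaxed toy_atomic_xchg
#  define toy_atomic_xchg_acquire toy_atomic_xchg
#  define toy_atomic_xchg_release toy_atomic_xchg
# endif
#endif

int main(void)
{
	toy_atomic_t v = { .counter = 1 };

	printf("old=%d\n", toy_atomic_xchg_relaxed(&v, 2));	/* old=1 */
	printf("now=%d\n", v.counter);				/* now=2 */
	return 0;
}

The real fallback header works the same way, just generated mechanically for every op and ordering by the scripts added above.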

From patchwork Mon Jun 5 07:01:02 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 05/27] locking/atomic: arc: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:02 +0100
Message-Id: <20230605070124.3741859-6-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/arc.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/arc/include/asm/atomic-spinlock.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arc/include/asm/atomic-spinlock.h b/arch/arc/include/asm/atomic-spinlock.h
index 2c830347bfb4e..89d12a60f84c0 100644
--- a/arch/arc/include/asm/atomic-spinlock.h
+++ b/arch/arc/include/asm/atomic-spinlock.h
@@ -81,6 +81,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)					\
 	ATOMIC_OP(op, c_op, asm_op)					\
@@ -92,7 +97,11 @@ ATOMIC_OPS(or, |=, or)
 ATOMIC_OPS(xor, ^=, xor)
 
 #define arch_atomic_andnot arch_atomic_andnot
+
+#define arch_atomic_fetch_and arch_atomic_fetch_and
 #define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
 
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
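
One detail worth spelling out (an editorial sketch, not part of the patch): a plain C declaration or definition is invisible to the preprocessor, so the only way a generic header can probe for an optional op with #ifdef is if the architecture also defines a macro. The conventional trick is to define the symbol to itself, which is harmless to expand and detectable. A standalone toy, with invented toy_* names:

#include <stdio.h>

/* "arch" header: this port does implement fetch_add natively ... */
static int toy_fetch_add(int i, int *p)
{
	return __atomic_fetch_add(p, i, __ATOMIC_SEQ_CST);
}
/* ... and advertises that fact with a self-referential define: */
#define toy_fetch_add toy_fetch_add

/* "generic" header: only instantiate a fallback when no marker is set. */
#ifndef toy_fetch_add
static int toy_fetch_add(int i, int *p)
{
	int old = *p;	/* toy single-threaded fallback */
	*p = old + i;
	return old;
}
#define toy_fetch_add toy_fetch_add
#endif

int main(void)
{
	int x = 40;
	int old = toy_fetch_add(2, &x);

	printf("old=%d new=%d\n", old, x);	/* old=40 new=42 */
	return 0;
}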

From patchwork Mon Jun 5 07:01:03 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 06/27] locking/atomic: arm: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:03 +0100
Message-Id: <20230605070124.3741859-7-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/arm.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/arm/include/asm/atomic.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 9458d47ff209c..f0e3b01afa746 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -197,6 +197,16 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 	return val;							\
 }
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
@@ -212,8 +222,6 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 }
 #define arch_atomic_cmpxchg arch_atomic_cmpxchg
 
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-
 #endif /* __LINUX_ARM_ARCH__ */
 
 #define ATOMIC_OPS(op, c_op, asm_op)					\
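
The arm hunk also illustrates a second property of the idiom: the opt-in defines must sit under the same configuration guard as the implementation, so that builds without the ops do not advertise them. A hedged toy sketch of the pattern; TOY_ARCH_LEVEL stands in for __LINUX_ARM_ARCH__ and everything here is invented:

#include <stdio.h>

#define TOY_ARCH_LEVEL 7	/* imagine this comes from the build config */

#if TOY_ARCH_LEVEL >= 6
/* "native" implementation, only available on newer cores */
static int toy_fetch_andnot(int i, int *p)
{
	return __atomic_fetch_and(p, ~i, __ATOMIC_SEQ_CST);
}
#define toy_fetch_andnot toy_fetch_andnot	/* advertised only when it exists */
#endif

#ifndef toy_fetch_andnot
/* older configs land here and get a generic fallback built from fetch_and */
static int toy_fetch_andnot(int i, int *p)
{
	return __atomic_fetch_and(p, ~i, __ATOMIC_SEQ_CST);
}
#endif

int main(void)
{
	int x = 0xff;
	int old = toy_fetch_andnot(0x0f, &x);

	printf("old=0x%x new=0x%x\n", old, x);	/* old=0xff new=0xf0 */
	return 0;
}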

From patchwork Mon Jun 5 07:01:04 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 07/27] locking/atomic: hexagon: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:04 +0100
Message-Id: <20230605070124.3741859-8-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/hexagon.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/hexagon/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index ad6c111e9c10f..5c8440016c762 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -91,6 +91,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -98,6 +103,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN

From patchwork Mon Jun 5 07:01:05 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 08/27] locking/atomic: m68k: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:05 +0100
Message-Id: <20230605070124.3741859-9-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/m68k.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/m68k/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index 190a032f19be7..4bfbc25f6ecf4 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -106,6 +106,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v)		\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)					\
 	ATOMIC_OP(op, c_op, asm_op)					\
@@ -115,6 +120,10 @@ ATOMIC_OPS(and, &=, and)
 ATOMIC_OPS(or, |=, or)
 ATOMIC_OPS(xor, ^=, eor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN

From patchwork Mon Jun 5 07:01:06 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 09/27] locking/atomic: parisc: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:06 +0100
Message-Id: <20230605070124.3741859-10-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/parisc.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/parisc/include/asm/atomic.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index 0b3f64c92e3c0..d4f023887ff87 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -118,6 +118,11 @@ static __inline__ int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op)						\
 	ATOMIC_OP(op, c_op)						\
@@ -127,6 +132,10 @@ ATOMIC_OPS(and, &=)
 ATOMIC_OPS(or, |=)
 ATOMIC_OPS(xor, ^=)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
@@ -181,6 +190,11 @@ static __inline__ s64 arch_atomic64_fetch_##op(s64 i, atomic64_t *v)	\
 ATOMIC64_OPS(add, +=)
 ATOMIC64_OPS(sub, -=)
 
+#define arch_atomic64_add_return arch_atomic64_add_return
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
 #undef ATOMIC64_OPS
 #define ATOMIC64_OPS(op, c_op)						\
 	ATOMIC64_OP(op, c_op)						\
@@ -190,6 +204,10 @@ ATOMIC64_OPS(and, &=)
 ATOMIC64_OPS(or, |=)
 ATOMIC64_OPS(xor, ^=)
 
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
 #undef ATOMIC64_OPS
 #undef ATOMIC64_FETCH_OP
 #undef ATOMIC64_OP_RETURN
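
For readers who have not met the ATOMIC_OPS()/ATOMIC64_OPS() generation idiom these hunks keep appending to, the following standalone toy shows what such a macro expands into, and why the expansion still needs explicit defines afterwards. Single-threaded stand-ins and invented toy_* names; the real parisc ops are additionally protected by hashed spinlocks:

#include <stdio.h>

#define TOY_OP_RETURN(op, c_op)			\
static int toy_##op##_return(int i, int *p)	\
{						\
	return (*p c_op i);			\
}

#define TOY_FETCH_OP(op, c_op)			\
static int toy_fetch_##op(int i, int *p)	\
{						\
	int old = *p;				\
	*p c_op i;				\
	return old;				\
}

#define TOY_OPS(op, c_op) TOY_OP_RETURN(op, c_op) TOY_FETCH_OP(op, c_op)

TOY_OPS(add, +=)	/* generates toy_add_return() and toy_fetch_add() */
TOY_OPS(sub, -=)	/* generates toy_sub_return() and toy_fetch_sub() */

/* The generated names are functions, invisible to #ifdef, hence: */
#define toy_add_return toy_add_return
#define toy_fetch_add  toy_fetch_add
#define toy_sub_return toy_sub_return
#define toy_fetch_sub  toy_fetch_sub

int main(void)
{
	int x = 10;

	printf("add_return=%d\n", toy_add_return(5, &x));	/* 15, x == 15 */
	printf("fetch_sub=%d\n", toy_fetch_sub(3, &x));		/* 15, x == 12 */
	return 0;
}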

From patchwork Mon Jun 5 07:01:07 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 10/27] locking/atomic: sh: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:07 +0100
Message-Id: <20230605070124.3741859-11-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/sh.
Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/sh/include/asm/atomic-grb.h  | 9 +++++++++
 arch/sh/include/asm/atomic-irq.h  | 9 +++++++++
 arch/sh/include/asm/atomic-llsc.h | 9 +++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/sh/include/asm/atomic-grb.h b/arch/sh/include/asm/atomic-grb.h
index 059791fd394fc..cf1c10f15528b 100644
--- a/arch/sh/include/asm/atomic-grb.h
+++ b/arch/sh/include/asm/atomic-grb.h
@@ -71,6 +71,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -78,6 +83,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-irq.h b/arch/sh/include/asm/atomic-irq.h
index 7665de9d00d0d..b4090cc354935 100644
--- a/arch/sh/include/asm/atomic-irq.h
+++ b/arch/sh/include/asm/atomic-irq.h
@@ -55,6 +55,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op)						\
 	ATOMIC_OP(op, c_op)						\
@@ -64,6 +69,10 @@ ATOMIC_OPS(and, &=)
 ATOMIC_OPS(or, |=)
 ATOMIC_OPS(xor, ^=)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-llsc.h b/arch/sh/include/asm/atomic-llsc.h
index b63dcfbfa14ef..9ef1fb1dd12ee 100644
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -73,6 +73,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -80,6 +85,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN

From patchwork Mon Jun 5 07:01:08 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 11/27] locking/atomic: sparc: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:08 +0100
Message-Id: <20230605070124.3741859-12-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/sparc.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/sparc/include/asm/atomic_32.h | 16 ++++++++++++++--
 arch/sparc/include/asm/atomic_64.h | 18 ++++++++++++++++++
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index 1c9e6c7366e41..60ce2fe57fcd7 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -19,19 +19,31 @@
 #include <asm-generic/atomic64.h>
 
 int arch_atomic_add_return(int, atomic_t *);
+#define arch_atomic_add_return arch_atomic_add_return
+
 int arch_atomic_fetch_add(int, atomic_t *);
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+
 int arch_atomic_fetch_and(int, atomic_t *);
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+
 int arch_atomic_fetch_or(int, atomic_t *);
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+
 int arch_atomic_fetch_xor(int, atomic_t *);
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 int arch_atomic_cmpxchg(atomic_t *, int, int);
 #define arch_atomic_cmpxchg arch_atomic_cmpxchg
+
 int arch_atomic_xchg(atomic_t *, int);
 #define arch_atomic_xchg arch_atomic_xchg
 
-int arch_atomic_fetch_add_unless(atomic_t *, int, int);
-void arch_atomic_set(atomic_t *, int);
+int arch_atomic_fetch_add_unless(atomic_t *, int, int);
 #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
+void arch_atomic_set(atomic_t *, int);
+
 #define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 #define arch_atomic_read(v)		READ_ONCE((v)->counter)
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index df6a8b07d7e63..a5e9c37605a70 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -37,6 +37,16 @@ s64 arch_atomic64_fetch_##op(s64, atomic64_t *);
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic64_add_return arch_atomic64_add_return
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -44,6 +54,14 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN

From patchwork Mon Jun 5 07:01:09 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
    keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
    mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
    peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
    will@kernel.org
Subject: [PATCH v2 12/27] locking/atomic: x86: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:09 +0100
Message-Id: <20230605070124.3741859-13-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/x86.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
 arch/x86/include/asm/cmpxchg_64.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index 3e6e3eef701b3..44b08b53ab32f 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -45,11 +45,13 @@ static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 n
 {
 	return __arch_cmpxchg128(ptr, old, new, LOCK_PREFIX);
 }
+#define arch_cmpxchg128 arch_cmpxchg128
 
 static __always_inline u128 arch_cmpxchg128_local(volatile u128 *ptr, u128 old, u128 new)
 {
 	return __arch_cmpxchg128(ptr, old, new,);
 }
+#define arch_cmpxchg128_local arch_cmpxchg128_local
 
 #define __arch_try_cmpxchg128(_ptr, _oldp, _new, _lock)	\
 ({	\
@@ -75,11 +77,13 @@ static __always_inline bool arch_try_cmpxchg128(volatile u128 *ptr, u128 *oldp,
 {
 	return __arch_try_cmpxchg128(ptr, oldp, new, LOCK_PREFIX);
 }
+#define arch_try_cmpxchg128 arch_try_cmpxchg128
 
 static __always_inline bool arch_try_cmpxchg128_local(volatile u128 *ptr, u128 *oldp, u128 new)
 {
 	return __arch_try_cmpxchg128(ptr, oldp, new,);
 }
+#define arch_try_cmpxchg128_local arch_try_cmpxchg128_local
 
 #define system_has_cmpxchg128()		boot_cpu_has(X86_FEATURE_CX16)

From patchwork Mon Jun 5 07:01:10 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 13/27] locking/atomic: xtensa: add preprocessor symbols
Date: Mon, 5 Jun 2023 08:01:10 +0100
Message-Id: <20230605070124.3741859-14-mark.rutland@arm.com>

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in
terms of FULL ordered atomics. Other atomics are optional, and don't
exist in some configurations (e.g. not all architectures implement the
128-bit cmpxchg ops).

Subsequent patches will require that architectures define a
preprocessor symbol for any atomic (or ordering variant) which is
optional. This will make the fallback ifdeffery more robust, and
simplify future changes.

Add the required definitions to arch/xtensa.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
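A note on the idiom itself (illustrative, not from the patch): defining
an op as a preprocessor symbol that expands to its own name is
invisible to the compiler, since the macro expands straight back to the
function it names, but it makes the op's presence testable by the
preprocessor:

/* the C-level implementation of the op ... */
static inline int arch_atomic_add_return(int i, atomic_t *v);

/*
 * ... mirrored as a preprocessor symbol; calls still resolve to the
 * function above because the macro expands to itself
 */
#define arch_atomic_add_return arch_atomic_add_return

#ifdef arch_atomic_add_return
/* the op exists: fallback generation can skip it */
#endif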
 arch/xtensa/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 1d323a864002c..7308b7f777d79 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -245,6 +245,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v)	\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -252,6 +257,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN

From patchwork Mon Jun 5 07:01:11 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 14/27] locking/atomic: scripts: remove bogus order parameter
Date: Mon, 5 Jun 2023 08:01:11 +0100
Message-Id: <20230605070124.3741859-15-mark.rutland@arm.com>

At the start of gen_proto_order_variants(), the ${order} variable is
not yet defined, and will be substituted with an empty string.

Replace the current bogus use of ${order} with an empty string instead.
This results in no change to the generated headers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 scripts/atomic/gen-atomic-fallback.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index a70acd548fcd8..7a6bcea8f565b 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -81,7 +81,7 @@ gen_proto_order_variants()
 
 	local basename="arch_${atomic}_${pfx}${name}${sfx}"
 
-	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
+	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")"
 
 	# If we don't have relaxed atomics, then we don't bother with ordering fallbacks
 	# read_acquire and set_release need to be templated, though

From patchwork Mon Jun 5 07:01:12 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/27] locking/atomic: scripts: remove leftover "${mult}"
Date: Mon, 5 Jun 2023 08:01:12 +0100
Message-Id: <20230605070124.3741859-16-mark.rutland@arm.com>

We removed cmpxchg_double() and variants in commit:

  b4cf83b2d1da40b2 ("arch: Remove cmpxchg_double")

which removed the need for "${mult}" in the instrumentation logic.
Unfortunately we missed an instance of "${mult}".

There is no change to the generated header. There should be no
functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
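For reference (an illustrative sketch, not part of the patch): the
heredoc being edited here is the template from which the instrumented
wrappers are emitted, and "${mult}" was a leftover argument from the
instrumentation of the now-removed cmpxchg_double() ops. The emitted
wrappers have roughly this shape (cf. the generated
atomic-instrumented.h later in this series):

static __always_inline int
atomic_xchg(atomic_t *v, int i)
{
	kcsan_mb();
	instrument_atomic_read_write(v, sizeof(*v));
	return arch_atomic_xchg(v, i);
}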
 scripts/atomic/gen-atomic-instrumented.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index a2ef735be8ca9..68557bfbbdc5e 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -118,7 +118,7 @@ cat <<EOF

From patchwork Mon Jun 5 07:01:13 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 16/27] locking/atomic: scripts: factor out order template generation
Date: Mon, 5 Jun 2023 08:01:13 +0100
Message-Id: <20230605070124.3741859-17-mark.rutland@arm.com>

Currently gen_proto_order_variants() hard-codes the path for the
templates used for order fallbacks. Factor this out into a helper so
that it can be reused elsewhere.

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
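To make the template mapping concrete: the helper added below strips
the leading underscore from ${order} and defaults to the "fence"
template for the fully-ordered case (empty ${order}). As an
illustrative sketch (simplified; the real output is generated into
atomic-arch-fallback.h, with the fence helpers coming from
<linux/atomic.h>), those order templates expand to fallbacks such as:

/* "" order: bracket the relaxed op with full fences */
static __always_inline int
arch_atomic_add_return(int i, atomic_t *v)
{
	int ret;
	__atomic_pre_full_fence();
	ret = arch_atomic_add_return_relaxed(i, v);
	__atomic_post_full_fence();
	return ret;
}

/* "_acquire" order: relaxed op followed by an acquire fence */
static __always_inline int
arch_atomic_add_return_acquire(int i, atomic_t *v)
{
	int ret = arch_atomic_add_return_relaxed(i, v);
	__atomic_acquire_fence();
	return ret;
}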
 scripts/atomic/gen-atomic-fallback.sh | 34 +++++++++++++--------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 7a6bcea8f565b..337330865fa2e 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -32,6 +32,20 @@ gen_template_fallback()
 	fi
 }
 
+#gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
+gen_order_fallback()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+
+	local tmpl_order=${order#_}
+	local tmpl="${ATOMICDIR}/fallbacks/${tmpl_order:-fence}"
+	gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+}
+
 #gen_proto_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
 gen_proto_fallback()
 {
@@ -56,20 +70,6 @@ cat << EOF
 EOF
 }
 
-gen_proto_order_variant()
-{
-	local meta="$1"; shift
-	local pfx="$1"; shift
-	local name="$1"; shift
-	local sfx="$1"; shift
-	local order="$1"; shift
-	local atomic="$1"
-
-	local basename="arch_${atomic}_${pfx}${name}${sfx}"
-
-	printf "#define ${basename}${order} ${basename}${order}\n"
-}
-
 #gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
 gen_proto_order_variants()
 {
@@ -117,9 +117,9 @@ gen_proto_order_variants()
 
 	printf "#else /* ${basename}_relaxed */\n\n"
 
-	gen_template_fallback "${ATOMICDIR}/fallbacks/acquire" "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
-	gen_template_fallback "${ATOMICDIR}/fallbacks/release" "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
-	gen_template_fallback "${ATOMICDIR}/fallbacks/fence" "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
 
 	printf "#endif /* ${basename}_relaxed */\n\n"
 }

From patchwork Mon Jun 5 07:01:14 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 17/27] locking/atomic: scripts: add trivial raw_atomic*_()
Date: Mon, 5 Jun 2023 08:01:14 +0100
Message-Id: <20230605070124.3741859-18-mark.rutland@arm.com>

Currently a number of arch_atomic*_() functions are optional, and where
an arch does not provide a given arch_atomic*_() we will define an
implementation of arch_atomic*_() in atomic-arch-fallback.h.

Filling in the missing ops requires special care as we want to select
the optimal definition of each op (e.g. preferentially defining ops in
terms of their relaxed form rather than their fully-ordered form). The
ifdeffery necessary for this requires us to group ordering variants
together, which can be a bit painful to read, and is painful for
kerneldoc generation.

It would be easier to handle this if we generated ops into a separate
namespace, as this would remove the need to take special care with the
ifdeffery, and allow each ordering variant to be generated separately.

This patch adds a new set of raw_atomic_() definitions, which are
currently trivial wrappers of their arch_atomic_() equivalent.
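As a rough sketch of the shape of these wrappers (illustrative only;
the real definitions are generated into
include/linux/atomic/atomic-raw.h by the new gen-atomic-raw.sh):

static __always_inline int
raw_atomic_read(const atomic_t *v)
{
	/* forwards to the arch op with no instrumentation */
	return arch_atomic_read(v);
}

static __always_inline void
raw_atomic_set(atomic_t *v, int i)
{
	arch_atomic_set(v, i);
}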
This will allow us to move treewide users of arch_atomic_() over to raw
atomic ops before we rework the fallback generation to generate
raw_atomic_ directly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 include/linux/atomic.h                     |    1 +
 include/linux/atomic/atomic-instrumented.h |  595 ++++---
 include/linux/atomic/atomic-raw.h          | 1645 ++++++++++++++++++++
 scripts/atomic/gen-atomic-instrumented.sh  |   19 +-
 scripts/atomic/gen-atomic-raw.sh           |   84 +
 scripts/atomic/gen-atomics.sh              |    1 +
 6 files changed, 2033 insertions(+), 312 deletions(-)
 create mode 100644 include/linux/atomic/atomic-raw.h
 create mode 100755 scripts/atomic/gen-atomic-raw.sh

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 8dd57c3a99e9b..127f5dc63a7df 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -79,6 +79,7 @@
 
 #include <linux/atomic/atomic-arch-fallback.h>
 #include <linux/atomic/atomic-long.h>
+#include <linux/atomic/atomic-raw.h>
 #include <linux/atomic/atomic-instrumented.h>
 
 #endif /* _LINUX_ATOMIC_H */

diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index a55b5b70a3e15..90ee2f55af770 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -4,15 +4,10 @@
 // DO NOT MODIFY THIS FILE DIRECTLY
 
 /*
- * This file provides wrappers with KASAN instrumentation for atomic operations.
- * To use this functionality an arch's atomic.h file needs to define all
- * atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
- * this file at the end. This file provides atomic_read() that forwards to
- * arch_atomic_read() for actual atomic operation.
- * Note: if an arch atomic operation is implemented by means of other atomic
- * operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
- * arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
- * double instrumentation.
+ * This file provides atomic operations with explicit instrumentation (e.g.
+ * KASAN, KCSAN), which should be used unless it is necessary to avoid
+ * instrumentation. Where it is necessary to avoid instrumentation, the
+ * raw_atomic*() operations should be used.
*/ #ifndef _LINUX_ATOMIC_INSTRUMENTED_H #define _LINUX_ATOMIC_INSTRUMENTED_H @@ -25,21 +20,21 @@ static __always_inline int atomic_read(const atomic_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic_read(v); + return raw_atomic_read(v); } static __always_inline int atomic_read_acquire(const atomic_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic_read_acquire(v); + return raw_atomic_read_acquire(v); } static __always_inline void atomic_set(atomic_t *v, int i) { instrument_atomic_write(v, sizeof(*v)); - arch_atomic_set(v, i); + raw_atomic_set(v, i); } static __always_inline void @@ -47,14 +42,14 @@ atomic_set_release(atomic_t *v, int i) { kcsan_release(); instrument_atomic_write(v, sizeof(*v)); - arch_atomic_set_release(v, i); + raw_atomic_set_release(v, i); } static __always_inline void atomic_add(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_add(i, v); + raw_atomic_add(i, v); } static __always_inline int @@ -62,14 +57,14 @@ atomic_add_return(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_return(i, v); + return raw_atomic_add_return(i, v); } static __always_inline int atomic_add_return_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_return_acquire(i, v); + return raw_atomic_add_return_acquire(i, v); } static __always_inline int @@ -77,14 +72,14 @@ atomic_add_return_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_return_release(i, v); + return raw_atomic_add_return_release(i, v); } static __always_inline int atomic_add_return_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_return_relaxed(i, v); + return raw_atomic_add_return_relaxed(i, v); } static __always_inline int @@ -92,14 +87,14 @@ atomic_fetch_add(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add(i, v); + return raw_atomic_fetch_add(i, v); } static __always_inline int atomic_fetch_add_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add_acquire(i, v); + return raw_atomic_fetch_add_acquire(i, v); } static __always_inline int @@ -107,21 +102,21 @@ atomic_fetch_add_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add_release(i, v); + return raw_atomic_fetch_add_release(i, v); } static __always_inline int atomic_fetch_add_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add_relaxed(i, v); + return raw_atomic_fetch_add_relaxed(i, v); } static __always_inline void atomic_sub(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_sub(i, v); + raw_atomic_sub(i, v); } static __always_inline int @@ -129,14 +124,14 @@ atomic_sub_return(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_sub_return(i, v); + return raw_atomic_sub_return(i, v); } static __always_inline int atomic_sub_return_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_sub_return_acquire(i, v); + return raw_atomic_sub_return_acquire(i, v); } static __always_inline int @@ -144,14 +139,14 @@ atomic_sub_return_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return 
arch_atomic_sub_return_release(i, v); + return raw_atomic_sub_return_release(i, v); } static __always_inline int atomic_sub_return_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_sub_return_relaxed(i, v); + return raw_atomic_sub_return_relaxed(i, v); } static __always_inline int @@ -159,14 +154,14 @@ atomic_fetch_sub(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_sub(i, v); + return raw_atomic_fetch_sub(i, v); } static __always_inline int atomic_fetch_sub_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_sub_acquire(i, v); + return raw_atomic_fetch_sub_acquire(i, v); } static __always_inline int @@ -174,21 +169,21 @@ atomic_fetch_sub_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_sub_release(i, v); + return raw_atomic_fetch_sub_release(i, v); } static __always_inline int atomic_fetch_sub_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_sub_relaxed(i, v); + return raw_atomic_fetch_sub_relaxed(i, v); } static __always_inline void atomic_inc(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_inc(v); + raw_atomic_inc(v); } static __always_inline int @@ -196,14 +191,14 @@ atomic_inc_return(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_return(v); + return raw_atomic_inc_return(v); } static __always_inline int atomic_inc_return_acquire(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_return_acquire(v); + return raw_atomic_inc_return_acquire(v); } static __always_inline int @@ -211,14 +206,14 @@ atomic_inc_return_release(atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_return_release(v); + return raw_atomic_inc_return_release(v); } static __always_inline int atomic_inc_return_relaxed(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_return_relaxed(v); + return raw_atomic_inc_return_relaxed(v); } static __always_inline int @@ -226,14 +221,14 @@ atomic_fetch_inc(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_inc(v); + return raw_atomic_fetch_inc(v); } static __always_inline int atomic_fetch_inc_acquire(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_inc_acquire(v); + return raw_atomic_fetch_inc_acquire(v); } static __always_inline int @@ -241,21 +236,21 @@ atomic_fetch_inc_release(atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_inc_release(v); + return raw_atomic_fetch_inc_release(v); } static __always_inline int atomic_fetch_inc_relaxed(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_inc_relaxed(v); + return raw_atomic_fetch_inc_relaxed(v); } static __always_inline void atomic_dec(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_dec(v); + raw_atomic_dec(v); } static __always_inline int @@ -263,14 +258,14 @@ atomic_dec_return(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_return(v); + return raw_atomic_dec_return(v); } static __always_inline int atomic_dec_return_acquire(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return 
arch_atomic_dec_return_acquire(v); + return raw_atomic_dec_return_acquire(v); } static __always_inline int @@ -278,14 +273,14 @@ atomic_dec_return_release(atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_return_release(v); + return raw_atomic_dec_return_release(v); } static __always_inline int atomic_dec_return_relaxed(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_return_relaxed(v); + return raw_atomic_dec_return_relaxed(v); } static __always_inline int @@ -293,14 +288,14 @@ atomic_fetch_dec(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_dec(v); + return raw_atomic_fetch_dec(v); } static __always_inline int atomic_fetch_dec_acquire(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_dec_acquire(v); + return raw_atomic_fetch_dec_acquire(v); } static __always_inline int @@ -308,21 +303,21 @@ atomic_fetch_dec_release(atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_dec_release(v); + return raw_atomic_fetch_dec_release(v); } static __always_inline int atomic_fetch_dec_relaxed(atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_dec_relaxed(v); + return raw_atomic_fetch_dec_relaxed(v); } static __always_inline void atomic_and(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_and(i, v); + raw_atomic_and(i, v); } static __always_inline int @@ -330,14 +325,14 @@ atomic_fetch_and(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_and(i, v); + return raw_atomic_fetch_and(i, v); } static __always_inline int atomic_fetch_and_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_and_acquire(i, v); + return raw_atomic_fetch_and_acquire(i, v); } static __always_inline int @@ -345,21 +340,21 @@ atomic_fetch_and_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_and_release(i, v); + return raw_atomic_fetch_and_release(i, v); } static __always_inline int atomic_fetch_and_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_and_relaxed(i, v); + return raw_atomic_fetch_and_relaxed(i, v); } static __always_inline void atomic_andnot(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_andnot(i, v); + raw_atomic_andnot(i, v); } static __always_inline int @@ -367,14 +362,14 @@ atomic_fetch_andnot(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_andnot(i, v); + return raw_atomic_fetch_andnot(i, v); } static __always_inline int atomic_fetch_andnot_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_andnot_acquire(i, v); + return raw_atomic_fetch_andnot_acquire(i, v); } static __always_inline int @@ -382,21 +377,21 @@ atomic_fetch_andnot_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_andnot_release(i, v); + return raw_atomic_fetch_andnot_release(i, v); } static __always_inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_andnot_relaxed(i, v); + return raw_atomic_fetch_andnot_relaxed(i, v); } static __always_inline void 
atomic_or(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_or(i, v); + raw_atomic_or(i, v); } static __always_inline int @@ -404,14 +399,14 @@ atomic_fetch_or(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_or(i, v); + return raw_atomic_fetch_or(i, v); } static __always_inline int atomic_fetch_or_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_or_acquire(i, v); + return raw_atomic_fetch_or_acquire(i, v); } static __always_inline int @@ -419,21 +414,21 @@ atomic_fetch_or_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_or_release(i, v); + return raw_atomic_fetch_or_release(i, v); } static __always_inline int atomic_fetch_or_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_or_relaxed(i, v); + return raw_atomic_fetch_or_relaxed(i, v); } static __always_inline void atomic_xor(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_xor(i, v); + raw_atomic_xor(i, v); } static __always_inline int @@ -441,14 +436,14 @@ atomic_fetch_xor(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_xor(i, v); + return raw_atomic_fetch_xor(i, v); } static __always_inline int atomic_fetch_xor_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_xor_acquire(i, v); + return raw_atomic_fetch_xor_acquire(i, v); } static __always_inline int @@ -456,14 +451,14 @@ atomic_fetch_xor_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_xor_release(i, v); + return raw_atomic_fetch_xor_release(i, v); } static __always_inline int atomic_fetch_xor_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_xor_relaxed(i, v); + return raw_atomic_fetch_xor_relaxed(i, v); } static __always_inline int @@ -471,14 +466,14 @@ atomic_xchg(atomic_t *v, int i) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_xchg(v, i); + return raw_atomic_xchg(v, i); } static __always_inline int atomic_xchg_acquire(atomic_t *v, int i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_xchg_acquire(v, i); + return raw_atomic_xchg_acquire(v, i); } static __always_inline int @@ -486,14 +481,14 @@ atomic_xchg_release(atomic_t *v, int i) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_xchg_release(v, i); + return raw_atomic_xchg_release(v, i); } static __always_inline int atomic_xchg_relaxed(atomic_t *v, int i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_xchg_relaxed(v, i); + return raw_atomic_xchg_relaxed(v, i); } static __always_inline int @@ -501,14 +496,14 @@ atomic_cmpxchg(atomic_t *v, int old, int new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_cmpxchg(v, old, new); + return raw_atomic_cmpxchg(v, old, new); } static __always_inline int atomic_cmpxchg_acquire(atomic_t *v, int old, int new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_cmpxchg_acquire(v, old, new); + return raw_atomic_cmpxchg_acquire(v, old, new); } static __always_inline int @@ -516,14 +511,14 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return 
arch_atomic_cmpxchg_release(v, old, new); + return raw_atomic_cmpxchg_release(v, old, new); } static __always_inline int atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_cmpxchg_relaxed(v, old, new); + return raw_atomic_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -532,7 +527,7 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new) kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_try_cmpxchg(v, old, new); + return raw_atomic_try_cmpxchg(v, old, new); } static __always_inline bool @@ -540,7 +535,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_try_cmpxchg_acquire(v, old, new); + return raw_atomic_try_cmpxchg_acquire(v, old, new); } static __always_inline bool @@ -549,7 +544,7 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_try_cmpxchg_release(v, old, new); + return raw_atomic_try_cmpxchg_release(v, old, new); } static __always_inline bool @@ -557,7 +552,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_try_cmpxchg_relaxed(v, old, new); + return raw_atomic_try_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -565,7 +560,7 @@ atomic_sub_and_test(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_sub_and_test(i, v); + return raw_atomic_sub_and_test(i, v); } static __always_inline bool @@ -573,7 +568,7 @@ atomic_dec_and_test(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_and_test(v); + return raw_atomic_dec_and_test(v); } static __always_inline bool @@ -581,7 +576,7 @@ atomic_inc_and_test(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_and_test(v); + return raw_atomic_inc_and_test(v); } static __always_inline bool @@ -589,14 +584,14 @@ atomic_add_negative(int i, atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_negative(i, v); + return raw_atomic_add_negative(i, v); } static __always_inline bool atomic_add_negative_acquire(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_negative_acquire(i, v); + return raw_atomic_add_negative_acquire(i, v); } static __always_inline bool @@ -604,14 +599,14 @@ atomic_add_negative_release(int i, atomic_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_negative_release(i, v); + return raw_atomic_add_negative_release(i, v); } static __always_inline bool atomic_add_negative_relaxed(int i, atomic_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_negative_relaxed(i, v); + return raw_atomic_add_negative_relaxed(i, v); } static __always_inline int @@ -619,7 +614,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_fetch_add_unless(v, a, u); + return raw_atomic_fetch_add_unless(v, a, u); } static __always_inline bool @@ -627,7 +622,7 @@ atomic_add_unless(atomic_t *v, int a, int u) { kcsan_mb(); 
instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_add_unless(v, a, u); + return raw_atomic_add_unless(v, a, u); } static __always_inline bool @@ -635,7 +630,7 @@ atomic_inc_not_zero(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_not_zero(v); + return raw_atomic_inc_not_zero(v); } static __always_inline bool @@ -643,7 +638,7 @@ atomic_inc_unless_negative(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_inc_unless_negative(v); + return raw_atomic_inc_unless_negative(v); } static __always_inline bool @@ -651,7 +646,7 @@ atomic_dec_unless_positive(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_unless_positive(v); + return raw_atomic_dec_unless_positive(v); } static __always_inline int @@ -659,28 +654,28 @@ atomic_dec_if_positive(atomic_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_dec_if_positive(v); + return raw_atomic_dec_if_positive(v); } static __always_inline s64 atomic64_read(const atomic64_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic64_read(v); + return raw_atomic64_read(v); } static __always_inline s64 atomic64_read_acquire(const atomic64_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic64_read_acquire(v); + return raw_atomic64_read_acquire(v); } static __always_inline void atomic64_set(atomic64_t *v, s64 i) { instrument_atomic_write(v, sizeof(*v)); - arch_atomic64_set(v, i); + raw_atomic64_set(v, i); } static __always_inline void @@ -688,14 +683,14 @@ atomic64_set_release(atomic64_t *v, s64 i) { kcsan_release(); instrument_atomic_write(v, sizeof(*v)); - arch_atomic64_set_release(v, i); + raw_atomic64_set_release(v, i); } static __always_inline void atomic64_add(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_add(i, v); + raw_atomic64_add(i, v); } static __always_inline s64 @@ -703,14 +698,14 @@ atomic64_add_return(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_return(i, v); + return raw_atomic64_add_return(i, v); } static __always_inline s64 atomic64_add_return_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_return_acquire(i, v); + return raw_atomic64_add_return_acquire(i, v); } static __always_inline s64 @@ -718,14 +713,14 @@ atomic64_add_return_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_return_release(i, v); + return raw_atomic64_add_return_release(i, v); } static __always_inline s64 atomic64_add_return_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_return_relaxed(i, v); + return raw_atomic64_add_return_relaxed(i, v); } static __always_inline s64 @@ -733,14 +728,14 @@ atomic64_fetch_add(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add(i, v); + return raw_atomic64_fetch_add(i, v); } static __always_inline s64 atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_acquire(i, v); + return raw_atomic64_fetch_add_acquire(i, v); } static __always_inline s64 @@ -748,21 +743,21 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_release(i, 
v); + return raw_atomic64_fetch_add_release(i, v); } static __always_inline s64 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_relaxed(i, v); + return raw_atomic64_fetch_add_relaxed(i, v); } static __always_inline void atomic64_sub(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_sub(i, v); + raw_atomic64_sub(i, v); } static __always_inline s64 @@ -770,14 +765,14 @@ atomic64_sub_return(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_return(i, v); + return raw_atomic64_sub_return(i, v); } static __always_inline s64 atomic64_sub_return_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_return_acquire(i, v); + return raw_atomic64_sub_return_acquire(i, v); } static __always_inline s64 @@ -785,14 +780,14 @@ atomic64_sub_return_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_return_release(i, v); + return raw_atomic64_sub_return_release(i, v); } static __always_inline s64 atomic64_sub_return_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_return_relaxed(i, v); + return raw_atomic64_sub_return_relaxed(i, v); } static __always_inline s64 @@ -800,14 +795,14 @@ atomic64_fetch_sub(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub(i, v); + return raw_atomic64_fetch_sub(i, v); } static __always_inline s64 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub_acquire(i, v); + return raw_atomic64_fetch_sub_acquire(i, v); } static __always_inline s64 @@ -815,21 +810,21 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub_release(i, v); + return raw_atomic64_fetch_sub_release(i, v); } static __always_inline s64 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub_relaxed(i, v); + return raw_atomic64_fetch_sub_relaxed(i, v); } static __always_inline void atomic64_inc(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_inc(v); + raw_atomic64_inc(v); } static __always_inline s64 @@ -837,14 +832,14 @@ atomic64_inc_return(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_return(v); + return raw_atomic64_inc_return(v); } static __always_inline s64 atomic64_inc_return_acquire(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_return_acquire(v); + return raw_atomic64_inc_return_acquire(v); } static __always_inline s64 @@ -852,14 +847,14 @@ atomic64_inc_return_release(atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_return_release(v); + return raw_atomic64_inc_return_release(v); } static __always_inline s64 atomic64_inc_return_relaxed(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_return_relaxed(v); + return raw_atomic64_inc_return_relaxed(v); } static __always_inline s64 @@ -867,14 +862,14 @@ atomic64_fetch_inc(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_inc(v); + return 
raw_atomic64_fetch_inc(v); } static __always_inline s64 atomic64_fetch_inc_acquire(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_inc_acquire(v); + return raw_atomic64_fetch_inc_acquire(v); } static __always_inline s64 @@ -882,21 +877,21 @@ atomic64_fetch_inc_release(atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_inc_release(v); + return raw_atomic64_fetch_inc_release(v); } static __always_inline s64 atomic64_fetch_inc_relaxed(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_inc_relaxed(v); + return raw_atomic64_fetch_inc_relaxed(v); } static __always_inline void atomic64_dec(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_dec(v); + raw_atomic64_dec(v); } static __always_inline s64 @@ -904,14 +899,14 @@ atomic64_dec_return(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_return(v); + return raw_atomic64_dec_return(v); } static __always_inline s64 atomic64_dec_return_acquire(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_return_acquire(v); + return raw_atomic64_dec_return_acquire(v); } static __always_inline s64 @@ -919,14 +914,14 @@ atomic64_dec_return_release(atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_return_release(v); + return raw_atomic64_dec_return_release(v); } static __always_inline s64 atomic64_dec_return_relaxed(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_return_relaxed(v); + return raw_atomic64_dec_return_relaxed(v); } static __always_inline s64 @@ -934,14 +929,14 @@ atomic64_fetch_dec(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_dec(v); + return raw_atomic64_fetch_dec(v); } static __always_inline s64 atomic64_fetch_dec_acquire(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_dec_acquire(v); + return raw_atomic64_fetch_dec_acquire(v); } static __always_inline s64 @@ -949,21 +944,21 @@ atomic64_fetch_dec_release(atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_dec_release(v); + return raw_atomic64_fetch_dec_release(v); } static __always_inline s64 atomic64_fetch_dec_relaxed(atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_dec_relaxed(v); + return raw_atomic64_fetch_dec_relaxed(v); } static __always_inline void atomic64_and(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_and(i, v); + raw_atomic64_and(i, v); } static __always_inline s64 @@ -971,14 +966,14 @@ atomic64_fetch_and(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_and(i, v); + return raw_atomic64_fetch_and(i, v); } static __always_inline s64 atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_and_acquire(i, v); + return raw_atomic64_fetch_and_acquire(i, v); } static __always_inline s64 @@ -986,21 +981,21 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_and_release(i, v); + return raw_atomic64_fetch_and_release(i, v); } static __always_inline s64 atomic64_fetch_and_relaxed(s64 i, atomic64_t 
*v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_and_relaxed(i, v); + return raw_atomic64_fetch_and_relaxed(i, v); } static __always_inline void atomic64_andnot(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_andnot(i, v); + raw_atomic64_andnot(i, v); } static __always_inline s64 @@ -1008,14 +1003,14 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_andnot(i, v); + return raw_atomic64_fetch_andnot(i, v); } static __always_inline s64 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_andnot_acquire(i, v); + return raw_atomic64_fetch_andnot_acquire(i, v); } static __always_inline s64 @@ -1023,21 +1018,21 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_andnot_release(i, v); + return raw_atomic64_fetch_andnot_release(i, v); } static __always_inline s64 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_andnot_relaxed(i, v); + return raw_atomic64_fetch_andnot_relaxed(i, v); } static __always_inline void atomic64_or(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_or(i, v); + raw_atomic64_or(i, v); } static __always_inline s64 @@ -1045,14 +1040,14 @@ atomic64_fetch_or(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_or(i, v); + return raw_atomic64_fetch_or(i, v); } static __always_inline s64 atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_or_acquire(i, v); + return raw_atomic64_fetch_or_acquire(i, v); } static __always_inline s64 @@ -1060,21 +1055,21 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_or_release(i, v); + return raw_atomic64_fetch_or_release(i, v); } static __always_inline s64 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_or_relaxed(i, v); + return raw_atomic64_fetch_or_relaxed(i, v); } static __always_inline void atomic64_xor(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic64_xor(i, v); + raw_atomic64_xor(i, v); } static __always_inline s64 @@ -1082,14 +1077,14 @@ atomic64_fetch_xor(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor(i, v); + return raw_atomic64_fetch_xor(i, v); } static __always_inline s64 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor_acquire(i, v); + return raw_atomic64_fetch_xor_acquire(i, v); } static __always_inline s64 @@ -1097,14 +1092,14 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor_release(i, v); + return raw_atomic64_fetch_xor_release(i, v); } static __always_inline s64 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor_relaxed(i, v); + return raw_atomic64_fetch_xor_relaxed(i, v); } static __always_inline s64 @@ -1112,14 +1107,14 @@ atomic64_xchg(atomic64_t *v, 
s64 i) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_xchg(v, i); + return raw_atomic64_xchg(v, i); } static __always_inline s64 atomic64_xchg_acquire(atomic64_t *v, s64 i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_xchg_acquire(v, i); + return raw_atomic64_xchg_acquire(v, i); } static __always_inline s64 @@ -1127,14 +1122,14 @@ atomic64_xchg_release(atomic64_t *v, s64 i) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_xchg_release(v, i); + return raw_atomic64_xchg_release(v, i); } static __always_inline s64 atomic64_xchg_relaxed(atomic64_t *v, s64 i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_xchg_relaxed(v, i); + return raw_atomic64_xchg_relaxed(v, i); } static __always_inline s64 @@ -1142,14 +1137,14 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg(v, old, new); + return raw_atomic64_cmpxchg(v, old, new); } static __always_inline s64 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg_acquire(v, old, new); + return raw_atomic64_cmpxchg_acquire(v, old, new); } static __always_inline s64 @@ -1157,14 +1152,14 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg_release(v, old, new); + return raw_atomic64_cmpxchg_release(v, old, new); } static __always_inline s64 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg_relaxed(v, old, new); + return raw_atomic64_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -1173,7 +1168,7 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg(v, old, new); + return raw_atomic64_try_cmpxchg(v, old, new); } static __always_inline bool @@ -1181,7 +1176,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg_acquire(v, old, new); + return raw_atomic64_try_cmpxchg_acquire(v, old, new); } static __always_inline bool @@ -1190,7 +1185,7 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg_release(v, old, new); + return raw_atomic64_try_cmpxchg_release(v, old, new); } static __always_inline bool @@ -1198,7 +1193,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg_relaxed(v, old, new); + return raw_atomic64_try_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -1206,7 +1201,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_sub_and_test(i, v); + return raw_atomic64_sub_and_test(i, v); } static __always_inline bool @@ -1214,7 +1209,7 @@ atomic64_dec_and_test(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_and_test(v); + return 
raw_atomic64_dec_and_test(v); } static __always_inline bool @@ -1222,7 +1217,7 @@ atomic64_inc_and_test(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_and_test(v); + return raw_atomic64_inc_and_test(v); } static __always_inline bool @@ -1230,14 +1225,14 @@ atomic64_add_negative(s64 i, atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_negative(i, v); + return raw_atomic64_add_negative(i, v); } static __always_inline bool atomic64_add_negative_acquire(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_negative_acquire(i, v); + return raw_atomic64_add_negative_acquire(i, v); } static __always_inline bool @@ -1245,14 +1240,14 @@ atomic64_add_negative_release(s64 i, atomic64_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_negative_release(i, v); + return raw_atomic64_add_negative_release(i, v); } static __always_inline bool atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_negative_relaxed(i, v); + return raw_atomic64_add_negative_relaxed(i, v); } static __always_inline s64 @@ -1260,7 +1255,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_unless(v, a, u); + return raw_atomic64_fetch_add_unless(v, a, u); } static __always_inline bool @@ -1268,7 +1263,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_add_unless(v, a, u); + return raw_atomic64_add_unless(v, a, u); } static __always_inline bool @@ -1276,7 +1271,7 @@ atomic64_inc_not_zero(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_not_zero(v); + return raw_atomic64_inc_not_zero(v); } static __always_inline bool @@ -1284,7 +1279,7 @@ atomic64_inc_unless_negative(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_inc_unless_negative(v); + return raw_atomic64_inc_unless_negative(v); } static __always_inline bool @@ -1292,7 +1287,7 @@ atomic64_dec_unless_positive(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_unless_positive(v); + return raw_atomic64_dec_unless_positive(v); } static __always_inline s64 @@ -1300,28 +1295,28 @@ atomic64_dec_if_positive(atomic64_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic64_dec_if_positive(v); + return raw_atomic64_dec_if_positive(v); } static __always_inline long atomic_long_read(const atomic_long_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic_long_read(v); + return raw_atomic_long_read(v); } static __always_inline long atomic_long_read_acquire(const atomic_long_t *v) { instrument_atomic_read(v, sizeof(*v)); - return arch_atomic_long_read_acquire(v); + return raw_atomic_long_read_acquire(v); } static __always_inline void atomic_long_set(atomic_long_t *v, long i) { instrument_atomic_write(v, sizeof(*v)); - arch_atomic_long_set(v, i); + raw_atomic_long_set(v, i); } static __always_inline void @@ -1329,14 +1324,14 @@ atomic_long_set_release(atomic_long_t *v, long i) { kcsan_release(); instrument_atomic_write(v, sizeof(*v)); - arch_atomic_long_set_release(v, i); + raw_atomic_long_set_release(v, i); } static __always_inline void atomic_long_add(long i, 
atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_add(i, v); + raw_atomic_long_add(i, v); } static __always_inline long @@ -1344,14 +1339,14 @@ atomic_long_add_return(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_return(i, v); + return raw_atomic_long_add_return(i, v); } static __always_inline long atomic_long_add_return_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_return_acquire(i, v); + return raw_atomic_long_add_return_acquire(i, v); } static __always_inline long @@ -1359,14 +1354,14 @@ atomic_long_add_return_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_return_release(i, v); + return raw_atomic_long_add_return_release(i, v); } static __always_inline long atomic_long_add_return_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_return_relaxed(i, v); + return raw_atomic_long_add_return_relaxed(i, v); } static __always_inline long @@ -1374,14 +1369,14 @@ atomic_long_fetch_add(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add(i, v); + return raw_atomic_long_fetch_add(i, v); } static __always_inline long atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add_acquire(i, v); + return raw_atomic_long_fetch_add_acquire(i, v); } static __always_inline long @@ -1389,21 +1384,21 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add_release(i, v); + return raw_atomic_long_fetch_add_release(i, v); } static __always_inline long atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add_relaxed(i, v); + return raw_atomic_long_fetch_add_relaxed(i, v); } static __always_inline void atomic_long_sub(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_sub(i, v); + raw_atomic_long_sub(i, v); } static __always_inline long @@ -1411,14 +1406,14 @@ atomic_long_sub_return(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_return(i, v); + return raw_atomic_long_sub_return(i, v); } static __always_inline long atomic_long_sub_return_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_return_acquire(i, v); + return raw_atomic_long_sub_return_acquire(i, v); } static __always_inline long @@ -1426,14 +1421,14 @@ atomic_long_sub_return_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_return_release(i, v); + return raw_atomic_long_sub_return_release(i, v); } static __always_inline long atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_return_relaxed(i, v); + return raw_atomic_long_sub_return_relaxed(i, v); } static __always_inline long @@ -1441,14 +1436,14 @@ atomic_long_fetch_sub(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_sub(i, v); + return raw_atomic_long_fetch_sub(i, v); } static 
__always_inline long atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_sub_acquire(i, v); + return raw_atomic_long_fetch_sub_acquire(i, v); } static __always_inline long @@ -1456,21 +1451,21 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_sub_release(i, v); + return raw_atomic_long_fetch_sub_release(i, v); } static __always_inline long atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_sub_relaxed(i, v); + return raw_atomic_long_fetch_sub_relaxed(i, v); } static __always_inline void atomic_long_inc(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_inc(v); + raw_atomic_long_inc(v); } static __always_inline long @@ -1478,14 +1473,14 @@ atomic_long_inc_return(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_return(v); + return raw_atomic_long_inc_return(v); } static __always_inline long atomic_long_inc_return_acquire(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_return_acquire(v); + return raw_atomic_long_inc_return_acquire(v); } static __always_inline long @@ -1493,14 +1488,14 @@ atomic_long_inc_return_release(atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_return_release(v); + return raw_atomic_long_inc_return_release(v); } static __always_inline long atomic_long_inc_return_relaxed(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_return_relaxed(v); + return raw_atomic_long_inc_return_relaxed(v); } static __always_inline long @@ -1508,14 +1503,14 @@ atomic_long_fetch_inc(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_inc(v); + return raw_atomic_long_fetch_inc(v); } static __always_inline long atomic_long_fetch_inc_acquire(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_inc_acquire(v); + return raw_atomic_long_fetch_inc_acquire(v); } static __always_inline long @@ -1523,21 +1518,21 @@ atomic_long_fetch_inc_release(atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_inc_release(v); + return raw_atomic_long_fetch_inc_release(v); } static __always_inline long atomic_long_fetch_inc_relaxed(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_inc_relaxed(v); + return raw_atomic_long_fetch_inc_relaxed(v); } static __always_inline void atomic_long_dec(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_dec(v); + raw_atomic_long_dec(v); } static __always_inline long @@ -1545,14 +1540,14 @@ atomic_long_dec_return(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_return(v); + return raw_atomic_long_dec_return(v); } static __always_inline long atomic_long_dec_return_acquire(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_return_acquire(v); + return raw_atomic_long_dec_return_acquire(v); } static __always_inline long @@ -1560,14 +1555,14 @@ atomic_long_dec_return_release(atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, 
sizeof(*v)); - return arch_atomic_long_dec_return_release(v); + return raw_atomic_long_dec_return_release(v); } static __always_inline long atomic_long_dec_return_relaxed(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_return_relaxed(v); + return raw_atomic_long_dec_return_relaxed(v); } static __always_inline long @@ -1575,14 +1570,14 @@ atomic_long_fetch_dec(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_dec(v); + return raw_atomic_long_fetch_dec(v); } static __always_inline long atomic_long_fetch_dec_acquire(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_dec_acquire(v); + return raw_atomic_long_fetch_dec_acquire(v); } static __always_inline long @@ -1590,21 +1585,21 @@ atomic_long_fetch_dec_release(atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_dec_release(v); + return raw_atomic_long_fetch_dec_release(v); } static __always_inline long atomic_long_fetch_dec_relaxed(atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_dec_relaxed(v); + return raw_atomic_long_fetch_dec_relaxed(v); } static __always_inline void atomic_long_and(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_and(i, v); + raw_atomic_long_and(i, v); } static __always_inline long @@ -1612,14 +1607,14 @@ atomic_long_fetch_and(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_and(i, v); + return raw_atomic_long_fetch_and(i, v); } static __always_inline long atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_and_acquire(i, v); + return raw_atomic_long_fetch_and_acquire(i, v); } static __always_inline long @@ -1627,21 +1622,21 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_and_release(i, v); + return raw_atomic_long_fetch_and_release(i, v); } static __always_inline long atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_and_relaxed(i, v); + return raw_atomic_long_fetch_and_relaxed(i, v); } static __always_inline void atomic_long_andnot(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_andnot(i, v); + raw_atomic_long_andnot(i, v); } static __always_inline long @@ -1649,14 +1644,14 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_andnot(i, v); + return raw_atomic_long_fetch_andnot(i, v); } static __always_inline long atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_andnot_acquire(i, v); + return raw_atomic_long_fetch_andnot_acquire(i, v); } static __always_inline long @@ -1664,21 +1659,21 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_andnot_release(i, v); + return raw_atomic_long_fetch_andnot_release(i, v); } static __always_inline long atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return 
arch_atomic_long_fetch_andnot_relaxed(i, v); + return raw_atomic_long_fetch_andnot_relaxed(i, v); } static __always_inline void atomic_long_or(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_or(i, v); + raw_atomic_long_or(i, v); } static __always_inline long @@ -1686,14 +1681,14 @@ atomic_long_fetch_or(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_or(i, v); + return raw_atomic_long_fetch_or(i, v); } static __always_inline long atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_or_acquire(i, v); + return raw_atomic_long_fetch_or_acquire(i, v); } static __always_inline long @@ -1701,21 +1696,21 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_or_release(i, v); + return raw_atomic_long_fetch_or_release(i, v); } static __always_inline long atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_or_relaxed(i, v); + return raw_atomic_long_fetch_or_relaxed(i, v); } static __always_inline void atomic_long_xor(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - arch_atomic_long_xor(i, v); + raw_atomic_long_xor(i, v); } static __always_inline long @@ -1723,14 +1718,14 @@ atomic_long_fetch_xor(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_xor(i, v); + return raw_atomic_long_fetch_xor(i, v); } static __always_inline long atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_xor_acquire(i, v); + return raw_atomic_long_fetch_xor_acquire(i, v); } static __always_inline long @@ -1738,14 +1733,14 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_xor_release(i, v); + return raw_atomic_long_fetch_xor_release(i, v); } static __always_inline long atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_xor_relaxed(i, v); + return raw_atomic_long_fetch_xor_relaxed(i, v); } static __always_inline long @@ -1753,14 +1748,14 @@ atomic_long_xchg(atomic_long_t *v, long i) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_xchg(v, i); + return raw_atomic_long_xchg(v, i); } static __always_inline long atomic_long_xchg_acquire(atomic_long_t *v, long i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_xchg_acquire(v, i); + return raw_atomic_long_xchg_acquire(v, i); } static __always_inline long @@ -1768,14 +1763,14 @@ atomic_long_xchg_release(atomic_long_t *v, long i) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_xchg_release(v, i); + return raw_atomic_long_xchg_release(v, i); } static __always_inline long atomic_long_xchg_relaxed(atomic_long_t *v, long i) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_xchg_relaxed(v, i); + return raw_atomic_long_xchg_relaxed(v, i); } static __always_inline long @@ -1783,14 +1778,14 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_cmpxchg(v, 
old, new); + return raw_atomic_long_cmpxchg(v, old, new); } static __always_inline long atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_cmpxchg_acquire(v, old, new); + return raw_atomic_long_cmpxchg_acquire(v, old, new); } static __always_inline long @@ -1798,14 +1793,14 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_cmpxchg_release(v, old, new); + return raw_atomic_long_cmpxchg_release(v, old, new); } static __always_inline long atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_cmpxchg_relaxed(v, old, new); + return raw_atomic_long_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -1814,7 +1809,7 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_long_try_cmpxchg(v, old, new); + return raw_atomic_long_try_cmpxchg(v, old, new); } static __always_inline bool @@ -1822,7 +1817,7 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_long_try_cmpxchg_acquire(v, old, new); + return raw_atomic_long_try_cmpxchg_acquire(v, old, new); } static __always_inline bool @@ -1831,7 +1826,7 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_long_try_cmpxchg_release(v, old, new); + return raw_atomic_long_try_cmpxchg_release(v, old, new); } static __always_inline bool @@ -1839,7 +1834,7 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); - return arch_atomic_long_try_cmpxchg_relaxed(v, old, new); + return raw_atomic_long_try_cmpxchg_relaxed(v, old, new); } static __always_inline bool @@ -1847,7 +1842,7 @@ atomic_long_sub_and_test(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_sub_and_test(i, v); + return raw_atomic_long_sub_and_test(i, v); } static __always_inline bool @@ -1855,7 +1850,7 @@ atomic_long_dec_and_test(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_and_test(v); + return raw_atomic_long_dec_and_test(v); } static __always_inline bool @@ -1863,7 +1858,7 @@ atomic_long_inc_and_test(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_and_test(v); + return raw_atomic_long_inc_and_test(v); } static __always_inline bool @@ -1871,14 +1866,14 @@ atomic_long_add_negative(long i, atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_negative(i, v); + return raw_atomic_long_add_negative(i, v); } static __always_inline bool atomic_long_add_negative_acquire(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_negative_acquire(i, v); + return raw_atomic_long_add_negative_acquire(i, v); } static __always_inline bool @@ -1886,14 +1881,14 @@ atomic_long_add_negative_release(long i, atomic_long_t 
*v) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_negative_release(i, v); + return raw_atomic_long_add_negative_release(i, v); } static __always_inline bool atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_negative_relaxed(i, v); + return raw_atomic_long_add_negative_relaxed(i, v); } static __always_inline long @@ -1901,7 +1896,7 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_fetch_add_unless(v, a, u); + return raw_atomic_long_fetch_add_unless(v, a, u); } static __always_inline bool @@ -1909,7 +1904,7 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_add_unless(v, a, u); + return raw_atomic_long_add_unless(v, a, u); } static __always_inline bool @@ -1917,7 +1912,7 @@ atomic_long_inc_not_zero(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_not_zero(v); + return raw_atomic_long_inc_not_zero(v); } static __always_inline bool @@ -1925,7 +1920,7 @@ atomic_long_inc_unless_negative(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_inc_unless_negative(v); + return raw_atomic_long_inc_unless_negative(v); } static __always_inline bool @@ -1933,7 +1928,7 @@ atomic_long_dec_unless_positive(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_unless_positive(v); + return raw_atomic_long_dec_unless_positive(v); } static __always_inline long @@ -1941,7 +1936,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return arch_atomic_long_dec_if_positive(v); + return raw_atomic_long_dec_if_positive(v); } #define xchg(ptr, ...) \ @@ -1949,14 +1944,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg(__ai_ptr, __VA_ARGS__); \ + raw_xchg(__ai_ptr, __VA_ARGS__); \ }) #define xchg_acquire(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \ + raw_xchg_acquire(__ai_ptr, __VA_ARGS__); \ }) #define xchg_release(ptr, ...) \ @@ -1964,14 +1959,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_release(__ai_ptr, __VA_ARGS__); \ + raw_xchg_release(__ai_ptr, __VA_ARGS__); \ }) #define xchg_relaxed(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \ + raw_xchg_relaxed(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg(ptr, ...) \ @@ -1979,14 +1974,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg_acquire(ptr, ...) 
\ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg_release(ptr, ...) \ @@ -1994,14 +1989,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg_relaxed(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64(ptr, ...) \ @@ -2009,14 +2004,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64_acquire(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64_release(ptr, ...) \ @@ -2024,14 +2019,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64_relaxed(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128(ptr, ...) \ @@ -2039,14 +2034,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128_acquire(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128_acquire(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128_acquire(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128_release(ptr, ...) \ @@ -2054,14 +2049,14 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128_release(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128_release(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128_relaxed(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128_relaxed(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128_relaxed(__ai_ptr, __VA_ARGS__); \ }) #define try_cmpxchg(ptr, oldp, ...) \ @@ -2071,7 +2066,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg_acquire(ptr, oldp, ...) 
\ @@ -2080,7 +2075,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg_release(ptr, oldp, ...) \ @@ -2090,7 +2085,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg_relaxed(ptr, oldp, ...) \ @@ -2099,7 +2094,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64(ptr, oldp, ...) \ @@ -2109,7 +2104,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64_acquire(ptr, oldp, ...) \ @@ -2118,7 +2113,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64_release(ptr, oldp, ...) \ @@ -2128,7 +2123,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64_relaxed(ptr, oldp, ...) \ @@ -2137,7 +2132,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128(ptr, oldp, ...) \ @@ -2147,7 +2142,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128_acquire(ptr, oldp, ...) \ @@ -2156,7 +2151,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128_release(ptr, oldp, ...) 
\ @@ -2166,7 +2161,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) kcsan_release(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128_relaxed(ptr, oldp, ...) \ @@ -2175,28 +2170,28 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define cmpxchg_local(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg_local(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg64_local(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \ }) #define cmpxchg128_local(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_cmpxchg128_local(__ai_ptr, __VA_ARGS__); \ + raw_cmpxchg128_local(__ai_ptr, __VA_ARGS__); \ }) #define sync_cmpxchg(ptr, ...) \ @@ -2204,7 +2199,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(ptr) __ai_ptr = (ptr); \ kcsan_mb(); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ - arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ + raw_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) #define try_cmpxchg_local(ptr, oldp, ...) \ @@ -2213,7 +2208,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg64_local(ptr, oldp, ...) \ @@ -2222,7 +2217,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg64_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg64_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #define try_cmpxchg128_local(ptr, oldp, ...) 
\ @@ -2231,9 +2226,9 @@ atomic_long_dec_if_positive(atomic_long_t *v) typeof(oldp) __ai_oldp = (oldp); \ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \ - arch_try_cmpxchg128_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ + raw_try_cmpxchg128_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \ }) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// 3611991b015450e119bcd7417a9431af7f3ba13c +// f6502977180430e61c1a7c4e5e665f04f501fb8d diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h new file mode 100644 index 0000000000000..83ff0269657e7 --- /dev/null +++ b/include/linux/atomic/atomic-raw.h @@ -0,0 +1,1645 @@ +// SPDX-License-Identifier: GPL-2.0 + +// Generated by scripts/atomic/gen-atomic-raw.sh +// DO NOT MODIFY THIS FILE DIRECTLY + +#ifndef _LINUX_ATOMIC_RAW_H +#define _LINUX_ATOMIC_RAW_H + +static __always_inline int +raw_atomic_read(const atomic_t *v) +{ + return arch_atomic_read(v); +} + +static __always_inline int +raw_atomic_read_acquire(const atomic_t *v) +{ + return arch_atomic_read_acquire(v); +} + +static __always_inline void +raw_atomic_set(atomic_t *v, int i) +{ + arch_atomic_set(v, i); +} + +static __always_inline void +raw_atomic_set_release(atomic_t *v, int i) +{ + arch_atomic_set_release(v, i); +} + +static __always_inline void +raw_atomic_add(int i, atomic_t *v) +{ + arch_atomic_add(i, v); +} + +static __always_inline int +raw_atomic_add_return(int i, atomic_t *v) +{ + return arch_atomic_add_return(i, v); +} + +static __always_inline int +raw_atomic_add_return_acquire(int i, atomic_t *v) +{ + return arch_atomic_add_return_acquire(i, v); +} + +static __always_inline int +raw_atomic_add_return_release(int i, atomic_t *v) +{ + return arch_atomic_add_return_release(i, v); +} + +static __always_inline int +raw_atomic_add_return_relaxed(int i, atomic_t *v) +{ + return arch_atomic_add_return_relaxed(i, v); +} + +static __always_inline int +raw_atomic_fetch_add(int i, atomic_t *v) +{ + return arch_atomic_fetch_add(i, v); +} + +static __always_inline int +raw_atomic_fetch_add_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_add_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_add_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_add_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_add_relaxed(i, v); +} + +static __always_inline void +raw_atomic_sub(int i, atomic_t *v) +{ + arch_atomic_sub(i, v); +} + +static __always_inline int +raw_atomic_sub_return(int i, atomic_t *v) +{ + return arch_atomic_sub_return(i, v); +} + +static __always_inline int +raw_atomic_sub_return_acquire(int i, atomic_t *v) +{ + return arch_atomic_sub_return_acquire(i, v); +} + +static __always_inline int +raw_atomic_sub_return_release(int i, atomic_t *v) +{ + return arch_atomic_sub_return_release(i, v); +} + +static __always_inline int +raw_atomic_sub_return_relaxed(int i, atomic_t *v) +{ + return arch_atomic_sub_return_relaxed(i, v); +} + +static __always_inline int +raw_atomic_fetch_sub(int i, atomic_t *v) +{ + return arch_atomic_fetch_sub(i, v); +} + +static __always_inline int +raw_atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_sub_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_sub_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_sub_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ + return 
arch_atomic_fetch_sub_relaxed(i, v); +} + +static __always_inline void +raw_atomic_inc(atomic_t *v) +{ + arch_atomic_inc(v); +} + +static __always_inline int +raw_atomic_inc_return(atomic_t *v) +{ + return arch_atomic_inc_return(v); +} + +static __always_inline int +raw_atomic_inc_return_acquire(atomic_t *v) +{ + return arch_atomic_inc_return_acquire(v); +} + +static __always_inline int +raw_atomic_inc_return_release(atomic_t *v) +{ + return arch_atomic_inc_return_release(v); +} + +static __always_inline int +raw_atomic_inc_return_relaxed(atomic_t *v) +{ + return arch_atomic_inc_return_relaxed(v); +} + +static __always_inline int +raw_atomic_fetch_inc(atomic_t *v) +{ + return arch_atomic_fetch_inc(v); +} + +static __always_inline int +raw_atomic_fetch_inc_acquire(atomic_t *v) +{ + return arch_atomic_fetch_inc_acquire(v); +} + +static __always_inline int +raw_atomic_fetch_inc_release(atomic_t *v) +{ + return arch_atomic_fetch_inc_release(v); +} + +static __always_inline int +raw_atomic_fetch_inc_relaxed(atomic_t *v) +{ + return arch_atomic_fetch_inc_relaxed(v); +} + +static __always_inline void +raw_atomic_dec(atomic_t *v) +{ + arch_atomic_dec(v); +} + +static __always_inline int +raw_atomic_dec_return(atomic_t *v) +{ + return arch_atomic_dec_return(v); +} + +static __always_inline int +raw_atomic_dec_return_acquire(atomic_t *v) +{ + return arch_atomic_dec_return_acquire(v); +} + +static __always_inline int +raw_atomic_dec_return_release(atomic_t *v) +{ + return arch_atomic_dec_return_release(v); +} + +static __always_inline int +raw_atomic_dec_return_relaxed(atomic_t *v) +{ + return arch_atomic_dec_return_relaxed(v); +} + +static __always_inline int +raw_atomic_fetch_dec(atomic_t *v) +{ + return arch_atomic_fetch_dec(v); +} + +static __always_inline int +raw_atomic_fetch_dec_acquire(atomic_t *v) +{ + return arch_atomic_fetch_dec_acquire(v); +} + +static __always_inline int +raw_atomic_fetch_dec_release(atomic_t *v) +{ + return arch_atomic_fetch_dec_release(v); +} + +static __always_inline int +raw_atomic_fetch_dec_relaxed(atomic_t *v) +{ + return arch_atomic_fetch_dec_relaxed(v); +} + +static __always_inline void +raw_atomic_and(int i, atomic_t *v) +{ + arch_atomic_and(i, v); +} + +static __always_inline int +raw_atomic_fetch_and(int i, atomic_t *v) +{ + return arch_atomic_fetch_and(i, v); +} + +static __always_inline int +raw_atomic_fetch_and_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_and_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_and_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_and_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_and_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_and_relaxed(i, v); +} + +static __always_inline void +raw_atomic_andnot(int i, atomic_t *v) +{ + arch_atomic_andnot(i, v); +} + +static __always_inline int +raw_atomic_fetch_andnot(int i, atomic_t *v) +{ + return arch_atomic_fetch_andnot(i, v); +} + +static __always_inline int +raw_atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_andnot_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_andnot_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_andnot_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_andnot_relaxed(i, v); +} + +static __always_inline void +raw_atomic_or(int i, atomic_t *v) +{ + arch_atomic_or(i, v); +} + +static __always_inline int +raw_atomic_fetch_or(int i, atomic_t *v) +{ + return 
arch_atomic_fetch_or(i, v); +} + +static __always_inline int +raw_atomic_fetch_or_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_or_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_or_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_or_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_or_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_or_relaxed(i, v); +} + +static __always_inline void +raw_atomic_xor(int i, atomic_t *v) +{ + arch_atomic_xor(i, v); +} + +static __always_inline int +raw_atomic_fetch_xor(int i, atomic_t *v) +{ + return arch_atomic_fetch_xor(i, v); +} + +static __always_inline int +raw_atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + return arch_atomic_fetch_xor_acquire(i, v); +} + +static __always_inline int +raw_atomic_fetch_xor_release(int i, atomic_t *v) +{ + return arch_atomic_fetch_xor_release(i, v); +} + +static __always_inline int +raw_atomic_fetch_xor_relaxed(int i, atomic_t *v) +{ + return arch_atomic_fetch_xor_relaxed(i, v); +} + +static __always_inline int +raw_atomic_xchg(atomic_t *v, int i) +{ + return arch_atomic_xchg(v, i); +} + +static __always_inline int +raw_atomic_xchg_acquire(atomic_t *v, int i) +{ + return arch_atomic_xchg_acquire(v, i); +} + +static __always_inline int +raw_atomic_xchg_release(atomic_t *v, int i) +{ + return arch_atomic_xchg_release(v, i); +} + +static __always_inline int +raw_atomic_xchg_relaxed(atomic_t *v, int i) +{ + return arch_atomic_xchg_relaxed(v, i); +} + +static __always_inline int +raw_atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return arch_atomic_cmpxchg(v, old, new); +} + +static __always_inline int +raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + return arch_atomic_cmpxchg_acquire(v, old, new); +} + +static __always_inline int +raw_atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + return arch_atomic_cmpxchg_release(v, old, new); +} + +static __always_inline int +raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + return arch_atomic_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + return arch_atomic_try_cmpxchg(v, old, new); +} + +static __always_inline bool +raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + return arch_atomic_try_cmpxchg_acquire(v, old, new); +} + +static __always_inline bool +raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + return arch_atomic_try_cmpxchg_release(v, old, new); +} + +static __always_inline bool +raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + return arch_atomic_try_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic_sub_and_test(int i, atomic_t *v) +{ + return arch_atomic_sub_and_test(i, v); +} + +static __always_inline bool +raw_atomic_dec_and_test(atomic_t *v) +{ + return arch_atomic_dec_and_test(v); +} + +static __always_inline bool +raw_atomic_inc_and_test(atomic_t *v) +{ + return arch_atomic_inc_and_test(v); +} + +static __always_inline bool +raw_atomic_add_negative(int i, atomic_t *v) +{ + return arch_atomic_add_negative(i, v); +} + +static __always_inline bool +raw_atomic_add_negative_acquire(int i, atomic_t *v) +{ + return arch_atomic_add_negative_acquire(i, v); +} + +static __always_inline bool +raw_atomic_add_negative_release(int i, atomic_t *v) +{ + return arch_atomic_add_negative_release(i, v); +} + +static __always_inline bool +raw_atomic_add_negative_relaxed(int i, atomic_t *v) +{ + return 
arch_atomic_add_negative_relaxed(i, v); +} + +static __always_inline int +raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + return arch_atomic_fetch_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic_add_unless(atomic_t *v, int a, int u) +{ + return arch_atomic_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic_inc_not_zero(atomic_t *v) +{ + return arch_atomic_inc_not_zero(v); +} + +static __always_inline bool +raw_atomic_inc_unless_negative(atomic_t *v) +{ + return arch_atomic_inc_unless_negative(v); +} + +static __always_inline bool +raw_atomic_dec_unless_positive(atomic_t *v) +{ + return arch_atomic_dec_unless_positive(v); +} + +static __always_inline int +raw_atomic_dec_if_positive(atomic_t *v) +{ + return arch_atomic_dec_if_positive(v); +} + +static __always_inline s64 +raw_atomic64_read(const atomic64_t *v) +{ + return arch_atomic64_read(v); +} + +static __always_inline s64 +raw_atomic64_read_acquire(const atomic64_t *v) +{ + return arch_atomic64_read_acquire(v); +} + +static __always_inline void +raw_atomic64_set(atomic64_t *v, s64 i) +{ + arch_atomic64_set(v, i); +} + +static __always_inline void +raw_atomic64_set_release(atomic64_t *v, s64 i) +{ + arch_atomic64_set_release(v, i); +} + +static __always_inline void +raw_atomic64_add(s64 i, atomic64_t *v) +{ + arch_atomic64_add(i, v); +} + +static __always_inline s64 +raw_atomic64_add_return(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return(i, v); +} + +static __always_inline s64 +raw_atomic64_add_return_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_add_return_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_release(i, v); +} + +static __always_inline s64 +raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_relaxed(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_add(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_add_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_add_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_add_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_sub(s64 i, atomic64_t *v) +{ + arch_atomic64_sub(i, v); +} + +static __always_inline s64 +raw_atomic64_sub_return(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_return(i, v); +} + +static __always_inline s64 +raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_return_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_return_release(i, v); +} + +static __always_inline s64 +raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_return_relaxed(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_sub(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_sub_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_sub_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_sub_relaxed(s64 i, 
atomic64_t *v) +{ + return arch_atomic64_fetch_sub_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_inc(atomic64_t *v) +{ + arch_atomic64_inc(v); +} + +static __always_inline s64 +raw_atomic64_inc_return(atomic64_t *v) +{ + return arch_atomic64_inc_return(v); +} + +static __always_inline s64 +raw_atomic64_inc_return_acquire(atomic64_t *v) +{ + return arch_atomic64_inc_return_acquire(v); +} + +static __always_inline s64 +raw_atomic64_inc_return_release(atomic64_t *v) +{ + return arch_atomic64_inc_return_release(v); +} + +static __always_inline s64 +raw_atomic64_inc_return_relaxed(atomic64_t *v) +{ + return arch_atomic64_inc_return_relaxed(v); +} + +static __always_inline s64 +raw_atomic64_fetch_inc(atomic64_t *v) +{ + return arch_atomic64_fetch_inc(v); +} + +static __always_inline s64 +raw_atomic64_fetch_inc_acquire(atomic64_t *v) +{ + return arch_atomic64_fetch_inc_acquire(v); +} + +static __always_inline s64 +raw_atomic64_fetch_inc_release(atomic64_t *v) +{ + return arch_atomic64_fetch_inc_release(v); +} + +static __always_inline s64 +raw_atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + return arch_atomic64_fetch_inc_relaxed(v); +} + +static __always_inline void +raw_atomic64_dec(atomic64_t *v) +{ + arch_atomic64_dec(v); +} + +static __always_inline s64 +raw_atomic64_dec_return(atomic64_t *v) +{ + return arch_atomic64_dec_return(v); +} + +static __always_inline s64 +raw_atomic64_dec_return_acquire(atomic64_t *v) +{ + return arch_atomic64_dec_return_acquire(v); +} + +static __always_inline s64 +raw_atomic64_dec_return_release(atomic64_t *v) +{ + return arch_atomic64_dec_return_release(v); +} + +static __always_inline s64 +raw_atomic64_dec_return_relaxed(atomic64_t *v) +{ + return arch_atomic64_dec_return_relaxed(v); +} + +static __always_inline s64 +raw_atomic64_fetch_dec(atomic64_t *v) +{ + return arch_atomic64_fetch_dec(v); +} + +static __always_inline s64 +raw_atomic64_fetch_dec_acquire(atomic64_t *v) +{ + return arch_atomic64_fetch_dec_acquire(v); +} + +static __always_inline s64 +raw_atomic64_fetch_dec_release(atomic64_t *v) +{ + return arch_atomic64_fetch_dec_release(v); +} + +static __always_inline s64 +raw_atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + return arch_atomic64_fetch_dec_relaxed(v); +} + +static __always_inline void +raw_atomic64_and(s64 i, atomic64_t *v) +{ + arch_atomic64_and(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_and(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_and(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_and_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_and_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_and_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_andnot(s64 i, atomic64_t *v) +{ + arch_atomic64_andnot(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_andnot(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_andnot_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_andnot_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + return 
arch_atomic64_fetch_andnot_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_or(s64 i, atomic64_t *v) +{ + arch_atomic64_or(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_or(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_or(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_or_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_or_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_or_relaxed(i, v); +} + +static __always_inline void +raw_atomic64_xor(s64 i, atomic64_t *v) +{ + arch_atomic64_xor(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_xor(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_xor_acquire(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_xor_release(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_fetch_xor_relaxed(i, v); +} + +static __always_inline s64 +raw_atomic64_xchg(atomic64_t *v, s64 i) +{ + return arch_atomic64_xchg(v, i); +} + +static __always_inline s64 +raw_atomic64_xchg_acquire(atomic64_t *v, s64 i) +{ + return arch_atomic64_xchg_acquire(v, i); +} + +static __always_inline s64 +raw_atomic64_xchg_release(atomic64_t *v, s64 i) +{ + return arch_atomic64_xchg_release(v, i); +} + +static __always_inline s64 +raw_atomic64_xchg_relaxed(atomic64_t *v, s64 i) +{ + return arch_atomic64_xchg_relaxed(v, i); +} + +static __always_inline s64 +raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + return arch_atomic64_cmpxchg(v, old, new); +} + +static __always_inline s64 +raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + return arch_atomic64_cmpxchg_acquire(v, old, new); +} + +static __always_inline s64 +raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + return arch_atomic64_cmpxchg_release(v, old, new); +} + +static __always_inline s64 +raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + return arch_atomic64_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + return arch_atomic64_try_cmpxchg(v, old, new); +} + +static __always_inline bool +raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + return arch_atomic64_try_cmpxchg_acquire(v, old, new); +} + +static __always_inline bool +raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + return arch_atomic64_try_cmpxchg_release(v, old, new); +} + +static __always_inline bool +raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) +{ + return arch_atomic64_try_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + return arch_atomic64_sub_and_test(i, v); +} + +static __always_inline bool +raw_atomic64_dec_and_test(atomic64_t *v) +{ + return arch_atomic64_dec_and_test(v); +} + +static __always_inline bool +raw_atomic64_inc_and_test(atomic64_t *v) +{ + return arch_atomic64_inc_and_test(v); +} + +static __always_inline bool +raw_atomic64_add_negative(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_negative(i, v); +} + +static 
__always_inline bool +raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_negative_acquire(i, v); +} + +static __always_inline bool +raw_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_negative_release(i, v); +} + +static __always_inline bool +raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_negative_relaxed(i, v); +} + +static __always_inline s64 +raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return arch_atomic64_fetch_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return arch_atomic64_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic64_inc_not_zero(atomic64_t *v) +{ + return arch_atomic64_inc_not_zero(v); +} + +static __always_inline bool +raw_atomic64_inc_unless_negative(atomic64_t *v) +{ + return arch_atomic64_inc_unless_negative(v); +} + +static __always_inline bool +raw_atomic64_dec_unless_positive(atomic64_t *v) +{ + return arch_atomic64_dec_unless_positive(v); +} + +static __always_inline s64 +raw_atomic64_dec_if_positive(atomic64_t *v) +{ + return arch_atomic64_dec_if_positive(v); +} + +static __always_inline long +raw_atomic_long_read(const atomic_long_t *v) +{ + return arch_atomic_long_read(v); +} + +static __always_inline long +raw_atomic_long_read_acquire(const atomic_long_t *v) +{ + return arch_atomic_long_read_acquire(v); +} + +static __always_inline void +raw_atomic_long_set(atomic_long_t *v, long i) +{ + arch_atomic_long_set(v, i); +} + +static __always_inline void +raw_atomic_long_set_release(atomic_long_t *v, long i) +{ + arch_atomic_long_set_release(v, i); +} + +static __always_inline void +raw_atomic_long_add(long i, atomic_long_t *v) +{ + arch_atomic_long_add(i, v); +} + +static __always_inline long +raw_atomic_long_add_return(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_return(i, v); +} + +static __always_inline long +raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_return_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_add_return_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_return_release(i, v); +} + +static __always_inline long +raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_return_relaxed(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_add(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_add_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_add_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_add_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_sub(long i, atomic_long_t *v) +{ + arch_atomic_long_sub(i, v); +} + +static __always_inline long +raw_atomic_long_sub_return(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_return(i, v); +} + +static __always_inline long +raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_return_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_sub_return_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_return_release(i, v); +} + +static 
__always_inline long +raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_return_relaxed(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_sub(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_sub(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_sub_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_sub_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_sub_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_inc(atomic_long_t *v) +{ + arch_atomic_long_inc(v); +} + +static __always_inline long +raw_atomic_long_inc_return(atomic_long_t *v) +{ + return arch_atomic_long_inc_return(v); +} + +static __always_inline long +raw_atomic_long_inc_return_acquire(atomic_long_t *v) +{ + return arch_atomic_long_inc_return_acquire(v); +} + +static __always_inline long +raw_atomic_long_inc_return_release(atomic_long_t *v) +{ + return arch_atomic_long_inc_return_release(v); +} + +static __always_inline long +raw_atomic_long_inc_return_relaxed(atomic_long_t *v) +{ + return arch_atomic_long_inc_return_relaxed(v); +} + +static __always_inline long +raw_atomic_long_fetch_inc(atomic_long_t *v) +{ + return arch_atomic_long_fetch_inc(v); +} + +static __always_inline long +raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) +{ + return arch_atomic_long_fetch_inc_acquire(v); +} + +static __always_inline long +raw_atomic_long_fetch_inc_release(atomic_long_t *v) +{ + return arch_atomic_long_fetch_inc_release(v); +} + +static __always_inline long +raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) +{ + return arch_atomic_long_fetch_inc_relaxed(v); +} + +static __always_inline void +raw_atomic_long_dec(atomic_long_t *v) +{ + arch_atomic_long_dec(v); +} + +static __always_inline long +raw_atomic_long_dec_return(atomic_long_t *v) +{ + return arch_atomic_long_dec_return(v); +} + +static __always_inline long +raw_atomic_long_dec_return_acquire(atomic_long_t *v) +{ + return arch_atomic_long_dec_return_acquire(v); +} + +static __always_inline long +raw_atomic_long_dec_return_release(atomic_long_t *v) +{ + return arch_atomic_long_dec_return_release(v); +} + +static __always_inline long +raw_atomic_long_dec_return_relaxed(atomic_long_t *v) +{ + return arch_atomic_long_dec_return_relaxed(v); +} + +static __always_inline long +raw_atomic_long_fetch_dec(atomic_long_t *v) +{ + return arch_atomic_long_fetch_dec(v); +} + +static __always_inline long +raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) +{ + return arch_atomic_long_fetch_dec_acquire(v); +} + +static __always_inline long +raw_atomic_long_fetch_dec_release(atomic_long_t *v) +{ + return arch_atomic_long_fetch_dec_release(v); +} + +static __always_inline long +raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) +{ + return arch_atomic_long_fetch_dec_relaxed(v); +} + +static __always_inline void +raw_atomic_long_and(long i, atomic_long_t *v) +{ + arch_atomic_long_and(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_and(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_and(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_and_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_and_release(long 
i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_and_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_and_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_andnot(long i, atomic_long_t *v) +{ + arch_atomic_long_andnot(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_andnot(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_andnot_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_andnot_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_andnot_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_or(long i, atomic_long_t *v) +{ + arch_atomic_long_or(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_or(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_or(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_or_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_or_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_or_relaxed(i, v); +} + +static __always_inline void +raw_atomic_long_xor(long i, atomic_long_t *v) +{ + arch_atomic_long_xor(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_xor(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_xor(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_xor_acquire(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_xor_release(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_fetch_xor_relaxed(i, v); +} + +static __always_inline long +raw_atomic_long_xchg(atomic_long_t *v, long i) +{ + return arch_atomic_long_xchg(v, i); +} + +static __always_inline long +raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) +{ + return arch_atomic_long_xchg_acquire(v, i); +} + +static __always_inline long +raw_atomic_long_xchg_release(atomic_long_t *v, long i) +{ + return arch_atomic_long_xchg_release(v, i); +} + +static __always_inline long +raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) +{ + return arch_atomic_long_xchg_relaxed(v, i); +} + +static __always_inline long +raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) +{ + return arch_atomic_long_cmpxchg(v, old, new); +} + +static __always_inline long +raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) +{ + return arch_atomic_long_cmpxchg_acquire(v, old, new); +} + +static __always_inline long +raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) +{ + return arch_atomic_long_cmpxchg_release(v, old, new); +} + +static __always_inline long +raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) +{ + return arch_atomic_long_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool 
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) +{ + return arch_atomic_long_try_cmpxchg(v, old, new); +} + +static __always_inline bool +raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) +{ + return arch_atomic_long_try_cmpxchg_acquire(v, old, new); +} + +static __always_inline bool +raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) +{ + return arch_atomic_long_try_cmpxchg_release(v, old, new); +} + +static __always_inline bool +raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) +{ + return arch_atomic_long_try_cmpxchg_relaxed(v, old, new); +} + +static __always_inline bool +raw_atomic_long_sub_and_test(long i, atomic_long_t *v) +{ + return arch_atomic_long_sub_and_test(i, v); +} + +static __always_inline bool +raw_atomic_long_dec_and_test(atomic_long_t *v) +{ + return arch_atomic_long_dec_and_test(v); +} + +static __always_inline bool +raw_atomic_long_inc_and_test(atomic_long_t *v) +{ + return arch_atomic_long_inc_and_test(v); +} + +static __always_inline bool +raw_atomic_long_add_negative(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_negative(i, v); +} + +static __always_inline bool +raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_negative_acquire(i, v); +} + +static __always_inline bool +raw_atomic_long_add_negative_release(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_negative_release(i, v); +} + +static __always_inline bool +raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_long_add_negative_relaxed(i, v); +} + +static __always_inline long +raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) +{ + return arch_atomic_long_fetch_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) +{ + return arch_atomic_long_add_unless(v, a, u); +} + +static __always_inline bool +raw_atomic_long_inc_not_zero(atomic_long_t *v) +{ + return arch_atomic_long_inc_not_zero(v); +} + +static __always_inline bool +raw_atomic_long_inc_unless_negative(atomic_long_t *v) +{ + return arch_atomic_long_inc_unless_negative(v); +} + +static __always_inline bool +raw_atomic_long_dec_unless_positive(atomic_long_t *v) +{ + return arch_atomic_long_dec_unless_positive(v); +} + +static __always_inline long +raw_atomic_long_dec_if_positive(atomic_long_t *v) +{ + return arch_atomic_long_dec_if_positive(v); +} + +#define raw_xchg(...) \ + arch_xchg(__VA_ARGS__) + +#define raw_xchg_acquire(...) \ + arch_xchg_acquire(__VA_ARGS__) + +#define raw_xchg_release(...) \ + arch_xchg_release(__VA_ARGS__) + +#define raw_xchg_relaxed(...) \ + arch_xchg_relaxed(__VA_ARGS__) + +#define raw_cmpxchg(...) \ + arch_cmpxchg(__VA_ARGS__) + +#define raw_cmpxchg_acquire(...) \ + arch_cmpxchg_acquire(__VA_ARGS__) + +#define raw_cmpxchg_release(...) \ + arch_cmpxchg_release(__VA_ARGS__) + +#define raw_cmpxchg_relaxed(...) \ + arch_cmpxchg_relaxed(__VA_ARGS__) + +#define raw_cmpxchg64(...) \ + arch_cmpxchg64(__VA_ARGS__) + +#define raw_cmpxchg64_acquire(...) \ + arch_cmpxchg64_acquire(__VA_ARGS__) + +#define raw_cmpxchg64_release(...) \ + arch_cmpxchg64_release(__VA_ARGS__) + +#define raw_cmpxchg64_relaxed(...) \ + arch_cmpxchg64_relaxed(__VA_ARGS__) + +#define raw_cmpxchg128(...) \ + arch_cmpxchg128(__VA_ARGS__) + +#define raw_cmpxchg128_acquire(...) \ + arch_cmpxchg128_acquire(__VA_ARGS__) + +#define raw_cmpxchg128_release(...) 
\ + arch_cmpxchg128_release(__VA_ARGS__) + +#define raw_cmpxchg128_relaxed(...) \ + arch_cmpxchg128_relaxed(__VA_ARGS__) + +#define raw_try_cmpxchg(...) \ + arch_try_cmpxchg(__VA_ARGS__) + +#define raw_try_cmpxchg_acquire(...) \ + arch_try_cmpxchg_acquire(__VA_ARGS__) + +#define raw_try_cmpxchg_release(...) \ + arch_try_cmpxchg_release(__VA_ARGS__) + +#define raw_try_cmpxchg_relaxed(...) \ + arch_try_cmpxchg_relaxed(__VA_ARGS__) + +#define raw_try_cmpxchg64(...) \ + arch_try_cmpxchg64(__VA_ARGS__) + +#define raw_try_cmpxchg64_acquire(...) \ + arch_try_cmpxchg64_acquire(__VA_ARGS__) + +#define raw_try_cmpxchg64_release(...) \ + arch_try_cmpxchg64_release(__VA_ARGS__) + +#define raw_try_cmpxchg64_relaxed(...) \ + arch_try_cmpxchg64_relaxed(__VA_ARGS__) + +#define raw_try_cmpxchg128(...) \ + arch_try_cmpxchg128(__VA_ARGS__) + +#define raw_try_cmpxchg128_acquire(...) \ + arch_try_cmpxchg128_acquire(__VA_ARGS__) + +#define raw_try_cmpxchg128_release(...) \ + arch_try_cmpxchg128_release(__VA_ARGS__) + +#define raw_try_cmpxchg128_relaxed(...) \ + arch_try_cmpxchg128_relaxed(__VA_ARGS__) + +#define raw_cmpxchg_local(...) \ + arch_cmpxchg_local(__VA_ARGS__) + +#define raw_cmpxchg64_local(...) \ + arch_cmpxchg64_local(__VA_ARGS__) + +#define raw_cmpxchg128_local(...) \ + arch_cmpxchg128_local(__VA_ARGS__) + +#define raw_sync_cmpxchg(...) \ + arch_sync_cmpxchg(__VA_ARGS__) + +#define raw_try_cmpxchg_local(...) \ + arch_try_cmpxchg_local(__VA_ARGS__) + +#define raw_try_cmpxchg64_local(...) \ + arch_try_cmpxchg64_local(__VA_ARGS__) + +#define raw_try_cmpxchg128_local(...) \ + arch_try_cmpxchg128_local(__VA_ARGS__) + +#endif /* _LINUX_ATOMIC_RAW_H */ +// 01d54200571b3857755a07c10074a4fd58cef6b1 diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh index 68557bfbbdc5e..93c949aa9e544 100755 --- a/scripts/atomic/gen-atomic-instrumented.sh +++ b/scripts/atomic/gen-atomic-instrumented.sh @@ -73,7 +73,7 @@ static __always_inline ${ret} ${atomicname}(${params}) { ${checks} - ${retstmt}arch_${atomicname}(${args}); + ${retstmt}raw_${atomicname}(${args}); } EOF @@ -105,7 +105,7 @@ EOF cat < ${LINUXDIR}/include/${header}
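The generated header above is deliberately trivial: each raw_*() op or macro only forwards to the corresponding arch_*() op, while the instrumented wrappers emitted by gen-atomic-instrumented.sh (see the hunk above) run the sanitizer ${checks} first and then call the raw op. A rough user-space model of that three-layer structure follows; the names mirror the kernel's, but the bodies are stand-ins built on GCC's __atomic builtins, and the printf() stands in for the kasan/kcsan checks:

#include <stdio.h>

typedef struct { int counter; } atomic_t;

/* "arch" layer: the architecture's real implementation. */
static inline int arch_atomic_fetch_or(int i, atomic_t *v)
{
	return __atomic_fetch_or(&v->counter, i, __ATOMIC_SEQ_CST);
}

/* "raw" layer: trivial forwarding, no instrumentation; usable from noinstr code. */
static inline int raw_atomic_fetch_or(int i, atomic_t *v)
{
	return arch_atomic_fetch_or(i, v);
}

/* instrumented layer: what ordinary code reaches via atomic_fetch_or(). */
static inline int atomic_fetch_or(int i, atomic_t *v)
{
	printf("instrumented access to %p\n", (void *)v); /* kasan_check_write() stand-in */
	return raw_atomic_fetch_or(i, v);
}

int main(void)
{
	atomic_t v = { .counter = 1 };

	atomic_fetch_or(2, &v);		/* checked path */
	raw_atomic_fetch_or(4, &v);	/* unchecked path */
	printf("counter = %d\n", v.counter);	/* 1 | 2 | 4 == 7 */
	return 0;
}

The point of the split is that the raw layer gives non-instrumentable contexts a name that is guaranteed never to grow checks, without having to reach into arch_*() directly.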
From patchwork Mon Jun 5 07:01:15 2023 X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 103122 From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org, peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH v2 18/27] locking/atomic: treewide: use raw_atomic*_() Date: Mon, 5 Jun 2023 08:01:15 +0100 Message-Id: <20230605070124.3741859-19-mark.rutland@arm.com> In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com> References: <20230605070124.3741859-1-mark.rutland@arm.com> Now that we have raw_atomic*_() definitions, there's no need to use arch_atomic*_() definitions outside of the low-level atomic definitions. Move treewide users of arch_atomic*_() over to the equivalent raw_atomic*_().
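To make the call-site shape concrete before the diff, here is a compilable user-space rendering of the powerpc nmi_ipi_lock() hunk below; the raw_atomic_*() helpers are stand-ins built on GCC builtins rather than the kernel's generated ops, and spin_until_cond() is reduced to a bare busy-wait:

#include <stdbool.h>
#include <stdio.h>

typedef struct { int counter; } atomic_t;

static inline int raw_atomic_read(const atomic_t *v)
{
	return __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
}

/* Returns the value observed at *v, like the kernel's cmpxchg ops. */
static inline int raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
	__atomic_compare_exchange_n(&v->counter, &old, new, false,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return old;
}

static atomic_t __nmi_ipi_lock;

/* Same structure as the converted nmi_ipi_lock(): spin until the cmpxchg wins. */
static void nmi_ipi_lock(void)
{
	while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1)
		while (raw_atomic_read(&__nmi_ipi_lock) != 0)
			; /* spin_until_cond() stand-in */
}

int main(void)
{
	nmi_ipi_lock();
	printf("lock word: %d\n", raw_atomic_read(&__nmi_ipi_lock)); /* 1 */
	return 0;
}

The conversion is purely mechanical: the spelling of the op changes, the arguments and ordering semantics do not.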
There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Reviewed-by: Kees Cook Cc: Boqun Feng Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Will Deacon --- arch/powerpc/kernel/smp.c | 12 ++++++------ arch/x86/kernel/alternative.c | 4 ++-- arch/x86/kernel/cpu/mce/core.c | 16 ++++++++-------- arch/x86/kernel/nmi.c | 2 +- arch/x86/kernel/pvclock.c | 4 ++-- arch/x86/kvm/x86.c | 2 +- include/asm-generic/bitops/atomic.h | 12 ++++++------ include/asm-generic/bitops/lock.h | 8 ++++---- include/linux/context_tracking.h | 4 ++-- include/linux/context_tracking_state.h | 2 +- include/linux/cpumask.h | 2 +- include/linux/jump_label.h | 2 +- kernel/context_tracking.c | 12 ++++++------ kernel/sched/clock.c | 2 +- 14 files changed, 42 insertions(+), 42 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 265801a3e94cf..e8965f18686f0 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -417,9 +417,9 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags) { raw_local_irq_save(*flags); hard_irq_disable(); - while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) { + while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) { raw_local_irq_restore(*flags); - spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0); + spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0); raw_local_irq_save(*flags); hard_irq_disable(); } @@ -427,15 +427,15 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags) noinstr static void nmi_ipi_lock(void) { - while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) - spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0); + while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) + spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0); } noinstr static void nmi_ipi_unlock(void) { smp_mb(); - WARN_ON(arch_atomic_read(&__nmi_ipi_lock) != 1); - arch_atomic_set(&__nmi_ipi_lock, 0); + WARN_ON(raw_atomic_read(&__nmi_ipi_lock) != 1); + raw_atomic_set(&__nmi_ipi_lock, 0); } noinstr static void nmi_ipi_unlock_end(unsigned long *flags) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index f615e0cb6d932..18f16e93838fe 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -1799,7 +1799,7 @@ struct bp_patching_desc *try_get_desc(void) { struct bp_patching_desc *desc = &bp_desc; - if (!arch_atomic_inc_not_zero(&desc->refs)) + if (!raw_atomic_inc_not_zero(&desc->refs)) return NULL; return desc; @@ -1810,7 +1810,7 @@ static __always_inline void put_desc(void) struct bp_patching_desc *desc = &bp_desc; smp_mb__before_atomic(); - arch_atomic_dec(&desc->refs); + raw_atomic_dec(&desc->refs); } static __always_inline void *text_poke_addr(struct text_poke_loc *tp) diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 2eec60f50057a..ab156e6e71208 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1022,12 +1022,12 @@ static noinstr int mce_start(int *no_way_out) if (!timeout) return ret; - arch_atomic_add(*no_way_out, &global_nwo); + raw_atomic_add(*no_way_out, &global_nwo); /* * Rely on the implied barrier below, such that global_nwo * is updated before mce_callin. */ - order = arch_atomic_inc_return(&mce_callin); + order = raw_atomic_inc_return(&mce_callin); arch_cpumask_clear_cpu(smp_processor_id(), &mce_missing_cpus); /* Enable instrumentation around calls to external facilities */ @@ -1036,10 +1036,10 @@ static noinstr int mce_start(int *no_way_out) /* * Wait for everyone. 
*/ - while (arch_atomic_read(&mce_callin) != num_online_cpus()) { + while (raw_atomic_read(&mce_callin) != num_online_cpus()) { if (mce_timed_out(&timeout, "Timeout: Not all CPUs entered broadcast exception handler")) { - arch_atomic_set(&global_nwo, 0); + raw_atomic_set(&global_nwo, 0); goto out; } ndelay(SPINUNIT); @@ -1054,7 +1054,7 @@ static noinstr int mce_start(int *no_way_out) /* * Monarch: Starts executing now, the others wait. */ - arch_atomic_set(&mce_executing, 1); + raw_atomic_set(&mce_executing, 1); } else { /* * Subject: Now start the scanning loop one by one in @@ -1062,10 +1062,10 @@ static noinstr int mce_start(int *no_way_out) * This way when there are any shared banks it will be * only seen by one CPU before cleared, avoiding duplicates. */ - while (arch_atomic_read(&mce_executing) < order) { + while (raw_atomic_read(&mce_executing) < order) { if (mce_timed_out(&timeout, "Timeout: Subject CPUs unable to finish machine check processing")) { - arch_atomic_set(&global_nwo, 0); + raw_atomic_set(&global_nwo, 0); goto out; } ndelay(SPINUNIT); @@ -1075,7 +1075,7 @@ static noinstr int mce_start(int *no_way_out) /* * Cache the global no_way_out state. */ - *no_way_out = arch_atomic_read(&global_nwo); + *no_way_out = raw_atomic_read(&global_nwo); ret = order; diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c index 776f4b1e395b5..a0c551846b35f 100644 --- a/arch/x86/kernel/nmi.c +++ b/arch/x86/kernel/nmi.c @@ -496,7 +496,7 @@ DEFINE_IDTENTRY_RAW(exc_nmi) */ sev_es_nmi_complete(); if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) - arch_atomic_long_inc(&nsp->idt_calls); + raw_atomic_long_inc(&nsp->idt_calls); if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id())) return; diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c index 56acf53a782ad..b3f81379c2fc0 100644 --- a/arch/x86/kernel/pvclock.c +++ b/arch/x86/kernel/pvclock.c @@ -101,11 +101,11 @@ u64 __pvclock_clocksource_read(struct pvclock_vcpu_time_info *src, bool dowd) * updating at the same time, and one of them could be slightly behind, * making the assumption that last_value always go forward fail to hold. 
*/ - last = arch_atomic64_read(&last_value); + last = raw_atomic64_read(&last_value); do { if (ret <= last) return last; - } while (!arch_atomic64_try_cmpxchg(&last_value, &last, ret)); + } while (!raw_atomic64_try_cmpxchg(&last_value, &last, ret)); return ret; } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ceb7c5e9cf9e9..ac6f609068106 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -13155,7 +13155,7 @@ EXPORT_SYMBOL_GPL(kvm_arch_end_assignment); bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm) { - return arch_atomic_read(&kvm->arch.assigned_device_count); + return raw_atomic_read(&kvm->arch.assigned_device_count); } EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device); diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h index 71ab4ba9c25d1..e076e079f6b2e 100644 --- a/include/asm-generic/bitops/atomic.h +++ b/include/asm-generic/bitops/atomic.h @@ -15,21 +15,21 @@ static __always_inline void arch_set_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); - arch_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p); + raw_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p); } static __always_inline void arch_clear_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); - arch_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p); + raw_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p); } static __always_inline void arch_change_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); - arch_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p); + raw_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p); } static __always_inline int @@ -39,7 +39,7 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p) unsigned long mask = BIT_MASK(nr); p += BIT_WORD(nr); - old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_or(mask, (atomic_long_t *)p); return !!(old & mask); } @@ -50,7 +50,7 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p) unsigned long mask = BIT_MASK(nr); p += BIT_WORD(nr); - old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_andnot(mask, (atomic_long_t *)p); return !!(old & mask); } @@ -61,7 +61,7 @@ arch_test_and_change_bit(unsigned int nr, volatile unsigned long *p) unsigned long mask = BIT_MASK(nr); p += BIT_WORD(nr); - old = arch_atomic_long_fetch_xor(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_xor(mask, (atomic_long_t *)p); return !!(old & mask); } diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h index 630f2f6b95956..40913516e654c 100644 --- a/include/asm-generic/bitops/lock.h +++ b/include/asm-generic/bitops/lock.h @@ -25,7 +25,7 @@ arch_test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) if (READ_ONCE(*p) & mask) return 1; - old = arch_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); return !!(old & mask); } @@ -41,7 +41,7 @@ static __always_inline void arch_clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); - arch_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p); + raw_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p); } /** @@ -63,7 +63,7 @@ arch___clear_bit_unlock(unsigned int nr, volatile unsigned long *p) p += BIT_WORD(nr); old = READ_ONCE(*p); old &= ~BIT_MASK(nr); - arch_atomic_long_set_release((atomic_long_t *)p, old); + raw_atomic_long_set_release((atomic_long_t *)p, 
old); } /** @@ -83,7 +83,7 @@ static inline bool arch_clear_bit_unlock_is_negative_byte(unsigned int nr, unsigned long mask = BIT_MASK(nr); p += BIT_WORD(nr); - old = arch_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p); + old = raw_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p); return !!(old & BIT(7)); } #define arch_clear_bit_unlock_is_negative_byte arch_clear_bit_unlock_is_negative_byte diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index d3cbb6c16babf..6e76b9dba00e7 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -119,7 +119,7 @@ extern void ct_idle_exit(void); */ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void) { - return !(arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX); + return !(raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX); } /* @@ -128,7 +128,7 @@ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void) */ static __always_inline unsigned long ct_state_inc(int incby) { - return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state)); + return raw_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state)); } static __always_inline bool warn_rcu_enter(void) diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index fdd537ea513ff..bbff5f7f88030 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -51,7 +51,7 @@ DECLARE_PER_CPU(struct context_tracking, context_tracking); #ifdef CONFIG_CONTEXT_TRACKING_USER static __always_inline int __ct_state(void) { - return arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK; + return raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK; } #endif diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index ca736b05ec7b0..0d2e2a38b92d0 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h @@ -1071,7 +1071,7 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu) */ static __always_inline unsigned int num_online_cpus(void) { - return arch_atomic_read(&__num_online_cpus); + return raw_atomic_read(&__num_online_cpus); } #define num_possible_cpus() cpumask_weight(cpu_possible_mask) #define num_present_cpus() cpumask_weight(cpu_present_mask) diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h index 4e968ebadce60..f0a949b7c9733 100644 --- a/include/linux/jump_label.h +++ b/include/linux/jump_label.h @@ -257,7 +257,7 @@ extern enum jump_label_type jump_label_init_type(struct jump_entry *entry); static __always_inline int static_key_count(struct static_key *key) { - return arch_atomic_read(&key->enabled); + return raw_atomic_read(&key->enabled); } static __always_inline void jump_label_init(void) diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index a09f1c19336ae..6ef0b35fc28c5 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -510,7 +510,7 @@ void noinstr __ct_user_enter(enum ctx_state state) * In this we case we don't care about any concurrency/ordering. 
*/ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) - arch_atomic_set(&ct->state, state); + raw_atomic_set(&ct->state, state); } else { /* * Even if context tracking is disabled on this CPU, because it's outside @@ -527,7 +527,7 @@ void noinstr __ct_user_enter(enum ctx_state state) */ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) { /* Tracking for vtime only, no concurrent RCU EQS accounting */ - arch_atomic_set(&ct->state, state); + raw_atomic_set(&ct->state, state); } else { /* * Tracking for vtime and RCU EQS. Make sure we don't race @@ -535,7 +535,7 @@ void noinstr __ct_user_enter(enum ctx_state state) * RCU only requires RCU_DYNTICKS_IDX increments to be fully * ordered. */ - arch_atomic_add(state, &ct->state); + raw_atomic_add(state, &ct->state); } } } @@ -630,12 +630,12 @@ void noinstr __ct_user_exit(enum ctx_state state) * In this we case we don't care about any concurrency/ordering. */ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) - arch_atomic_set(&ct->state, CONTEXT_KERNEL); + raw_atomic_set(&ct->state, CONTEXT_KERNEL); } else { if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) { /* Tracking for vtime only, no concurrent RCU EQS accounting */ - arch_atomic_set(&ct->state, CONTEXT_KERNEL); + raw_atomic_set(&ct->state, CONTEXT_KERNEL); } else { /* * Tracking for vtime and RCU EQS. Make sure we don't race @@ -643,7 +643,7 @@ void noinstr __ct_user_exit(enum ctx_state state) * RCU only requires RCU_DYNTICKS_IDX increments to be fully * ordered. */ - arch_atomic_sub(state, &ct->state); + raw_atomic_sub(state, &ct->state); } } } diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c index b5cc2b53464de..71443cff31f0d 100644 --- a/kernel/sched/clock.c +++ b/kernel/sched/clock.c @@ -287,7 +287,7 @@ static __always_inline u64 sched_clock_local(struct sched_clock_data *scd) clock = wrap_max(clock, min_clock); clock = wrap_min(clock, max_clock); - if (!arch_try_cmpxchg64(&scd->clock, &old_clock, clock)) + if (!raw_try_cmpxchg64(&scd->clock, &old_clock, clock)) goto again; return clock;
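The pvclock hunk above is worth pausing on as a usage note: it is a textbook try_cmpxchg() retry loop, and it survives the arch_ to raw_ rename unchanged apart from the prefix. A self-contained user-space model of the pattern, with GCC-builtin stand-ins for the raw_atomic64_*() ops (not the kernel's generated code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { int64_t counter; } atomic64_t;

static inline int64_t raw_atomic64_read(const atomic64_t *v)
{
	return __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
}

/* On failure, *old is updated to the current value and false is returned. */
static inline bool raw_atomic64_try_cmpxchg(atomic64_t *v, int64_t *old, int64_t new)
{
	return __atomic_compare_exchange_n(&v->counter, old, new, false,
					   __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

static atomic64_t last_value;

/* Mirrors __pvclock_clocksource_read(): never report a value that goes backwards. */
static int64_t clamp_monotonic(int64_t ret)
{
	int64_t last = raw_atomic64_read(&last_value);

	do {
		if (ret <= last)
			return last;
	} while (!raw_atomic64_try_cmpxchg(&last_value, &last, ret));

	return ret;
}

int main(void)
{
	printf("%lld\n", (long long)clamp_monotonic(100)); /* 100 */
	printf("%lld\n", (long long)clamp_monotonic(50));  /* 100: clamped */
	return 0;
}

Because try_cmpxchg() refreshes the expected value on failure, the retry loop needs no explicit re-read; the raw_ spelling keeps those semantics intact.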
From patchwork Mon Jun 5 07:01:16 2023 X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 103116 From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org, peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH v2 19/27] locking/atomic: scripts: build raw_atomic_long*() directly Date: Mon, 5 Jun 2023 08:01:16 +0100 Message-Id: <20230605070124.3741859-20-mark.rutland@arm.com> In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com> References: <20230605070124.3741859-1-mark.rutland@arm.com> Now that arch_atomic*() usage is limited to the atomic headers, we no longer have any users of arch_atomic_long_*(), and can generate raw_atomic_long_*() directly. Generate the raw_atomic_long_*() ops directly. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Reviewed-by: Kees Cook Cc: Boqun Feng Cc: Paul E.
McKenney Cc: Peter Zijlstra Cc: Will Deacon --- include/linux/atomic.h | 2 +- include/linux/atomic/atomic-long.h | 682 ++++++++++++++--------------- include/linux/atomic/atomic-raw.h | 512 +--------------------- scripts/atomic/gen-atomic-long.sh | 4 +- scripts/atomic/gen-atomic-raw.sh | 4 - 5 files changed, 345 insertions(+), 859 deletions(-) diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 127f5dc63a7df..296cfae0389fe 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -78,8 +78,8 @@ }) #include -#include #include +#include #include #endif /* _LINUX_ATOMIC_H */ diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index 2fc51ba66bebd..92dc82ce1ce6d 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -24,1027 +24,1027 @@ typedef atomic_t atomic_long_t; #ifdef CONFIG_64BIT static __always_inline long -arch_atomic_long_read(const atomic_long_t *v) +raw_atomic_long_read(const atomic_long_t *v) { - return arch_atomic64_read(v); + return raw_atomic64_read(v); } static __always_inline long -arch_atomic_long_read_acquire(const atomic_long_t *v) +raw_atomic_long_read_acquire(const atomic_long_t *v) { - return arch_atomic64_read_acquire(v); + return raw_atomic64_read_acquire(v); } static __always_inline void -arch_atomic_long_set(atomic_long_t *v, long i) +raw_atomic_long_set(atomic_long_t *v, long i) { - arch_atomic64_set(v, i); + raw_atomic64_set(v, i); } static __always_inline void -arch_atomic_long_set_release(atomic_long_t *v, long i) +raw_atomic_long_set_release(atomic_long_t *v, long i) { - arch_atomic64_set_release(v, i); + raw_atomic64_set_release(v, i); } static __always_inline void -arch_atomic_long_add(long i, atomic_long_t *v) +raw_atomic_long_add(long i, atomic_long_t *v) { - arch_atomic64_add(i, v); + raw_atomic64_add(i, v); } static __always_inline long -arch_atomic_long_add_return(long i, atomic_long_t *v) +raw_atomic_long_add_return(long i, atomic_long_t *v) { - return arch_atomic64_add_return(i, v); + return raw_atomic64_add_return(i, v); } static __always_inline long -arch_atomic_long_add_return_acquire(long i, atomic_long_t *v) +raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { - return arch_atomic64_add_return_acquire(i, v); + return raw_atomic64_add_return_acquire(i, v); } static __always_inline long -arch_atomic_long_add_return_release(long i, atomic_long_t *v) +raw_atomic_long_add_return_release(long i, atomic_long_t *v) { - return arch_atomic64_add_return_release(i, v); + return raw_atomic64_add_return_release(i, v); } static __always_inline long -arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v) +raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_add_return_relaxed(i, v); + return raw_atomic64_add_return_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_add(long i, atomic_long_t *v) +raw_atomic_long_fetch_add(long i, atomic_long_t *v) { - return arch_atomic64_fetch_add(i, v); + return raw_atomic64_fetch_add(i, v); } static __always_inline long -arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_add_acquire(i, v); + return raw_atomic64_fetch_add_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_add_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_add_release(i, v); + return 
raw_atomic64_fetch_add_release(i, v); } static __always_inline long -arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_add_relaxed(i, v); + return raw_atomic64_fetch_add_relaxed(i, v); } static __always_inline void -arch_atomic_long_sub(long i, atomic_long_t *v) +raw_atomic_long_sub(long i, atomic_long_t *v) { - arch_atomic64_sub(i, v); + raw_atomic64_sub(i, v); } static __always_inline long -arch_atomic_long_sub_return(long i, atomic_long_t *v) +raw_atomic_long_sub_return(long i, atomic_long_t *v) { - return arch_atomic64_sub_return(i, v); + return raw_atomic64_sub_return(i, v); } static __always_inline long -arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v) +raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { - return arch_atomic64_sub_return_acquire(i, v); + return raw_atomic64_sub_return_acquire(i, v); } static __always_inline long -arch_atomic_long_sub_return_release(long i, atomic_long_t *v) +raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { - return arch_atomic64_sub_return_release(i, v); + return raw_atomic64_sub_return_release(i, v); } static __always_inline long -arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) +raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_sub_return_relaxed(i, v); + return raw_atomic64_sub_return_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_sub(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { - return arch_atomic64_fetch_sub(i, v); + return raw_atomic64_fetch_sub(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_sub_acquire(i, v); + return raw_atomic64_fetch_sub_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_sub_release(i, v); + return raw_atomic64_fetch_sub_release(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_sub_relaxed(i, v); + return raw_atomic64_fetch_sub_relaxed(i, v); } static __always_inline void -arch_atomic_long_inc(atomic_long_t *v) +raw_atomic_long_inc(atomic_long_t *v) { - arch_atomic64_inc(v); + raw_atomic64_inc(v); } static __always_inline long -arch_atomic_long_inc_return(atomic_long_t *v) +raw_atomic_long_inc_return(atomic_long_t *v) { - return arch_atomic64_inc_return(v); + return raw_atomic64_inc_return(v); } static __always_inline long -arch_atomic_long_inc_return_acquire(atomic_long_t *v) +raw_atomic_long_inc_return_acquire(atomic_long_t *v) { - return arch_atomic64_inc_return_acquire(v); + return raw_atomic64_inc_return_acquire(v); } static __always_inline long -arch_atomic_long_inc_return_release(atomic_long_t *v) +raw_atomic_long_inc_return_release(atomic_long_t *v) { - return arch_atomic64_inc_return_release(v); + return raw_atomic64_inc_return_release(v); } static __always_inline long -arch_atomic_long_inc_return_relaxed(atomic_long_t *v) +raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { - return arch_atomic64_inc_return_relaxed(v); + return raw_atomic64_inc_return_relaxed(v); } static __always_inline long -arch_atomic_long_fetch_inc(atomic_long_t *v) 
+raw_atomic_long_fetch_inc(atomic_long_t *v) { - return arch_atomic64_fetch_inc(v); + return raw_atomic64_fetch_inc(v); } static __always_inline long -arch_atomic_long_fetch_inc_acquire(atomic_long_t *v) +raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { - return arch_atomic64_fetch_inc_acquire(v); + return raw_atomic64_fetch_inc_acquire(v); } static __always_inline long -arch_atomic_long_fetch_inc_release(atomic_long_t *v) +raw_atomic_long_fetch_inc_release(atomic_long_t *v) { - return arch_atomic64_fetch_inc_release(v); + return raw_atomic64_fetch_inc_release(v); } static __always_inline long -arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v) +raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { - return arch_atomic64_fetch_inc_relaxed(v); + return raw_atomic64_fetch_inc_relaxed(v); } static __always_inline void -arch_atomic_long_dec(atomic_long_t *v) +raw_atomic_long_dec(atomic_long_t *v) { - arch_atomic64_dec(v); + raw_atomic64_dec(v); } static __always_inline long -arch_atomic_long_dec_return(atomic_long_t *v) +raw_atomic_long_dec_return(atomic_long_t *v) { - return arch_atomic64_dec_return(v); + return raw_atomic64_dec_return(v); } static __always_inline long -arch_atomic_long_dec_return_acquire(atomic_long_t *v) +raw_atomic_long_dec_return_acquire(atomic_long_t *v) { - return arch_atomic64_dec_return_acquire(v); + return raw_atomic64_dec_return_acquire(v); } static __always_inline long -arch_atomic_long_dec_return_release(atomic_long_t *v) +raw_atomic_long_dec_return_release(atomic_long_t *v) { - return arch_atomic64_dec_return_release(v); + return raw_atomic64_dec_return_release(v); } static __always_inline long -arch_atomic_long_dec_return_relaxed(atomic_long_t *v) +raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { - return arch_atomic64_dec_return_relaxed(v); + return raw_atomic64_dec_return_relaxed(v); } static __always_inline long -arch_atomic_long_fetch_dec(atomic_long_t *v) +raw_atomic_long_fetch_dec(atomic_long_t *v) { - return arch_atomic64_fetch_dec(v); + return raw_atomic64_fetch_dec(v); } static __always_inline long -arch_atomic_long_fetch_dec_acquire(atomic_long_t *v) +raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { - return arch_atomic64_fetch_dec_acquire(v); + return raw_atomic64_fetch_dec_acquire(v); } static __always_inline long -arch_atomic_long_fetch_dec_release(atomic_long_t *v) +raw_atomic_long_fetch_dec_release(atomic_long_t *v) { - return arch_atomic64_fetch_dec_release(v); + return raw_atomic64_fetch_dec_release(v); } static __always_inline long -arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v) +raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { - return arch_atomic64_fetch_dec_relaxed(v); + return raw_atomic64_fetch_dec_relaxed(v); } static __always_inline void -arch_atomic_long_and(long i, atomic_long_t *v) +raw_atomic_long_and(long i, atomic_long_t *v) { - arch_atomic64_and(i, v); + raw_atomic64_and(i, v); } static __always_inline long -arch_atomic_long_fetch_and(long i, atomic_long_t *v) +raw_atomic_long_fetch_and(long i, atomic_long_t *v) { - return arch_atomic64_fetch_and(i, v); + return raw_atomic64_fetch_and(i, v); } static __always_inline long -arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_and_acquire(i, v); + return raw_atomic64_fetch_and_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_and_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { - return 
arch_atomic64_fetch_and_release(i, v); + return raw_atomic64_fetch_and_release(i, v); } static __always_inline long -arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_and_relaxed(i, v); + return raw_atomic64_fetch_and_relaxed(i, v); } static __always_inline void -arch_atomic_long_andnot(long i, atomic_long_t *v) +raw_atomic_long_andnot(long i, atomic_long_t *v) { - arch_atomic64_andnot(i, v); + raw_atomic64_andnot(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { - return arch_atomic64_fetch_andnot(i, v); + return raw_atomic64_fetch_andnot(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_andnot_acquire(i, v); + return raw_atomic64_fetch_andnot_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_andnot_release(i, v); + return raw_atomic64_fetch_andnot_release(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_andnot_relaxed(i, v); + return raw_atomic64_fetch_andnot_relaxed(i, v); } static __always_inline void -arch_atomic_long_or(long i, atomic_long_t *v) +raw_atomic_long_or(long i, atomic_long_t *v) { - arch_atomic64_or(i, v); + raw_atomic64_or(i, v); } static __always_inline long -arch_atomic_long_fetch_or(long i, atomic_long_t *v) +raw_atomic_long_fetch_or(long i, atomic_long_t *v) { - return arch_atomic64_fetch_or(i, v); + return raw_atomic64_fetch_or(i, v); } static __always_inline long -arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_or_acquire(i, v); + return raw_atomic64_fetch_or_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_or_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_or_release(i, v); + return raw_atomic64_fetch_or_release(i, v); } static __always_inline long -arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_or_relaxed(i, v); + return raw_atomic64_fetch_or_relaxed(i, v); } static __always_inline void -arch_atomic_long_xor(long i, atomic_long_t *v) +raw_atomic_long_xor(long i, atomic_long_t *v) { - arch_atomic64_xor(i, v); + raw_atomic64_xor(i, v); } static __always_inline long -arch_atomic_long_fetch_xor(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { - return arch_atomic64_fetch_xor(i, v); + return raw_atomic64_fetch_xor(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { - return arch_atomic64_fetch_xor_acquire(i, v); + return raw_atomic64_fetch_xor_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { - return arch_atomic64_fetch_xor_release(i, v); + return 
raw_atomic64_fetch_xor_release(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_fetch_xor_relaxed(i, v); + return raw_atomic64_fetch_xor_relaxed(i, v); } static __always_inline long -arch_atomic_long_xchg(atomic_long_t *v, long i) +raw_atomic_long_xchg(atomic_long_t *v, long i) { - return arch_atomic64_xchg(v, i); + return raw_atomic64_xchg(v, i); } static __always_inline long -arch_atomic_long_xchg_acquire(atomic_long_t *v, long i) +raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) { - return arch_atomic64_xchg_acquire(v, i); + return raw_atomic64_xchg_acquire(v, i); } static __always_inline long -arch_atomic_long_xchg_release(atomic_long_t *v, long i) +raw_atomic_long_xchg_release(atomic_long_t *v, long i) { - return arch_atomic64_xchg_release(v, i); + return raw_atomic64_xchg_release(v, i); } static __always_inline long -arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i) +raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) { - return arch_atomic64_xchg_relaxed(v, i); + return raw_atomic64_xchg_relaxed(v, i); } static __always_inline long -arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { - return arch_atomic64_cmpxchg(v, old, new); + return raw_atomic64_cmpxchg(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { - return arch_atomic64_cmpxchg_acquire(v, old, new); + return raw_atomic64_cmpxchg_acquire(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { - return arch_atomic64_cmpxchg_release(v, old, new); + return raw_atomic64_cmpxchg_release(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { - return arch_atomic64_cmpxchg_relaxed(v, old, new); + return raw_atomic64_cmpxchg_relaxed(v, old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { - return arch_atomic64_try_cmpxchg(v, (s64 *)old, new); + return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { - return arch_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); + return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { - return arch_atomic64_try_cmpxchg_release(v, (s64 *)old, new); + return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { - return arch_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); + return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); } static __always_inline bool -arch_atomic_long_sub_and_test(long i, 
atomic_long_t *v) +raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { - return arch_atomic64_sub_and_test(i, v); + return raw_atomic64_sub_and_test(i, v); } static __always_inline bool -arch_atomic_long_dec_and_test(atomic_long_t *v) +raw_atomic_long_dec_and_test(atomic_long_t *v) { - return arch_atomic64_dec_and_test(v); + return raw_atomic64_dec_and_test(v); } static __always_inline bool -arch_atomic_long_inc_and_test(atomic_long_t *v) +raw_atomic_long_inc_and_test(atomic_long_t *v) { - return arch_atomic64_inc_and_test(v); + return raw_atomic64_inc_and_test(v); } static __always_inline bool -arch_atomic_long_add_negative(long i, atomic_long_t *v) +raw_atomic_long_add_negative(long i, atomic_long_t *v) { - return arch_atomic64_add_negative(i, v); + return raw_atomic64_add_negative(i, v); } static __always_inline bool -arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v) +raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { - return arch_atomic64_add_negative_acquire(i, v); + return raw_atomic64_add_negative_acquire(i, v); } static __always_inline bool -arch_atomic_long_add_negative_release(long i, atomic_long_t *v) +raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { - return arch_atomic64_add_negative_release(i, v); + return raw_atomic64_add_negative_release(i, v); } static __always_inline bool -arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { - return arch_atomic64_add_negative_relaxed(i, v); + return raw_atomic64_add_negative_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) +raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { - return arch_atomic64_fetch_add_unless(v, a, u); + return raw_atomic64_fetch_add_unless(v, a, u); } static __always_inline bool -arch_atomic_long_add_unless(atomic_long_t *v, long a, long u) +raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { - return arch_atomic64_add_unless(v, a, u); + return raw_atomic64_add_unless(v, a, u); } static __always_inline bool -arch_atomic_long_inc_not_zero(atomic_long_t *v) +raw_atomic_long_inc_not_zero(atomic_long_t *v) { - return arch_atomic64_inc_not_zero(v); + return raw_atomic64_inc_not_zero(v); } static __always_inline bool -arch_atomic_long_inc_unless_negative(atomic_long_t *v) +raw_atomic_long_inc_unless_negative(atomic_long_t *v) { - return arch_atomic64_inc_unless_negative(v); + return raw_atomic64_inc_unless_negative(v); } static __always_inline bool -arch_atomic_long_dec_unless_positive(atomic_long_t *v) +raw_atomic_long_dec_unless_positive(atomic_long_t *v) { - return arch_atomic64_dec_unless_positive(v); + return raw_atomic64_dec_unless_positive(v); } static __always_inline long -arch_atomic_long_dec_if_positive(atomic_long_t *v) +raw_atomic_long_dec_if_positive(atomic_long_t *v) { - return arch_atomic64_dec_if_positive(v); + return raw_atomic64_dec_if_positive(v); } #else /* CONFIG_64BIT */ static __always_inline long -arch_atomic_long_read(const atomic_long_t *v) +raw_atomic_long_read(const atomic_long_t *v) { - return arch_atomic_read(v); + return raw_atomic_read(v); } static __always_inline long -arch_atomic_long_read_acquire(const atomic_long_t *v) +raw_atomic_long_read_acquire(const atomic_long_t *v) { - return arch_atomic_read_acquire(v); + return raw_atomic_read_acquire(v); } static __always_inline void -arch_atomic_long_set(atomic_long_t *v, long i) +raw_atomic_long_set(atomic_long_t *v, long i) 
{ - arch_atomic_set(v, i); + raw_atomic_set(v, i); } static __always_inline void -arch_atomic_long_set_release(atomic_long_t *v, long i) +raw_atomic_long_set_release(atomic_long_t *v, long i) { - arch_atomic_set_release(v, i); + raw_atomic_set_release(v, i); } static __always_inline void -arch_atomic_long_add(long i, atomic_long_t *v) +raw_atomic_long_add(long i, atomic_long_t *v) { - arch_atomic_add(i, v); + raw_atomic_add(i, v); } static __always_inline long -arch_atomic_long_add_return(long i, atomic_long_t *v) +raw_atomic_long_add_return(long i, atomic_long_t *v) { - return arch_atomic_add_return(i, v); + return raw_atomic_add_return(i, v); } static __always_inline long -arch_atomic_long_add_return_acquire(long i, atomic_long_t *v) +raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { - return arch_atomic_add_return_acquire(i, v); + return raw_atomic_add_return_acquire(i, v); } static __always_inline long -arch_atomic_long_add_return_release(long i, atomic_long_t *v) +raw_atomic_long_add_return_release(long i, atomic_long_t *v) { - return arch_atomic_add_return_release(i, v); + return raw_atomic_add_return_release(i, v); } static __always_inline long -arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v) +raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { - return arch_atomic_add_return_relaxed(i, v); + return raw_atomic_add_return_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_add(long i, atomic_long_t *v) +raw_atomic_long_fetch_add(long i, atomic_long_t *v) { - return arch_atomic_fetch_add(i, v); + return raw_atomic_fetch_add(i, v); } static __always_inline long -arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_add_acquire(i, v); + return raw_atomic_fetch_add_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_add_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_add_release(i, v); + return raw_atomic_fetch_add_release(i, v); } static __always_inline long -arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_add_relaxed(i, v); + return raw_atomic_fetch_add_relaxed(i, v); } static __always_inline void -arch_atomic_long_sub(long i, atomic_long_t *v) +raw_atomic_long_sub(long i, atomic_long_t *v) { - arch_atomic_sub(i, v); + raw_atomic_sub(i, v); } static __always_inline long -arch_atomic_long_sub_return(long i, atomic_long_t *v) +raw_atomic_long_sub_return(long i, atomic_long_t *v) { - return arch_atomic_sub_return(i, v); + return raw_atomic_sub_return(i, v); } static __always_inline long -arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v) +raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { - return arch_atomic_sub_return_acquire(i, v); + return raw_atomic_sub_return_acquire(i, v); } static __always_inline long -arch_atomic_long_sub_return_release(long i, atomic_long_t *v) +raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { - return arch_atomic_sub_return_release(i, v); + return raw_atomic_sub_return_release(i, v); } static __always_inline long -arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) +raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { - return arch_atomic_sub_return_relaxed(i, v); + return raw_atomic_sub_return_relaxed(i, v); } static __always_inline long 
-arch_atomic_long_fetch_sub(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { - return arch_atomic_fetch_sub(i, v); + return raw_atomic_fetch_sub(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_sub_acquire(i, v); + return raw_atomic_fetch_sub_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_sub_release(i, v); + return raw_atomic_fetch_sub_release(i, v); } static __always_inline long -arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_sub_relaxed(i, v); + return raw_atomic_fetch_sub_relaxed(i, v); } static __always_inline void -arch_atomic_long_inc(atomic_long_t *v) +raw_atomic_long_inc(atomic_long_t *v) { - arch_atomic_inc(v); + raw_atomic_inc(v); } static __always_inline long -arch_atomic_long_inc_return(atomic_long_t *v) +raw_atomic_long_inc_return(atomic_long_t *v) { - return arch_atomic_inc_return(v); + return raw_atomic_inc_return(v); } static __always_inline long -arch_atomic_long_inc_return_acquire(atomic_long_t *v) +raw_atomic_long_inc_return_acquire(atomic_long_t *v) { - return arch_atomic_inc_return_acquire(v); + return raw_atomic_inc_return_acquire(v); } static __always_inline long -arch_atomic_long_inc_return_release(atomic_long_t *v) +raw_atomic_long_inc_return_release(atomic_long_t *v) { - return arch_atomic_inc_return_release(v); + return raw_atomic_inc_return_release(v); } static __always_inline long -arch_atomic_long_inc_return_relaxed(atomic_long_t *v) +raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { - return arch_atomic_inc_return_relaxed(v); + return raw_atomic_inc_return_relaxed(v); } static __always_inline long -arch_atomic_long_fetch_inc(atomic_long_t *v) +raw_atomic_long_fetch_inc(atomic_long_t *v) { - return arch_atomic_fetch_inc(v); + return raw_atomic_fetch_inc(v); } static __always_inline long -arch_atomic_long_fetch_inc_acquire(atomic_long_t *v) +raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { - return arch_atomic_fetch_inc_acquire(v); + return raw_atomic_fetch_inc_acquire(v); } static __always_inline long -arch_atomic_long_fetch_inc_release(atomic_long_t *v) +raw_atomic_long_fetch_inc_release(atomic_long_t *v) { - return arch_atomic_fetch_inc_release(v); + return raw_atomic_fetch_inc_release(v); } static __always_inline long -arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v) +raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { - return arch_atomic_fetch_inc_relaxed(v); + return raw_atomic_fetch_inc_relaxed(v); } static __always_inline void -arch_atomic_long_dec(atomic_long_t *v) +raw_atomic_long_dec(atomic_long_t *v) { - arch_atomic_dec(v); + raw_atomic_dec(v); } static __always_inline long -arch_atomic_long_dec_return(atomic_long_t *v) +raw_atomic_long_dec_return(atomic_long_t *v) { - return arch_atomic_dec_return(v); + return raw_atomic_dec_return(v); } static __always_inline long -arch_atomic_long_dec_return_acquire(atomic_long_t *v) +raw_atomic_long_dec_return_acquire(atomic_long_t *v) { - return arch_atomic_dec_return_acquire(v); + return raw_atomic_dec_return_acquire(v); } static __always_inline long -arch_atomic_long_dec_return_release(atomic_long_t *v) +raw_atomic_long_dec_return_release(atomic_long_t *v) { - return 
arch_atomic_dec_return_release(v); + return raw_atomic_dec_return_release(v); } static __always_inline long -arch_atomic_long_dec_return_relaxed(atomic_long_t *v) +raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { - return arch_atomic_dec_return_relaxed(v); + return raw_atomic_dec_return_relaxed(v); } static __always_inline long -arch_atomic_long_fetch_dec(atomic_long_t *v) +raw_atomic_long_fetch_dec(atomic_long_t *v) { - return arch_atomic_fetch_dec(v); + return raw_atomic_fetch_dec(v); } static __always_inline long -arch_atomic_long_fetch_dec_acquire(atomic_long_t *v) +raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { - return arch_atomic_fetch_dec_acquire(v); + return raw_atomic_fetch_dec_acquire(v); } static __always_inline long -arch_atomic_long_fetch_dec_release(atomic_long_t *v) +raw_atomic_long_fetch_dec_release(atomic_long_t *v) { - return arch_atomic_fetch_dec_release(v); + return raw_atomic_fetch_dec_release(v); } static __always_inline long -arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v) +raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { - return arch_atomic_fetch_dec_relaxed(v); + return raw_atomic_fetch_dec_relaxed(v); } static __always_inline void -arch_atomic_long_and(long i, atomic_long_t *v) +raw_atomic_long_and(long i, atomic_long_t *v) { - arch_atomic_and(i, v); + raw_atomic_and(i, v); } static __always_inline long -arch_atomic_long_fetch_and(long i, atomic_long_t *v) +raw_atomic_long_fetch_and(long i, atomic_long_t *v) { - return arch_atomic_fetch_and(i, v); + return raw_atomic_fetch_and(i, v); } static __always_inline long -arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_and_acquire(i, v); + return raw_atomic_fetch_and_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_and_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_and_release(i, v); + return raw_atomic_fetch_and_release(i, v); } static __always_inline long -arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_and_relaxed(i, v); + return raw_atomic_fetch_and_relaxed(i, v); } static __always_inline void -arch_atomic_long_andnot(long i, atomic_long_t *v) +raw_atomic_long_andnot(long i, atomic_long_t *v) { - arch_atomic_andnot(i, v); + raw_atomic_andnot(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { - return arch_atomic_fetch_andnot(i, v); + return raw_atomic_fetch_andnot(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_andnot_acquire(i, v); + return raw_atomic_fetch_andnot_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_andnot_release(i, v); + return raw_atomic_fetch_andnot_release(i, v); } static __always_inline long -arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_andnot_relaxed(i, v); + return raw_atomic_fetch_andnot_relaxed(i, v); } static __always_inline void -arch_atomic_long_or(long i, 
atomic_long_t *v) +raw_atomic_long_or(long i, atomic_long_t *v) { - arch_atomic_or(i, v); + raw_atomic_or(i, v); } static __always_inline long -arch_atomic_long_fetch_or(long i, atomic_long_t *v) +raw_atomic_long_fetch_or(long i, atomic_long_t *v) { - return arch_atomic_fetch_or(i, v); + return raw_atomic_fetch_or(i, v); } static __always_inline long -arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_or_acquire(i, v); + return raw_atomic_fetch_or_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_or_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_or_release(i, v); + return raw_atomic_fetch_or_release(i, v); } static __always_inline long -arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_or_relaxed(i, v); + return raw_atomic_fetch_or_relaxed(i, v); } static __always_inline void -arch_atomic_long_xor(long i, atomic_long_t *v) +raw_atomic_long_xor(long i, atomic_long_t *v) { - arch_atomic_xor(i, v); + raw_atomic_xor(i, v); } static __always_inline long -arch_atomic_long_fetch_xor(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { - return arch_atomic_fetch_xor(i, v); + return raw_atomic_fetch_xor(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { - return arch_atomic_fetch_xor_acquire(i, v); + return raw_atomic_fetch_xor_acquire(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { - return arch_atomic_fetch_xor_release(i, v); + return raw_atomic_fetch_xor_release(i, v); } static __always_inline long -arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) +raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { - return arch_atomic_fetch_xor_relaxed(i, v); + return raw_atomic_fetch_xor_relaxed(i, v); } static __always_inline long -arch_atomic_long_xchg(atomic_long_t *v, long i) +raw_atomic_long_xchg(atomic_long_t *v, long i) { - return arch_atomic_xchg(v, i); + return raw_atomic_xchg(v, i); } static __always_inline long -arch_atomic_long_xchg_acquire(atomic_long_t *v, long i) +raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) { - return arch_atomic_xchg_acquire(v, i); + return raw_atomic_xchg_acquire(v, i); } static __always_inline long -arch_atomic_long_xchg_release(atomic_long_t *v, long i) +raw_atomic_long_xchg_release(atomic_long_t *v, long i) { - return arch_atomic_xchg_release(v, i); + return raw_atomic_xchg_release(v, i); } static __always_inline long -arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i) +raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) { - return arch_atomic_xchg_relaxed(v, i); + return raw_atomic_xchg_relaxed(v, i); } static __always_inline long -arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { - return arch_atomic_cmpxchg(v, old, new); + return raw_atomic_cmpxchg(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { - return arch_atomic_cmpxchg_acquire(v, old, new); + return 
raw_atomic_cmpxchg_acquire(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { - return arch_atomic_cmpxchg_release(v, old, new); + return raw_atomic_cmpxchg_release(v, old, new); } static __always_inline long -arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) +raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { - return arch_atomic_cmpxchg_relaxed(v, old, new); + return raw_atomic_cmpxchg_relaxed(v, old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { - return arch_atomic_try_cmpxchg(v, (int *)old, new); + return raw_atomic_try_cmpxchg(v, (int *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { - return arch_atomic_try_cmpxchg_acquire(v, (int *)old, new); + return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { - return arch_atomic_try_cmpxchg_release(v, (int *)old, new); + return raw_atomic_try_cmpxchg_release(v, (int *)old, new); } static __always_inline bool -arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) +raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { - return arch_atomic_try_cmpxchg_relaxed(v, (int *)old, new); + return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new); } static __always_inline bool -arch_atomic_long_sub_and_test(long i, atomic_long_t *v) +raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { - return arch_atomic_sub_and_test(i, v); + return raw_atomic_sub_and_test(i, v); } static __always_inline bool -arch_atomic_long_dec_and_test(atomic_long_t *v) +raw_atomic_long_dec_and_test(atomic_long_t *v) { - return arch_atomic_dec_and_test(v); + return raw_atomic_dec_and_test(v); } static __always_inline bool -arch_atomic_long_inc_and_test(atomic_long_t *v) +raw_atomic_long_inc_and_test(atomic_long_t *v) { - return arch_atomic_inc_and_test(v); + return raw_atomic_inc_and_test(v); } static __always_inline bool -arch_atomic_long_add_negative(long i, atomic_long_t *v) +raw_atomic_long_add_negative(long i, atomic_long_t *v) { - return arch_atomic_add_negative(i, v); + return raw_atomic_add_negative(i, v); } static __always_inline bool -arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v) +raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { - return arch_atomic_add_negative_acquire(i, v); + return raw_atomic_add_negative_acquire(i, v); } static __always_inline bool -arch_atomic_long_add_negative_release(long i, atomic_long_t *v) +raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { - return arch_atomic_add_negative_release(i, v); + return raw_atomic_add_negative_release(i, v); } static __always_inline bool -arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { - return arch_atomic_add_negative_relaxed(i, v); + return raw_atomic_add_negative_relaxed(i, v); } static __always_inline long -arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) 
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { - return arch_atomic_fetch_add_unless(v, a, u); + return raw_atomic_fetch_add_unless(v, a, u); } static __always_inline bool -arch_atomic_long_add_unless(atomic_long_t *v, long a, long u) +raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { - return arch_atomic_add_unless(v, a, u); + return raw_atomic_add_unless(v, a, u); } static __always_inline bool -arch_atomic_long_inc_not_zero(atomic_long_t *v) +raw_atomic_long_inc_not_zero(atomic_long_t *v) { - return arch_atomic_inc_not_zero(v); + return raw_atomic_inc_not_zero(v); } static __always_inline bool -arch_atomic_long_inc_unless_negative(atomic_long_t *v) +raw_atomic_long_inc_unless_negative(atomic_long_t *v) { - return arch_atomic_inc_unless_negative(v); + return raw_atomic_inc_unless_negative(v); } static __always_inline bool -arch_atomic_long_dec_unless_positive(atomic_long_t *v) +raw_atomic_long_dec_unless_positive(atomic_long_t *v) { - return arch_atomic_dec_unless_positive(v); + return raw_atomic_dec_unless_positive(v); } static __always_inline long -arch_atomic_long_dec_if_positive(atomic_long_t *v) +raw_atomic_long_dec_if_positive(atomic_long_t *v) { - return arch_atomic_dec_if_positive(v); + return raw_atomic_dec_if_positive(v); } #endif /* CONFIG_64BIT */ #endif /* _LINUX_ATOMIC_LONG_H */ -// a194c07d7d2f4b0e178d3c118c919775d5d65f50 +// 108784846d3bbbb201b8dabe621c5dc30b216206 diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h index 83ff0269657e7..8b2fc04cf8c54 100644 --- a/include/linux/atomic/atomic-raw.h +++ b/include/linux/atomic/atomic-raw.h @@ -1026,516 +1026,6 @@ raw_atomic64_dec_if_positive(atomic64_t *v) return arch_atomic64_dec_if_positive(v); } -static __always_inline long -raw_atomic_long_read(const atomic_long_t *v) -{ - return arch_atomic_long_read(v); -} - -static __always_inline long -raw_atomic_long_read_acquire(const atomic_long_t *v) -{ - return arch_atomic_long_read_acquire(v); -} - -static __always_inline void -raw_atomic_long_set(atomic_long_t *v, long i) -{ - arch_atomic_long_set(v, i); -} - -static __always_inline void -raw_atomic_long_set_release(atomic_long_t *v, long i) -{ - arch_atomic_long_set_release(v, i); -} - -static __always_inline void -raw_atomic_long_add(long i, atomic_long_t *v) -{ - arch_atomic_long_add(i, v); -} - -static __always_inline long -raw_atomic_long_add_return(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_return(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_return_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_return_release(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_return_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_add(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_add_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_add_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_add_relaxed(i, v); -} - -static 
__always_inline void -raw_atomic_long_sub(long i, atomic_long_t *v) -{ - arch_atomic_long_sub(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_return(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_return_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_return_release(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_return_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_sub(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_sub_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_sub_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_sub_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_inc(atomic_long_t *v) -{ - arch_atomic_long_inc(v); -} - -static __always_inline long -raw_atomic_long_inc_return(atomic_long_t *v) -{ - return arch_atomic_long_inc_return(v); -} - -static __always_inline long -raw_atomic_long_inc_return_acquire(atomic_long_t *v) -{ - return arch_atomic_long_inc_return_acquire(v); -} - -static __always_inline long -raw_atomic_long_inc_return_release(atomic_long_t *v) -{ - return arch_atomic_long_inc_return_release(v); -} - -static __always_inline long -raw_atomic_long_inc_return_relaxed(atomic_long_t *v) -{ - return arch_atomic_long_inc_return_relaxed(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc(atomic_long_t *v) -{ - return arch_atomic_long_fetch_inc(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) -{ - return arch_atomic_long_fetch_inc_acquire(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_release(atomic_long_t *v) -{ - return arch_atomic_long_fetch_inc_release(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) -{ - return arch_atomic_long_fetch_inc_relaxed(v); -} - -static __always_inline void -raw_atomic_long_dec(atomic_long_t *v) -{ - arch_atomic_long_dec(v); -} - -static __always_inline long -raw_atomic_long_dec_return(atomic_long_t *v) -{ - return arch_atomic_long_dec_return(v); -} - -static __always_inline long -raw_atomic_long_dec_return_acquire(atomic_long_t *v) -{ - return arch_atomic_long_dec_return_acquire(v); -} - -static __always_inline long -raw_atomic_long_dec_return_release(atomic_long_t *v) -{ - return arch_atomic_long_dec_return_release(v); -} - -static __always_inline long -raw_atomic_long_dec_return_relaxed(atomic_long_t *v) -{ - return arch_atomic_long_dec_return_relaxed(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec(atomic_long_t *v) -{ - return arch_atomic_long_fetch_dec(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) -{ - return arch_atomic_long_fetch_dec_acquire(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_release(atomic_long_t *v) -{ - return arch_atomic_long_fetch_dec_release(v); -} - -static __always_inline long 
-raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) -{ - return arch_atomic_long_fetch_dec_relaxed(v); -} - -static __always_inline void -raw_atomic_long_and(long i, atomic_long_t *v) -{ - arch_atomic_long_and(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_and(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_and_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_and_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_and_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_andnot(long i, atomic_long_t *v) -{ - arch_atomic_long_andnot(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_andnot(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_andnot_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_andnot_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_andnot_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_or(long i, atomic_long_t *v) -{ - arch_atomic_long_or(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_or(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_or_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_or_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_or_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_xor(long i, atomic_long_t *v) -{ - arch_atomic_long_xor(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_xor(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_xor_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_xor_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_fetch_xor_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_xchg(atomic_long_t *v, long i) -{ - return arch_atomic_long_xchg(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) -{ - return arch_atomic_long_xchg_acquire(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_release(atomic_long_t *v, long i) -{ - return arch_atomic_long_xchg_release(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) -{ - return arch_atomic_long_xchg_relaxed(v, i); -} - -static __always_inline long -raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) -{ - return 
arch_atomic_long_cmpxchg(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) -{ - return arch_atomic_long_cmpxchg_acquire(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) -{ - return arch_atomic_long_cmpxchg_release(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) -{ - return arch_atomic_long_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) -{ - return arch_atomic_long_try_cmpxchg(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) -{ - return arch_atomic_long_try_cmpxchg_acquire(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) -{ - return arch_atomic_long_try_cmpxchg_release(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) -{ - return arch_atomic_long_try_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_long_sub_and_test(long i, atomic_long_t *v) -{ - return arch_atomic_long_sub_and_test(i, v); -} - -static __always_inline bool -raw_atomic_long_dec_and_test(atomic_long_t *v) -{ - return arch_atomic_long_dec_and_test(v); -} - -static __always_inline bool -raw_atomic_long_inc_and_test(atomic_long_t *v) -{ - return arch_atomic_long_inc_and_test(v); -} - -static __always_inline bool -raw_atomic_long_add_negative(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_negative(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_negative_acquire(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_release(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_negative_release(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) -{ - return arch_atomic_long_add_negative_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) -{ - return arch_atomic_long_fetch_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) -{ - return arch_atomic_long_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_long_inc_not_zero(atomic_long_t *v) -{ - return arch_atomic_long_inc_not_zero(v); -} - -static __always_inline bool -raw_atomic_long_inc_unless_negative(atomic_long_t *v) -{ - return arch_atomic_long_inc_unless_negative(v); -} - -static __always_inline bool -raw_atomic_long_dec_unless_positive(atomic_long_t *v) -{ - return arch_atomic_long_dec_unless_positive(v); -} - -static __always_inline long -raw_atomic_long_dec_if_positive(atomic_long_t *v) -{ - return arch_atomic_long_dec_if_positive(v); -} - #define raw_xchg(...) 
\ arch_xchg(__VA_ARGS__) @@ -1642,4 +1132,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v) arch_try_cmpxchg128_local(__VA_ARGS__) #endif /* _LINUX_ATOMIC_RAW_H */ -// 01d54200571b3857755a07c10074a4fd58cef6b1 +// b23ed4424e85200e200ded094522e1d743b3a5b1 diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh index eda89cea6e1d1..75e91d6da30d3 100755 --- a/scripts/atomic/gen-atomic-long.sh +++ b/scripts/atomic/gen-atomic-long.sh @@ -47,9 +47,9 @@ gen_proto_order_variant() cat <

X-Patchwork-Id: 103128
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org, peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH v2 20/27] locking/atomic: scripts: restructure fallback ifdeffery
Date: Mon, 5 Jun 2023 08:01:17 +0100
Message-Id: <20230605070124.3741859-21-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Currently the various ordering variants of an atomic operation are defined in groups of full/acquire/release/relaxed variants which share one block of ifdeffery, with several potential definitions of each variant down different branches of that block.

Because a given ordering variant can be defined in several different branches of the shared ifdeffery, it is painful for a human to find the definition that actually applies, and there is no good single location for anything common to all definitions of a variant (e.g. kerneldoc).

Historically this grouping was necessary, as we filled in the missing atomics in the same namespace as the architecture used, as illustrated by the sketch below.
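For context, this is a minimal sketch of the grouped form being replaced, reconstructed from the xchg() lines this patch removes (see the deleted lines in the diff below); the comments are editorial additions, not part of the kernel source:

| #ifndef arch_xchg_relaxed
| /* No _relaxed form: the fully-ordered op stands in for every variant. */
| #define arch_xchg_acquire arch_xchg
| #define arch_xchg_release arch_xchg
| #define arch_xchg_relaxed arch_xchg
| #else /* arch_xchg_relaxed */
| /* A _relaxed form exists: synthesize any missing variants around it. */
| #ifndef arch_xchg_acquire
| #define arch_xchg_acquire(...) \
| 	__atomic_op_acquire(arch_xchg, __VA_ARGS__)
| #endif
| #ifndef arch_xchg_release
| #define arch_xchg_release(...) \
| 	__atomic_op_release(arch_xchg, __VA_ARGS__)
| #endif
| #ifndef arch_xchg
| #define arch_xchg(...) \
| 	__atomic_op_fence(arch_xchg, __VA_ARGS__)
| #endif
| #endif /* arch_xchg_relaxed */

Note that the definition of, say, arch_xchg_acquire can come from either arm of the outer #ifndef, which is what makes the grouped form hard to navigate.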
It would be easy to accidentally define one ordering fallback in terms of another ordering fallback with redundant barriers, and avoiding that would otherwise require a lot of baroque ifdeffery.

With recent changes we no longer need to fill in the missing atomics in the arch_atomic*_() namespace, and only need to fill in the raw_atomic*_() namespace. Due to this, there's no risk of a namespace collision, and we can define each raw_atomic*_() ordering variant with its own ifdeffery checking for the arch_atomic*_() ordering variants.

Restructure the fallbacks in this way, with each ordering variant having its own ifdeffery of the form:

| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Note that where there's no relevant arch_atomic*_() ordering variant, we'll define the operation in terms of a distinct raw_atomic*_(), as this itself might have been filled in with a fallback.

As we now generate the raw_atomic*_() implementations directly, we no longer need the trivial wrappers, so they are removed.

This makes the ifdeffery easier to follow, and will allow for further improvements in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E.
McKenney Cc: Peter Zijlstra Cc: Will Deacon --- include/linux/atomic.h | 1 - include/linux/atomic/atomic-arch-fallback.h | 3178 +++++++++--------- include/linux/atomic/atomic-raw.h | 1135 ------- scripts/atomic/fallbacks/acquire | 2 +- scripts/atomic/fallbacks/add_negative | 4 +- scripts/atomic/fallbacks/add_unless | 4 +- scripts/atomic/fallbacks/andnot | 4 +- scripts/atomic/fallbacks/cmpxchg | 4 +- scripts/atomic/fallbacks/dec | 4 +- scripts/atomic/fallbacks/dec_and_test | 4 +- scripts/atomic/fallbacks/dec_if_positive | 6 +- scripts/atomic/fallbacks/dec_unless_positive | 6 +- scripts/atomic/fallbacks/fence | 2 +- scripts/atomic/fallbacks/fetch_add_unless | 6 +- scripts/atomic/fallbacks/inc | 4 +- scripts/atomic/fallbacks/inc_and_test | 4 +- scripts/atomic/fallbacks/inc_not_zero | 4 +- scripts/atomic/fallbacks/inc_unless_negative | 6 +- scripts/atomic/fallbacks/read_acquire | 4 +- scripts/atomic/fallbacks/release | 2 +- scripts/atomic/fallbacks/set_release | 4 +- scripts/atomic/fallbacks/sub_and_test | 4 +- scripts/atomic/fallbacks/try_cmpxchg | 4 +- scripts/atomic/fallbacks/xchg | 4 +- scripts/atomic/gen-atomic-fallback.sh | 236 +- scripts/atomic/gen-atomic-raw.sh | 80 - scripts/atomic/gen-atomics.sh | 1 - 27 files changed, 1866 insertions(+), 2851 deletions(-) delete mode 100644 include/linux/atomic/atomic-raw.h delete mode 100755 scripts/atomic/gen-atomic-raw.sh diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 296cfae0389fe..8dd57c3a99e9b 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -78,7 +78,6 @@ }) #include -#include #include #include diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h index 1a2d81dbc2e48..99bc1a871dc12 100644 --- a/include/linux/atomic/atomic-arch-fallback.h +++ b/include/linux/atomic/atomic-arch-fallback.h @@ -8,2749 +8,2911 @@ #include -#ifndef arch_xchg_relaxed -#define arch_xchg_acquire arch_xchg -#define arch_xchg_release arch_xchg -#define arch_xchg_relaxed arch_xchg -#else /* arch_xchg_relaxed */ - -#ifndef arch_xchg_acquire -#define arch_xchg_acquire(...) \ - __atomic_op_acquire(arch_xchg, __VA_ARGS__) +#if defined(arch_xchg) +#define raw_xchg arch_xchg +#elif defined(arch_xchg_relaxed) +#define raw_xchg(...) \ + __atomic_op_fence(arch_xchg, __VA_ARGS__) +#else +extern void raw_xchg_not_implemented(void); +#define raw_xchg(...) raw_xchg_not_implemented() #endif -#ifndef arch_xchg_release -#define arch_xchg_release(...) \ - __atomic_op_release(arch_xchg, __VA_ARGS__) +#if defined(arch_xchg_acquire) +#define raw_xchg_acquire arch_xchg_acquire +#elif defined(arch_xchg_relaxed) +#define raw_xchg_acquire(...) \ + __atomic_op_acquire(arch_xchg, __VA_ARGS__) +#elif defined(arch_xchg) +#define raw_xchg_acquire arch_xchg +#else +extern void raw_xchg_acquire_not_implemented(void); +#define raw_xchg_acquire(...) raw_xchg_acquire_not_implemented() #endif -#ifndef arch_xchg -#define arch_xchg(...) \ - __atomic_op_fence(arch_xchg, __VA_ARGS__) +#if defined(arch_xchg_release) +#define raw_xchg_release arch_xchg_release +#elif defined(arch_xchg_relaxed) +#define raw_xchg_release(...) \ + __atomic_op_release(arch_xchg, __VA_ARGS__) +#elif defined(arch_xchg) +#define raw_xchg_release arch_xchg +#else +extern void raw_xchg_release_not_implemented(void); +#define raw_xchg_release(...) 
raw_xchg_release_not_implemented() +#endif + +#if defined(arch_xchg_relaxed) +#define raw_xchg_relaxed arch_xchg_relaxed +#elif defined(arch_xchg) +#define raw_xchg_relaxed arch_xchg +#else +extern void raw_xchg_relaxed_not_implemented(void); +#define raw_xchg_relaxed(...) raw_xchg_relaxed_not_implemented() +#endif + +#if defined(arch_cmpxchg) +#define raw_cmpxchg arch_cmpxchg +#elif defined(arch_cmpxchg_relaxed) +#define raw_cmpxchg(...) \ + __atomic_op_fence(arch_cmpxchg, __VA_ARGS__) +#else +extern void raw_cmpxchg_not_implemented(void); +#define raw_cmpxchg(...) raw_cmpxchg_not_implemented() #endif -#endif /* arch_xchg_relaxed */ - -#ifndef arch_cmpxchg_relaxed -#define arch_cmpxchg_acquire arch_cmpxchg -#define arch_cmpxchg_release arch_cmpxchg -#define arch_cmpxchg_relaxed arch_cmpxchg -#else /* arch_cmpxchg_relaxed */ - -#ifndef arch_cmpxchg_acquire -#define arch_cmpxchg_acquire(...) \ +#if defined(arch_cmpxchg_acquire) +#define raw_cmpxchg_acquire arch_cmpxchg_acquire +#elif defined(arch_cmpxchg_relaxed) +#define raw_cmpxchg_acquire(...) \ __atomic_op_acquire(arch_cmpxchg, __VA_ARGS__) +#elif defined(arch_cmpxchg) +#define raw_cmpxchg_acquire arch_cmpxchg +#else +extern void raw_cmpxchg_acquire_not_implemented(void); +#define raw_cmpxchg_acquire(...) raw_cmpxchg_acquire_not_implemented() #endif -#ifndef arch_cmpxchg_release -#define arch_cmpxchg_release(...) \ +#if defined(arch_cmpxchg_release) +#define raw_cmpxchg_release arch_cmpxchg_release +#elif defined(arch_cmpxchg_relaxed) +#define raw_cmpxchg_release(...) \ __atomic_op_release(arch_cmpxchg, __VA_ARGS__) +#elif defined(arch_cmpxchg) +#define raw_cmpxchg_release arch_cmpxchg +#else +extern void raw_cmpxchg_release_not_implemented(void); +#define raw_cmpxchg_release(...) raw_cmpxchg_release_not_implemented() +#endif + +#if defined(arch_cmpxchg_relaxed) +#define raw_cmpxchg_relaxed arch_cmpxchg_relaxed +#elif defined(arch_cmpxchg) +#define raw_cmpxchg_relaxed arch_cmpxchg +#else +extern void raw_cmpxchg_relaxed_not_implemented(void); +#define raw_cmpxchg_relaxed(...) raw_cmpxchg_relaxed_not_implemented() +#endif + +#if defined(arch_cmpxchg64) +#define raw_cmpxchg64 arch_cmpxchg64 +#elif defined(arch_cmpxchg64_relaxed) +#define raw_cmpxchg64(...) \ + __atomic_op_fence(arch_cmpxchg64, __VA_ARGS__) +#else +extern void raw_cmpxchg64_not_implemented(void); +#define raw_cmpxchg64(...) raw_cmpxchg64_not_implemented() #endif -#ifndef arch_cmpxchg -#define arch_cmpxchg(...) \ - __atomic_op_fence(arch_cmpxchg, __VA_ARGS__) -#endif - -#endif /* arch_cmpxchg_relaxed */ - -#ifndef arch_cmpxchg64_relaxed -#define arch_cmpxchg64_acquire arch_cmpxchg64 -#define arch_cmpxchg64_release arch_cmpxchg64 -#define arch_cmpxchg64_relaxed arch_cmpxchg64 -#else /* arch_cmpxchg64_relaxed */ - -#ifndef arch_cmpxchg64_acquire -#define arch_cmpxchg64_acquire(...) \ +#if defined(arch_cmpxchg64_acquire) +#define raw_cmpxchg64_acquire arch_cmpxchg64_acquire +#elif defined(arch_cmpxchg64_relaxed) +#define raw_cmpxchg64_acquire(...) \ __atomic_op_acquire(arch_cmpxchg64, __VA_ARGS__) +#elif defined(arch_cmpxchg64) +#define raw_cmpxchg64_acquire arch_cmpxchg64 +#else +extern void raw_cmpxchg64_acquire_not_implemented(void); +#define raw_cmpxchg64_acquire(...) raw_cmpxchg64_acquire_not_implemented() #endif -#ifndef arch_cmpxchg64_release -#define arch_cmpxchg64_release(...) \ +#if defined(arch_cmpxchg64_release) +#define raw_cmpxchg64_release arch_cmpxchg64_release +#elif defined(arch_cmpxchg64_relaxed) +#define raw_cmpxchg64_release(...) 
\ __atomic_op_release(arch_cmpxchg64, __VA_ARGS__) +#elif defined(arch_cmpxchg64) +#define raw_cmpxchg64_release arch_cmpxchg64 +#else +extern void raw_cmpxchg64_release_not_implemented(void); +#define raw_cmpxchg64_release(...) raw_cmpxchg64_release_not_implemented() +#endif + +#if defined(arch_cmpxchg64_relaxed) +#define raw_cmpxchg64_relaxed arch_cmpxchg64_relaxed +#elif defined(arch_cmpxchg64) +#define raw_cmpxchg64_relaxed arch_cmpxchg64 +#else +extern void raw_cmpxchg64_relaxed_not_implemented(void); +#define raw_cmpxchg64_relaxed(...) raw_cmpxchg64_relaxed_not_implemented() +#endif + +#if defined(arch_cmpxchg128) +#define raw_cmpxchg128 arch_cmpxchg128 +#elif defined(arch_cmpxchg128_relaxed) +#define raw_cmpxchg128(...) \ + __atomic_op_fence(arch_cmpxchg128, __VA_ARGS__) +#else +extern void raw_cmpxchg128_not_implemented(void); +#define raw_cmpxchg128(...) raw_cmpxchg128_not_implemented() #endif -#ifndef arch_cmpxchg64 -#define arch_cmpxchg64(...) \ - __atomic_op_fence(arch_cmpxchg64, __VA_ARGS__) -#endif - -#endif /* arch_cmpxchg64_relaxed */ - -#ifndef arch_cmpxchg128_relaxed -#define arch_cmpxchg128_acquire arch_cmpxchg128 -#define arch_cmpxchg128_release arch_cmpxchg128 -#define arch_cmpxchg128_relaxed arch_cmpxchg128 -#else /* arch_cmpxchg128_relaxed */ - -#ifndef arch_cmpxchg128_acquire -#define arch_cmpxchg128_acquire(...) \ +#if defined(arch_cmpxchg128_acquire) +#define raw_cmpxchg128_acquire arch_cmpxchg128_acquire +#elif defined(arch_cmpxchg128_relaxed) +#define raw_cmpxchg128_acquire(...) \ __atomic_op_acquire(arch_cmpxchg128, __VA_ARGS__) +#elif defined(arch_cmpxchg128) +#define raw_cmpxchg128_acquire arch_cmpxchg128 +#else +extern void raw_cmpxchg128_acquire_not_implemented(void); +#define raw_cmpxchg128_acquire(...) raw_cmpxchg128_acquire_not_implemented() #endif -#ifndef arch_cmpxchg128_release -#define arch_cmpxchg128_release(...) \ +#if defined(arch_cmpxchg128_release) +#define raw_cmpxchg128_release arch_cmpxchg128_release +#elif defined(arch_cmpxchg128_relaxed) +#define raw_cmpxchg128_release(...) \ __atomic_op_release(arch_cmpxchg128, __VA_ARGS__) -#endif - -#ifndef arch_cmpxchg128 -#define arch_cmpxchg128(...) \ - __atomic_op_fence(arch_cmpxchg128, __VA_ARGS__) -#endif - -#endif /* arch_cmpxchg128_relaxed */ - -#ifndef arch_try_cmpxchg_relaxed -#ifdef arch_try_cmpxchg -#define arch_try_cmpxchg_acquire arch_try_cmpxchg -#define arch_try_cmpxchg_release arch_try_cmpxchg -#define arch_try_cmpxchg_relaxed arch_try_cmpxchg -#endif /* arch_try_cmpxchg */ - -#ifndef arch_try_cmpxchg -#define arch_try_cmpxchg(_ptr, _oldp, _new) \ +#elif defined(arch_cmpxchg128) +#define raw_cmpxchg128_release arch_cmpxchg128 +#else +extern void raw_cmpxchg128_release_not_implemented(void); +#define raw_cmpxchg128_release(...) raw_cmpxchg128_release_not_implemented() +#endif + +#if defined(arch_cmpxchg128_relaxed) +#define raw_cmpxchg128_relaxed arch_cmpxchg128_relaxed +#elif defined(arch_cmpxchg128) +#define raw_cmpxchg128_relaxed arch_cmpxchg128 +#else +extern void raw_cmpxchg128_relaxed_not_implemented(void); +#define raw_cmpxchg128_relaxed(...) raw_cmpxchg128_relaxed_not_implemented() +#endif + +#if defined(arch_try_cmpxchg) +#define raw_try_cmpxchg arch_try_cmpxchg +#elif defined(arch_try_cmpxchg_relaxed) +#define raw_try_cmpxchg(...) 
+	__atomic_op_fence(arch_try_cmpxchg, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg */
+#endif
-#ifndef arch_try_cmpxchg_acquire
-#define arch_try_cmpxchg_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_acquire)
+#define raw_try_cmpxchg_acquire arch_try_cmpxchg_acquire
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_acquire(...) \
+	__atomic_op_acquire(arch_try_cmpxchg, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_acquire arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_acquire(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg_acquire((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg_acquire((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg_acquire */
+#endif
-#ifndef arch_try_cmpxchg_release
-#define arch_try_cmpxchg_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_release)
+#define raw_try_cmpxchg_release arch_try_cmpxchg_release
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_release(...) \
+	__atomic_op_release(arch_try_cmpxchg, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_release arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_release(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg_release((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg_release((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg_release */
+#endif
-#ifndef arch_try_cmpxchg_relaxed
-#define arch_try_cmpxchg_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_relaxed arch_try_cmpxchg_relaxed
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_relaxed arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_relaxed(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg_relaxed((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg_relaxed((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg_relaxed */
-
-#else /* arch_try_cmpxchg_relaxed */
-
-#ifndef arch_try_cmpxchg_acquire
-#define arch_try_cmpxchg_acquire(...) \
-	__atomic_op_acquire(arch_try_cmpxchg, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg_release
-#define arch_try_cmpxchg_release(...) \
-	__atomic_op_release(arch_try_cmpxchg, __VA_ARGS__)
 #endif
-#ifndef arch_try_cmpxchg
-#define arch_try_cmpxchg(...) \
-	__atomic_op_fence(arch_try_cmpxchg, __VA_ARGS__)
-#endif
-
-#endif /* arch_try_cmpxchg_relaxed */
-
-#ifndef arch_try_cmpxchg64_relaxed
-#ifdef arch_try_cmpxchg64
-#define arch_try_cmpxchg64_acquire arch_try_cmpxchg64
-#define arch_try_cmpxchg64_release arch_try_cmpxchg64
-#define arch_try_cmpxchg64_relaxed arch_try_cmpxchg64
-#endif /* arch_try_cmpxchg64 */
-
-#ifndef arch_try_cmpxchg64
-#define arch_try_cmpxchg64(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64 arch_try_cmpxchg64
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64(...) \
+	__atomic_op_fence(arch_try_cmpxchg64, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg64(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64 */
+#endif
-#ifndef arch_try_cmpxchg64_acquire
-#define arch_try_cmpxchg64_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_acquire)
+#define raw_try_cmpxchg64_acquire arch_try_cmpxchg64_acquire
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_acquire(...) \
+	__atomic_op_acquire(arch_try_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_acquire arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_acquire(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64_acquire((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64_acquire((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64_acquire */
+#endif
-#ifndef arch_try_cmpxchg64_release
-#define arch_try_cmpxchg64_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_release)
+#define raw_try_cmpxchg64_release arch_try_cmpxchg64_release
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_release(...) \
+	__atomic_op_release(arch_try_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_release arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_release(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64_release((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64_release((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64_release */
+#endif
-#ifndef arch_try_cmpxchg64_relaxed
-#define arch_try_cmpxchg64_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_relaxed arch_try_cmpxchg64_relaxed
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_relaxed arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_relaxed(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64_relaxed((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64_relaxed((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64_relaxed */
-
-#else /* arch_try_cmpxchg64_relaxed */
-
-#ifndef arch_try_cmpxchg64_acquire
-#define arch_try_cmpxchg64_acquire(...) \
-	__atomic_op_acquire(arch_try_cmpxchg64, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg64_release
-#define arch_try_cmpxchg64_release(...) \
-	__atomic_op_release(arch_try_cmpxchg64, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg64
-#define arch_try_cmpxchg64(...) \
-	__atomic_op_fence(arch_try_cmpxchg64, __VA_ARGS__)
 #endif
-#endif /* arch_try_cmpxchg64_relaxed */
-
-#ifndef arch_try_cmpxchg128_relaxed
-#ifdef arch_try_cmpxchg128
-#define arch_try_cmpxchg128_acquire arch_try_cmpxchg128
-#define arch_try_cmpxchg128_release arch_try_cmpxchg128
-#define arch_try_cmpxchg128_relaxed arch_try_cmpxchg128
-#endif /* arch_try_cmpxchg128 */
-
-#ifndef arch_try_cmpxchg128
-#define arch_try_cmpxchg128(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128 arch_try_cmpxchg128
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128(...) \
+	__atomic_op_fence(arch_try_cmpxchg128, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg128(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg128((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg128 */
+#endif
-#ifndef arch_try_cmpxchg128_acquire
-#define arch_try_cmpxchg128_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_acquire)
+#define raw_try_cmpxchg128_acquire arch_try_cmpxchg128_acquire
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_acquire(...) \
+	__atomic_op_acquire(arch_try_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_acquire arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_acquire(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg128_acquire((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128_acquire((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg128_acquire */
+#endif
-#ifndef arch_try_cmpxchg128_release
-#define arch_try_cmpxchg128_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_release)
+#define raw_try_cmpxchg128_release arch_try_cmpxchg128_release
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_release(...) \
+	__atomic_op_release(arch_try_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_release arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_release(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg128_release((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128_release((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg128_release */
+#endif
-#ifndef arch_try_cmpxchg128_relaxed
-#define arch_try_cmpxchg128_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_relaxed arch_try_cmpxchg128_relaxed
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_relaxed arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_relaxed(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg128_relaxed((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128_relaxed((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg128_relaxed */
-
-#else /* arch_try_cmpxchg128_relaxed */
-
-#ifndef arch_try_cmpxchg128_acquire
-#define arch_try_cmpxchg128_acquire(...) \
-	__atomic_op_acquire(arch_try_cmpxchg128, __VA_ARGS__)
 #endif
-#ifndef arch_try_cmpxchg128_release
-#define arch_try_cmpxchg128_release(...) \
-	__atomic_op_release(arch_try_cmpxchg128, __VA_ARGS__)
-#endif
+#define raw_cmpxchg_local arch_cmpxchg_local
-#ifndef arch_try_cmpxchg128
-#define arch_try_cmpxchg128(...) \
-	__atomic_op_fence(arch_try_cmpxchg128, __VA_ARGS__)
+#ifdef arch_try_cmpxchg_local
+#define raw_try_cmpxchg_local arch_try_cmpxchg_local
+#else
+#define raw_try_cmpxchg_local(_ptr, _oldp, _new) \
+({ \
+	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
+	___r = raw_cmpxchg_local((_ptr), ___o, (_new)); \
+	if (unlikely(___r != ___o)) \
+		*___op = ___r; \
+	likely(___r == ___o); \
+})
 #endif
-#endif /* arch_try_cmpxchg128_relaxed */
+#define raw_cmpxchg64_local arch_cmpxchg64_local
-#ifndef arch_try_cmpxchg_local
-#define arch_try_cmpxchg_local(_ptr, _oldp, _new) \
+#ifdef arch_try_cmpxchg64_local
+#define raw_try_cmpxchg64_local arch_try_cmpxchg64_local
+#else
+#define raw_try_cmpxchg64_local(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg_local((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg64_local((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg_local */
+#endif
+
+#define raw_cmpxchg128_local arch_cmpxchg128_local
-#ifndef arch_try_cmpxchg64_local
-#define arch_try_cmpxchg64_local(_ptr, _oldp, _new) \
+#ifdef arch_try_cmpxchg128_local
+#define raw_try_cmpxchg128_local arch_try_cmpxchg128_local
+#else
+#define raw_try_cmpxchg128_local(_ptr, _oldp, _new) \
 ({ \
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
-	___r = arch_cmpxchg64_local((_ptr), ___o, (_new)); \
+	___r = raw_cmpxchg128_local((_ptr), ___o, (_new)); \
 	if (unlikely(___r != ___o)) \
 		*___op = ___r; \
 	likely(___r == ___o); \
 })
-#endif /* arch_try_cmpxchg64_local */
+#endif
+
+#define raw_sync_cmpxchg arch_sync_cmpxchg
-#ifndef arch_atomic_read_acquire
+#define raw_atomic_read arch_atomic_read
+
+#if defined(arch_atomic_read_acquire)
+#define raw_atomic_read_acquire arch_atomic_read_acquire
+#elif defined(arch_atomic_read)
+#define raw_atomic_read_acquire arch_atomic_read
+#else
 static __always_inline int
-arch_atomic_read_acquire(const atomic_t *v)
+raw_atomic_read_acquire(const atomic_t *v)
 {
 	int ret;
 	if (__native_word(atomic_t)) {
 		ret = smp_load_acquire(&(v)->counter);
 	} else {
-		ret = arch_atomic_read(v);
+		ret = raw_atomic_read(v);
 		__atomic_acquire_fence();
 	}
 	return ret;
 }
-#define arch_atomic_read_acquire arch_atomic_read_acquire
 #endif
-#ifndef arch_atomic_set_release
+#define raw_atomic_set arch_atomic_set
+
+#if defined(arch_atomic_set_release)
+#define raw_atomic_set_release arch_atomic_set_release
+#elif defined(arch_atomic_set)
+#define raw_atomic_set_release arch_atomic_set
+#else
 static __always_inline void
-arch_atomic_set_release(atomic_t *v, int i)
+raw_atomic_set_release(atomic_t *v, int i)
 {
 	if (__native_word(atomic_t)) {
 		smp_store_release(&(v)->counter, i);
 	} else {
 		__atomic_release_fence();
-		arch_atomic_set(v, i);
+		raw_atomic_set(v, i);
 	}
 }
-#define arch_atomic_set_release arch_atomic_set_release
 #endif
-#ifndef arch_atomic_add_return_relaxed
-#define arch_atomic_add_return_acquire arch_atomic_add_return
-#define arch_atomic_add_return_release arch_atomic_add_return
-#define arch_atomic_add_return_relaxed arch_atomic_add_return
-#else /* arch_atomic_add_return_relaxed */
+#define raw_atomic_add arch_atomic_add
+
+#if defined(arch_atomic_add_return)
+#define raw_atomic_add_return arch_atomic_add_return
+#elif defined(arch_atomic_add_return_relaxed)
+static __always_inline int
+raw_atomic_add_return(int i, atomic_t *v)
+{
+	int ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic_add_return_relaxed(i, v);
+	__atomic_post_full_fence();
+	return ret;
+}
+#else
+#error "Unable to define raw_atomic_add_return"
+#endif
-#ifndef arch_atomic_add_return_acquire
+#if defined(arch_atomic_add_return_acquire)
+#define raw_atomic_add_return_acquire arch_atomic_add_return_acquire
+#elif defined(arch_atomic_add_return_relaxed)
 static __always_inline int
-arch_atomic_add_return_acquire(int i, atomic_t *v)
+raw_atomic_add_return_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_add_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_acquire arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_acquire"
 #endif
-#ifndef arch_atomic_add_return_release
+#if defined(arch_atomic_add_return_release)
+#define raw_atomic_add_return_release arch_atomic_add_return_release
+#elif defined(arch_atomic_add_return_relaxed)
 static __always_inline int
-arch_atomic_add_return_release(int i, atomic_t *v)
+raw_atomic_add_return_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_add_return_relaxed(i, v);
 }
-#define arch_atomic_add_return_release arch_atomic_add_return_release
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_release arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_release"
 #endif
-#ifndef arch_atomic_add_return
+#if defined(arch_atomic_add_return_relaxed)
+#define raw_atomic_add_return_relaxed arch_atomic_add_return_relaxed
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_relaxed arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_relaxed"
+#endif
+
+#if defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add arch_atomic_fetch_add
+#elif defined(arch_atomic_fetch_add_relaxed)
 static __always_inline int
-arch_atomic_add_return(int i, atomic_t *v)
+raw_atomic_fetch_add(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_add_return_relaxed(i, v);
+	ret = arch_atomic_fetch_add_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_add_return arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_fetch_add"
 #endif
-#endif /* arch_atomic_add_return_relaxed */
-
-#ifndef arch_atomic_fetch_add_relaxed
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add
-#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add
-#else /* arch_atomic_fetch_add_relaxed */
-
-#ifndef arch_atomic_fetch_add_acquire
+#if defined(arch_atomic_fetch_add_acquire)
+#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#elif defined(arch_atomic_fetch_add_relaxed)
 static __always_inline int
-arch_atomic_fetch_add_acquire(int i, atomic_t *v)
+raw_atomic_fetch_add_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_add_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_acquire"
 #endif
-#ifndef arch_atomic_fetch_add_release
+#if defined(arch_atomic_fetch_add_release)
+#define raw_atomic_fetch_add_release arch_atomic_fetch_add_release
+#elif defined(arch_atomic_fetch_add_relaxed)
 static __always_inline int
-arch_atomic_fetch_add_release(int i, atomic_t *v)
+raw_atomic_fetch_add_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_add_relaxed(i, v);
 }
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_release arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_release"
+#endif
+
+#if defined(arch_atomic_fetch_add_relaxed)
+#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_relaxed"
 #endif
-#ifndef arch_atomic_fetch_add
+#define raw_atomic_sub arch_atomic_sub
+
+#if defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return arch_atomic_sub_return
+#elif defined(arch_atomic_sub_return_relaxed)
 static __always_inline int
-arch_atomic_fetch_add(int i, atomic_t *v)
+raw_atomic_sub_return(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_add_relaxed(i, v);
+	ret = arch_atomic_sub_return_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_add arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_sub_return"
 #endif
-#endif /* arch_atomic_fetch_add_relaxed */
-
-#ifndef arch_atomic_sub_return_relaxed
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return
-#define arch_atomic_sub_return_release arch_atomic_sub_return
-#define arch_atomic_sub_return_relaxed arch_atomic_sub_return
-#else /* arch_atomic_sub_return_relaxed */
-
-#ifndef arch_atomic_sub_return_acquire
+#if defined(arch_atomic_sub_return_acquire)
+#define raw_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#elif defined(arch_atomic_sub_return_relaxed)
 static __always_inline int
-arch_atomic_sub_return_acquire(int i, atomic_t *v)
+raw_atomic_sub_return_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_sub_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_acquire arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_acquire"
 #endif
-#ifndef arch_atomic_sub_return_release
+#if defined(arch_atomic_sub_return_release)
+#define raw_atomic_sub_return_release arch_atomic_sub_return_release
+#elif defined(arch_atomic_sub_return_relaxed)
 static __always_inline int
-arch_atomic_sub_return_release(int i, atomic_t *v)
+raw_atomic_sub_return_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_sub_return_relaxed(i, v);
 }
-#define arch_atomic_sub_return_release arch_atomic_sub_return_release
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_release arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_release"
+#endif
+
+#if defined(arch_atomic_sub_return_relaxed)
+#define raw_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_relaxed arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_relaxed"
 #endif
-#ifndef arch_atomic_sub_return
+#if defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub arch_atomic_fetch_sub
+#elif defined(arch_atomic_fetch_sub_relaxed)
 static __always_inline int
-arch_atomic_sub_return(int i, atomic_t *v)
+raw_atomic_fetch_sub(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_sub_return_relaxed(i, v);
+	ret = arch_atomic_fetch_sub_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_sub_return arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_fetch_sub"
 #endif
-#endif /* arch_atomic_sub_return_relaxed */
-
-#ifndef arch_atomic_fetch_sub_relaxed
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub
-#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
-#else /* arch_atomic_fetch_sub_relaxed */
-
-#ifndef arch_atomic_fetch_sub_acquire
+#if defined(arch_atomic_fetch_sub_acquire)
+#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#elif defined(arch_atomic_fetch_sub_relaxed)
 static __always_inline int
-arch_atomic_fetch_sub_acquire(int i, atomic_t *v)
+raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_sub_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_acquire"
 #endif
-#ifndef arch_atomic_fetch_sub_release
+#if defined(arch_atomic_fetch_sub_release)
+#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#elif defined(arch_atomic_fetch_sub_relaxed)
 static __always_inline int
-arch_atomic_fetch_sub_release(int i, atomic_t *v)
+raw_atomic_fetch_sub_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_sub_relaxed(i, v);
 }
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_release"
 #endif
-#ifndef arch_atomic_fetch_sub
-static __always_inline int
-arch_atomic_fetch_sub(int i, atomic_t *v)
-{
-	int ret;
-	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_sub_relaxed(i, v);
-	__atomic_post_full_fence();
-	return ret;
-}
-#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#if defined(arch_atomic_fetch_sub_relaxed)
+#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_relaxed"
 #endif
-#endif /* arch_atomic_fetch_sub_relaxed */
-
-#ifndef arch_atomic_inc
+#if defined(arch_atomic_inc)
+#define raw_atomic_inc arch_atomic_inc
+#else
 static __always_inline void
-arch_atomic_inc(atomic_t *v)
+raw_atomic_inc(atomic_t *v)
 {
-	arch_atomic_add(1, v);
+	raw_atomic_add(1, v);
 }
-#define arch_atomic_inc arch_atomic_inc
 #endif
-#ifndef arch_atomic_inc_return_relaxed
-#ifdef arch_atomic_inc_return
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return
-#define arch_atomic_inc_return_release arch_atomic_inc_return
-#define arch_atomic_inc_return_relaxed arch_atomic_inc_return
-#endif /* arch_atomic_inc_return */
-
-#ifndef arch_atomic_inc_return
+#if defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return arch_atomic_inc_return
+#elif defined(arch_atomic_inc_return_relaxed)
+static __always_inline int
+raw_atomic_inc_return(atomic_t *v)
+{
+	int ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic_inc_return_relaxed(v);
+	__atomic_post_full_fence();
+	return ret;
+}
+#else
 static __always_inline int
-arch_atomic_inc_return(atomic_t *v)
+raw_atomic_inc_return(atomic_t *v)
 {
-	return arch_atomic_add_return(1, v);
+	return raw_atomic_add_return(1, v);
 }
-#define arch_atomic_inc_return arch_atomic_inc_return
 #endif
-#ifndef arch_atomic_inc_return_acquire
+#if defined(arch_atomic_inc_return_acquire)
+#define raw_atomic_inc_return_acquire arch_atomic_inc_return_acquire
+#elif defined(arch_atomic_inc_return_relaxed)
 static __always_inline int
-arch_atomic_inc_return_acquire(atomic_t *v)
+raw_atomic_inc_return_acquire(atomic_t *v)
 {
-	return arch_atomic_add_return_acquire(1, v);
+	int ret = arch_atomic_inc_return_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return_acquire
-#endif
-
-#ifndef arch_atomic_inc_return_release
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_acquire arch_atomic_inc_return
+#else
 static __always_inline int
-arch_atomic_inc_return_release(atomic_t *v)
+raw_atomic_inc_return_acquire(atomic_t *v)
 {
-	return arch_atomic_add_return_release(1, v);
+	return raw_atomic_add_return_acquire(1, v);
 }
-#define arch_atomic_inc_return_release arch_atomic_inc_return_release
 #endif
-#ifndef arch_atomic_inc_return_relaxed
+#if defined(arch_atomic_inc_return_release)
+#define raw_atomic_inc_return_release arch_atomic_inc_return_release
+#elif defined(arch_atomic_inc_return_relaxed)
 static __always_inline int
-arch_atomic_inc_return_relaxed(atomic_t *v)
+raw_atomic_inc_return_release(atomic_t *v)
 {
-	return arch_atomic_add_return_relaxed(1, v);
+	__atomic_release_fence();
+	return arch_atomic_inc_return_relaxed(v);
 }
-#define arch_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
-#endif
-
-#else /* arch_atomic_inc_return_relaxed */
-
-#ifndef arch_atomic_inc_return_acquire
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_release arch_atomic_inc_return
+#else
 static __always_inline int
-arch_atomic_inc_return_acquire(atomic_t *v)
+raw_atomic_inc_return_release(atomic_t *v)
 {
-	int ret = arch_atomic_inc_return_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic_add_return_release(1, v);
 }
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return_acquire
 #endif
-#ifndef arch_atomic_inc_return_release
+#if defined(arch_atomic_inc_return_relaxed)
+#define raw_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_relaxed arch_atomic_inc_return
+#else
 static __always_inline int
-arch_atomic_inc_return_release(atomic_t *v)
+raw_atomic_inc_return_relaxed(atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_inc_return_relaxed(v);
+	return raw_atomic_add_return_relaxed(1, v);
 }
-#define arch_atomic_inc_return_release arch_atomic_inc_return_release
 #endif
-#ifndef arch_atomic_inc_return
+#if defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc arch_atomic_fetch_inc
+#elif defined(arch_atomic_fetch_inc_relaxed)
 static __always_inline int
-arch_atomic_inc_return(atomic_t *v)
+raw_atomic_fetch_inc(atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_inc_return_relaxed(v);
+	ret = arch_atomic_fetch_inc_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_inc_return arch_atomic_inc_return
-#endif
-
-#endif /* arch_atomic_inc_return_relaxed */
-
-#ifndef arch_atomic_fetch_inc_relaxed
-#ifdef arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
-#endif /* arch_atomic_fetch_inc */
-
-#ifndef arch_atomic_fetch_inc
+#else
 static __always_inline int
-arch_atomic_fetch_inc(atomic_t *v)
+raw_atomic_fetch_inc(atomic_t *v)
 {
-	return arch_atomic_fetch_add(1, v);
+	return raw_atomic_fetch_add(1, v);
 }
-#define arch_atomic_fetch_inc arch_atomic_fetch_inc
 #endif
-#ifndef arch_atomic_fetch_inc_acquire
+#if defined(arch_atomic_fetch_inc_acquire)
+#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
+#elif defined(arch_atomic_fetch_inc_relaxed)
 static __always_inline int
-arch_atomic_fetch_inc_acquire(atomic_t *v)
+raw_atomic_fetch_inc_acquire(atomic_t *v)
 {
-	return arch_atomic_fetch_add_acquire(1, v);
+	int ret = arch_atomic_fetch_inc_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
-#endif
-
-#ifndef arch_atomic_fetch_inc_release
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc
+#else
 static __always_inline int
-arch_atomic_fetch_inc_release(atomic_t *v)
+raw_atomic_fetch_inc_acquire(atomic_t *v)
 {
-	return arch_atomic_fetch_add_release(1, v);
+	return raw_atomic_fetch_add_acquire(1, v);
 }
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc_release
 #endif
-#ifndef arch_atomic_fetch_inc_relaxed
+#if defined(arch_atomic_fetch_inc_release)
+#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc_release
+#elif defined(arch_atomic_fetch_inc_relaxed)
+static __always_inline int
+raw_atomic_fetch_inc_release(atomic_t *v)
+{
+	__atomic_release_fence();
+	return arch_atomic_fetch_inc_relaxed(v);
+}
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc
+#else
 static __always_inline int
-arch_atomic_fetch_inc_relaxed(atomic_t *v)
+raw_atomic_fetch_inc_release(atomic_t *v)
 {
-	return arch_atomic_fetch_add_relaxed(1, v);
+	return raw_atomic_fetch_add_release(1, v);
 }
-#define arch_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
 #endif
-#else /* arch_atomic_fetch_inc_relaxed */
-
-#ifndef arch_atomic_fetch_inc_acquire
+#if defined(arch_atomic_fetch_inc_relaxed)
+#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
+#else
 static __always_inline int
-arch_atomic_fetch_inc_acquire(atomic_t *v)
+raw_atomic_fetch_inc_relaxed(atomic_t *v)
 {
-	int ret = arch_atomic_fetch_inc_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic_fetch_add_relaxed(1, v);
 }
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
 #endif
-#ifndef arch_atomic_fetch_inc_release
-static __always_inline int
-arch_atomic_fetch_inc_release(atomic_t *v)
+#if defined(arch_atomic_dec)
+#define raw_atomic_dec arch_atomic_dec
+#else
+static __always_inline void
+raw_atomic_dec(atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_fetch_inc_relaxed(v);
+	raw_atomic_sub(1, v);
 }
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc_release
 #endif
-#ifndef arch_atomic_fetch_inc
+#if defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return arch_atomic_dec_return
+#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
-arch_atomic_fetch_inc(atomic_t *v)
+raw_atomic_dec_return(atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_inc_relaxed(v);
+	ret = arch_atomic_dec_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_inc arch_atomic_fetch_inc
-#endif
-
-#endif /* arch_atomic_fetch_inc_relaxed */
-
-#ifndef arch_atomic_dec
-static __always_inline void
-arch_atomic_dec(atomic_t *v)
-{
-	arch_atomic_sub(1, v);
-}
-#define arch_atomic_dec arch_atomic_dec
-#endif
-
-#ifndef arch_atomic_dec_return_relaxed
-#ifdef arch_atomic_dec_return
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return
-#define arch_atomic_dec_return_release arch_atomic_dec_return
-#define arch_atomic_dec_return_relaxed arch_atomic_dec_return
-#endif /* arch_atomic_dec_return */
-
-#ifndef arch_atomic_dec_return
+#else
 static __always_inline int
-arch_atomic_dec_return(atomic_t *v)
+raw_atomic_dec_return(atomic_t *v)
 {
-	return arch_atomic_sub_return(1, v);
+	return raw_atomic_sub_return(1, v);
 }
-#define arch_atomic_dec_return arch_atomic_dec_return
 #endif
-#ifndef arch_atomic_dec_return_acquire
+#if defined(arch_atomic_dec_return_acquire)
+#define raw_atomic_dec_return_acquire arch_atomic_dec_return_acquire
+#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
-arch_atomic_dec_return_acquire(atomic_t *v)
+raw_atomic_dec_return_acquire(atomic_t *v)
 {
-	return arch_atomic_sub_return_acquire(1, v);
+	int ret = arch_atomic_dec_return_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return_acquire
-#endif
-
-#ifndef arch_atomic_dec_return_release
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_acquire arch_atomic_dec_return
+#else
 static __always_inline int
-arch_atomic_dec_return_release(atomic_t *v)
+raw_atomic_dec_return_acquire(atomic_t *v)
 {
-	return arch_atomic_sub_return_release(1, v);
+	return raw_atomic_sub_return_acquire(1, v);
 }
-#define arch_atomic_dec_return_release arch_atomic_dec_return_release
 #endif
-#ifndef arch_atomic_dec_return_relaxed
+#if defined(arch_atomic_dec_return_release)
+#define raw_atomic_dec_return_release arch_atomic_dec_return_release
+#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
-arch_atomic_dec_return_relaxed(atomic_t *v)
+raw_atomic_dec_return_release(atomic_t *v)
 {
-	return arch_atomic_sub_return_relaxed(1, v);
+	__atomic_release_fence();
+	return arch_atomic_dec_return_relaxed(v);
 }
-#define arch_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
-#endif
-
-#else /* arch_atomic_dec_return_relaxed */
-
-#ifndef arch_atomic_dec_return_acquire
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_release arch_atomic_dec_return
+#else
 static __always_inline int
-arch_atomic_dec_return_acquire(atomic_t *v)
+raw_atomic_dec_return_release(atomic_t *v)
 {
-	int ret = arch_atomic_dec_return_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic_sub_return_release(1, v);
 }
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return_acquire
 #endif
-#ifndef arch_atomic_dec_return_release
+#if defined(arch_atomic_dec_return_relaxed)
+#define raw_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_relaxed arch_atomic_dec_return
+#else
 static __always_inline int
-arch_atomic_dec_return_release(atomic_t *v)
+raw_atomic_dec_return_relaxed(atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_dec_return_relaxed(v);
+	return raw_atomic_sub_return_relaxed(1, v);
 }
-#define arch_atomic_dec_return_release arch_atomic_dec_return_release
 #endif
-#ifndef arch_atomic_dec_return
+#if defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec arch_atomic_fetch_dec
+#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
-arch_atomic_dec_return(atomic_t *v)
+raw_atomic_fetch_dec(atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_dec_return_relaxed(v);
+	ret = arch_atomic_fetch_dec_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_dec_return arch_atomic_dec_return
-#endif
-
-#endif /* arch_atomic_dec_return_relaxed */
-
-#ifndef arch_atomic_fetch_dec_relaxed
-#ifdef arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
-#endif /* arch_atomic_fetch_dec */
-
-#ifndef arch_atomic_fetch_dec
+#else
 static __always_inline int
-arch_atomic_fetch_dec(atomic_t *v)
+raw_atomic_fetch_dec(atomic_t *v)
 {
-	return arch_atomic_fetch_sub(1, v);
+	return raw_atomic_fetch_sub(1, v);
 }
-#define arch_atomic_fetch_dec arch_atomic_fetch_dec
 #endif
-#ifndef arch_atomic_fetch_dec_acquire
+#if defined(arch_atomic_fetch_dec_acquire)
+#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
+#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
-arch_atomic_fetch_dec_acquire(atomic_t *v)
+raw_atomic_fetch_dec_acquire(atomic_t *v)
 {
-	return arch_atomic_fetch_sub_acquire(1, v);
+	int ret = arch_atomic_fetch_dec_relaxed(v);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
-#endif
-
-#ifndef arch_atomic_fetch_dec_release
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec
+#else
 static __always_inline int
-arch_atomic_fetch_dec_release(atomic_t *v)
+raw_atomic_fetch_dec_acquire(atomic_t *v)
 {
-	return arch_atomic_fetch_sub_release(1, v);
+	return raw_atomic_fetch_sub_acquire(1, v);
 }
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec_release
 #endif
-#ifndef arch_atomic_fetch_dec_relaxed
+#if defined(arch_atomic_fetch_dec_release)
+#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec_release
+#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
-arch_atomic_fetch_dec_relaxed(atomic_t *v)
+raw_atomic_fetch_dec_release(atomic_t *v)
 {
-	return arch_atomic_fetch_sub_relaxed(1, v);
+	__atomic_release_fence();
+	return arch_atomic_fetch_dec_relaxed(v);
 }
-#define arch_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
-#endif
-
-#else /* arch_atomic_fetch_dec_relaxed */
-
-#ifndef arch_atomic_fetch_dec_acquire
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec
+#else
 static __always_inline int
-arch_atomic_fetch_dec_acquire(atomic_t *v)
+raw_atomic_fetch_dec_release(atomic_t *v)
 {
-	int ret = arch_atomic_fetch_dec_relaxed(v);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_atomic_fetch_sub_release(1, v);
 }
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
 #endif
-#ifndef arch_atomic_fetch_dec_release
+#if defined(arch_atomic_fetch_dec_relaxed)
+#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
+#else
 static __always_inline int
-arch_atomic_fetch_dec_release(atomic_t *v)
+raw_atomic_fetch_dec_relaxed(atomic_t *v)
 {
-	__atomic_release_fence();
-	return arch_atomic_fetch_dec_relaxed(v);
+	return raw_atomic_fetch_sub_relaxed(1, v);
 }
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec_release
 #endif
-#ifndef arch_atomic_fetch_dec
+#define raw_atomic_and arch_atomic_and
+
+#if defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and arch_atomic_fetch_and
+#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
-arch_atomic_fetch_dec(atomic_t *v)
+raw_atomic_fetch_and(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_dec_relaxed(v);
+	ret = arch_atomic_fetch_and_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_dec arch_atomic_fetch_dec
+#else
+#error "Unable to define raw_atomic_fetch_and"
 #endif
-#endif /* arch_atomic_fetch_dec_relaxed */
-
-#ifndef arch_atomic_fetch_and_relaxed
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and
-#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and
-#else /* arch_atomic_fetch_and_relaxed */
-
-#ifndef arch_atomic_fetch_and_acquire
+#if defined(arch_atomic_fetch_and_acquire)
+#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
-arch_atomic_fetch_and_acquire(int i, atomic_t *v)
+raw_atomic_fetch_and_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_and_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_acquire"
 #endif
-#ifndef arch_atomic_fetch_and_release
+#if defined(arch_atomic_fetch_and_release)
+#define raw_atomic_fetch_and_release arch_atomic_fetch_and_release
+#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
-arch_atomic_fetch_and_release(int i, atomic_t *v)
+raw_atomic_fetch_and_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_and_relaxed(i, v);
 }
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_release arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_release"
 #endif
-#ifndef arch_atomic_fetch_and
+#if defined(arch_atomic_fetch_and_relaxed)
+#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_relaxed"
+#endif
+
+#if defined(arch_atomic_andnot)
+#define raw_atomic_andnot arch_atomic_andnot
+#else
+static __always_inline void
+raw_atomic_andnot(int i, atomic_t *v)
+{
+	raw_atomic_and(~i, v);
+}
+#endif
+
+#if defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot arch_atomic_fetch_andnot
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
-arch_atomic_fetch_and(int i, atomic_t *v)
+raw_atomic_fetch_andnot(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_and_relaxed(i, v);
+	ret = arch_atomic_fetch_andnot_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_and arch_atomic_fetch_and
-#endif
-
-#endif /* arch_atomic_fetch_and_relaxed */
-
-#ifndef arch_atomic_andnot
-static __always_inline void
-arch_atomic_andnot(int i, atomic_t *v)
+#else
+static __always_inline int
+raw_atomic_fetch_andnot(int i, atomic_t *v)
 {
-	arch_atomic_and(~i, v);
+	return raw_atomic_fetch_and(~i, v);
}
-#define arch_atomic_andnot arch_atomic_andnot
 #endif
-#ifndef arch_atomic_fetch_andnot_relaxed
-#ifdef arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
-#endif /* arch_atomic_fetch_andnot */
-
-#ifndef arch_atomic_fetch_andnot
+#if defined(arch_atomic_fetch_andnot_acquire)
+#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#elif defined(arch_atomic_fetch_andnot_relaxed)
+static __always_inline int
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
+	__atomic_acquire_fence();
+	return ret;
+}
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
+#else
 static __always_inline int
-arch_atomic_fetch_andnot(int i, atomic_t *v)
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
-	return arch_atomic_fetch_and(~i, v);
+	return raw_atomic_fetch_and_acquire(~i, v);
 }
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
 #endif
-#ifndef arch_atomic_fetch_andnot_acquire
+#if defined(arch_atomic_fetch_andnot_release)
+#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
-arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
 {
-	return arch_atomic_fetch_and_acquire(~i, v);
+	__atomic_release_fence();
+	return arch_atomic_fetch_andnot_relaxed(i, v);
 }
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#endif
-
-#ifndef arch_atomic_fetch_andnot_release
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
+#else
 static __always_inline int
-arch_atomic_fetch_andnot_release(int i, atomic_t *v)
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
 {
-	return arch_atomic_fetch_and_release(~i, v);
+	return raw_atomic_fetch_and_release(~i, v);
 }
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
 #endif
-#ifndef arch_atomic_fetch_andnot_relaxed
+#if defined(arch_atomic_fetch_andnot_relaxed)
+#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
+#else
 static __always_inline int
-arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 {
-	return arch_atomic_fetch_and_relaxed(~i, v);
+	return raw_atomic_fetch_and_relaxed(~i, v);
 }
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
 #endif
-#else /* arch_atomic_fetch_andnot_relaxed */
+#define raw_atomic_or arch_atomic_or
-#ifndef arch_atomic_fetch_andnot_acquire
+#if defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or arch_atomic_fetch_or
+#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
-arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+raw_atomic_fetch_or(int i, atomic_t *v)
 {
-	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
-	__atomic_acquire_fence();
+	int ret;
+	__atomic_pre_full_fence();
+	ret = arch_atomic_fetch_or_relaxed(i, v);
+	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#else
+#error "Unable to define raw_atomic_fetch_or"
 #endif
-#ifndef arch_atomic_fetch_andnot_release
+#if defined(arch_atomic_fetch_or_acquire)
+#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
-arch_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
-	__atomic_release_fence();
-	return arch_atomic_fetch_andnot_relaxed(i, v);
-}
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#endif
-
-#ifndef arch_atomic_fetch_andnot
-static __always_inline int
-arch_atomic_fetch_andnot(int i, atomic_t *v)
-{
-	int ret;
-	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_andnot_relaxed(i, v);
-	__atomic_post_full_fence();
-	return ret;
-}
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-#endif
-
-#endif /* arch_atomic_fetch_andnot_relaxed */
-
-#ifndef arch_atomic_fetch_or_relaxed
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or
-#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or
-#else /* arch_atomic_fetch_or_relaxed */
-
-#ifndef arch_atomic_fetch_or_acquire
-static __always_inline int
-arch_atomic_fetch_or_acquire(int i, atomic_t *v)
+raw_atomic_fetch_or_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_or_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_acquire"
 #endif
-#ifndef arch_atomic_fetch_or_release
+#if defined(arch_atomic_fetch_or_release)
+#define raw_atomic_fetch_or_release arch_atomic_fetch_or_release
+#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
-arch_atomic_fetch_or_release(int i, atomic_t *v)
+raw_atomic_fetch_or_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_or_relaxed(i, v);
 }
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_release arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_release"
 #endif
-#ifndef arch_atomic_fetch_or
+#if defined(arch_atomic_fetch_or_relaxed)
+#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_relaxed"
+#endif
+
+#define raw_atomic_xor arch_atomic_xor
+
+#if defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor arch_atomic_fetch_xor
+#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
-arch_atomic_fetch_or(int i, atomic_t *v)
+raw_atomic_fetch_xor(int i, atomic_t *v)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_or_relaxed(i, v);
+	ret = arch_atomic_fetch_xor_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_or arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_xor"
 #endif
-#endif /* arch_atomic_fetch_or_relaxed */
-
-#ifndef arch_atomic_fetch_xor_relaxed
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor
-#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
-#else /* arch_atomic_fetch_xor_relaxed */
-
-#ifndef arch_atomic_fetch_xor_acquire
+#if defined(arch_atomic_fetch_xor_acquire)
+#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
-arch_atomic_fetch_xor_acquire(int i, atomic_t *v)
+raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
 {
 	int ret = arch_atomic_fetch_xor_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
 }
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_acquire"
 #endif
-#ifndef arch_atomic_fetch_xor_release
+#if defined(arch_atomic_fetch_xor_release)
+#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
-arch_atomic_fetch_xor_release(int i, atomic_t *v)
+raw_atomic_fetch_xor_release(int i, atomic_t *v)
 {
 	__atomic_release_fence();
 	return arch_atomic_fetch_xor_relaxed(i, v);
 }
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_release"
 #endif
-#ifndef arch_atomic_fetch_xor
+#if defined(arch_atomic_fetch_xor_relaxed)
+#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_relaxed"
+#endif
+
+#if defined(arch_atomic_xchg)
+#define raw_atomic_xchg arch_atomic_xchg
+#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-arch_atomic_fetch_xor(int i, atomic_t *v)
+raw_atomic_xchg(atomic_t *v, int i)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_fetch_xor_relaxed(i, v);
+	ret = arch_atomic_xchg_relaxed(v, i);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_fetch_xor arch_atomic_fetch_xor
-#endif
-
-#endif /* arch_atomic_fetch_xor_relaxed */
-
-#ifndef arch_atomic_xchg_relaxed
-#ifdef arch_atomic_xchg
-#define arch_atomic_xchg_acquire arch_atomic_xchg
-#define arch_atomic_xchg_release arch_atomic_xchg
-#define arch_atomic_xchg_relaxed arch_atomic_xchg
-#endif /* arch_atomic_xchg */
-
-#ifndef arch_atomic_xchg
+#else
 static __always_inline int
-arch_atomic_xchg(atomic_t *v, int new)
+raw_atomic_xchg(atomic_t *v, int new)
 {
-	return arch_xchg(&v->counter, new);
+	return raw_xchg(&v->counter, new);
 }
-#define arch_atomic_xchg arch_atomic_xchg
 #endif
-#ifndef arch_atomic_xchg_acquire
+#if defined(arch_atomic_xchg_acquire)
+#define raw_atomic_xchg_acquire arch_atomic_xchg_acquire
+#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-arch_atomic_xchg_acquire(atomic_t *v, int new)
+raw_atomic_xchg_acquire(atomic_t *v, int i)
 {
-	return arch_xchg_acquire(&v->counter, new);
+	int ret = arch_atomic_xchg_relaxed(v, i);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
-#endif
-
-#ifndef arch_atomic_xchg_release
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_acquire arch_atomic_xchg
+#else
 static __always_inline int
-arch_atomic_xchg_release(atomic_t *v, int new)
+raw_atomic_xchg_acquire(atomic_t *v, int new)
 {
-	return arch_xchg_release(&v->counter, new);
+	return raw_xchg_acquire(&v->counter, new);
 }
-#define arch_atomic_xchg_release arch_atomic_xchg_release
 #endif
-#ifndef arch_atomic_xchg_relaxed
+#if defined(arch_atomic_xchg_release)
+#define raw_atomic_xchg_release arch_atomic_xchg_release
+#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-arch_atomic_xchg_relaxed(atomic_t *v, int new)
+raw_atomic_xchg_release(atomic_t *v, int i)
 {
-	return arch_xchg_relaxed(&v->counter, new);
+	__atomic_release_fence();
+	return arch_atomic_xchg_relaxed(v, i);
 }
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#endif
-
-#else /* arch_atomic_xchg_relaxed */
-
-#ifndef arch_atomic_xchg_acquire
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_release arch_atomic_xchg
+#else
 static __always_inline int
-arch_atomic_xchg_acquire(atomic_t *v, int i)
+raw_atomic_xchg_release(atomic_t *v, int new)
 {
-	int ret = arch_atomic_xchg_relaxed(v, i);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_xchg_release(&v->counter, new);
 }
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
 #endif
-#ifndef arch_atomic_xchg_release
+#if defined(arch_atomic_xchg_relaxed)
+#define raw_atomic_xchg_relaxed arch_atomic_xchg_relaxed
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_relaxed arch_atomic_xchg
+#else
 static __always_inline int
-arch_atomic_xchg_release(atomic_t *v, int i)
+raw_atomic_xchg_relaxed(atomic_t *v, int new)
 {
-	__atomic_release_fence();
-	return arch_atomic_xchg_relaxed(v, i);
+	return raw_xchg_relaxed(&v->counter, new);
 }
-#define arch_atomic_xchg_release arch_atomic_xchg_release
 #endif
-#ifndef arch_atomic_xchg
+#if defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg arch_atomic_cmpxchg
+#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
-arch_atomic_xchg(atomic_t *v, int i)
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_xchg_relaxed(v, i);
+	ret = arch_atomic_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_xchg arch_atomic_xchg
-#endif
-
-#endif /* arch_atomic_xchg_relaxed */
-
-#ifndef arch_atomic_cmpxchg_relaxed
-#ifdef arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
-#endif /* arch_atomic_cmpxchg */
-
-#ifndef arch_atomic_cmpxchg
+#else
 static __always_inline int
-arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
-	return arch_cmpxchg(&v->counter, old, new);
+	return raw_cmpxchg(&v->counter, old, new);
 }
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
 #endif
-#ifndef arch_atomic_cmpxchg_acquire
+#if defined(arch_atomic_cmpxchg_acquire)
+#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
-arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
-	return arch_cmpxchg_acquire(&v->counter, old, new);
+	int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
+	__atomic_acquire_fence();
+	return ret;
 }
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic_cmpxchg_release
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg
+#else
 static __always_inline int
-arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
-	return arch_cmpxchg_release(&v->counter, old, new);
+	return raw_cmpxchg_acquire(&v->counter, old, new);
 }
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
 #endif
-#ifndef arch_atomic_cmpxchg_relaxed
+#if defined(arch_atomic_cmpxchg_release)
+#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg_release
+#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
-arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
 {
-	return arch_cmpxchg_relaxed(&v->counter, old, new);
+	__atomic_release_fence();
+	return arch_atomic_cmpxchg_relaxed(v, old, new);
 }
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic_cmpxchg_relaxed */
-
-#ifndef arch_atomic_cmpxchg_acquire
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg
+#else
 static __always_inline int
-arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
 {
-	int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
-	__atomic_acquire_fence();
-	return ret;
+	return raw_cmpxchg_release(&v->counter, old, new);
 }
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
 #endif
-#ifndef arch_atomic_cmpxchg_release
+#if defined(arch_atomic_cmpxchg_relaxed)
+#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
+#else
 static __always_inline int
-arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
 {
-	__atomic_release_fence();
-	return arch_atomic_cmpxchg_relaxed(v, old, new);
+	return raw_cmpxchg_relaxed(&v->counter, old, new);
 }
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
 #endif
-#ifndef arch_atomic_cmpxchg
-static __always_inline int
-arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+#if defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg arch_atomic_try_cmpxchg
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
-	int ret;
+	bool ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_cmpxchg_relaxed(v, old, new);
+	ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
 }
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-#endif
-
-#endif /* arch_atomic_cmpxchg_relaxed */
-
-#ifndef arch_atomic_try_cmpxchg_relaxed
-#ifdef arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
-#endif /* arch_atomic_try_cmpxchg */
-
-#ifndef arch_atomic_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
 	int r, o = *old;
-	r = arch_atomic_cmpxchg(v, o, new);
+	r = raw_atomic_cmpxchg(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
 }
-#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
 #endif
-#ifndef arch_atomic_try_cmpxchg_acquire
+#if defined(arch_atomic_try_cmpxchg_acquire)
+#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+	bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
+	__atomic_acquire_fence();
+	return ret;
+}
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
+#else
 static __always_inline bool
-arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
new) { int r, o = *old; - r = arch_atomic_cmpxchg_acquire(v, o, new); + r = raw_atomic_cmpxchg_acquire(v, o, new); if (unlikely(r != o)) *old = r; return likely(r == o); } -#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire #endif -#ifndef arch_atomic_try_cmpxchg_release +#if defined(arch_atomic_try_cmpxchg_release) +#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release +#elif defined(arch_atomic_try_cmpxchg_relaxed) static __always_inline bool -arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + __atomic_release_fence(); + return arch_atomic_try_cmpxchg_relaxed(v, old, new); +} +#elif defined(arch_atomic_try_cmpxchg) +#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg +#else +static __always_inline bool +raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { int r, o = *old; - r = arch_atomic_cmpxchg_release(v, o, new); + r = raw_atomic_cmpxchg_release(v, o, new); if (unlikely(r != o)) *old = r; return likely(r == o); } -#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release #endif -#ifndef arch_atomic_try_cmpxchg_relaxed +#if defined(arch_atomic_try_cmpxchg_relaxed) +#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed +#elif defined(arch_atomic_try_cmpxchg) +#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg +#else static __always_inline bool -arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { int r, o = *old; - r = arch_atomic_cmpxchg_relaxed(v, o, new); + r = raw_atomic_cmpxchg_relaxed(v, o, new); if (unlikely(r != o)) *old = r; return likely(r == o); } -#define arch_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed -#endif - -#else /* arch_atomic_try_cmpxchg_relaxed */ - -#ifndef arch_atomic_try_cmpxchg_acquire -static __always_inline bool -arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) -{ - bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new); - __atomic_acquire_fence(); - return ret; -} -#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire -#endif - -#ifndef arch_atomic_try_cmpxchg_release -static __always_inline bool -arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) -{ - __atomic_release_fence(); - return arch_atomic_try_cmpxchg_relaxed(v, old, new); -} -#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release -#endif - -#ifndef arch_atomic_try_cmpxchg -static __always_inline bool -arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new) -{ - bool ret; - __atomic_pre_full_fence(); - ret = arch_atomic_try_cmpxchg_relaxed(v, old, new); - __atomic_post_full_fence(); - return ret; -} -#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg #endif -#endif /* arch_atomic_try_cmpxchg_relaxed */ - -#ifndef arch_atomic_sub_and_test +#if defined(arch_atomic_sub_and_test) +#define raw_atomic_sub_and_test arch_atomic_sub_and_test +#else static __always_inline bool -arch_atomic_sub_and_test(int i, atomic_t *v) +raw_atomic_sub_and_test(int i, atomic_t *v) { - return arch_atomic_sub_return(i, v) == 0; + return raw_atomic_sub_return(i, v) == 0; } -#define arch_atomic_sub_and_test arch_atomic_sub_and_test #endif -#ifndef arch_atomic_dec_and_test +#if defined(arch_atomic_dec_and_test) +#define raw_atomic_dec_and_test arch_atomic_dec_and_test +#else static __always_inline bool -arch_atomic_dec_and_test(atomic_t *v) +raw_atomic_dec_and_test(atomic_t *v) { - 
return arch_atomic_dec_return(v) == 0; + return raw_atomic_dec_return(v) == 0; } -#define arch_atomic_dec_and_test arch_atomic_dec_and_test #endif -#ifndef arch_atomic_inc_and_test +#if defined(arch_atomic_inc_and_test) +#define raw_atomic_inc_and_test arch_atomic_inc_and_test +#else static __always_inline bool -arch_atomic_inc_and_test(atomic_t *v) +raw_atomic_inc_and_test(atomic_t *v) { - return arch_atomic_inc_return(v) == 0; + return raw_atomic_inc_return(v) == 0; } -#define arch_atomic_inc_and_test arch_atomic_inc_and_test #endif -#ifndef arch_atomic_add_negative_relaxed -#ifdef arch_atomic_add_negative -#define arch_atomic_add_negative_acquire arch_atomic_add_negative -#define arch_atomic_add_negative_release arch_atomic_add_negative -#define arch_atomic_add_negative_relaxed arch_atomic_add_negative -#endif /* arch_atomic_add_negative */ - -#ifndef arch_atomic_add_negative +#if defined(arch_atomic_add_negative) +#define raw_atomic_add_negative arch_atomic_add_negative +#elif defined(arch_atomic_add_negative_relaxed) static __always_inline bool -arch_atomic_add_negative(int i, atomic_t *v) +raw_atomic_add_negative(int i, atomic_t *v) { - return arch_atomic_add_return(i, v) < 0; + bool ret; + __atomic_pre_full_fence(); + ret = arch_atomic_add_negative_relaxed(i, v); + __atomic_post_full_fence(); + return ret; } -#define arch_atomic_add_negative arch_atomic_add_negative -#endif - -#ifndef arch_atomic_add_negative_acquire +#else static __always_inline bool -arch_atomic_add_negative_acquire(int i, atomic_t *v) +raw_atomic_add_negative(int i, atomic_t *v) { - return arch_atomic_add_return_acquire(i, v) < 0; + return raw_atomic_add_return(i, v) < 0; } -#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire #endif -#ifndef arch_atomic_add_negative_release +#if defined(arch_atomic_add_negative_acquire) +#define raw_atomic_add_negative_acquire arch_atomic_add_negative_acquire +#elif defined(arch_atomic_add_negative_relaxed) static __always_inline bool -arch_atomic_add_negative_release(int i, atomic_t *v) +raw_atomic_add_negative_acquire(int i, atomic_t *v) { - return arch_atomic_add_return_release(i, v) < 0; + bool ret = arch_atomic_add_negative_relaxed(i, v); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic_add_negative_release arch_atomic_add_negative_release -#endif - -#ifndef arch_atomic_add_negative_relaxed +#elif defined(arch_atomic_add_negative) +#define raw_atomic_add_negative_acquire arch_atomic_add_negative +#else static __always_inline bool -arch_atomic_add_negative_relaxed(int i, atomic_t *v) +raw_atomic_add_negative_acquire(int i, atomic_t *v) { - return arch_atomic_add_return_relaxed(i, v) < 0; + return raw_atomic_add_return_acquire(i, v) < 0; } -#define arch_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed #endif -#else /* arch_atomic_add_negative_relaxed */ - -#ifndef arch_atomic_add_negative_acquire +#if defined(arch_atomic_add_negative_release) +#define raw_atomic_add_negative_release arch_atomic_add_negative_release +#elif defined(arch_atomic_add_negative_relaxed) static __always_inline bool -arch_atomic_add_negative_acquire(int i, atomic_t *v) +raw_atomic_add_negative_release(int i, atomic_t *v) { - bool ret = arch_atomic_add_negative_relaxed(i, v); - __atomic_acquire_fence(); - return ret; + __atomic_release_fence(); + return arch_atomic_add_negative_relaxed(i, v); } -#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire -#endif - -#ifndef arch_atomic_add_negative_release +#elif 
defined(arch_atomic_add_negative) +#define raw_atomic_add_negative_release arch_atomic_add_negative +#else static __always_inline bool -arch_atomic_add_negative_release(int i, atomic_t *v) +raw_atomic_add_negative_release(int i, atomic_t *v) { - __atomic_release_fence(); - return arch_atomic_add_negative_relaxed(i, v); + return raw_atomic_add_return_release(i, v) < 0; } -#define arch_atomic_add_negative_release arch_atomic_add_negative_release #endif -#ifndef arch_atomic_add_negative +#if defined(arch_atomic_add_negative_relaxed) +#define raw_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed +#elif defined(arch_atomic_add_negative) +#define raw_atomic_add_negative_relaxed arch_atomic_add_negative +#else static __always_inline bool -arch_atomic_add_negative(int i, atomic_t *v) +raw_atomic_add_negative_relaxed(int i, atomic_t *v) { - bool ret; - __atomic_pre_full_fence(); - ret = arch_atomic_add_negative_relaxed(i, v); - __atomic_post_full_fence(); - return ret; + return raw_atomic_add_return_relaxed(i, v) < 0; } -#define arch_atomic_add_negative arch_atomic_add_negative #endif -#endif /* arch_atomic_add_negative_relaxed */ - -#ifndef arch_atomic_fetch_add_unless +#if defined(arch_atomic_fetch_add_unless) +#define raw_atomic_fetch_add_unless arch_atomic_fetch_add_unless +#else static __always_inline int -arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) +raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) { - int c = arch_atomic_read(v); + int c = raw_atomic_read(v); do { if (unlikely(c == u)) break; - } while (!arch_atomic_try_cmpxchg(v, &c, c + a)); + } while (!raw_atomic_try_cmpxchg(v, &c, c + a)); return c; } -#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless #endif -#ifndef arch_atomic_add_unless +#if defined(arch_atomic_add_unless) +#define raw_atomic_add_unless arch_atomic_add_unless +#else static __always_inline bool -arch_atomic_add_unless(atomic_t *v, int a, int u) +raw_atomic_add_unless(atomic_t *v, int a, int u) { - return arch_atomic_fetch_add_unless(v, a, u) != u; + return raw_atomic_fetch_add_unless(v, a, u) != u; } -#define arch_atomic_add_unless arch_atomic_add_unless #endif -#ifndef arch_atomic_inc_not_zero +#if defined(arch_atomic_inc_not_zero) +#define raw_atomic_inc_not_zero arch_atomic_inc_not_zero +#else static __always_inline bool -arch_atomic_inc_not_zero(atomic_t *v) +raw_atomic_inc_not_zero(atomic_t *v) { - return arch_atomic_add_unless(v, 1, 0); + return raw_atomic_add_unless(v, 1, 0); } -#define arch_atomic_inc_not_zero arch_atomic_inc_not_zero #endif -#ifndef arch_atomic_inc_unless_negative +#if defined(arch_atomic_inc_unless_negative) +#define raw_atomic_inc_unless_negative arch_atomic_inc_unless_negative +#else static __always_inline bool -arch_atomic_inc_unless_negative(atomic_t *v) +raw_atomic_inc_unless_negative(atomic_t *v) { - int c = arch_atomic_read(v); + int c = raw_atomic_read(v); do { if (unlikely(c < 0)) return false; - } while (!arch_atomic_try_cmpxchg(v, &c, c + 1)); + } while (!raw_atomic_try_cmpxchg(v, &c, c + 1)); return true; } -#define arch_atomic_inc_unless_negative arch_atomic_inc_unless_negative #endif -#ifndef arch_atomic_dec_unless_positive +#if defined(arch_atomic_dec_unless_positive) +#define raw_atomic_dec_unless_positive arch_atomic_dec_unless_positive +#else static __always_inline bool -arch_atomic_dec_unless_positive(atomic_t *v) +raw_atomic_dec_unless_positive(atomic_t *v) { - int c = arch_atomic_read(v); + int c = raw_atomic_read(v); do { if (unlikely(c > 0)) return false; - } while 
(!arch_atomic_try_cmpxchg(v, &c, c - 1)); + } while (!raw_atomic_try_cmpxchg(v, &c, c - 1)); return true; } -#define arch_atomic_dec_unless_positive arch_atomic_dec_unless_positive #endif -#ifndef arch_atomic_dec_if_positive +#if defined(arch_atomic_dec_if_positive) +#define raw_atomic_dec_if_positive arch_atomic_dec_if_positive +#else static __always_inline int -arch_atomic_dec_if_positive(atomic_t *v) +raw_atomic_dec_if_positive(atomic_t *v) { - int dec, c = arch_atomic_read(v); + int dec, c = raw_atomic_read(v); do { dec = c - 1; if (unlikely(dec < 0)) break; - } while (!arch_atomic_try_cmpxchg(v, &c, dec)); + } while (!raw_atomic_try_cmpxchg(v, &c, dec)); return dec; } -#define arch_atomic_dec_if_positive arch_atomic_dec_if_positive #endif #ifdef CONFIG_GENERIC_ATOMIC64 #include #endif -#ifndef arch_atomic64_read_acquire +#define raw_atomic64_read arch_atomic64_read + +#if defined(arch_atomic64_read_acquire) +#define raw_atomic64_read_acquire arch_atomic64_read_acquire +#elif defined(arch_atomic64_read) +#define raw_atomic64_read_acquire arch_atomic64_read +#else static __always_inline s64 -arch_atomic64_read_acquire(const atomic64_t *v) +raw_atomic64_read_acquire(const atomic64_t *v) { s64 ret; if (__native_word(atomic64_t)) { ret = smp_load_acquire(&(v)->counter); } else { - ret = arch_atomic64_read(v); + ret = raw_atomic64_read(v); __atomic_acquire_fence(); } return ret; } -#define arch_atomic64_read_acquire arch_atomic64_read_acquire #endif -#ifndef arch_atomic64_set_release +#define raw_atomic64_set arch_atomic64_set + +#if defined(arch_atomic64_set_release) +#define raw_atomic64_set_release arch_atomic64_set_release +#elif defined(arch_atomic64_set) +#define raw_atomic64_set_release arch_atomic64_set +#else static __always_inline void -arch_atomic64_set_release(atomic64_t *v, s64 i) +raw_atomic64_set_release(atomic64_t *v, s64 i) { if (__native_word(atomic64_t)) { smp_store_release(&(v)->counter, i); } else { __atomic_release_fence(); - arch_atomic64_set(v, i); + raw_atomic64_set(v, i); } } -#define arch_atomic64_set_release arch_atomic64_set_release #endif -#ifndef arch_atomic64_add_return_relaxed -#define arch_atomic64_add_return_acquire arch_atomic64_add_return -#define arch_atomic64_add_return_release arch_atomic64_add_return -#define arch_atomic64_add_return_relaxed arch_atomic64_add_return -#else /* arch_atomic64_add_return_relaxed */ +#define raw_atomic64_add arch_atomic64_add + +#if defined(arch_atomic64_add_return) +#define raw_atomic64_add_return arch_atomic64_add_return +#elif defined(arch_atomic64_add_return_relaxed) +static __always_inline s64 +raw_atomic64_add_return(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_pre_full_fence(); + ret = arch_atomic64_add_return_relaxed(i, v); + __atomic_post_full_fence(); + return ret; +} +#else +#error "Unable to define raw_atomic64_add_return" +#endif -#ifndef arch_atomic64_add_return_acquire +#if defined(arch_atomic64_add_return_acquire) +#define raw_atomic64_add_return_acquire arch_atomic64_add_return_acquire +#elif defined(arch_atomic64_add_return_relaxed) static __always_inline s64 -arch_atomic64_add_return_acquire(s64 i, atomic64_t *v) +raw_atomic64_add_return_acquire(s64 i, atomic64_t *v) { s64 ret = arch_atomic64_add_return_relaxed(i, v); __atomic_acquire_fence(); return ret; } -#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire +#elif defined(arch_atomic64_add_return) +#define raw_atomic64_add_return_acquire arch_atomic64_add_return +#else +#error "Unable to define raw_atomic64_add_return_acquire" 
#endif -#ifndef arch_atomic64_add_return_release +#if defined(arch_atomic64_add_return_release) +#define raw_atomic64_add_return_release arch_atomic64_add_return_release +#elif defined(arch_atomic64_add_return_relaxed) static __always_inline s64 -arch_atomic64_add_return_release(s64 i, atomic64_t *v) +raw_atomic64_add_return_release(s64 i, atomic64_t *v) { __atomic_release_fence(); return arch_atomic64_add_return_relaxed(i, v); } -#define arch_atomic64_add_return_release arch_atomic64_add_return_release +#elif defined(arch_atomic64_add_return) +#define raw_atomic64_add_return_release arch_atomic64_add_return +#else +#error "Unable to define raw_atomic64_add_return_release" +#endif + +#if defined(arch_atomic64_add_return_relaxed) +#define raw_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed +#elif defined(arch_atomic64_add_return) +#define raw_atomic64_add_return_relaxed arch_atomic64_add_return +#else +#error "Unable to define raw_atomic64_add_return_relaxed" #endif -#ifndef arch_atomic64_add_return +#if defined(arch_atomic64_fetch_add) +#define raw_atomic64_fetch_add arch_atomic64_fetch_add +#elif defined(arch_atomic64_fetch_add_relaxed) static __always_inline s64 -arch_atomic64_add_return(s64 i, atomic64_t *v) +raw_atomic64_fetch_add(s64 i, atomic64_t *v) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_add_return_relaxed(i, v); + ret = arch_atomic64_fetch_add_relaxed(i, v); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_add_return arch_atomic64_add_return +#else +#error "Unable to define raw_atomic64_fetch_add" #endif -#endif /* arch_atomic64_add_return_relaxed */ - -#ifndef arch_atomic64_fetch_add_relaxed -#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add -#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add -#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add -#else /* arch_atomic64_fetch_add_relaxed */ - -#ifndef arch_atomic64_fetch_add_acquire +#if defined(arch_atomic64_fetch_add_acquire) +#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire +#elif defined(arch_atomic64_fetch_add_relaxed) static __always_inline s64 -arch_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { s64 ret = arch_atomic64_fetch_add_relaxed(i, v); __atomic_acquire_fence(); return ret; } -#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire +#elif defined(arch_atomic64_fetch_add) +#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add +#else +#error "Unable to define raw_atomic64_fetch_add_acquire" #endif -#ifndef arch_atomic64_fetch_add_release +#if defined(arch_atomic64_fetch_add_release) +#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add_release +#elif defined(arch_atomic64_fetch_add_relaxed) static __always_inline s64 -arch_atomic64_fetch_add_release(s64 i, atomic64_t *v) +raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) { __atomic_release_fence(); return arch_atomic64_fetch_add_relaxed(i, v); } -#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release +#elif defined(arch_atomic64_fetch_add) +#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add +#else +#error "Unable to define raw_atomic64_fetch_add_release" +#endif + +#if defined(arch_atomic64_fetch_add_relaxed) +#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed +#elif defined(arch_atomic64_fetch_add) +#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add +#else +#error "Unable to define 
raw_atomic64_fetch_add_relaxed" #endif -#ifndef arch_atomic64_fetch_add +#define raw_atomic64_sub arch_atomic64_sub + +#if defined(arch_atomic64_sub_return) +#define raw_atomic64_sub_return arch_atomic64_sub_return +#elif defined(arch_atomic64_sub_return_relaxed) static __always_inline s64 -arch_atomic64_fetch_add(s64 i, atomic64_t *v) +raw_atomic64_sub_return(s64 i, atomic64_t *v) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_fetch_add_relaxed(i, v); + ret = arch_atomic64_sub_return_relaxed(i, v); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_fetch_add arch_atomic64_fetch_add +#else +#error "Unable to define raw_atomic64_sub_return" #endif -#endif /* arch_atomic64_fetch_add_relaxed */ - -#ifndef arch_atomic64_sub_return_relaxed -#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return -#define arch_atomic64_sub_return_release arch_atomic64_sub_return -#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return -#else /* arch_atomic64_sub_return_relaxed */ - -#ifndef arch_atomic64_sub_return_acquire +#if defined(arch_atomic64_sub_return_acquire) +#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire +#elif defined(arch_atomic64_sub_return_relaxed) static __always_inline s64 -arch_atomic64_sub_return_acquire(s64 i, atomic64_t *v) +raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) { s64 ret = arch_atomic64_sub_return_relaxed(i, v); __atomic_acquire_fence(); return ret; } -#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire +#elif defined(arch_atomic64_sub_return) +#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return +#else +#error "Unable to define raw_atomic64_sub_return_acquire" #endif -#ifndef arch_atomic64_sub_return_release +#if defined(arch_atomic64_sub_return_release) +#define raw_atomic64_sub_return_release arch_atomic64_sub_return_release +#elif defined(arch_atomic64_sub_return_relaxed) static __always_inline s64 -arch_atomic64_sub_return_release(s64 i, atomic64_t *v) +raw_atomic64_sub_return_release(s64 i, atomic64_t *v) { __atomic_release_fence(); return arch_atomic64_sub_return_relaxed(i, v); } -#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release +#elif defined(arch_atomic64_sub_return) +#define raw_atomic64_sub_return_release arch_atomic64_sub_return +#else +#error "Unable to define raw_atomic64_sub_return_release" +#endif + +#if defined(arch_atomic64_sub_return_relaxed) +#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed +#elif defined(arch_atomic64_sub_return) +#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return +#else +#error "Unable to define raw_atomic64_sub_return_relaxed" #endif -#ifndef arch_atomic64_sub_return +#if defined(arch_atomic64_fetch_sub) +#define raw_atomic64_fetch_sub arch_atomic64_fetch_sub +#elif defined(arch_atomic64_fetch_sub_relaxed) static __always_inline s64 -arch_atomic64_sub_return(s64 i, atomic64_t *v) +raw_atomic64_fetch_sub(s64 i, atomic64_t *v) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_sub_return_relaxed(i, v); + ret = arch_atomic64_fetch_sub_relaxed(i, v); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_sub_return arch_atomic64_sub_return +#else +#error "Unable to define raw_atomic64_fetch_sub" #endif -#endif /* arch_atomic64_sub_return_relaxed */ - -#ifndef arch_atomic64_fetch_sub_relaxed -#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub -#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub -#define 
arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub -#else /* arch_atomic64_fetch_sub_relaxed */ - -#ifndef arch_atomic64_fetch_sub_acquire +#if defined(arch_atomic64_fetch_sub_acquire) +#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire +#elif defined(arch_atomic64_fetch_sub_relaxed) static __always_inline s64 -arch_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { s64 ret = arch_atomic64_fetch_sub_relaxed(i, v); __atomic_acquire_fence(); return ret; } -#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire +#elif defined(arch_atomic64_fetch_sub) +#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub +#else +#error "Unable to define raw_atomic64_fetch_sub_acquire" #endif -#ifndef arch_atomic64_fetch_sub_release +#if defined(arch_atomic64_fetch_sub_release) +#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release +#elif defined(arch_atomic64_fetch_sub_relaxed) static __always_inline s64 -arch_atomic64_fetch_sub_release(s64 i, atomic64_t *v) +raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) { __atomic_release_fence(); return arch_atomic64_fetch_sub_relaxed(i, v); } -#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release +#elif defined(arch_atomic64_fetch_sub) +#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub +#else +#error "Unable to define raw_atomic64_fetch_sub_release" #endif -#ifndef arch_atomic64_fetch_sub -static __always_inline s64 -arch_atomic64_fetch_sub(s64 i, atomic64_t *v) -{ - s64 ret; - __atomic_pre_full_fence(); - ret = arch_atomic64_fetch_sub_relaxed(i, v); - __atomic_post_full_fence(); - return ret; -} -#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub +#if defined(arch_atomic64_fetch_sub_relaxed) +#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed +#elif defined(arch_atomic64_fetch_sub) +#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub +#else +#error "Unable to define raw_atomic64_fetch_sub_relaxed" #endif -#endif /* arch_atomic64_fetch_sub_relaxed */ - -#ifndef arch_atomic64_inc +#if defined(arch_atomic64_inc) +#define raw_atomic64_inc arch_atomic64_inc +#else static __always_inline void -arch_atomic64_inc(atomic64_t *v) +raw_atomic64_inc(atomic64_t *v) { - arch_atomic64_add(1, v); + raw_atomic64_add(1, v); } -#define arch_atomic64_inc arch_atomic64_inc #endif -#ifndef arch_atomic64_inc_return_relaxed -#ifdef arch_atomic64_inc_return -#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return -#define arch_atomic64_inc_return_release arch_atomic64_inc_return -#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return -#endif /* arch_atomic64_inc_return */ - -#ifndef arch_atomic64_inc_return +#if defined(arch_atomic64_inc_return) +#define raw_atomic64_inc_return arch_atomic64_inc_return +#elif defined(arch_atomic64_inc_return_relaxed) static __always_inline s64 -arch_atomic64_inc_return(atomic64_t *v) +raw_atomic64_inc_return(atomic64_t *v) { - return arch_atomic64_add_return(1, v); + s64 ret; + __atomic_pre_full_fence(); + ret = arch_atomic64_inc_return_relaxed(v); + __atomic_post_full_fence(); + return ret; } -#define arch_atomic64_inc_return arch_atomic64_inc_return -#endif - -#ifndef arch_atomic64_inc_return_acquire +#else static __always_inline s64 -arch_atomic64_inc_return_acquire(atomic64_t *v) +raw_atomic64_inc_return(atomic64_t *v) { - return arch_atomic64_add_return_acquire(1, v); + return raw_atomic64_add_return(1, v); } -#define 
arch_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire #endif -#ifndef arch_atomic64_inc_return_release +#if defined(arch_atomic64_inc_return_acquire) +#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire +#elif defined(arch_atomic64_inc_return_relaxed) static __always_inline s64 -arch_atomic64_inc_return_release(atomic64_t *v) +raw_atomic64_inc_return_acquire(atomic64_t *v) { - return arch_atomic64_add_return_release(1, v); + s64 ret = arch_atomic64_inc_return_relaxed(v); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic64_inc_return_release arch_atomic64_inc_return_release -#endif - -#ifndef arch_atomic64_inc_return_relaxed +#elif defined(arch_atomic64_inc_return) +#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return +#else static __always_inline s64 -arch_atomic64_inc_return_relaxed(atomic64_t *v) +raw_atomic64_inc_return_acquire(atomic64_t *v) { - return arch_atomic64_add_return_relaxed(1, v); + return raw_atomic64_add_return_acquire(1, v); } -#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed #endif -#else /* arch_atomic64_inc_return_relaxed */ - -#ifndef arch_atomic64_inc_return_acquire +#if defined(arch_atomic64_inc_return_release) +#define raw_atomic64_inc_return_release arch_atomic64_inc_return_release +#elif defined(arch_atomic64_inc_return_relaxed) static __always_inline s64 -arch_atomic64_inc_return_acquire(atomic64_t *v) +raw_atomic64_inc_return_release(atomic64_t *v) { - s64 ret = arch_atomic64_inc_return_relaxed(v); - __atomic_acquire_fence(); - return ret; + __atomic_release_fence(); + return arch_atomic64_inc_return_relaxed(v); +} +#elif defined(arch_atomic64_inc_return) +#define raw_atomic64_inc_return_release arch_atomic64_inc_return +#else +static __always_inline s64 +raw_atomic64_inc_return_release(atomic64_t *v) +{ + return raw_atomic64_add_return_release(1, v); } -#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire #endif -#ifndef arch_atomic64_inc_return_release +#if defined(arch_atomic64_inc_return_relaxed) +#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed +#elif defined(arch_atomic64_inc_return) +#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return +#else static __always_inline s64 -arch_atomic64_inc_return_release(atomic64_t *v) +raw_atomic64_inc_return_relaxed(atomic64_t *v) { - __atomic_release_fence(); - return arch_atomic64_inc_return_relaxed(v); + return raw_atomic64_add_return_relaxed(1, v); } -#define arch_atomic64_inc_return_release arch_atomic64_inc_return_release #endif -#ifndef arch_atomic64_inc_return +#if defined(arch_atomic64_fetch_inc) +#define raw_atomic64_fetch_inc arch_atomic64_fetch_inc +#elif defined(arch_atomic64_fetch_inc_relaxed) static __always_inline s64 -arch_atomic64_inc_return(atomic64_t *v) +raw_atomic64_fetch_inc(atomic64_t *v) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_inc_return_relaxed(v); + ret = arch_atomic64_fetch_inc_relaxed(v); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_inc_return arch_atomic64_inc_return -#endif - -#endif /* arch_atomic64_inc_return_relaxed */ - -#ifndef arch_atomic64_fetch_inc_relaxed -#ifdef arch_atomic64_fetch_inc -#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc -#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc -#define arch_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc -#endif /* arch_atomic64_fetch_inc */ - -#ifndef arch_atomic64_fetch_inc +#else static __always_inline s64 
-arch_atomic64_fetch_inc(atomic64_t *v) +raw_atomic64_fetch_inc(atomic64_t *v) { - return arch_atomic64_fetch_add(1, v); + return raw_atomic64_fetch_add(1, v); } -#define arch_atomic64_fetch_inc arch_atomic64_fetch_inc #endif -#ifndef arch_atomic64_fetch_inc_acquire +#if defined(arch_atomic64_fetch_inc_acquire) +#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire +#elif defined(arch_atomic64_fetch_inc_relaxed) +static __always_inline s64 +raw_atomic64_fetch_inc_acquire(atomic64_t *v) +{ + s64 ret = arch_atomic64_fetch_inc_relaxed(v); + __atomic_acquire_fence(); + return ret; +} +#elif defined(arch_atomic64_fetch_inc) +#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc +#else static __always_inline s64 -arch_atomic64_fetch_inc_acquire(atomic64_t *v) +raw_atomic64_fetch_inc_acquire(atomic64_t *v) { - return arch_atomic64_fetch_add_acquire(1, v); + return raw_atomic64_fetch_add_acquire(1, v); } -#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire #endif -#ifndef arch_atomic64_fetch_inc_release +#if defined(arch_atomic64_fetch_inc_release) +#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release +#elif defined(arch_atomic64_fetch_inc_relaxed) static __always_inline s64 -arch_atomic64_fetch_inc_release(atomic64_t *v) +raw_atomic64_fetch_inc_release(atomic64_t *v) { - return arch_atomic64_fetch_add_release(1, v); + __atomic_release_fence(); + return arch_atomic64_fetch_inc_relaxed(v); } -#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release -#endif - -#ifndef arch_atomic64_fetch_inc_relaxed +#elif defined(arch_atomic64_fetch_inc) +#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc +#else static __always_inline s64 -arch_atomic64_fetch_inc_relaxed(atomic64_t *v) +raw_atomic64_fetch_inc_release(atomic64_t *v) { - return arch_atomic64_fetch_add_relaxed(1, v); + return raw_atomic64_fetch_add_release(1, v); } -#define arch_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed #endif -#else /* arch_atomic64_fetch_inc_relaxed */ - -#ifndef arch_atomic64_fetch_inc_acquire +#if defined(arch_atomic64_fetch_inc_relaxed) +#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed +#elif defined(arch_atomic64_fetch_inc) +#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc +#else static __always_inline s64 -arch_atomic64_fetch_inc_acquire(atomic64_t *v) +raw_atomic64_fetch_inc_relaxed(atomic64_t *v) { - s64 ret = arch_atomic64_fetch_inc_relaxed(v); - __atomic_acquire_fence(); - return ret; + return raw_atomic64_fetch_add_relaxed(1, v); } -#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire #endif -#ifndef arch_atomic64_fetch_inc_release -static __always_inline s64 -arch_atomic64_fetch_inc_release(atomic64_t *v) +#if defined(arch_atomic64_dec) +#define raw_atomic64_dec arch_atomic64_dec +#else +static __always_inline void +raw_atomic64_dec(atomic64_t *v) { - __atomic_release_fence(); - return arch_atomic64_fetch_inc_relaxed(v); + raw_atomic64_sub(1, v); } -#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release #endif -#ifndef arch_atomic64_fetch_inc +#if defined(arch_atomic64_dec_return) +#define raw_atomic64_dec_return arch_atomic64_dec_return +#elif defined(arch_atomic64_dec_return_relaxed) static __always_inline s64 -arch_atomic64_fetch_inc(atomic64_t *v) +raw_atomic64_dec_return(atomic64_t *v) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_fetch_inc_relaxed(v); + ret = arch_atomic64_dec_return_relaxed(v); __atomic_post_full_fence(); 
return ret; } -#define arch_atomic64_fetch_inc arch_atomic64_fetch_inc -#endif - -#endif /* arch_atomic64_fetch_inc_relaxed */ - -#ifndef arch_atomic64_dec -static __always_inline void -arch_atomic64_dec(atomic64_t *v) -{ - arch_atomic64_sub(1, v); -} -#define arch_atomic64_dec arch_atomic64_dec -#endif - -#ifndef arch_atomic64_dec_return_relaxed -#ifdef arch_atomic64_dec_return -#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return -#define arch_atomic64_dec_return_release arch_atomic64_dec_return -#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return -#endif /* arch_atomic64_dec_return */ - -#ifndef arch_atomic64_dec_return +#else static __always_inline s64 -arch_atomic64_dec_return(atomic64_t *v) +raw_atomic64_dec_return(atomic64_t *v) { - return arch_atomic64_sub_return(1, v); + return raw_atomic64_sub_return(1, v); } -#define arch_atomic64_dec_return arch_atomic64_dec_return #endif -#ifndef arch_atomic64_dec_return_acquire +#if defined(arch_atomic64_dec_return_acquire) +#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire +#elif defined(arch_atomic64_dec_return_relaxed) static __always_inline s64 -arch_atomic64_dec_return_acquire(atomic64_t *v) +raw_atomic64_dec_return_acquire(atomic64_t *v) { - return arch_atomic64_sub_return_acquire(1, v); + s64 ret = arch_atomic64_dec_return_relaxed(v); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire -#endif - -#ifndef arch_atomic64_dec_return_release +#elif defined(arch_atomic64_dec_return) +#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return +#else static __always_inline s64 -arch_atomic64_dec_return_release(atomic64_t *v) +raw_atomic64_dec_return_acquire(atomic64_t *v) { - return arch_atomic64_sub_return_release(1, v); + return raw_atomic64_sub_return_acquire(1, v); } -#define arch_atomic64_dec_return_release arch_atomic64_dec_return_release #endif -#ifndef arch_atomic64_dec_return_relaxed +#if defined(arch_atomic64_dec_return_release) +#define raw_atomic64_dec_return_release arch_atomic64_dec_return_release +#elif defined(arch_atomic64_dec_return_relaxed) static __always_inline s64 -arch_atomic64_dec_return_relaxed(atomic64_t *v) +raw_atomic64_dec_return_release(atomic64_t *v) { - return arch_atomic64_sub_return_relaxed(1, v); + __atomic_release_fence(); + return arch_atomic64_dec_return_relaxed(v); } -#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed -#endif - -#else /* arch_atomic64_dec_return_relaxed */ - -#ifndef arch_atomic64_dec_return_acquire +#elif defined(arch_atomic64_dec_return) +#define raw_atomic64_dec_return_release arch_atomic64_dec_return +#else static __always_inline s64 -arch_atomic64_dec_return_acquire(atomic64_t *v) +raw_atomic64_dec_return_release(atomic64_t *v) { - s64 ret = arch_atomic64_dec_return_relaxed(v); - __atomic_acquire_fence(); - return ret; + return raw_atomic64_sub_return_release(1, v); } -#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire #endif -#ifndef arch_atomic64_dec_return_release +#if defined(arch_atomic64_dec_return_relaxed) +#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed +#elif defined(arch_atomic64_dec_return) +#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return +#else static __always_inline s64 -arch_atomic64_dec_return_release(atomic64_t *v) +raw_atomic64_dec_return_relaxed(atomic64_t *v) { - __atomic_release_fence(); - return arch_atomic64_dec_return_relaxed(v); + return 
raw_atomic64_sub_return_relaxed(1, v); } -#define arch_atomic64_dec_return_release arch_atomic64_dec_return_release #endif -#ifndef arch_atomic64_dec_return +#if defined(arch_atomic64_fetch_dec) +#define raw_atomic64_fetch_dec arch_atomic64_fetch_dec +#elif defined(arch_atomic64_fetch_dec_relaxed) static __always_inline s64 -arch_atomic64_dec_return(atomic64_t *v) +raw_atomic64_fetch_dec(atomic64_t *v) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_dec_return_relaxed(v); + ret = arch_atomic64_fetch_dec_relaxed(v); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_dec_return arch_atomic64_dec_return -#endif - -#endif /* arch_atomic64_dec_return_relaxed */ - -#ifndef arch_atomic64_fetch_dec_relaxed -#ifdef arch_atomic64_fetch_dec -#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec -#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec -#define arch_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec -#endif /* arch_atomic64_fetch_dec */ - -#ifndef arch_atomic64_fetch_dec +#else static __always_inline s64 -arch_atomic64_fetch_dec(atomic64_t *v) +raw_atomic64_fetch_dec(atomic64_t *v) { - return arch_atomic64_fetch_sub(1, v); + return raw_atomic64_fetch_sub(1, v); } -#define arch_atomic64_fetch_dec arch_atomic64_fetch_dec #endif -#ifndef arch_atomic64_fetch_dec_acquire +#if defined(arch_atomic64_fetch_dec_acquire) +#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire +#elif defined(arch_atomic64_fetch_dec_relaxed) static __always_inline s64 -arch_atomic64_fetch_dec_acquire(atomic64_t *v) +raw_atomic64_fetch_dec_acquire(atomic64_t *v) { - return arch_atomic64_fetch_sub_acquire(1, v); + s64 ret = arch_atomic64_fetch_dec_relaxed(v); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire -#endif - -#ifndef arch_atomic64_fetch_dec_release +#elif defined(arch_atomic64_fetch_dec) +#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec +#else static __always_inline s64 -arch_atomic64_fetch_dec_release(atomic64_t *v) +raw_atomic64_fetch_dec_acquire(atomic64_t *v) { - return arch_atomic64_fetch_sub_release(1, v); + return raw_atomic64_fetch_sub_acquire(1, v); } -#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release #endif -#ifndef arch_atomic64_fetch_dec_relaxed +#if defined(arch_atomic64_fetch_dec_release) +#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release +#elif defined(arch_atomic64_fetch_dec_relaxed) static __always_inline s64 -arch_atomic64_fetch_dec_relaxed(atomic64_t *v) +raw_atomic64_fetch_dec_release(atomic64_t *v) { - return arch_atomic64_fetch_sub_relaxed(1, v); + __atomic_release_fence(); + return arch_atomic64_fetch_dec_relaxed(v); } -#define arch_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed -#endif - -#else /* arch_atomic64_fetch_dec_relaxed */ - -#ifndef arch_atomic64_fetch_dec_acquire +#elif defined(arch_atomic64_fetch_dec) +#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec +#else static __always_inline s64 -arch_atomic64_fetch_dec_acquire(atomic64_t *v) +raw_atomic64_fetch_dec_release(atomic64_t *v) { - s64 ret = arch_atomic64_fetch_dec_relaxed(v); - __atomic_acquire_fence(); - return ret; + return raw_atomic64_fetch_sub_release(1, v); } -#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire #endif -#ifndef arch_atomic64_fetch_dec_release +#if defined(arch_atomic64_fetch_dec_relaxed) +#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed +#elif 
defined(arch_atomic64_fetch_dec) +#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec +#else static __always_inline s64 -arch_atomic64_fetch_dec_release(atomic64_t *v) +raw_atomic64_fetch_dec_relaxed(atomic64_t *v) { - __atomic_release_fence(); - return arch_atomic64_fetch_dec_relaxed(v); + return raw_atomic64_fetch_sub_relaxed(1, v); } -#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release #endif -#ifndef arch_atomic64_fetch_dec +#define raw_atomic64_and arch_atomic64_and + +#if defined(arch_atomic64_fetch_and) +#define raw_atomic64_fetch_and arch_atomic64_fetch_and +#elif defined(arch_atomic64_fetch_and_relaxed) static __always_inline s64 -arch_atomic64_fetch_dec(atomic64_t *v) +raw_atomic64_fetch_and(s64 i, atomic64_t *v) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_fetch_dec_relaxed(v); + ret = arch_atomic64_fetch_and_relaxed(i, v); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_fetch_dec arch_atomic64_fetch_dec +#else +#error "Unable to define raw_atomic64_fetch_and" #endif -#endif /* arch_atomic64_fetch_dec_relaxed */ - -#ifndef arch_atomic64_fetch_and_relaxed -#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and -#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and -#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and -#else /* arch_atomic64_fetch_and_relaxed */ - -#ifndef arch_atomic64_fetch_and_acquire +#if defined(arch_atomic64_fetch_and_acquire) +#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire +#elif defined(arch_atomic64_fetch_and_relaxed) static __always_inline s64 -arch_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { s64 ret = arch_atomic64_fetch_and_relaxed(i, v); __atomic_acquire_fence(); return ret; } -#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire +#elif defined(arch_atomic64_fetch_and) +#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and +#else +#error "Unable to define raw_atomic64_fetch_and_acquire" #endif -#ifndef arch_atomic64_fetch_and_release +#if defined(arch_atomic64_fetch_and_release) +#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and_release +#elif defined(arch_atomic64_fetch_and_relaxed) static __always_inline s64 -arch_atomic64_fetch_and_release(s64 i, atomic64_t *v) +raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) { __atomic_release_fence(); return arch_atomic64_fetch_and_relaxed(i, v); } -#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release +#elif defined(arch_atomic64_fetch_and) +#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and +#else +#error "Unable to define raw_atomic64_fetch_and_release" #endif -#ifndef arch_atomic64_fetch_and -static __always_inline s64 -arch_atomic64_fetch_and(s64 i, atomic64_t *v) -{ - s64 ret; - __atomic_pre_full_fence(); - ret = arch_atomic64_fetch_and_relaxed(i, v); - __atomic_post_full_fence(); - return ret; -} -#define arch_atomic64_fetch_and arch_atomic64_fetch_and +#if defined(arch_atomic64_fetch_and_relaxed) +#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed +#elif defined(arch_atomic64_fetch_and) +#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and +#else +#error "Unable to define raw_atomic64_fetch_and_relaxed" #endif -#endif /* arch_atomic64_fetch_and_relaxed */ - -#ifndef arch_atomic64_andnot +#if defined(arch_atomic64_andnot) +#define raw_atomic64_andnot arch_atomic64_andnot +#else static __always_inline void 
-arch_atomic64_andnot(s64 i, atomic64_t *v) +raw_atomic64_andnot(s64 i, atomic64_t *v) { - arch_atomic64_and(~i, v); + raw_atomic64_and(~i, v); } -#define arch_atomic64_andnot arch_atomic64_andnot #endif -#ifndef arch_atomic64_fetch_andnot_relaxed -#ifdef arch_atomic64_fetch_andnot -#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot -#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot -#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot -#endif /* arch_atomic64_fetch_andnot */ - -#ifndef arch_atomic64_fetch_andnot +#if defined(arch_atomic64_fetch_andnot) +#define raw_atomic64_fetch_andnot arch_atomic64_fetch_andnot +#elif defined(arch_atomic64_fetch_andnot_relaxed) static __always_inline s64 -arch_atomic64_fetch_andnot(s64 i, atomic64_t *v) +raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) { - return arch_atomic64_fetch_and(~i, v); + s64 ret; + __atomic_pre_full_fence(); + ret = arch_atomic64_fetch_andnot_relaxed(i, v); + __atomic_post_full_fence(); + return ret; } -#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot -#endif - -#ifndef arch_atomic64_fetch_andnot_acquire +#else static __always_inline s64 -arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) { - return arch_atomic64_fetch_and_acquire(~i, v); + return raw_atomic64_fetch_and(~i, v); } -#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire #endif -#ifndef arch_atomic64_fetch_andnot_release +#if defined(arch_atomic64_fetch_andnot_acquire) +#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire +#elif defined(arch_atomic64_fetch_andnot_relaxed) static __always_inline s64 -arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { - return arch_atomic64_fetch_and_release(~i, v); + s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release -#endif - -#ifndef arch_atomic64_fetch_andnot_relaxed +#elif defined(arch_atomic64_fetch_andnot) +#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot +#else static __always_inline s64 -arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { - return arch_atomic64_fetch_and_relaxed(~i, v); + return raw_atomic64_fetch_and_acquire(~i, v); } -#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed #endif -#else /* arch_atomic64_fetch_andnot_relaxed */ - -#ifndef arch_atomic64_fetch_andnot_acquire +#if defined(arch_atomic64_fetch_andnot_release) +#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release +#elif defined(arch_atomic64_fetch_andnot_relaxed) static __always_inline s64 -arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { - s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v); - __atomic_acquire_fence(); - return ret; + __atomic_release_fence(); + return arch_atomic64_fetch_andnot_relaxed(i, v); +} +#elif defined(arch_atomic64_fetch_andnot) +#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot +#else +static __always_inline s64 +raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + return raw_atomic64_fetch_and_release(~i, v); } -#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire #endif -#ifndef arch_atomic64_fetch_andnot_release +#if 
defined(arch_atomic64_fetch_andnot_relaxed) +#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed +#elif defined(arch_atomic64_fetch_andnot) +#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot +#else static __always_inline s64 -arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { - __atomic_release_fence(); - return arch_atomic64_fetch_andnot_relaxed(i, v); + return raw_atomic64_fetch_and_relaxed(~i, v); } -#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release #endif -#ifndef arch_atomic64_fetch_andnot +#define raw_atomic64_or arch_atomic64_or + +#if defined(arch_atomic64_fetch_or) +#define raw_atomic64_fetch_or arch_atomic64_fetch_or +#elif defined(arch_atomic64_fetch_or_relaxed) static __always_inline s64 -arch_atomic64_fetch_andnot(s64 i, atomic64_t *v) +raw_atomic64_fetch_or(s64 i, atomic64_t *v) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_fetch_andnot_relaxed(i, v); + ret = arch_atomic64_fetch_or_relaxed(i, v); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot +#else +#error "Unable to define raw_atomic64_fetch_or" #endif -#endif /* arch_atomic64_fetch_andnot_relaxed */ - -#ifndef arch_atomic64_fetch_or_relaxed -#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or -#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or -#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or -#else /* arch_atomic64_fetch_or_relaxed */ - -#ifndef arch_atomic64_fetch_or_acquire +#if defined(arch_atomic64_fetch_or_acquire) +#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire +#elif defined(arch_atomic64_fetch_or_relaxed) static __always_inline s64 -arch_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { s64 ret = arch_atomic64_fetch_or_relaxed(i, v); __atomic_acquire_fence(); return ret; } -#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire +#elif defined(arch_atomic64_fetch_or) +#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or +#else +#error "Unable to define raw_atomic64_fetch_or_acquire" #endif -#ifndef arch_atomic64_fetch_or_release +#if defined(arch_atomic64_fetch_or_release) +#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or_release +#elif defined(arch_atomic64_fetch_or_relaxed) static __always_inline s64 -arch_atomic64_fetch_or_release(s64 i, atomic64_t *v) +raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) { __atomic_release_fence(); return arch_atomic64_fetch_or_relaxed(i, v); } -#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release +#elif defined(arch_atomic64_fetch_or) +#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or +#else +#error "Unable to define raw_atomic64_fetch_or_release" +#endif + +#if defined(arch_atomic64_fetch_or_relaxed) +#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed +#elif defined(arch_atomic64_fetch_or) +#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or +#else +#error "Unable to define raw_atomic64_fetch_or_relaxed" #endif -#ifndef arch_atomic64_fetch_or +#define raw_atomic64_xor arch_atomic64_xor + +#if defined(arch_atomic64_fetch_xor) +#define raw_atomic64_fetch_xor arch_atomic64_fetch_xor +#elif defined(arch_atomic64_fetch_xor_relaxed) static __always_inline s64 -arch_atomic64_fetch_or(s64 i, atomic64_t *v) +raw_atomic64_fetch_xor(s64 i, atomic64_t *v) { s64 ret; 
__atomic_pre_full_fence(); - ret = arch_atomic64_fetch_or_relaxed(i, v); + ret = arch_atomic64_fetch_xor_relaxed(i, v); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_fetch_or arch_atomic64_fetch_or +#else +#error "Unable to define raw_atomic64_fetch_xor" #endif -#endif /* arch_atomic64_fetch_or_relaxed */ - -#ifndef arch_atomic64_fetch_xor_relaxed -#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor -#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor -#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor -#else /* arch_atomic64_fetch_xor_relaxed */ - -#ifndef arch_atomic64_fetch_xor_acquire +#if defined(arch_atomic64_fetch_xor_acquire) +#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire +#elif defined(arch_atomic64_fetch_xor_relaxed) static __always_inline s64 -arch_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { s64 ret = arch_atomic64_fetch_xor_relaxed(i, v); __atomic_acquire_fence(); return ret; } -#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire +#elif defined(arch_atomic64_fetch_xor) +#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor +#else +#error "Unable to define raw_atomic64_fetch_xor_acquire" #endif -#ifndef arch_atomic64_fetch_xor_release +#if defined(arch_atomic64_fetch_xor_release) +#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release +#elif defined(arch_atomic64_fetch_xor_relaxed) static __always_inline s64 -arch_atomic64_fetch_xor_release(s64 i, atomic64_t *v) +raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) { __atomic_release_fence(); return arch_atomic64_fetch_xor_relaxed(i, v); } -#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release +#elif defined(arch_atomic64_fetch_xor) +#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor +#else +#error "Unable to define raw_atomic64_fetch_xor_release" +#endif + +#if defined(arch_atomic64_fetch_xor_relaxed) +#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed +#elif defined(arch_atomic64_fetch_xor) +#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor +#else +#error "Unable to define raw_atomic64_fetch_xor_relaxed" #endif -#ifndef arch_atomic64_fetch_xor +#if defined(arch_atomic64_xchg) +#define raw_atomic64_xchg arch_atomic64_xchg +#elif defined(arch_atomic64_xchg_relaxed) static __always_inline s64 -arch_atomic64_fetch_xor(s64 i, atomic64_t *v) +raw_atomic64_xchg(atomic64_t *v, s64 i) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_fetch_xor_relaxed(i, v); + ret = arch_atomic64_xchg_relaxed(v, i); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor -#endif - -#endif /* arch_atomic64_fetch_xor_relaxed */ - -#ifndef arch_atomic64_xchg_relaxed -#ifdef arch_atomic64_xchg -#define arch_atomic64_xchg_acquire arch_atomic64_xchg -#define arch_atomic64_xchg_release arch_atomic64_xchg -#define arch_atomic64_xchg_relaxed arch_atomic64_xchg -#endif /* arch_atomic64_xchg */ - -#ifndef arch_atomic64_xchg +#else static __always_inline s64 -arch_atomic64_xchg(atomic64_t *v, s64 new) +raw_atomic64_xchg(atomic64_t *v, s64 new) { - return arch_xchg(&v->counter, new); + return raw_xchg(&v->counter, new); } -#define arch_atomic64_xchg arch_atomic64_xchg #endif -#ifndef arch_atomic64_xchg_acquire +#if defined(arch_atomic64_xchg_acquire) +#define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire +#elif defined(arch_atomic64_xchg_relaxed) static 
__always_inline s64 -arch_atomic64_xchg_acquire(atomic64_t *v, s64 new) +raw_atomic64_xchg_acquire(atomic64_t *v, s64 i) { - return arch_xchg_acquire(&v->counter, new); + s64 ret = arch_atomic64_xchg_relaxed(v, i); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire -#endif - -#ifndef arch_atomic64_xchg_release +#elif defined(arch_atomic64_xchg) +#define raw_atomic64_xchg_acquire arch_atomic64_xchg +#else static __always_inline s64 -arch_atomic64_xchg_release(atomic64_t *v, s64 new) +raw_atomic64_xchg_acquire(atomic64_t *v, s64 new) { - return arch_xchg_release(&v->counter, new); + return raw_xchg_acquire(&v->counter, new); } -#define arch_atomic64_xchg_release arch_atomic64_xchg_release #endif -#ifndef arch_atomic64_xchg_relaxed +#if defined(arch_atomic64_xchg_release) +#define raw_atomic64_xchg_release arch_atomic64_xchg_release +#elif defined(arch_atomic64_xchg_relaxed) static __always_inline s64 -arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new) +raw_atomic64_xchg_release(atomic64_t *v, s64 i) { - return arch_xchg_relaxed(&v->counter, new); + __atomic_release_fence(); + return arch_atomic64_xchg_relaxed(v, i); } -#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed -#endif - -#else /* arch_atomic64_xchg_relaxed */ - -#ifndef arch_atomic64_xchg_acquire +#elif defined(arch_atomic64_xchg) +#define raw_atomic64_xchg_release arch_atomic64_xchg +#else static __always_inline s64 -arch_atomic64_xchg_acquire(atomic64_t *v, s64 i) +raw_atomic64_xchg_release(atomic64_t *v, s64 new) { - s64 ret = arch_atomic64_xchg_relaxed(v, i); - __atomic_acquire_fence(); - return ret; + return raw_xchg_release(&v->counter, new); } -#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire #endif -#ifndef arch_atomic64_xchg_release +#if defined(arch_atomic64_xchg_relaxed) +#define raw_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed +#elif defined(arch_atomic64_xchg) +#define raw_atomic64_xchg_relaxed arch_atomic64_xchg +#else static __always_inline s64 -arch_atomic64_xchg_release(atomic64_t *v, s64 i) +raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new) { - __atomic_release_fence(); - return arch_atomic64_xchg_relaxed(v, i); + return raw_xchg_relaxed(&v->counter, new); } -#define arch_atomic64_xchg_release arch_atomic64_xchg_release #endif -#ifndef arch_atomic64_xchg +#if defined(arch_atomic64_cmpxchg) +#define raw_atomic64_cmpxchg arch_atomic64_cmpxchg +#elif defined(arch_atomic64_cmpxchg_relaxed) static __always_inline s64 -arch_atomic64_xchg(atomic64_t *v, s64 i) +raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { s64 ret; __atomic_pre_full_fence(); - ret = arch_atomic64_xchg_relaxed(v, i); + ret = arch_atomic64_cmpxchg_relaxed(v, old, new); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_xchg arch_atomic64_xchg -#endif - -#endif /* arch_atomic64_xchg_relaxed */ - -#ifndef arch_atomic64_cmpxchg_relaxed -#ifdef arch_atomic64_cmpxchg -#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg -#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg -#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg -#endif /* arch_atomic64_cmpxchg */ - -#ifndef arch_atomic64_cmpxchg +#else static __always_inline s64 -arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { - return arch_cmpxchg(&v->counter, old, new); + return raw_cmpxchg(&v->counter, old, new); } -#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg #endif -#ifndef arch_atomic64_cmpxchg_acquire +#if 
defined(arch_atomic64_cmpxchg_acquire) +#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire +#elif defined(arch_atomic64_cmpxchg_relaxed) static __always_inline s64 -arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { - return arch_cmpxchg_acquire(&v->counter, old, new); + s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire -#endif - -#ifndef arch_atomic64_cmpxchg_release +#elif defined(arch_atomic64_cmpxchg) +#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg +#else static __always_inline s64 -arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { - return arch_cmpxchg_release(&v->counter, old, new); + return raw_cmpxchg_acquire(&v->counter, old, new); } -#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release #endif -#ifndef arch_atomic64_cmpxchg_relaxed +#if defined(arch_atomic64_cmpxchg_release) +#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release +#elif defined(arch_atomic64_cmpxchg_relaxed) static __always_inline s64 -arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { - return arch_cmpxchg_relaxed(&v->counter, old, new); + __atomic_release_fence(); + return arch_atomic64_cmpxchg_relaxed(v, old, new); } -#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed -#endif - -#else /* arch_atomic64_cmpxchg_relaxed */ - -#ifndef arch_atomic64_cmpxchg_acquire +#elif defined(arch_atomic64_cmpxchg) +#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg +#else static __always_inline s64 -arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { - s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new); - __atomic_acquire_fence(); - return ret; + return raw_cmpxchg_release(&v->counter, old, new); } -#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire #endif -#ifndef arch_atomic64_cmpxchg_release +#if defined(arch_atomic64_cmpxchg_relaxed) +#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed +#elif defined(arch_atomic64_cmpxchg) +#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg +#else static __always_inline s64 -arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { - __atomic_release_fence(); - return arch_atomic64_cmpxchg_relaxed(v, old, new); + return raw_cmpxchg_relaxed(&v->counter, old, new); } -#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release #endif -#ifndef arch_atomic64_cmpxchg -static __always_inline s64 -arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +#if defined(arch_atomic64_try_cmpxchg) +#define raw_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg +#elif defined(arch_atomic64_try_cmpxchg_relaxed) +static __always_inline bool +raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { - s64 ret; + bool ret; __atomic_pre_full_fence(); - ret = arch_atomic64_cmpxchg_relaxed(v, old, new); + ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new); __atomic_post_full_fence(); return ret; } -#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg -#endif - -#endif /* arch_atomic64_cmpxchg_relaxed */ - -#ifndef arch_atomic64_try_cmpxchg_relaxed -#ifdef arch_atomic64_try_cmpxchg -#define 
arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg -#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg -#define arch_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg -#endif /* arch_atomic64_try_cmpxchg */ - -#ifndef arch_atomic64_try_cmpxchg +#else static __always_inline bool -arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { s64 r, o = *old; - r = arch_atomic64_cmpxchg(v, o, new); + r = raw_atomic64_cmpxchg(v, o, new); if (unlikely(r != o)) *old = r; return likely(r == o); } -#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg #endif -#ifndef arch_atomic64_try_cmpxchg_acquire +#if defined(arch_atomic64_try_cmpxchg_acquire) +#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire +#elif defined(arch_atomic64_try_cmpxchg_relaxed) +static __always_inline bool +raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new); + __atomic_acquire_fence(); + return ret; +} +#elif defined(arch_atomic64_try_cmpxchg) +#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg +#else static __always_inline bool -arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { s64 r, o = *old; - r = arch_atomic64_cmpxchg_acquire(v, o, new); + r = raw_atomic64_cmpxchg_acquire(v, o, new); if (unlikely(r != o)) *old = r; return likely(r == o); } -#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire #endif -#ifndef arch_atomic64_try_cmpxchg_release +#if defined(arch_atomic64_try_cmpxchg_release) +#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release +#elif defined(arch_atomic64_try_cmpxchg_relaxed) +static __always_inline bool +raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + __atomic_release_fence(); + return arch_atomic64_try_cmpxchg_relaxed(v, old, new); +} +#elif defined(arch_atomic64_try_cmpxchg) +#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg +#else static __always_inline bool -arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { s64 r, o = *old; - r = arch_atomic64_cmpxchg_release(v, o, new); + r = raw_atomic64_cmpxchg_release(v, o, new); if (unlikely(r != o)) *old = r; return likely(r == o); } -#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release #endif -#ifndef arch_atomic64_try_cmpxchg_relaxed +#if defined(arch_atomic64_try_cmpxchg_relaxed) +#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed +#elif defined(arch_atomic64_try_cmpxchg) +#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg +#else static __always_inline bool -arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) +raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { s64 r, o = *old; - r = arch_atomic64_cmpxchg_relaxed(v, o, new); + r = raw_atomic64_cmpxchg_relaxed(v, o, new); if (unlikely(r != o)) *old = r; return likely(r == o); } -#define arch_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed #endif -#else /* arch_atomic64_try_cmpxchg_relaxed */ - -#ifndef arch_atomic64_try_cmpxchg_acquire -static __always_inline bool -arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) -{ - bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new); - __atomic_acquire_fence(); - return 
ret; -} -#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire -#endif - -#ifndef arch_atomic64_try_cmpxchg_release -static __always_inline bool -arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) -{ - __atomic_release_fence(); - return arch_atomic64_try_cmpxchg_relaxed(v, old, new); -} -#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release -#endif - -#ifndef arch_atomic64_try_cmpxchg -static __always_inline bool -arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) -{ - bool ret; - __atomic_pre_full_fence(); - ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new); - __atomic_post_full_fence(); - return ret; -} -#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg -#endif - -#endif /* arch_atomic64_try_cmpxchg_relaxed */ - -#ifndef arch_atomic64_sub_and_test +#if defined(arch_atomic64_sub_and_test) +#define raw_atomic64_sub_and_test arch_atomic64_sub_and_test +#else static __always_inline bool -arch_atomic64_sub_and_test(s64 i, atomic64_t *v) +raw_atomic64_sub_and_test(s64 i, atomic64_t *v) { - return arch_atomic64_sub_return(i, v) == 0; + return raw_atomic64_sub_return(i, v) == 0; } -#define arch_atomic64_sub_and_test arch_atomic64_sub_and_test #endif -#ifndef arch_atomic64_dec_and_test +#if defined(arch_atomic64_dec_and_test) +#define raw_atomic64_dec_and_test arch_atomic64_dec_and_test +#else static __always_inline bool -arch_atomic64_dec_and_test(atomic64_t *v) +raw_atomic64_dec_and_test(atomic64_t *v) { - return arch_atomic64_dec_return(v) == 0; + return raw_atomic64_dec_return(v) == 0; } -#define arch_atomic64_dec_and_test arch_atomic64_dec_and_test #endif -#ifndef arch_atomic64_inc_and_test +#if defined(arch_atomic64_inc_and_test) +#define raw_atomic64_inc_and_test arch_atomic64_inc_and_test +#else static __always_inline bool -arch_atomic64_inc_and_test(atomic64_t *v) +raw_atomic64_inc_and_test(atomic64_t *v) { - return arch_atomic64_inc_return(v) == 0; + return raw_atomic64_inc_return(v) == 0; } -#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test #endif -#ifndef arch_atomic64_add_negative_relaxed -#ifdef arch_atomic64_add_negative -#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative -#define arch_atomic64_add_negative_release arch_atomic64_add_negative -#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative -#endif /* arch_atomic64_add_negative */ - -#ifndef arch_atomic64_add_negative +#if defined(arch_atomic64_add_negative) +#define raw_atomic64_add_negative arch_atomic64_add_negative +#elif defined(arch_atomic64_add_negative_relaxed) static __always_inline bool -arch_atomic64_add_negative(s64 i, atomic64_t *v) +raw_atomic64_add_negative(s64 i, atomic64_t *v) { - return arch_atomic64_add_return(i, v) < 0; + bool ret; + __atomic_pre_full_fence(); + ret = arch_atomic64_add_negative_relaxed(i, v); + __atomic_post_full_fence(); + return ret; } -#define arch_atomic64_add_negative arch_atomic64_add_negative -#endif - -#ifndef arch_atomic64_add_negative_acquire +#else static __always_inline bool -arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +raw_atomic64_add_negative(s64 i, atomic64_t *v) { - return arch_atomic64_add_return_acquire(i, v) < 0; + return raw_atomic64_add_return(i, v) < 0; } -#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire #endif -#ifndef arch_atomic64_add_negative_release +#if defined(arch_atomic64_add_negative_acquire) +#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire +#elif 
defined(arch_atomic64_add_negative_relaxed) static __always_inline bool -arch_atomic64_add_negative_release(s64 i, atomic64_t *v) +raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) { - return arch_atomic64_add_return_release(i, v) < 0; + bool ret = arch_atomic64_add_negative_relaxed(i, v); + __atomic_acquire_fence(); + return ret; } -#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release -#endif - -#ifndef arch_atomic64_add_negative_relaxed +#elif defined(arch_atomic64_add_negative) +#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative +#else static __always_inline bool -arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) { - return arch_atomic64_add_return_relaxed(i, v) < 0; + return raw_atomic64_add_return_acquire(i, v) < 0; } -#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed #endif -#else /* arch_atomic64_add_negative_relaxed */ - -#ifndef arch_atomic64_add_negative_acquire +#if defined(arch_atomic64_add_negative_release) +#define raw_atomic64_add_negative_release arch_atomic64_add_negative_release +#elif defined(arch_atomic64_add_negative_relaxed) static __always_inline bool -arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +raw_atomic64_add_negative_release(s64 i, atomic64_t *v) { - bool ret = arch_atomic64_add_negative_relaxed(i, v); - __atomic_acquire_fence(); - return ret; + __atomic_release_fence(); + return arch_atomic64_add_negative_relaxed(i, v); } -#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire -#endif - -#ifndef arch_atomic64_add_negative_release +#elif defined(arch_atomic64_add_negative) +#define raw_atomic64_add_negative_release arch_atomic64_add_negative +#else static __always_inline bool -arch_atomic64_add_negative_release(s64 i, atomic64_t *v) +raw_atomic64_add_negative_release(s64 i, atomic64_t *v) { - __atomic_release_fence(); - return arch_atomic64_add_negative_relaxed(i, v); + return raw_atomic64_add_return_release(i, v) < 0; } -#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release #endif -#ifndef arch_atomic64_add_negative +#if defined(arch_atomic64_add_negative_relaxed) +#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed +#elif defined(arch_atomic64_add_negative) +#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative +#else static __always_inline bool -arch_atomic64_add_negative(s64 i, atomic64_t *v) +raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { - bool ret; - __atomic_pre_full_fence(); - ret = arch_atomic64_add_negative_relaxed(i, v); - __atomic_post_full_fence(); - return ret; + return raw_atomic64_add_return_relaxed(i, v) < 0; } -#define arch_atomic64_add_negative arch_atomic64_add_negative #endif -#endif /* arch_atomic64_add_negative_relaxed */ - -#ifndef arch_atomic64_fetch_add_unless +#if defined(arch_atomic64_fetch_add_unless) +#define raw_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless +#else static __always_inline s64 -arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - s64 c = arch_atomic64_read(v); + s64 c = raw_atomic64_read(v); do { if (unlikely(c == u)) break; - } while (!arch_atomic64_try_cmpxchg(v, &c, c + a)); + } while (!raw_atomic64_try_cmpxchg(v, &c, c + a)); return c; } -#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless #endif -#ifndef arch_atomic64_add_unless +#if defined(arch_atomic64_add_unless) 
+#define raw_atomic64_add_unless arch_atomic64_add_unless +#else static __always_inline bool -arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { - return arch_atomic64_fetch_add_unless(v, a, u) != u; + return raw_atomic64_fetch_add_unless(v, a, u) != u; } -#define arch_atomic64_add_unless arch_atomic64_add_unless #endif -#ifndef arch_atomic64_inc_not_zero +#if defined(arch_atomic64_inc_not_zero) +#define raw_atomic64_inc_not_zero arch_atomic64_inc_not_zero +#else static __always_inline bool -arch_atomic64_inc_not_zero(atomic64_t *v) +raw_atomic64_inc_not_zero(atomic64_t *v) { - return arch_atomic64_add_unless(v, 1, 0); + return raw_atomic64_add_unless(v, 1, 0); } -#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero #endif -#ifndef arch_atomic64_inc_unless_negative +#if defined(arch_atomic64_inc_unless_negative) +#define raw_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative +#else static __always_inline bool -arch_atomic64_inc_unless_negative(atomic64_t *v) +raw_atomic64_inc_unless_negative(atomic64_t *v) { - s64 c = arch_atomic64_read(v); + s64 c = raw_atomic64_read(v); do { if (unlikely(c < 0)) return false; - } while (!arch_atomic64_try_cmpxchg(v, &c, c + 1)); + } while (!raw_atomic64_try_cmpxchg(v, &c, c + 1)); return true; } -#define arch_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative #endif -#ifndef arch_atomic64_dec_unless_positive +#if defined(arch_atomic64_dec_unless_positive) +#define raw_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive +#else static __always_inline bool -arch_atomic64_dec_unless_positive(atomic64_t *v) +raw_atomic64_dec_unless_positive(atomic64_t *v) { - s64 c = arch_atomic64_read(v); + s64 c = raw_atomic64_read(v); do { if (unlikely(c > 0)) return false; - } while (!arch_atomic64_try_cmpxchg(v, &c, c - 1)); + } while (!raw_atomic64_try_cmpxchg(v, &c, c - 1)); return true; } -#define arch_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive #endif -#ifndef arch_atomic64_dec_if_positive +#if defined(arch_atomic64_dec_if_positive) +#define raw_atomic64_dec_if_positive arch_atomic64_dec_if_positive +#else static __always_inline s64 -arch_atomic64_dec_if_positive(atomic64_t *v) +raw_atomic64_dec_if_positive(atomic64_t *v) { - s64 dec, c = arch_atomic64_read(v); + s64 dec, c = raw_atomic64_read(v); do { dec = c - 1; if (unlikely(dec < 0)) break; - } while (!arch_atomic64_try_cmpxchg(v, &c, dec)); + } while (!raw_atomic64_try_cmpxchg(v, &c, dec)); return dec; } -#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive #endif #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// e1cee558cc61cae887890db30fcdf93baca9f498 +// c2048fccede6fac923252290e2b303949d5dec83 diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h deleted file mode 100644 index 8b2fc04cf8c54..0000000000000 --- a/include/linux/atomic/atomic-raw.h +++ /dev/null @@ -1,1135 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 - -// Generated by scripts/atomic/gen-atomic-raw.sh -// DO NOT MODIFY THIS FILE DIRECTLY - -#ifndef _LINUX_ATOMIC_RAW_H -#define _LINUX_ATOMIC_RAW_H - -static __always_inline int -raw_atomic_read(const atomic_t *v) -{ - return arch_atomic_read(v); -} - -static __always_inline int -raw_atomic_read_acquire(const atomic_t *v) -{ - return arch_atomic_read_acquire(v); -} - -static __always_inline void -raw_atomic_set(atomic_t *v, int i) -{ - arch_atomic_set(v, i); -} - -static __always_inline void -raw_atomic_set_release(atomic_t *v, int i) -{ - 
arch_atomic_set_release(v, i); -} - -static __always_inline void -raw_atomic_add(int i, atomic_t *v) -{ - arch_atomic_add(i, v); -} - -static __always_inline int -raw_atomic_add_return(int i, atomic_t *v) -{ - return arch_atomic_add_return(i, v); -} - -static __always_inline int -raw_atomic_add_return_acquire(int i, atomic_t *v) -{ - return arch_atomic_add_return_acquire(i, v); -} - -static __always_inline int -raw_atomic_add_return_release(int i, atomic_t *v) -{ - return arch_atomic_add_return_release(i, v); -} - -static __always_inline int -raw_atomic_add_return_relaxed(int i, atomic_t *v) -{ - return arch_atomic_add_return_relaxed(i, v); -} - -static __always_inline int -raw_atomic_fetch_add(int i, atomic_t *v) -{ - return arch_atomic_fetch_add(i, v); -} - -static __always_inline int -raw_atomic_fetch_add_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_add_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_add_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_add_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_add_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_add_relaxed(i, v); -} - -static __always_inline void -raw_atomic_sub(int i, atomic_t *v) -{ - arch_atomic_sub(i, v); -} - -static __always_inline int -raw_atomic_sub_return(int i, atomic_t *v) -{ - return arch_atomic_sub_return(i, v); -} - -static __always_inline int -raw_atomic_sub_return_acquire(int i, atomic_t *v) -{ - return arch_atomic_sub_return_acquire(i, v); -} - -static __always_inline int -raw_atomic_sub_return_release(int i, atomic_t *v) -{ - return arch_atomic_sub_return_release(i, v); -} - -static __always_inline int -raw_atomic_sub_return_relaxed(int i, atomic_t *v) -{ - return arch_atomic_sub_return_relaxed(i, v); -} - -static __always_inline int -raw_atomic_fetch_sub(int i, atomic_t *v) -{ - return arch_atomic_fetch_sub(i, v); -} - -static __always_inline int -raw_atomic_fetch_sub_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_sub_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_sub_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_sub_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_sub_relaxed(i, v); -} - -static __always_inline void -raw_atomic_inc(atomic_t *v) -{ - arch_atomic_inc(v); -} - -static __always_inline int -raw_atomic_inc_return(atomic_t *v) -{ - return arch_atomic_inc_return(v); -} - -static __always_inline int -raw_atomic_inc_return_acquire(atomic_t *v) -{ - return arch_atomic_inc_return_acquire(v); -} - -static __always_inline int -raw_atomic_inc_return_release(atomic_t *v) -{ - return arch_atomic_inc_return_release(v); -} - -static __always_inline int -raw_atomic_inc_return_relaxed(atomic_t *v) -{ - return arch_atomic_inc_return_relaxed(v); -} - -static __always_inline int -raw_atomic_fetch_inc(atomic_t *v) -{ - return arch_atomic_fetch_inc(v); -} - -static __always_inline int -raw_atomic_fetch_inc_acquire(atomic_t *v) -{ - return arch_atomic_fetch_inc_acquire(v); -} - -static __always_inline int -raw_atomic_fetch_inc_release(atomic_t *v) -{ - return arch_atomic_fetch_inc_release(v); -} - -static __always_inline int -raw_atomic_fetch_inc_relaxed(atomic_t *v) -{ - return arch_atomic_fetch_inc_relaxed(v); -} - -static __always_inline void -raw_atomic_dec(atomic_t *v) -{ - arch_atomic_dec(v); -} - -static __always_inline int -raw_atomic_dec_return(atomic_t *v) -{ - return arch_atomic_dec_return(v); -} - -static 
__always_inline int -raw_atomic_dec_return_acquire(atomic_t *v) -{ - return arch_atomic_dec_return_acquire(v); -} - -static __always_inline int -raw_atomic_dec_return_release(atomic_t *v) -{ - return arch_atomic_dec_return_release(v); -} - -static __always_inline int -raw_atomic_dec_return_relaxed(atomic_t *v) -{ - return arch_atomic_dec_return_relaxed(v); -} - -static __always_inline int -raw_atomic_fetch_dec(atomic_t *v) -{ - return arch_atomic_fetch_dec(v); -} - -static __always_inline int -raw_atomic_fetch_dec_acquire(atomic_t *v) -{ - return arch_atomic_fetch_dec_acquire(v); -} - -static __always_inline int -raw_atomic_fetch_dec_release(atomic_t *v) -{ - return arch_atomic_fetch_dec_release(v); -} - -static __always_inline int -raw_atomic_fetch_dec_relaxed(atomic_t *v) -{ - return arch_atomic_fetch_dec_relaxed(v); -} - -static __always_inline void -raw_atomic_and(int i, atomic_t *v) -{ - arch_atomic_and(i, v); -} - -static __always_inline int -raw_atomic_fetch_and(int i, atomic_t *v) -{ - return arch_atomic_fetch_and(i, v); -} - -static __always_inline int -raw_atomic_fetch_and_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_and_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_and_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_and_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_and_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_and_relaxed(i, v); -} - -static __always_inline void -raw_atomic_andnot(int i, atomic_t *v) -{ - arch_atomic_andnot(i, v); -} - -static __always_inline int -raw_atomic_fetch_andnot(int i, atomic_t *v) -{ - return arch_atomic_fetch_andnot(i, v); -} - -static __always_inline int -raw_atomic_fetch_andnot_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_andnot_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_andnot_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_andnot_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_andnot_relaxed(i, v); -} - -static __always_inline void -raw_atomic_or(int i, atomic_t *v) -{ - arch_atomic_or(i, v); -} - -static __always_inline int -raw_atomic_fetch_or(int i, atomic_t *v) -{ - return arch_atomic_fetch_or(i, v); -} - -static __always_inline int -raw_atomic_fetch_or_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_or_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_or_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_or_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_or_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_or_relaxed(i, v); -} - -static __always_inline void -raw_atomic_xor(int i, atomic_t *v) -{ - arch_atomic_xor(i, v); -} - -static __always_inline int -raw_atomic_fetch_xor(int i, atomic_t *v) -{ - return arch_atomic_fetch_xor(i, v); -} - -static __always_inline int -raw_atomic_fetch_xor_acquire(int i, atomic_t *v) -{ - return arch_atomic_fetch_xor_acquire(i, v); -} - -static __always_inline int -raw_atomic_fetch_xor_release(int i, atomic_t *v) -{ - return arch_atomic_fetch_xor_release(i, v); -} - -static __always_inline int -raw_atomic_fetch_xor_relaxed(int i, atomic_t *v) -{ - return arch_atomic_fetch_xor_relaxed(i, v); -} - -static __always_inline int -raw_atomic_xchg(atomic_t *v, int i) -{ - return arch_atomic_xchg(v, i); -} - -static __always_inline int -raw_atomic_xchg_acquire(atomic_t *v, int i) -{ - return arch_atomic_xchg_acquire(v, i); -} - -static __always_inline int 
-raw_atomic_xchg_release(atomic_t *v, int i) -{ - return arch_atomic_xchg_release(v, i); -} - -static __always_inline int -raw_atomic_xchg_relaxed(atomic_t *v, int i) -{ - return arch_atomic_xchg_relaxed(v, i); -} - -static __always_inline int -raw_atomic_cmpxchg(atomic_t *v, int old, int new) -{ - return arch_atomic_cmpxchg(v, old, new); -} - -static __always_inline int -raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) -{ - return arch_atomic_cmpxchg_acquire(v, old, new); -} - -static __always_inline int -raw_atomic_cmpxchg_release(atomic_t *v, int old, int new) -{ - return arch_atomic_cmpxchg_release(v, old, new); -} - -static __always_inline int -raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) -{ - return arch_atomic_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) -{ - return arch_atomic_try_cmpxchg(v, old, new); -} - -static __always_inline bool -raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) -{ - return arch_atomic_try_cmpxchg_acquire(v, old, new); -} - -static __always_inline bool -raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) -{ - return arch_atomic_try_cmpxchg_release(v, old, new); -} - -static __always_inline bool -raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) -{ - return arch_atomic_try_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_sub_and_test(int i, atomic_t *v) -{ - return arch_atomic_sub_and_test(i, v); -} - -static __always_inline bool -raw_atomic_dec_and_test(atomic_t *v) -{ - return arch_atomic_dec_and_test(v); -} - -static __always_inline bool -raw_atomic_inc_and_test(atomic_t *v) -{ - return arch_atomic_inc_and_test(v); -} - -static __always_inline bool -raw_atomic_add_negative(int i, atomic_t *v) -{ - return arch_atomic_add_negative(i, v); -} - -static __always_inline bool -raw_atomic_add_negative_acquire(int i, atomic_t *v) -{ - return arch_atomic_add_negative_acquire(i, v); -} - -static __always_inline bool -raw_atomic_add_negative_release(int i, atomic_t *v) -{ - return arch_atomic_add_negative_release(i, v); -} - -static __always_inline bool -raw_atomic_add_negative_relaxed(int i, atomic_t *v) -{ - return arch_atomic_add_negative_relaxed(i, v); -} - -static __always_inline int -raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - return arch_atomic_fetch_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_add_unless(atomic_t *v, int a, int u) -{ - return arch_atomic_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_inc_not_zero(atomic_t *v) -{ - return arch_atomic_inc_not_zero(v); -} - -static __always_inline bool -raw_atomic_inc_unless_negative(atomic_t *v) -{ - return arch_atomic_inc_unless_negative(v); -} - -static __always_inline bool -raw_atomic_dec_unless_positive(atomic_t *v) -{ - return arch_atomic_dec_unless_positive(v); -} - -static __always_inline int -raw_atomic_dec_if_positive(atomic_t *v) -{ - return arch_atomic_dec_if_positive(v); -} - -static __always_inline s64 -raw_atomic64_read(const atomic64_t *v) -{ - return arch_atomic64_read(v); -} - -static __always_inline s64 -raw_atomic64_read_acquire(const atomic64_t *v) -{ - return arch_atomic64_read_acquire(v); -} - -static __always_inline void -raw_atomic64_set(atomic64_t *v, s64 i) -{ - arch_atomic64_set(v, i); -} - -static __always_inline void -raw_atomic64_set_release(atomic64_t *v, s64 i) -{ - arch_atomic64_set_release(v, i); -} - -static __always_inline void -raw_atomic64_add(s64 i, 
atomic64_t *v) -{ - arch_atomic64_add(i, v); -} - -static __always_inline s64 -raw_atomic64_add_return(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_return(i, v); -} - -static __always_inline s64 -raw_atomic64_add_return_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_return_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_add_return_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_return_release(i, v); -} - -static __always_inline s64 -raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_return_relaxed(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_add(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_add_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_add_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_add_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_sub(s64 i, atomic64_t *v) -{ - arch_atomic64_sub(i, v); -} - -static __always_inline s64 -raw_atomic64_sub_return(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_return(i, v); -} - -static __always_inline s64 -raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_return_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_sub_return_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_return_release(i, v); -} - -static __always_inline s64 -raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_return_relaxed(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_sub(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_sub(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_sub_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_sub_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_sub_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_inc(atomic64_t *v) -{ - arch_atomic64_inc(v); -} - -static __always_inline s64 -raw_atomic64_inc_return(atomic64_t *v) -{ - return arch_atomic64_inc_return(v); -} - -static __always_inline s64 -raw_atomic64_inc_return_acquire(atomic64_t *v) -{ - return arch_atomic64_inc_return_acquire(v); -} - -static __always_inline s64 -raw_atomic64_inc_return_release(atomic64_t *v) -{ - return arch_atomic64_inc_return_release(v); -} - -static __always_inline s64 -raw_atomic64_inc_return_relaxed(atomic64_t *v) -{ - return arch_atomic64_inc_return_relaxed(v); -} - -static __always_inline s64 -raw_atomic64_fetch_inc(atomic64_t *v) -{ - return arch_atomic64_fetch_inc(v); -} - -static __always_inline s64 -raw_atomic64_fetch_inc_acquire(atomic64_t *v) -{ - return arch_atomic64_fetch_inc_acquire(v); -} - -static __always_inline s64 -raw_atomic64_fetch_inc_release(atomic64_t *v) -{ - return arch_atomic64_fetch_inc_release(v); -} - -static __always_inline s64 -raw_atomic64_fetch_inc_relaxed(atomic64_t *v) -{ - return arch_atomic64_fetch_inc_relaxed(v); -} - -static __always_inline void -raw_atomic64_dec(atomic64_t *v) -{ - arch_atomic64_dec(v); -} - -static __always_inline s64 
-raw_atomic64_dec_return(atomic64_t *v) -{ - return arch_atomic64_dec_return(v); -} - -static __always_inline s64 -raw_atomic64_dec_return_acquire(atomic64_t *v) -{ - return arch_atomic64_dec_return_acquire(v); -} - -static __always_inline s64 -raw_atomic64_dec_return_release(atomic64_t *v) -{ - return arch_atomic64_dec_return_release(v); -} - -static __always_inline s64 -raw_atomic64_dec_return_relaxed(atomic64_t *v) -{ - return arch_atomic64_dec_return_relaxed(v); -} - -static __always_inline s64 -raw_atomic64_fetch_dec(atomic64_t *v) -{ - return arch_atomic64_fetch_dec(v); -} - -static __always_inline s64 -raw_atomic64_fetch_dec_acquire(atomic64_t *v) -{ - return arch_atomic64_fetch_dec_acquire(v); -} - -static __always_inline s64 -raw_atomic64_fetch_dec_release(atomic64_t *v) -{ - return arch_atomic64_fetch_dec_release(v); -} - -static __always_inline s64 -raw_atomic64_fetch_dec_relaxed(atomic64_t *v) -{ - return arch_atomic64_fetch_dec_relaxed(v); -} - -static __always_inline void -raw_atomic64_and(s64 i, atomic64_t *v) -{ - arch_atomic64_and(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_and(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_and(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_and_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_and_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_and_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_andnot(s64 i, atomic64_t *v) -{ - arch_atomic64_andnot(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_andnot(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_andnot_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_andnot_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_andnot_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_or(s64 i, atomic64_t *v) -{ - arch_atomic64_or(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_or(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_or(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_or_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_or_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_or_relaxed(i, v); -} - -static __always_inline void -raw_atomic64_xor(s64 i, atomic64_t *v) -{ - arch_atomic64_xor(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_xor(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_xor(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_xor_acquire(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_xor_release(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_fetch_xor_relaxed(i, v); -} - 
-static __always_inline s64 -raw_atomic64_xchg(atomic64_t *v, s64 i) -{ - return arch_atomic64_xchg(v, i); -} - -static __always_inline s64 -raw_atomic64_xchg_acquire(atomic64_t *v, s64 i) -{ - return arch_atomic64_xchg_acquire(v, i); -} - -static __always_inline s64 -raw_atomic64_xchg_release(atomic64_t *v, s64 i) -{ - return arch_atomic64_xchg_release(v, i); -} - -static __always_inline s64 -raw_atomic64_xchg_relaxed(atomic64_t *v, s64 i) -{ - return arch_atomic64_xchg_relaxed(v, i); -} - -static __always_inline s64 -raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) -{ - return arch_atomic64_cmpxchg(v, old, new); -} - -static __always_inline s64 -raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) -{ - return arch_atomic64_cmpxchg_acquire(v, old, new); -} - -static __always_inline s64 -raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) -{ - return arch_atomic64_cmpxchg_release(v, old, new); -} - -static __always_inline s64 -raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) -{ - return arch_atomic64_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) -{ - return arch_atomic64_try_cmpxchg(v, old, new); -} - -static __always_inline bool -raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) -{ - return arch_atomic64_try_cmpxchg_acquire(v, old, new); -} - -static __always_inline bool -raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) -{ - return arch_atomic64_try_cmpxchg_release(v, old, new); -} - -static __always_inline bool -raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) -{ - return arch_atomic64_try_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic64_sub_and_test(s64 i, atomic64_t *v) -{ - return arch_atomic64_sub_and_test(i, v); -} - -static __always_inline bool -raw_atomic64_dec_and_test(atomic64_t *v) -{ - return arch_atomic64_dec_and_test(v); -} - -static __always_inline bool -raw_atomic64_inc_and_test(atomic64_t *v) -{ - return arch_atomic64_inc_and_test(v); -} - -static __always_inline bool -raw_atomic64_add_negative(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_negative(i, v); -} - -static __always_inline bool -raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_negative_acquire(i, v); -} - -static __always_inline bool -raw_atomic64_add_negative_release(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_negative_release(i, v); -} - -static __always_inline bool -raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) -{ - return arch_atomic64_add_negative_relaxed(i, v); -} - -static __always_inline s64 -raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) -{ - return arch_atomic64_fetch_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) -{ - return arch_atomic64_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic64_inc_not_zero(atomic64_t *v) -{ - return arch_atomic64_inc_not_zero(v); -} - -static __always_inline bool -raw_atomic64_inc_unless_negative(atomic64_t *v) -{ - return arch_atomic64_inc_unless_negative(v); -} - -static __always_inline bool -raw_atomic64_dec_unless_positive(atomic64_t *v) -{ - return arch_atomic64_dec_unless_positive(v); -} - -static __always_inline s64 -raw_atomic64_dec_if_positive(atomic64_t *v) -{ - return arch_atomic64_dec_if_positive(v); -} - -#define raw_xchg(...) \ - arch_xchg(__VA_ARGS__) - -#define raw_xchg_acquire(...) 
\ - arch_xchg_acquire(__VA_ARGS__) - -#define raw_xchg_release(...) \ - arch_xchg_release(__VA_ARGS__) - -#define raw_xchg_relaxed(...) \ - arch_xchg_relaxed(__VA_ARGS__) - -#define raw_cmpxchg(...) \ - arch_cmpxchg(__VA_ARGS__) - -#define raw_cmpxchg_acquire(...) \ - arch_cmpxchg_acquire(__VA_ARGS__) - -#define raw_cmpxchg_release(...) \ - arch_cmpxchg_release(__VA_ARGS__) - -#define raw_cmpxchg_relaxed(...) \ - arch_cmpxchg_relaxed(__VA_ARGS__) - -#define raw_cmpxchg64(...) \ - arch_cmpxchg64(__VA_ARGS__) - -#define raw_cmpxchg64_acquire(...) \ - arch_cmpxchg64_acquire(__VA_ARGS__) - -#define raw_cmpxchg64_release(...) \ - arch_cmpxchg64_release(__VA_ARGS__) - -#define raw_cmpxchg64_relaxed(...) \ - arch_cmpxchg64_relaxed(__VA_ARGS__) - -#define raw_cmpxchg128(...) \ - arch_cmpxchg128(__VA_ARGS__) - -#define raw_cmpxchg128_acquire(...) \ - arch_cmpxchg128_acquire(__VA_ARGS__) - -#define raw_cmpxchg128_release(...) \ - arch_cmpxchg128_release(__VA_ARGS__) - -#define raw_cmpxchg128_relaxed(...) \ - arch_cmpxchg128_relaxed(__VA_ARGS__) - -#define raw_try_cmpxchg(...) \ - arch_try_cmpxchg(__VA_ARGS__) - -#define raw_try_cmpxchg_acquire(...) \ - arch_try_cmpxchg_acquire(__VA_ARGS__) - -#define raw_try_cmpxchg_release(...) \ - arch_try_cmpxchg_release(__VA_ARGS__) - -#define raw_try_cmpxchg_relaxed(...) \ - arch_try_cmpxchg_relaxed(__VA_ARGS__) - -#define raw_try_cmpxchg64(...) \ - arch_try_cmpxchg64(__VA_ARGS__) - -#define raw_try_cmpxchg64_acquire(...) \ - arch_try_cmpxchg64_acquire(__VA_ARGS__) - -#define raw_try_cmpxchg64_release(...) \ - arch_try_cmpxchg64_release(__VA_ARGS__) - -#define raw_try_cmpxchg64_relaxed(...) \ - arch_try_cmpxchg64_relaxed(__VA_ARGS__) - -#define raw_try_cmpxchg128(...) \ - arch_try_cmpxchg128(__VA_ARGS__) - -#define raw_try_cmpxchg128_acquire(...) \ - arch_try_cmpxchg128_acquire(__VA_ARGS__) - -#define raw_try_cmpxchg128_release(...) \ - arch_try_cmpxchg128_release(__VA_ARGS__) - -#define raw_try_cmpxchg128_relaxed(...) \ - arch_try_cmpxchg128_relaxed(__VA_ARGS__) - -#define raw_cmpxchg_local(...) \ - arch_cmpxchg_local(__VA_ARGS__) - -#define raw_cmpxchg64_local(...) \ - arch_cmpxchg64_local(__VA_ARGS__) - -#define raw_cmpxchg128_local(...) \ - arch_cmpxchg128_local(__VA_ARGS__) - -#define raw_sync_cmpxchg(...) \ - arch_sync_cmpxchg(__VA_ARGS__) - -#define raw_try_cmpxchg_local(...) \ - arch_try_cmpxchg_local(__VA_ARGS__) - -#define raw_try_cmpxchg64_local(...) \ - arch_try_cmpxchg64_local(__VA_ARGS__) - -#define raw_try_cmpxchg128_local(...) 
\ - arch_try_cmpxchg128_local(__VA_ARGS__) - -#endif /* _LINUX_ATOMIC_RAW_H */ -// b23ed4424e85200e200ded094522e1d743b3a5b1 diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire index ef764085c79aa..b0f732a5c46ef 100755 --- a/scripts/atomic/fallbacks/acquire +++ b/scripts/atomic/fallbacks/acquire @@ -1,6 +1,6 @@ cat <counter, old, new); + return raw_cmpxchg${order}(&v->counter, old, new); } EOF diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec index 8c144c818e9ed..a660ac65994bd 100755 --- a/scripts/atomic/fallbacks/dec +++ b/scripts/atomic/fallbacks/dec @@ -1,7 +1,7 @@ cat < 0)) return false; - } while (!arch_${atomic}_try_cmpxchg(v, &c, c - 1)); + } while (!raw_${atomic}_try_cmpxchg(v, &c, c - 1)); return true; } diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence index 07757d8e338ef..067eea553f5e0 100755 --- a/scripts/atomic/fallbacks/fence +++ b/scripts/atomic/fallbacks/fence @@ -1,6 +1,6 @@ cat <counter); } else { - ret = arch_${atomic}_read(v); + ret = raw_${atomic}_read(v); __atomic_acquire_fence(); } diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release index b46feb56d69ca..cbbff708129b8 100755 --- a/scripts/atomic/fallbacks/release +++ b/scripts/atomic/fallbacks/release @@ -1,6 +1,6 @@ cat <counter, i); } else { __atomic_release_fence(); - arch_${atomic}_set(v, i); + raw_${atomic}_set(v, i); } } EOF diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test index da8a049c9b02b..8975a496d495c 100755 --- a/scripts/atomic/fallbacks/sub_and_test +++ b/scripts/atomic/fallbacks/sub_and_test @@ -1,7 +1,7 @@ cat <counter, new); + return raw_xchg${order}(&v->counter, new); } EOF diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh index 337330865fa2e..86aca4f9f315a 100755 --- a/scripts/atomic/gen-atomic-fallback.sh +++ b/scripts/atomic/gen-atomic-fallback.sh @@ -17,19 +17,12 @@ gen_template_fallback() local atomic="$1"; shift local int="$1"; shift - local atomicname="arch_${atomic}_${pfx}${name}${sfx}${order}" - local ret="$(gen_ret_type "${meta}" "${int}")" local retstmt="$(gen_ret_stmt "${meta}")" local params="$(gen_params "${int}" "${atomic}" "$@")" local args="$(gen_args "$@")" - if [ ! -z "${template}" ]; then - printf "#ifndef ${atomicname}\n" - . ${template} - printf "#define ${atomicname} ${atomicname}\n" - printf "#endif\n\n" - fi + . ${template} } #gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...) @@ -59,69 +52,92 @@ gen_proto_fallback() gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" } -#gen_basic_fallbacks(basename) -gen_basic_fallbacks() -{ - local basename="$1"; shift -cat << EOF -#define ${basename}_acquire ${basename} -#define ${basename}_release ${basename} -#define ${basename}_relaxed ${basename} -EOF -} - -#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...) -gen_proto_order_variants() +#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, args...) 
+gen_proto_order_variant()
 {
 	local meta="$1"; shift
 	local pfx="$1"; shift
 	local name="$1"; shift
 	local sfx="$1"; shift
+	local order="$1"; shift
 	local atomic="$1"
 
-	local basename="arch_${atomic}_${pfx}${name}${sfx}"
-
-	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")"
+	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+	local basename="${atomic}_${pfx}${name}${sfx}"
 
-	# If we don't have relaxed atomics, then we don't bother with ordering fallbacks
-	# read_acquire and set_release need to be templated, though
-	if ! meta_has_relaxed "${meta}"; then
-		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
 
-		if meta_has_acquire "${meta}"; then
-			gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
-		fi
+	# Where there is no possible fallback, this order variant is mandatory
+	# and must be provided by arch code. Add a comment to the header to
+	# make this obvious.
+	#
+	# Ideally we'd error on a missing definition, but arch code might
+	# define this order variant as a C function without a preprocessor
+	# symbol.
+	if [ -z ${template} ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then
+		printf "#define raw_${atomicname} arch_${atomicname}\n\n"
+		return
+	fi
 
-	if meta_has_release "${meta}"; then
-		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
-	fi
+	printf "#if defined(arch_${atomicname})\n"
+	printf "#define raw_${atomicname} arch_${atomicname}\n"
 
-	return
+	# Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops
+	if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then
+		printf "#elif defined(arch_${basename}_relaxed)\n"
+		gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
 	fi
 
-	printf "#ifndef ${basename}_relaxed\n"
+	# Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops
+	if [ ! -z "${order}" ]; then
+		printf "#elif defined(arch_${basename})\n"
+		printf "#define raw_${atomicname} arch_${basename}\n"
+	fi
 
+	printf "#else\n"
 	if [ ! -z "${template}" ]; then
-		printf "#ifdef ${basename}\n"
+		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+	else
+		printf "#error \"Unable to define raw_${atomicname}\"\n"
 	fi
 
-	gen_basic_fallbacks "${basename}"
+	printf "#endif\n\n"
+}
 
-	if [ ! -z "${template}" ]; then
-		printf "#endif /* ${basename} */\n\n"
-		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
-		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
-		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
-		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
+
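To see what the fence-based path expands to, the fully-ordered fallback that gen_order_fallback() emits when only a relaxed op is provided (this is the raw_atomic64_add_negative hunk from the generated header above) looks like:

| static __always_inline bool
| raw_atomic64_add_negative(s64 i, atomic64_t *v)
| {
| 	bool ret;
| 	__atomic_pre_full_fence();
| 	ret = arch_atomic64_add_negative_relaxed(i, v);
| 	__atomic_post_full_fence();
| 	return ret;
| }

The acquire and release variants are built the same way, with __atomic_acquire_fence() placed after, or __atomic_release_fence() placed before, the relaxed op respectively.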
+#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
+gen_proto_order_variants()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local atomic="$1"
+
+	gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+
+	if meta_has_acquire "${meta}"; then
+		gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
 	fi
 
-	printf "#else /* ${basename}_relaxed */\n\n"
+	if meta_has_release "${meta}"; then
+		gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
+	fi
 
-	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
-	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
-	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+	if meta_has_relaxed "${meta}"; then
+		gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
+	fi
+}
 
-	printf "#endif /* ${basename}_relaxed */\n\n"
+#gen_basic_fallbacks(basename)
+gen_basic_fallbacks()
+{
+	local basename="$1"; shift
+cat << EOF
+#define raw_${basename}_acquire arch_${basename}
+#define raw_${basename}_release arch_${basename}
+#define raw_${basename}_relaxed arch_${basename}
+EOF
 }
 
 gen_order_fallbacks()
@@ -130,36 +146,65 @@ gen_order_fallbacks()
 	cat < ${LINUXDIR}/include/${header}
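Putting the pieces together, the preprocessor cascade gen_proto_order_variant() emits for a single op, reassembled here from the raw_atomic64_xchg_acquire hunks in the generated header above, is:

| #if defined(arch_atomic64_xchg_acquire)
| #define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire
| #elif defined(arch_atomic64_xchg_relaxed)
| static __always_inline s64
| raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
| {
| 	s64 ret = arch_atomic64_xchg_relaxed(v, i);
| 	__atomic_acquire_fence();
| 	return ret;
| }
| #elif defined(arch_atomic64_xchg)
| #define raw_atomic64_xchg_acquire arch_atomic64_xchg
| #else
| static __always_inline s64
| raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
| {
| 	return raw_xchg_acquire(&v->counter, new);
| }
| #endif

Each order variant thus prefers the arch op of the same order, then the fence-based fallback built from the relaxed op, then the fully-ordered arch op, and finally the generic fallback template.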
From patchwork Mon Jun 5 07:01:18 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 103114
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 21/27] locking/atomic: scripts: split pfx/name/sfx/order
Date: Mon, 5 Jun 2023 08:01:18 +0100
Message-Id: <20230605070124.3741859-22-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Currently gen-atomic-long.sh's gen_proto_order_variant() function combines the pfx/name/sfx/order variables immediately, unlike other functions in gen-atomic-*.sh. This is fine today, but subsequent patches will require the individual pfx/name/sfx/order variables within gen-atomic-long.sh's gen_proto_order_variant() function. In preparation for this, split the variables in the style of the other gen-atomic-*.sh scripts.

This results in no change to the generated headers, so there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Will Deacon
---
 scripts/atomic/gen-atomic-long.sh | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 75e91d6da30d3..13832171f7219 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -36,10 +36,15 @@ gen_args_cast()
 gen_proto_order_variant()
 {
 	local meta="$1"; shift
-	local name="$1$2$3$4"; shift; shift; shift; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
 	local atomic="$1"; shift
 	local int="$1"; shift
 
+	local atomicname="${pfx}${name}${sfx}${order}"
+
 	local ret="$(gen_ret_type "${meta}" "long")"
 	local params="$(gen_params "long" "atomic_long" "$@")"
 	local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
@@ -47,9 +52,9 @@ gen_proto_order_variant()
 	cat <
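For example, for the op "add_return" with order "_acquire", the split gives pfx="", name="add", sfx="_return" and order="_acquire", and the new atomicname variable recombines them into the same "add_return_acquire" string that the old combined name="$1$2$3$4" assignment produced; the generated header is unchanged, but the individual pieces are now available to later patches.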
X-Patchwork-Id: 103126
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 22/27] locking/atomic: scripts: simplify raw_atomic_long*() definitions
Date: Mon, 5 Jun 2023 08:01:19 +0100
Message-Id: <20230605070124.3741859-23-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Currently, atomic-long is split into two sections, one defining the raw_atomic_long_*() ops for CONFIG_64BIT, and one defining the raw_atomic_long_*() ops for !CONFIG_64BIT. With many lines elided, this looks like:

| #ifdef CONFIG_64BIT
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| 	return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| }
| ...
| #else /* CONFIG_64BIT */
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| 	return raw_atomic_try_cmpxchg(v, (int *)old, new);
| }
| ...
| #endif

The two definitions are spread far apart in the file, and duplicate the prototype, making it hard to have a legible set of kerneldoc comments. Make this simpler by defining the C prototype once, and writing the two definitions inline.
For example, the above becomes: | static __always_inline bool | raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) | { | #ifdef CONFIG_64BIT | return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); | #else | return raw_atomic_try_cmpxchg(v, (int *)old, new); | #endif | } As we now always have a single copy of the C prototype wrapping all the potential definitions, we now have an obvious single location for kerneldoc comments. As a bonus, both the script and the generated file are somewhat shorter. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Reviewed-by: Kees Cook Cc: Boqun Feng Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Will Deacon --- include/linux/atomic/atomic-long.h | 857 ++++++++++++----------------- scripts/atomic/gen-atomic-long.sh | 27 +- 2 files changed, 350 insertions(+), 534 deletions(-) diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index 92dc82ce1ce6d..63e0b4078ebd5 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -21,1030 +21,855 @@ typedef atomic_t atomic_long_t; #define atomic_long_cond_read_relaxed atomic_cond_read_relaxed #endif -#ifdef CONFIG_64BIT - -static __always_inline long -raw_atomic_long_read(const atomic_long_t *v) -{ - return raw_atomic64_read(v); -} - -static __always_inline long -raw_atomic_long_read_acquire(const atomic_long_t *v) -{ - return raw_atomic64_read_acquire(v); -} - -static __always_inline void -raw_atomic_long_set(atomic_long_t *v, long i) -{ - raw_atomic64_set(v, i); -} - -static __always_inline void -raw_atomic_long_set_release(atomic_long_t *v, long i) -{ - raw_atomic64_set_release(v, i); -} - -static __always_inline void -raw_atomic_long_add(long i, atomic_long_t *v) -{ - raw_atomic64_add(i, v); -} - -static __always_inline long -raw_atomic_long_add_return(long i, atomic_long_t *v) -{ - return raw_atomic64_add_return(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_add_return_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_release(long i, atomic_long_t *v) -{ - return raw_atomic64_add_return_release(i, v); -} - -static __always_inline long -raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_add_return_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_add(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_add_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_add_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_add_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_sub(long i, atomic_long_t *v) -{ - raw_atomic64_sub(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_return(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_return_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_sub_return_release(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_return_release(i, v); -} - -static __always_inline long 
-raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_return_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_sub(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_sub_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_sub_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_sub_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_inc(atomic_long_t *v) -{ - raw_atomic64_inc(v); -} - -static __always_inline long -raw_atomic_long_inc_return(atomic_long_t *v) -{ - return raw_atomic64_inc_return(v); -} - -static __always_inline long -raw_atomic_long_inc_return_acquire(atomic_long_t *v) -{ - return raw_atomic64_inc_return_acquire(v); -} - -static __always_inline long -raw_atomic_long_inc_return_release(atomic_long_t *v) -{ - return raw_atomic64_inc_return_release(v); -} - -static __always_inline long -raw_atomic_long_inc_return_relaxed(atomic_long_t *v) -{ - return raw_atomic64_inc_return_relaxed(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc(atomic_long_t *v) -{ - return raw_atomic64_fetch_inc(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) -{ - return raw_atomic64_fetch_inc_acquire(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_release(atomic_long_t *v) -{ - return raw_atomic64_fetch_inc_release(v); -} - -static __always_inline long -raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) -{ - return raw_atomic64_fetch_inc_relaxed(v); -} - -static __always_inline void -raw_atomic_long_dec(atomic_long_t *v) -{ - raw_atomic64_dec(v); -} - -static __always_inline long -raw_atomic_long_dec_return(atomic_long_t *v) -{ - return raw_atomic64_dec_return(v); -} - -static __always_inline long -raw_atomic_long_dec_return_acquire(atomic_long_t *v) -{ - return raw_atomic64_dec_return_acquire(v); -} - -static __always_inline long -raw_atomic_long_dec_return_release(atomic_long_t *v) -{ - return raw_atomic64_dec_return_release(v); -} - -static __always_inline long -raw_atomic_long_dec_return_relaxed(atomic_long_t *v) -{ - return raw_atomic64_dec_return_relaxed(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec(atomic_long_t *v) -{ - return raw_atomic64_fetch_dec(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) -{ - return raw_atomic64_fetch_dec_acquire(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_release(atomic_long_t *v) -{ - return raw_atomic64_fetch_dec_release(v); -} - -static __always_inline long -raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) -{ - return raw_atomic64_fetch_dec_relaxed(v); -} - -static __always_inline void -raw_atomic_long_and(long i, atomic_long_t *v) -{ - raw_atomic64_and(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_and(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_and_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_and_release(i, v); -} - -static __always_inline long 
-raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_and_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_andnot(long i, atomic_long_t *v) -{ - raw_atomic64_andnot(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_andnot(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_andnot_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_andnot_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_andnot_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_or(long i, atomic_long_t *v) -{ - raw_atomic64_or(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_or(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_or_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_or_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_or_relaxed(i, v); -} - -static __always_inline void -raw_atomic_long_xor(long i, atomic_long_t *v) -{ - raw_atomic64_xor(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_xor(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_xor_acquire(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_xor_release(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_fetch_xor_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_xchg(atomic_long_t *v, long i) -{ - return raw_atomic64_xchg(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) -{ - return raw_atomic64_xchg_acquire(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_release(atomic_long_t *v, long i) -{ - return raw_atomic64_xchg_release(v, i); -} - -static __always_inline long -raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) -{ - return raw_atomic64_xchg_relaxed(v, i); -} - -static __always_inline long -raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) -{ - return raw_atomic64_cmpxchg(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) -{ - return raw_atomic64_cmpxchg_acquire(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) -{ - return raw_atomic64_cmpxchg_release(v, old, new); -} - -static __always_inline long -raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) -{ - return raw_atomic64_cmpxchg_relaxed(v, old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) -{ - return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long 
*old, long new) -{ - return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) -{ - return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); -} - -static __always_inline bool -raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) -{ - return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); -} - -static __always_inline bool -raw_atomic_long_sub_and_test(long i, atomic_long_t *v) -{ - return raw_atomic64_sub_and_test(i, v); -} - -static __always_inline bool -raw_atomic_long_dec_and_test(atomic_long_t *v) -{ - return raw_atomic64_dec_and_test(v); -} - -static __always_inline bool -raw_atomic_long_inc_and_test(atomic_long_t *v) -{ - return raw_atomic64_inc_and_test(v); -} - -static __always_inline bool -raw_atomic_long_add_negative(long i, atomic_long_t *v) -{ - return raw_atomic64_add_negative(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) -{ - return raw_atomic64_add_negative_acquire(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_release(long i, atomic_long_t *v) -{ - return raw_atomic64_add_negative_release(i, v); -} - -static __always_inline bool -raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) -{ - return raw_atomic64_add_negative_relaxed(i, v); -} - -static __always_inline long -raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) -{ - return raw_atomic64_fetch_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) -{ - return raw_atomic64_add_unless(v, a, u); -} - -static __always_inline bool -raw_atomic_long_inc_not_zero(atomic_long_t *v) -{ - return raw_atomic64_inc_not_zero(v); -} - -static __always_inline bool -raw_atomic_long_inc_unless_negative(atomic_long_t *v) -{ - return raw_atomic64_inc_unless_negative(v); -} - -static __always_inline bool -raw_atomic_long_dec_unless_positive(atomic_long_t *v) -{ - return raw_atomic64_dec_unless_positive(v); -} - -static __always_inline long -raw_atomic_long_dec_if_positive(atomic_long_t *v) -{ - return raw_atomic64_dec_if_positive(v); -} - -#else /* CONFIG_64BIT */ - static __always_inline long raw_atomic_long_read(const atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_read(v); +#else return raw_atomic_read(v); +#endif } static __always_inline long raw_atomic_long_read_acquire(const atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_read_acquire(v); +#else return raw_atomic_read_acquire(v); +#endif } static __always_inline void raw_atomic_long_set(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + raw_atomic64_set(v, i); +#else raw_atomic_set(v, i); +#endif } static __always_inline void raw_atomic_long_set_release(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + raw_atomic64_set_release(v, i); +#else raw_atomic_set_release(v, i); +#endif } static __always_inline void raw_atomic_long_add(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_add(i, v); +#else raw_atomic_add(i, v); +#endif } static __always_inline long raw_atomic_long_add_return(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_return(i, v); +#else return raw_atomic_add_return(i, v); +#endif } static __always_inline long raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_return_acquire(i, v); +#else return raw_atomic_add_return_acquire(i, v); 
+#endif } static __always_inline long raw_atomic_long_add_return_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_return_release(i, v); +#else return raw_atomic_add_return_release(i, v); +#endif } static __always_inline long raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_return_relaxed(i, v); +#else return raw_atomic_add_return_relaxed(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add(i, v); +#else return raw_atomic_fetch_add(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add_acquire(i, v); +#else return raw_atomic_fetch_add_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add_release(i, v); +#else return raw_atomic_fetch_add_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add_relaxed(i, v); +#else return raw_atomic_fetch_add_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_sub(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_sub(i, v); +#else raw_atomic_sub(i, v); +#endif } static __always_inline long raw_atomic_long_sub_return(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_return(i, v); +#else return raw_atomic_sub_return(i, v); +#endif } static __always_inline long raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_return_acquire(i, v); +#else return raw_atomic_sub_return_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_return_release(i, v); +#else return raw_atomic_sub_return_release(i, v); +#endif } static __always_inline long raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_return_relaxed(i, v); +#else return raw_atomic_sub_return_relaxed(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_sub(i, v); +#else return raw_atomic_fetch_sub(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_sub_acquire(i, v); +#else return raw_atomic_fetch_sub_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_sub_release(i, v); +#else return raw_atomic_fetch_sub_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_sub_relaxed(i, v); +#else return raw_atomic_fetch_sub_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_inc(atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_inc(v); +#else raw_atomic_inc(v); +#endif } static __always_inline long raw_atomic_long_inc_return(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_return(v); +#else return raw_atomic_inc_return(v); +#endif } static __always_inline long 
raw_atomic_long_inc_return_acquire(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_return_acquire(v); +#else return raw_atomic_inc_return_acquire(v); +#endif } static __always_inline long raw_atomic_long_inc_return_release(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_return_release(v); +#else return raw_atomic_inc_return_release(v); +#endif } static __always_inline long raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_return_relaxed(v); +#else return raw_atomic_inc_return_relaxed(v); +#endif } static __always_inline long raw_atomic_long_fetch_inc(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_inc(v); +#else return raw_atomic_fetch_inc(v); +#endif } static __always_inline long raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_inc_acquire(v); +#else return raw_atomic_fetch_inc_acquire(v); +#endif } static __always_inline long raw_atomic_long_fetch_inc_release(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_inc_release(v); +#else return raw_atomic_fetch_inc_release(v); +#endif } static __always_inline long raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_inc_relaxed(v); +#else return raw_atomic_fetch_inc_relaxed(v); +#endif } static __always_inline void raw_atomic_long_dec(atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_dec(v); +#else raw_atomic_dec(v); +#endif } static __always_inline long raw_atomic_long_dec_return(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_return(v); +#else return raw_atomic_dec_return(v); +#endif } static __always_inline long raw_atomic_long_dec_return_acquire(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_return_acquire(v); +#else return raw_atomic_dec_return_acquire(v); +#endif } static __always_inline long raw_atomic_long_dec_return_release(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_return_release(v); +#else return raw_atomic_dec_return_release(v); +#endif } static __always_inline long raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_return_relaxed(v); +#else return raw_atomic_dec_return_relaxed(v); +#endif } static __always_inline long raw_atomic_long_fetch_dec(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_dec(v); +#else return raw_atomic_fetch_dec(v); +#endif } static __always_inline long raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_dec_acquire(v); +#else return raw_atomic_fetch_dec_acquire(v); +#endif } static __always_inline long raw_atomic_long_fetch_dec_release(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_dec_release(v); +#else return raw_atomic_fetch_dec_release(v); +#endif } static __always_inline long raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_dec_relaxed(v); +#else return raw_atomic_fetch_dec_relaxed(v); +#endif } static __always_inline void raw_atomic_long_and(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_and(i, v); +#else raw_atomic_and(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_and(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_and(i, v); +#else return raw_atomic_fetch_and(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_and_acquire(long i, 
atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_and_acquire(i, v); +#else return raw_atomic_fetch_and_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_and_release(i, v); +#else return raw_atomic_fetch_and_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_and_relaxed(i, v); +#else return raw_atomic_fetch_and_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_andnot(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_andnot(i, v); +#else raw_atomic_andnot(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_andnot(i, v); +#else return raw_atomic_fetch_andnot(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_andnot_acquire(i, v); +#else return raw_atomic_fetch_andnot_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_andnot_release(i, v); +#else return raw_atomic_fetch_andnot_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_andnot_relaxed(i, v); +#else return raw_atomic_fetch_andnot_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_or(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_or(i, v); +#else raw_atomic_or(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_or(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_or(i, v); +#else return raw_atomic_fetch_or(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_or_acquire(i, v); +#else return raw_atomic_fetch_or_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_or_release(i, v); +#else return raw_atomic_fetch_or_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_or_relaxed(i, v); +#else return raw_atomic_fetch_or_relaxed(i, v); +#endif } static __always_inline void raw_atomic_long_xor(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + raw_atomic64_xor(i, v); +#else raw_atomic_xor(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_xor(i, v); +#else return raw_atomic_fetch_xor(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_xor_acquire(i, v); +#else return raw_atomic_fetch_xor_acquire(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_xor_release(i, v); +#else return raw_atomic_fetch_xor_release(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return 
raw_atomic64_fetch_xor_relaxed(i, v); +#else return raw_atomic_fetch_xor_relaxed(i, v); +#endif } static __always_inline long raw_atomic_long_xchg(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + return raw_atomic64_xchg(v, i); +#else return raw_atomic_xchg(v, i); +#endif } static __always_inline long raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + return raw_atomic64_xchg_acquire(v, i); +#else return raw_atomic_xchg_acquire(v, i); +#endif } static __always_inline long raw_atomic_long_xchg_release(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + return raw_atomic64_xchg_release(v, i); +#else return raw_atomic_xchg_release(v, i); +#endif } static __always_inline long raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) { +#ifdef CONFIG_64BIT + return raw_atomic64_xchg_relaxed(v, i); +#else return raw_atomic_xchg_relaxed(v, i); +#endif } static __always_inline long raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_cmpxchg(v, old, new); +#else return raw_atomic_cmpxchg(v, old, new); +#endif } static __always_inline long raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_cmpxchg_acquire(v, old, new); +#else return raw_atomic_cmpxchg_acquire(v, old, new); +#endif } static __always_inline long raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_cmpxchg_release(v, old, new); +#else return raw_atomic_cmpxchg_release(v, old, new); +#endif } static __always_inline long raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_cmpxchg_relaxed(v, old, new); +#else return raw_atomic_cmpxchg_relaxed(v, old, new); +#endif } static __always_inline bool raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); +#else return raw_atomic_try_cmpxchg(v, (int *)old, new); +#endif } static __always_inline bool raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); +#else return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new); +#endif } static __always_inline bool raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); +#else return raw_atomic_try_cmpxchg_release(v, (int *)old, new); +#endif } static __always_inline bool raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { +#ifdef CONFIG_64BIT + return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); +#else return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new); +#endif } static __always_inline bool raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_sub_and_test(i, v); +#else return raw_atomic_sub_and_test(i, v); +#endif } static __always_inline bool raw_atomic_long_dec_and_test(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_and_test(v); +#else return raw_atomic_dec_and_test(v); +#endif } static __always_inline bool raw_atomic_long_inc_and_test(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_and_test(v); +#else return raw_atomic_inc_and_test(v); +#endif } static __always_inline bool raw_atomic_long_add_negative(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return 
raw_atomic64_add_negative(i, v); +#else return raw_atomic_add_negative(i, v); +#endif } static __always_inline bool raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_negative_acquire(i, v); +#else return raw_atomic_add_negative_acquire(i, v); +#endif } static __always_inline bool raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_negative_release(i, v); +#else return raw_atomic_add_negative_release(i, v); +#endif } static __always_inline bool raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_negative_relaxed(i, v); +#else return raw_atomic_add_negative_relaxed(i, v); +#endif } static __always_inline long raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { +#ifdef CONFIG_64BIT + return raw_atomic64_fetch_add_unless(v, a, u); +#else return raw_atomic_fetch_add_unless(v, a, u); +#endif } static __always_inline bool raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { +#ifdef CONFIG_64BIT + return raw_atomic64_add_unless(v, a, u); +#else return raw_atomic_add_unless(v, a, u); +#endif } static __always_inline bool raw_atomic_long_inc_not_zero(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_not_zero(v); +#else return raw_atomic_inc_not_zero(v); +#endif } static __always_inline bool raw_atomic_long_inc_unless_negative(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_inc_unless_negative(v); +#else return raw_atomic_inc_unless_negative(v); +#endif } static __always_inline bool raw_atomic_long_dec_unless_positive(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_unless_positive(v); +#else return raw_atomic_dec_unless_positive(v); +#endif } static __always_inline long raw_atomic_long_dec_if_positive(atomic_long_t *v) { +#ifdef CONFIG_64BIT + return raw_atomic64_dec_if_positive(v); +#else return raw_atomic_dec_if_positive(v); +#endif } -#endif /* CONFIG_64BIT */ #endif /* _LINUX_ATOMIC_LONG_H */ -// 108784846d3bbbb201b8dabe621c5dc30b216206 +// ad09f849db0db5b30c82e497eeb9056a394c5f22 diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh index 13832171f7219..af27a71b37ef1 100755 --- a/scripts/atomic/gen-atomic-long.sh +++ b/scripts/atomic/gen-atomic-long.sh @@ -32,7 +32,7 @@ gen_args_cast() done } -#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...) +#gen_proto_order_variant(meta, pfx, name, sfx, order, arg...) 
 gen_proto_order_variant()
 {
 	local meta="$1"; shift
@@ -40,21 +40,24 @@ gen_proto_order_variant()
 	local name="$1"; shift
 	local sfx="$1"; shift
 	local order="$1"; shift
-	local atomic="$1"; shift
-	local int="$1"; shift
 	local atomicname="${pfx}${name}${sfx}${order}"
 	local ret="$(gen_ret_type "${meta}" "long")"
 	local params="$(gen_params "long" "atomic_long" "$@")"
-	local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
+	local argscast_32="$(gen_args_cast "int" "atomic" "$@")"
+	local argscast_64="$(gen_args_cast "s64" "atomic64" "$@")"
 	local retstmt="$(gen_ret_stmt "${meta}")"

cat <<EOF
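The heredoc body itself is not reproduced here, but the shape of the
definition the updated template emits per operation can be read off the
generated header above. A sketch for try_cmpxchg, where the argscast_64
and argscast_32 lists supply the pointer casts:

| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| #ifdef CONFIG_64BIT
| 	return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| #else
| 	return raw_atomic_try_cmpxchg(v, (int *)old, new);
| #endif
| }

Precomputing two argument lists, one per branch, is what lets a single
template instantiation cover both the CONFIG_64BIT and !CONFIG_64BIT
cases.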
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org, peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH v2 23/27] locking/atomic: scripts: simplify raw_atomic*() definitions
Date: Mon, 5 Jun 2023 08:01:20 +0100
Message-Id: <20230605070124.3741859-24-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Currently each ordering variant has several potential definitions,
with a mixture of preprocessor and C definitions, including several
copies of its C prototype, e.g.
| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Make this a bit simpler by defining the C prototype once, and writing
the various potential definitions as plain C code guarded by ifdeffery.
For example, the above becomes:

| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_andnot_acquire)
| 	return arch_atomic_fetch_andnot_acquire(i, v);
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| #elif defined(arch_atomic_fetch_andnot)
| 	return arch_atomic_fetch_andnot(i, v);
| #else
| 	return raw_atomic_fetch_and_acquire(~i, v);
| #endif
| }

Which is far easier to read.

As we now always have a single copy of the C prototype wrapping all the
potential definitions, we have an obvious single location for kerneldoc
comments.

At the same time, the fallbacks for raw_atomic*_xchg() are made to use
'new' rather than 'i' as the name of the new value. This is what the
existing fallback template used, and is more consistent with the
raw_atomic_{try_,}cmpxchg() fallbacks.

There should be no functional change as a result of this patch.
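For comparison, the release variants in the generated file follow the
mirrored pattern, with the fence placed before the relaxed op rather
than after it; this definition is taken from the atomic-arch-fallback.h
changes in this patch:

| static __always_inline int
| raw_atomic_fetch_andnot_release(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_andnot_release)
| 	return arch_atomic_fetch_andnot_release(i, v);
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| 	__atomic_release_fence();
| 	return arch_atomic_fetch_andnot_relaxed(i, v);
| #elif defined(arch_atomic_fetch_andnot)
| 	return arch_atomic_fetch_andnot(i, v);
| #else
| 	return raw_atomic_fetch_and_release(~i, v);
| #endif
| }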
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/atomic/atomic-arch-fallback.h  | 1790 +++++++++---------
 include/linux/atomic/atomic-instrumented.h   |   50 +-
 include/linux/atomic/atomic-long.h           |   26 +-
 scripts/atomic/atomics.tbl                   |    2 +-
 scripts/atomic/fallbacks/acquire             |    4 -
 scripts/atomic/fallbacks/add_negative        |    4 -
 scripts/atomic/fallbacks/add_unless          |    4 -
 scripts/atomic/fallbacks/andnot              |    4 -
 scripts/atomic/fallbacks/cmpxchg             |    4 -
 scripts/atomic/fallbacks/dec                 |    4 -
 scripts/atomic/fallbacks/dec_and_test        |    4 -
 scripts/atomic/fallbacks/dec_if_positive     |    4 -
 scripts/atomic/fallbacks/dec_unless_positive |    4 -
 scripts/atomic/fallbacks/fence               |    4 -
 scripts/atomic/fallbacks/fetch_add_unless    |    4 -
 scripts/atomic/fallbacks/inc                 |    4 -
 scripts/atomic/fallbacks/inc_and_test        |    4 -
 scripts/atomic/fallbacks/inc_not_zero        |    4 -
 scripts/atomic/fallbacks/inc_unless_negative |    4 -
 scripts/atomic/fallbacks/read_acquire        |    4 -
 scripts/atomic/fallbacks/release             |    4 -
 scripts/atomic/fallbacks/set_release         |    4 -
 scripts/atomic/fallbacks/sub_and_test        |    4 -
 scripts/atomic/fallbacks/try_cmpxchg         |    4 -
 scripts/atomic/fallbacks/xchg                |    4 -
 scripts/atomic/gen-atomic-fallback.sh        |   26 +-
 26 files changed, 901 insertions(+), 1077 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 99bc1a871dc12..470c2890ab8d6 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -428,16 +428,20 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void); #define raw_sync_cmpxchg arch_sync_cmpxchg -#define raw_atomic_read arch_atomic_read +static __always_inline int +raw_atomic_read(const atomic_t *v) +{ + return arch_atomic_read(v); +} -#if defined(arch_atomic_read_acquire) -#define raw_atomic_read_acquire arch_atomic_read_acquire -#elif defined(arch_atomic_read) -#define raw_atomic_read_acquire arch_atomic_read -#else static __always_inline int raw_atomic_read_acquire(const atomic_t *v) { +#if defined(arch_atomic_read_acquire) + return arch_atomic_read_acquire(v); +#elif defined(arch_atomic_read) + return arch_atomic_read(v); +#else int ret; if (__native_word(atomic_t)) { @@ -448,1144 +452,1088 @@ raw_atomic_read_acquire(const atomic_t *v) } return ret; -} #endif +} -#define raw_atomic_set arch_atomic_set +static __always_inline void +raw_atomic_set(atomic_t *v, int i) +{ + arch_atomic_set(v, i); +} -#if defined(arch_atomic_set_release) -#define raw_atomic_set_release arch_atomic_set_release -#elif defined(arch_atomic_set) -#define raw_atomic_set_release arch_atomic_set -#else static __always_inline void raw_atomic_set_release(atomic_t *v, int i) { +#if defined(arch_atomic_set_release) + arch_atomic_set_release(v, i); +#elif defined(arch_atomic_set) + arch_atomic_set(v, i); +#else if (__native_word(atomic_t)) { smp_store_release(&(v)->counter, i); } else { __atomic_release_fence(); raw_atomic_set(v, i); } -} #endif +} -#define raw_atomic_add arch_atomic_add +static __always_inline void +raw_atomic_add(int i, atomic_t *v) +{ + arch_atomic_add(i, v); +} -#if defined(arch_atomic_add_return) -#define raw_atomic_add_return arch_atomic_add_return -#elif defined(arch_atomic_add_return_relaxed) static __always_inline int raw_atomic_add_return(int i, atomic_t *v) { +#if defined(arch_atomic_add_return) + return arch_atomic_add_return(i, v); +#elif defined(arch_atomic_add_return_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_add_return_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else #error
"Unable to define raw_atomic_add_return" #endif +} -#if defined(arch_atomic_add_return_acquire) -#define raw_atomic_add_return_acquire arch_atomic_add_return_acquire -#elif defined(arch_atomic_add_return_relaxed) static __always_inline int raw_atomic_add_return_acquire(int i, atomic_t *v) { +#if defined(arch_atomic_add_return_acquire) + return arch_atomic_add_return_acquire(i, v); +#elif defined(arch_atomic_add_return_relaxed) int ret = arch_atomic_add_return_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_add_return) -#define raw_atomic_add_return_acquire arch_atomic_add_return + return arch_atomic_add_return(i, v); #else #error "Unable to define raw_atomic_add_return_acquire" #endif +} -#if defined(arch_atomic_add_return_release) -#define raw_atomic_add_return_release arch_atomic_add_return_release -#elif defined(arch_atomic_add_return_relaxed) static __always_inline int raw_atomic_add_return_release(int i, atomic_t *v) { +#if defined(arch_atomic_add_return_release) + return arch_atomic_add_return_release(i, v); +#elif defined(arch_atomic_add_return_relaxed) __atomic_release_fence(); return arch_atomic_add_return_relaxed(i, v); -} #elif defined(arch_atomic_add_return) -#define raw_atomic_add_return_release arch_atomic_add_return + return arch_atomic_add_return(i, v); #else #error "Unable to define raw_atomic_add_return_release" #endif +} +static __always_inline int +raw_atomic_add_return_relaxed(int i, atomic_t *v) +{ #if defined(arch_atomic_add_return_relaxed) -#define raw_atomic_add_return_relaxed arch_atomic_add_return_relaxed + return arch_atomic_add_return_relaxed(i, v); #elif defined(arch_atomic_add_return) -#define raw_atomic_add_return_relaxed arch_atomic_add_return + return arch_atomic_add_return(i, v); #else #error "Unable to define raw_atomic_add_return_relaxed" #endif +} -#if defined(arch_atomic_fetch_add) -#define raw_atomic_fetch_add arch_atomic_fetch_add -#elif defined(arch_atomic_fetch_add_relaxed) static __always_inline int raw_atomic_fetch_add(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_add) + return arch_atomic_fetch_add(i, v); +#elif defined(arch_atomic_fetch_add_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_fetch_add_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else #error "Unable to define raw_atomic_fetch_add" #endif +} -#if defined(arch_atomic_fetch_add_acquire) -#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire -#elif defined(arch_atomic_fetch_add_relaxed) static __always_inline int raw_atomic_fetch_add_acquire(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_add_acquire) + return arch_atomic_fetch_add_acquire(i, v); +#elif defined(arch_atomic_fetch_add_relaxed) int ret = arch_atomic_fetch_add_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_fetch_add) -#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add + return arch_atomic_fetch_add(i, v); #else #error "Unable to define raw_atomic_fetch_add_acquire" #endif +} -#if defined(arch_atomic_fetch_add_release) -#define raw_atomic_fetch_add_release arch_atomic_fetch_add_release -#elif defined(arch_atomic_fetch_add_relaxed) static __always_inline int raw_atomic_fetch_add_release(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_add_release) + return arch_atomic_fetch_add_release(i, v); +#elif defined(arch_atomic_fetch_add_relaxed) __atomic_release_fence(); return arch_atomic_fetch_add_relaxed(i, v); -} #elif defined(arch_atomic_fetch_add) -#define raw_atomic_fetch_add_release 
arch_atomic_fetch_add + return arch_atomic_fetch_add(i, v); #else #error "Unable to define raw_atomic_fetch_add_release" #endif +} +static __always_inline int +raw_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ #if defined(arch_atomic_fetch_add_relaxed) -#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed + return arch_atomic_fetch_add_relaxed(i, v); #elif defined(arch_atomic_fetch_add) -#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add + return arch_atomic_fetch_add(i, v); #else #error "Unable to define raw_atomic_fetch_add_relaxed" #endif +} -#define raw_atomic_sub arch_atomic_sub +static __always_inline void +raw_atomic_sub(int i, atomic_t *v) +{ + arch_atomic_sub(i, v); +} -#if defined(arch_atomic_sub_return) -#define raw_atomic_sub_return arch_atomic_sub_return -#elif defined(arch_atomic_sub_return_relaxed) static __always_inline int raw_atomic_sub_return(int i, atomic_t *v) { +#if defined(arch_atomic_sub_return) + return arch_atomic_sub_return(i, v); +#elif defined(arch_atomic_sub_return_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_sub_return_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else #error "Unable to define raw_atomic_sub_return" #endif +} -#if defined(arch_atomic_sub_return_acquire) -#define raw_atomic_sub_return_acquire arch_atomic_sub_return_acquire -#elif defined(arch_atomic_sub_return_relaxed) static __always_inline int raw_atomic_sub_return_acquire(int i, atomic_t *v) { +#if defined(arch_atomic_sub_return_acquire) + return arch_atomic_sub_return_acquire(i, v); +#elif defined(arch_atomic_sub_return_relaxed) int ret = arch_atomic_sub_return_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_sub_return) -#define raw_atomic_sub_return_acquire arch_atomic_sub_return + return arch_atomic_sub_return(i, v); #else #error "Unable to define raw_atomic_sub_return_acquire" #endif +} -#if defined(arch_atomic_sub_return_release) -#define raw_atomic_sub_return_release arch_atomic_sub_return_release -#elif defined(arch_atomic_sub_return_relaxed) static __always_inline int raw_atomic_sub_return_release(int i, atomic_t *v) { +#if defined(arch_atomic_sub_return_release) + return arch_atomic_sub_return_release(i, v); +#elif defined(arch_atomic_sub_return_relaxed) __atomic_release_fence(); return arch_atomic_sub_return_relaxed(i, v); -} #elif defined(arch_atomic_sub_return) -#define raw_atomic_sub_return_release arch_atomic_sub_return + return arch_atomic_sub_return(i, v); #else #error "Unable to define raw_atomic_sub_return_release" #endif +} +static __always_inline int +raw_atomic_sub_return_relaxed(int i, atomic_t *v) +{ #if defined(arch_atomic_sub_return_relaxed) -#define raw_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed + return arch_atomic_sub_return_relaxed(i, v); #elif defined(arch_atomic_sub_return) -#define raw_atomic_sub_return_relaxed arch_atomic_sub_return + return arch_atomic_sub_return(i, v); #else #error "Unable to define raw_atomic_sub_return_relaxed" #endif +} -#if defined(arch_atomic_fetch_sub) -#define raw_atomic_fetch_sub arch_atomic_fetch_sub -#elif defined(arch_atomic_fetch_sub_relaxed) static __always_inline int raw_atomic_fetch_sub(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_sub) + return arch_atomic_fetch_sub(i, v); +#elif defined(arch_atomic_fetch_sub_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_fetch_sub_relaxed(i, v); __atomic_post_full_fence(); return ret; -} #else #error "Unable to define raw_atomic_fetch_sub" #endif +} -#if 
defined(arch_atomic_fetch_sub_acquire) -#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire -#elif defined(arch_atomic_fetch_sub_relaxed) static __always_inline int raw_atomic_fetch_sub_acquire(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_sub_acquire) + return arch_atomic_fetch_sub_acquire(i, v); +#elif defined(arch_atomic_fetch_sub_relaxed) int ret = arch_atomic_fetch_sub_relaxed(i, v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_fetch_sub) -#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub + return arch_atomic_fetch_sub(i, v); #else #error "Unable to define raw_atomic_fetch_sub_acquire" #endif +} -#if defined(arch_atomic_fetch_sub_release) -#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub_release -#elif defined(arch_atomic_fetch_sub_relaxed) static __always_inline int raw_atomic_fetch_sub_release(int i, atomic_t *v) { +#if defined(arch_atomic_fetch_sub_release) + return arch_atomic_fetch_sub_release(i, v); +#elif defined(arch_atomic_fetch_sub_relaxed) __atomic_release_fence(); return arch_atomic_fetch_sub_relaxed(i, v); -} #elif defined(arch_atomic_fetch_sub) -#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub + return arch_atomic_fetch_sub(i, v); #else #error "Unable to define raw_atomic_fetch_sub_release" #endif +} +static __always_inline int +raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ #if defined(arch_atomic_fetch_sub_relaxed) -#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed + return arch_atomic_fetch_sub_relaxed(i, v); #elif defined(arch_atomic_fetch_sub) -#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub + return arch_atomic_fetch_sub(i, v); #else #error "Unable to define raw_atomic_fetch_sub_relaxed" #endif +} -#if defined(arch_atomic_inc) -#define raw_atomic_inc arch_atomic_inc -#else static __always_inline void raw_atomic_inc(atomic_t *v) { +#if defined(arch_atomic_inc) + arch_atomic_inc(v); +#else raw_atomic_add(1, v); -} #endif +} -#if defined(arch_atomic_inc_return) -#define raw_atomic_inc_return arch_atomic_inc_return -#elif defined(arch_atomic_inc_return_relaxed) static __always_inline int raw_atomic_inc_return(atomic_t *v) { +#if defined(arch_atomic_inc_return) + return arch_atomic_inc_return(v); +#elif defined(arch_atomic_inc_return_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_inc_return_relaxed(v); __atomic_post_full_fence(); return ret; -} #else -static __always_inline int -raw_atomic_inc_return(atomic_t *v) -{ return raw_atomic_add_return(1, v); -} #endif +} -#if defined(arch_atomic_inc_return_acquire) -#define raw_atomic_inc_return_acquire arch_atomic_inc_return_acquire -#elif defined(arch_atomic_inc_return_relaxed) static __always_inline int raw_atomic_inc_return_acquire(atomic_t *v) { +#if defined(arch_atomic_inc_return_acquire) + return arch_atomic_inc_return_acquire(v); +#elif defined(arch_atomic_inc_return_relaxed) int ret = arch_atomic_inc_return_relaxed(v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_inc_return) -#define raw_atomic_inc_return_acquire arch_atomic_inc_return + return arch_atomic_inc_return(v); #else -static __always_inline int -raw_atomic_inc_return_acquire(atomic_t *v) -{ return raw_atomic_add_return_acquire(1, v); -} #endif +} -#if defined(arch_atomic_inc_return_release) -#define raw_atomic_inc_return_release arch_atomic_inc_return_release -#elif defined(arch_atomic_inc_return_relaxed) static __always_inline int raw_atomic_inc_return_release(atomic_t *v) { +#if 
defined(arch_atomic_inc_return_release) + return arch_atomic_inc_return_release(v); +#elif defined(arch_atomic_inc_return_relaxed) __atomic_release_fence(); return arch_atomic_inc_return_relaxed(v); -} #elif defined(arch_atomic_inc_return) -#define raw_atomic_inc_return_release arch_atomic_inc_return + return arch_atomic_inc_return(v); #else -static __always_inline int -raw_atomic_inc_return_release(atomic_t *v) -{ return raw_atomic_add_return_release(1, v); -} #endif +} -#if defined(arch_atomic_inc_return_relaxed) -#define raw_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed -#elif defined(arch_atomic_inc_return) -#define raw_atomic_inc_return_relaxed arch_atomic_inc_return -#else static __always_inline int raw_atomic_inc_return_relaxed(atomic_t *v) { +#if defined(arch_atomic_inc_return_relaxed) + return arch_atomic_inc_return_relaxed(v); +#elif defined(arch_atomic_inc_return) + return arch_atomic_inc_return(v); +#else return raw_atomic_add_return_relaxed(1, v); -} #endif +} -#if defined(arch_atomic_fetch_inc) -#define raw_atomic_fetch_inc arch_atomic_fetch_inc -#elif defined(arch_atomic_fetch_inc_relaxed) static __always_inline int raw_atomic_fetch_inc(atomic_t *v) { +#if defined(arch_atomic_fetch_inc) + return arch_atomic_fetch_inc(v); +#elif defined(arch_atomic_fetch_inc_relaxed) int ret; __atomic_pre_full_fence(); ret = arch_atomic_fetch_inc_relaxed(v); __atomic_post_full_fence(); return ret; -} #else -static __always_inline int -raw_atomic_fetch_inc(atomic_t *v) -{ return raw_atomic_fetch_add(1, v); -} #endif +} -#if defined(arch_atomic_fetch_inc_acquire) -#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire -#elif defined(arch_atomic_fetch_inc_relaxed) static __always_inline int raw_atomic_fetch_inc_acquire(atomic_t *v) { +#if defined(arch_atomic_fetch_inc_acquire) + return arch_atomic_fetch_inc_acquire(v); +#elif defined(arch_atomic_fetch_inc_relaxed) int ret = arch_atomic_fetch_inc_relaxed(v); __atomic_acquire_fence(); return ret; -} #elif defined(arch_atomic_fetch_inc) -#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc + return arch_atomic_fetch_inc(v); #else -static __always_inline int -raw_atomic_fetch_inc_acquire(atomic_t *v) -{ return raw_atomic_fetch_add_acquire(1, v); -} #endif +} -#if defined(arch_atomic_fetch_inc_release) -#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc_release -#elif defined(arch_atomic_fetch_inc_relaxed) static __always_inline int raw_atomic_fetch_inc_release(atomic_t *v) { +#if defined(arch_atomic_fetch_inc_release) + return arch_atomic_fetch_inc_release(v); +#elif defined(arch_atomic_fetch_inc_relaxed) __atomic_release_fence(); return arch_atomic_fetch_inc_relaxed(v); -} #elif defined(arch_atomic_fetch_inc) -#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc + return arch_atomic_fetch_inc(v); #else -static __always_inline int -raw_atomic_fetch_inc_release(atomic_t *v) -{ return raw_atomic_fetch_add_release(1, v); -} #endif +} -#if defined(arch_atomic_fetch_inc_relaxed) -#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed -#elif defined(arch_atomic_fetch_inc) -#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc -#else static __always_inline int raw_atomic_fetch_inc_relaxed(atomic_t *v) { +#if defined(arch_atomic_fetch_inc_relaxed) + return arch_atomic_fetch_inc_relaxed(v); +#elif defined(arch_atomic_fetch_inc) + return arch_atomic_fetch_inc(v); +#else return raw_atomic_fetch_add_relaxed(1, v); -} #endif +} -#if defined(arch_atomic_dec) -#define raw_atomic_dec arch_atomic_dec -#else 
 static __always_inline void
 raw_atomic_dec(atomic_t *v)
 {
+#if defined(arch_atomic_dec)
+	arch_atomic_dec(v);
+#else
 	raw_atomic_sub(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return arch_atomic_dec_return
-#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
 raw_atomic_dec_return(atomic_t *v)
 {
+#if defined(arch_atomic_dec_return)
+	return arch_atomic_dec_return(v);
+#elif defined(arch_atomic_dec_return_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_dec_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_dec_return(atomic_t *v)
-{
 	return raw_atomic_sub_return(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_dec_return_acquire)
-#define raw_atomic_dec_return_acquire arch_atomic_dec_return_acquire
-#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
 raw_atomic_dec_return_acquire(atomic_t *v)
 {
+#if defined(arch_atomic_dec_return_acquire)
+	return arch_atomic_dec_return_acquire(v);
+#elif defined(arch_atomic_dec_return_relaxed)
 	int ret = arch_atomic_dec_return_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_acquire arch_atomic_dec_return
+	return arch_atomic_dec_return(v);
 #else
-static __always_inline int
-raw_atomic_dec_return_acquire(atomic_t *v)
-{
 	return raw_atomic_sub_return_acquire(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_dec_return_release)
-#define raw_atomic_dec_return_release arch_atomic_dec_return_release
-#elif defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
 raw_atomic_dec_return_release(atomic_t *v)
 {
+#if defined(arch_atomic_dec_return_release)
+	return arch_atomic_dec_return_release(v);
+#elif defined(arch_atomic_dec_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_dec_return_relaxed(v);
-}
 #elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_release arch_atomic_dec_return
+	return arch_atomic_dec_return(v);
 #else
-static __always_inline int
-raw_atomic_dec_return_release(atomic_t *v)
-{
 	return raw_atomic_sub_return_release(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_dec_return_relaxed)
-#define raw_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
-#elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_relaxed arch_atomic_dec_return
-#else
 static __always_inline int
 raw_atomic_dec_return_relaxed(atomic_t *v)
 {
+#if defined(arch_atomic_dec_return_relaxed)
+	return arch_atomic_dec_return_relaxed(v);
+#elif defined(arch_atomic_dec_return)
+	return arch_atomic_dec_return(v);
+#else
 	return raw_atomic_sub_return_relaxed(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec arch_atomic_fetch_dec
-#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
 raw_atomic_fetch_dec(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_dec)
+	return arch_atomic_fetch_dec(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_dec_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_fetch_dec(atomic_t *v)
-{
 	return raw_atomic_fetch_sub(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_dec_acquire)
-#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
-#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
 raw_atomic_fetch_dec_acquire(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_dec_acquire)
+	return arch_atomic_fetch_dec_acquire(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
 	int ret = arch_atomic_fetch_dec_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec
+	return arch_atomic_fetch_dec(v);
 #else
-static __always_inline int
-raw_atomic_fetch_dec_acquire(atomic_t *v)
-{
 	return raw_atomic_fetch_sub_acquire(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_dec_release)
-#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec_release
-#elif defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
 raw_atomic_fetch_dec_release(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_dec_release)
+	return arch_atomic_fetch_dec_release(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_dec_relaxed(v);
-}
 #elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec
+	return arch_atomic_fetch_dec(v);
 #else
-static __always_inline int
-raw_atomic_fetch_dec_release(atomic_t *v)
-{
 	return raw_atomic_fetch_sub_release(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_dec_relaxed)
-#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
-#elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
-#else
 static __always_inline int
 raw_atomic_fetch_dec_relaxed(atomic_t *v)
 {
+#if defined(arch_atomic_fetch_dec_relaxed)
+	return arch_atomic_fetch_dec_relaxed(v);
+#elif defined(arch_atomic_fetch_dec)
+	return arch_atomic_fetch_dec(v);
+#else
 	return raw_atomic_fetch_sub_relaxed(1, v);
-}
 #endif
+}
 
-#define raw_atomic_and arch_atomic_and
+static __always_inline void
+raw_atomic_and(int i, atomic_t *v)
+{
+	arch_atomic_and(i, v);
+}
 
-#if defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and arch_atomic_fetch_and
-#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
 raw_atomic_fetch_and(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_and)
+	return arch_atomic_fetch_and(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_and_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic_fetch_and"
 #endif
+}
 
-#if defined(arch_atomic_fetch_and_acquire)
-#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
-#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
 raw_atomic_fetch_and_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_and_acquire)
+	return arch_atomic_fetch_and_acquire(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
 	int ret = arch_atomic_fetch_and_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and
+	return arch_atomic_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_and_acquire"
 #endif
+}
 
-#if defined(arch_atomic_fetch_and_release)
-#define raw_atomic_fetch_and_release arch_atomic_fetch_and_release
-#elif defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
 raw_atomic_fetch_and_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_and_release)
+	return arch_atomic_fetch_and_release(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_and_relaxed(i, v);
-}
 #elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_release arch_atomic_fetch_and
+	return arch_atomic_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_and_release"
 #endif
+}
 
+static __always_inline int
+raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
 #if defined(arch_atomic_fetch_and_relaxed)
-#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+	return arch_atomic_fetch_and_relaxed(i, v);
 #elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and
+	return arch_atomic_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_and_relaxed"
 #endif
+}
 
-#if defined(arch_atomic_andnot)
-#define raw_atomic_andnot arch_atomic_andnot
-#else
 static __always_inline void
 raw_atomic_andnot(int i, atomic_t *v)
 {
+#if defined(arch_atomic_andnot)
+	arch_atomic_andnot(i, v);
+#else
 	raw_atomic_and(~i, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot arch_atomic_fetch_andnot
-#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
 raw_atomic_fetch_andnot(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_andnot)
+	return arch_atomic_fetch_andnot(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_andnot_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_fetch_andnot(int i, atomic_t *v)
-{
 	return raw_atomic_fetch_and(~i, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_andnot_acquire)
-#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
 raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_andnot_acquire)
+	return arch_atomic_fetch_andnot_acquire(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
+	return arch_atomic_fetch_andnot(i, v);
 #else
-static __always_inline int
-raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
 	return raw_atomic_fetch_and_acquire(~i, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_andnot_release)
-#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#elif defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
 raw_atomic_fetch_andnot_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_andnot_release)
+	return arch_atomic_fetch_andnot_release(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_andnot_relaxed(i, v);
-}
 #elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
+	return arch_atomic_fetch_andnot(i, v);
 #else
-static __always_inline int
-raw_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
 	return raw_atomic_fetch_and_release(~i, v);
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_andnot_relaxed)
-#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
-#elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
-#else
 static __always_inline int
 raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_andnot_relaxed)
+	return arch_atomic_fetch_andnot_relaxed(i, v);
+#elif defined(arch_atomic_fetch_andnot)
+	return arch_atomic_fetch_andnot(i, v);
+#else
 	return raw_atomic_fetch_and_relaxed(~i, v);
-}
 #endif
+}
 
-#define raw_atomic_or arch_atomic_or
+static __always_inline void
+raw_atomic_or(int i, atomic_t *v)
+{
+	arch_atomic_or(i, v);
+}
 
-#if defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or arch_atomic_fetch_or
-#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
 raw_atomic_fetch_or(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_or)
+	return arch_atomic_fetch_or(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_or_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic_fetch_or"
 #endif
+}
 
-#if defined(arch_atomic_fetch_or_acquire)
-#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
-#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
 raw_atomic_fetch_or_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_or_acquire)
+	return arch_atomic_fetch_or_acquire(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
 	int ret = arch_atomic_fetch_or_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or
+	return arch_atomic_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_or_acquire"
 #endif
+}
 
-#if defined(arch_atomic_fetch_or_release)
-#define raw_atomic_fetch_or_release arch_atomic_fetch_or_release
-#elif defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
 raw_atomic_fetch_or_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_or_release)
+	return arch_atomic_fetch_or_release(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_or_relaxed(i, v);
-}
 #elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_release arch_atomic_fetch_or
+	return arch_atomic_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_or_release"
 #endif
+}
 
+static __always_inline int
+raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
 #if defined(arch_atomic_fetch_or_relaxed)
-#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+	return arch_atomic_fetch_or_relaxed(i, v);
 #elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or
+	return arch_atomic_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_or_relaxed"
 #endif
+}
 
-#define raw_atomic_xor arch_atomic_xor
+static __always_inline void
+raw_atomic_xor(int i, atomic_t *v)
+{
+	arch_atomic_xor(i, v);
+}
 
-#if defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor arch_atomic_fetch_xor
-#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
 raw_atomic_fetch_xor(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_xor)
+	return arch_atomic_fetch_xor(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_fetch_xor_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic_fetch_xor"
 #endif
+}
 
-#if defined(arch_atomic_fetch_xor_acquire)
-#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
-#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
 raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_xor_acquire)
+	return arch_atomic_fetch_xor_acquire(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
 	int ret = arch_atomic_fetch_xor_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor
+	return arch_atomic_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_xor_acquire"
 #endif
+}
 
-#if defined(arch_atomic_fetch_xor_release)
-#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor_release
-#elif defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
 raw_atomic_fetch_xor_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_fetch_xor_release)
+	return arch_atomic_fetch_xor_release(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_fetch_xor_relaxed(i, v);
-}
 #elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor
+	return arch_atomic_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_xor_release"
 #endif
+}
 
+static __always_inline int
+raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
 #if defined(arch_atomic_fetch_xor_relaxed)
-#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+	return arch_atomic_fetch_xor_relaxed(i, v);
 #elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
+	return arch_atomic_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic_fetch_xor_relaxed"
 #endif
+}
 
-#if defined(arch_atomic_xchg)
-#define raw_atomic_xchg arch_atomic_xchg
-#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-raw_atomic_xchg(atomic_t *v, int i)
+raw_atomic_xchg(atomic_t *v, int new)
 {
+#if defined(arch_atomic_xchg)
+	return arch_atomic_xchg(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic_xchg_relaxed(v, i);
+	ret = arch_atomic_xchg_relaxed(v, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_xchg(atomic_t *v, int new)
-{
 	return raw_xchg(&v->counter, new);
-}
 #endif
+}
 
-#if defined(arch_atomic_xchg_acquire)
-#define raw_atomic_xchg_acquire arch_atomic_xchg_acquire
-#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int i)
+raw_atomic_xchg_acquire(atomic_t *v, int new)
 {
-	int ret = arch_atomic_xchg_relaxed(v, i);
+#if defined(arch_atomic_xchg_acquire)
+	return arch_atomic_xchg_acquire(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
+	int ret = arch_atomic_xchg_relaxed(v, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_acquire arch_atomic_xchg
+	return arch_atomic_xchg(v, new);
 #else
-static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int new)
-{
 	return raw_xchg_acquire(&v->counter, new);
-}
 #endif
+}
 
-#if defined(arch_atomic_xchg_release)
-#define raw_atomic_xchg_release arch_atomic_xchg_release
-#elif defined(arch_atomic_xchg_relaxed)
 static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int i)
+raw_atomic_xchg_release(atomic_t *v, int new)
 {
+#if defined(arch_atomic_xchg_release)
+	return arch_atomic_xchg_release(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
 	__atomic_release_fence();
-	return arch_atomic_xchg_relaxed(v, i);
-}
+	return arch_atomic_xchg_relaxed(v, new);
 #elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_release arch_atomic_xchg
+	return arch_atomic_xchg(v, new);
 #else
-static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int new)
-{
 	return raw_xchg_release(&v->counter, new);
-}
 #endif
+}
 
-#if defined(arch_atomic_xchg_relaxed)
-#define raw_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_relaxed arch_atomic_xchg
-#else
 static __always_inline int
 raw_atomic_xchg_relaxed(atomic_t *v, int new)
 {
+#if defined(arch_atomic_xchg_relaxed)
+	return arch_atomic_xchg_relaxed(v, new);
+#elif defined(arch_atomic_xchg)
+	return arch_atomic_xchg(v, new);
+#else
 	return raw_xchg_relaxed(&v->counter, new);
-}
 #endif
+}
 
-#if defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg arch_atomic_cmpxchg
-#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
 raw_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
+#if defined(arch_atomic_cmpxchg)
+	return arch_atomic_cmpxchg(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
 	int ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline int
-raw_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
 	return raw_cmpxchg(&v->counter, old, new);
-}
 #endif
+}
 
-#if defined(arch_atomic_cmpxchg_acquire)
-#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
 raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
+#if defined(arch_atomic_cmpxchg_acquire)
+	return arch_atomic_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
 	int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg
+	return arch_atomic_cmpxchg(v, old, new);
 #else
-static __always_inline int
-raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
-{
 	return raw_cmpxchg_acquire(&v->counter, old, new);
-}
 #endif
+}
 
-#if defined(arch_atomic_cmpxchg_release)
-#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg_release
-#elif defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
 raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
 {
+#if defined(arch_atomic_cmpxchg_release)
+	return arch_atomic_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_cmpxchg_relaxed(v, old, new);
-}
 #elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg
+	return arch_atomic_cmpxchg(v, old, new);
 #else
-static __always_inline int
-raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
-{
 	return raw_cmpxchg_release(&v->counter, old, new);
-}
 #endif
+}
 
-#if defined(arch_atomic_cmpxchg_relaxed)
-#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
-#else
 static __always_inline int
 raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
 {
+#if defined(arch_atomic_cmpxchg_relaxed)
+	return arch_atomic_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic_cmpxchg)
+	return arch_atomic_cmpxchg(v, old, new);
+#else
 	return raw_cmpxchg_relaxed(&v->counter, old, new);
-}
 #endif
+}
 
-#if defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg arch_atomic_try_cmpxchg
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
+#if defined(arch_atomic_try_cmpxchg)
+	return arch_atomic_try_cmpxchg(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
 	bool ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline bool
-raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
 	int r, o = *old;
 	r = raw_atomic_cmpxchg(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}
 
-#if defined(arch_atomic_try_cmpxchg_acquire)
-#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 {
+#if defined(arch_atomic_try_cmpxchg_acquire)
+	return arch_atomic_try_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
 	bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
+	return arch_atomic_try_cmpxchg(v, old, new);
 #else
-static __always_inline bool
-raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
 	int r, o = *old;
 	r = raw_atomic_cmpxchg_acquire(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}
 
-#if defined(arch_atomic_try_cmpxchg_release)
-#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 {
+#if defined(arch_atomic_try_cmpxchg_release)
+	return arch_atomic_try_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
 #elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
+	return arch_atomic_try_cmpxchg(v, old, new);
 #else
-static __always_inline bool
-raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
 	int r, o = *old;
 	r = raw_atomic_cmpxchg_release(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}
 
-#if defined(arch_atomic_try_cmpxchg_relaxed)
-#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
-#elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
-#else
 static __always_inline bool
 raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 {
+#if defined(arch_atomic_try_cmpxchg_relaxed)
+	return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg)
+	return arch_atomic_try_cmpxchg(v, old, new);
+#else
 	int r, o = *old;
 	r = raw_atomic_cmpxchg_relaxed(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}
 
-#if defined(arch_atomic_sub_and_test)
-#define raw_atomic_sub_and_test arch_atomic_sub_and_test
-#else
 static __always_inline bool
 raw_atomic_sub_and_test(int i, atomic_t *v)
 {
+#if defined(arch_atomic_sub_and_test)
+	return arch_atomic_sub_and_test(i, v);
+#else
 	return raw_atomic_sub_return(i, v) == 0;
-}
 #endif
+}
 
-#if defined(arch_atomic_dec_and_test)
-#define raw_atomic_dec_and_test arch_atomic_dec_and_test
-#else
 static __always_inline bool
 raw_atomic_dec_and_test(atomic_t *v)
 {
+#if defined(arch_atomic_dec_and_test)
+	return arch_atomic_dec_and_test(v);
+#else
 	return raw_atomic_dec_return(v) == 0;
-}
 #endif
+}
 
-#if defined(arch_atomic_inc_and_test)
-#define raw_atomic_inc_and_test arch_atomic_inc_and_test
-#else
 static __always_inline bool
 raw_atomic_inc_and_test(atomic_t *v)
 {
+#if defined(arch_atomic_inc_and_test)
+	return arch_atomic_inc_and_test(v);
+#else
 	return raw_atomic_inc_return(v) == 0;
-}
 #endif
+}
 
-#if defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative arch_atomic_add_negative
-#elif defined(arch_atomic_add_negative_relaxed)
 static __always_inline bool
 raw_atomic_add_negative(int i, atomic_t *v)
 {
+#if defined(arch_atomic_add_negative)
+	return arch_atomic_add_negative(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
 	bool ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic_add_negative_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline bool
-raw_atomic_add_negative(int i, atomic_t *v)
-{
 	return raw_atomic_add_return(i, v) < 0;
-}
 #endif
+}
 
-#if defined(arch_atomic_add_negative_acquire)
-#define raw_atomic_add_negative_acquire arch_atomic_add_negative_acquire
-#elif defined(arch_atomic_add_negative_relaxed)
 static __always_inline bool
 raw_atomic_add_negative_acquire(int i, atomic_t *v)
 {
+#if defined(arch_atomic_add_negative_acquire)
+	return arch_atomic_add_negative_acquire(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
 	bool ret = arch_atomic_add_negative_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_acquire arch_atomic_add_negative
+	return arch_atomic_add_negative(i, v);
 #else
-static __always_inline bool
-raw_atomic_add_negative_acquire(int i, atomic_t *v)
-{
 	return raw_atomic_add_return_acquire(i, v) < 0;
-}
 #endif
+}
 
-#if defined(arch_atomic_add_negative_release)
-#define raw_atomic_add_negative_release arch_atomic_add_negative_release
-#elif defined(arch_atomic_add_negative_relaxed)
 static __always_inline bool
 raw_atomic_add_negative_release(int i, atomic_t *v)
 {
+#if defined(arch_atomic_add_negative_release)
+	return arch_atomic_add_negative_release(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
 	__atomic_release_fence();
 	return arch_atomic_add_negative_relaxed(i, v);
-}
 #elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_release arch_atomic_add_negative
+	return arch_atomic_add_negative(i, v);
 #else
-static __always_inline bool
-raw_atomic_add_negative_release(int i, atomic_t *v)
-{
 	return raw_atomic_add_return_release(i, v) < 0;
-}
 #endif
+}
 
-#if defined(arch_atomic_add_negative_relaxed)
-#define raw_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
-#elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_relaxed arch_atomic_add_negative
-#else
 static __always_inline bool
 raw_atomic_add_negative_relaxed(int i, atomic_t *v)
 {
+#if defined(arch_atomic_add_negative_relaxed)
+	return arch_atomic_add_negative_relaxed(i, v);
+#elif defined(arch_atomic_add_negative)
+	return arch_atomic_add_negative(i, v);
+#else
 	return raw_atomic_add_return_relaxed(i, v) < 0;
-}
 #endif
+}
 
-#if defined(arch_atomic_fetch_add_unless)
-#define raw_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-#else
 static __always_inline int
 raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
+#if defined(arch_atomic_fetch_add_unless)
+	return arch_atomic_fetch_add_unless(v, a, u);
+#else
 	int c = raw_atomic_read(v);
 
 	do {
@@ -1594,35 +1542,35 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 	} while (!raw_atomic_try_cmpxchg(v, &c, c + a));
 
 	return c;
-}
 #endif
+}
 
-#if defined(arch_atomic_add_unless)
-#define raw_atomic_add_unless arch_atomic_add_unless
-#else
 static __always_inline bool
 raw_atomic_add_unless(atomic_t *v, int a, int u)
 {
+#if defined(arch_atomic_add_unless)
+	return arch_atomic_add_unless(v, a, u);
+#else
 	return raw_atomic_fetch_add_unless(v, a, u) != u;
-}
 #endif
+}
 
-#if defined(arch_atomic_inc_not_zero)
-#define raw_atomic_inc_not_zero arch_atomic_inc_not_zero
-#else
 static __always_inline bool
 raw_atomic_inc_not_zero(atomic_t *v)
 {
+#if defined(arch_atomic_inc_not_zero)
+	return arch_atomic_inc_not_zero(v);
+#else
 	return raw_atomic_add_unless(v, 1, 0);
-}
 #endif
+}
 
-#if defined(arch_atomic_inc_unless_negative)
-#define raw_atomic_inc_unless_negative arch_atomic_inc_unless_negative
-#else
 static __always_inline bool
 raw_atomic_inc_unless_negative(atomic_t *v)
 {
+#if defined(arch_atomic_inc_unless_negative)
+	return arch_atomic_inc_unless_negative(v);
+#else
 	int c = raw_atomic_read(v);
 
 	do {
@@ -1631,15 +1579,15 @@ raw_atomic_inc_unless_negative(atomic_t *v)
 	} while (!raw_atomic_try_cmpxchg(v, &c, c + 1));
 
 	return true;
-}
 #endif
+}
 
-#if defined(arch_atomic_dec_unless_positive)
-#define raw_atomic_dec_unless_positive arch_atomic_dec_unless_positive
-#else
 static __always_inline bool
 raw_atomic_dec_unless_positive(atomic_t *v)
 {
+#if defined(arch_atomic_dec_unless_positive)
+	return arch_atomic_dec_unless_positive(v);
+#else
 	int c = raw_atomic_read(v);

	do {
@@ -1648,15 +1596,15 @@ raw_atomic_dec_unless_positive(atomic_t *v)
 	} while (!raw_atomic_try_cmpxchg(v, &c, c - 1));
 
 	return true;
-}
 #endif
+}
 
-#if defined(arch_atomic_dec_if_positive)
-#define raw_atomic_dec_if_positive arch_atomic_dec_if_positive
-#else
 static __always_inline int
 raw_atomic_dec_if_positive(atomic_t *v)
 {
+#if defined(arch_atomic_dec_if_positive)
+	return arch_atomic_dec_if_positive(v);
+#else
 	int dec, c = raw_atomic_read(v);
 
 	do {
@@ -1666,23 +1614,27 @@ raw_atomic_dec_if_positive(atomic_t *v)
 	} while (!raw_atomic_try_cmpxchg(v, &c, dec));
 
 	return dec;
-}
 #endif
+}
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #include <asm-generic/atomic64.h>
 #endif
 
-#define raw_atomic64_read arch_atomic64_read
+static __always_inline s64
+raw_atomic64_read(const atomic64_t *v)
+{
+	return arch_atomic64_read(v);
+}
 
-#if defined(arch_atomic64_read_acquire)
-#define raw_atomic64_read_acquire arch_atomic64_read_acquire
-#elif defined(arch_atomic64_read)
-#define raw_atomic64_read_acquire arch_atomic64_read
-#else
 static __always_inline s64
 raw_atomic64_read_acquire(const atomic64_t *v)
 {
+#if defined(arch_atomic64_read_acquire)
+	return arch_atomic64_read_acquire(v);
++#elif defined(arch_atomic64_read)
+	return arch_atomic64_read(v);
+#else
 	s64 ret;
 
 	if (__native_word(atomic64_t)) {
@@ -1693,1144 +1645,1088 @@ raw_atomic64_read_acquire(const atomic64_t *v)
 	}
 
 	return ret;
-}
 #endif
+}
 
-#define raw_atomic64_set arch_atomic64_set
+static __always_inline void
+raw_atomic64_set(atomic64_t *v, s64 i)
+{
+	arch_atomic64_set(v, i);
+}
 
-#if defined(arch_atomic64_set_release)
-#define raw_atomic64_set_release arch_atomic64_set_release
-#elif defined(arch_atomic64_set)
-#define raw_atomic64_set_release arch_atomic64_set
-#else
 static __always_inline void
 raw_atomic64_set_release(atomic64_t *v, s64 i)
 {
+#if defined(arch_atomic64_set_release)
+	arch_atomic64_set_release(v, i);
+#elif defined(arch_atomic64_set)
+	arch_atomic64_set(v, i);
+#else
 	if (__native_word(atomic64_t)) {
 		smp_store_release(&(v)->counter, i);
 	} else {
 		__atomic_release_fence();
 		raw_atomic64_set(v, i);
 	}
-}
 #endif
+}
 
-#define raw_atomic64_add arch_atomic64_add
+static __always_inline void
+raw_atomic64_add(s64 i, atomic64_t *v)
+{
+	arch_atomic64_add(i, v);
+}
 
-#if defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return arch_atomic64_add_return
-#elif defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
 raw_atomic64_add_return(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_return)
+	return arch_atomic64_add_return(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_add_return_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_add_return"
 #endif
+}
 
-#if defined(arch_atomic64_add_return_acquire)
-#define raw_atomic64_add_return_acquire arch_atomic64_add_return_acquire
-#elif defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
 raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_return_acquire)
+	return arch_atomic64_add_return_acquire(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
 	s64 ret = arch_atomic64_add_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_acquire arch_atomic64_add_return
+	return arch_atomic64_add_return(i, v);
 #else
 #error "Unable to define raw_atomic64_add_return_acquire"
 #endif
+}
 
-#if defined(arch_atomic64_add_return_release)
-#define raw_atomic64_add_return_release arch_atomic64_add_return_release
-#elif defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
 raw_atomic64_add_return_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_return_release)
+	return arch_atomic64_add_return_release(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_add_return_relaxed(i, v);
-}
 #elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_release arch_atomic64_add_return
+	return arch_atomic64_add_return(i, v);
 #else
 #error "Unable to define raw_atomic64_add_return_release"
 #endif
+}
 
+static __always_inline s64
+raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_add_return_relaxed)
-#define raw_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+	return arch_atomic64_add_return_relaxed(i, v);
 #elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_relaxed arch_atomic64_add_return
+	return arch_atomic64_add_return(i, v);
 #else
 #error "Unable to define raw_atomic64_add_return_relaxed"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add arch_atomic64_fetch_add
-#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_add)
+	return arch_atomic64_fetch_add(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_add_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_add"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_add_acquire)
-#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
-#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_add_acquire)
+	return arch_atomic64_fetch_add_acquire(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
 	s64 ret = arch_atomic64_fetch_add_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add
+	return arch_atomic64_fetch_add(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_add_acquire"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_add_release)
-#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add_release
-#elif defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_add_release)
+	return arch_atomic64_fetch_add_release(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_add_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add
+	return arch_atomic64_fetch_add(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_add_release"
 #endif
+}
 
+static __always_inline s64
+raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_add_relaxed)
-#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+	return arch_atomic64_fetch_add_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
+	return arch_atomic64_fetch_add(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_add_relaxed"
 #endif
+}
 
-#define raw_atomic64_sub arch_atomic64_sub
+static __always_inline void
+raw_atomic64_sub(s64 i, atomic64_t *v)
+{
+	arch_atomic64_sub(i, v);
+}
 
-#if defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return arch_atomic64_sub_return
-#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
 raw_atomic64_sub_return(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_sub_return)
+	return arch_atomic64_sub_return(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_sub_return_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_sub_return"
 #endif
+}
 
-#if defined(arch_atomic64_sub_return_acquire)
-#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
-#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
 raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_sub_return_acquire)
+	return arch_atomic64_sub_return_acquire(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
 	s64 ret = arch_atomic64_sub_return_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return
+	return arch_atomic64_sub_return(i, v);
 #else
 #error "Unable to define raw_atomic64_sub_return_acquire"
 #endif
+}
 
-#if defined(arch_atomic64_sub_return_release)
-#define raw_atomic64_sub_return_release arch_atomic64_sub_return_release
-#elif defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
 raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_sub_return_release)
+	return arch_atomic64_sub_return_release(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_sub_return_relaxed(i, v);
-}
 #elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_release arch_atomic64_sub_return
+	return arch_atomic64_sub_return(i, v);
 #else
 #error "Unable to define raw_atomic64_sub_return_release"
 #endif
+}
 
+static __always_inline s64
+raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_sub_return_relaxed)
-#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
+	return arch_atomic64_sub_return_relaxed(i, v);
 #elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return
+	return arch_atomic64_sub_return(i, v);
 #else
 #error "Unable to define raw_atomic64_sub_return_relaxed"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub arch_atomic64_fetch_sub
-#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_sub)
+	return arch_atomic64_fetch_sub(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_sub_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_sub"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_sub_acquire)
-#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
-#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_sub_acquire)
+	return arch_atomic64_fetch_sub_acquire(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 	s64 ret = arch_atomic64_fetch_sub_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
+	return arch_atomic64_fetch_sub(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_sub_acquire"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_sub_release)
-#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
-#elif defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_sub_release)
+	return arch_atomic64_fetch_sub_release(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_sub_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub
+	return arch_atomic64_fetch_sub(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_sub_release"
 #endif
+}
 
+static __always_inline s64
+raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_sub_relaxed)
-#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
+	return arch_atomic64_fetch_sub_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
+	return arch_atomic64_fetch_sub(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_sub_relaxed"
 #endif
+}
 
-#if defined(arch_atomic64_inc)
-#define raw_atomic64_inc arch_atomic64_inc
-#else
 static __always_inline void
 raw_atomic64_inc(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc)
+	arch_atomic64_inc(v);
+#else
 	raw_atomic64_add(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return arch_atomic64_inc_return
-#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
 raw_atomic64_inc_return(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_return)
+	return arch_atomic64_inc_return(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_inc_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_inc_return(atomic64_t *v)
-{
 	return raw_atomic64_add_return(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_inc_return_acquire)
-#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
-#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
 raw_atomic64_inc_return_acquire(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_return_acquire)
+	return arch_atomic64_inc_return_acquire(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
 	s64 ret = arch_atomic64_inc_return_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return
+	return arch_atomic64_inc_return(v);
 #else
-static __always_inline s64
-raw_atomic64_inc_return_acquire(atomic64_t *v)
-{
 	return raw_atomic64_add_return_acquire(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_inc_return_release)
-#define raw_atomic64_inc_return_release arch_atomic64_inc_return_release
-#elif defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
 raw_atomic64_inc_return_release(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_return_release)
+	return arch_atomic64_inc_return_release(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_inc_return_relaxed(v);
-}
 #elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_release arch_atomic64_inc_return
+	return arch_atomic64_inc_return(v);
 #else
-static __always_inline s64
-raw_atomic64_inc_return_release(atomic64_t *v)
-{
 	return raw_atomic64_add_return_release(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_inc_return_relaxed)
-#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
-#elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return
-#else
 static __always_inline s64
 raw_atomic64_inc_return_relaxed(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_return_relaxed)
+	return arch_atomic64_inc_return_relaxed(v);
+#elif defined(arch_atomic64_inc_return)
+	return arch_atomic64_inc_return(v);
+#else
 	return raw_atomic64_add_return_relaxed(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc arch_atomic64_fetch_inc
-#elif defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_inc(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_inc)
+	return arch_atomic64_fetch_inc(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_inc_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_fetch_inc(atomic64_t *v)
-{
 	return raw_atomic64_fetch_add(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_inc_acquire)
-#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
-#elif defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_inc_acquire(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_inc_acquire)
+	return arch_atomic64_fetch_inc_acquire(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
 	s64 ret = arch_atomic64_fetch_inc_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
+	return arch_atomic64_fetch_inc(v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_inc_acquire(atomic64_t *v)
-{
 	return raw_atomic64_fetch_add_acquire(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_inc_release)
-#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
-#elif defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_inc_release(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_inc_release)
+	return arch_atomic64_fetch_inc_release(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_inc_relaxed(v);
-}
 #elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc
+	return arch_atomic64_fetch_inc(v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_inc_release(atomic64_t *v)
-{
 	return raw_atomic64_fetch_add_release(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_inc_relaxed)
-#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
-#elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
-#else
 static __always_inline s64
 raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_inc_relaxed)
+	return arch_atomic64_fetch_inc_relaxed(v);
+#elif defined(arch_atomic64_fetch_inc)
+	return arch_atomic64_fetch_inc(v);
+#else
 	return raw_atomic64_fetch_add_relaxed(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_dec)
-#define raw_atomic64_dec arch_atomic64_dec
-#else
 static __always_inline void
 raw_atomic64_dec(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec)
+	arch_atomic64_dec(v);
+#else
 	raw_atomic64_sub(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return arch_atomic64_dec_return
-#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
 raw_atomic64_dec_return(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_return)
+	return arch_atomic64_dec_return(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_dec_return_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_dec_return(atomic64_t *v)
-{
 	return raw_atomic64_sub_return(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_dec_return_acquire)
-#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
-#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
 raw_atomic64_dec_return_acquire(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_return_acquire)
+	return arch_atomic64_dec_return_acquire(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
 	s64 ret = arch_atomic64_dec_return_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return
+	return arch_atomic64_dec_return(v);
 #else
-static __always_inline s64
-raw_atomic64_dec_return_acquire(atomic64_t *v)
-{
 	return raw_atomic64_sub_return_acquire(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_dec_return_release)
-#define raw_atomic64_dec_return_release arch_atomic64_dec_return_release
-#elif defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
 raw_atomic64_dec_return_release(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_return_release)
+	return arch_atomic64_dec_return_release(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_dec_return_relaxed(v);
-}
 #elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_release arch_atomic64_dec_return
+	return arch_atomic64_dec_return(v);
 #else
-static __always_inline s64
-raw_atomic64_dec_return_release(atomic64_t *v)
-{
 	return raw_atomic64_sub_return_release(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_dec_return_relaxed)
-#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
-#elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return
-#else
 static __always_inline s64
 raw_atomic64_dec_return_relaxed(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_return_relaxed)
+	return arch_atomic64_dec_return_relaxed(v);
+#elif defined(arch_atomic64_dec_return)
+	return arch_atomic64_dec_return(v);
+#else
 	return raw_atomic64_sub_return_relaxed(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec arch_atomic64_fetch_dec
-#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_dec(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_dec)
+	return arch_atomic64_fetch_dec(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_dec_relaxed(v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_fetch_dec(atomic64_t *v)
-{
 	return raw_atomic64_fetch_sub(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_dec_acquire)
-#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
-#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_dec_acquire(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_dec_acquire)
+	return arch_atomic64_fetch_dec_acquire(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 	s64 ret = arch_atomic64_fetch_dec_relaxed(v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
+	return arch_atomic64_fetch_dec(v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_dec_acquire(atomic64_t *v)
-{
 	return raw_atomic64_fetch_sub_acquire(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_dec_release)
-#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
-#elif defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_dec_release(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_dec_release)
+	return arch_atomic64_fetch_dec_release(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_dec_relaxed(v);
-}
 #elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec
+	return arch_atomic64_fetch_dec(v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_dec_release(atomic64_t *v)
-{
 	return raw_atomic64_fetch_sub_release(1, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_dec_relaxed)
-#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
-#elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
-#else
 static __always_inline s64
 raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_dec_relaxed)
+	return arch_atomic64_fetch_dec_relaxed(v);
+#elif defined(arch_atomic64_fetch_dec)
+	return arch_atomic64_fetch_dec(v);
+#else
 	return raw_atomic64_fetch_sub_relaxed(1, v);
-}
 #endif
+}
 
-#define raw_atomic64_and arch_atomic64_and
+static __always_inline void
+raw_atomic64_and(s64 i, atomic64_t *v)
+{
+	arch_atomic64_and(i, v);
+}
 
-#if defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and arch_atomic64_fetch_and
-#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_and(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_and)
+	return arch_atomic64_fetch_and(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_and_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_and"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_and_acquire)
-#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
-#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_and_acquire)
+	return arch_atomic64_fetch_and_acquire(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
 	s64 ret = arch_atomic64_fetch_and_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and
+	return arch_atomic64_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_and_acquire"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_and_release)
-#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and_release
-#elif defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_and_release)
+	return arch_atomic64_fetch_and_release(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_and_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and
+	return arch_atomic64_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_and_release"
 #endif
+}
 
+static __always_inline s64
+raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_and_relaxed)
-#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+	return arch_atomic64_fetch_and_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
+	return arch_atomic64_fetch_and(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_and_relaxed"
 #endif
+}
 
-#if defined(arch_atomic64_andnot)
-#define raw_atomic64_andnot arch_atomic64_andnot
-#else
 static __always_inline void
 raw_atomic64_andnot(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_andnot)
+	arch_atomic64_andnot(i, v);
+#else
 	raw_atomic64_and(~i, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot arch_atomic64_fetch_andnot
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_andnot)
+	return arch_atomic64_fetch_andnot(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_andnot_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
-{
 	return raw_atomic64_fetch_and(~i, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_andnot_acquire)
-#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_andnot_acquire)
+	return arch_atomic64_fetch_andnot_acquire(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 	s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
+	return arch_atomic64_fetch_andnot(i, v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
-{
 	return raw_atomic64_fetch_and_acquire(~i, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_andnot_release)
-#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_andnot_release)
+	return arch_atomic64_fetch_andnot_release(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_andnot_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
+	return arch_atomic64_fetch_andnot(i, v);
 #else
-static __always_inline s64
-raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
-{
 	return raw_atomic64_fetch_and_release(~i, v);
-}
 #endif
+}
 
-#if defined(arch_atomic64_fetch_andnot_relaxed)
-#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
-#elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
-#else
 static __always_inline s64
 raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_andnot_relaxed)
+	return arch_atomic64_fetch_andnot_relaxed(i, v);
+#elif defined(arch_atomic64_fetch_andnot)
+	return arch_atomic64_fetch_andnot(i, v);
+#else
 	return raw_atomic64_fetch_and_relaxed(~i, v);
-}
 #endif
+}
 
-#define raw_atomic64_or arch_atomic64_or
+static __always_inline void
+raw_atomic64_or(s64 i, atomic64_t *v)
+{
+	arch_atomic64_or(i, v);
+}
 
-#if defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or arch_atomic64_fetch_or
-#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_or(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_or)
+	return arch_atomic64_fetch_or(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_or_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_or"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_or_acquire)
-#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
-#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_or_acquire)
+	return arch_atomic64_fetch_or_acquire(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
 	s64 ret = arch_atomic64_fetch_or_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or
+	return arch_atomic64_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_or_acquire"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_or_release)
-#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or_release
-#elif defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_or_release)
+	return arch_atomic64_fetch_or_release(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_or_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or
+	return arch_atomic64_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_or_release"
 #endif
+}
 
+static __always_inline s64
+raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_or_relaxed)
-#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
+	return arch_atomic64_fetch_or_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
+	return arch_atomic64_fetch_or(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_or_relaxed"
 #endif
+}
 
-#define raw_atomic64_xor arch_atomic64_xor
+static __always_inline void
+raw_atomic64_xor(s64 i, atomic64_t *v)
+{
+	arch_atomic64_xor(i, v);
+}
 
-#if defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor arch_atomic64_fetch_xor
-#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_xor)
+	return arch_atomic64_fetch_xor(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_fetch_xor_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
 #error "Unable to define raw_atomic64_fetch_xor"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_xor_acquire)
-#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
-#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_xor_acquire)
+	return arch_atomic64_fetch_xor_acquire(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 	s64 ret = arch_atomic64_fetch_xor_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
+	return arch_atomic64_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_xor_acquire"
 #endif
+}
 
-#if defined(arch_atomic64_fetch_xor_release)
-#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
-#elif defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
 raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_fetch_xor_release)
+	return arch_atomic64_fetch_xor_release(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_fetch_xor_relaxed(i, v);
-}
 #elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor
+	return arch_atomic64_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_xor_release"
 #endif
+}
 
+static __always_inline s64
+raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
 #if defined(arch_atomic64_fetch_xor_relaxed)
-#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
+	return arch_atomic64_fetch_xor_relaxed(i, v);
 #elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
+	return arch_atomic64_fetch_xor(i, v);
 #else
 #error "Unable to define raw_atomic64_fetch_xor_relaxed"
 #endif
+}
 
-#if defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg arch_atomic64_xchg
-#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 i)
+raw_atomic64_xchg(atomic64_t *v, s64 new)
 {
+#if defined(arch_atomic64_xchg)
+	return arch_atomic64_xchg(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
-	ret = arch_atomic64_xchg_relaxed(v, i);
+	ret = arch_atomic64_xchg_relaxed(v, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 new)
-{
 	return raw_xchg(&v->counter, new);
-}
 #endif
+}
 
-#if defined(arch_atomic64_xchg_acquire)
-#define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire
-#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
 {
-	s64 ret = arch_atomic64_xchg_relaxed(v, i);
+#if defined(arch_atomic64_xchg_acquire)
+	return arch_atomic64_xchg_acquire(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
+	s64 ret = arch_atomic64_xchg_relaxed(v, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_acquire arch_atomic64_xchg
+	return arch_atomic64_xchg(v, new);
 #else
-static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
-{
 	return raw_xchg_acquire(&v->counter, new);
-}
 #endif
+}
 
-#if defined(arch_atomic64_xchg_release)
-#define raw_atomic64_xchg_release arch_atomic64_xchg_release
-#elif defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 i)
+raw_atomic64_xchg_release(atomic64_t *v, s64 new)
 {
+#if defined(arch_atomic64_xchg_release)
+	return arch_atomic64_xchg_release(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
 	__atomic_release_fence();
-	return arch_atomic64_xchg_relaxed(v, i);
-}
+	return arch_atomic64_xchg_relaxed(v, new);
 #elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_release arch_atomic64_xchg
+	return arch_atomic64_xchg(v, new);
 #else
-static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 new)
-{
 	return raw_xchg_release(&v->counter, new);
-}
 #endif
+}
 
-#if defined(arch_atomic64_xchg_relaxed)
-#define raw_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
-#elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_relaxed arch_atomic64_xchg
-#else
 static __always_inline s64
 raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
 {
+#if defined(arch_atomic64_xchg_relaxed)
+	return arch_atomic64_xchg_relaxed(v, new);
+#elif defined(arch_atomic64_xchg)
+	return arch_atomic64_xchg(v, new);
+#else
 	return raw_xchg_relaxed(&v->counter, new);
-}
 #endif
+}
 
-#if defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg arch_atomic64_cmpxchg
-#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
 raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
+#if defined(arch_atomic64_cmpxchg)
+	return arch_atomic64_cmpxchg(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 	s64 ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline s64
-raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
-{
 	return raw_cmpxchg(&v->counter, old, new);
-}
 #endif
+}
 
-#if defined(arch_atomic64_cmpxchg_acquire)
-#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
-#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
 raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
 {
+#if defined(arch_atomic64_cmpxchg_acquire)
+	return arch_atomic64_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 	s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
+	return arch_atomic64_cmpxchg(v, old, new);
 #else
-static __always_inline s64
-raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
-{
 	return raw_cmpxchg_acquire(&v->counter, old, new);
-}
 #endif
+}
 
-#if defined(arch_atomic64_cmpxchg_release)
-#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
-#elif defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
 raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
 {
+#if defined(arch_atomic64_cmpxchg_release)
+	return arch_atomic64_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_cmpxchg_relaxed(v, old, new);
-}
 #elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg
+	return arch_atomic64_cmpxchg(v, old, new);
 #else
-static __always_inline s64
-raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
-{
 	return raw_cmpxchg_release(&v->counter, old, new);
-}
 #endif
+}
 
-#if defined(arch_atomic64_cmpxchg_relaxed)
-#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
-#elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
-#else
 static __always_inline s64
 raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
 {
+#if defined(arch_atomic64_cmpxchg_relaxed)
+	return arch_atomic64_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic64_cmpxchg)
+	return arch_atomic64_cmpxchg(v, old, new);
+#else
 	return raw_cmpxchg_relaxed(&v->counter, old, new);
-}
 #endif
+}
 
-#if defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
+#if defined(arch_atomic64_try_cmpxchg)
+	return arch_atomic64_try_cmpxchg(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 	bool ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline bool
-raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
-{
 	s64 r, o = *old;
 	r = raw_atomic64_cmpxchg(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}
 
-#if defined(arch_atomic64_try_cmpxchg_acquire)
-#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 {
+#if defined(arch_atomic64_try_cmpxchg_acquire)
+	return arch_atomic64_try_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 	bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
+	return arch_atomic64_try_cmpxchg(v, old, new);
 #else
-static __always_inline bool
-raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
 	s64 r, o = *old;
 	r = raw_atomic64_cmpxchg_acquire(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}
 
-#if defined(arch_atomic64_try_cmpxchg_release)
-#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 static __always_inline bool
 raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 {
+#if defined(arch_atomic64_try_cmpxchg_release)
+	return arch_atomic64_try_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
 	__atomic_release_fence();
 	return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-}
 #elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
+	return arch_atomic64_try_cmpxchg(v, old, new);
 #else
-static __always_inline bool
-raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
 	s64 r, o = *old;
 	r = raw_atomic64_cmpxchg_release(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}
 
-#if defined(arch_atomic64_try_cmpxchg_relaxed)
-#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
-#elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
-#else
 static __always_inline bool
 raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
 {
+#if defined(arch_atomic64_try_cmpxchg_relaxed)
+	return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg)
+	return arch_atomic64_try_cmpxchg(v, old, new);
+#else
 	s64 r, o = *old;
 	r = raw_atomic64_cmpxchg_relaxed(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
-}
 #endif
+}
 
-#if defined(arch_atomic64_sub_and_test)
-#define raw_atomic64_sub_and_test arch_atomic64_sub_and_test
-#else
 static __always_inline bool
 raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_sub_and_test)
+	return arch_atomic64_sub_and_test(i, v);
+#else
 	return raw_atomic64_sub_return(i, v) == 0;
-}
 #endif
+}
 
-#if defined(arch_atomic64_dec_and_test)
-#define raw_atomic64_dec_and_test arch_atomic64_dec_and_test
-#else
 static __always_inline bool
 raw_atomic64_dec_and_test(atomic64_t *v)
 {
+#if defined(arch_atomic64_dec_and_test)
+	return arch_atomic64_dec_and_test(v);
+#else
 	return raw_atomic64_dec_return(v) == 0;
-}
 #endif
+}
 
-#if defined(arch_atomic64_inc_and_test)
-#define raw_atomic64_inc_and_test arch_atomic64_inc_and_test
-#else
 static __always_inline bool
 raw_atomic64_inc_and_test(atomic64_t *v)
 {
+#if defined(arch_atomic64_inc_and_test)
+	return arch_atomic64_inc_and_test(v);
+#else
 	return raw_atomic64_inc_return(v) == 0;
-}
 #endif
+}
 
-#if defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative arch_atomic64_add_negative
-#elif defined(arch_atomic64_add_negative_relaxed)
 static __always_inline bool
 raw_atomic64_add_negative(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_negative)
+	return arch_atomic64_add_negative(i, v);
+#elif defined(arch_atomic64_add_negative_relaxed)
 	bool ret;
 	__atomic_pre_full_fence();
 	ret = arch_atomic64_add_negative_relaxed(i, v);
 	__atomic_post_full_fence();
 	return ret;
-}
 #else
-static __always_inline bool
-raw_atomic64_add_negative(s64 i, atomic64_t *v)
-{
 	return raw_atomic64_add_return(i, v) < 0;
-}
 #endif
+}
 
-#if defined(arch_atomic64_add_negative_acquire)
-#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
-#elif defined(arch_atomic64_add_negative_relaxed)
 static __always_inline bool
 raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_negative_acquire)
+	return arch_atomic64_add_negative_acquire(i, v);
+#elif defined(arch_atomic64_add_negative_relaxed)
 	bool ret = arch_atomic64_add_negative_relaxed(i, v);
 	__atomic_acquire_fence();
 	return ret;
-}
 #elif defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative
+	return arch_atomic64_add_negative(i, v);
 #else
-static __always_inline bool
-raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
-{
 	return raw_atomic64_add_return_acquire(i, v) < 0;
-}
 #endif
+}
 
-#if defined(arch_atomic64_add_negative_release)
-#define raw_atomic64_add_negative_release arch_atomic64_add_negative_release
-#elif defined(arch_atomic64_add_negative_relaxed)
 static __always_inline bool
 raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
 {
+#if defined(arch_atomic64_add_negative_release)
+	return arch_atomic64_add_negative_release(i, v);
++#elif defined(arch_atomic64_add_negative_relaxed)
 	__atomic_release_fence();
 	return
arch_atomic64_add_negative_relaxed(i, v); -} #elif defined(arch_atomic64_add_negative) -#define raw_atomic64_add_negative_release arch_atomic64_add_negative + return arch_atomic64_add_negative(i, v); #else -static __always_inline bool -raw_atomic64_add_negative_release(s64 i, atomic64_t *v) -{ return raw_atomic64_add_return_release(i, v) < 0; -} #endif +} -#if defined(arch_atomic64_add_negative_relaxed) -#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed -#elif defined(arch_atomic64_add_negative) -#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative -#else static __always_inline bool raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { +#if defined(arch_atomic64_add_negative_relaxed) + return arch_atomic64_add_negative_relaxed(i, v); +#elif defined(arch_atomic64_add_negative) + return arch_atomic64_add_negative(i, v); +#else return raw_atomic64_add_return_relaxed(i, v) < 0; -} #endif +} -#if defined(arch_atomic64_fetch_add_unless) -#define raw_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless -#else static __always_inline s64 raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { +#if defined(arch_atomic64_fetch_add_unless) + return arch_atomic64_fetch_add_unless(v, a, u); +#else s64 c = raw_atomic64_read(v); do { @@ -2839,35 +2735,35 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) } while (!raw_atomic64_try_cmpxchg(v, &c, c + a)); return c; -} #endif +} -#if defined(arch_atomic64_add_unless) -#define raw_atomic64_add_unless arch_atomic64_add_unless -#else static __always_inline bool raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { +#if defined(arch_atomic64_add_unless) + return arch_atomic64_add_unless(v, a, u); +#else return raw_atomic64_fetch_add_unless(v, a, u) != u; -} #endif +} -#if defined(arch_atomic64_inc_not_zero) -#define raw_atomic64_inc_not_zero arch_atomic64_inc_not_zero -#else static __always_inline bool raw_atomic64_inc_not_zero(atomic64_t *v) { +#if defined(arch_atomic64_inc_not_zero) + return arch_atomic64_inc_not_zero(v); +#else return raw_atomic64_add_unless(v, 1, 0); -} #endif +} -#if defined(arch_atomic64_inc_unless_negative) -#define raw_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative -#else static __always_inline bool raw_atomic64_inc_unless_negative(atomic64_t *v) { +#if defined(arch_atomic64_inc_unless_negative) + return arch_atomic64_inc_unless_negative(v); +#else s64 c = raw_atomic64_read(v); do { @@ -2876,15 +2772,15 @@ raw_atomic64_inc_unless_negative(atomic64_t *v) } while (!raw_atomic64_try_cmpxchg(v, &c, c + 1)); return true; -} #endif +} -#if defined(arch_atomic64_dec_unless_positive) -#define raw_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive -#else static __always_inline bool raw_atomic64_dec_unless_positive(atomic64_t *v) { +#if defined(arch_atomic64_dec_unless_positive) + return arch_atomic64_dec_unless_positive(v); +#else s64 c = raw_atomic64_read(v); do { @@ -2893,15 +2789,15 @@ raw_atomic64_dec_unless_positive(atomic64_t *v) } while (!raw_atomic64_try_cmpxchg(v, &c, c - 1)); return true; -} #endif +} -#if defined(arch_atomic64_dec_if_positive) -#define raw_atomic64_dec_if_positive arch_atomic64_dec_if_positive -#else static __always_inline s64 raw_atomic64_dec_if_positive(atomic64_t *v) { +#if defined(arch_atomic64_dec_if_positive) + return arch_atomic64_dec_if_positive(v); +#else s64 dec, c = raw_atomic64_read(v); do { @@ -2911,8 +2807,8 @@ raw_atomic64_dec_if_positive(atomic64_t *v) } while (!raw_atomic64_try_cmpxchg(v, &c, dec)); return 
dec; -} #endif +} #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// c2048fccede6fac923252290e2b303949d5dec83 +// 205e090382132f1fc85e48b46e722865f9c81309 diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h index 90ee2f55af770..5491c89dc03a0 100644 --- a/include/linux/atomic/atomic-instrumented.h +++ b/include/linux/atomic/atomic-instrumented.h @@ -462,33 +462,33 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v) } static __always_inline int -atomic_xchg(atomic_t *v, int i) +atomic_xchg(atomic_t *v, int new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_xchg(v, i); + return raw_atomic_xchg(v, new); } static __always_inline int -atomic_xchg_acquire(atomic_t *v, int i) +atomic_xchg_acquire(atomic_t *v, int new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_xchg_acquire(v, i); + return raw_atomic_xchg_acquire(v, new); } static __always_inline int -atomic_xchg_release(atomic_t *v, int i) +atomic_xchg_release(atomic_t *v, int new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_xchg_release(v, i); + return raw_atomic_xchg_release(v, new); } static __always_inline int -atomic_xchg_relaxed(atomic_t *v, int i) +atomic_xchg_relaxed(atomic_t *v, int new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_xchg_relaxed(v, i); + return raw_atomic_xchg_relaxed(v, new); } static __always_inline int @@ -1103,33 +1103,33 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) } static __always_inline s64 -atomic64_xchg(atomic64_t *v, s64 i) +atomic64_xchg(atomic64_t *v, s64 new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic64_xchg(v, i); + return raw_atomic64_xchg(v, new); } static __always_inline s64 -atomic64_xchg_acquire(atomic64_t *v, s64 i) +atomic64_xchg_acquire(atomic64_t *v, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic64_xchg_acquire(v, i); + return raw_atomic64_xchg_acquire(v, new); } static __always_inline s64 -atomic64_xchg_release(atomic64_t *v, s64 i) +atomic64_xchg_release(atomic64_t *v, s64 new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic64_xchg_release(v, i); + return raw_atomic64_xchg_release(v, new); } static __always_inline s64 -atomic64_xchg_relaxed(atomic64_t *v, s64 i) +atomic64_xchg_relaxed(atomic64_t *v, s64 new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic64_xchg_relaxed(v, i); + return raw_atomic64_xchg_relaxed(v, new); } static __always_inline s64 @@ -1744,33 +1744,33 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) } static __always_inline long -atomic_long_xchg(atomic_long_t *v, long i) +atomic_long_xchg(atomic_long_t *v, long new) { kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_long_xchg(v, i); + return raw_atomic_long_xchg(v, new); } static __always_inline long -atomic_long_xchg_acquire(atomic_long_t *v, long i) +atomic_long_xchg_acquire(atomic_long_t *v, long new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_long_xchg_acquire(v, i); + return raw_atomic_long_xchg_acquire(v, new); } static __always_inline long -atomic_long_xchg_release(atomic_long_t *v, long i) +atomic_long_xchg_release(atomic_long_t *v, long new) { kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_long_xchg_release(v, i); + return raw_atomic_long_xchg_release(v, new); } static __always_inline long -atomic_long_xchg_relaxed(atomic_long_t *v, long i) 
+atomic_long_xchg_relaxed(atomic_long_t *v, long new) { instrument_atomic_read_write(v, sizeof(*v)); - return raw_atomic_long_xchg_relaxed(v, i); + return raw_atomic_long_xchg_relaxed(v, new); } static __always_inline long @@ -2231,4 +2231,4 @@ atomic_long_dec_if_positive(atomic_long_t *v) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// f6502977180430e61c1a7c4e5e665f04f501fb8d +// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index 63e0b4078ebd5..f564f71ff8afc 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -622,42 +622,42 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) } static __always_inline long -raw_atomic_long_xchg(atomic_long_t *v, long i) +raw_atomic_long_xchg(atomic_long_t *v, long new) { #ifdef CONFIG_64BIT - return raw_atomic64_xchg(v, i); + return raw_atomic64_xchg(v, new); #else - return raw_atomic_xchg(v, i); + return raw_atomic_xchg(v, new); #endif } static __always_inline long -raw_atomic_long_xchg_acquire(atomic_long_t *v, long i) +raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) { #ifdef CONFIG_64BIT - return raw_atomic64_xchg_acquire(v, i); + return raw_atomic64_xchg_acquire(v, new); #else - return raw_atomic_xchg_acquire(v, i); + return raw_atomic_xchg_acquire(v, new); #endif } static __always_inline long -raw_atomic_long_xchg_release(atomic_long_t *v, long i) +raw_atomic_long_xchg_release(atomic_long_t *v, long new) { #ifdef CONFIG_64BIT - return raw_atomic64_xchg_release(v, i); + return raw_atomic64_xchg_release(v, new); #else - return raw_atomic_xchg_release(v, i); + return raw_atomic_xchg_release(v, new); #endif } static __always_inline long -raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i) +raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) { #ifdef CONFIG_64BIT - return raw_atomic64_xchg_relaxed(v, i); + return raw_atomic64_xchg_relaxed(v, new); #else - return raw_atomic_xchg_relaxed(v, i); + return raw_atomic_xchg_relaxed(v, new); #endif } @@ -872,4 +872,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v) } #endif /* _LINUX_ATOMIC_LONG_H */ -// ad09f849db0db5b30c82e497eeb9056a394c5f22 +// e785d25cc3f220b7d473d36aac9da85dd7eb13a8 diff --git a/scripts/atomic/atomics.tbl b/scripts/atomic/atomics.tbl index 85ca8d9b5c279..903946cbf1b3e 100644 --- a/scripts/atomic/atomics.tbl +++ b/scripts/atomic/atomics.tbl @@ -27,7 +27,7 @@ and vF i v andnot vF i v or vF i v xor vF i v -xchg I v i +xchg I v i:new cmpxchg I v i:old i:new try_cmpxchg B v p:old i:new sub_and_test b i v diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire index b0f732a5c46ef..4da0cab3604e2 100755 --- a/scripts/atomic/fallbacks/acquire +++ b/scripts/atomic/fallbacks/acquire @@ -1,9 +1,5 @@ cat <counter, old, new); -} EOF diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec index a660ac65994bd..60d286d40300f 100755 --- a/scripts/atomic/fallbacks/dec +++ b/scripts/atomic/fallbacks/dec @@ -1,7 +1,3 @@ cat <counter, i); } else { __atomic_release_fence(); raw_${atomic}_set(v, i); } -} EOF diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test index 8975a496d495c..d1f746fe0ca4d 100755 --- a/scripts/atomic/fallbacks/sub_and_test +++ b/scripts/atomic/fallbacks/sub_and_test @@ -1,7 +1,3 @@ cat <counter, new); -} EOF diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh index 86aca4f9f315a..2b470d31e3539 100755 --- 
a/scripts/atomic/gen-atomic-fallback.sh +++ b/scripts/atomic/gen-atomic-fallback.sh @@ -60,13 +60,23 @@ gen_proto_order_variant() local name="$1"; shift local sfx="$1"; shift local order="$1"; shift - local atomic="$1" + local atomic="$1"; shift + local int="$1"; shift local atomicname="${atomic}_${pfx}${name}${sfx}${order}" local basename="${atomic}_${pfx}${name}${sfx}" local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")" + local ret="$(gen_ret_type "${meta}" "${int}")" + local retstmt="$(gen_ret_stmt "${meta}")" + local params="$(gen_params "${int}" "${atomic}" "$@")" + local args="$(gen_args "$@")" + + printf "static __always_inline ${ret}\n" + printf "raw_${atomicname}(${params})\n" + printf "{\n" + # Where there is no possible fallback, this order variant is mandatory # and must be provided by arch code. Add a comment to the header to # make this obvious. @@ -75,33 +85,35 @@ gen_proto_order_variant() # define this order variant as a C function without a preprocessor # symbol. if [ -z ${template} ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then - printf "#define raw_${atomicname} arch_${atomicname}\n\n" + printf "\t${retstmt}arch_${atomicname}(${args});\n" + printf "}\n\n" return fi printf "#if defined(arch_${atomicname})\n" - printf "#define raw_${atomicname} arch_${atomicname}\n" + printf "\t${retstmt}arch_${atomicname}(${args});\n" # Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then printf "#elif defined(arch_${basename}_relaxed)\n" - gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" + gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@" fi # Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops if [ ! -z "${order}" ]; then printf "#elif defined(arch_${basename})\n" - printf "#define raw_${atomicname} arch_${basename}\n" + printf "\t${retstmt}arch_${basename}(${args});\n" fi printf "#else\n" if [ ! 
-z "${template}" ]; then - gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" + gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@" else printf "#error \"Unable to define raw_${atomicname}\"\n" fi - printf "#endif\n\n" + printf "#endif\n" + printf "}\n\n" } From patchwork Mon Jun 5 07:01:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 103115 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2500654vqr; Mon, 5 Jun 2023 00:11:03 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6f1TmGPzMA4PzZ2XmQk1dXgz3Ivpc9DXW85MEETvWa18uUaCfpKSWuedU8xstMlHGQQFDR X-Received: by 2002:a17:902:c1c4:b0:1af:f8a8:5ba4 with SMTP id c4-20020a170902c1c400b001aff8a85ba4mr2104833plc.4.1685949063175; Mon, 05 Jun 2023 00:11:03 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685949063; cv=none; d=google.com; s=arc-20160816; b=DIZ6qCTz9I3Ww43/ktwYrJ3pP7FqqaI5kR7mnykaSk72qsYZCEaaScnNPMhKnoitnP skn07ZfOudqPzKkS4Rle8NKqx9HBTj9FychtkJ/qd7Y7PHC7ZmC4iBxuG7oo+TEvUxqV +mHPD0G6Y3LxvjfcaXMybS3t/u8iP/ya96uGkjcMwIvM3eQmW4Q7ub3NkmtrBv9+V4oF 2nDDM3ze3Jeps07TJe4QvoKeqY18jwtLhtOAUY38e5vh/36UytAOtoCs+LqVymu7EWOS 6aVzQmGcIxoGNHdcc20CPPsN96eYKK1cUVmmx+j/BdFIi22JKZF8tyl0yoqt26kr4smt /Ing== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=OZfBGqxcmfrO+pJvAmBTMskX0JCCxkCrf73Hbad1qlo=; b=g3phYlgi2Hn7imtGMaFtv+7Swp8nxKGrlaYmL5neAQBu3YViLp2jbclx/jjZk9wfn4 xRBIA0TcNmPQJ12+Jp7R7MMq7pAzb+m2icaCL4kRTIkeuVw4KUVumPJUqco2b9hUM1Yo DvQ1JhgXk4KQYGF37cA34nIWd4XtDnBb6zrKH7i+teevK8VmRB5wWw3ba/PY50DhYFLw Xm6srGU7Gf9lHH98dNfeGEqC3lfIuK+VQg5SYDSuRjUOJ+k1F8EBZD/U6SkbTMJQr9At urPG0IUsDDfk1AnJdkybaw9W36AAcQLWd0KMcPd2JSo7TSCgT6p6eWF8WTr7ktCRq5zU +fog== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From patchwork Mon Jun 5 07:01:21 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org, peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH v2 24/27] docs: scripts: kernel-doc: accept bitwise negation like ~@var
Date: Mon, 5 Jun 2023 08:01:21 +0100
Message-Id: <20230605070124.3741859-25-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

In some cases we'd like to indicate the bitwise negation of a parameter, e.g.

  ~@var

This will be helpful for describing the atomic andnot operations, where we'd like to write comments of the form:

  Atomically updates @v to (@v & ~@i)

Which kernel-doc currently transforms to:

  Atomically updates **v** to (**v** & ~**i**)

Rather than the preferable form:

  Atomically updates **v** to (**v** & **~i**)

This is similar to what we did for '!@var' in commit:

  ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")

This patch follows the same pattern as that commit, allowing a '~' prefix on a param ref and causing kernel-doc to generate the preferred form above.
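As a concrete illustration (example_andnot() is a hypothetical function, named here only to demonstrate the markup), a kerneldoc comment can now reference the bitwise negation of a parameter directly:

/**
 * example_andnot() - hypothetical op, shown only to illustrate '~@var'
 * @i: int value
 * @v: pointer to atomic_t
 *
 * Atomically updates @v to (@v & ~@i) with relaxed ordering.
 */

With this change, kernel-doc renders the '~@i' reference as **~i** rather than ~**i**, i.e. the preferred form above.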
Suggested-by: Akira Yokosawa
Link: https://lore.kernel.org/lkml/a5405368-d04c-f95c-ad18-95f429120dbe@gmail.com
Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Jonathan Corbet
Cc: Mauro Carvalho Chehab
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Randy Dunlap
Cc: Will Deacon
---
 scripts/kernel-doc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/kernel-doc b/scripts/kernel-doc
index 2486689ffc7b4..eb70c1fd4e868 100755
--- a/scripts/kernel-doc
+++ b/scripts/kernel-doc
@@ -64,7 +64,7 @@ my $type_constant = '\b``([^\`]+)``\b';
 my $type_constant2 = '\%([-_\w]+)';
 my $type_func = '(\w+)\(\)';
 my $type_param = '\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
-my $type_param_ref = '([\!]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
+my $type_param_ref = '([\!~]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
 my $type_fp_param = '\@(\w+)\(\)'; # Special RST handling for func ptr params
 my $type_fp_param2 = '\@(\w+->\S+)\(\)'; # Special RST handling for structs with func ptr params
 my $type_env = '(\$\w+)';

From patchwork Mon Jun 5 07:01:22 2023
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org, peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org, will@kernel.org
Subject: [PATCH v2 25/27] locking/atomic: scripts: generate kerneldoc comments
Date: Mon, 5 Jun 2023 08:01:22 +0100
Message-Id: <20230605070124.3741859-26-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Currently the atomics are documented in Documentation/atomic_t.txt, and have no kerneldoc comments. There are a sufficient number of gotchas (e.g. semantics, noinstr-safety) that it would be nice to have comments to call these out, and it would be nice to have kerneldoc comments such that these can be collated.

While it's possible to derive the semantics from the code, this can be painful given the amount of indirection we currently have (e.g. fallback paths), and it's easy to be misled by naming, e.g.

* The unconditional void-returning ops *only* have relaxed variants without a _relaxed suffix, and can easily be mistaken for being fully ordered.

  It would be nice to give these a _relaxed() suffix, but this would result in significant churn throughout the kernel.
* Our naming of conditional and unconditional+test ops is rather inconsistent, and it can be difficult to derive the name of an operation, or to identify whether an op is conditional or unconditional+test.

  Some ops are clearly conditional:

  - dec_if_positive
  - add_unless
  - dec_unless_positive
  - inc_unless_negative

  Some ops are clearly unconditional+test:

  - sub_and_test
  - dec_and_test
  - inc_and_test

  However, what exactly those ops test is not obvious. A _test_zero suffix might be clearer.

  Others could be read ambiguously:

  - inc_not_zero   // conditional
  - add_negative   // unconditional+test

  It would probably be worth renaming these, e.g. to inc_unless_zero and add_test_negative. A sketch of the two shapes follows the sign-off block below.

As a step towards making this more consistent and easier to understand, this patch adds kerneldoc comments for all generated *atomic*_*() functions. These are generated from templates, with some common text shared, making it easy to extend these in future if necessary.

I've tried to make these as consistent and clear as possible, and I've deliberately ensured:

* All ops have their ordering explicitly mentioned in the short and long description.

* All test ops have "test" in their short description.

* All ops are described as an expression using their usual C operator. For example:

  andnot: "Atomically updates @v to (@v & ~@i)"
  inc:    "Atomically updates @v to (@v + 1)"

  Which may be clearer to non-native English speakers, and allows all the operations to be described in the same style.

* All conditional ops have their condition described as an expression using the usual C operators. For example:

  add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
  cmpxchg:    "If (@v == @old), atomically updates @v to @new"

  Which may be clearer to non-native English speakers, and allows all the operations to be described in the same style.

* All bitwise ops (and, andnot, or, xor) explicitly mention that they are bitwise in their short description, so that they are not mistaken for performing their logical equivalents.

* The noinstr safety of each op is explicitly described, with a description of whether or not to use the raw_ form of the op.

There should be no functional change as a result of this patch.

Reported-by: Paul E. McKenney
Signed-off-by: Mark Rutland
Reviewed-by: Kees Cook
Cc: Boqun Feng
Cc: Jonathan Corbet
Cc: Peter Zijlstra
Cc: Will Deacon
Reviewed-by: Paul E. McKenney
---
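To make the conditional vs unconditional+test distinction above concrete, here is an illustrative sketch (the example_*() names are hypothetical, the bodies follow the fallback patterns elsewhere in this series, and ordering is ignored for brevity):

static __always_inline bool
example_inc_not_zero(atomic_t *v)
{
	/* Conditional: only updates @v when (@v != 0). */
	int c = raw_atomic_read(v);

	do {
		if (c == 0)
			return false;	/* condition failed, @v untouched */
	} while (!raw_atomic_try_cmpxchg(v, &c, c + 1));

	return true;
}

static __always_inline bool
example_inc_and_test(atomic_t *v)
{
	/* Unconditional+test: always updates @v, then tests the new value. */
	return raw_atomic_inc_return(v) == 0;
}

A conditional op reports whether the update happened at all; an unconditional+test op always updates and reports a property of the result.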
 include/linux/atomic/atomic-arch-fallback.h  | 1848 +++++++++++-
 include/linux/atomic/atomic-instrumented.h   | 2771 +++++++++++++++++-
 include/linux/atomic/atomic-long.h           |  925 +++++-
 scripts/atomic/atomic-tbl.sh                 |  112 +-
 scripts/atomic/gen-atomic-fallback.sh        |    2 +
 scripts/atomic/gen-atomic-instrumented.sh    |    2 +
 scripts/atomic/gen-atomic-long.sh            |    2 +
 scripts/atomic/kerneldoc/add                 |   13 +
 scripts/atomic/kerneldoc/add_negative        |   13 +
 scripts/atomic/kerneldoc/add_unless          |   18 +
 scripts/atomic/kerneldoc/and                 |   13 +
 scripts/atomic/kerneldoc/andnot              |   13 +
 scripts/atomic/kerneldoc/cmpxchg             |   14 +
 scripts/atomic/kerneldoc/dec                 |   12 +
 scripts/atomic/kerneldoc/dec_and_test        |   12 +
 scripts/atomic/kerneldoc/dec_if_positive     |   12 +
 scripts/atomic/kerneldoc/dec_unless_positive |   12 +
 scripts/atomic/kerneldoc/inc                 |   12 +
 scripts/atomic/kerneldoc/inc_and_test        |   12 +
 scripts/atomic/kerneldoc/inc_not_zero        |   12 +
 scripts/atomic/kerneldoc/inc_unless_negative |   12 +
 scripts/atomic/kerneldoc/or                  |   13 +
 scripts/atomic/kerneldoc/read                |   12 +
 scripts/atomic/kerneldoc/set                 |   13 +
 scripts/atomic/kerneldoc/sub                 |   13 +
 scripts/atomic/kerneldoc/sub_and_test        |   13 +
 scripts/atomic/kerneldoc/try_cmpxchg         |   15 +
 scripts/atomic/kerneldoc/xchg                |   13 +
 scripts/atomic/kerneldoc/xor                 |   13 +
 29 files changed, 5940 insertions(+), 7 deletions(-)
 create mode 100644 scripts/atomic/kerneldoc/add
 create mode 100644 scripts/atomic/kerneldoc/add_negative
 create mode 100644 scripts/atomic/kerneldoc/add_unless
 create mode 100644 scripts/atomic/kerneldoc/and
 create mode 100644 scripts/atomic/kerneldoc/andnot
 create mode 100644 scripts/atomic/kerneldoc/cmpxchg
 create mode 100644 scripts/atomic/kerneldoc/dec
 create mode 100644 scripts/atomic/kerneldoc/dec_and_test
 create mode 100644 scripts/atomic/kerneldoc/dec_if_positive
 create mode 100644 scripts/atomic/kerneldoc/dec_unless_positive
 create mode 100644 scripts/atomic/kerneldoc/inc
 create mode 100644 scripts/atomic/kerneldoc/inc_and_test
 create mode 100644 scripts/atomic/kerneldoc/inc_not_zero
 create mode 100644 scripts/atomic/kerneldoc/inc_unless_negative
 create mode 100644 scripts/atomic/kerneldoc/or
 create mode 100644 scripts/atomic/kerneldoc/read
 create mode 100644 scripts/atomic/kerneldoc/set
 create mode 100644 scripts/atomic/kerneldoc/sub
 create mode 100644 scripts/atomic/kerneldoc/sub_and_test
 create mode 100644 scripts/atomic/kerneldoc/try_cmpxchg
 create mode 100644 scripts/atomic/kerneldoc/xchg
 create mode 100644 scripts/atomic/kerneldoc/xor

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 470c2890ab8d6..8cded57dd7a6f 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -428,12 +428,32 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void);

 #define raw_sync_cmpxchg arch_sync_cmpxchg

+/**
+ * raw_atomic_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_read() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
 static __always_inline int
 raw_atomic_read(const atomic_t *v)
 {
 	return arch_atomic_read(v);
 }

+/**
+ * raw_atomic_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_read_acquire() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */ static __always_inline int raw_atomic_read_acquire(const atomic_t *v) { @@ -455,12 +475,34 @@ raw_atomic_read_acquire(const atomic_t *v) #endif } +/** + * raw_atomic_set() - atomic set with relaxed ordering + * @v: pointer to atomic_t + * @i: int value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_set() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_set(atomic_t *v, int i) { arch_atomic_set(v, i); } +/** + * raw_atomic_set_release() - atomic set with release ordering + * @v: pointer to atomic_t + * @i: int value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Safe to use in noinstr code; prefer atomic_set_release() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_set_release(atomic_t *v, int i) { @@ -478,12 +520,34 @@ raw_atomic_set_release(atomic_t *v, int i) #endif } +/** + * raw_atomic_add() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_add() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_add(int i, atomic_t *v) { arch_atomic_add(i, v); } +/** + * raw_atomic_add_return() - atomic add with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_add_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_add_return(int i, atomic_t *v) { @@ -500,6 +564,17 @@ raw_atomic_add_return(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_return_acquire() - atomic add with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_add_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_add_return_acquire(int i, atomic_t *v) { @@ -516,6 +591,17 @@ raw_atomic_add_return_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_return_release() - atomic add with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_add_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_add_return_release(int i, atomic_t *v) { @@ -531,6 +617,17 @@ raw_atomic_add_return_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_return_relaxed() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_add_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_add_return_relaxed(int i, atomic_t *v) { @@ -543,6 +640,17 @@ raw_atomic_add_return_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add() - atomic add with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline int raw_atomic_fetch_add(int i, atomic_t *v) { @@ -559,6 +667,17 @@ raw_atomic_fetch_add(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add_acquire() - atomic add with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_add_acquire(int i, atomic_t *v) { @@ -575,6 +694,17 @@ raw_atomic_fetch_add_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add_release() - atomic add with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_add_release(int i, atomic_t *v) { @@ -590,6 +720,17 @@ raw_atomic_fetch_add_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_add_relaxed(int i, atomic_t *v) { @@ -602,12 +743,34 @@ raw_atomic_fetch_add_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_sub() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_sub() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_sub(int i, atomic_t *v) { arch_atomic_sub(i, v); } +/** + * raw_atomic_sub_return() - atomic subtract with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_sub_return(int i, atomic_t *v) { @@ -624,6 +787,17 @@ raw_atomic_sub_return(int i, atomic_t *v) #endif } +/** + * raw_atomic_sub_return_acquire() - atomic subtract with acquire ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_sub_return_acquire(int i, atomic_t *v) { @@ -640,6 +814,17 @@ raw_atomic_sub_return_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_sub_return_release() - atomic subtract with release ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_return_release() elsewhere. + * + * Return: The updated value of @v. 
+ */ static __always_inline int raw_atomic_sub_return_release(int i, atomic_t *v) { @@ -655,6 +840,17 @@ raw_atomic_sub_return_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_sub_return_relaxed(int i, atomic_t *v) { @@ -667,6 +863,17 @@ raw_atomic_sub_return_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_sub() - atomic subtract with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_sub() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_sub(int i, atomic_t *v) { @@ -683,6 +890,17 @@ raw_atomic_fetch_sub(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_sub_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_sub_acquire(int i, atomic_t *v) { @@ -699,6 +917,17 @@ raw_atomic_fetch_sub_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_sub_release() - atomic subtract with release ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_sub_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_sub_release(int i, atomic_t *v) { @@ -714,6 +943,17 @@ raw_atomic_fetch_sub_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_sub_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) { @@ -726,6 +966,16 @@ raw_atomic_fetch_sub_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_inc() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_inc(atomic_t *v) { @@ -736,6 +986,16 @@ raw_atomic_inc(atomic_t *v) #endif } +/** + * raw_atomic_inc_return() - atomic increment with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_inc_return(atomic_t *v) { @@ -752,6 +1012,16 @@ raw_atomic_inc_return(atomic_t *v) #endif } +/** + * raw_atomic_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_inc_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_inc_return_acquire(atomic_t *v) { @@ -768,6 +1038,16 @@ raw_atomic_inc_return_acquire(atomic_t *v) #endif } +/** + * raw_atomic_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_inc_return_release(atomic_t *v) { @@ -783,6 +1063,16 @@ raw_atomic_inc_return_release(atomic_t *v) #endif } +/** + * raw_atomic_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_inc_return_relaxed(atomic_t *v) { @@ -795,6 +1085,16 @@ raw_atomic_inc_return_relaxed(atomic_t *v) #endif } +/** + * raw_atomic_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_inc() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_inc(atomic_t *v) { @@ -811,6 +1111,16 @@ raw_atomic_fetch_inc(atomic_t *v) #endif } +/** + * raw_atomic_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_inc_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_inc_acquire(atomic_t *v) { @@ -827,6 +1137,16 @@ raw_atomic_fetch_inc_acquire(atomic_t *v) #endif } +/** + * raw_atomic_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_inc_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_inc_release(atomic_t *v) { @@ -842,6 +1162,16 @@ raw_atomic_fetch_inc_release(atomic_t *v) #endif } +/** + * raw_atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_inc_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_inc_relaxed(atomic_t *v) { @@ -854,6 +1184,16 @@ raw_atomic_fetch_inc_relaxed(atomic_t *v) #endif } +/** + * raw_atomic_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_dec() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_dec(atomic_t *v) { @@ -864,6 +1204,16 @@ raw_atomic_dec(atomic_t *v) #endif } +/** + * raw_atomic_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_return() elsewhere. 
+ * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_dec_return(atomic_t *v) { @@ -880,6 +1230,16 @@ raw_atomic_dec_return(atomic_t *v) #endif } +/** + * raw_atomic_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_dec_return_acquire(atomic_t *v) { @@ -896,6 +1256,16 @@ raw_atomic_dec_return_acquire(atomic_t *v) #endif } +/** + * raw_atomic_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_dec_return_release(atomic_t *v) { @@ -911,6 +1281,16 @@ raw_atomic_dec_return_release(atomic_t *v) #endif } +/** + * raw_atomic_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline int raw_atomic_dec_return_relaxed(atomic_t *v) { @@ -923,6 +1303,16 @@ raw_atomic_dec_return_relaxed(atomic_t *v) #endif } +/** + * raw_atomic_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_dec() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_dec(atomic_t *v) { @@ -939,6 +1329,16 @@ raw_atomic_fetch_dec(atomic_t *v) #endif } +/** + * raw_atomic_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_dec_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_dec_acquire(atomic_t *v) { @@ -955,6 +1355,16 @@ raw_atomic_fetch_dec_acquire(atomic_t *v) #endif } +/** + * raw_atomic_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_dec_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_dec_release(atomic_t *v) { @@ -970,6 +1380,16 @@ raw_atomic_fetch_dec_release(atomic_t *v) #endif } +/** + * raw_atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_dec_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_dec_relaxed(atomic_t *v) { @@ -982,12 +1402,34 @@ raw_atomic_fetch_dec_relaxed(atomic_t *v) #endif } +/** + * raw_atomic_and() - atomic bitwise AND with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_and() elsewhere. + * + * Return: Nothing. 
+ */ static __always_inline void raw_atomic_and(int i, atomic_t *v) { arch_atomic_and(i, v); } +/** + * raw_atomic_fetch_and() - atomic bitwise AND with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_and() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_and(int i, atomic_t *v) { @@ -1004,6 +1446,17 @@ raw_atomic_fetch_and(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_and_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_and_acquire(int i, atomic_t *v) { @@ -1020,6 +1473,17 @@ raw_atomic_fetch_and_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_and_release() - atomic bitwise AND with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_and_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_and_release(int i, atomic_t *v) { @@ -1035,6 +1499,17 @@ raw_atomic_fetch_and_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_and_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_and_relaxed(int i, atomic_t *v) { @@ -1047,6 +1522,17 @@ raw_atomic_fetch_and_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_andnot() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_andnot(int i, atomic_t *v) { @@ -1057,6 +1543,17 @@ raw_atomic_andnot(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_andnot() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_andnot(int i, atomic_t *v) { @@ -1073,6 +1570,17 @@ raw_atomic_fetch_andnot(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_andnot_acquire() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline int raw_atomic_fetch_andnot_acquire(int i, atomic_t *v) { @@ -1089,6 +1597,17 @@ raw_atomic_fetch_andnot_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_andnot_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_andnot_release(int i, atomic_t *v) { @@ -1104,6 +1623,17 @@ raw_atomic_fetch_andnot_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_andnot_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v) { @@ -1116,12 +1646,34 @@ raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_or() - atomic bitwise OR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_or() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_or(int i, atomic_t *v) { arch_atomic_or(i, v); } +/** + * raw_atomic_fetch_or() - atomic bitwise OR with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_or() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_or(int i, atomic_t *v) { @@ -1138,6 +1690,17 @@ raw_atomic_fetch_or(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_or_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_or_acquire(int i, atomic_t *v) { @@ -1154,6 +1717,17 @@ raw_atomic_fetch_or_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_or_release() - atomic bitwise OR with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_or_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_or_release(int i, atomic_t *v) { @@ -1169,6 +1743,17 @@ raw_atomic_fetch_or_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_or_relaxed() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline int raw_atomic_fetch_or_relaxed(int i, atomic_t *v) { @@ -1181,12 +1766,34 @@ raw_atomic_fetch_or_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_xor() - atomic bitwise XOR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_xor() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_xor(int i, atomic_t *v) { arch_atomic_xor(i, v); } +/** + * raw_atomic_fetch_xor() - atomic bitwise XOR with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_xor() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_xor(int i, atomic_t *v) { @@ -1203,6 +1810,17 @@ raw_atomic_fetch_xor(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_xor_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_xor_acquire(int i, atomic_t *v) { @@ -1219,6 +1837,17 @@ raw_atomic_fetch_xor_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_xor_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_xor_release(int i, atomic_t *v) { @@ -1234,6 +1863,17 @@ raw_atomic_fetch_xor_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_xor_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_xor_relaxed(int i, atomic_t *v) { @@ -1246,6 +1886,17 @@ raw_atomic_fetch_xor_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_xchg() - atomic exchange with full ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic_xchg() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_xchg(atomic_t *v, int new) { @@ -1262,6 +1913,17 @@ raw_atomic_xchg(atomic_t *v, int new) #endif } +/** + * raw_atomic_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_xchg_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_xchg_acquire(atomic_t *v, int new) { @@ -1278,6 +1940,17 @@ raw_atomic_xchg_acquire(atomic_t *v, int new) #endif } +/** + * raw_atomic_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with release ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_xchg_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_xchg_release(atomic_t *v, int new) { @@ -1293,6 +1966,17 @@ raw_atomic_xchg_release(atomic_t *v, int new) #endif } +/** + * raw_atomic_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_xchg_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_xchg_relaxed(atomic_t *v, int new) { @@ -1305,6 +1989,18 @@ raw_atomic_xchg_relaxed(atomic_t *v, int new) #endif } +/** + * raw_atomic_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic_cmpxchg() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_cmpxchg(atomic_t *v, int old, int new) { @@ -1321,6 +2017,18 @@ raw_atomic_cmpxchg(atomic_t *v, int old, int new) #endif } +/** + * raw_atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_cmpxchg_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) { @@ -1337,6 +2045,18 @@ raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) #endif } +/** + * raw_atomic_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic_cmpxchg_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_cmpxchg_release(atomic_t *v, int old, int new) { @@ -1352,6 +2072,18 @@ raw_atomic_cmpxchg_release(atomic_t *v, int old, int new) #endif } +/** + * raw_atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_cmpxchg_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) { @@ -1364,6 +2096,19 @@ raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) #endif } +/** + * raw_atomic_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_try_cmpxchg() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise.
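+ *
+ * For illustration, a typical compare-and-exchange retry loop might look
+ * like this sketch (names are hypothetical); note that a failed attempt
+ * updates @old, so it need not be reloaded by hand:
+ *
+ *	int new, old = raw_atomic_read(&v);
+ *
+ *	do {
+ *		new = old + 1;	// derive the new value from the observed old
+ *	} while (!raw_atomic_try_cmpxchg(&v, &old, new));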
+ */ static __always_inline bool raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) { @@ -1384,6 +2129,19 @@ raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) #endif } +/** + * raw_atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_acquire() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) { @@ -1404,6 +2162,19 @@ raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) #endif } +/** + * raw_atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_release() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { @@ -1423,6 +2194,19 @@ raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) #endif } +/** + * raw_atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_relaxed() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { @@ -1439,6 +2223,17 @@ raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) #endif } +/** + * raw_atomic_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_sub_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_sub_and_test(int i, atomic_t *v) { @@ -1449,6 +2244,16 @@ raw_atomic_sub_and_test(int i, atomic_t *v) #endif } +/** + * raw_atomic_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_dec_and_test(atomic_t *v) { @@ -1459,6 +2264,16 @@ raw_atomic_dec_and_test(atomic_t *v) #endif } +/** + * raw_atomic_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise.
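+ *
+ * For illustration (a sketch; the bias and names are hypothetical): with
+ * a counter initialized to -1, exactly one incrementing thread observes
+ * the transition to zero and can run a completion step:
+ *
+ *	atomic_t outstanding = ATOMIC_INIT(-1);
+ *
+ *	if (raw_atomic_inc_and_test(&outstanding))
+ *		complete_work();	// hypothetical completion hook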
+ */ static __always_inline bool raw_atomic_inc_and_test(atomic_t *v) { @@ -1469,6 +2284,17 @@ raw_atomic_inc_and_test(atomic_t *v) #endif } +/** + * raw_atomic_add_negative() - atomic add and test if negative with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_add_negative() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_add_negative(int i, atomic_t *v) { @@ -1485,6 +2311,17 @@ raw_atomic_add_negative(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_add_negative_acquire() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_add_negative_acquire(int i, atomic_t *v) { @@ -1501,6 +2338,17 @@ raw_atomic_add_negative_acquire(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_negative_release() - atomic add and test if negative with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_add_negative_release() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_add_negative_release(int i, atomic_t *v) { @@ -1516,6 +2364,17 @@ raw_atomic_add_negative_release(int i, atomic_t *v) #endif } +/** + * raw_atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_add_negative_relaxed() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_add_negative_relaxed(int i, atomic_t *v) { @@ -1528,6 +2387,18 @@ raw_atomic_add_negative_relaxed(int i, atomic_t *v) #endif } +/** + * raw_atomic_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_t + * @a: int value to add + * @u: int value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_fetch_add_unless() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline int raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) { @@ -1545,6 +2416,18 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u) #endif } +/** + * raw_atomic_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_t + * @a: int value to add + * @u: int value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_add_unless() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. 
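+ *
+ * For illustration, a common pattern is a reference count that must not
+ * be resurrected once it has dropped to zero (a sketch; @refs and the
+ * error handling are hypothetical):
+ *
+ *	if (!raw_atomic_add_unless(&refs, 1, 0))
+ *		return NULL;	// object is already being torn down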
+ */ static __always_inline bool raw_atomic_add_unless(atomic_t *v, int a, int u) { @@ -1555,6 +2438,16 @@ raw_atomic_add_unless(atomic_t *v, int a, int u) #endif } +/** + * raw_atomic_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_not_zero() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_inc_not_zero(atomic_t *v) { @@ -1565,6 +2458,16 @@ raw_atomic_inc_not_zero(atomic_t *v) #endif } +/** + * raw_atomic_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_inc_unless_negative() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_inc_unless_negative(atomic_t *v) { @@ -1582,6 +2485,16 @@ raw_atomic_inc_unless_negative(atomic_t *v) #endif } +/** + * raw_atomic_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_unless_positive() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_dec_unless_positive(atomic_t *v) { @@ -1599,6 +2512,16 @@ raw_atomic_dec_unless_positive(atomic_t *v) #endif } +/** + * raw_atomic_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_dec_if_positive() elsewhere. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ static __always_inline int raw_atomic_dec_if_positive(atomic_t *v) { @@ -1621,12 +2544,32 @@ raw_atomic_dec_if_positive(atomic_t *v) #include <asm-generic/atomic64.h> #endif +/** + * raw_atomic64_read() - atomic load with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_read() elsewhere. + * + * Return: The value loaded from @v. + */ static __always_inline s64 raw_atomic64_read(const atomic64_t *v) { return arch_atomic64_read(v); } +/** + * raw_atomic64_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_read_acquire() elsewhere. + * + * Return: The value loaded from @v. + */ static __always_inline s64 raw_atomic64_read_acquire(const atomic64_t *v) { @@ -1648,12 +2591,34 @@ raw_atomic64_read_acquire(const atomic64_t *v) #endif } +/** + * raw_atomic64_set() - atomic set with relaxed ordering + * @v: pointer to atomic64_t + * @i: s64 value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_set() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_set(atomic64_t *v, s64 i) { arch_atomic64_set(v, i); } +/** + * raw_atomic64_set_release() - atomic set with release ordering + * @v: pointer to atomic64_t + * @i: s64 value to assign + * + * Atomically sets @v to @i with release ordering.
+ * + * Safe to use in noinstr code; prefer atomic64_set_release() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_set_release(atomic64_t *v, s64 i) { @@ -1671,12 +2636,34 @@ raw_atomic64_set_release(atomic64_t *v, s64 i) #endif } +/** + * raw_atomic64_add() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_add() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_add(s64 i, atomic64_t *v) { arch_atomic64_add(i, v); } +/** + * raw_atomic64_add_return() - atomic add with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_add_return(s64 i, atomic64_t *v) { @@ -1693,6 +2680,17 @@ raw_atomic64_add_return(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_return_acquire() - atomic add with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_add_return_acquire(s64 i, atomic64_t *v) { @@ -1709,6 +2707,17 @@ raw_atomic64_add_return_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_return_release() - atomic add with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_add_return_release(s64 i, atomic64_t *v) { @@ -1724,6 +2733,17 @@ raw_atomic64_add_return_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_return_relaxed() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v) { @@ -1736,6 +2756,17 @@ raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add() - atomic add with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_add(s64 i, atomic64_t *v) { @@ -1752,6 +2783,17 @@ raw_atomic64_fetch_add(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add_acquire() - atomic add with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add_acquire() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline s64 raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { @@ -1768,6 +2810,17 @@ raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add_release() - atomic add with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) { @@ -1783,6 +2836,17 @@ raw_atomic64_fetch_add_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) { @@ -1795,12 +2859,34 @@ raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_sub() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_sub(s64 i, atomic64_t *v) { arch_atomic64_sub(i, v); } +/** + * raw_atomic64_sub_return() - atomic subtract with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_sub_return(s64 i, atomic64_t *v) { @@ -1817,6 +2903,17 @@ raw_atomic64_sub_return(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_sub_return_acquire() - atomic subtract with acquire ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) { @@ -1833,6 +2930,17 @@ raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_sub_return_release() - atomic subtract with release ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_sub_return_release(s64 i, atomic64_t *v) { @@ -1848,6 +2956,17 @@ raw_atomic64_sub_return_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_return_relaxed() elsewhere. + * + * Return: The updated value of @v. 
+ */ static __always_inline s64 raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) { @@ -1860,6 +2979,17 @@ raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_sub() - atomic subtract with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_sub() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_sub(s64 i, atomic64_t *v) { @@ -1876,6 +3006,17 @@ raw_atomic64_fetch_sub(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_sub_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { @@ -1892,6 +3033,17 @@ raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_sub_release() - atomic subtract with release ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_sub_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) { @@ -1907,6 +3059,17 @@ raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_sub_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) { @@ -1919,6 +3082,16 @@ raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_inc(atomic64_t *v) { @@ -1929,6 +3102,16 @@ raw_atomic64_inc(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_return() - atomic increment with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_inc_return(atomic64_t *v) { @@ -1945,6 +3128,16 @@ raw_atomic64_inc_return(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_return_acquire() elsewhere. + * + * Return: The updated value of @v. 
+ */ static __always_inline s64 raw_atomic64_inc_return_acquire(atomic64_t *v) { @@ -1961,6 +3154,16 @@ raw_atomic64_inc_return_acquire(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_inc_return_release(atomic64_t *v) { @@ -1976,6 +3179,16 @@ raw_atomic64_inc_return_release(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_inc_return_relaxed(atomic64_t *v) { @@ -1988,6 +3201,16 @@ raw_atomic64_inc_return_relaxed(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_inc() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_inc(atomic64_t *v) { @@ -2004,6 +3227,16 @@ raw_atomic64_fetch_inc(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_inc_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_inc_acquire(atomic64_t *v) { @@ -2020,6 +3253,16 @@ raw_atomic64_fetch_inc_acquire(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_inc_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_inc_release(atomic64_t *v) { @@ -2035,6 +3278,16 @@ raw_atomic64_fetch_inc_release(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_inc_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_inc_relaxed(atomic64_t *v) { @@ -2047,6 +3300,16 @@ raw_atomic64_fetch_inc_relaxed(atomic64_t *v) #endif } +/** + * raw_atomic64_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_dec(atomic64_t *v) { @@ -2057,6 +3320,16 @@ raw_atomic64_dec(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_return() elsewhere. 
+ * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_dec_return(atomic64_t *v) { @@ -2073,6 +3346,16 @@ raw_atomic64_dec_return(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_dec_return_acquire(atomic64_t *v) { @@ -2089,6 +3372,16 @@ raw_atomic64_dec_return_acquire(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_dec_return_release(atomic64_t *v) { @@ -2104,6 +3397,16 @@ raw_atomic64_dec_return_release(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline s64 raw_atomic64_dec_return_relaxed(atomic64_t *v) { @@ -2116,6 +3419,16 @@ raw_atomic64_dec_return_relaxed(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_dec() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_dec(atomic64_t *v) { @@ -2132,6 +3445,16 @@ raw_atomic64_fetch_dec(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_dec_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_dec_acquire(atomic64_t *v) { @@ -2148,6 +3471,16 @@ raw_atomic64_fetch_dec_acquire(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_dec_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_dec_release(atomic64_t *v) { @@ -2163,6 +3496,16 @@ raw_atomic64_fetch_dec_release(atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_dec_relaxed() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline s64 raw_atomic64_fetch_dec_relaxed(atomic64_t *v) { @@ -2175,12 +3518,34 @@ raw_atomic64_fetch_dec_relaxed(atomic64_t *v) #endif } +/** + * raw_atomic64_and() - atomic bitwise AND with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_and() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_and(s64 i, atomic64_t *v) { arch_atomic64_and(i, v); } +/** + * raw_atomic64_fetch_and() - atomic bitwise AND with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_and() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_and(s64 i, atomic64_t *v) { @@ -2197,6 +3562,17 @@ raw_atomic64_fetch_and(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_and_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { @@ -2213,6 +3589,17 @@ raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_and_release() - atomic bitwise AND with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_and_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) { @@ -2228,6 +3615,17 @@ raw_atomic64_fetch_and_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_and_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) { @@ -2240,6 +3638,17 @@ raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_andnot() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_andnot(s64 i, atomic64_t *v) { @@ -2250,6 +3659,17 @@ raw_atomic64_andnot(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_andnot() elsewhere. + * + * Return: The original value of @v. 
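+ *
+ * For illustration (a sketch; the flag bit and @flags are hypothetical):
+ * clear a flag and learn atomically whether this caller was the one to
+ * clear it:
+ *
+ *	s64 old = raw_atomic64_fetch_andnot(BIT(0), &flags);
+ *
+ *	if (old & BIT(0))
+ *		handle_cleared();	// the flag was set, and we cleared it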
+ */ static __always_inline s64 raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) { @@ -2266,6 +3686,17 @@ raw_atomic64_fetch_andnot(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { @@ -2282,6 +3713,17 @@ raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { @@ -2297,6 +3739,17 @@ raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { @@ -2309,12 +3762,34 @@ raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_or() - atomic bitwise OR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_or() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_or(s64 i, atomic64_t *v) { arch_atomic64_or(i, v); } +/** + * raw_atomic64_fetch_or() - atomic bitwise OR with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_or() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_or(s64 i, atomic64_t *v) { @@ -2331,6 +3806,17 @@ raw_atomic64_fetch_or(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_or_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { @@ -2347,6 +3833,17 @@ raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_or_release() - atomic bitwise OR with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_or_release() elsewhere. + * + * Return: The original value of @v. 
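+ *
+ * For illustration (a sketch; READY, @state and @ctx are hypothetical):
+ * release ordering makes writes issued before the OR visible to any
+ * observer that reads @state with acquire ordering:
+ *
+ *	ctx->data = 42;		// plain write, published by the OR below
+ *	old = raw_atomic64_fetch_or_release(READY, &state);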
+ */ static __always_inline s64 raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) { @@ -2362,6 +3859,17 @@ raw_atomic64_fetch_or_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_or_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) { @@ -2374,12 +3882,34 @@ raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_xor() - atomic bitwise XOR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_xor() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic64_xor(s64 i, atomic64_t *v) { arch_atomic64_xor(i, v); } +/** + * raw_atomic64_fetch_xor() - atomic bitwise XOR with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_xor() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_xor(s64 i, atomic64_t *v) { @@ -2396,6 +3926,17 @@ raw_atomic64_fetch_xor(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_xor_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { @@ -2412,6 +3953,17 @@ raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_xor_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) { @@ -2427,6 +3979,17 @@ raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_xor_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) { @@ -2439,6 +4002,17 @@ raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_xchg() - atomic exchange with full ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_xchg() elsewhere. + * + * Return: The original value of @v. 
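+ *
+ * For illustration (a sketch; @accumulated is hypothetical): drain a
+ * running total in one atomic step, leaving zero behind for the next
+ * round of accumulation:
+ *
+ *	s64 total = raw_atomic64_xchg(&accumulated, 0);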
+ */ static __always_inline s64 raw_atomic64_xchg(atomic64_t *v, s64 new) { @@ -2455,6 +4029,17 @@ raw_atomic64_xchg(atomic64_t *v, s64 new) #endif } +/** + * raw_atomic64_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_xchg_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_xchg_acquire(atomic64_t *v, s64 new) { @@ -2471,6 +4056,17 @@ raw_atomic64_xchg_acquire(atomic64_t *v, s64 new) #endif } +/** + * raw_atomic64_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_xchg_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_xchg_release(atomic64_t *v, s64 new) { @@ -2486,6 +4082,17 @@ raw_atomic64_xchg_release(atomic64_t *v, s64 new) #endif } +/** + * raw_atomic64_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_xchg_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new) { @@ -2498,6 +4105,18 @@ raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new) #endif } +/** + * raw_atomic64_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_cmpxchg() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { @@ -2514,6 +4133,18 @@ raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) #endif } +/** + * raw_atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_cmpxchg_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { @@ -2530,6 +4161,18 @@ raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) #endif } +/** + * raw_atomic64_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_cmpxchg_release() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline s64 raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { @@ -2545,6 +4188,18 @@ raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) #endif } +/** + * raw_atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_cmpxchg_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { @@ -2557,6 +4212,19 @@ raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) #endif } +/** + * raw_atomic64_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { @@ -2577,6 +4245,19 @@ raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) #endif } +/** + * raw_atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_acquire() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { @@ -2597,6 +4278,19 @@ raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) #endif } +/** + * raw_atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_release() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { @@ -2616,6 +4310,19 @@ raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) #endif } +/** + * raw_atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_relaxed() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise.
+ */ static __always_inline bool raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { @@ -2632,6 +4339,17 @@ raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) #endif } +/** + * raw_atomic64_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_sub_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic64_sub_and_test(s64 i, atomic64_t *v) { @@ -2642,6 +4360,16 @@ raw_atomic64_sub_and_test(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic64_dec_and_test(atomic64_t *v) { @@ -2652,6 +4380,16 @@ raw_atomic64_dec_and_test(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic64_inc_and_test(atomic64_t *v) { @@ -2662,6 +4400,17 @@ raw_atomic64_inc_and_test(atomic64_t *v) #endif } +/** + * raw_atomic64_add_negative() - atomic add and test if negative with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_negative() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic64_add_negative(s64 i, atomic64_t *v) { @@ -2678,6 +4427,17 @@ raw_atomic64_add_negative(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_negative_acquire() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) { @@ -2694,6 +4454,17 @@ raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_negative_release() - atomic add and test if negative with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_negative_release() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */ static __always_inline bool raw_atomic64_add_negative_release(s64 i, atomic64_t *v) { @@ -2709,6 +4480,17 @@ raw_atomic64_add_negative_release(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_negative_relaxed() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { @@ -2721,6 +4503,18 @@ raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) #endif } +/** + * raw_atomic64_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic64_t + * @a: s64 value to add + * @u: s64 value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_fetch_add_unless() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline s64 raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -2738,6 +4532,18 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) #endif } +/** + * raw_atomic64_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic64_t + * @a: s64 value to add + * @u: s64 value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_add_unless() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -2748,6 +4554,16 @@ raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) #endif } +/** + * raw_atomic64_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic64_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_not_zero() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic64_inc_not_zero(atomic64_t *v) { @@ -2758,6 +4574,16 @@ raw_atomic64_inc_not_zero(atomic64_t *v) #endif } +/** + * raw_atomic64_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic64_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_inc_unless_negative() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic64_inc_unless_negative(atomic64_t *v) { @@ -2775,6 +4601,16 @@ raw_atomic64_inc_unless_negative(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic64_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_unless_positive() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. 
+ */ static __always_inline bool raw_atomic64_dec_unless_positive(atomic64_t *v) { @@ -2792,6 +4628,16 @@ raw_atomic64_dec_unless_positive(atomic64_t *v) #endif } +/** + * raw_atomic64_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic64_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic64_dec_if_positive() elsewhere. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ static __always_inline s64 raw_atomic64_dec_if_positive(atomic64_t *v) { @@ -2811,4 +4657,4 @@ raw_atomic64_dec_if_positive(atomic64_t *v) } #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// 205e090382132f1fc85e48b46e722865f9c81309 +// 3916f02c038baa3f5190d275f68b9211667fcc9d diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h index 5491c89dc03a0..ebfc795f921b9 100644 --- a/include/linux/atomic/atomic-instrumented.h +++ b/include/linux/atomic/atomic-instrumented.h @@ -16,6 +16,16 @@ #include <linux/compiler.h> #include <linux/instrumented.h> +/** + * atomic_read() - atomic load with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_read() there. + * + * Return: The value loaded from @v. + */ static __always_inline int atomic_read(const atomic_t *v) { @@ -23,6 +33,16 @@ atomic_read(const atomic_t *v) return raw_atomic_read(v); } +/** + * atomic_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_read_acquire() there. + * + * Return: The value loaded from @v. + */ static __always_inline int atomic_read_acquire(const atomic_t *v) { @@ -30,6 +50,17 @@ atomic_read_acquire(const atomic_t *v) return raw_atomic_read_acquire(v); } +/** + * atomic_set() - atomic set with relaxed ordering + * @v: pointer to atomic_t + * @i: int value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_set() there. + * + * Return: Nothing. + */ static __always_inline void atomic_set(atomic_t *v, int i) { @@ -37,6 +68,17 @@ atomic_set(atomic_t *v, int i) raw_atomic_set(v, i); } +/** + * atomic_set_release() - atomic set with release ordering + * @v: pointer to atomic_t + * @i: int value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_set_release() there. + * + * Return: Nothing. + */ static __always_inline void atomic_set_release(atomic_t *v, int i) { @@ -45,6 +87,17 @@ atomic_set_release(atomic_t *v, int i) raw_atomic_set_release(v, i); } +/** + * atomic_add() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add() there. + * + * Return: Nothing. + */ static __always_inline void atomic_add(int i, atomic_t *v) { @@ -52,6 +105,17 @@ atomic_add(int i, atomic_t *v) raw_atomic_add(i, v); } +/** + * atomic_add_return() - atomic add with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_return() there. + * + * Return: The updated value of @v.
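+ *
+ * A minimal usage sketch (editor's illustration, not script-generated;
+ * queued_bytes, QUEUE_LIMIT and kick_writeback() are made-up names):
+ *
+ *	if (atomic_add_return(len, &queued_bytes) > QUEUE_LIMIT)
+ *		kick_writeback();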
+ */ static __always_inline int atomic_add_return(int i, atomic_t *v) { @@ -60,6 +124,17 @@ atomic_add_return(int i, atomic_t *v) return raw_atomic_add_return(i, v); } +/** + * atomic_add_return_acquire() - atomic add with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_add_return_acquire(int i, atomic_t *v) { @@ -67,6 +142,17 @@ atomic_add_return_acquire(int i, atomic_t *v) return raw_atomic_add_return_acquire(i, v); } +/** + * atomic_add_return_release() - atomic add with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_add_return_release(int i, atomic_t *v) { @@ -75,6 +161,17 @@ atomic_add_return_release(int i, atomic_t *v) return raw_atomic_add_return_release(i, v); } +/** + * atomic_add_return_relaxed() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_add_return_relaxed(int i, atomic_t *v) { @@ -82,6 +179,17 @@ atomic_add_return_relaxed(int i, atomic_t *v) return raw_atomic_add_return_relaxed(i, v); } +/** + * atomic_fetch_add() - atomic add with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_add(int i, atomic_t *v) { @@ -90,6 +198,17 @@ atomic_fetch_add(int i, atomic_t *v) return raw_atomic_fetch_add(i, v); } +/** + * atomic_fetch_add_acquire() - atomic add with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_add_acquire(int i, atomic_t *v) { @@ -97,6 +216,17 @@ atomic_fetch_add_acquire(int i, atomic_t *v) return raw_atomic_fetch_add_acquire(i, v); } +/** + * atomic_fetch_add_release() - atomic add with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_add_release(int i, atomic_t *v) { @@ -105,6 +235,17 @@ atomic_fetch_add_release(int i, atomic_t *v) return raw_atomic_fetch_add_release(i, v); } +/** + * atomic_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_relaxed() there. + * + * Return: The original value of @v. 
+ */ static __always_inline int atomic_fetch_add_relaxed(int i, atomic_t *v) { @@ -112,6 +253,17 @@ atomic_fetch_add_relaxed(int i, atomic_t *v) return raw_atomic_fetch_add_relaxed(i, v); } +/** + * atomic_sub() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub() there. + * + * Return: Nothing. + */ static __always_inline void atomic_sub(int i, atomic_t *v) { @@ -119,6 +271,17 @@ atomic_sub(int i, atomic_t *v) raw_atomic_sub(i, v); } +/** + * atomic_sub_return() - atomic subtract with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_return() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_sub_return(int i, atomic_t *v) { @@ -127,6 +290,17 @@ atomic_sub_return(int i, atomic_t *v) return raw_atomic_sub_return(i, v); } +/** + * atomic_sub_return_acquire() - atomic subtract with acquire ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_sub_return_acquire(int i, atomic_t *v) { @@ -134,6 +308,17 @@ atomic_sub_return_acquire(int i, atomic_t *v) return raw_atomic_sub_return_acquire(i, v); } +/** + * atomic_sub_return_release() - atomic subtract with release ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_sub_return_release(int i, atomic_t *v) { @@ -142,6 +327,17 @@ atomic_sub_return_release(int i, atomic_t *v) return raw_atomic_sub_return_release(i, v); } +/** + * atomic_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_sub_return_relaxed(int i, atomic_t *v) { @@ -149,6 +345,17 @@ atomic_sub_return_relaxed(int i, atomic_t *v) return raw_atomic_sub_return_relaxed(i, v); } +/** + * atomic_fetch_sub() - atomic subtract with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_sub(int i, atomic_t *v) { @@ -157,6 +364,17 @@ atomic_fetch_sub(int i, atomic_t *v) return raw_atomic_fetch_sub(i, v); } +/** + * atomic_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_acquire() there. + * + * Return: The original value of @v. 
+ */ static __always_inline int atomic_fetch_sub_acquire(int i, atomic_t *v) { @@ -164,6 +382,17 @@ atomic_fetch_sub_acquire(int i, atomic_t *v) return raw_atomic_fetch_sub_acquire(i, v); } +/** + * atomic_fetch_sub_release() - atomic subtract with release ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_sub_release(int i, atomic_t *v) { @@ -172,6 +401,17 @@ atomic_fetch_sub_release(int i, atomic_t *v) return raw_atomic_fetch_sub_release(i, v); } +/** + * atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_sub_relaxed(int i, atomic_t *v) { @@ -179,6 +419,16 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v) return raw_atomic_fetch_sub_relaxed(i, v); } +/** + * atomic_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc() there. + * + * Return: Nothing. + */ static __always_inline void atomic_inc(atomic_t *v) { @@ -186,6 +436,16 @@ atomic_inc(atomic_t *v) raw_atomic_inc(v); } +/** + * atomic_inc_return() - atomic increment with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_return() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_inc_return(atomic_t *v) { @@ -194,6 +454,16 @@ atomic_inc_return(atomic_t *v) return raw_atomic_inc_return(v); } +/** + * atomic_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_inc_return_acquire(atomic_t *v) { @@ -201,6 +471,16 @@ atomic_inc_return_acquire(atomic_t *v) return raw_atomic_inc_return_acquire(v); } +/** + * atomic_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_inc_return_release(atomic_t *v) { @@ -209,6 +489,16 @@ atomic_inc_return_release(atomic_t *v) return raw_atomic_inc_return_release(v); } +/** + * atomic_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_return_relaxed() there. + * + * Return: The updated value of @v. 
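+ *
+ * Editor's sketch (event_seq is a made-up atomic_t): relaxed ordering is
+ * enough when the result is only used as a statistic or sequence number:
+ *
+ *	int seq = atomic_inc_return_relaxed(&event_seq);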
+ */ static __always_inline int atomic_inc_return_relaxed(atomic_t *v) { @@ -216,6 +506,16 @@ atomic_inc_return_relaxed(atomic_t *v) return raw_atomic_inc_return_relaxed(v); } +/** + * atomic_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_inc(atomic_t *v) { @@ -224,6 +524,16 @@ atomic_fetch_inc(atomic_t *v) return raw_atomic_fetch_inc(v); } +/** + * atomic_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_inc_acquire(atomic_t *v) { @@ -231,6 +541,16 @@ atomic_fetch_inc_acquire(atomic_t *v) return raw_atomic_fetch_inc_acquire(v); } +/** + * atomic_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_inc_release(atomic_t *v) { @@ -239,6 +559,16 @@ atomic_fetch_inc_release(atomic_t *v) return raw_atomic_fetch_inc_release(v); } +/** + * atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_inc_relaxed(atomic_t *v) { @@ -246,6 +576,16 @@ atomic_fetch_inc_relaxed(atomic_t *v) return raw_atomic_fetch_inc_relaxed(v); } +/** + * atomic_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec() there. + * + * Return: Nothing. + */ static __always_inline void atomic_dec(atomic_t *v) { @@ -253,6 +593,16 @@ atomic_dec(atomic_t *v) raw_atomic_dec(v); } +/** + * atomic_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_return() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_dec_return(atomic_t *v) { @@ -261,6 +611,16 @@ atomic_dec_return(atomic_t *v) return raw_atomic_dec_return(v); } +/** + * atomic_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_dec_return_acquire(atomic_t *v) { @@ -268,6 +628,16 @@ atomic_dec_return_acquire(atomic_t *v) return raw_atomic_dec_return_acquire(v); } +/** + * atomic_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_return_release() there. + * + * Return: The updated value of @v. 
+ */ static __always_inline int atomic_dec_return_release(atomic_t *v) { @@ -276,6 +646,16 @@ atomic_dec_return_release(atomic_t *v) return raw_atomic_dec_return_release(v); } +/** + * atomic_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline int atomic_dec_return_relaxed(atomic_t *v) { @@ -283,6 +663,16 @@ atomic_dec_return_relaxed(atomic_t *v) return raw_atomic_dec_return_relaxed(v); } +/** + * atomic_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_dec(atomic_t *v) { @@ -291,6 +681,16 @@ atomic_fetch_dec(atomic_t *v) return raw_atomic_fetch_dec(v); } +/** + * atomic_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_dec_acquire(atomic_t *v) { @@ -298,6 +698,16 @@ atomic_fetch_dec_acquire(atomic_t *v) return raw_atomic_fetch_dec_acquire(v); } +/** + * atomic_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_dec_release(atomic_t *v) { @@ -306,6 +716,16 @@ atomic_fetch_dec_release(atomic_t *v) return raw_atomic_fetch_dec_release(v); } +/** + * atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_dec_relaxed(atomic_t *v) { @@ -313,6 +733,17 @@ atomic_fetch_dec_relaxed(atomic_t *v) return raw_atomic_fetch_dec_relaxed(v); } +/** + * atomic_and() - atomic bitwise AND with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_and() there. + * + * Return: Nothing. + */ static __always_inline void atomic_and(int i, atomic_t *v) { @@ -320,6 +751,17 @@ atomic_and(int i, atomic_t *v) raw_atomic_and(i, v); } +/** + * atomic_fetch_and() - atomic bitwise AND with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_and() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_and(int i, atomic_t *v) { @@ -328,6 +770,17 @@ atomic_fetch_and(int i, atomic_t *v) return raw_atomic_fetch_and(i, v); } +/** + * atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. 
+ * + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_and_acquire(int i, atomic_t *v) { @@ -335,6 +788,17 @@ atomic_fetch_and_acquire(int i, atomic_t *v) return raw_atomic_fetch_and_acquire(i, v); } +/** + * atomic_fetch_and_release() - atomic bitwise AND with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_and_release(int i, atomic_t *v) { @@ -343,6 +807,17 @@ atomic_fetch_and_release(int i, atomic_t *v) return raw_atomic_fetch_and_release(i, v); } +/** + * atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_and_relaxed(int i, atomic_t *v) { @@ -350,6 +825,17 @@ atomic_fetch_and_relaxed(int i, atomic_t *v) return raw_atomic_fetch_and_relaxed(i, v); } +/** + * atomic_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_andnot() there. + * + * Return: Nothing. + */ static __always_inline void atomic_andnot(int i, atomic_t *v) { @@ -357,6 +843,17 @@ atomic_andnot(int i, atomic_t *v) raw_atomic_andnot(i, v); } +/** + * atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_andnot(int i, atomic_t *v) { @@ -365,6 +862,17 @@ atomic_fetch_andnot(int i, atomic_t *v) return raw_atomic_fetch_andnot(i, v); } +/** + * atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_andnot_acquire(int i, atomic_t *v) { @@ -372,6 +880,17 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v) return raw_atomic_fetch_andnot_acquire(i, v); } +/** + * atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_andnot_release(int i, atomic_t *v) { @@ -380,6 +899,17 @@ atomic_fetch_andnot_release(int i, atomic_t *v) return raw_atomic_fetch_andnot_release(i, v); } +/** + * atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. 
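+ *
+ * Editor's sketch (FLAG_PENDING, state and process_pending() are made-up
+ * names): the fetch_andnot() variants return the old bits, so the caller
+ * can tell whether it was the one to clear them:
+ *
+ *	if (atomic_fetch_andnot_relaxed(FLAG_PENDING, &state) & FLAG_PENDING)
+ *		process_pending();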
+ * + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v) { @@ -387,6 +917,17 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v) return raw_atomic_fetch_andnot_relaxed(i, v); } +/** + * atomic_or() - atomic bitwise OR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_or() there. + * + * Return: Nothing. + */ static __always_inline void atomic_or(int i, atomic_t *v) { @@ -394,6 +935,17 @@ atomic_or(int i, atomic_t *v) raw_atomic_or(i, v); } +/** + * atomic_fetch_or() - atomic bitwise OR with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_or() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_or(int i, atomic_t *v) { @@ -402,6 +954,17 @@ atomic_fetch_or(int i, atomic_t *v) return raw_atomic_fetch_or(i, v); } +/** + * atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_or_acquire(int i, atomic_t *v) { @@ -409,6 +972,17 @@ atomic_fetch_or_acquire(int i, atomic_t *v) return raw_atomic_fetch_or_acquire(i, v); } +/** + * atomic_fetch_or_release() - atomic bitwise OR with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_or_release(int i, atomic_t *v) { @@ -417,6 +991,17 @@ atomic_fetch_or_release(int i, atomic_t *v) return raw_atomic_fetch_or_release(i, v); } +/** + * atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_or_relaxed(int i, atomic_t *v) { @@ -424,6 +1009,17 @@ atomic_fetch_or_relaxed(int i, atomic_t *v) return raw_atomic_fetch_or_relaxed(i, v); } +/** + * atomic_xor() - atomic bitwise XOR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xor() there. + * + * Return: Nothing. + */ static __always_inline void atomic_xor(int i, atomic_t *v) { @@ -431,6 +1027,17 @@ atomic_xor(int i, atomic_t *v) raw_atomic_xor(i, v); } +/** + * atomic_fetch_xor() - atomic bitwise XOR with full ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor() there. + * + * Return: The original value of @v. 
+ */ static __always_inline int atomic_fetch_xor(int i, atomic_t *v) { @@ -439,6 +1046,17 @@ atomic_fetch_xor(int i, atomic_t *v) return raw_atomic_fetch_xor(i, v); } +/** + * atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_xor_acquire(int i, atomic_t *v) { @@ -446,6 +1064,17 @@ atomic_fetch_xor_acquire(int i, atomic_t *v) return raw_atomic_fetch_xor_acquire(i, v); } +/** + * atomic_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_xor_release(int i, atomic_t *v) { @@ -454,6 +1083,17 @@ atomic_fetch_xor_release(int i, atomic_t *v) return raw_atomic_fetch_xor_release(i, v); } +/** + * atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: int value + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_xor_relaxed(int i, atomic_t *v) { @@ -461,6 +1101,17 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v) return raw_atomic_fetch_xor_relaxed(i, v); } +/** + * atomic_xchg() - atomic exchange with full ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xchg() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_xchg(atomic_t *v, int new) { @@ -469,6 +1120,17 @@ atomic_xchg(atomic_t *v, int new) return raw_atomic_xchg(v, new); } +/** + * atomic_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xchg_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_xchg_acquire(atomic_t *v, int new) { @@ -476,6 +1138,17 @@ atomic_xchg_acquire(atomic_t *v, int new) return raw_atomic_xchg_acquire(v, new); } +/** + * atomic_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xchg_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_xchg_release(atomic_t *v, int new) { @@ -484,6 +1157,17 @@ atomic_xchg_release(atomic_t *v, int new) return raw_atomic_xchg_release(v, new); } +/** + * atomic_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic_t + * @new: int value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_xchg_relaxed() there. + * + * Return: The original value of @v. 
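+ *
+ * Editor's sketch (pending_work and handle() are made up): exchanging in 0
+ * atomically claims whatever value was pending:
+ *
+ *	int pending = atomic_xchg_relaxed(&pending_work, 0);
+ *	if (pending)
+ *		handle(pending);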
+ */ static __always_inline int atomic_xchg_relaxed(atomic_t *v, int new) { @@ -491,6 +1175,18 @@ atomic_xchg_relaxed(atomic_t *v, int new) return raw_atomic_xchg_relaxed(v, new); } +/** + * atomic_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) { @@ -499,6 +1195,18 @@ atomic_cmpxchg(atomic_t *v, int old, int new) return raw_atomic_cmpxchg(v, old, new); } +/** + * atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_cmpxchg_acquire(atomic_t *v, int old, int new) { @@ -506,6 +1214,18 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new) return raw_atomic_cmpxchg_acquire(v, old, new); } +/** + * atomic_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_release() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_cmpxchg_release(atomic_t *v, int old, int new) { @@ -514,6 +1234,18 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new) return raw_atomic_cmpxchg_release(v, old, new); } +/** + * atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_t + * @old: int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) { @@ -521,6 +1253,19 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) return raw_atomic_cmpxchg_relaxed(v, old, new); } +/** + * atomic_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) { @@ -530,6 +1275,19 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new) return raw_atomic_try_cmpxchg(v, old, new); } +/** + * atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v.
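+ *
+ * Editor's sketch of the usual try_cmpxchg() loop (MAX_USERS is a made-up
+ * bound); on failure @old is refreshed automatically, so no explicit
+ * re-read is needed:
+ *
+ *	int old = atomic_read(&v);
+ *	do {
+ *		if (old >= MAX_USERS)
+ *			return false;
+ *	} while (!atomic_try_cmpxchg_acquire(&v, &old, old + 1));
+ *	return true;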
+ * + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_acquire() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) { @@ -538,6 +1296,19 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) return raw_atomic_try_cmpxchg_acquire(v, old, new); } +/** + * atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_release() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { @@ -547,6 +1318,19 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) return raw_atomic_try_cmpxchg_release(v, old, new); } +/** + * atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_t + * @old: pointer to int value to compare with + * @new: int value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_relaxed() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { @@ -555,6 +1339,17 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) return raw_atomic_try_cmpxchg_relaxed(v, old, new); } +/** + * atomic_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: int value to subtract + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_sub_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) { @@ -563,6 +1358,16 @@ atomic_sub_and_test(int i, atomic_t *v) return raw_atomic_sub_and_test(i, v); } +/** + * atomic_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_dec_and_test(atomic_t *v) { @@ -571,6 +1376,16 @@ atomic_dec_and_test(atomic_t *v) return raw_atomic_dec_and_test(v); } +/** + * atomic_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_inc_and_test(atomic_t *v) { @@ -579,6 +1394,17 @@ atomic_inc_and_test(atomic_t *v) return raw_atomic_inc_and_test(v); } +/** + * atomic_add_negative() - atomic add and test if negative with full ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with full ordering.
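+ *
+ * Editor's sketch (budget, used and throttle() are made-up names): handy
+ * for signed accounting where dropping below zero signals exhaustion:
+ *
+ *	if (atomic_add_negative(-used, &budget))
+ *		throttle();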
+ * + * Unsafe to use in noinstr code; use raw_atomic_add_negative() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_add_negative(int i, atomic_t *v) { @@ -587,6 +1413,17 @@ atomic_add_negative(int i, atomic_t *v) return raw_atomic_add_negative(i, v); } +/** + * atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_negative_acquire() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_add_negative_acquire(int i, atomic_t *v) { @@ -594,6 +1431,17 @@ atomic_add_negative_acquire(int i, atomic_t *v) return raw_atomic_add_negative_acquire(i, v); } +/** + * atomic_add_negative_release() - atomic add and test if negative with release ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_negative_release() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_add_negative_release(int i, atomic_t *v) { @@ -602,6 +1450,17 @@ atomic_add_negative_release(int i, atomic_t *v) return raw_atomic_add_negative_release(i, v); } +/** + * atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: int value to add + * @v: pointer to atomic_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_negative_relaxed() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_add_negative_relaxed(int i, atomic_t *v) { @@ -609,6 +1468,18 @@ atomic_add_negative_relaxed(int i, atomic_t *v) return raw_atomic_add_negative_relaxed(i, v); } +/** + * atomic_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_t + * @a: int value to add + * @u: int value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_unless() there. + * + * Return: The original value of @v. + */ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { @@ -617,6 +1488,18 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u) return raw_atomic_fetch_add_unless(v, a, u); } +/** + * atomic_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_t + * @a: int value to add + * @u: int value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_add_unless() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_add_unless(atomic_t *v, int a, int u) { @@ -625,6 +1508,16 @@ atomic_add_unless(atomic_t *v, int a, int u) return raw_atomic_add_unless(v, a, u); } +/** + * atomic_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_not_zero() there. + * + * Return: @true if @v was updated, @false otherwise. 
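+ *
+ * Editor's sketch (obj->refcnt is a made-up field): the classic use is
+ * taking a reference only while the object is still live:
+ *
+ *	if (!atomic_inc_not_zero(&obj->refcnt))
+ *		return NULL;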
+ */ static __always_inline bool atomic_inc_not_zero(atomic_t *v) { @@ -633,6 +1526,16 @@ atomic_inc_not_zero(atomic_t *v) return raw_atomic_inc_not_zero(v); } +/** + * atomic_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_inc_unless_negative() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_inc_unless_negative(atomic_t *v) { @@ -641,6 +1544,16 @@ atomic_inc_unless_negative(atomic_t *v) return raw_atomic_inc_unless_negative(v); } +/** + * atomic_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_unless_positive() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_dec_unless_positive(atomic_t *v) { @@ -649,6 +1562,16 @@ atomic_dec_unless_positive(atomic_t *v) return raw_atomic_dec_unless_positive(v); } +/** + * atomic_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_dec_if_positive() there. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ static __always_inline int atomic_dec_if_positive(atomic_t *v) { @@ -657,6 +1580,16 @@ atomic_dec_if_positive(atomic_t *v) return raw_atomic_dec_if_positive(v); } +/** + * atomic64_read() - atomic load with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_read() there. + * + * Return: The value loaded from @v. + */ static __always_inline s64 atomic64_read(const atomic64_t *v) { @@ -664,6 +1597,16 @@ atomic64_read(const atomic64_t *v) return raw_atomic64_read(v); } +/** + * atomic64_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_read_acquire() there. + * + * Return: The value loaded from @v. + */ static __always_inline s64 atomic64_read_acquire(const atomic64_t *v) { @@ -671,6 +1614,17 @@ atomic64_read_acquire(const atomic64_t *v) return raw_atomic64_read_acquire(v); } +/** + * atomic64_set() - atomic set with relaxed ordering + * @v: pointer to atomic64_t + * @i: s64 value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_set() there. + * + * Return: Nothing. + */ static __always_inline void atomic64_set(atomic64_t *v, s64 i) { @@ -678,6 +1632,17 @@ atomic64_set(atomic64_t *v, s64 i) raw_atomic64_set(v, i); } +/** + * atomic64_set_release() - atomic set with release ordering + * @v: pointer to atomic64_t + * @i: s64 value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_set_release() there. + * + * Return: Nothing.
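+ *
+ * Editor's sketch (buf->data, buf->seq, payload and next_seq are made-up
+ * names): release ordering makes earlier stores visible before the new
+ * value:
+ *
+ *	buf->data = payload;
+ *	atomic64_set_release(&buf->seq, next_seq);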
+ */ static __always_inline void atomic64_set_release(atomic64_t *v, s64 i) { @@ -686,6 +1651,17 @@ atomic64_set_release(atomic64_t *v, s64 i) raw_atomic64_set_release(v, i); } +/** + * atomic64_add() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add() there. + * + * Return: Nothing. + */ static __always_inline void atomic64_add(s64 i, atomic64_t *v) { @@ -693,6 +1669,17 @@ atomic64_add(s64 i, atomic64_t *v) raw_atomic64_add(i, v); } +/** + * atomic64_add_return() - atomic add with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_return() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_add_return(s64 i, atomic64_t *v) { @@ -701,6 +1688,17 @@ atomic64_add_return(s64 i, atomic64_t *v) return raw_atomic64_add_return(i, v); } +/** + * atomic64_add_return_acquire() - atomic add with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_add_return_acquire(s64 i, atomic64_t *v) { @@ -708,6 +1706,17 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v) return raw_atomic64_add_return_acquire(i, v); } +/** + * atomic64_add_return_release() - atomic add with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_add_return_release(s64 i, atomic64_t *v) { @@ -716,6 +1725,17 @@ atomic64_add_return_release(s64 i, atomic64_t *v) return raw_atomic64_add_return_release(i, v); } +/** + * atomic64_add_return_relaxed() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_add_return_relaxed(s64 i, atomic64_t *v) { @@ -723,6 +1743,17 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v) return raw_atomic64_add_return_relaxed(i, v); } +/** + * atomic64_fetch_add() - atomic add with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { @@ -731,6 +1762,17 @@ atomic64_fetch_add(s64 i, atomic64_t *v) return raw_atomic64_fetch_add(i, v); } +/** + * atomic64_fetch_add_acquire() - atomic add with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_acquire() there. + * + * Return: The original value of @v. 
+ */ static __always_inline s64 atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { @@ -738,6 +1780,17 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_add_acquire(i, v); } +/** + * atomic64_fetch_add_release() - atomic add with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_add_release(s64 i, atomic64_t *v) { @@ -746,6 +1799,17 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_add_release(i, v); } +/** + * atomic64_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) { @@ -753,6 +1817,17 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_add_relaxed(i, v); } +/** + * atomic64_sub() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub() there. + * + * Return: Nothing. + */ static __always_inline void atomic64_sub(s64 i, atomic64_t *v) { @@ -760,6 +1835,17 @@ atomic64_sub(s64 i, atomic64_t *v) raw_atomic64_sub(i, v); } +/** + * atomic64_sub_return() - atomic subtract with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_return() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_sub_return(s64 i, atomic64_t *v) { @@ -768,6 +1854,17 @@ atomic64_sub_return(s64 i, atomic64_t *v) return raw_atomic64_sub_return(i, v); } +/** + * atomic64_sub_return_acquire() - atomic subtract with acquire ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_sub_return_acquire(s64 i, atomic64_t *v) { @@ -775,6 +1872,17 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v) return raw_atomic64_sub_return_acquire(i, v); } +/** + * atomic64_sub_return_release() - atomic subtract with release ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_sub_return_release(s64 i, atomic64_t *v) { @@ -783,6 +1891,17 @@ atomic64_sub_return_release(s64 i, atomic64_t *v) return raw_atomic64_sub_return_release(i, v); } +/** + * atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_relaxed() there. 
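+ *
+ * Editor's sketch (bytes_remaining is a made-up atomic64_t): the 64-bit
+ * forms avoid wrap-around on large byte counts:
+ *
+ *	s64 left = atomic64_sub_return_relaxed(done, &bytes_remaining);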
+ * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_sub_return_relaxed(s64 i, atomic64_t *v) { @@ -790,6 +1909,17 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v) return raw_atomic64_sub_return_relaxed(i, v); } +/** + * atomic64_fetch_sub() - atomic subtract with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) { @@ -798,6 +1928,17 @@ atomic64_fetch_sub(s64 i, atomic64_t *v) return raw_atomic64_fetch_sub(i, v); } +/** + * atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { @@ -805,6 +1946,17 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_sub_acquire(i, v); } +/** + * atomic64_fetch_sub_release() - atomic subtract with release ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_sub_release(s64 i, atomic64_t *v) { @@ -813,6 +1965,17 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_sub_release(i, v); } +/** + * atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) { @@ -820,6 +1983,16 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_sub_relaxed(i, v); } +/** + * atomic64_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc() there. + * + * Return: Nothing. + */ static __always_inline void atomic64_inc(atomic64_t *v) { @@ -827,6 +2000,16 @@ atomic64_inc(atomic64_t *v) raw_atomic64_inc(v); } +/** + * atomic64_inc_return() - atomic increment with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_return() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_inc_return(atomic64_t *v) { @@ -835,6 +2018,16 @@ atomic64_inc_return(atomic64_t *v) return raw_atomic64_inc_return(v); } +/** + * atomic64_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_acquire() there. + * + * Return: The updated value of @v. 
+ */ static __always_inline s64 atomic64_inc_return_acquire(atomic64_t *v) { @@ -842,6 +2035,16 @@ atomic64_inc_return_acquire(atomic64_t *v) return raw_atomic64_inc_return_acquire(v); } +/** + * atomic64_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_inc_return_release(atomic64_t *v) { @@ -850,6 +2053,16 @@ atomic64_inc_return_release(atomic64_t *v) return raw_atomic64_inc_return_release(v); } +/** + * atomic64_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_inc_return_relaxed(atomic64_t *v) { @@ -857,6 +2070,16 @@ atomic64_inc_return_relaxed(atomic64_t *v) return raw_atomic64_inc_return_relaxed(v); } +/** + * atomic64_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_inc(atomic64_t *v) { @@ -865,6 +2088,16 @@ atomic64_fetch_inc(atomic64_t *v) return raw_atomic64_fetch_inc(v); } +/** + * atomic64_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_inc_acquire(atomic64_t *v) { @@ -872,6 +2105,16 @@ atomic64_fetch_inc_acquire(atomic64_t *v) return raw_atomic64_fetch_inc_acquire(v); } +/** + * atomic64_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_inc_release(atomic64_t *v) { @@ -880,6 +2123,16 @@ atomic64_fetch_inc_release(atomic64_t *v) return raw_atomic64_fetch_inc_release(v); } +/** + * atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_inc_relaxed(atomic64_t *v) { @@ -887,6 +2140,16 @@ atomic64_fetch_inc_relaxed(atomic64_t *v) return raw_atomic64_fetch_inc_relaxed(v); } +/** + * atomic64_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec() there. + * + * Return: Nothing. 
+ */ static __always_inline void atomic64_dec(atomic64_t *v) { @@ -894,6 +2157,16 @@ atomic64_dec(atomic64_t *v) raw_atomic64_dec(v); } +/** + * atomic64_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_return() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_dec_return(atomic64_t *v) { @@ -902,6 +2175,16 @@ atomic64_dec_return(atomic64_t *v) return raw_atomic64_dec_return(v); } +/** + * atomic64_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_dec_return_acquire(atomic64_t *v) { @@ -909,6 +2192,16 @@ atomic64_dec_return_acquire(atomic64_t *v) return raw_atomic64_dec_return_acquire(v); } +/** + * atomic64_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_dec_return_release(atomic64_t *v) { @@ -917,6 +2210,16 @@ atomic64_dec_return_release(atomic64_t *v) return raw_atomic64_dec_return_release(v); } +/** + * atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline s64 atomic64_dec_return_relaxed(atomic64_t *v) { @@ -924,6 +2227,16 @@ atomic64_dec_return_relaxed(atomic64_t *v) return raw_atomic64_dec_return_relaxed(v); } +/** + * atomic64_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_dec(atomic64_t *v) { @@ -932,6 +2245,16 @@ atomic64_fetch_dec(atomic64_t *v) return raw_atomic64_fetch_dec(v); } +/** + * atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_dec_acquire(atomic64_t *v) { @@ -939,6 +2262,16 @@ atomic64_fetch_dec_acquire(atomic64_t *v) return raw_atomic64_fetch_dec_acquire(v); } +/** + * atomic64_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_release() there. + * + * Return: The original value of @v. 
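The plain inc()/dec() forms return nothing and only guarantee relaxed ordering, which is all a pure statistic needs; when the updated count gates further work, one of the ordered *_return() forms is the right tool. A sketch under those assumptions (all names hypothetical):

	#include <linux/atomic.h>

	static atomic64_t nr_events   = ATOMIC64_INIT(0);
	static atomic64_t nr_inflight = ATOMIC64_INIT(0);

	static void submit_one(void)
	{
		atomic64_inc(&nr_events);	/* relaxed: just a statistic */
		atomic64_inc(&nr_inflight);
	}

	static bool complete_one(void)
	{
		/* fully ordered: a zero result may trigger teardown */
		return atomic64_dec_return(&nr_inflight) == 0;
	}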
+ */ static __always_inline s64 atomic64_fetch_dec_release(atomic64_t *v) { @@ -947,6 +2280,16 @@ atomic64_fetch_dec_release(atomic64_t *v) return raw_atomic64_fetch_dec_release(v); } +/** + * atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_dec_relaxed(atomic64_t *v) { @@ -954,6 +2297,17 @@ atomic64_fetch_dec_relaxed(atomic64_t *v) return raw_atomic64_fetch_dec_relaxed(v); } +/** + * atomic64_and() - atomic bitwise AND with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_and() there. + * + * Return: Nothing. + */ static __always_inline void atomic64_and(s64 i, atomic64_t *v) { @@ -961,6 +2315,17 @@ atomic64_and(s64 i, atomic64_t *v) raw_atomic64_and(i, v); } +/** + * atomic64_fetch_and() - atomic bitwise AND with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_and(s64 i, atomic64_t *v) { @@ -969,6 +2334,17 @@ atomic64_fetch_and(s64 i, atomic64_t *v) return raw_atomic64_fetch_and(i, v); } +/** + * atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { @@ -976,6 +2352,17 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_and_acquire(i, v); } +/** + * atomic64_fetch_and_release() - atomic bitwise AND with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_and_release(s64 i, atomic64_t *v) { @@ -984,6 +2371,17 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_and_release(i, v); } +/** + * atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) { @@ -991,6 +2389,17 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_and_relaxed(i, v); } +/** + * atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_andnot() there. + * + * Return: Nothing. 
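The recurring "Unsafe to use in noinstr code" note exists because these wrappers are instrumented for sanitizers such as KASAN/KCSAN, and noinstr code must not call instrumented helpers; the raw_*() forms perform the same operation without instrumentation. A minimal sketch, assuming a hypothetical entry-path counter:

	#include <linux/atomic.h>

	static atomic64_t nr_entries = ATOMIC64_INIT(0);

	/* noinstr code must stick to the raw_*() forms */
	static noinstr void count_entry(void)
	{
		raw_atomic64_inc(&nr_entries);
	}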
+ */ static __always_inline void atomic64_andnot(s64 i, atomic64_t *v) { @@ -998,6 +2407,17 @@ atomic64_andnot(s64 i, atomic64_t *v) raw_atomic64_andnot(i, v); } +/** + * atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_andnot(s64 i, atomic64_t *v) { @@ -1006,6 +2426,17 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v) return raw_atomic64_fetch_andnot(i, v); } +/** + * atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { @@ -1013,6 +2444,17 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_andnot_acquire(i, v); } +/** + * atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { @@ -1021,6 +2463,17 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_andnot_release(i, v); } +/** + * atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { @@ -1028,6 +2481,17 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_andnot_relaxed(i, v); } +/** + * atomic64_or() - atomic bitwise OR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_or() there. + * + * Return: Nothing. + */ static __always_inline void atomic64_or(s64 i, atomic64_t *v) { @@ -1035,6 +2499,17 @@ atomic64_or(s64 i, atomic64_t *v) raw_atomic64_or(i, v); } +/** + * atomic64_fetch_or() - atomic bitwise OR with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) { @@ -1043,6 +2518,17 @@ atomic64_fetch_or(s64 i, atomic64_t *v) return raw_atomic64_fetch_or(i, v); } +/** + * atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_acquire() there. + * + * Return: The original value of @v. 
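andnot() lets a caller clear bits without inverting the mask by hand, and the fetch_ form additionally reports which of those bits were previously set. A sketch with a hypothetical BUSY flag:

	#include <linux/atomic.h>
	#include <linux/bits.h>

	#define MY_BUSY		BIT_ULL(0)	/* hypothetical flag */

	static atomic64_t state = ATOMIC64_INIT(0);

	/* clear BUSY; return true if it was set when we cleared it */
	static bool clear_busy(void)
	{
		return atomic64_fetch_andnot(MY_BUSY, &state) & MY_BUSY;
	}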
+ */ static __always_inline s64 atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { @@ -1050,6 +2536,17 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_or_acquire(i, v); } +/** + * atomic64_fetch_or_release() - atomic bitwise OR with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_or_release(s64 i, atomic64_t *v) { @@ -1058,6 +2555,17 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_or_release(i, v); } +/** + * atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) { @@ -1065,6 +2573,17 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_or_relaxed(i, v); } +/** + * atomic64_xor() - atomic bitwise XOR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xor() there. + * + * Return: Nothing. + */ static __always_inline void atomic64_xor(s64 i, atomic64_t *v) { @@ -1072,6 +2591,17 @@ atomic64_xor(s64 i, atomic64_t *v) raw_atomic64_xor(i, v); } +/** + * atomic64_fetch_xor() - atomic bitwise XOR with full ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) { @@ -1080,6 +2610,17 @@ atomic64_fetch_xor(s64 i, atomic64_t *v) return raw_atomic64_fetch_xor(i, v); } +/** + * atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { @@ -1087,6 +2628,17 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) return raw_atomic64_fetch_xor_acquire(i, v); } +/** + * atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_xor_release(s64 i, atomic64_t *v) { @@ -1095,6 +2647,17 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v) return raw_atomic64_fetch_xor_release(i, v); } +/** + * atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: s64 value + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_relaxed() there. + * + * Return: The original value of @v. 
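fetch_or() supports the common "set a flag and learn whether it was already set" idiom, e.g. to elect exactly one winner among concurrent callers. A sketch with a hypothetical PENDING bit:

	#include <linux/atomic.h>
	#include <linux/bits.h>

	#define MY_PENDING	BIT_ULL(1)	/* hypothetical flag */

	static atomic64_t state = ATOMIC64_INIT(0);

	/* true for exactly one caller: the one that set PENDING */
	static bool claim_pending(void)
	{
		return !(atomic64_fetch_or(MY_PENDING, &state) & MY_PENDING);
	}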
+ */ static __always_inline s64 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) { @@ -1102,6 +2665,17 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) return raw_atomic64_fetch_xor_relaxed(i, v); } +/** + * atomic64_xchg() - atomic exchange with full ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xchg() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_xchg(atomic64_t *v, s64 new) { @@ -1110,6 +2684,17 @@ atomic64_xchg(atomic64_t *v, s64 new) return raw_atomic64_xchg(v, new); } +/** + * atomic64_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xchg_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_xchg_acquire(atomic64_t *v, s64 new) { @@ -1117,6 +2702,17 @@ atomic64_xchg_acquire(atomic64_t *v, s64 new) return raw_atomic64_xchg_acquire(v, new); } +/** + * atomic64_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xchg_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_xchg_release(atomic64_t *v, s64 new) { @@ -1125,6 +2721,17 @@ atomic64_xchg_release(atomic64_t *v, s64 new) return raw_atomic64_xchg_release(v, new); } +/** + * atomic64_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic64_t + * @new: s64 value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_xchg_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_xchg_relaxed(atomic64_t *v, s64 new) { @@ -1132,6 +2739,18 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 new) return raw_atomic64_xchg_relaxed(v, new); } +/** + * atomic64_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { @@ -1140,6 +2759,18 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) return raw_atomic64_cmpxchg(v, old, new); } +/** + * atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_acquire() there. + * + * Return: The original value of @v. 
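xchg() unconditionally installs @new and hands back whatever was there, which makes it a natural "steal everything queued so far" primitive. A sketch (names hypothetical):

	#include <linux/atomic.h>

	static atomic64_t pending_bytes = ATOMIC64_INIT(0);

	/* take ownership of the accumulated count, leaving zero behind */
	static s64 drain_pending(void)
	{
		/* acquire: later reads are ordered after the handoff */
		return atomic64_xchg_acquire(&pending_bytes, 0);
	}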
+ */ static __always_inline s64 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { @@ -1147,6 +2778,18 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) return raw_atomic64_cmpxchg_acquire(v, old, new); } +/** + * atomic64_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_release() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { @@ -1155,6 +2798,18 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) return raw_atomic64_cmpxchg_release(v, old, new); } +/** + * atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic64_t + * @old: s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { @@ -1162,6 +2817,19 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) return raw_atomic64_cmpxchg_relaxed(v, old, new); } +/** + * atomic64_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { @@ -1171,6 +2839,19 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) return raw_atomic64_try_cmpxchg(v, old, new); } +/** + * atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_acquire() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { @@ -1179,6 +2860,19 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) return raw_atomic64_try_cmpxchg_acquire(v, old, new); } +/** + * atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_release() there. + * + * Return: @true if the exchange occurred, @false otherwise.
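The practical difference between cmpxchg() and try_cmpxchg() shows up in CAS loops: try_cmpxchg() returns a success boolean and, on failure, rewrites *@old with the current value, so the loop never has to re-read @v. A sketch of a saturating add built on it (helper name hypothetical):

	#include <linux/atomic.h>
	#include <linux/minmax.h>

	/* add n to v, but never past limit */
	static s64 add_capped(atomic64_t *v, s64 n, s64 limit)
	{
		s64 old = atomic64_read(v);
		s64 new;

		do {
			new = min(old + n, limit);
		} while (!atomic64_try_cmpxchg(v, &old, new));

		return new;
	}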
+ */ static __always_inline bool atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { @@ -1188,6 +2882,19 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) return raw_atomic64_try_cmpxchg_release(v, old, new); } +/** + * atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic64_t + * @old: pointer to s64 value to compare with + * @new: s64 value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_relaxed() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { @@ -1196,6 +2903,17 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) return raw_atomic64_try_cmpxchg_relaxed(v, old, new); } +/** + * atomic64_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: s64 value to subtract + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_sub_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic64_sub_and_test(s64 i, atomic64_t *v) { @@ -1204,6 +2922,16 @@ atomic64_sub_and_test(s64 i, atomic64_t *v) return raw_atomic64_sub_and_test(i, v); } +/** + * atomic64_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic64_dec_and_test(atomic64_t *v) { @@ -1212,6 +2940,16 @@ atomic64_dec_and_test(atomic64_t *v) return raw_atomic64_dec_and_test(v); } +/** + * atomic64_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic64_inc_and_test(atomic64_t *v) { @@ -1220,6 +2958,17 @@ atomic64_inc_and_test(atomic64_t *v) return raw_atomic64_inc_and_test(v); } +/** + * atomic64_add_negative() - atomic add and test if negative with full ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_negative() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v) { @@ -1228,6 +2977,17 @@ atomic64_add_negative(s64 i, atomic64_t *v) return raw_atomic64_add_negative(i, v); } +/** + * atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_acquire() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise.
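The *_and_test() forms fold the update and the zero test into one fully ordered operation, which is exactly the shape a reference-count put wants: full ordering ensures everything done before the final put is visible to whoever frees the object. A sketch (structure hypothetical):

	#include <linux/atomic.h>
	#include <linux/slab.h>

	struct myobj {		/* hypothetical */
		atomic64_t refs;
		/* ... payload ... */
	};

	static void myobj_put(struct myobj *obj)
	{
		if (atomic64_dec_and_test(&obj->refs))
			kfree(obj);
	}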
+ */ static __always_inline bool atomic64_add_negative_acquire(s64 i, atomic64_t *v) { @@ -1235,6 +2995,17 @@ atomic64_add_negative_acquire(s64 i, atomic64_t *v) return raw_atomic64_add_negative_acquire(i, v); } +/** + * atomic64_add_negative_release() - atomic add and test if negative with release ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_release() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic64_add_negative_release(s64 i, atomic64_t *v) { @@ -1243,6 +3014,17 @@ atomic64_add_negative_release(s64 i, atomic64_t *v) return raw_atomic64_add_negative_release(i, v); } +/** + * atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: s64 value to add + * @v: pointer to atomic64_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_relaxed() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic64_add_negative_relaxed(s64 i, atomic64_t *v) { @@ -1250,6 +3032,18 @@ atomic64_add_negative_relaxed(s64 i, atomic64_t *v) return raw_atomic64_add_negative_relaxed(i, v); } +/** + * atomic64_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic64_t + * @a: s64 value to add + * @u: s64 value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_unless() there. + * + * Return: The original value of @v. + */ static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -1258,6 +3052,18 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) return raw_atomic64_fetch_add_unless(v, a, u); } +/** + * atomic64_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic64_t + * @a: s64 value to add + * @u: s64 value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_add_unless() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -1266,6 +3072,16 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u) return raw_atomic64_add_unless(v, a, u); } +/** + * atomic64_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic64_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_not_zero() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic64_inc_not_zero(atomic64_t *v) { @@ -1274,6 +3090,16 @@ atomic64_inc_not_zero(atomic64_t *v) return raw_atomic64_inc_not_zero(v); } +/** + * atomic64_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic64_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_inc_unless_negative() there. + * + * Return: @true if @v was updated, @false otherwise. 
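add_unless()/fetch_add_unless() and the inc_not_zero() shorthand make the update conditional on the current value; this is how a lookup avoids resurrecting an object whose count has already hit zero. A sketch (names hypothetical):

	#include <linux/atomic.h>

	struct myobj {		/* hypothetical */
		atomic64_t refs;
	};

	/* take a reference only if the object is still live */
	static bool myobj_tryget(struct myobj *obj)
	{
		return atomic64_inc_not_zero(&obj->refs);
	}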
+ */ static __always_inline bool atomic64_inc_unless_negative(atomic64_t *v) { @@ -1282,6 +3108,16 @@ atomic64_inc_unless_negative(atomic64_t *v) return raw_atomic64_inc_unless_negative(v); } +/** + * atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic64_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_unless_positive() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic64_dec_unless_positive(atomic64_t *v) { @@ -1290,6 +3126,16 @@ atomic64_dec_unless_positive(atomic64_t *v) return raw_atomic64_dec_unless_positive(v); } +/** + * atomic64_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic64_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic64_dec_if_positive() there. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) { @@ -1298,6 +3144,16 @@ atomic64_dec_if_positive(atomic64_t *v) return raw_atomic64_dec_if_positive(v); } +/** + * atomic_long_read() - atomic load with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_read() there. + * + * Return: The value loaded from @v. + */ static __always_inline long atomic_long_read(const atomic_long_t *v) { @@ -1305,6 +3161,16 @@ atomic_long_read(const atomic_long_t *v) return raw_atomic_long_read(v); } +/** + * atomic_long_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_read_acquire() there. + * + * Return: The value loaded from @v. + */ static __always_inline long atomic_long_read_acquire(const atomic_long_t *v) { @@ -1312,6 +3178,17 @@ atomic_long_read_acquire(const atomic_long_t *v) return raw_atomic_long_read_acquire(v); } +/** + * atomic_long_set() - atomic set with relaxed ordering + * @v: pointer to atomic_long_t + * @i: long value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_set() there. + * + * Return: Nothing. + */ static __always_inline void atomic_long_set(atomic_long_t *v, long i) { @@ -1319,6 +3196,17 @@ atomic_long_set(atomic_long_t *v, long i) raw_atomic_long_set(v, i); } +/** + * atomic_long_set_release() - atomic set with release ordering + * @v: pointer to atomic_long_t + * @i: long value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_set_release() there. + * + * Return: Nothing. + */ static __always_inline void atomic_long_set_release(atomic_long_t *v, long i) { @@ -1327,6 +3215,17 @@ atomic_long_set_release(atomic_long_t *v, long i) raw_atomic_long_set_release(v, i); } +/** + * atomic_long_add() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add() there. + * + * Return: Nothing.
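dec_if_positive() returns the old value minus one whether or not the store happened, so a negative result means no decrement took place. That makes it usable as a token-taking primitive; a sketch, assuming a hypothetical token pool:

	#include <linux/atomic.h>

	static atomic64_t tokens = ATOMIC64_INIT(8);

	/* true if we successfully consumed a token */
	static bool take_token(void)
	{
		return atomic64_dec_if_positive(&tokens) >= 0;
	}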
+ */ static __always_inline void atomic_long_add(long i, atomic_long_t *v) { @@ -1334,6 +3233,17 @@ atomic_long_add(long i, atomic_long_t *v) raw_atomic_long_add(i, v); } +/** + * atomic_long_add_return() - atomic add with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_return() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_add_return(long i, atomic_long_t *v) { @@ -1342,6 +3252,17 @@ atomic_long_add_return(long i, atomic_long_t *v) return raw_atomic_long_add_return(i, v); } +/** + * atomic_long_add_return_acquire() - atomic add with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_add_return_acquire(long i, atomic_long_t *v) { @@ -1349,6 +3270,17 @@ atomic_long_add_return_acquire(long i, atomic_long_t *v) return raw_atomic_long_add_return_acquire(i, v); } +/** + * atomic_long_add_return_release() - atomic add with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_add_return_release(long i, atomic_long_t *v) { @@ -1357,6 +3289,17 @@ atomic_long_add_return_release(long i, atomic_long_t *v) return raw_atomic_long_add_return_release(i, v); } +/** + * atomic_long_add_return_relaxed() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_add_return_relaxed(long i, atomic_long_t *v) { @@ -1364,6 +3307,17 @@ atomic_long_add_return_relaxed(long i, atomic_long_t *v) return raw_atomic_long_add_return_relaxed(i, v); } +/** + * atomic_long_fetch_add() - atomic add with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_add(long i, atomic_long_t *v) { @@ -1372,6 +3326,17 @@ atomic_long_fetch_add(long i, atomic_long_t *v) return raw_atomic_long_fetch_add(i, v); } +/** + * atomic_long_fetch_add_acquire() - atomic add with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_acquire() there. + * + * Return: The original value of @v. 
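atomic_long_t mirrors the same operation set at machine word size (32-bit on 32-bit kernels, 64-bit on 64-bit ones), suiting counters that scale with address space or object counts. A sketch (names hypothetical):

	#include <linux/atomic.h>

	static atomic_long_t nr_mapped = ATOMIC_LONG_INIT(0);

	/* account nr newly mapped pages, returning the updated total */
	static long account_mapped(long nr)
	{
		return atomic_long_add_return(nr, &nr_mapped);
	}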
+ */ static __always_inline long atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { @@ -1379,6 +3344,17 @@ atomic_long_fetch_add_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_add_acquire(i, v); } +/** + * atomic_long_fetch_add_release() - atomic add with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_release() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_add_release(long i, atomic_long_t *v) { @@ -1387,6 +3363,17 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_add_release(i, v); } +/** + * atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { @@ -1394,6 +3381,17 @@ atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_add_relaxed(i, v); } +/** + * atomic_long_sub() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub() there. + * + * Return: Nothing. + */ static __always_inline void atomic_long_sub(long i, atomic_long_t *v) { @@ -1401,6 +3399,17 @@ atomic_long_sub(long i, atomic_long_t *v) raw_atomic_long_sub(i, v); } +/** + * atomic_long_sub_return() - atomic subtract with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_sub_return(long i, atomic_long_t *v) { @@ -1409,6 +3418,17 @@ atomic_long_sub_return(long i, atomic_long_t *v) return raw_atomic_long_sub_return(i, v); } +/** + * atomic_long_sub_return_acquire() - atomic subtract with acquire ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_sub_return_acquire(long i, atomic_long_t *v) { @@ -1416,6 +3436,17 @@ atomic_long_sub_return_acquire(long i, atomic_long_t *v) return raw_atomic_long_sub_return_acquire(i, v); } +/** + * atomic_long_sub_return_release() - atomic subtract with release ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_release() there. + * + * Return: The updated value of @v. 
+ */ static __always_inline long atomic_long_sub_return_release(long i, atomic_long_t *v) { @@ -1424,6 +3455,17 @@ atomic_long_sub_return_release(long i, atomic_long_t *v) return raw_atomic_long_sub_return_release(i, v); } +/** + * atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { @@ -1431,6 +3473,17 @@ atomic_long_sub_return_relaxed(long i, atomic_long_t *v) return raw_atomic_long_sub_return_relaxed(i, v); } +/** + * atomic_long_fetch_sub() - atomic subtract with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_sub(long i, atomic_long_t *v) { @@ -1439,6 +3492,17 @@ atomic_long_fetch_sub(long i, atomic_long_t *v) return raw_atomic_long_fetch_sub(i, v); } +/** + * atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { @@ -1446,6 +3510,17 @@ atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_sub_acquire(i, v); } +/** + * atomic_long_fetch_sub_release() - atomic subtract with release ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_release() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_sub_release(long i, atomic_long_t *v) { @@ -1454,6 +3529,17 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_sub_release(i, v); } +/** + * atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { @@ -1461,6 +3547,16 @@ atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_sub_relaxed(i, v); } +/** + * atomic_long_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc() there. + * + * Return: Nothing. 
+ */ static __always_inline void atomic_long_inc(atomic_long_t *v) { @@ -1468,6 +3564,16 @@ atomic_long_inc(atomic_long_t *v) raw_atomic_long_inc(v); } +/** + * atomic_long_inc_return() - atomic increment with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_inc_return(atomic_long_t *v) { @@ -1476,6 +3582,16 @@ atomic_long_inc_return(atomic_long_t *v) return raw_atomic_long_inc_return(v); } +/** + * atomic_long_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_inc_return_acquire(atomic_long_t *v) { @@ -1483,6 +3599,16 @@ atomic_long_inc_return_acquire(atomic_long_t *v) return raw_atomic_long_inc_return_acquire(v); } +/** + * atomic_long_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_inc_return_release(atomic_long_t *v) { @@ -1491,6 +3617,16 @@ atomic_long_inc_return_release(atomic_long_t *v) return raw_atomic_long_inc_return_release(v); } +/** + * atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_inc_return_relaxed(atomic_long_t *v) { @@ -1498,6 +3634,16 @@ atomic_long_inc_return_relaxed(atomic_long_t *v) return raw_atomic_long_inc_return_relaxed(v); } +/** + * atomic_long_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_inc(atomic_long_t *v) { @@ -1506,6 +3652,16 @@ atomic_long_fetch_inc(atomic_long_t *v) return raw_atomic_long_fetch_inc(v); } +/** + * atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_inc_acquire(atomic_long_t *v) { @@ -1513,6 +3669,16 @@ atomic_long_fetch_inc_acquire(atomic_long_t *v) return raw_atomic_long_fetch_inc_acquire(v); } +/** + * atomic_long_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_release() there. + * + * Return: The original value of @v. 
+ */ static __always_inline long atomic_long_fetch_inc_release(atomic_long_t *v) { @@ -1521,6 +3687,16 @@ atomic_long_fetch_inc_release(atomic_long_t *v) return raw_atomic_long_fetch_inc_release(v); } +/** + * atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_inc_relaxed(atomic_long_t *v) { @@ -1528,6 +3704,16 @@ atomic_long_fetch_inc_relaxed(atomic_long_t *v) return raw_atomic_long_fetch_inc_relaxed(v); } +/** + * atomic_long_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec() there. + * + * Return: Nothing. + */ static __always_inline void atomic_long_dec(atomic_long_t *v) { @@ -1535,6 +3721,16 @@ atomic_long_dec(atomic_long_t *v) raw_atomic_long_dec(v); } +/** + * atomic_long_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_dec_return(atomic_long_t *v) { @@ -1543,6 +3739,16 @@ atomic_long_dec_return(atomic_long_t *v) return raw_atomic_long_dec_return(v); } +/** + * atomic_long_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_acquire() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_dec_return_acquire(atomic_long_t *v) { @@ -1550,6 +3756,16 @@ atomic_long_dec_return_acquire(atomic_long_t *v) return raw_atomic_long_dec_return_acquire(v); } +/** + * atomic_long_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_release() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_dec_return_release(atomic_long_t *v) { @@ -1558,6 +3774,16 @@ atomic_long_dec_return_release(atomic_long_t *v) return raw_atomic_long_dec_return_release(v); } +/** + * atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_relaxed() there. + * + * Return: The updated value of @v. + */ static __always_inline long atomic_long_dec_return_relaxed(atomic_long_t *v) { @@ -1565,6 +3791,16 @@ atomic_long_dec_return_relaxed(atomic_long_t *v) return raw_atomic_long_dec_return_relaxed(v); } +/** + * atomic_long_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec() there. + * + * Return: The original value of @v. 
+ */ static __always_inline long atomic_long_fetch_dec(atomic_long_t *v) { @@ -1573,6 +3809,16 @@ atomic_long_fetch_dec(atomic_long_t *v) return raw_atomic_long_fetch_dec(v); } +/** + * atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_dec_acquire(atomic_long_t *v) { @@ -1580,6 +3826,16 @@ atomic_long_fetch_dec_acquire(atomic_long_t *v) return raw_atomic_long_fetch_dec_acquire(v); } +/** + * atomic_long_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_release() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_dec_release(atomic_long_t *v) { @@ -1588,6 +3844,16 @@ atomic_long_fetch_dec_release(atomic_long_t *v) return raw_atomic_long_fetch_dec_release(v); } +/** + * atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_dec_relaxed(atomic_long_t *v) { @@ -1595,6 +3861,17 @@ atomic_long_fetch_dec_relaxed(atomic_long_t *v) return raw_atomic_long_fetch_dec_relaxed(v); } +/** + * atomic_long_and() - atomic bitwise AND with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_and() there. + * + * Return: Nothing. + */ static __always_inline void atomic_long_and(long i, atomic_long_t *v) { @@ -1602,6 +3879,17 @@ atomic_long_and(long i, atomic_long_t *v) raw_atomic_long_and(i, v); } +/** + * atomic_long_fetch_and() - atomic bitwise AND with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_and(long i, atomic_long_t *v) { @@ -1610,6 +3898,17 @@ atomic_long_fetch_and(long i, atomic_long_t *v) return raw_atomic_long_fetch_and(i, v); } +/** + * atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { @@ -1617,6 +3916,17 @@ atomic_long_fetch_and_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_and_acquire(i, v); } +/** + * atomic_long_fetch_and_release() - atomic bitwise AND with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_release() there. + * + * Return: The original value of @v. 
+ */ static __always_inline long atomic_long_fetch_and_release(long i, atomic_long_t *v) { @@ -1625,6 +3935,17 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_and_release(i, v); } +/** + * atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { @@ -1632,6 +3953,17 @@ atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_and_relaxed(i, v); } +/** + * atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_andnot() there. + * + * Return: Nothing. + */ static __always_inline void atomic_long_andnot(long i, atomic_long_t *v) { @@ -1639,6 +3971,17 @@ atomic_long_andnot(long i, atomic_long_t *v) raw_atomic_long_andnot(i, v); } +/** + * atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_andnot(long i, atomic_long_t *v) { @@ -1647,6 +3990,17 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v) return raw_atomic_long_fetch_andnot(i, v); } +/** + * atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { @@ -1654,6 +4008,17 @@ atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_andnot_acquire(i, v); } +/** + * atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_release() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { @@ -1662,6 +4027,17 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_andnot_release(i, v); } +/** + * atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_relaxed() there. + * + * Return: The original value of @v. 
+ */ static __always_inline long atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { @@ -1669,6 +4045,17 @@ atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_andnot_relaxed(i, v); } +/** + * atomic_long_or() - atomic bitwise OR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_or() there. + * + * Return: Nothing. + */ static __always_inline void atomic_long_or(long i, atomic_long_t *v) { @@ -1676,6 +4063,17 @@ atomic_long_or(long i, atomic_long_t *v) raw_atomic_long_or(i, v); } +/** + * atomic_long_fetch_or() - atomic bitwise OR with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_or(long i, atomic_long_t *v) { @@ -1684,6 +4082,17 @@ atomic_long_fetch_or(long i, atomic_long_t *v) return raw_atomic_long_fetch_or(i, v); } +/** + * atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { @@ -1691,6 +4100,17 @@ atomic_long_fetch_or_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_or_acquire(i, v); } +/** + * atomic_long_fetch_or_release() - atomic bitwise OR with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_release() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_or_release(long i, atomic_long_t *v) { @@ -1699,6 +4119,17 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_or_release(i, v); } +/** + * atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { @@ -1706,6 +4137,17 @@ atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_or_relaxed(i, v); } +/** + * atomic_long_xor() - atomic bitwise XOR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xor() there. + * + * Return: Nothing. + */ static __always_inline void atomic_long_xor(long i, atomic_long_t *v) { @@ -1713,6 +4155,17 @@ atomic_long_xor(long i, atomic_long_t *v) raw_atomic_long_xor(i, v); } +/** + * atomic_long_fetch_xor() - atomic bitwise XOR with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor() there. 
+ * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_xor(long i, atomic_long_t *v) { @@ -1721,6 +4174,17 @@ atomic_long_fetch_xor(long i, atomic_long_t *v) return raw_atomic_long_fetch_xor(i, v); } +/** + * atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { @@ -1728,6 +4192,17 @@ atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) return raw_atomic_long_fetch_xor_acquire(i, v); } +/** + * atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_release() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_xor_release(long i, atomic_long_t *v) { @@ -1736,6 +4211,17 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v) return raw_atomic_long_fetch_xor_release(i, v); } +/** + * atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { @@ -1743,6 +4229,17 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) return raw_atomic_long_fetch_xor_relaxed(i, v); } +/** + * atomic_long_xchg() - atomic exchange with full ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xchg() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_xchg(atomic_long_t *v, long new) { @@ -1751,6 +4248,17 @@ atomic_long_xchg(atomic_long_t *v, long new) return raw_atomic_long_xchg(v, new); } +/** + * atomic_long_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_xchg_acquire(atomic_long_t *v, long new) { @@ -1758,6 +4266,17 @@ atomic_long_xchg_acquire(atomic_long_t *v, long new) return raw_atomic_long_xchg_acquire(v, new); } +/** + * atomic_long_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_release() there. + * + * Return: The original value of @v. 
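+ * + * A release exchange is typically paired with an acquire operation on the + * consumer side; as a sketch (the names are hypothetical), a producer can + * swap in a new state and retire the old one: + * + *   old_state = atomic_long_xchg_release(&cur_state, new_state); + *   retire_state(old_state);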
+ */ static __always_inline long atomic_long_xchg_release(atomic_long_t *v, long new) { @@ -1766,6 +4285,17 @@ atomic_long_xchg_release(atomic_long_t *v, long new) return raw_atomic_long_xchg_release(v, new); } +/** + * atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_xchg_relaxed(atomic_long_t *v, long new) { @@ -1773,6 +4303,18 @@ atomic_long_xchg_relaxed(atomic_long_t *v, long new) return raw_atomic_long_xchg_relaxed(v, new); } +/** + * atomic_long_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { @@ -1781,6 +4323,18 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new) return raw_atomic_long_cmpxchg(v, old, new); } +/** + * atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_acquire() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { @@ -1788,6 +4342,18 @@ atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) return raw_atomic_long_cmpxchg_acquire(v, old, new); } +/** + * atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_release() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { @@ -1796,6 +4362,18 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) return raw_atomic_long_cmpxchg_release(v, old, new); } +/** + * atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_relaxed() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { @@ -1803,6 +4381,19 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) return raw_atomic_long_cmpxchg_relaxed(v, old, new); } +/** + * atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. 
+ * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { @@ -1812,6 +4403,19 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) return raw_atomic_long_try_cmpxchg(v, old, new); } +/** + * atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_acquire() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { @@ -1820,6 +4424,19 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) return raw_atomic_long_try_cmpxchg_acquire(v, old, new); } +/** + * atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_release() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { @@ -1829,6 +4446,19 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) return raw_atomic_long_try_cmpxchg_release(v, old, new); } +/** + * atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_relaxed() there. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { @@ -1837,6 +4467,17 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) return raw_atomic_long_try_cmpxchg_relaxed(v, old, new); } +/** + * atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_sub_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_long_sub_and_test(long i, atomic_long_t *v) { @@ -1845,6 +4486,16 @@ atomic_long_sub_and_test(long i, atomic_long_t *v) return raw_atomic_long_sub_and_test(i, v); } +/** + * atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_and_test() there.
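+ * + * This is the usual building block for reference counting; as a sketch + * (the names are hypothetical), the caller that drops the last reference + * frees the object: + * + *   if (atomic_long_dec_and_test(&obj->refs)) + *           free_obj(obj);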
+ * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_long_dec_and_test(atomic_long_t *v) { @@ -1853,6 +4504,16 @@ atomic_long_dec_and_test(atomic_long_t *v) return raw_atomic_long_dec_and_test(v); } +/** + * atomic_long_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_and_test() there. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool atomic_long_inc_and_test(atomic_long_t *v) { @@ -1861,6 +4522,17 @@ atomic_long_inc_and_test(atomic_long_t *v) return raw_atomic_long_inc_and_test(v); } +/** + * atomic_long_add_negative() - atomic add and test if negative with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_long_add_negative(long i, atomic_long_t *v) { @@ -1869,6 +4541,17 @@ atomic_long_add_negative(long i, atomic_long_t *v) return raw_atomic_long_add_negative(i, v); } +/** + * atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_acquire() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_long_add_negative_acquire(long i, atomic_long_t *v) { @@ -1876,6 +4559,17 @@ atomic_long_add_negative_acquire(long i, atomic_long_t *v) return raw_atomic_long_add_negative_acquire(i, v); } +/** + * atomic_long_add_negative_release() - atomic add and test if negative with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_release() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_long_add_negative_release(long i, atomic_long_t *v) { @@ -1884,6 +4578,17 @@ atomic_long_add_negative_release(long i, atomic_long_t *v) return raw_atomic_long_add_negative_release(i, v); } +/** + * atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_relaxed() there. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { @@ -1891,6 +4596,18 @@ atomic_long_add_negative_relaxed(long i, atomic_long_t *v) return raw_atomic_long_add_negative_relaxed(i, v); } +/** + * atomic_long_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_long_t + * @a: long value to add + * @u: long value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. 
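+ * + * As a sketch (the names are hypothetical), a caller can take a reference + * only while the count has not already dropped to zero: + * + *   if (atomic_long_fetch_add_unless(&obj->refs, 1, 0) == 0) + *           return NULL;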
+ * + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_unless() there. + * + * Return: The original value of @v. + */ static __always_inline long atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { @@ -1899,6 +4616,18 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) return raw_atomic_long_fetch_add_unless(v, a, u); } +/** + * atomic_long_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_long_t + * @a: long value to add + * @u: long value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_add_unless() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_long_add_unless(atomic_long_t *v, long a, long u) { @@ -1907,6 +4636,16 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u) return raw_atomic_long_add_unless(v, a, u); } +/** + * atomic_long_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic_long_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_not_zero() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_long_inc_not_zero(atomic_long_t *v) { @@ -1915,6 +4654,16 @@ atomic_long_inc_not_zero(atomic_long_t *v) return raw_atomic_long_inc_not_zero(v); } +/** + * atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic_long_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_inc_unless_negative() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_long_inc_unless_negative(atomic_long_t *v) { @@ -1923,6 +4672,16 @@ atomic_long_inc_unless_negative(atomic_long_t *v) return raw_atomic_long_inc_unless_negative(v); } +/** + * atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_unless_positive() there. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool atomic_long_dec_unless_positive(atomic_long_t *v) { @@ -1931,6 +4690,16 @@ atomic_long_dec_unless_positive(atomic_long_t *v) return raw_atomic_long_dec_unless_positive(v); } +/** + * atomic_long_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Unsafe to use in noinstr code; use raw_atomic_long_dec_if_positive() there. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated.
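+ * + * As a sketch (the names are hypothetical), a caller can consume one unit + * of a resource only while some remain: + * + *   if (atomic_long_dec_if_positive(&pool->available) < 0) + *           return -EBUSY;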
+ */ static __always_inline long atomic_long_dec_if_positive(atomic_long_t *v) { @@ -2231,4 +5000,4 @@ atomic_long_dec_if_positive(atomic_long_t *v) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c +// 06cec02e676a484857aee38b0071a1d846ec9457 diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index f564f71ff8afc..f6df2adadf997 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -21,6 +21,16 @@ typedef atomic_t atomic_long_t; #define atomic_long_cond_read_relaxed atomic_cond_read_relaxed #endif +/** + * raw_atomic_long_read() - atomic load with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically loads the value of @v with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_read() elsewhere. + * + * Return: The value loaded from @v. + */ static __always_inline long raw_atomic_long_read(const atomic_long_t *v) { @@ -31,6 +41,16 @@ raw_atomic_long_read(const atomic_long_t *v) #endif } +/** + * raw_atomic_long_read_acquire() - atomic load with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically loads the value of @v with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_read_acquire() elsewhere. + * + * Return: The value loaded from @v. + */ static __always_inline long raw_atomic_long_read_acquire(const atomic_long_t *v) { @@ -41,6 +61,17 @@ raw_atomic_long_read_acquire(const atomic_long_t *v) #endif } +/** + * raw_atomic_long_set() - atomic set with relaxed ordering + * @v: pointer to atomic_long_t + * @i: long value to assign + * + * Atomically sets @v to @i with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_set() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_set(atomic_long_t *v, long i) { @@ -51,6 +82,17 @@ raw_atomic_long_set(atomic_long_t *v, long i) #endif } +/** + * raw_atomic_long_set_release() - atomic set with release ordering + * @v: pointer to atomic_long_t + * @i: long value to assign + * + * Atomically sets @v to @i with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_set_release() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_set_release(atomic_long_t *v, long i) { @@ -61,6 +103,17 @@ raw_atomic_long_set_release(atomic_long_t *v, long i) #endif } +/** + * raw_atomic_long_add() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_add(long i, atomic_long_t *v) { @@ -71,6 +124,17 @@ raw_atomic_long_add(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_return() - atomic add with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_add_return(long i, atomic_long_t *v) { @@ -81,6 +145,17 @@ raw_atomic_long_add_return(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_return_acquire() - atomic add with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. 
+ * + * Safe to use in noinstr code; prefer atomic_long_add_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { @@ -91,6 +166,17 @@ raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_return_release() - atomic add with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_add_return_release(long i, atomic_long_t *v) { @@ -101,6 +187,17 @@ raw_atomic_long_add_return_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_return_relaxed() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { @@ -111,6 +208,17 @@ raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add() - atomic add with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_add(long i, atomic_long_t *v) { @@ -121,6 +229,17 @@ raw_atomic_long_fetch_add(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add_acquire() - atomic add with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { @@ -131,6 +250,17 @@ raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add_release() - atomic add with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { @@ -141,6 +271,17 @@ raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add_relaxed() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline long raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { @@ -151,6 +292,17 @@ raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_sub(long i, atomic_long_t *v) { @@ -161,6 +313,17 @@ raw_atomic_long_sub(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub_return() - atomic subtract with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_sub_return(long i, atomic_long_t *v) { @@ -171,6 +334,17 @@ raw_atomic_long_sub_return(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub_return_acquire() - atomic subtract with acquire ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { @@ -181,6 +355,17 @@ raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub_return_release() - atomic subtract with release ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { @@ -191,6 +376,17 @@ raw_atomic_long_sub_return_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { @@ -201,6 +397,17 @@ raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_sub() - atomic subtract with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_sub() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { @@ -211,6 +418,17 @@ raw_atomic_long_fetch_sub(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_acquire() elsewhere. 
+ * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { @@ -221,6 +439,17 @@ raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_sub_release() - atomic subtract with release ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { @@ -231,6 +460,17 @@ raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { @@ -241,6 +481,16 @@ raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_inc(atomic_long_t *v) { @@ -251,6 +501,16 @@ raw_atomic_long_inc(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_return() - atomic increment with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_inc_return(atomic_long_t *v) { @@ -261,6 +521,16 @@ raw_atomic_long_inc_return(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_return_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_return_acquire() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_inc_return_acquire(atomic_long_t *v) { @@ -271,6 +541,16 @@ raw_atomic_long_inc_return_acquire(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_return_release() - atomic increment with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_inc_return_release(atomic_long_t *v) { @@ -281,6 +561,16 @@ raw_atomic_long_inc_return_release(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_return_relaxed() elsewhere. + * + * Return: The updated value of @v. 
+ */ static __always_inline long raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { @@ -291,6 +581,16 @@ raw_atomic_long_inc_return_relaxed(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_inc() - atomic increment with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_inc() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_inc(atomic_long_t *v) { @@ -301,6 +601,16 @@ raw_atomic_long_fetch_inc(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { @@ -311,6 +621,16 @@ raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_inc_release() - atomic increment with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_inc_release(atomic_long_t *v) { @@ -321,6 +641,16 @@ raw_atomic_long_fetch_inc_release(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { @@ -331,6 +661,16 @@ raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_dec(atomic_long_t *v) { @@ -341,6 +681,16 @@ raw_atomic_long_dec(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_return() - atomic decrement with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_return() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_dec_return(atomic_long_t *v) { @@ -351,6 +701,16 @@ raw_atomic_long_dec_return(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_return_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_return_acquire() elsewhere. + * + * Return: The updated value of @v. 
+ */ static __always_inline long raw_atomic_long_dec_return_acquire(atomic_long_t *v) { @@ -361,6 +721,16 @@ raw_atomic_long_dec_return_acquire(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_return_release() - atomic decrement with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_return_release() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_dec_return_release(atomic_long_t *v) { @@ -371,6 +741,16 @@ raw_atomic_long_dec_return_release(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_return_relaxed() elsewhere. + * + * Return: The updated value of @v. + */ static __always_inline long raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { @@ -381,6 +761,16 @@ raw_atomic_long_dec_return_relaxed(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_dec() - atomic decrement with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_dec() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_dec(atomic_long_t *v) { @@ -391,6 +781,16 @@ raw_atomic_long_fetch_dec(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { @@ -401,6 +801,16 @@ raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_dec_release() - atomic decrement with release ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_dec_release(atomic_long_t *v) { @@ -411,6 +821,16 @@ raw_atomic_long_fetch_dec_release(atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { @@ -421,6 +841,17 @@ raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) #endif } +/** + * raw_atomic_long_and() - atomic bitwise AND with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_and() elsewhere. + * + * Return: Nothing. 
+ */ static __always_inline void raw_atomic_long_and(long i, atomic_long_t *v) { @@ -431,6 +862,17 @@ raw_atomic_long_and(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_and() - atomic bitwise AND with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_and() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_and(long i, atomic_long_t *v) { @@ -441,6 +883,17 @@ raw_atomic_long_fetch_and(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_and_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { @@ -451,6 +904,17 @@ raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_and_release() - atomic bitwise AND with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_and_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { @@ -461,6 +925,17 @@ raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_and_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { @@ -471,6 +946,17 @@ raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_andnot() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_andnot(long i, atomic_long_t *v) { @@ -481,6 +967,17 @@ raw_atomic_long_andnot(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { @@ -491,6 +988,17 @@ raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_acquire() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline long raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { @@ -501,6 +1009,17 @@ raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { @@ -511,6 +1030,17 @@ raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v & ~@i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { @@ -521,6 +1051,17 @@ raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_or() - atomic bitwise OR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_or() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_or(long i, atomic_long_t *v) { @@ -531,6 +1072,17 @@ raw_atomic_long_or(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_or() - atomic bitwise OR with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_or() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_or(long i, atomic_long_t *v) { @@ -541,6 +1093,17 @@ raw_atomic_long_fetch_or(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_or_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { @@ -551,6 +1114,17 @@ raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_or_release() - atomic bitwise OR with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_or_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { @@ -561,6 +1135,17 @@ raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v | @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_or_relaxed() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline long raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { @@ -571,6 +1156,17 @@ raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_xor() - atomic bitwise XOR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xor() elsewhere. + * + * Return: Nothing. + */ static __always_inline void raw_atomic_long_xor(long i, atomic_long_t *v) { @@ -581,6 +1177,17 @@ raw_atomic_long_xor(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_xor() - atomic bitwise XOR with full ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_xor() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { @@ -591,6 +1198,17 @@ raw_atomic_long_fetch_xor(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { @@ -601,6 +1219,17 @@ raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { @@ -611,6 +1240,17 @@ raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering + * @i: long value + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v ^ @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { @@ -621,6 +1261,17 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_xchg() - atomic exchange with full ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xchg() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_xchg(atomic_long_t *v, long new) { @@ -631,6 +1282,17 @@ raw_atomic_long_xchg(atomic_long_t *v, long new) #endif } +/** + * raw_atomic_long_xchg_acquire() - atomic exchange with acquire ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xchg_acquire() elsewhere. + * + * Return: The original value of @v. 
+ */ static __always_inline long raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) { @@ -641,6 +1303,17 @@ raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) #endif } +/** + * raw_atomic_long_xchg_release() - atomic exchange with release ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xchg_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_xchg_release(atomic_long_t *v, long new) { @@ -651,6 +1324,17 @@ raw_atomic_long_xchg_release(atomic_long_t *v, long new) #endif } +/** + * raw_atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @new: long value to assign + * + * Atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_xchg_relaxed() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) { @@ -661,6 +1345,18 @@ raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) #endif } +/** + * raw_atomic_long_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_cmpxchg() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { @@ -671,6 +1367,18 @@ raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) #endif } +/** + * raw_atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_acquire() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { @@ -681,6 +1389,18 @@ raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) #endif } +/** + * raw_atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_release() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { @@ -691,6 +1411,18 @@ raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) #endif } +/** + * raw_atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @old: long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_relaxed() elsewhere. + * + * Return: The original value of @v. 
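+ * + * A relaxed compare-and-exchange suits retry loops that need no ordering + * against surrounding accesses; as a sketch (the names are hypothetical), + * a saturating counter increment: + * + *   old = raw_atomic_long_read(&stat); + *   while (old < STAT_MAX) { + *           prev = raw_atomic_long_cmpxchg_relaxed(&stat, old, old + 1); + *           if (prev == old) + *                   break; + *           old = prev; + *   }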
+ */ static __always_inline long raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { @@ -701,6 +1433,19 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) #endif } +/** + * raw_atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with full ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { @@ -711,6 +1456,19 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) #endif } +/** + * raw_atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with acquire ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_acquire() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { @@ -721,6 +1479,19 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) #endif } +/** + * raw_atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with release ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_release() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { @@ -731,6 +1502,19 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) #endif } +/** + * raw_atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering + * @v: pointer to atomic_long_t + * @old: pointer to long value to compare with + * @new: long value to assign + * + * If (@v == @old), atomically updates @v to @new with relaxed ordering. + * Otherwise, updates @old to the current value of @v. + * + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_relaxed() elsewhere. + * + * Return: @true if the exchange occurred, @false otherwise. + */ static __always_inline bool raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { @@ -741,6 +1525,17 @@ raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) #endif } +/** + * raw_atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering + * @i: long value to subtract + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_sub_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise.
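+ * + * As a sketch (the names are hypothetical), a noinstr path that drops a + * batch of references should use this raw form so that no instrumentation + * is emitted: + * + *   if (raw_atomic_long_sub_and_test(batch, &pool->users)) + *           mark_pool_idle(pool);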
+ */ static __always_inline bool raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { @@ -751,6 +1546,16 @@ raw_atomic_long_sub_and_test(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_long_dec_and_test(atomic_long_t *v) { @@ -761,6 +1566,16 @@ raw_atomic_long_dec_and_test(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_and_test() - atomic increment and test if zero with full ordering + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_and_test() elsewhere. + * + * Return: @true if the resulting value of @v is zero, @false otherwise. + */ static __always_inline bool raw_atomic_long_inc_and_test(atomic_long_t *v) { @@ -771,6 +1586,17 @@ raw_atomic_long_inc_and_test(atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_negative() - atomic add and test if negative with full ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_negative(long i, atomic_long_t *v) { @@ -781,6 +1607,17 @@ raw_atomic_long_add_negative(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with acquire ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative_acquire() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { @@ -791,6 +1628,17 @@ raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_negative_release() - atomic add and test if negative with release ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with release ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative_release() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { @@ -801,6 +1649,17 @@ raw_atomic_long_add_negative_release(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative_relaxed() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. 
+/** + * raw_atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering + * @i: long value to add + * @v: pointer to atomic_long_t + * + * Atomically updates @v to (@v + @i) with relaxed ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_negative_relaxed() elsewhere. + * + * Return: @true if the resulting value of @v is negative, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { @@ -811,6 +1670,18 @@ raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) #endif } +/** + * raw_atomic_long_fetch_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_long_t + * @a: long value to add + * @u: long value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_fetch_add_unless() elsewhere. + * + * Return: The original value of @v. + */ static __always_inline long raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { @@ -821,6 +1692,18 @@ raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) #endif } +/** + * raw_atomic_long_add_unless() - atomic add unless value with full ordering + * @v: pointer to atomic_long_t + * @a: long value to add + * @u: long value to compare with + * + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_add_unless() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { @@ -831,6 +1714,16 @@ raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) #endif } +/** + * raw_atomic_long_inc_not_zero() - atomic increment unless zero with full ordering + * @v: pointer to atomic_long_t + * + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_not_zero() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_long_inc_not_zero(atomic_long_t *v) { @@ -841,6 +1734,16 @@ raw_atomic_long_inc_not_zero(atomic_long_t *v) #endif } +/** + * raw_atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering + * @v: pointer to atomic_long_t + * + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_inc_unless_negative() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_long_inc_unless_negative(atomic_long_t *v) { @@ -851,6 +1754,16 @@ raw_atomic_long_inc_unless_negative(atomic_long_t *v) #endif } +/** + * raw_atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_unless_positive() elsewhere. + * + * Return: @true if @v was updated, @false otherwise. + */ static __always_inline bool raw_atomic_long_dec_unless_positive(atomic_long_t *v) { @@ -861,6 +1774,16 @@ raw_atomic_long_dec_unless_positive(atomic_long_t *v) #endif }
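The conditional ops above (fetch_add_unless(), add_unless(), inc_not_zero(), and friends) only perform the update when a predicate holds, which is what lookup-side code needs: once a refcount has reached zero it must never be resurrected. A sketch reusing the hypothetical obj type from the previous example:

#include <linux/atomic.h>

static struct obj *obj_tryget(struct obj *o)
{
	/* Fails, rather than resurrecting @o, once the final put has seen zero. */
	if (!atomic_long_inc_not_zero(&o->refs))
		return NULL;
	return o;
}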
+/** + * raw_atomic_long_dec_if_positive() - atomic decrement if positive with full ordering + * @v: pointer to atomic_long_t + * + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering. + * + * Safe to use in noinstr code; prefer atomic_long_dec_if_positive() elsewhere. + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ static __always_inline long raw_atomic_long_dec_if_positive(atomic_long_t *v) { @@ -872,4 +1795,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v) } #endif /* _LINUX_ATOMIC_LONG_H */ -// e785d25cc3f220b7d473d36aac9da85dd7eb13a8 +// 029d2e3a493086671e874a4c2e0e42084be42403 diff --git a/scripts/atomic/atomic-tbl.sh b/scripts/atomic/atomic-tbl.sh index 81d5c32039dd4..d4d4b474e8d56 100755 --- a/scripts/atomic/atomic-tbl.sh +++ b/scripts/atomic/atomic-tbl.sh @@ -36,9 +36,16 @@ meta_has_relaxed() meta_in "$1" "BFIR" } -#find_fallback_template(pfx, name, sfx, order) -find_fallback_template() +#meta_is_implicitly_relaxed(meta) +meta_is_implicitly_relaxed() +{ + meta_in "$1" "vls" +} + +#find_template(tmpltype, pfx, name, sfx, order) +find_template() { + local tmpltype="$1"; shift local pfx="$1"; shift local name="$1"; shift local sfx="$1"; shift @@ -52,8 +59,8 @@ find_fallback_template() # # Start at the most specific, and fall back to the most general. Once # we find a specific fallback, don't bother looking for more. - for base in "${pfx}${name}${sfx}${order}" "${name}"; do - file="${ATOMICDIR}/fallbacks/${base}" + for base in "${pfx}${name}${sfx}${order}" "${pfx}${name}${sfx}" "${name}"; do + file="${ATOMICDIR}/${tmpltype}/${base}" if [ -f "${file}" ]; then printf "${file}" @@ -62,6 +69,18 @@ find_fallback_template() done } +#find_fallback_template(pfx, name, sfx, order) +find_fallback_template() +{ + find_template "fallbacks" "$@" +} + +#find_kerneldoc_template(pfx, name, sfx, order) +find_kerneldoc_template() +{ + find_template "kerneldoc" "$@" +} + #gen_ret_type(meta, int) gen_ret_type() { local meta="$1"; shift @@ -142,6 +161,91 @@ gen_args() done } +#gen_desc_return(meta) +gen_desc_return() +{ + local meta="$1"; shift + + case "${meta}" in + [v]) + printf "Return: Nothing." + ;; + [Ff]) + printf "Return: The original value of @v." + ;; + [R]) + printf "Return: The updated value of @v." + ;; + [l]) + printf "Return: The value of @v." + ;; + esac +} + +#gen_template_kerneldoc(template, class, meta, pfx, name, sfx, order, atomic, int, args...) +gen_template_kerneldoc() +{ + local template="$1"; shift + local class="$1"; shift + local meta="$1"; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local order="$1"; shift + local atomic="$1"; shift + local int="$1"; shift + + local atomicname="${atomic}_${pfx}${name}${sfx}${order}" + + local ret="$(gen_ret_type "${meta}" "${int}")" + local retstmt="$(gen_ret_stmt "${meta}")" + local params="$(gen_params "${int}" "${atomic}" "$@")" + local args="$(gen_args "$@")" + local desc_order="" + local desc_instrumentation="" + local desc_return="" + + if [ ! -z "${order}" ]; then + desc_order="${order##_}" + elif meta_is_implicitly_relaxed "${meta}"; then + desc_order="relaxed" + else + desc_order="full" + fi + + if [ -z "${class}" ]; then + desc_noinstr="Unsafe to use in noinstr code; use raw_${atomicname}() there." + else + desc_noinstr="Safe to use in noinstr code; prefer ${atomicname}() elsewhere." + fi + + desc_return="$(gen_desc_return "${meta}")" + + . ${template} +} + +#gen_kerneldoc(class, meta, pfx, name, sfx, order, atomic, int, args...)
+gen_kerneldoc() +{ + local class="$1"; shift + local meta="$1"; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local order="$1"; shift + + local atomicname="${atomic}_${pfx}${name}${sfx}${order}" + + local tmpl="$(find_kerneldoc_template "${pfx}" "${name}" "${sfx}" "${order}")" + if [ -z "${tmpl}" ]; then + printf "/*\n" + printf " * No kerneldoc available for ${class}${atomicname}\n" + printf " */\n" + else + gen_template_kerneldoc "${tmpl}" "${class}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@" + fi +} + #gen_proto_order_variants(meta, pfx, name, sfx, ...) gen_proto_order_variants() { diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh index 2b470d31e3539..c0c8a85d7c81b 100755 --- a/scripts/atomic/gen-atomic-fallback.sh +++ b/scripts/atomic/gen-atomic-fallback.sh @@ -73,6 +73,8 @@ gen_proto_order_variant() local params="$(gen_params "${int}" "${atomic}" "$@")" local args="$(gen_args "$@")" + gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@" + printf "static __always_inline ${ret}\n" printf "raw_${atomicname}(${params})\n" printf "{\n" diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh index 93c949aa9e544..9d3863ceb4d48 100755 --- a/scripts/atomic/gen-atomic-instrumented.sh +++ b/scripts/atomic/gen-atomic-instrumented.sh @@ -67,6 +67,8 @@ gen_proto_order_variant() local checks="$(gen_params_checks "${meta}" "${order}" "$@")" local args="$(gen_args "$@")" local retstmt="$(gen_ret_stmt "${meta}")" + + gen_kerneldoc "" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@" + cat <<EOF [...] + * If (@v > 0), atomically updates @v to (@v - 1) with ${desc_order} ordering. + * + * ${desc_noinstr} + * + * Return: The old value of (@v - 1), regardless of whether @v was updated. + */ +EOF diff --git a/scripts/atomic/kerneldoc/dec_unless_positive b/scripts/atomic/kerneldoc/dec_unless_positive new file mode 100644 index 0000000000000..ee73612f03547 --- /dev/null +++ b/scripts/atomic/kerneldoc/dec_unless_positive @@ -0,0 +1,12 @@ +cat <<EOF [...] + * If (@v >= 0), atomically updates @v to (@v + 1) with ${desc_order} ordering. + * + * ${desc_noinstr} + * + * Return: @true if @v was updated, @false otherwise. + */ +EOF
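With gen_kerneldoc() hooked into the two generators above, every emitted op carries a comment built from these templates. For orientation, a fully-ordered instrumented wrapper comes out of gen-atomic-instrumented.sh in roughly this shape (reconstructed from the generator logic and abbreviated, not quoted from the patch):

/**
 * atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering
 * ...
 * Unsafe to use in noinstr code; use raw_atomic_long_dec_and_test() there.
 */
static __always_inline bool
atomic_long_dec_and_test(atomic_long_t *v)
{
	kcsan_mb();				/* fully-ordered RMW */
	instrument_atomic_read_write(v, sizeof(*v));
	return raw_atomic_long_dec_and_test(v);
}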
diff --git a/scripts/atomic/kerneldoc/or b/scripts/atomic/kerneldoc/or new file mode 100644 index 0000000000000..55b33de504165 --- /dev/null +++ b/scripts/atomic/kerneldoc/or @@ -0,0 +1,13 @@ +cat <<EOF [...] From patchwork Mon Jun 5 07:01:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 103119 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2500783vqr; Mon, 5 Jun 2023 00:11:20 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4xXaq4qjswXsPBwFNc4DnIZuBu5ABRawIWV1tjzcShOWUvjIurqlLRDtkHOrd3mCZCFF+a X-Received: by 2002:a17:90a:7e94:b0:253:6e6f:f5c5 with SMTP id j20-20020a17090a7e9400b002536e6ff5c5mr7877891pjl.7.1685949080263; Mon, 05 Jun 2023 00:11:20 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685949080; cv=none; d=google.com; s=arc-20160816; b=Y/CbbLzamIu5+iyoD0ey3ZrY/JK8ymNf6CUAF3a4p17Fbj54eM2qr4FnCgfnd0zPgQ qxOqiDiGFIKD6J8nODF/VjI2ACjGuViptji1eA9Wl8DfcJhRd1oGBxhvDeSqYVS7AEoA Z6a4EZed7EZKmnFIuJsDt88Aa/AC7+TzHNUgAPjIws8SWdLGGKdl2Sc5opyYZu/sNZyl QKXDfqp987vUaICBkzZikqvkkWBWSZkaeeZikFqZGqxSZRTjKZfSppt3fH4OFHS66T7v R0Ak1rZJjl9qbDqNeDStkBJSk84/9DeOXN9Y7VaJ6ArTqDe4RIcjX07sLqvYWf7Sx7HG VJkg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=7+junzn/3qiqrSBTSJCmpRhohHV5HR4IC5AQxkWrHWI=; b=YnbwPLGLm6oc4a4AI0zZQZrEnB2jUPPjZ4TFzZVIVMuiwbouYsxOIcma+7ZcD1sMbg 9cbqZ38v1RwAG0SFw6M58EzdpCrQULQ3DgdWZ8t/5Z7toVY9HHt8z3lcIxNlG0yOI45u R8WydG3JkBwfBMrdwNiopJxfl24OhwHzND/6bAFZPwFsN5OYX7qV/FD6SE0tD3IvQKek EDe4WJ4t5IoeT41VxXg+JzHZiEt4G0OKj2m+pa3ncIVryCbNmtTvS9ObAuWG6Jrhvui+ 8V7ozz2ix9Ty3FjdcvyDh/bKRNv+YHW3JMlsMHJ7L01L3fFf/O/nWOJbbuUbdjxTo1Cg ScDw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email.
[2620:137:e000::1:20]) by mx.google.com with ESMTP id gf13-20020a17090ac7cd00b0025355f85165si5208025pjb.139.2023.06.05.00.11.08; Mon, 05 Jun 2023 00:11:20 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231269AbjFEHEr (ORCPT + 99 others); Mon, 5 Jun 2023 03:04:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35982 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230219AbjFEHEV (ORCPT ); Mon, 5 Jun 2023 03:04:21 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 1055E171C; Mon, 5 Jun 2023 00:03:39 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C420216F8; Mon, 5 Jun 2023 00:03:28 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7FABE3F793; Mon, 5 Jun 2023 00:02:41 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org, peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH v2 26/27] locking/atomic: docs: Add atomic operations to the driver basic API documentation Date: Mon, 5 Jun 2023 08:01:23 +0100 Message-Id: <20230605070124.3741859-27-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com> References: <20230605070124.3741859-1-mark.rutland@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767845742728272112?= X-GMAIL-MSGID: =?utf-8?q?1767845742728272112?= From: "Paul E. McKenney" Add the generated atomic headers to driver-api/basics.rst in order to provide documentation for the Linux kernel's atomic operations. At the same time, drop the x86 atomic header, which provides kerneldoc comments for some arch_atomic*_*() operations. The arch_atomic*_*() operations are now purely an implementation detail of the raw_atomic*_*() ops, and outside of implementing the atomics, code should use the raw_atomic*_*() forms.
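Once these directives are in place, the ops a driver is expected to reach for are the instrumented atomic*_*() ones, and those are what the driver-api book now documents. A trivial, hypothetical use in a driver (all names invented for illustration):

#include <linux/atomic.h>

static atomic_long_t rx_bytes;	/* hypothetical per-device statistic */

static void rx_account(long n)
{
	atomic_long_add(n, &rx_bytes);	/* documented via atomic-instrumented.h */
}

static long rx_total(void)
{
	return atomic_long_read(&rx_bytes);
}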
Signed-off-by: Paul E. McKenney [Mark: add atomic-{instrumented,long}.h, update commit message] Signed-off-by: Mark Rutland Reviewed-by: Kees Cook Cc: Akira Yokosawa Cc: Boqun Feng Cc: Jonathan Corbet Cc: Peter Zijlstra Cc: Will Deacon --- Documentation/driver-api/basics.rst | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst index 4b4d8e28d3be4..7671b531ba1a8 100644 --- a/Documentation/driver-api/basics.rst +++ b/Documentation/driver-api/basics.rst @@ -84,7 +84,13 @@ Reference counting Atomics ------- -.. kernel-doc:: arch/x86/include/asm/atomic.h +.. kernel-doc:: include/linux/atomic/atomic-instrumented.h + :internal: + +.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h + :internal: + +.. kernel-doc:: include/linux/atomic/atomic-long.h :internal: Kernel objects manipulation From patchwork Mon Jun 5 07:01:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 103130 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2501523vqr; Mon, 5 Jun 2023 00:12:52 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7DlcTpmAZkyRJFFFCDoOV/oO8YWGTW2AWvSwce+Brfy6UXWvDldhxBndoL39jtXeEx8p3F X-Received: by 2002:a67:fbc6:0:b0:43b:3d0b:c01 with SMTP id o6-20020a67fbc6000000b0043b3d0b0c01mr1305904vsr.1.1685949172418; Mon, 05 Jun 2023 00:12:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685949172; cv=none; d=google.com; s=arc-20160816; b=RJfJf7+AfbVdbteK9ZVg8I9r0UlFmMcASWGM8j4/pJZ/XkWYiSKDkRtXccf0iTJXJ5 LIYKLbWVHK0FfrXZ//H6BZs/JQhQGRBTTfTmKNjXyoIj97PIiIm2zXj+b/z98+fMAzh6 2QJ8zLcjJv2JGurZ08b25PLwwAN4j3xAWmv3qZ8fu/ZIl0+bHdR+R3pQ1GCObfiN+bBc p2ixey3LLfBClwH9ASMeKl9jYIw2CVIX9jcA4QvLOOGCtGP516YD7Gn73Rd7NxFlh6US V0HxK0wuS8pIkzWtfHLKynzDB1c+HZgBjb8JV5WwKrl1IVMEHM3V2YEMxhLrgGYN//QN C3rg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=zWgF0NmaAHJgpdfdDxmVuJg2LnbgP1r8d9QphaS7zUo=; b=ezxKX1uA8SAEB03bC4PbYgaXlChiZdbXZdL3GE/2lLw3QD86j4yTPeQoCaQ6AhbPyt gev3TU0WTF+xE9WqIeHa0oabEQLU7x9ZcfRt0cWdBjEe3KKB7NTmENBnbgGa/RsTM9MI KYMKHCKb9pCAMm9rJJ+uhCSFZEkT0AAUd23N3pSTGo9yJDRv3uV/OMlM1p1pbRMvZ2mx Qz3sCRM/9V1FaDnrywbuFIOkmR1cOaAksKSG71p9hUhH1G5cm7DgqIjsabr5qiyIbTmN Et53EECx1rK3guIPDBbY/I8TdTbFgtu40IW0GpfSTbKWUyiucWCH1AQM0C/aSGOfC/GE FgLA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email.
[2620:137:e000::1:20]) by mx.google.com with ESMTP id c6-20020a6566c6000000b0052c87a89084si4994008pgw.374.2023.06.05.00.12.40; Mon, 05 Jun 2023 00:12:52 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231219AbjFEHEG (ORCPT + 99 others); Mon, 5 Jun 2023 03:04:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36260 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230370AbjFEHDY (ORCPT ); Mon, 5 Jun 2023 03:03:24 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 5ACD9E5C; Mon, 5 Jun 2023 00:02:45 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C005B1655; Mon, 5 Jun 2023 00:03:30 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7AC143F793; Mon, 5 Jun 2023 00:02:43 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net, keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org, mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org, peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org, will@kernel.org Subject: [PATCH v2 27/27] locking/atomic: treewide: delete arch_atomic_*() kerneldoc Date: Mon, 5 Jun 2023 08:01:24 +0100 Message-Id: <20230605070124.3741859-28-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com> References: <20230605070124.3741859-1-mark.rutland@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767845839711208295?= X-GMAIL-MSGID: =?utf-8?q?1767845839711208295?= Currently several architectures have kerneldoc comments for arch_atomic_*(), which is unhelpful as these live in a shared namespace where they clash, and the arch_atomic_*() ops are now an implementation detail of the raw_atomic_*() ops, which no-one should use directly. Delete the kerneldoc comments for arch_atomic_*(), along with pseudo-kerneldoc comments which are in the correct style but are missing the leading '/**' necessary to be true kerneldoc comments. There should be no functional change as a result of this patch.
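The intended split, illustrated with an assumed pair of call sites (not from the patch): ordinary kernel code uses the instrumented atomic_*() ops, noinstr code uses the raw_atomic_*() ops, and nothing outside the atomics implementation spells arch_atomic_*() directly.

#include <linux/atomic.h>

static atomic_t entry_count;

void ordinary_path(void)
{
	atomic_inc(&entry_count);	/* instrumented for KASAN/KCSAN */
}

noinstr void early_entry_path(void)
{
	/* Raw form: safe where instrumentation must not run. */
	raw_atomic_inc(&entry_count);
}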
Signed-off-by: Mark Rutland Reviewed-by: Kees Cook Cc: Boqun Feng Cc: Jonathan Corbet Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Will Deacon --- arch/alpha/include/asm/atomic.h | 25 -------- arch/arc/include/asm/atomic64-arcv2.h | 17 ------ arch/hexagon/include/asm/atomic.h | 16 ----- arch/loongarch/include/asm/atomic.h | 49 --------------- arch/x86/include/asm/atomic.h | 87 --------------------------- arch/x86/include/asm/atomic64_32.h | 76 ----------------------- arch/x86/include/asm/atomic64_64.h | 81 ------------------------- 7 files changed, 351 deletions(-) diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index ec8ab552c527a..cbd9244571af0 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -200,15 +200,6 @@ ATOMIC_OPS(xor, xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -/** - * arch_atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, new, old; @@ -232,15 +223,6 @@ static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) } #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless -/** - * arch_atomic64_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { s64 c, new, old; @@ -264,13 +246,6 @@ static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) } #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless -/* - * arch_atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic_t - * - * The function returns the old value of *v minus 1, even if - * the atomic variable, v, was not decremented. - */ static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) { s64 old, tmp; diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h index 2b7c9e61a2947..6b6db981967ae 100644 --- a/arch/arc/include/asm/atomic64-arcv2.h +++ b/arch/arc/include/asm/atomic64-arcv2.h @@ -182,14 +182,6 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new) } #define arch_atomic64_xchg arch_atomic64_xchg -/** - * arch_atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic64_t - * - * The function returns the old value of *v minus 1, even if - * the atomic variable, v, was not decremented. - */ - static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) { s64 val; @@ -214,15 +206,6 @@ static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) } #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive -/** - * arch_atomic64_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, if it was not @u.
- * Returns the old value of @v - */ static inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { s64 old, temp; diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index 5c8440016c762..2447d083c432f 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -28,12 +28,6 @@ static inline void arch_atomic_set(atomic_t *v, int new) #define arch_atomic_set_release(v, i) arch_atomic_set((v), (i)) -/** - * arch_atomic_read - reads a word, atomically - * @v: pointer to atomic value - * - * Assumes all word reads on our architecture are atomic. - */ #define arch_atomic_read(v) READ_ONCE((v)->counter) #define ATOMIC_OP(op) \ @@ -112,16 +106,6 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -/** - * arch_atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer to value - * @a: amount to add - * @u: unless value is equal to u - * - * Returns old value. - * - */ - static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) { int __oldval; diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h index 8d73c85911b08..e27f0c72d3242 100644 --- a/arch/loongarch/include/asm/atomic.h +++ b/arch/loongarch/include/asm/atomic.h @@ -29,21 +29,7 @@ #define ATOMIC_INIT(i) { (i) } -/* - * arch_atomic_read - read atomic variable - * @v: pointer of type atomic_t - * - * Atomically reads the value of @v. - */ #define arch_atomic_read(v) READ_ONCE((v)->counter) - -/* - * arch_atomic_set - set atomic variable - * @v: pointer of type atomic_t - * @i: required value - * - * Atomically sets the value of @v to @i. - */ #define arch_atomic_set(v, i) WRITE_ONCE((v)->counter, (i)) #define ATOMIC_OP(op, I, asm_op) \ @@ -139,14 +125,6 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) } #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless -/* - * arch_atomic_sub_if_positive - conditionally subtract integer from atomic variable - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically test @v and subtract @i if @v is greater or equal than @i. - * The function returns the old value of @v minus @i. - */ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v) { int result; @@ -181,28 +159,13 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v) return result; } -/* - * arch_atomic_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic_t - */ #define arch_atomic_dec_if_positive(v) arch_atomic_sub_if_positive(1, v) #ifdef CONFIG_64BIT #define ATOMIC64_INIT(i) { (i) } -/* - * arch_atomic64_read - read atomic variable - * @v: pointer of type atomic64_t - * - */ #define arch_atomic64_read(v) READ_ONCE((v)->counter) - -/* - * arch_atomic64_set - set atomic variable - * @v: pointer of type atomic64_t - * @i: required value - */ #define arch_atomic64_set(v, i) WRITE_ONCE((v)->counter, (i)) #define ATOMIC64_OP(op, I, asm_op) \ @@ -297,14 +260,6 @@ static inline long arch_atomic64_fetch_add_unless(atomic64_t *v, long a, long u) } #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless -/* - * arch_atomic64_sub_if_positive - conditionally subtract integer from atomic variable - * @i: integer value to subtract - * @v: pointer of type atomic64_t - * - * Atomically test @v and subtract @i if @v is greater or equal than @i. - * The function returns the old value of @v minus @i. 
- */ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v) { long result; @@ -339,10 +294,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v) return result; } -/* - * arch_atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic64_t - */ #define arch_atomic64_dec_if_positive(v) arch_atomic64_sub_if_positive(1, v) #endif /* CONFIG_64BIT */ diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index 5e754e8957671..55a55ec043502 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -14,12 +14,6 @@ * resource counting etc.. */ -/** - * arch_atomic_read - read atomic variable - * @v: pointer of type atomic_t - * - * Atomically reads the value of @v. - */ static __always_inline int arch_atomic_read(const atomic_t *v) { /* @@ -29,25 +23,11 @@ static __always_inline int arch_atomic_read(const atomic_t *v) return __READ_ONCE((v)->counter); } -/** - * arch_atomic_set - set atomic variable - * @v: pointer of type atomic_t - * @i: required value - * - * Atomically sets the value of @v to @i. - */ static __always_inline void arch_atomic_set(atomic_t *v, int i) { __WRITE_ONCE(v->counter, i); } -/** - * arch_atomic_add - add integer to atomic variable - * @i: integer value to add - * @v: pointer of type atomic_t - * - * Atomically adds @i to @v. - */ static __always_inline void arch_atomic_add(int i, atomic_t *v) { asm volatile(LOCK_PREFIX "addl %1,%0" @@ -55,13 +35,6 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v) : "ir" (i) : "memory"); } -/** - * arch_atomic_sub - subtract integer from atomic variable - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v. - */ static __always_inline void arch_atomic_sub(int i, atomic_t *v) { asm volatile(LOCK_PREFIX "subl %1,%0" @@ -69,27 +42,12 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v) : "ir" (i) : "memory"); } -/** - * arch_atomic_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, e, "er", i); } #define arch_atomic_sub_and_test arch_atomic_sub_and_test -/** - * arch_atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. - */ static __always_inline void arch_atomic_inc(atomic_t *v) { asm volatile(LOCK_PREFIX "incl %0" @@ -97,12 +55,6 @@ static __always_inline void arch_atomic_inc(atomic_t *v) } #define arch_atomic_inc arch_atomic_inc -/** - * arch_atomic_dec - decrement atomic variable - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ static __always_inline void arch_atomic_dec(atomic_t *v) { asm volatile(LOCK_PREFIX "decl %0" @@ -110,69 +62,30 @@ static __always_inline void arch_atomic_dec(atomic_t *v) } #define arch_atomic_dec arch_atomic_dec -/** - * arch_atomic_dec_and_test - decrement and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. 
- */ static __always_inline bool arch_atomic_dec_and_test(atomic_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, e); } #define arch_atomic_dec_and_test arch_atomic_dec_and_test -/** - * arch_atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic_inc_and_test(atomic_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, e); } #define arch_atomic_inc_and_test arch_atomic_inc_and_test -/** - * arch_atomic_add_negative - add and test if negative - * @i: integer value to add - * @v: pointer of type atomic_t - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. - */ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, s, "er", i); } #define arch_atomic_add_negative arch_atomic_add_negative -/** - * arch_atomic_add_return - add integer and return - * @i: integer value to add - * @v: pointer of type atomic_t - * - * Atomically adds @i to @v and returns @i + @v - */ static __always_inline int arch_atomic_add_return(int i, atomic_t *v) { return i + xadd(&v->counter, i); } #define arch_atomic_add_return arch_atomic_add_return -/** - * arch_atomic_sub_return - subtract integer and return - * @v: pointer of type atomic_t - * @i: integer value to subtract - * - * Atomically subtracts @i from @v and returns @v - @i - */ static __always_inline int arch_atomic_sub_return(int i, atomic_t *v) { return arch_atomic_add_return(-i, v); diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 808b4eece251e..3486d91b8595f 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -61,30 +61,12 @@ ATOMIC64_DECL(add_unless); #undef __ATOMIC64_DECL #undef ATOMIC64_EXPORT -/** - * arch_atomic64_cmpxchg - cmpxchg atomic64 variable - * @v: pointer to type atomic64_t - * @o: expected value - * @n: new value - * - * Atomically sets @v to @n if it was equal to @o and returns - * the old value. - */ - static __always_inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n) { return arch_cmpxchg64(&v->counter, o, n); } #define arch_atomic64_cmpxchg arch_atomic64_cmpxchg -/** - * arch_atomic64_xchg - xchg atomic64 variable - * @v: pointer to type atomic64_t - * @n: value to assign - * - * Atomically xchgs the value of @v to @n and returns - * the old value. - */ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n) { s64 o; @@ -97,13 +79,6 @@ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n) } #define arch_atomic64_xchg arch_atomic64_xchg -/** - * arch_atomic64_set - set atomic64 variable - * @v: pointer to type atomic64_t - * @i: value to assign - * - * Atomically sets the value of @v to @n. - */ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i) { unsigned high = (unsigned)(i >> 32); @@ -113,12 +88,6 @@ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i) : "eax", "edx", "memory"); } -/** - * arch_atomic64_read - read atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically reads the value of @v and returns it. 
- */ static __always_inline s64 arch_atomic64_read(const atomic64_t *v) { s64 r; @@ -126,13 +95,6 @@ static __always_inline s64 arch_atomic64_read(const atomic64_t *v) return r; } -/** - * arch_atomic64_add_return - add and return - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v and returns @i + *@v - */ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) { alternative_atomic64(add_return, @@ -142,9 +104,6 @@ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) } #define arch_atomic64_add_return arch_atomic64_add_return -/* - * Other variants with different arithmetic operators: - */ static __always_inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v) { alternative_atomic64(sub_return, @@ -172,13 +131,6 @@ static __always_inline s64 arch_atomic64_dec_return(atomic64_t *v) } #define arch_atomic64_dec_return arch_atomic64_dec_return -/** - * arch_atomic64_add - add integer to atomic64 variable - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v. - */ static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v) { __alternative_atomic64(add, add_return, @@ -187,13 +139,6 @@ static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v) return i; } -/** - * arch_atomic64_sub - subtract the atomic64 variable - * @i: integer value to subtract - * @v: pointer to type atomic64_t - * - * Atomically subtracts @i from @v. - */ static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v) { __alternative_atomic64(sub, sub_return, @@ -202,12 +147,6 @@ static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v) return i; } -/** - * arch_atomic64_inc - increment atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically increments @v by 1. - */ static __always_inline void arch_atomic64_inc(atomic64_t *v) { __alternative_atomic64(inc, inc_return, /* no output */, @@ -215,12 +154,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v) } #define arch_atomic64_inc arch_atomic64_inc -/** - * arch_atomic64_dec - decrement atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically decrements @v by 1. - */ static __always_inline void arch_atomic64_dec(atomic64_t *v) { __alternative_atomic64(dec, dec_return, /* no output */, @@ -228,15 +161,6 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v) } #define arch_atomic64_dec arch_atomic64_dec -/** - * arch_atomic64_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns non-zero if the add was done, zero otherwise. - */ static __always_inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { unsigned low = (unsigned)u; diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index c496595bf6012..3165c0feedf74 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -10,37 +10,16 @@ #define ATOMIC64_INIT(i) { (i) } -/** - * arch_atomic64_read - read atomic64 variable - * @v: pointer of type atomic64_t - * - * Atomically reads the value of @v. - * Doesn't imply a read memory barrier. 
- */ static __always_inline s64 arch_atomic64_read(const atomic64_t *v) { return __READ_ONCE((v)->counter); } -/** - * arch_atomic64_set - set atomic64 variable - * @v: pointer to type atomic64_t - * @i: required value - * - * Atomically sets the value of @v to @i. - */ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i) { __WRITE_ONCE(v->counter, i); } -/** - * arch_atomic64_add - add integer to atomic64 variable - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v. - */ static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "addq %1,%0" @@ -48,13 +27,6 @@ static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v) : "er" (i), "m" (v->counter) : "memory"); } -/** - * arch_atomic64_sub - subtract the atomic64 variable - * @i: integer value to subtract - * @v: pointer to type atomic64_t - * - * Atomically subtracts @i from @v. - */ static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "subq %1,%0" @@ -62,27 +34,12 @@ static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v) : "er" (i), "m" (v->counter) : "memory"); } -/** - * arch_atomic64_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer to type atomic64_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i); } #define arch_atomic64_sub_and_test arch_atomic64_sub_and_test -/** - * arch_atomic64_inc - increment atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically increments @v by 1. - */ static __always_inline void arch_atomic64_inc(atomic64_t *v) { asm volatile(LOCK_PREFIX "incq %0" @@ -91,12 +48,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v) } #define arch_atomic64_inc arch_atomic64_inc -/** - * arch_atomic64_dec - decrement atomic64 variable - * @v: pointer to type atomic64_t - * - * Atomically decrements @v by 1. - */ static __always_inline void arch_atomic64_dec(atomic64_t *v) { asm volatile(LOCK_PREFIX "decq %0" @@ -105,56 +56,24 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v) } #define arch_atomic64_dec arch_atomic64_dec -/** - * arch_atomic64_dec_and_test - decrement and test - * @v: pointer to type atomic64_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ static __always_inline bool arch_atomic64_dec_and_test(atomic64_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, e); } #define arch_atomic64_dec_and_test arch_atomic64_dec_and_test -/** - * arch_atomic64_inc_and_test - increment and test - * @v: pointer to type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic64_inc_and_test(atomic64_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, e); } #define arch_atomic64_inc_and_test arch_atomic64_inc_and_test -/** - * arch_atomic64_add_negative - add and test if negative - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. 
- */ static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i); } #define arch_atomic64_add_negative arch_atomic64_add_negative -/** - * arch_atomic64_add_return - add and return - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v and returns @i + @v - */ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) { return i + xadd(&v->counter, i);