From patchwork Mon Jun 5 07:00:59 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 103105
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
	keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
	mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
	peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
	will@kernel.org
Subject: [PATCH v2 02/27] locking/atomic: remove fallback comments
Date: Mon, 5 Jun 2023 08:00:59 +0100
Message-Id: <20230605070124.3741859-3-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Currently a subset of the fallback templates has kerneldoc comments, resulting in a haphazard set of generated kerneldoc comments, as only some operations have fallback templates to begin with.

We'd like to generate more consistent kerneldoc comments, and to do so we'll need to restructure the way the fallback code is generated.

To minimize churn and to make it easier to restructure the fallback code, this patch removes the existing kerneldoc comments from the fallback templates. We can add new kerneldoc comments in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
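For context on what these templates are: include/linux/atomic/atomic-arch-fallback.h is a generated header, produced from the per-operation templates under scripts/atomic/fallbacks/. Each template is a shell fragment that emits C code through a heredoc, with the generation script substituting the atomic type, the integer type, and the ordering suffix. The sketch below illustrates the shape of that expansion for the add_negative template touched by this patch; the standalone driver and the variable assignments are illustrative approximations, not the actual gen-atomic-fallback.sh machinery.

#!/bin/sh
# Illustrative sketch of how a fallback template expands. The kernel's real
# driver is scripts/atomic/gen-atomic-fallback.sh; this approximation just
# substitutes the template variables by hand for one instantiation.

atomic=atomic   # "atomic" for 32-bit ops, "atomic64" for 64-bit ops
int=int         # "int" for atomic_t, "s64" for atomic64_t
order=          # "" / "_acquire" / "_release" / "_relaxed"

# Body of a template like scripts/atomic/fallbacks/add_negative, minus the
# kerneldoc block this patch removes:
cat <<EOF
static __always_inline bool
arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
{
	return arch_${atomic}_add_return${order}(i, v) < 0;
}
EOF

With the kerneldoc gone from the template, each instantiation is emitted without a comment block, which is what makes the later, uniform comment generation straightforward.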
---
 include/linux/atomic/atomic-arch-fallback.h | 166 +-------------------
 scripts/atomic/fallbacks/add_negative       |   8 -
 scripts/atomic/fallbacks/add_unless         |   9 --
 scripts/atomic/fallbacks/dec_and_test       |   8 -
 scripts/atomic/fallbacks/fetch_add_unless   |   9 --
 scripts/atomic/fallbacks/inc_and_test       |   8 -
 scripts/atomic/fallbacks/inc_not_zero       |   7 -
 scripts/atomic/fallbacks/sub_and_test       |   9 --
 8 files changed, 1 insertion(+), 223 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1722ddb6f17e0..3ce4cb5e790c5 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1272,15 +1272,6 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 #endif /* arch_atomic_try_cmpxchg_relaxed */
 
 #ifndef arch_atomic_sub_and_test
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic_sub_and_test(int i, atomic_t *v)
 {
@@ -1290,14 +1281,6 @@ arch_atomic_sub_and_test(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_dec_and_test
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_atomic_dec_and_test(atomic_t *v)
 {
@@ -1307,14 +1290,6 @@ arch_atomic_dec_and_test(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_inc_and_test
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic_inc_and_test(atomic_t *v)
 {
@@ -1331,14 +1306,6 @@ arch_atomic_inc_and_test(atomic_t *v)
 #endif /* arch_atomic_add_negative */
 
 #ifndef arch_atomic_add_negative
-/**
- * arch_atomic_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative(int i, atomic_t *v)
 {
@@ -1348,14 +1315,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative_acquire
-/**
- * arch_atomic_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_acquire(int i, atomic_t *v)
 {
@@ -1365,14 +1324,6 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative_release
-/**
- * arch_atomic_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_release(int i, atomic_t *v)
 {
@@ -1382,14 +1333,6 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative_relaxed
-/**
- * arch_atomic_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_relaxed(int i, atomic_t *v)
 {
@@ -1437,15 +1380,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
 #endif /* arch_atomic_add_negative_relaxed */
 
 #ifndef arch_atomic_fetch_add_unless
-/**
- * arch_atomic_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline int
 arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
@@ -1462,15 +1396,6 @@ arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #endif
 
 #ifndef arch_atomic_add_unless
-/**
- * arch_atomic_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_atomic_add_unless(atomic_t *v, int a, int u)
 {
@@ -1480,13 +1405,6 @@ arch_atomic_add_unless(atomic_t *v, int a, int u)
 #endif
 
 #ifndef arch_atomic_inc_not_zero
-/**
- * arch_atomic_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_atomic_inc_not_zero(atomic_t *v)
 {
@@ -2488,15 +2406,6 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 #endif /* arch_atomic64_try_cmpxchg_relaxed */
 
 #ifndef arch_atomic64_sub_and_test
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
@@ -2506,14 +2415,6 @@ arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_and_test
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer of type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_atomic64_dec_and_test(atomic64_t *v)
 {
@@ -2523,14 +2424,6 @@ arch_atomic64_dec_and_test(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_inc_and_test
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic64_inc_and_test(atomic64_t *v)
 {
@@ -2547,14 +2440,6 @@ arch_atomic64_inc_and_test(atomic64_t *v)
 #endif /* arch_atomic64_add_negative */
 
 #ifndef arch_atomic64_add_negative
-/**
- * arch_atomic64_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
@@ -2564,14 +2449,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative_acquire
-/**
- * arch_atomic64_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 {
@@ -2581,14 +2458,6 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative_release
-/**
- * arch_atomic64_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 {
@@ -2598,14 +2467,6 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative_relaxed
-/**
- * arch_atomic64_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
 {
@@ -2653,15 +2514,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_add_negative_relaxed */
 
 #ifndef arch_atomic64_fetch_add_unless
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline s64
 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
@@ -2678,15 +2530,6 @@ arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 #endif
 
 #ifndef arch_atomic64_add_unless
-/**
- * arch_atomic64_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
@@ -2696,13 +2539,6 @@ arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 #endif
 
 #ifndef arch_atomic64_inc_not_zero
-/**
- * arch_atomic64_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_atomic64_inc_not_zero(atomic64_t *v)
 {
@@ -2761,4 +2597,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 52dfc6fe4a2e7234bbd2aa3e16a377c1db793a53
+// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e5980abf5904e..d0bd2dfbb244c 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,12 +1,4 @@
 cat <
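A note on the final hunk in atomic-arch-fallback.h above: the trailing "// <sha1>" line is a checksum that the atomic generation scripts embed in each generated header so that tooling can detect hand-edited or stale copies, which is why it changes here even though the edit is purely mechanical. Assuming the usual workflow for these generated files, regenerating the headers after a template change looks roughly like the sketch below; treat it as an illustration of the workflow rather than exact build steps.

# Run from the top of the kernel tree after editing a template under
# scripts/atomic/fallbacks/. This regenerates the headers under
# include/linux/atomic/ and refreshes the trailing checksum comment.
./scripts/atomic/gen-atomics.sh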