From patchwork Sat Dec 24 21:50:08 2022
X-Patchwork-Submitter: Deepak R Varma
X-Patchwork-Id: 36451
Date: Sun, 25 Dec 2022 03:20:08 +0530
From: Deepak R Varma
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
	David Airlie, Daniel Vetter, intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Cc: Saurabh Singh Sengar, Praveen Kumar, Deepak R Varma
Subject: [PATCH] drm/i915: convert i915_active.count from atomic_t to refcount_t

The refcount_* APIs are designed to address known issues with the
atomic_t APIs for reference counting. They provide the following
distinct advantages:
	- protect the reference counters from overflow/underflow
	- avoid use-after-free errors
	- provide improved memory ordering guarantee schemes
	- offer a neater and safer API

Hence, convert the atomic_t count member variable and the associated
atomic_*() API calls to the equivalent refcount_t type and refcount_*()
API calls.

This patch proposal addresses the following warning generated by the
atomic_as_refcounter.cocci coccinelle script:

	atomic_add_unless

Signed-off-by: Deepak R Varma
---
Please note: Proposed changes are compile tested only.
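The sketch below is not part of the diff; it only illustrates the
atomic_t -> refcount_t mapping that the patch applies. The struct and
helper names are made up for illustration, and only the refcount_*()
calls are the real <linux/refcount.h> API:

	#include <linux/refcount.h>
	#include <linux/spinlock.h>

	/* Hypothetical example object, not from the i915 code. */
	struct example_ref {
		refcount_t count;
		spinlock_t lock;
	};

	static void example_init(struct example_ref *er)
	{
		spin_lock_init(&er->lock);
		refcount_set(&er->count, 0);	/* was: atomic_set(&c, 0) */
	}

	/* Take a reference only while the object is still active. */
	static bool example_get_if_busy(struct example_ref *er)
	{
		/* was: atomic_add_unless(&c, 1, 0) */
		return refcount_add_not_zero(1, &er->count);
	}

	static void example_put(struct example_ref *er)
	{
		unsigned long flags;

		/* was: atomic_add_unless(&c, -1, 1) */
		if (refcount_dec_not_one(&er->count))
			return;

		/*
		 * was: atomic_dec_and_lock_irqsave(&c, &lock, flags);
		 * note that the refcount_t variant takes &flags, which is
		 * why the hunk in __active_retire() below changes flags
		 * to &flags.
		 */
		if (!refcount_dec_and_lock_irqsave(&er->count, &er->lock, &flags))
			return;

		/* Last reference dropped: teardown runs under the lock. */
		spin_unlock_irqrestore(&er->lock, flags);
	}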
 drivers/gpu/drm/i915/i915_active.c       | 24 +++++++++++++-----------
 drivers/gpu/drm/i915/i915_active.h       |  6 +++---
 drivers/gpu/drm/i915/i915_active_types.h |  4 ++--
 3 files changed, 18 insertions(+), 16 deletions(-)

--
2.34.1

diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 7412abf166a8..4a8d873b4347 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -133,7 +133,7 @@ __active_retire(struct i915_active *ref)
 	GEM_BUG_ON(i915_active_is_idle(ref));
 
 	/* return the unused nodes to our slabcache -- flushing the allocator */
-	if (!atomic_dec_and_lock_irqsave(&ref->count, &ref->tree_lock, flags))
+	if (!refcount_dec_and_lock_irqsave(&ref->count, &ref->tree_lock, &flags))
 		return;
 
 	GEM_BUG_ON(rcu_access_pointer(ref->excl.fence));
@@ -179,8 +179,8 @@ active_work(struct work_struct *wrk)
 {
 	struct i915_active *ref = container_of(wrk, typeof(*ref), work);
 
-	GEM_BUG_ON(!atomic_read(&ref->count));
-	if (atomic_add_unless(&ref->count, -1, 1))
+	GEM_BUG_ON(!refcount_read(&ref->count));
+	if (refcount_dec_not_one(&ref->count))
 		return;
 
 	__active_retire(ref);
@@ -189,8 +189,8 @@ active_work(struct work_struct *wrk)
 
 static void active_retire(struct i915_active *ref)
 {
-	GEM_BUG_ON(!atomic_read(&ref->count));
-	if (atomic_add_unless(&ref->count, -1, 1))
+	GEM_BUG_ON(!refcount_read(&ref->count));
+	if (refcount_dec_not_one(&ref->count))
 		return;
 
 	if (ref->flags & I915_ACTIVE_RETIRE_SLEEPS) {
@@ -354,7 +354,7 @@ void __i915_active_init(struct i915_active *ref,
 	ref->cache = NULL;
 
 	init_llist_head(&ref->preallocated_barriers);
-	atomic_set(&ref->count, 0);
+	refcount_set(&ref->count, 0);
 	__mutex_init(&ref->mutex, "i915_active", mkey);
 	__i915_active_fence_init(&ref->excl, NULL, excl_retire);
 	INIT_WORK(&ref->work, active_work);
@@ -445,7 +445,7 @@ int i915_active_add_request(struct i915_active *ref, struct i915_request *rq)
 
 	if (replace_barrier(ref, active)) {
 		RCU_INIT_POINTER(active->fence, NULL);
-		atomic_dec(&ref->count);
+		refcount_dec(&ref->count);
 	}
 	if (!__i915_active_fence_set(active, fence))
 		__i915_active_acquire(ref);
@@ -488,14 +488,16 @@ i915_active_set_exclusive(struct i915_active *ref, struct dma_fence *f)
 
 bool i915_active_acquire_if_busy(struct i915_active *ref)
 {
 	debug_active_assert(ref);
-	return atomic_add_unless(&ref->count, 1, 0);
+	return refcount_add_not_zero(1, &ref->count);
 }
 
 static void __i915_active_activate(struct i915_active *ref)
 {
 	spin_lock_irq(&ref->tree_lock); /* __active_retire() */
-	if (!atomic_fetch_inc(&ref->count))
+	if (!refcount_inc_not_zero(&ref->count)) {
+		refcount_inc(&ref->count);
 		debug_active_activate(ref);
+	}
 	spin_unlock_irq(&ref->tree_lock);
 }
@@ -757,7 +759,7 @@ int i915_sw_fence_await_active(struct i915_sw_fence *fence,
 
 void i915_active_fini(struct i915_active *ref)
 {
 	debug_active_fini(ref);
-	GEM_BUG_ON(atomic_read(&ref->count));
+	GEM_BUG_ON(refcount_read(&ref->count));
 	GEM_BUG_ON(work_pending(&ref->work));
 	mutex_destroy(&ref->mutex);
@@ -927,7 +929,7 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 
 		first = first->next;
 
-		atomic_dec(&ref->count);
+		refcount_dec(&ref->count);
 		intel_engine_pm_put(barrier_to_engine(node));
 
 		kmem_cache_free(slab_cache, node);
diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
index 7eb44132183a..116c7c28466a 100644
--- a/drivers/gpu/drm/i915/i915_active.h
+++ b/drivers/gpu/drm/i915/i915_active.h
@@ -193,14 +193,14 @@ void i915_active_release(struct i915_active *ref);
 static inline void
 __i915_active_acquire(struct i915_active *ref)
 {
-	GEM_BUG_ON(!atomic_read(&ref->count));
-	atomic_inc(&ref->count);
+	GEM_BUG_ON(!refcount_read(&ref->count));
+	refcount_inc(&ref->count);
 }
 
 static inline bool
 i915_active_is_idle(const struct i915_active *ref)
 {
-	return !atomic_read(&ref->count);
+	return !refcount_read(&ref->count);
 }
 
 void i915_active_fini(struct i915_active *ref);
diff --git a/drivers/gpu/drm/i915/i915_active_types.h b/drivers/gpu/drm/i915/i915_active_types.h
index b02a78ac87db..152a3a25d9f7 100644
--- a/drivers/gpu/drm/i915/i915_active_types.h
+++ b/drivers/gpu/drm/i915/i915_active_types.h
@@ -7,7 +7,7 @@
 #ifndef _I915_ACTIVE_TYPES_H_
 #define _I915_ACTIVE_TYPES_H_
 
-#include <linux/atomic.h>
+#include <linux/refcount.h>
 #include <linux/dma-fence.h>
 #include <linux/llist.h>
 #include <linux/mutex.h>
@@ -23,7 +23,7 @@ struct i915_active_fence {
 struct active_node;
 
 struct i915_active {
-	atomic_t count;
+	refcount_t count;
 	struct mutex mutex;
 
 	spinlock_t tree_lock;