From patchwork Mon Apr 24 22:05:38 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 87182
From: Andrzej Hajda
Date: Tue, 25 Apr 2023 00:05:38 +0200
Subject: [PATCH v8 1/7] lib/ref_tracker: add unlocked leak print helper
Message-Id: <20230224-track_gt-v8-1-4b6517e61be6@intel.com>
References: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
In-Reply-To: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter, Eric Dumazet
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Jakub Kicinski, Dmitry Vyukov, "David S. Miller", Andi Shyti, Andrzej Hajda

To have reliable detection of leaks, the caller must be able to check both the
tracked counter and the leaks under the same lock. dir.lock is the natural
candidate for such a lock, and the unlocked print helper can be called with
this lock taken. As a bonus, we can reuse this helper in ref_tracker_dir_exit.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
Reviewed-by: Eric Dumazet
---
 include/linux/ref_tracker.h |  8 ++++++
 lib/ref_tracker.c           | 66 ++++++++++++++++++++++++++-------------------
 2 files changed, 46 insertions(+), 28 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index 9ca353ab712b5e..87a92f2bec1b88 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -36,6 +36,9 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
 
 void ref_tracker_dir_exit(struct ref_tracker_dir *dir);
 
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit);
+
 void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 			   unsigned int display_limit);
 
@@ -56,6 +59,11 @@ static inline void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 {
 }
 
+static inline void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+						 unsigned int display_limit)
+{
+}
+
 static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 					 unsigned int display_limit)
 {
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index dc7b14aa3431e2..d4eb0929af8f96 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -14,6 +14,38 @@ struct ref_tracker {
 	depot_stack_handle_t	free_stack_handle;
 };
 
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit)
+{
+	struct ref_tracker *tracker;
+	unsigned int i = 0;
+
+	lockdep_assert_held(&dir->lock);
+
+	list_for_each_entry(tracker, &dir->list, head) {
+		if (i < display_limit) {
+			pr_err("leaked reference.\n");
+			if (tracker->alloc_stack_handle)
+				stack_depot_print(tracker->alloc_stack_handle);
+			i++;
+		} else {
+			break;
+		}
+	}
+}
+EXPORT_SYMBOL(ref_tracker_dir_print_locked);
+
+void ref_tracker_dir_print(struct ref_tracker_dir *dir,
+			   unsigned int display_limit)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dir->lock, flags);
+	ref_tracker_dir_print_locked(dir, display_limit);
+	spin_unlock_irqrestore(&dir->lock, flags);
+}
+EXPORT_SYMBOL(ref_tracker_dir_print);
+
 void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 {
 	struct ref_tracker *tracker, *n;
@@ -27,13 +59,13 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 		kfree(tracker);
 		dir->quarantine_avail++;
 	}
-	list_for_each_entry_safe(tracker, n, &dir->list, head) {
-		pr_err("leaked reference.\n");
-		if (tracker->alloc_stack_handle)
-			stack_depot_print(tracker->alloc_stack_handle);
+	if (!list_empty(&dir->list)) {
+		ref_tracker_dir_print_locked(dir, 16);
 		leak = true;
-		list_del(&tracker->head);
-		kfree(tracker);
+		list_for_each_entry_safe(tracker, n, &dir->list, head) {
+			list_del(&tracker->head);
+			kfree(tracker);
+		}
 	}
 	spin_unlock_irqrestore(&dir->lock, flags);
 	WARN_ON_ONCE(leak);
@@ -42,28 +74,6 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 }
 EXPORT_SYMBOL(ref_tracker_dir_exit);
 
-void ref_tracker_dir_print(struct ref_tracker_dir *dir,
-			   unsigned int display_limit)
-{
-	struct ref_tracker *tracker;
-	unsigned long flags;
-	unsigned int i = 0;
-
-	spin_lock_irqsave(&dir->lock, flags);
-	list_for_each_entry(tracker, &dir->list, head) {
-		if (i < display_limit) {
-			pr_err("leaked reference.\n");
-			if (tracker->alloc_stack_handle)
-				stack_depot_print(tracker->alloc_stack_handle);
-			i++;
-		} else {
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&dir->lock, flags);
-}
-EXPORT_SYMBOL(ref_tracker_dir_print);
-
 int ref_tracker_alloc(struct ref_tracker_dir *dir,
 		      struct ref_tracker **trackerp,
 		      gfp_t gfp)
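
For illustration only (not part of the patch): a minimal sketch of the usage
pattern the commit message describes, where a hypothetical subsystem checks its
own counter and prints the leak report under the same dir.lock. The struct foo,
its "active" counter and the display limit of 16 are made-up names, and the
sketch assumes CONFIG_REF_TRACKER is enabled.

/* Hypothetical container pairing a reference counter with a tracker dir. */
struct foo {
	refcount_t active;
	struct ref_tracker_dir refcnt_tracker;
};

static void foo_check_leaks(struct foo *foo)
{
	unsigned long flags;

	/* dir.lock keeps the counter and the leak list consistent with
	 * each other while we decide whether to print. */
	spin_lock_irqsave(&foo->refcnt_tracker.lock, flags);
	if (refcount_read(&foo->active) > 1)
		ref_tracker_dir_print_locked(&foo->refcnt_tracker, 16);
	spin_unlock_irqrestore(&foo->refcnt_tracker.lock, flags);
}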

From patchwork Mon Apr 24 22:05:39 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 87181
From: Andrzej Hajda
Date: Tue, 25 Apr 2023 00:05:39 +0200
Subject: [PATCH v8 2/7] lib/ref_tracker: improve printing stats
Message-Id: <20230224-track_gt-v8-2-4b6517e61be6@intel.com>
References: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
In-Reply-To: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter, Eric Dumazet
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Jakub Kicinski, Dmitry Vyukov, "David S. Miller", Andi Shyti, Andrzej Hajda

In case the library is tracking a busy subsystem, simply printing the stack for
every active reference will spam the log with long, hard to read, redundant
stack traces. To improve readability the following changes have been made:
- reports are printed per stack_handle - the log is more compact,
- a display name has been added to ref_tracker_dir - it will differentiate
  multiple subsystems,
- the stack trace is printed indented, in the same printk call,
- info about dropped references is printed as well.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
Reviewed-by: Eric Dumazet
---
 include/linux/ref_tracker.h |  9 ++++-
 lib/ref_tracker.c           | 90 +++++++++++++++++++++++++++++++++++++++------
 lib/test_ref_tracker.c      |  2 +-
 net/core/dev.c              |  2 +-
 net/core/net_namespace.c    |  4 +-
 5 files changed, 90 insertions(+), 17 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index 87a92f2bec1b88..19a69e7809d6c1 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -17,12 +17,15 @@ struct ref_tracker_dir {
 	bool			dead;
 	struct list_head	list; /* List of active trackers */
 	struct list_head	quarantine; /* List of dead trackers */
+	char			name[32];
 #endif
 };
 
 #ifdef CONFIG_REF_TRACKER
+
 static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
-					unsigned int quarantine_count)
+					unsigned int quarantine_count,
+					const char *name)
 {
 	INIT_LIST_HEAD(&dir->list);
 	INIT_LIST_HEAD(&dir->quarantine);
@@ -31,6 +34,7 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
 	dir->dead = false;
 	refcount_set(&dir->untracked, 1);
 	refcount_set(&dir->no_tracker, 1);
+	strscpy(dir->name, name, sizeof(dir->name));
 	stack_depot_init();
 }
 
@@ -51,7 +55,8 @@ int ref_tracker_free(struct ref_tracker_dir *dir,
 #else /* CONFIG_REF_TRACKER */
 
 static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
-					unsigned int quarantine_count)
+					unsigned int quarantine_count,
+					const char *name)
 {
 }
 
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index d4eb0929af8f96..2ffe79c90c1771 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -1,11 +1,16 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
+
+#define pr_fmt(fmt) "ref_tracker: " fmt
+
 #include
+#include
 #include
 #include
 #include
 #include
 
 #define REF_TRACKER_STACK_ENTRIES 16
+#define STACK_BUF_SIZE 1024
 
 struct ref_tracker {
 	struct list_head	head;   /* anchor into dir->list or dir->quarantine */
@@ -14,24 +19,87 @@ struct ref_tracker {
 	depot_stack_handle_t	free_stack_handle;
 };
 
-void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
-				  unsigned int display_limit)
+struct ref_tracker_dir_stats {
+	int total;
+	int count;
+	struct {
+		depot_stack_handle_t stack_handle;
+		unsigned int count;
+	} stacks[];
+};
+
+static struct ref_tracker_dir_stats *
+ref_tracker_get_stats(struct ref_tracker_dir *dir, unsigned int limit)
 {
+	struct ref_tracker_dir_stats *stats;
 	struct ref_tracker *tracker;
-	unsigned int i = 0;
-
lockdep_assert_held(&dir->lock); + stats = kmalloc(struct_size(stats, stacks, limit), + GFP_NOWAIT | __GFP_NOWARN); + if (!stats) + return ERR_PTR(-ENOMEM); + stats->total = 0; + stats->count = 0; list_for_each_entry(tracker, &dir->list, head) { - if (i < display_limit) { - pr_err("leaked reference.\n"); - if (tracker->alloc_stack_handle) - stack_depot_print(tracker->alloc_stack_handle); - i++; - } else { - break; + depot_stack_handle_t stack = tracker->alloc_stack_handle; + int i; + + ++stats->total; + for (i = 0; i < stats->count; ++i) + if (stats->stacks[i].stack_handle == stack) + break; + if (i >= limit) + continue; + if (i >= stats->count) { + stats->stacks[i].stack_handle = stack; + stats->stacks[i].count = 0; + ++stats->count; } + ++stats->stacks[i].count; + } + + return stats; +} + +void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, + unsigned int display_limit) +{ + struct ref_tracker_dir_stats *stats; + unsigned int i = 0, skipped; + depot_stack_handle_t stack; + char *sbuf; + + lockdep_assert_held(&dir->lock); + + if (list_empty(&dir->list)) + return; + + stats = ref_tracker_get_stats(dir, display_limit); + if (IS_ERR(stats)) { + pr_err("%s@%pK: couldn't get stats, error %pe\n", + dir->name, dir, stats); + return; } + + sbuf = kmalloc(STACK_BUF_SIZE, GFP_NOWAIT | __GFP_NOWARN); + + for (i = 0, skipped = stats->total; i < stats->count; ++i) { + stack = stats->stacks[i].stack_handle; + if (sbuf && !stack_depot_snprint(stack, sbuf, STACK_BUF_SIZE, 4)) + sbuf[0] = 0; + pr_err("%s@%pK has %d/%d users at\n%s\n", dir->name, dir, + stats->stacks[i].count, stats->total, sbuf); + skipped -= stats->stacks[i].count; + } + + if (skipped) + pr_err("%s@%pK skipped reports about %d/%d users.\n", + dir->name, dir, skipped, stats->total); + + kfree(sbuf); + + kfree(stats); } EXPORT_SYMBOL(ref_tracker_dir_print_locked); diff --git a/lib/test_ref_tracker.c b/lib/test_ref_tracker.c index 19d7dec70cc62f..49970a7c96f3f4 100644 --- a/lib/test_ref_tracker.c +++ b/lib/test_ref_tracker.c @@ -64,7 +64,7 @@ static int __init test_ref_tracker_init(void) { int i; - ref_tracker_dir_init(&ref_dir, 100); + ref_tracker_dir_init(&ref_dir, 100, "selftest"); timer_setup(&test_ref_tracker_timer, test_ref_tracker_timer_func, 0); mod_timer(&test_ref_tracker_timer, jiffies + 1); diff --git a/net/core/dev.c b/net/core/dev.c index 1488f700bf819a..cd0de2bca949b8 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -10592,7 +10592,7 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name, dev = PTR_ALIGN(p, NETDEV_ALIGN); dev->padded = (char *)dev - (char *)p; - ref_tracker_dir_init(&dev->refcnt_tracker, 128); + ref_tracker_dir_init(&dev->refcnt_tracker, 128, name); #ifdef CONFIG_PCPU_DEV_REFCNT dev->pcpu_refcnt = alloc_percpu(int); if (!dev->pcpu_refcnt) diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c index 7b69cf882b8efd..67f65dd3bb674c 100644 --- a/net/core/net_namespace.c +++ b/net/core/net_namespace.c @@ -307,7 +307,7 @@ EXPORT_SYMBOL_GPL(get_net_ns_by_id); /* init code that must occur even if setup_net() is not called. 
 */
 static __net_init void preinit_net(struct net *net)
 {
-	ref_tracker_dir_init(&net->notrefcnt_tracker, 128);
+	ref_tracker_dir_init(&net->notrefcnt_tracker, 128, "net notrefcnt");
 }
 
 /*
@@ -321,7 +321,7 @@ static __net_init int setup_net(struct net *net, struct user_namespace *user_ns)
 	LIST_HEAD(net_exit_list);
 
 	refcount_set(&net->ns.count, 1);
-	ref_tracker_dir_init(&net->refcnt_tracker, 128);
+	ref_tracker_dir_init(&net->refcnt_tracker, 128, "net refcnt");
 	refcount_set(&net->passive, 1);
 	get_random_bytes(&net->hash_mix, sizeof(u32));
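
As a usage illustration (not part of the patch), the sketch below passes the
new display name to ref_tracker_dir_init(), mirroring what the net/core changes
above do; the foo_device structure and the quarantine depth of 16 are
hypothetical.

/* Hypothetical subsystem object embedding a named tracker directory. */
struct foo_device {
	struct ref_tracker_dir refcnt_tracker;
};

static void foo_device_init(struct foo_device *foo, const char *devname)
{
	/* The name (new third argument) is printed in every report, so
	 * leaks from different subsystems can be told apart in the log. */
	ref_tracker_dir_init(&foo->refcnt_tracker, 16, devname);
}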

From patchwork Mon Apr 24 22:05:40 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 87180
From: Andrzej Hajda
Date: Tue, 25 Apr 2023 00:05:40 +0200
Subject: [PATCH v8 3/7] lib/ref_tracker: add printing to memory buffer
Message-Id: <20230224-track_gt-v8-3-4b6517e61be6@intel.com>
References: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
In-Reply-To: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter, Eric Dumazet
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Jakub Kicinski, Dmitry Vyukov, "David S. Miller", Andi Shyti, Andrzej Hajda

Similar to stack_(depot|trace)_snprint, the patch adds a helper for printing
the stats to a memory buffer. It will be helpful in case of debugfs.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
Reviewed-by: Eric Dumazet
---
 include/linux/ref_tracker.h |  8 +++++++
 lib/ref_tracker.c           | 56 ++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 56 insertions(+), 8 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index 19a69e7809d6c1..8eac4f3d52547c 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -46,6 +46,8 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
 void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 			   unsigned int display_limit);
 
+int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, char *buf, size_t size);
+
 int ref_tracker_alloc(struct ref_tracker_dir *dir,
 		      struct ref_tracker **trackerp,
 		      gfp_t gfp);
@@ -74,6 +76,12 @@ static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 {
 }
 
+static inline int ref_tracker_dir_snprint(struct ref_tracker_dir *dir,
+					  char *buf, size_t size)
+{
+	return 0;
+}
+
 static inline int ref_tracker_alloc(struct ref_tracker_dir *dir,
 				    struct ref_tracker **trackerp,
 				    gfp_t gfp)
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index 2ffe79c90c1771..cce4614b07940f 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -62,8 +62,27 @@ ref_tracker_get_stats(struct ref_tracker_dir *dir, unsigned int limit)
 	return stats;
 }
 
-void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
-				  unsigned int display_limit)
+struct ostream {
+	char *buf;
+	int size, used;
+};
+
+#define pr_ostream(stream, fmt, args...) \
+({ \
+	struct ostream *_s = (stream); \
+\
+	if (!_s->buf) { \
+		pr_err(fmt, ##args); \
+	} else { \
+		int ret, len = _s->size - _s->used; \
+		ret = snprintf(_s->buf + _s->used, len, pr_fmt(fmt), ##args); \
+		_s->used += min(ret, len); \
+	} \
+})
+
+static void
+__ref_tracker_dir_pr_ostream(struct ref_tracker_dir *dir,
+			     unsigned int display_limit, struct ostream *s)
 {
 	struct ref_tracker_dir_stats *stats;
 	unsigned int i = 0, skipped;
@@ -77,8 +96,8 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
 
 	stats = ref_tracker_get_stats(dir, display_limit);
 	if (IS_ERR(stats)) {
-		pr_err("%s@%pK: couldn't get stats, error %pe\n",
-		       dir->name, dir, stats);
+		pr_ostream(s, "%s@%pK: couldn't get stats, error %pe\n",
+			   dir->name, dir, stats);
 		return;
 	}
 
@@ -88,19 +107,27 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
 		stack = stats->stacks[i].stack_handle;
 		if (sbuf && !stack_depot_snprint(stack, sbuf, STACK_BUF_SIZE, 4))
 			sbuf[0] = 0;
-		pr_err("%s@%pK has %d/%d users at\n%s\n", dir->name, dir,
-		       stats->stacks[i].count, stats->total, sbuf);
+		pr_ostream(s, "%s@%pK has %d/%d users at\n%s\n", dir->name, dir,
+			   stats->stacks[i].count, stats->total, sbuf);
 		skipped -= stats->stacks[i].count;
 	}
 
 	if (skipped)
-		pr_err("%s@%pK skipped reports about %d/%d users.\n",
-		       dir->name, dir, skipped, stats->total);
+		pr_ostream(s, "%s@%pK skipped reports about %d/%d users.\n",
+			   dir->name, dir, skipped, stats->total);
 
 	kfree(sbuf);
 
 	kfree(stats);
 }
+
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit)
+{
+	struct ostream os = {};
+
+	__ref_tracker_dir_pr_ostream(dir, display_limit, &os);
+}
 EXPORT_SYMBOL(ref_tracker_dir_print_locked);
 
 void ref_tracker_dir_print(struct ref_tracker_dir *dir,
@@ -114,6 +141,19 @@ void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 }
 EXPORT_SYMBOL(ref_tracker_dir_print);
 
+int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, char *buf, size_t size)
+{
+	struct ostream os = { .buf = buf, .size = size };
+	unsigned long flags;
+
+	spin_lock_irqsave(&dir->lock, flags);
+	__ref_tracker_dir_pr_ostream(dir, 16, &os);
+	spin_unlock_irqrestore(&dir->lock, flags);
+
+	return os.used;
+}
+EXPORT_SYMBOL(ref_tracker_dir_snprint);
+
 void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 {
 	struct ref_tracker *tracker, *n;
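
A plausible consumer of the new helper, sketched here purely for illustration
(the commit message names debugfs as the use case): dump the report into a
buffer from a seq_file show handler. foo_device, the "refs" file name and the
PAGE_SIZE buffer are assumptions, not part of the patch.

/* Hedged sketch of a debugfs reader built on ref_tracker_dir_snprint(). */
static int foo_refs_show(struct seq_file *m, void *unused)
{
	struct foo_device *foo = m->private;
	char *buf;

	buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* The helper serializes against alloc/free via dir.lock internally. */
	if (ref_tracker_dir_snprint(&foo->refcnt_tracker, buf, PAGE_SIZE))
		seq_puts(m, buf);

	kfree(buf);
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(foo_refs);

/* Registration would look roughly like:
 *   debugfs_create_file("refs", 0444, parent, foo, &foo_refs_fops);
 */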

From patchwork Mon Apr 24 22:05:41 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 87185
From: Andrzej Hajda
Date: Tue, 25 Apr 2023 00:05:41 +0200
Subject: [PATCH v8 4/7] lib/ref_tracker: remove warnings in case of allocation failure
Message-Id: <20230224-track_gt-v8-4-4b6517e61be6@intel.com>
References: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
In-Reply-To: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter, Eric Dumazet
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Jakub Kicinski, Dmitry Vyukov, "David S. Miller", Andi Shyti, Andrzej Hajda

The library can handle allocation failures, so to avoid allocation warnings
__GFP_NOWARN has been added everywhere. Moreover, GFP_ATOMIC has been replaced
with GFP_NOWAIT in case of the stack allocation on the tracker free call.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
Reviewed-by: Eric Dumazet
---
 lib/ref_tracker.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index cce4614b07940f..cf5609b1ca7936 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -189,7 +189,7 @@ int ref_tracker_alloc(struct ref_tracker_dir *dir,
 	unsigned long entries[REF_TRACKER_STACK_ENTRIES];
 	struct ref_tracker *tracker;
 	unsigned int nr_entries;
-	gfp_t gfp_mask = gfp;
+	gfp_t gfp_mask = gfp | __GFP_NOWARN;
 	unsigned long flags;
 
 	WARN_ON_ONCE(dir->dead);
@@ -237,7 +237,8 @@ int ref_tracker_free(struct ref_tracker_dir *dir,
 		return -EEXIST;
 	}
 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
-	stack_handle = stack_depot_save(entries, nr_entries, GFP_ATOMIC);
+	stack_handle = stack_depot_save(entries, nr_entries,
+					GFP_NOWAIT | __GFP_NOWARN);
 
 	spin_lock_irqsave(&dir->lock, flags);
 	if (tracker->dead) {
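
For context, a hedged illustration (not from the patch) of the flag combination
the library now standardizes on: GFP_NOWAIT never sleeps, which suits tracker
paths that may run in atomic context, and __GFP_NOWARN suppresses the failure
splat because the library degrades gracefully when an allocation fails. The
helper name below is hypothetical; stack_trace_save() and stack_depot_save()
are the interfaces the library already uses.

/* Hypothetical helper: opportunistically capture a stack trace without
 * sleeping and without warning if memory is momentarily unavailable. */
static depot_stack_handle_t foo_save_stack(void)
{
	unsigned long entries[16];
	unsigned int nr_entries;

	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
	/* Returns 0 on failure; callers must treat the handle as optional. */
	return stack_depot_save(entries, nr_entries, GFP_NOWAIT | __GFP_NOWARN);
}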

From patchwork Mon Apr 24 22:05:42 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 87179
From: Andrzej Hajda
Date: Tue, 25 Apr 2023 00:05:42 +0200
Subject: [PATCH v8 5/7] drm/i915: Correct type of wakeref variable
Message-Id: <20230224-track_gt-v8-5-4b6517e61be6@intel.com>
References: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
In-Reply-To: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter, Eric Dumazet
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Jakub Kicinski, Dmitry Vyukov, "David S. Miller", Andi Shyti, Andrzej Hajda

Wakeref has a dedicated type. The assumption that it will stay int-compatible
forever is incorrect.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index ee3e8352637f28..fe390d59929b02 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -3248,7 +3248,7 @@ static void destroyed_worker_func(struct work_struct *w)
 	struct intel_guc *guc = container_of(w, struct intel_guc,
 					     submission_state.destroyed_worker);
 	struct intel_gt *gt = guc_to_gt(guc);
-	int tmp;
+	intel_wakeref_t tmp;
 
 	with_intel_gt_pm(gt, tmp)
 		deregister_destroyed_contexts(guc);
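
A short, hedged illustration (not part of the patch) of the expected pattern:
the cookie returned by the runtime-PM get call has the opaque type
intel_wakeref_t, and once it is backed by a ref_tracker handle it is no longer
guaranteed to fit in an int, so it must be stored and passed back with the
proper type. The function name below is made up; intel_runtime_pm_get() and
intel_runtime_pm_put() are the existing i915 entry points.

static void foo_touch_hw(struct intel_runtime_pm *rpm)
{
	intel_wakeref_t wakeref;	/* not int */

	wakeref = intel_runtime_pm_get(rpm);
	/* ... access hardware that requires the device to be awake ... */
	intel_runtime_pm_put(rpm, wakeref);
}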

From patchwork Mon Apr 24 22:05:43 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 87177
From: Andrzej Hajda
Date: Tue, 25 Apr 2023 00:05:43 +0200
Subject: [PATCH v8 6/7] drm/i915: Replace custom intel runtime_pm tracker with ref_tracker library
Message-Id: <20230224-track_gt-v8-6-4b6517e61be6@intel.com>
References: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
In-Reply-To: <20230224-track_gt-v8-0-4b6517e61be6@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter, Eric Dumazet
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Jakub Kicinski, Dmitry Vyukov, "David S. Miller", Andi Shyti, Andrzej Hajda

Beside reusing existing code, the main advantage of ref_tracker is that it
tracks each wakeref instance separately. It also allows catching a double put.
On the other side we lose information about the first acquire and the last
release, but the advantages outweigh it.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
---
 drivers/gpu/drm/i915/Kconfig.debug                 |   4 +
 drivers/gpu/drm/i915/display/intel_display_power.c |   2 +-
 drivers/gpu/drm/i915/i915_driver.c                 |   2 +-
 drivers/gpu/drm/i915/intel_runtime_pm.c            | 221 ++-------------------
 drivers/gpu/drm/i915/intel_runtime_pm.h            |  11 +-
 drivers/gpu/drm/i915/intel_wakeref.c               |  28 +++
 drivers/gpu/drm/i915/intel_wakeref.h               |  35 +++-
 7 files changed, 86 insertions(+), 217 deletions(-)

diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index 47e845353ffad8..76454fcbf65228 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -24,7 +24,9 @@ config DRM_I915_DEBUG
 	select DEBUG_FS
 	select PREEMPT_COUNT
 	select I2C_CHARDEV
+	select REF_TRACKER
 	select STACKDEPOT
+	select STACKTRACE
 	select DRM_DP_AUX_CHARDEV
 	select X86_MSR # used by igt/pm_rpm
 	select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
@@ -230,7 +232,9 @@ config DRM_I915_DEBUG_RUNTIME_PM
 	bool "Enable extra state checking for runtime PM"
 	depends on DRM_I915
 	default n
+	select REF_TRACKER
 	select STACKDEPOT
+	select STACKTRACE
 	help
 	  Choose this option to turn on extra state checking for the
 	  runtime PM functionality.
This may introduce overhead during diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c index 5150069f3f8218..36d202d3391857 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power.c +++ b/drivers/gpu/drm/i915/display/intel_display_power.c @@ -407,7 +407,7 @@ print_async_put_domains_state(struct i915_power_domains *power_domains) struct drm_i915_private, display.power.domains); - drm_dbg(&i915->drm, "async_put_wakeref %u\n", + drm_dbg(&i915->drm, "async_put_wakeref %lu\n", power_domains->async_put_wakeref); print_power_domains(power_domains, "async_put_domains[0]", diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index fd198700272b10..4e2fb438e26f04 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -1018,7 +1018,7 @@ void i915_driver_shutdown(struct drm_i915_private *i915) intel_power_domains_driver_remove(i915); enable_rpm_wakeref_asserts(&i915->runtime_pm); - intel_runtime_pm_driver_release(&i915->runtime_pm); + intel_runtime_pm_driver_last_release(&i915->runtime_pm); } static bool suspend_to_idle(struct drm_i915_private *dev_priv) diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c index cf5122299b6b8c..2166c209f17f04 100644 --- a/drivers/gpu/drm/i915/intel_runtime_pm.c +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c @@ -52,182 +52,37 @@ #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM) -#include - -#define STACKDEPTH 8 - -static noinline depot_stack_handle_t __save_depot_stack(void) -{ - unsigned long entries[STACKDEPTH]; - unsigned int n; - - n = stack_trace_save(entries, ARRAY_SIZE(entries), 1); - return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN); -} - static void init_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm) { - spin_lock_init(&rpm->debug.lock); - stack_depot_init(); + ref_tracker_dir_init(&rpm->debug, INTEL_REFTRACK_DEAD_COUNT, dev_name(rpm->kdev)); } -static noinline depot_stack_handle_t +static intel_wakeref_t track_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm) { - depot_stack_handle_t stack, *stacks; - unsigned long flags; - - if (rpm->no_wakeref_tracking) - return -1; - - stack = __save_depot_stack(); - if (!stack) + if (!rpm->available || rpm->no_wakeref_tracking) return -1; - spin_lock_irqsave(&rpm->debug.lock, flags); - - if (!rpm->debug.count) - rpm->debug.last_acquire = stack; - - stacks = krealloc(rpm->debug.owners, - (rpm->debug.count + 1) * sizeof(*stacks), - GFP_NOWAIT | __GFP_NOWARN); - if (stacks) { - stacks[rpm->debug.count++] = stack; - rpm->debug.owners = stacks; - } else { - stack = -1; - } - - spin_unlock_irqrestore(&rpm->debug.lock, flags); - - return stack; + return intel_ref_tracker_alloc(&rpm->debug); } static void untrack_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm, - depot_stack_handle_t stack) + intel_wakeref_t wakeref) { - struct drm_i915_private *i915 = container_of(rpm, - struct drm_i915_private, - runtime_pm); - unsigned long flags, n; - bool found = false; - - if (unlikely(stack == -1)) + if (!rpm->available || rpm->no_wakeref_tracking) return; - spin_lock_irqsave(&rpm->debug.lock, flags); - for (n = rpm->debug.count; n--; ) { - if (rpm->debug.owners[n] == stack) { - memmove(rpm->debug.owners + n, - rpm->debug.owners + n + 1, - (--rpm->debug.count - n) * sizeof(stack)); - found = true; - break; - } - } - spin_unlock_irqrestore(&rpm->debug.lock, flags); - - if (drm_WARN(&i915->drm, !found, - "Unmatched wakeref (tracking 
%lu), count %u\n", - rpm->debug.count, atomic_read(&rpm->wakeref_count))) { - char *buf; - - buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN); - if (!buf) - return; - - stack_depot_snprint(stack, buf, PAGE_SIZE, 2); - DRM_DEBUG_DRIVER("wakeref %x from\n%s", stack, buf); - - stack = READ_ONCE(rpm->debug.last_release); - if (stack) { - stack_depot_snprint(stack, buf, PAGE_SIZE, 2); - DRM_DEBUG_DRIVER("wakeref last released at\n%s", buf); - } - - kfree(buf); - } + intel_ref_tracker_free(&rpm->debug, wakeref); } -static int cmphandle(const void *_a, const void *_b) +static void untrack_all_intel_runtime_pm_wakerefs(struct intel_runtime_pm *rpm) { - const depot_stack_handle_t * const a = _a, * const b = _b; - - if (*a < *b) - return -1; - else if (*a > *b) - return 1; - else - return 0; -} - -static void -__print_intel_runtime_pm_wakeref(struct drm_printer *p, - const struct intel_runtime_pm_debug *dbg) -{ - unsigned long i; - char *buf; - - buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN); - if (!buf) - return; - - if (dbg->last_acquire) { - stack_depot_snprint(dbg->last_acquire, buf, PAGE_SIZE, 2); - drm_printf(p, "Wakeref last acquired:\n%s", buf); - } - - if (dbg->last_release) { - stack_depot_snprint(dbg->last_release, buf, PAGE_SIZE, 2); - drm_printf(p, "Wakeref last released:\n%s", buf); - } - - drm_printf(p, "Wakeref count: %lu\n", dbg->count); - - sort(dbg->owners, dbg->count, sizeof(*dbg->owners), cmphandle, NULL); - - for (i = 0; i < dbg->count; i++) { - depot_stack_handle_t stack = dbg->owners[i]; - unsigned long rep; - - rep = 1; - while (i + 1 < dbg->count && dbg->owners[i + 1] == stack) - rep++, i++; - stack_depot_snprint(stack, buf, PAGE_SIZE, 2); - drm_printf(p, "Wakeref x%lu taken at:\n%s", rep, buf); - } - - kfree(buf); -} - -static noinline void -__untrack_all_wakerefs(struct intel_runtime_pm_debug *debug, - struct intel_runtime_pm_debug *saved) -{ - *saved = *debug; - - debug->owners = NULL; - debug->count = 0; - debug->last_release = __save_depot_stack(); -} - -static void -dump_and_free_wakeref_tracking(struct intel_runtime_pm_debug *debug) -{ - if (debug->count) { - struct drm_printer p = drm_debug_printer("i915"); - - __print_intel_runtime_pm_wakeref(&p, debug); - } - - kfree(debug->owners); + ref_tracker_dir_exit(&rpm->debug); } static noinline void __intel_wakeref_dec_and_check_tracking(struct intel_runtime_pm *rpm) { - struct intel_runtime_pm_debug dbg = {}; unsigned long flags; if (!atomic_dec_and_lock_irqsave(&rpm->wakeref_count, @@ -235,60 +90,14 @@ __intel_wakeref_dec_and_check_tracking(struct intel_runtime_pm *rpm) flags)) return; - __untrack_all_wakerefs(&rpm->debug, &dbg); + ref_tracker_dir_print_locked(&rpm->debug, INTEL_REFTRACK_PRINT_LIMIT); spin_unlock_irqrestore(&rpm->debug.lock, flags); - - dump_and_free_wakeref_tracking(&dbg); -} - -static noinline void -untrack_all_intel_runtime_pm_wakerefs(struct intel_runtime_pm *rpm) -{ - struct intel_runtime_pm_debug dbg = {}; - unsigned long flags; - - spin_lock_irqsave(&rpm->debug.lock, flags); - __untrack_all_wakerefs(&rpm->debug, &dbg); - spin_unlock_irqrestore(&rpm->debug.lock, flags); - - dump_and_free_wakeref_tracking(&dbg); } void print_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm, struct drm_printer *p) { - struct intel_runtime_pm_debug dbg = {}; - - do { - unsigned long alloc = dbg.count; - depot_stack_handle_t *s; - - spin_lock_irq(&rpm->debug.lock); - dbg.count = rpm->debug.count; - if (dbg.count <= alloc) { - memcpy(dbg.owners, - rpm->debug.owners, - dbg.count * sizeof(*s)); - } - 
dbg.last_acquire = rpm->debug.last_acquire; - dbg.last_release = rpm->debug.last_release; - spin_unlock_irq(&rpm->debug.lock); - if (dbg.count <= alloc) - break; - - s = krealloc(dbg.owners, - dbg.count * sizeof(*s), - GFP_NOWAIT | __GFP_NOWARN); - if (!s) - goto out; - - dbg.owners = s; - } while (1); - - __print_intel_runtime_pm_wakeref(p, &dbg); - -out: - kfree(dbg.owners); + intel_ref_tracker_show(&rpm->debug, p); } #else @@ -297,14 +106,14 @@ static void init_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm) { } -static depot_stack_handle_t +static intel_wakeref_t track_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm) { return -1; } static void untrack_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm, - intel_wakeref_t wref) + intel_wakeref_t wakeref) { } @@ -639,7 +448,11 @@ void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm) "i915 raw-wakerefs=%d wakelocks=%d on cleanup\n", intel_rpm_raw_wakeref_count(count), intel_rpm_wakelock_count(count)); +} +void intel_runtime_pm_driver_last_release(struct intel_runtime_pm *rpm) +{ + intel_runtime_pm_driver_release(rpm); untrack_all_intel_runtime_pm_wakerefs(rpm); } diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.h b/drivers/gpu/drm/i915/intel_runtime_pm.h index e592e8d6499a1f..2f81d685bdb4d1 100644 --- a/drivers/gpu/drm/i915/intel_runtime_pm.h +++ b/drivers/gpu/drm/i915/intel_runtime_pm.h @@ -83,15 +83,7 @@ struct intel_runtime_pm { * paired rpm_put) we can remove corresponding pairs of and keep * the array trimmed to active wakerefs. */ - struct intel_runtime_pm_debug { - spinlock_t lock; - - depot_stack_handle_t last_acquire; - depot_stack_handle_t last_release; - - depot_stack_handle_t *owners; - unsigned long count; - } debug; + struct ref_tracker_dir debug; #endif }; @@ -195,6 +187,7 @@ void intel_runtime_pm_init_early(struct intel_runtime_pm *rpm); void intel_runtime_pm_enable(struct intel_runtime_pm *rpm); void intel_runtime_pm_disable(struct intel_runtime_pm *rpm); void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm); +void intel_runtime_pm_driver_last_release(struct intel_runtime_pm *rpm); intel_wakeref_t intel_runtime_pm_get(struct intel_runtime_pm *rpm); intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm); diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c index dfd87d08221807..48c62fee1f4ede 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.c +++ b/drivers/gpu/drm/i915/intel_wakeref.c @@ -187,3 +187,31 @@ void intel_wakeref_auto_fini(struct intel_wakeref_auto *wf) intel_wakeref_auto(wf, 0); INTEL_WAKEREF_BUG_ON(wf->wakeref); } + +void intel_ref_tracker_show(struct ref_tracker_dir *dir, + struct drm_printer *p) +{ + const size_t buf_size = PAGE_SIZE; + char *buf, *sb, *se; + size_t count; + + buf = kmalloc(buf_size, GFP_NOWAIT); + if (!buf) + return; + + count = ref_tracker_dir_snprint(dir, buf, buf_size); + if (!count) + goto free; + /* printk does not like big buffers, so we split it */ + for (sb = buf; *sb; sb = se + 1) { + se = strchrnul(sb, '\n'); + drm_printf(p, "%.*s", (int)(se - sb + 1), sb); + if (!*se) + break; + } + if (count >= buf_size) + drm_printf(p, "\n...dropped %zd extra bytes of leak report.\n", + count + 1 - buf_size); +free: + kfree(buf); +} diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h index 0b6b4852ab236a..439d8b172748ad 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.h +++ b/drivers/gpu/drm/i915/intel_wakeref.h @@ -7,16 +7,25 @@ #ifndef INTEL_WAKEREF_H #define 
INTEL_WAKEREF_H +#include + #include #include #include #include #include #include +#include +#include #include #include #include +typedef unsigned long intel_wakeref_t; + +#define INTEL_REFTRACK_DEAD_COUNT 16 +#define INTEL_REFTRACK_PRINT_LIMIT 16 + #if IS_ENABLED(CONFIG_DRM_I915_DEBUG) #define INTEL_WAKEREF_BUG_ON(expr) BUG_ON(expr) #else @@ -26,8 +35,6 @@ struct intel_runtime_pm; struct intel_wakeref; -typedef depot_stack_handle_t intel_wakeref_t; - struct intel_wakeref_ops { int (*get)(struct intel_wakeref *wf); int (*put)(struct intel_wakeref *wf); @@ -261,6 +268,30 @@ __intel_wakeref_defer_park(struct intel_wakeref *wf) */ int intel_wakeref_wait_for_idle(struct intel_wakeref *wf); +#define INTEL_WAKEREF_DEF ((intel_wakeref_t)(-1)) + +static inline intel_wakeref_t intel_ref_tracker_alloc(struct ref_tracker_dir *dir) +{ + struct ref_tracker *user = NULL; + + ref_tracker_alloc(dir, &user, GFP_NOWAIT); + + return (intel_wakeref_t)user ?: INTEL_WAKEREF_DEF; +} + +static inline void intel_ref_tracker_free(struct ref_tracker_dir *dir, + intel_wakeref_t handle) +{ + struct ref_tracker *user; + + user = (handle == INTEL_WAKEREF_DEF) ? NULL : (void *)handle; + + ref_tracker_free(dir, &user); +} + +void intel_ref_tracker_show(struct ref_tracker_dir *dir, + struct drm_printer *p); + struct intel_wakeref_auto { struct intel_runtime_pm *rpm; struct timer_list timer; From patchwork Mon Apr 24 22:05:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Hajda X-Patchwork-Id: 87178
From: Andrzej Hajda Date: Tue, 25 Apr 2023 00:05:44 +0200 Subject: [PATCH v8 7/7] drm/i915: Track gt pm wakerefs MIME-Version: 1.0 Message-Id: <20230224-track_gt-v8-7-4b6517e61be6@intel.com> References: <20230224-track_gt-v8-0-4b6517e61be6@intel.com> In-Reply-To: <20230224-track_gt-v8-0-4b6517e61be6@intel.com> To: Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , David Airlie , Daniel Vetter , Eric Dumazet Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Chris Wilson , netdev@vger.kernel.org, Jakub Kicinski , Dmitry Vyukov , "David S.
Miller" , Andi Shyti , Andrzej Hajda X-Mailer: b4 0.11.1 X-Spam-Status: No, score=-4.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_PDS_OTHER_BAD_TLD,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764097534047868391?= X-GMAIL-MSGID: =?utf-8?q?1764097534047868391?= Track every intel_gt_pm_get() until its corresponding release in intel_gt_pm_put() by returning a cookie to the caller for acquire that must be passed by on released. When there is an imbalance, we can see who either tried to free a stale wakeref, or who forgot to free theirs. Signed-off-by: Andrzej Hajda Reviewed-by: Andi Shyti --- drivers/gpu/drm/i915/Kconfig.debug | 14 ++++++++ drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 7 ++-- .../drm/i915/gem/selftests/i915_gem_coherency.c | 10 +++--- drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 14 ++++---- drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 13 +++++--- drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h | 3 +- drivers/gpu/drm/i915/gt/intel_context.h | 4 +-- drivers/gpu/drm/i915/gt/intel_context_types.h | 2 ++ drivers/gpu/drm/i915/gt/intel_engine_pm.c | 7 ++-- drivers/gpu/drm/i915/gt/intel_engine_types.h | 2 ++ .../gpu/drm/i915/gt/intel_execlists_submission.c | 2 +- drivers/gpu/drm/i915/gt/intel_gt_pm.c | 12 ++++--- drivers/gpu/drm/i915/gt/intel_gt_pm.h | 38 +++++++++++++++++----- drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c | 4 +-- drivers/gpu/drm/i915/gt/selftest_engine_cs.c | 20 +++++++----- drivers/gpu/drm/i915/gt/selftest_gt_pm.c | 5 +-- drivers/gpu/drm/i915/gt/selftest_reset.c | 10 +++--- drivers/gpu/drm/i915/gt/selftest_rps.c | 17 ++++++---- drivers/gpu/drm/i915/gt/selftest_slpc.c | 5 +-- drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 10 +++--- drivers/gpu/drm/i915/i915_pmu.c | 16 +++++---- drivers/gpu/drm/i915/intel_wakeref.c | 7 +++- drivers/gpu/drm/i915/intel_wakeref.h | 38 ++++++++++++++++++++-- 23 files changed, 182 insertions(+), 78 deletions(-) diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug index 76454fcbf65228..01e18e4c0e2590 100644 --- a/drivers/gpu/drm/i915/Kconfig.debug +++ b/drivers/gpu/drm/i915/Kconfig.debug @@ -40,6 +40,7 @@ config DRM_I915_DEBUG select DRM_I915_DEBUG_GEM_ONCE select DRM_I915_DEBUG_MMIO select DRM_I915_DEBUG_RUNTIME_PM + select DRM_I915_DEBUG_WAKEREF select DRM_I915_SW_FENCE_DEBUG_OBJECTS select DRM_I915_SELFTEST default n @@ -243,3 +244,16 @@ config DRM_I915_DEBUG_RUNTIME_PM Recommended for driver developers only. If in doubt, say "N" + +config DRM_I915_DEBUG_WAKEREF + bool "Enable extra tracking for wakerefs" + depends on DRM_I915 + select REF_TRACKER + select STACKDEPOT + select STACKTRACE + help + Choose this option to turn on extra state checking and usage + tracking for the wakerefPM functionality. This may introduce + overhead during driver runtime. 
+ + If in doubt, say "N" diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 3aeede6aee4dcc..33a034a9c42f11 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -253,6 +253,7 @@ struct i915_execbuffer { struct intel_gt *gt; /* gt for the execbuf */ struct intel_context *context; /* logical state for the request */ struct i915_gem_context *gem_context; /** caller's context */ + intel_wakeref_t wakeref; /** our requests to build */ struct i915_request *requests[MAX_ENGINE_INSTANCE + 1]; @@ -2709,7 +2710,7 @@ eb_select_engine(struct i915_execbuffer *eb) for_each_child(ce, child) intel_context_get(child); - intel_gt_pm_get(ce->engine->gt); + eb->wakeref = intel_gt_pm_get(ce->engine->gt); if (!test_bit(CONTEXT_ALLOC_BIT, &ce->flags)) { err = intel_context_alloc_state(ce); @@ -2748,7 +2749,7 @@ eb_select_engine(struct i915_execbuffer *eb) return err; err: - intel_gt_pm_put(ce->engine->gt); + intel_gt_pm_put(ce->engine->gt, eb->wakeref); for_each_child(ce, child) intel_context_put(child); intel_context_put(ce); @@ -2761,7 +2762,7 @@ eb_put_engine(struct i915_execbuffer *eb) struct intel_context *child; i915_vm_put(eb->context->vm); - intel_gt_pm_put(eb->gt); + intel_gt_pm_put(eb->context->engine->gt, eb->wakeref); for_each_child(eb->context, child) intel_context_put(child); intel_context_put(eb->context); diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c index 3bef1beec7cbb5..3fd68a099a85ef 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c @@ -85,6 +85,7 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v) static int gtt_set(struct context *ctx, unsigned long offset, u32 v) { + intel_wakeref_t wakeref; struct i915_vma *vma; u32 __iomem *map; int err = 0; @@ -99,7 +100,7 @@ static int gtt_set(struct context *ctx, unsigned long offset, u32 v) if (IS_ERR(vma)) return PTR_ERR(vma); - intel_gt_pm_get(vma->vm->gt); + wakeref = intel_gt_pm_get(vma->vm->gt); map = i915_vma_pin_iomap(vma); i915_vma_unpin(vma); @@ -112,12 +113,13 @@ static int gtt_set(struct context *ctx, unsigned long offset, u32 v) i915_vma_unpin_iomap(vma); out_rpm: - intel_gt_pm_put(vma->vm->gt); + intel_gt_pm_put(vma->vm->gt, wakeref); return err; } static int gtt_get(struct context *ctx, unsigned long offset, u32 *v) { + intel_wakeref_t wakeref; struct i915_vma *vma; u32 __iomem *map; int err = 0; @@ -132,7 +134,7 @@ static int gtt_get(struct context *ctx, unsigned long offset, u32 *v) if (IS_ERR(vma)) return PTR_ERR(vma); - intel_gt_pm_get(vma->vm->gt); + wakeref = intel_gt_pm_get(vma->vm->gt); map = i915_vma_pin_iomap(vma); i915_vma_unpin(vma); @@ -145,7 +147,7 @@ static int gtt_get(struct context *ctx, unsigned long offset, u32 *v) i915_vma_unpin_iomap(vma); out_rpm: - intel_gt_pm_put(vma->vm->gt); + intel_gt_pm_put(vma->vm->gt, wakeref); return err; } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index 56279908ed305b..f6f36a85688814 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -630,14 +630,14 @@ static bool assert_mmap_offset(struct drm_i915_private *i915, static void disable_retire_worker(struct drm_i915_private *i915) { i915_gem_driver_unregister__shrinker(i915); - 
intel_gt_pm_get(to_gt(i915)); + intel_gt_pm_get_untracked(to_gt(i915)); cancel_delayed_work_sync(&to_gt(i915)->requests.retire_work); } static void restore_retire_worker(struct drm_i915_private *i915) { igt_flush_test(i915); - intel_gt_pm_put(to_gt(i915)); + intel_gt_pm_put_untracked(to_gt(i915)); i915_gem_driver_register__shrinker(i915); } @@ -778,6 +778,7 @@ static int igt_mmap_offset_exhaustion(void *arg) static int gtt_set(struct drm_i915_gem_object *obj) { + intel_wakeref_t wakeref; struct i915_vma *vma; void __iomem *map; int err = 0; @@ -786,7 +787,7 @@ static int gtt_set(struct drm_i915_gem_object *obj) if (IS_ERR(vma)) return PTR_ERR(vma); - intel_gt_pm_get(vma->vm->gt); + wakeref = intel_gt_pm_get(vma->vm->gt); map = i915_vma_pin_iomap(vma); i915_vma_unpin(vma); if (IS_ERR(map)) { @@ -798,12 +799,13 @@ static int gtt_set(struct drm_i915_gem_object *obj) i915_vma_unpin_iomap(vma); out: - intel_gt_pm_put(vma->vm->gt); + intel_gt_pm_put(vma->vm->gt, wakeref); return err; } static int gtt_check(struct drm_i915_gem_object *obj) { + intel_wakeref_t wakeref; struct i915_vma *vma; void __iomem *map; int err = 0; @@ -812,7 +814,7 @@ static int gtt_check(struct drm_i915_gem_object *obj) if (IS_ERR(vma)) return PTR_ERR(vma); - intel_gt_pm_get(vma->vm->gt); + wakeref = intel_gt_pm_get(vma->vm->gt); map = i915_vma_pin_iomap(vma); i915_vma_unpin(vma); if (IS_ERR(map)) { @@ -828,7 +830,7 @@ static int gtt_check(struct drm_i915_gem_object *obj) i915_vma_unpin_iomap(vma); out: - intel_gt_pm_put(vma->vm->gt); + intel_gt_pm_put(vma->vm->gt, wakeref); return err; } diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c index ecc990ec1b9526..d650beb8ed22f6 100644 --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c @@ -28,11 +28,14 @@ static void irq_disable(struct intel_breadcrumbs *b) static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b) { + intel_wakeref_t wakeref; + /* * Since we are waiting on a request, the GPU should be busy * and should have its own rpm reference. */ - if (GEM_WARN_ON(!intel_gt_pm_get_if_awake(b->irq_engine->gt))) + wakeref = intel_gt_pm_get_if_awake(b->irq_engine->gt); + if (GEM_WARN_ON(!wakeref)) return; /* @@ -41,7 +44,7 @@ static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b) * which we can add a new waiter and avoid the cost of re-enabling * the irq. */ - WRITE_ONCE(b->irq_armed, true); + WRITE_ONCE(b->irq_armed, wakeref); /* Requests may have completed before we could enable the interrupt. 
*/ if (!b->irq_enabled++ && b->irq_enable(b)) @@ -61,12 +64,14 @@ static void intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b) static void __intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b) { + intel_wakeref_t wakeref = b->irq_armed; + GEM_BUG_ON(!b->irq_enabled); if (!--b->irq_enabled) b->irq_disable(b); - WRITE_ONCE(b->irq_armed, false); - intel_gt_pm_put_async(b->irq_engine->gt); + WRITE_ONCE(b->irq_armed, 0); + intel_gt_pm_put_async(b->irq_engine->gt, wakeref); } static void intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b) diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h index 72dfd3748c4c33..bdf09fd67b6e70 100644 --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h @@ -13,6 +13,7 @@ #include #include "intel_engine_types.h" +#include "intel_wakeref.h" /* * Rather than have every client wait upon all user interrupts, @@ -43,7 +44,7 @@ struct intel_breadcrumbs { spinlock_t irq_lock; /* protects the interrupt from hardirq context */ struct irq_work irq_work; /* for use from inside irq_lock */ unsigned int irq_enabled; - bool irq_armed; + intel_wakeref_t irq_armed; /* Not all breadcrumbs are attached to physical HW */ intel_engine_mask_t engine_mask; diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index 48f888c3da083b..582faa21181e58 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -212,7 +212,7 @@ static inline void intel_context_enter(struct intel_context *ce) return; ce->ops->enter(ce); - intel_gt_pm_get(ce->vm->gt); + ce->wakeref = intel_gt_pm_get(ce->vm->gt); } static inline void intel_context_mark_active(struct intel_context *ce) @@ -229,7 +229,7 @@ static inline void intel_context_exit(struct intel_context *ce) if (--ce->active_count) return; - intel_gt_pm_put_async(ce->vm->gt); + intel_gt_pm_put_async(ce->vm->gt, ce->wakeref); ce->ops->exit(ce); } diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index e36670f2e6260b..5dc39a9d7a501c 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -17,6 +17,7 @@ #include "i915_utils.h" #include "intel_engine_types.h" #include "intel_sseu.h" +#include "intel_wakeref.h" #include "uc/intel_guc_fwif.h" @@ -110,6 +111,7 @@ struct intel_context { u32 ring_size; struct intel_ring *ring; struct intel_timeline *timeline; + intel_wakeref_t wakeref; unsigned long flags; #define CONTEXT_BARRIER_BIT 0 diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c index ee531a5c142c77..bb021d83d3f882 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c @@ -63,7 +63,7 @@ static int __engine_unpark(struct intel_wakeref *wf) ENGINE_TRACE(engine, "\n"); - intel_gt_pm_get(engine->gt); + engine->wakeref_track = intel_gt_pm_get(engine->gt); /* Discard stale context state from across idling */ ce = engine->kernel_context; @@ -122,6 +122,7 @@ __queue_and_release_pm(struct i915_request *rq, */ GEM_BUG_ON(rq->context->active_count != 1); __intel_gt_pm_get(engine->gt); + rq->context->wakeref = intel_wakeref_track(&engine->gt->wakeref); /* * We have to serialise all potential retirement paths with our @@ -285,7 +286,7 @@ static int __engine_park(struct intel_wakeref *wf) engine->park(engine); /* While gt calls 
i915_vma_parked(), we have to break the lock cycle */ - intel_gt_pm_put_async(engine->gt); + intel_gt_pm_put_async(engine->gt, engine->wakeref_track); return 0; } @@ -298,7 +299,7 @@ void intel_engine_init__pm(struct intel_engine_cs *engine) { struct intel_runtime_pm *rpm = engine->uncore->rpm; - intel_wakeref_init(&engine->wakeref, rpm, &wf_ops); + intel_wakeref_init(&engine->wakeref, rpm, &wf_ops, engine->name); intel_engine_init_heartbeat(engine); intel_gsc_idle_msg_enable(engine); diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h index 960291f88fd6cf..f55b524e518035 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h @@ -433,7 +433,9 @@ struct intel_engine_cs { unsigned long serial; unsigned long wakeref_serial; + intel_wakeref_t wakeref_track; struct intel_wakeref wakeref; + struct file *default_state; struct { diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index 750326434677f1..f7d674c3c6ee65 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -630,7 +630,7 @@ static void __execlists_schedule_out(struct i915_request * const rq, execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT); if (engine->fw_domain && !--engine->fw_active) intel_uncore_forcewake_put(engine->uncore, engine->fw_domain); - intel_gt_pm_put_async(engine->gt); + intel_gt_pm_put_async_untracked(engine->gt); /* * If this is part of a virtual engine, its next request may diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c index e02cb90723ae03..76378dd2873dfc 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c @@ -27,19 +27,20 @@ static void user_forcewake(struct intel_gt *gt, bool suspend) { int count = atomic_read(>->user_wakeref); + intel_wakeref_t wakeref; /* Inside suspend/resume so single threaded, no races to worry about. */ if (likely(!count)) return; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); if (suspend) { GEM_BUG_ON(count > atomic_read(>->wakeref.count)); atomic_sub(count, >->wakeref.count); } else { atomic_add(count, >->wakeref.count); } - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); } static void runtime_begin(struct intel_gt *gt) @@ -137,7 +138,7 @@ void intel_gt_pm_init_early(struct intel_gt *gt) * runtime_pm is per-device rather than per-tile, so this is still the * correct structure. 
*/ - intel_wakeref_init(>->wakeref, >->i915->runtime_pm, &wf_ops); + intel_wakeref_init(>->wakeref, >->i915->runtime_pm, &wf_ops, "GT"); seqcount_mutex_init(>->stats.lock, >->wakeref.mutex); } @@ -220,6 +221,7 @@ int intel_gt_resume(struct intel_gt *gt) { struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err; err = intel_gt_has_unrecoverable_error(gt); @@ -236,7 +238,7 @@ int intel_gt_resume(struct intel_gt *gt) */ gt_sanitize(gt, true); - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL); intel_rc6_sanitize(>->rc6); @@ -279,7 +281,7 @@ int intel_gt_resume(struct intel_gt *gt) out_fw: intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return err; err_wedged: diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h b/drivers/gpu/drm/i915/gt/intel_gt_pm.h index 6c9a4645236467..2ae5fbbede6ca8 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h @@ -16,19 +16,28 @@ static inline bool intel_gt_pm_is_awake(const struct intel_gt *gt) return intel_wakeref_is_active(>->wakeref); } -static inline void intel_gt_pm_get(struct intel_gt *gt) +static inline void intel_gt_pm_get_untracked(struct intel_gt *gt) { intel_wakeref_get(>->wakeref); } +static inline intel_wakeref_t intel_gt_pm_get(struct intel_gt *gt) +{ + intel_gt_pm_get_untracked(gt); + return intel_wakeref_track(>->wakeref); +} + static inline void __intel_gt_pm_get(struct intel_gt *gt) { __intel_wakeref_get(>->wakeref); } -static inline bool intel_gt_pm_get_if_awake(struct intel_gt *gt) +static inline intel_wakeref_t intel_gt_pm_get_if_awake(struct intel_gt *gt) { - return intel_wakeref_get_if_active(>->wakeref); + if (!intel_wakeref_get_if_active(>->wakeref)) + return 0; + + return intel_wakeref_track(>->wakeref); } static inline void intel_gt_pm_might_get(struct intel_gt *gt) @@ -36,12 +45,18 @@ static inline void intel_gt_pm_might_get(struct intel_gt *gt) intel_wakeref_might_get(>->wakeref); } -static inline void intel_gt_pm_put(struct intel_gt *gt) +static inline void intel_gt_pm_put_untracked(struct intel_gt *gt) { intel_wakeref_put(>->wakeref); } -static inline void intel_gt_pm_put_async(struct intel_gt *gt) +static inline void intel_gt_pm_put(struct intel_gt *gt, intel_wakeref_t handle) +{ + intel_wakeref_untrack(>->wakeref, handle); + intel_gt_pm_put_untracked(gt); +} + +static inline void intel_gt_pm_put_async_untracked(struct intel_gt *gt) { intel_wakeref_put_async(>->wakeref); } @@ -51,9 +66,14 @@ static inline void intel_gt_pm_might_put(struct intel_gt *gt) intel_wakeref_might_put(>->wakeref); } -#define with_intel_gt_pm(gt, tmp) \ - for (tmp = 1, intel_gt_pm_get(gt); tmp; \ - intel_gt_pm_put(gt), tmp = 0) +static inline void intel_gt_pm_put_async(struct intel_gt *gt, intel_wakeref_t handle) +{ + intel_wakeref_untrack(>->wakeref, handle); + intel_gt_pm_put_async_untracked(gt); +} + +#define with_intel_gt_pm(gt, wf) \ + for (wf = intel_gt_pm_get(gt); wf; intel_gt_pm_put(gt, wf), wf = 0) /** * with_intel_gt_pm_if_awake - if GT is PM awake, get a reference to prevent @@ -64,7 +84,7 @@ static inline void intel_gt_pm_might_put(struct intel_gt *gt) * @wf: pointer to a temporary wakeref. 
*/ #define with_intel_gt_pm_if_awake(gt, wf) \ - for (wf = intel_gt_pm_get_if_awake(gt); wf; intel_gt_pm_put_async(gt), wf = 0) + for (wf = intel_gt_pm_get_if_awake(gt); wf; intel_gt_pm_put_async(gt, wf), wf = 0) static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt) { diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c b/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c index 80dbbef86b1dbf..0960fae5a0b205 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c @@ -27,7 +27,7 @@ void intel_gt_pm_debugfs_forcewake_user_open(struct intel_gt *gt) { atomic_inc(>->user_wakeref); - intel_gt_pm_get(gt); + intel_gt_pm_get_untracked(gt); if (GRAPHICS_VER(gt->i915) >= 6) intel_uncore_forcewake_user_get(gt->uncore); } @@ -36,7 +36,7 @@ void intel_gt_pm_debugfs_forcewake_user_release(struct intel_gt *gt) { if (GRAPHICS_VER(gt->i915) >= 6) intel_uncore_forcewake_user_put(gt->uncore); - intel_gt_pm_put(gt); + intel_gt_pm_put_untracked(gt); atomic_dec(>->user_wakeref); } diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c index 542ce6d2de1922..f3d21bdfd7f639 100644 --- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c @@ -21,20 +21,22 @@ static int cmp_u32(const void *A, const void *B) return *a - *b; } -static void perf_begin(struct intel_gt *gt) +static intel_wakeref_t perf_begin(struct intel_gt *gt) { - intel_gt_pm_get(gt); + intel_wakeref_t wakeref = intel_gt_pm_get(gt); /* Boost gpufreq to max [waitboost] and keep it fixed */ atomic_inc(>->rps.num_waiters); schedule_work(>->rps.work); flush_work(>->rps.work); + + return wakeref; } -static int perf_end(struct intel_gt *gt) +static int perf_end(struct intel_gt *gt, intel_wakeref_t wakeref) { atomic_dec(>->rps.num_waiters); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return igt_flush_test(gt->i915); } @@ -133,12 +135,13 @@ static int perf_mi_bb_start(void *arg) struct intel_gt *gt = arg; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err = 0; if (GRAPHICS_VER(gt->i915) < 4) /* Any CS_TIMESTAMP? */ return 0; - perf_begin(gt); + wakeref = perf_begin(gt); for_each_engine(engine, gt, id) { struct intel_context *ce = engine->kernel_context; struct i915_vma *batch; @@ -207,7 +210,7 @@ static int perf_mi_bb_start(void *arg) pr_info("%s: MI_BB_START cycles: %u\n", engine->name, trifilter(cycles)); } - if (perf_end(gt)) + if (perf_end(gt, wakeref)) err = -EIO; return err; @@ -260,12 +263,13 @@ static int perf_mi_noop(void *arg) struct intel_gt *gt = arg; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err = 0; if (GRAPHICS_VER(gt->i915) < 4) /* Any CS_TIMESTAMP? 
*/ return 0; - perf_begin(gt); + wakeref = perf_begin(gt); for_each_engine(engine, gt, id) { struct intel_context *ce = engine->kernel_context; struct i915_vma *base, *nop; @@ -364,7 +368,7 @@ static int perf_mi_noop(void *arg) pr_info("%s: 16K MI_NOOP cycles: %u\n", engine->name, trifilter(cycles)); } - if (perf_end(gt)) + if (perf_end(gt, wakeref)) err = -EIO; return err; diff --git a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c index 0971241707ce83..33351deeea4f0b 100644 --- a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c +++ b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c @@ -81,6 +81,7 @@ static int live_gt_clocks(void *arg) struct intel_gt *gt = arg; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err = 0; if (!gt->clock_frequency) { /* unknown */ @@ -91,7 +92,7 @@ static int live_gt_clocks(void *arg) if (GRAPHICS_VER(gt->i915) < 4) /* Any CS_TIMESTAMP? */ return 0; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL); for_each_engine(engine, gt, id) { @@ -128,7 +129,7 @@ static int live_gt_clocks(void *arg) } intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return err; } diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c index a9e0a91bc0e026..ab464cd72b8c39 100644 --- a/drivers/gpu/drm/i915/gt/selftest_reset.c +++ b/drivers/gpu/drm/i915/gt/selftest_reset.c @@ -257,11 +257,12 @@ static int igt_atomic_reset(void *arg) { struct intel_gt *gt = arg; const typeof(*igt_atomic_phases) *p; + intel_wakeref_t wakeref; int err = 0; /* Check that the resets are usable from atomic context */ - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); igt_global_reset_lock(gt); /* Flush any requests before we get started and check basics */ @@ -292,7 +293,7 @@ static int igt_atomic_reset(void *arg) unlock: igt_global_reset_unlock(gt); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return err; } @@ -303,6 +304,7 @@ static int igt_atomic_engine_reset(void *arg) const typeof(*igt_atomic_phases) *p; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err = 0; /* Check that the resets are usable from atomic context */ @@ -313,7 +315,7 @@ static int igt_atomic_engine_reset(void *arg) if (intel_uc_uses_guc_submission(>->uc)) return 0; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); igt_global_reset_lock(gt); /* Flush any requests before we get started and check basics */ @@ -361,7 +363,7 @@ static int igt_atomic_engine_reset(void *arg) out_unlock: igt_global_reset_unlock(gt); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return err; } diff --git a/drivers/gpu/drm/i915/gt/selftest_rps.c b/drivers/gpu/drm/i915/gt/selftest_rps.c index fb30f733b03665..dcef8d49891972 100644 --- a/drivers/gpu/drm/i915/gt/selftest_rps.c +++ b/drivers/gpu/drm/i915/gt/selftest_rps.c @@ -224,6 +224,7 @@ int live_rps_clock_interval(void *arg) struct intel_engine_cs *engine; enum intel_engine_id id; struct igt_spinner spin; + intel_wakeref_t wakeref; int err = 0; if (!intel_rps_is_enabled(rps) || GRAPHICS_VER(gt->i915) < 6) @@ -236,7 +237,7 @@ int live_rps_clock_interval(void *arg) saved_work = rps->work.func; rps->work.func = dummy_rps_work; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); intel_rps_disable(>->rps); intel_gt_check_clock_frequency(gt); @@ -355,7 +356,7 @@ int live_rps_clock_interval(void *arg) } intel_rps_enable(>->rps); - 
intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); igt_spinner_fini(&spin); @@ -376,6 +377,7 @@ int live_rps_control(void *arg) struct intel_engine_cs *engine; enum intel_engine_id id; struct igt_spinner spin; + intel_wakeref_t wakeref; int err = 0; /* @@ -398,7 +400,7 @@ int live_rps_control(void *arg) saved_work = rps->work.func; rps->work.func = dummy_rps_work; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); for_each_engine(engine, gt, id) { struct i915_request *rq; ktime_t min_dt, max_dt; @@ -488,7 +490,7 @@ int live_rps_control(void *arg) break; } } - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); igt_spinner_fini(&spin); @@ -1023,6 +1025,7 @@ int live_rps_interrupt(void *arg) struct intel_engine_cs *engine; enum intel_engine_id id; struct igt_spinner spin; + intel_wakeref_t wakeref; u32 pm_events; int err = 0; @@ -1033,9 +1036,9 @@ int live_rps_interrupt(void *arg) if (!intel_rps_has_interrupts(rps) || GRAPHICS_VER(gt->i915) < 6) return 0; - intel_gt_pm_get(gt); - pm_events = rps->pm_events; - intel_gt_pm_put(gt); + pm_events = 0; + with_intel_gt_pm(gt, wakeref) + pm_events = rps->pm_events; if (!pm_events) { pr_err("No RPS PM events registered, but RPS is enabled?\n"); return -ENODEV; diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.c b/drivers/gpu/drm/i915/gt/selftest_slpc.c index bd44ce73a5044e..81fa891eac19fc 100644 --- a/drivers/gpu/drm/i915/gt/selftest_slpc.c +++ b/drivers/gpu/drm/i915/gt/selftest_slpc.c @@ -241,6 +241,7 @@ static int run_test(struct intel_gt *gt, int test_type) struct intel_rps *rps = >->rps; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; struct igt_spinner spin; u32 slpc_min_freq, slpc_max_freq; int err = 0; @@ -278,7 +279,7 @@ static int run_test(struct intel_gt *gt, int test_type) } intel_gt_pm_wait_for_idle(gt); - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); for_each_engine(engine, gt, id) { struct i915_request *rq; u32 max_act_freq; @@ -365,7 +366,7 @@ static int run_test(struct intel_gt *gt, int test_type) if (igt_flush_test(gt->i915)) err = -EIO; - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); igt_spinner_fini(&spin); intel_gt_pm_wait_for_idle(gt); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index fe390d59929b02..76d3dfbdf479b7 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1106,7 +1106,7 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc) if (deregister) guc_signal_context_fence(ce); if (destroyed) { - intel_gt_pm_put_async(guc_to_gt(guc)); + intel_gt_pm_put_async_untracked(guc_to_gt(guc)); release_guc_id(guc, ce); __guc_context_destroy(ce); } @@ -1302,6 +1302,7 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now) unsigned long flags; u32 reset_count; bool in_reset; + intel_wakeref_t wakeref; spin_lock_irqsave(&guc->timestamp.lock, flags); @@ -1324,7 +1325,8 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now) * start_gt_clk is derived from GuC state. To get a consistent * view of activity, we query the GuC state only if gt is awake. */ - if (!in_reset && intel_gt_pm_get_if_awake(gt)) { + wakeref = in_reset ? 
0 : intel_gt_pm_get_if_awake(gt); + if (wakeref) { stats_saved = *stats; gt_stamp_saved = guc->timestamp.gt_stamp; /* @@ -1333,7 +1335,7 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now) */ guc_update_engine_gt_clks(engine); guc_update_pm_timestamp(guc, now); - intel_gt_pm_put_async(gt); + intel_gt_pm_put_async(gt, wakeref); if (i915_reset_count(gpu_error) != reset_count) { *stats = stats_saved; guc->timestamp.gt_stamp = gt_stamp_saved; @@ -4604,7 +4606,7 @@ int intel_guc_deregister_done_process_msg(struct intel_guc *guc, intel_context_put(ce); } else if (context_destroyed(ce)) { /* Context has been destroyed */ - intel_gt_pm_put_async(guc_to_gt(guc)); + intel_gt_pm_put_async_untracked(guc_to_gt(guc)); release_guc_id(guc, ce); __guc_context_destroy(ce); } diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c index 7ece883a7d9567..12d858e17902ec 100644 --- a/drivers/gpu/drm/i915/i915_pmu.c +++ b/drivers/gpu/drm/i915/i915_pmu.c @@ -167,19 +167,19 @@ static u64 get_rc6(struct intel_gt *gt) { struct drm_i915_private *i915 = gt->i915; struct i915_pmu *pmu = &i915->pmu; + intel_wakeref_t wakeref; unsigned long flags; - bool awake = false; u64 val; - if (intel_gt_pm_get_if_awake(gt)) { + wakeref = intel_gt_pm_get_if_awake(gt); + if (wakeref) { val = __get_rc6(gt); - intel_gt_pm_put_async(gt); - awake = true; + intel_gt_pm_put_async(gt, wakeref); } spin_lock_irqsave(&pmu->lock, flags); - if (awake) { + if (wakeref) { pmu->sample[__I915_SAMPLE_RC6].cur = val; } else { /* @@ -372,12 +372,14 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns) struct drm_i915_private *i915 = gt->i915; struct i915_pmu *pmu = &i915->pmu; struct intel_rps *rps = >->rps; + intel_wakeref_t wakeref; if (!frequency_sampling_enabled(pmu)) return; /* Report 0/0 (actual/requested) frequency while parked. 
*/ - if (!intel_gt_pm_get_if_awake(gt)) + wakeref = intel_gt_pm_get_if_awake(gt); + if (!wakeref) return; if (pmu->enable & config_mask(I915_PMU_ACTUAL_FREQUENCY)) { @@ -406,7 +408,7 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns) period_ns / 1000); } - intel_gt_pm_put_async(gt); + intel_gt_pm_put_async(gt, wakeref); } static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer) diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c index 48c62fee1f4ede..957b25638b7a71 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.c +++ b/drivers/gpu/drm/i915/intel_wakeref.c @@ -96,7 +96,8 @@ static void __intel_wakeref_put_work(struct work_struct *wrk) void __intel_wakeref_init(struct intel_wakeref *wf, struct intel_runtime_pm *rpm, const struct intel_wakeref_ops *ops, - struct intel_wakeref_lockclass *key) + struct intel_wakeref_lockclass *key, + const char *name) { wf->rpm = rpm; wf->ops = ops; @@ -108,6 +109,10 @@ void __intel_wakeref_init(struct intel_wakeref *wf, INIT_DELAYED_WORK(&wf->work, __intel_wakeref_put_work); lockdep_init_map(&wf->work.work.lockdep_map, "wakeref.work", &key->work, 0); + +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF) + ref_tracker_dir_init(&wf->debug, INTEL_REFTRACK_DEAD_COUNT, name); +#endif } int intel_wakeref_wait_for_idle(struct intel_wakeref *wf) diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h index 439d8b172748ad..cd3fc88e4f657b 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.h +++ b/drivers/gpu/drm/i915/intel_wakeref.h @@ -50,6 +50,10 @@ struct intel_wakeref { const struct intel_wakeref_ops *ops; struct delayed_work work; + +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF) + struct ref_tracker_dir debug; +#endif }; struct intel_wakeref_lockclass { @@ -60,11 +64,12 @@ struct intel_wakeref_lockclass { void __intel_wakeref_init(struct intel_wakeref *wf, struct intel_runtime_pm *rpm, const struct intel_wakeref_ops *ops, - struct intel_wakeref_lockclass *key); -#define intel_wakeref_init(wf, rpm, ops) do { \ + struct intel_wakeref_lockclass *key, + const char *name); +#define intel_wakeref_init(wf, rpm, ops, name) do { \ static struct intel_wakeref_lockclass __key; \ \ - __intel_wakeref_init((wf), (rpm), (ops), &__key); \ + __intel_wakeref_init((wf), (rpm), (ops), &__key, name); \ } while (0) int __intel_wakeref_get_first(struct intel_wakeref *wf); @@ -292,6 +297,33 @@ static inline void intel_ref_tracker_free(struct ref_tracker_dir *dir, void intel_ref_tracker_show(struct ref_tracker_dir *dir, struct drm_printer *p); +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF) + +static inline intel_wakeref_t intel_wakeref_track(struct intel_wakeref *wf) +{ + return intel_ref_tracker_alloc(&wf->debug); +} + +static inline void intel_wakeref_untrack(struct intel_wakeref *wf, + intel_wakeref_t handle) +{ + intel_ref_tracker_free(&wf->debug, handle); +} + +#else + +static inline intel_wakeref_t intel_wakeref_track(struct intel_wakeref *wf) +{ + return -1; +} + +static inline void intel_wakeref_untrack(struct intel_wakeref *wf, + intel_wakeref_t handle) +{ +} + +#endif + struct intel_wakeref_auto { struct intel_runtime_pm *rpm; struct timer_list timer;