From patchwork Tue Mar 28 15:15:24 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 76132
From: Andrzej Hajda
Date: Tue, 28 Mar 2023 17:15:24 +0200
Subject: [PATCH v5 1/8] lib/ref_tracker: add unlocked leak print helper
Message-Id: <20230224-track_gt-v5-1-77be86f2c872@intel.com>
References: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
In-Reply-To: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Eric Dumazet, Jakub Kicinski, Dmitry Vyukov, "David S. Miller",
    Andrzej Hajda, Andi Shyti
Miller" , Andrzej Hajda , Andi Shyti X-Mailer: b4 0.11.1 X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761625744905432767?= X-GMAIL-MSGID: =?utf-8?q?1761625744905432767?= To have reliable detection of leaks, caller must be able to check under the same lock both: tracked counter and the leaks. dir.lock is natural candidate for such lock and unlocked print helper can be called with this lock taken. As a bonus we can reuse this helper in ref_tracker_dir_exit. Signed-off-by: Andrzej Hajda --- include/linux/ref_tracker.h | 8 ++++++ lib/ref_tracker.c | 66 ++++++++++++++++++++++++++------------------- 2 files changed, 46 insertions(+), 28 deletions(-) diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h index 9ca353ab712b5e..87a92f2bec1b88 100644 --- a/include/linux/ref_tracker.h +++ b/include/linux/ref_tracker.h @@ -36,6 +36,9 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir, void ref_tracker_dir_exit(struct ref_tracker_dir *dir); +void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, + unsigned int display_limit); + void ref_tracker_dir_print(struct ref_tracker_dir *dir, unsigned int display_limit); @@ -56,6 +59,11 @@ static inline void ref_tracker_dir_exit(struct ref_tracker_dir *dir) { } +static inline void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, + unsigned int display_limit) +{ +} + static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir, unsigned int display_limit) { diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c index dc7b14aa3431e2..d4eb0929af8f96 100644 --- a/lib/ref_tracker.c +++ b/lib/ref_tracker.c @@ -14,6 +14,38 @@ struct ref_tracker { depot_stack_handle_t free_stack_handle; }; +void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, + unsigned int display_limit) +{ + struct ref_tracker *tracker; + unsigned int i = 0; + + lockdep_assert_held(&dir->lock); + + list_for_each_entry(tracker, &dir->list, head) { + if (i < display_limit) { + pr_err("leaked reference.\n"); + if (tracker->alloc_stack_handle) + stack_depot_print(tracker->alloc_stack_handle); + i++; + } else { + break; + } + } +} +EXPORT_SYMBOL(ref_tracker_dir_print_locked); + +void ref_tracker_dir_print(struct ref_tracker_dir *dir, + unsigned int display_limit) +{ + unsigned long flags; + + spin_lock_irqsave(&dir->lock, flags); + ref_tracker_dir_print_locked(dir, display_limit); + spin_unlock_irqrestore(&dir->lock, flags); +} +EXPORT_SYMBOL(ref_tracker_dir_print); + void ref_tracker_dir_exit(struct ref_tracker_dir *dir) { struct ref_tracker *tracker, *n; @@ -27,13 +59,13 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir) kfree(tracker); dir->quarantine_avail++; } - list_for_each_entry_safe(tracker, n, &dir->list, head) { - pr_err("leaked reference.\n"); - if (tracker->alloc_stack_handle) - stack_depot_print(tracker->alloc_stack_handle); + if (!list_empty(&dir->list)) { + ref_tracker_dir_print_locked(dir, 16); leak = true; - list_del(&tracker->head); - kfree(tracker); + list_for_each_entry_safe(tracker, n, &dir->list, head) { + list_del(&tracker->head); + kfree(tracker); + } } spin_unlock_irqrestore(&dir->lock, flags); WARN_ON_ONCE(leak); 
 include/linux/ref_tracker.h |  8 ++++++
 lib/ref_tracker.c           | 66 +++++++++++++++++++++++++-------------------
 2 files changed, 46 insertions(+), 28 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index 9ca353ab712b5e..87a92f2bec1b88 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -36,6 +36,9 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
 
 void ref_tracker_dir_exit(struct ref_tracker_dir *dir);
 
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit);
+
 void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 			   unsigned int display_limit);
 
@@ -56,6 +59,11 @@ static inline void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 {
 }
 
+static inline void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+						 unsigned int display_limit)
+{
+}
+
 static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 					 unsigned int display_limit)
 {
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index dc7b14aa3431e2..d4eb0929af8f96 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -14,6 +14,38 @@ struct ref_tracker {
 	depot_stack_handle_t	free_stack_handle;
 };
 
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit)
+{
+	struct ref_tracker *tracker;
+	unsigned int i = 0;
+
+	lockdep_assert_held(&dir->lock);
+
+	list_for_each_entry(tracker, &dir->list, head) {
+		if (i < display_limit) {
+			pr_err("leaked reference.\n");
+			if (tracker->alloc_stack_handle)
+				stack_depot_print(tracker->alloc_stack_handle);
+			i++;
+		} else {
+			break;
+		}
+	}
+}
+EXPORT_SYMBOL(ref_tracker_dir_print_locked);
+
+void ref_tracker_dir_print(struct ref_tracker_dir *dir,
+			   unsigned int display_limit)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dir->lock, flags);
+	ref_tracker_dir_print_locked(dir, display_limit);
+	spin_unlock_irqrestore(&dir->lock, flags);
+}
+EXPORT_SYMBOL(ref_tracker_dir_print);
+
 void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 {
 	struct ref_tracker *tracker, *n;
@@ -27,13 +59,13 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 		kfree(tracker);
 		dir->quarantine_avail++;
 	}
-	list_for_each_entry_safe(tracker, n, &dir->list, head) {
-		pr_err("leaked reference.\n");
-		if (tracker->alloc_stack_handle)
-			stack_depot_print(tracker->alloc_stack_handle);
+	if (!list_empty(&dir->list)) {
+		ref_tracker_dir_print_locked(dir, 16);
 		leak = true;
-		list_del(&tracker->head);
-		kfree(tracker);
+		list_for_each_entry_safe(tracker, n, &dir->list, head) {
+			list_del(&tracker->head);
+			kfree(tracker);
+		}
 	}
 	spin_unlock_irqrestore(&dir->lock, flags);
 	WARN_ON_ONCE(leak);
@@ -42,28 +74,6 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 }
 EXPORT_SYMBOL(ref_tracker_dir_exit);
 
-void ref_tracker_dir_print(struct ref_tracker_dir *dir,
-			   unsigned int display_limit)
-{
-	struct ref_tracker *tracker;
-	unsigned long flags;
-	unsigned int i = 0;
-
-	spin_lock_irqsave(&dir->lock, flags);
-	list_for_each_entry(tracker, &dir->list, head) {
-		if (i < display_limit) {
-			pr_err("leaked reference.\n");
-			if (tracker->alloc_stack_handle)
-				stack_depot_print(tracker->alloc_stack_handle);
-			i++;
-		} else {
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&dir->lock, flags);
-}
-EXPORT_SYMBOL(ref_tracker_dir_print);
-
 int ref_tracker_alloc(struct ref_tracker_dir *dir,
 		      struct ref_tracker **trackerp,
 		      gfp_t gfp)

From patchwork Tue Mar 28 15:15:25 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 76147
From: Andrzej Hajda
Date: Tue, 28 Mar 2023 17:15:25 +0200
Subject: [PATCH v5 2/8] lib/ref_tracker: improve printing stats
Message-Id: <20230224-track_gt-v5-2-77be86f2c872@intel.com>
References: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
In-Reply-To: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Eric Dumazet, Jakub Kicinski, Dmitry Vyukov, "David S. Miller",
    Andrzej Hajda, Andi Shyti
Miller" , Andrzej Hajda , Andi Shyti X-Mailer: b4 0.11.1 X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761626712684082270?= X-GMAIL-MSGID: =?utf-8?q?1761626712684082270?= In case the library is tracking busy subsystem, simply printing stack for every active reference will spam log with long, hard to read, redundant stack traces. To improve readabilty following changes have been made: - reports are printed per stack_handle - log is more compact, - added display name for ref_tracker_dir - it will differentiate multiple subsystems, - stack trace is printed indented, in the same printk call, - info about dropped references is printed as well. Signed-off-by: Andrzej Hajda --- include/linux/ref_tracker.h | 15 ++++++-- lib/ref_tracker.c | 90 +++++++++++++++++++++++++++++++++++++++------ 2 files changed, 91 insertions(+), 14 deletions(-) diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h index 87a92f2bec1b88..fc9ef9952f01fd 100644 --- a/include/linux/ref_tracker.h +++ b/include/linux/ref_tracker.h @@ -17,12 +17,19 @@ struct ref_tracker_dir { bool dead; struct list_head list; /* List of active trackers */ struct list_head quarantine; /* List of dead trackers */ + char name[32]; #endif }; #ifdef CONFIG_REF_TRACKER -static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir, - unsigned int quarantine_count) + +/* Temporary allow two and three arguments, until consumers are converted */ +#define ref_tracker_dir_init(_d, _q, args...) _ref_tracker_dir_init(_d, _q, ##args, #_d) +#define _ref_tracker_dir_init(_d, _q, _n, ...) __ref_tracker_dir_init(_d, _q, _n) + +static inline void __ref_tracker_dir_init(struct ref_tracker_dir *dir, + unsigned int quarantine_count, + const char *name) { INIT_LIST_HEAD(&dir->list); INIT_LIST_HEAD(&dir->quarantine); @@ -31,6 +38,7 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir, dir->dead = false; refcount_set(&dir->untracked, 1); refcount_set(&dir->no_tracker, 1); + strlcpy(dir->name, name, sizeof(dir->name)); stack_depot_init(); } @@ -51,7 +59,8 @@ int ref_tracker_free(struct ref_tracker_dir *dir, #else /* CONFIG_REF_TRACKER */ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir, - unsigned int quarantine_count) + unsigned int quarantine_count, + ...) 
 include/linux/ref_tracker.h | 15 ++++++--
 lib/ref_tracker.c           | 90 +++++++++++++++++++++++++++++++++++++++------
 2 files changed, 91 insertions(+), 14 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index 87a92f2bec1b88..fc9ef9952f01fd 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -17,12 +17,19 @@ struct ref_tracker_dir {
 	bool			dead;
 	struct list_head	list; /* List of active trackers */
 	struct list_head	quarantine; /* List of dead trackers */
+	char			name[32];
 #endif
 };
 
 #ifdef CONFIG_REF_TRACKER
-static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
-					unsigned int quarantine_count)
+
+/* Temporary allow two and three arguments, until consumers are converted */
+#define ref_tracker_dir_init(_d, _q, args...) _ref_tracker_dir_init(_d, _q, ##args, #_d)
+#define _ref_tracker_dir_init(_d, _q, _n, ...) __ref_tracker_dir_init(_d, _q, _n)
+
+static inline void __ref_tracker_dir_init(struct ref_tracker_dir *dir,
+					  unsigned int quarantine_count,
+					  const char *name)
 {
 	INIT_LIST_HEAD(&dir->list);
 	INIT_LIST_HEAD(&dir->quarantine);
@@ -31,6 +38,7 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
 	dir->dead = false;
 	refcount_set(&dir->untracked, 1);
 	refcount_set(&dir->no_tracker, 1);
+	strlcpy(dir->name, name, sizeof(dir->name));
 	stack_depot_init();
 }
 
@@ -51,7 +59,8 @@ int ref_tracker_free(struct ref_tracker_dir *dir,
 
 #else /* CONFIG_REF_TRACKER */
 
 static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
-					unsigned int quarantine_count)
+					unsigned int quarantine_count,
+					...)
 {
 }
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index d4eb0929af8f96..2ffe79c90c1771 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -1,11 +1,16 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
+
+#define pr_fmt(fmt) "ref_tracker: " fmt
+
 #include
+#include
 #include
 #include
 #include
 #include
 
 #define REF_TRACKER_STACK_ENTRIES 16
+#define STACK_BUF_SIZE 1024
 
 struct ref_tracker {
 	struct list_head	head;   /* anchor into dir->list or dir->quarantine */
@@ -14,24 +19,87 @@ struct ref_tracker {
 	depot_stack_handle_t	free_stack_handle;
 };
 
-void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
-				  unsigned int display_limit)
+struct ref_tracker_dir_stats {
+	int total;
+	int count;
+	struct {
+		depot_stack_handle_t stack_handle;
+		unsigned int count;
+	} stacks[];
+};
+
+static struct ref_tracker_dir_stats *
+ref_tracker_get_stats(struct ref_tracker_dir *dir, unsigned int limit)
 {
+	struct ref_tracker_dir_stats *stats;
 	struct ref_tracker *tracker;
-	unsigned int i = 0;
 
-	lockdep_assert_held(&dir->lock);
+	stats = kmalloc(struct_size(stats, stacks, limit),
+			GFP_NOWAIT | __GFP_NOWARN);
+	if (!stats)
+		return ERR_PTR(-ENOMEM);
+	stats->total = 0;
+	stats->count = 0;
 
 	list_for_each_entry(tracker, &dir->list, head) {
-		if (i < display_limit) {
-			pr_err("leaked reference.\n");
-			if (tracker->alloc_stack_handle)
-				stack_depot_print(tracker->alloc_stack_handle);
-			i++;
-		} else {
-			break;
+		depot_stack_handle_t stack = tracker->alloc_stack_handle;
+		int i;
+
+		++stats->total;
+		for (i = 0; i < stats->count; ++i)
+			if (stats->stacks[i].stack_handle == stack)
+				break;
+		if (i >= limit)
+			continue;
+		if (i >= stats->count) {
+			stats->stacks[i].stack_handle = stack;
+			stats->stacks[i].count = 0;
+			++stats->count;
 		}
+		++stats->stacks[i].count;
+	}
+
+	return stats;
+}
+
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit)
+{
+	struct ref_tracker_dir_stats *stats;
+	unsigned int i = 0, skipped;
+	depot_stack_handle_t stack;
+	char *sbuf;
+
+	lockdep_assert_held(&dir->lock);
+
+	if (list_empty(&dir->list))
+		return;
+
+	stats = ref_tracker_get_stats(dir, display_limit);
+	if (IS_ERR(stats)) {
+		pr_err("%s@%pK: couldn't get stats, error %pe\n",
+		       dir->name, dir, stats);
+		return;
 	}
+
+	sbuf = kmalloc(STACK_BUF_SIZE, GFP_NOWAIT | __GFP_NOWARN);
+
+	for (i = 0, skipped = stats->total; i < stats->count; ++i) {
+		stack = stats->stacks[i].stack_handle;
+		if (sbuf && !stack_depot_snprint(stack, sbuf, STACK_BUF_SIZE, 4))
+			sbuf[0] = 0;
+		pr_err("%s@%pK has %d/%d users at\n%s\n", dir->name, dir,
+		       stats->stacks[i].count, stats->total, sbuf);
+		skipped -= stats->stacks[i].count;
+	}
+
+	if (skipped)
+		pr_err("%s@%pK skipped reports about %d/%d users.\n",
+		       dir->name, dir, skipped, stats->total);
+
+	kfree(sbuf);
+
+	kfree(stats);
 }
 EXPORT_SYMBOL(ref_tracker_dir_print_locked);

From patchwork Tue Mar 28 15:15:26 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 76130
a="773181761" X-IronPort-AV: E=Sophos;i="5.98,297,1673942400"; d="scan'208";a="773181761" Received: from lab-ah.igk.intel.com ([10.102.138.202]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Mar 2023 08:15:53 -0700 From: Andrzej Hajda Date: Tue, 28 Mar 2023 17:15:26 +0200 Subject: [PATCH v5 3/8] lib/ref_tracker: add printing to memory buffer MIME-Version: 1.0 Message-Id: <20230224-track_gt-v5-3-77be86f2c872@intel.com> References: <20230224-track_gt-v5-0-77be86f2c872@intel.com> In-Reply-To: <20230224-track_gt-v5-0-77be86f2c872@intel.com> To: Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , David Airlie , Daniel Vetter Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Chris Wilson , netdev@vger.kernel.org, Eric Dumazet , Jakub Kicinski , Dmitry Vyukov , "David S. Miller" , Andrzej Hajda , Andi Shyti X-Mailer: b4 0.11.1 X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761625671125322503?= X-GMAIL-MSGID: =?utf-8?q?1761625671125322503?= Similar to stack_(depot|trace)_snprint the patch adds helper to printing stats to memory buffer. It will be helpful in case of debugfs. Signed-off-by: Andrzej Hajda --- include/linux/ref_tracker.h | 8 +++++++ lib/ref_tracker.c | 56 ++++++++++++++++++++++++++++++++++++++------- 2 files changed, 56 insertions(+), 8 deletions(-) diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h index fc9ef9952f01fd..4cd260efc4023f 100644 --- a/include/linux/ref_tracker.h +++ b/include/linux/ref_tracker.h @@ -50,6 +50,8 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, void ref_tracker_dir_print(struct ref_tracker_dir *dir, unsigned int display_limit); +int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, char *buf, size_t size); + int ref_tracker_alloc(struct ref_tracker_dir *dir, struct ref_tracker **trackerp, gfp_t gfp); @@ -78,6 +80,12 @@ static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir, { } +static inline int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, + char *buf, size_t size) +{ + return 0; +} + static inline int ref_tracker_alloc(struct ref_tracker_dir *dir, struct ref_tracker **trackerp, gfp_t gfp) diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c index 2ffe79c90c1771..cce4614b07940f 100644 --- a/lib/ref_tracker.c +++ b/lib/ref_tracker.c @@ -62,8 +62,27 @@ ref_tracker_get_stats(struct ref_tracker_dir *dir, unsigned int limit) return stats; } -void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, - unsigned int display_limit) +struct ostream { + char *buf; + int size, used; +}; + +#define pr_ostream(stream, fmt, args...) 
 include/linux/ref_tracker.h |  8 +++++++
 lib/ref_tracker.c           | 56 ++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 56 insertions(+), 8 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index fc9ef9952f01fd..4cd260efc4023f 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -50,6 +50,8 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
 void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 			   unsigned int display_limit);
 
+int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, char *buf, size_t size);
+
 int ref_tracker_alloc(struct ref_tracker_dir *dir,
 		      struct ref_tracker **trackerp,
 		      gfp_t gfp);
@@ -78,6 +80,12 @@ static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 {
 }
 
+static inline int ref_tracker_dir_snprint(struct ref_tracker_dir *dir,
+					  char *buf, size_t size)
+{
+	return 0;
+}
+
 static inline int ref_tracker_alloc(struct ref_tracker_dir *dir,
 				    struct ref_tracker **trackerp,
 				    gfp_t gfp)
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index 2ffe79c90c1771..cce4614b07940f 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -62,8 +62,27 @@ ref_tracker_get_stats(struct ref_tracker_dir *dir, unsigned int limit)
 	return stats;
 }
 
-void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
-				  unsigned int display_limit)
+struct ostream {
+	char *buf;
+	int size, used;
+};
+
+#define pr_ostream(stream, fmt, args...) \
+({ \
+	struct ostream *_s = (stream); \
+\
+	if (!_s->buf) { \
+		pr_err(fmt, ##args); \
+	} else { \
+		int ret, len = _s->size - _s->used; \
+		ret = snprintf(_s->buf + _s->used, len, pr_fmt(fmt), ##args); \
+		_s->used += min(ret, len); \
+	} \
+})
+
+static void
+__ref_tracker_dir_pr_ostream(struct ref_tracker_dir *dir,
+			     unsigned int display_limit, struct ostream *s)
 {
 	struct ref_tracker_dir_stats *stats;
 	unsigned int i = 0, skipped;
@@ -77,8 +96,8 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
 
 	stats = ref_tracker_get_stats(dir, display_limit);
 	if (IS_ERR(stats)) {
-		pr_err("%s@%pK: couldn't get stats, error %pe\n",
-		       dir->name, dir, stats);
+		pr_ostream(s, "%s@%pK: couldn't get stats, error %pe\n",
+			   dir->name, dir, stats);
 		return;
 	}
 
@@ -88,19 +107,27 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
 		stack = stats->stacks[i].stack_handle;
 		if (sbuf && !stack_depot_snprint(stack, sbuf, STACK_BUF_SIZE, 4))
 			sbuf[0] = 0;
-		pr_err("%s@%pK has %d/%d users at\n%s\n", dir->name, dir,
-		       stats->stacks[i].count, stats->total, sbuf);
+		pr_ostream(s, "%s@%pK has %d/%d users at\n%s\n", dir->name, dir,
+			   stats->stacks[i].count, stats->total, sbuf);
 		skipped -= stats->stacks[i].count;
 	}
 
 	if (skipped)
-		pr_err("%s@%pK skipped reports about %d/%d users.\n",
-		       dir->name, dir, skipped, stats->total);
+		pr_ostream(s, "%s@%pK skipped reports about %d/%d users.\n",
+			   dir->name, dir, skipped, stats->total);
 
 	kfree(sbuf);
 
 	kfree(stats);
 }
+
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit)
+{
+	struct ostream os = {};
+
+	__ref_tracker_dir_pr_ostream(dir, display_limit, &os);
+}
 EXPORT_SYMBOL(ref_tracker_dir_print_locked);
 
 void ref_tracker_dir_print(struct ref_tracker_dir *dir,
@@ -114,6 +141,19 @@ void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 }
 EXPORT_SYMBOL(ref_tracker_dir_print);
 
+int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, char *buf, size_t size)
+{
+	struct ostream os = { .buf = buf, .size = size };
+	unsigned long flags;
+
+	spin_lock_irqsave(&dir->lock, flags);
+	__ref_tracker_dir_pr_ostream(dir, 16, &os);
+	spin_unlock_irqrestore(&dir->lock, flags);
+
+	return os.used;
+}
+EXPORT_SYMBOL(ref_tracker_dir_snprint);
+
 void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 {
 	struct ref_tracker *tracker, *n;

From patchwork Tue Mar 28 15:15:27 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 76141
From: Andrzej Hajda
Date: Tue, 28 Mar 2023 17:15:27 +0200
Subject: [PATCH v5 4/8] lib/ref_tracker: remove warnings in case of allocation failure
Message-Id: <20230224-track_gt-v5-4-77be86f2c872@intel.com>
References: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
In-Reply-To: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Eric Dumazet, Jakub Kicinski, Dmitry Vyukov, "David S. Miller",
    Andrzej Hajda, Andi Shyti

The library can handle allocation failures, so __GFP_NOWARN has been added
everywhere to avoid allocation warnings. Moreover, GFP_ATOMIC has been
replaced with GFP_NOWAIT in case of the stack allocation on the tracker free
call.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
---
 lib/ref_tracker.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index cce4614b07940f..cf5609b1ca7936 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -189,7 +189,7 @@ int ref_tracker_alloc(struct ref_tracker_dir *dir,
 	unsigned long entries[REF_TRACKER_STACK_ENTRIES];
 	struct ref_tracker *tracker;
 	unsigned int nr_entries;
-	gfp_t gfp_mask = gfp;
+	gfp_t gfp_mask = gfp | __GFP_NOWARN;
 	unsigned long flags;
 
 	WARN_ON_ONCE(dir->dead);
@@ -237,7 +237,8 @@ int ref_tracker_free(struct ref_tracker_dir *dir,
 		return -EEXIST;
 	}
 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
-	stack_handle = stack_depot_save(entries, nr_entries, GFP_ATOMIC);
+	stack_handle = stack_depot_save(entries, nr_entries,
+					GFP_NOWAIT | __GFP_NOWARN);
 
 	spin_lock_irqsave(&dir->lock, flags);
 	if (tracker->dead) {
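For context, a sketch of what a caller in atomic context looks like after this
change (my_dev_get() and struct my_dev are hypothetical, as in the earlier
sketches; ref_tracker_alloc() is the existing library entry point):

  #include <linux/ref_tracker.h>

  /* Hypothetical helper taking a tracked reference from atomic context. */
  static int my_dev_get(struct my_dev *d, struct ref_tracker **trackerp)
  {
  	/*
  	 * The gfp mask is still the caller's choice (GFP_ATOMIC here);
  	 * after this patch the library ORs in __GFP_NOWARN itself, so a
  	 * failed tracker allocation no longer warns in dmesg.
  	 */
  	return ref_tracker_alloc(&d->refcnt_tracker, trackerp, GFP_ATOMIC);
  }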

From patchwork Tue Mar 28 15:15:28 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 76144
From: Andrzej Hajda
Date: Tue, 28 Mar 2023 17:15:28 +0200
Subject: [PATCH v5 5/8] drm/i915: Correct type of wakeref variable
Message-Id: <20230224-track_gt-v5-5-77be86f2c872@intel.com>
References: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
In-Reply-To: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Eric Dumazet, Jakub Kicinski, Dmitry Vyukov, "David S. Miller",
    Andrzej Hajda, Andi Shyti

Wakeref has a dedicated type. The assumption that it will stay int compatible
forever is incorrect.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 88e881b100cf0a..74d28a3af2d57b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -3248,7 +3248,7 @@ static void destroyed_worker_func(struct work_struct *w)
 	struct intel_guc *guc = container_of(w, struct intel_guc,
 					     submission_state.destroyed_worker);
 	struct intel_gt *gt = guc_to_gt(guc);
-	int tmp;
+	intel_wakeref_t tmp;
 
 	with_intel_gt_pm(gt, tmp)
 		deregister_destroyed_contexts(guc);

From patchwork Tue Mar 28 15:15:29 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 76139
From: Andrzej Hajda
Date: Tue, 28 Mar 2023 17:15:29 +0200
Subject: [PATCH v5 6/8] drm/i915: Replace custom intel runtime_pm tracker with ref_tracker library
Message-Id: <20230224-track_gt-v5-6-77be86f2c872@intel.com>
References: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
In-Reply-To: <20230224-track_gt-v5-0-77be86f2c872@intel.com>
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
    Daniel Vetter
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Eric Dumazet, Jakub Kicinski, Dmitry Vyukov, "David S. Miller",
    Andrzej Hajda, Andi Shyti
Miller" , Andrzej Hajda , Andi Shyti X-Mailer: b4 0.11.1 X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761626434520734522?= X-GMAIL-MSGID: =?utf-8?q?1761626434520734522?= Beside reusing existing code, the main advantage of ref_tracker is tracking per instance of wakeref. It allows also to catch double put. On the other side we lose information about the first acquire and the last release, but the advantages outweigh it. Signed-off-by: Andrzej Hajda --- drivers/gpu/drm/i915/Kconfig.debug | 4 + drivers/gpu/drm/i915/display/intel_display_power.c | 2 +- drivers/gpu/drm/i915/i915_driver.c | 2 +- drivers/gpu/drm/i915/intel_runtime_pm.c | 221 ++------------------- drivers/gpu/drm/i915/intel_runtime_pm.h | 11 +- drivers/gpu/drm/i915/intel_wakeref.h | 61 +++++- 6 files changed, 84 insertions(+), 217 deletions(-) diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug index 93dfb7ed970547..e2da6fb04438b9 100644 --- a/drivers/gpu/drm/i915/Kconfig.debug +++ b/drivers/gpu/drm/i915/Kconfig.debug @@ -24,7 +24,9 @@ config DRM_I915_DEBUG select DEBUG_FS select PREEMPT_COUNT select I2C_CHARDEV + select REF_TRACKER select STACKDEPOT + select STACKTRACE select DRM_DP_AUX_CHARDEV select X86_MSR # used by igt/pm_rpm select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks) @@ -231,7 +233,9 @@ config DRM_I915_DEBUG_RUNTIME_PM bool "Enable extra state checking for runtime PM" depends on DRM_I915 default n + select REF_TRACKER select STACKDEPOT + select STACKTRACE help Choose this option to turn on extra state checking for the runtime PM functionality. 
This may introduce overhead during diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c index f86060195987cb..e388d41e403f4f 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power.c +++ b/drivers/gpu/drm/i915/display/intel_display_power.c @@ -404,7 +404,7 @@ print_async_put_domains_state(struct i915_power_domains *power_domains) struct drm_i915_private, display.power.domains); - drm_dbg(&i915->drm, "async_put_wakeref %u\n", + drm_dbg(&i915->drm, "async_put_wakeref %lu\n", power_domains->async_put_wakeref); print_power_domains(power_domains, "async_put_domains[0]", diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index da249337c23bdd..1e9d51ef4062be 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -1024,7 +1024,7 @@ void i915_driver_shutdown(struct drm_i915_private *i915) intel_power_domains_driver_remove(i915); enable_rpm_wakeref_asserts(&i915->runtime_pm); - intel_runtime_pm_driver_release(&i915->runtime_pm); + intel_runtime_pm_driver_last_release(&i915->runtime_pm); } static bool suspend_to_idle(struct drm_i915_private *dev_priv) diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c index cf5122299b6b8c..c5785ddaa717e4 100644 --- a/drivers/gpu/drm/i915/intel_runtime_pm.c +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c @@ -52,182 +52,37 @@ #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM) -#include - -#define STACKDEPTH 8 - -static noinline depot_stack_handle_t __save_depot_stack(void) -{ - unsigned long entries[STACKDEPTH]; - unsigned int n; - - n = stack_trace_save(entries, ARRAY_SIZE(entries), 1); - return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN); -} - static void init_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm) { - spin_lock_init(&rpm->debug.lock); - stack_depot_init(); + ref_tracker_dir_init(&rpm->debug, INTEL_REFTRACK_DEAD_COUNT, dev_name(rpm->kdev)); } -static noinline depot_stack_handle_t +static intel_wakeref_t track_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm) { - depot_stack_handle_t stack, *stacks; - unsigned long flags; - - if (rpm->no_wakeref_tracking) - return -1; - - stack = __save_depot_stack(); - if (!stack) + if (!rpm->available || rpm->no_wakeref_tracking) return -1; - spin_lock_irqsave(&rpm->debug.lock, flags); - - if (!rpm->debug.count) - rpm->debug.last_acquire = stack; - - stacks = krealloc(rpm->debug.owners, - (rpm->debug.count + 1) * sizeof(*stacks), - GFP_NOWAIT | __GFP_NOWARN); - if (stacks) { - stacks[rpm->debug.count++] = stack; - rpm->debug.owners = stacks; - } else { - stack = -1; - } - - spin_unlock_irqrestore(&rpm->debug.lock, flags); - - return stack; + return intel_ref_tracker_alloc(&rpm->debug); } static void untrack_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm, - depot_stack_handle_t stack) + intel_wakeref_t wakeref) { - struct drm_i915_private *i915 = container_of(rpm, - struct drm_i915_private, - runtime_pm); - unsigned long flags, n; - bool found = false; - - if (unlikely(stack == -1)) + if (!rpm->available || rpm->no_wakeref_tracking) return; - spin_lock_irqsave(&rpm->debug.lock, flags); - for (n = rpm->debug.count; n--; ) { - if (rpm->debug.owners[n] == stack) { - memmove(rpm->debug.owners + n, - rpm->debug.owners + n + 1, - (--rpm->debug.count - n) * sizeof(stack)); - found = true; - break; - } - } - spin_unlock_irqrestore(&rpm->debug.lock, flags); - - if (drm_WARN(&i915->drm, !found, - "Unmatched wakeref (tracking 
%lu), count %u\n", - rpm->debug.count, atomic_read(&rpm->wakeref_count))) { - char *buf; - - buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN); - if (!buf) - return; - - stack_depot_snprint(stack, buf, PAGE_SIZE, 2); - DRM_DEBUG_DRIVER("wakeref %x from\n%s", stack, buf); - - stack = READ_ONCE(rpm->debug.last_release); - if (stack) { - stack_depot_snprint(stack, buf, PAGE_SIZE, 2); - DRM_DEBUG_DRIVER("wakeref last released at\n%s", buf); - } - - kfree(buf); - } + intel_ref_tracker_free(&rpm->debug, wakeref); } -static int cmphandle(const void *_a, const void *_b) +static void untrack_all_intel_runtime_pm_wakerefs(struct intel_runtime_pm *rpm) { - const depot_stack_handle_t * const a = _a, * const b = _b; - - if (*a < *b) - return -1; - else if (*a > *b) - return 1; - else - return 0; -} - -static void -__print_intel_runtime_pm_wakeref(struct drm_printer *p, - const struct intel_runtime_pm_debug *dbg) -{ - unsigned long i; - char *buf; - - buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN); - if (!buf) - return; - - if (dbg->last_acquire) { - stack_depot_snprint(dbg->last_acquire, buf, PAGE_SIZE, 2); - drm_printf(p, "Wakeref last acquired:\n%s", buf); - } - - if (dbg->last_release) { - stack_depot_snprint(dbg->last_release, buf, PAGE_SIZE, 2); - drm_printf(p, "Wakeref last released:\n%s", buf); - } - - drm_printf(p, "Wakeref count: %lu\n", dbg->count); - - sort(dbg->owners, dbg->count, sizeof(*dbg->owners), cmphandle, NULL); - - for (i = 0; i < dbg->count; i++) { - depot_stack_handle_t stack = dbg->owners[i]; - unsigned long rep; - - rep = 1; - while (i + 1 < dbg->count && dbg->owners[i + 1] == stack) - rep++, i++; - stack_depot_snprint(stack, buf, PAGE_SIZE, 2); - drm_printf(p, "Wakeref x%lu taken at:\n%s", rep, buf); - } - - kfree(buf); -} - -static noinline void -__untrack_all_wakerefs(struct intel_runtime_pm_debug *debug, - struct intel_runtime_pm_debug *saved) -{ - *saved = *debug; - - debug->owners = NULL; - debug->count = 0; - debug->last_release = __save_depot_stack(); -} - -static void -dump_and_free_wakeref_tracking(struct intel_runtime_pm_debug *debug) -{ - if (debug->count) { - struct drm_printer p = drm_debug_printer("i915"); - - __print_intel_runtime_pm_wakeref(&p, debug); - } - - kfree(debug->owners); + ref_tracker_dir_exit(&rpm->debug); } static noinline void __intel_wakeref_dec_and_check_tracking(struct intel_runtime_pm *rpm) { - struct intel_runtime_pm_debug dbg = {}; unsigned long flags; if (!atomic_dec_and_lock_irqsave(&rpm->wakeref_count, @@ -235,60 +90,14 @@ __intel_wakeref_dec_and_check_tracking(struct intel_runtime_pm *rpm) flags)) return; - __untrack_all_wakerefs(&rpm->debug, &dbg); + ref_tracker_dir_print_locked(&rpm->debug, INTEL_REFTRACK_PRINT_LIMIT); spin_unlock_irqrestore(&rpm->debug.lock, flags); - - dump_and_free_wakeref_tracking(&dbg); -} - -static noinline void -untrack_all_intel_runtime_pm_wakerefs(struct intel_runtime_pm *rpm) -{ - struct intel_runtime_pm_debug dbg = {}; - unsigned long flags; - - spin_lock_irqsave(&rpm->debug.lock, flags); - __untrack_all_wakerefs(&rpm->debug, &dbg); - spin_unlock_irqrestore(&rpm->debug.lock, flags); - - dump_and_free_wakeref_tracking(&dbg); } void print_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm, struct drm_printer *p) { - struct intel_runtime_pm_debug dbg = {}; - - do { - unsigned long alloc = dbg.count; - depot_stack_handle_t *s; - - spin_lock_irq(&rpm->debug.lock); - dbg.count = rpm->debug.count; - if (dbg.count <= alloc) { - memcpy(dbg.owners, - rpm->debug.owners, - dbg.count * sizeof(*s)); - } - 
dbg.last_acquire = rpm->debug.last_acquire; - dbg.last_release = rpm->debug.last_release; - spin_unlock_irq(&rpm->debug.lock); - if (dbg.count <= alloc) - break; - - s = krealloc(dbg.owners, - dbg.count * sizeof(*s), - GFP_NOWAIT | __GFP_NOWARN); - if (!s) - goto out; - - dbg.owners = s; - } while (1); - - __print_intel_runtime_pm_wakeref(p, &dbg); - -out: - kfree(dbg.owners); + intel_wakeref_tracker_show(&rpm->debug, p); } #else @@ -297,14 +106,14 @@ static void init_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm) { } -static depot_stack_handle_t +static intel_wakeref_t track_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm) { return -1; } static void untrack_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm, - intel_wakeref_t wref) + intel_wakeref_t wakeref) { } @@ -639,7 +448,11 @@ void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm) "i915 raw-wakerefs=%d wakelocks=%d on cleanup\n", intel_rpm_raw_wakeref_count(count), intel_rpm_wakelock_count(count)); +} +void intel_runtime_pm_driver_last_release(struct intel_runtime_pm *rpm) +{ + intel_runtime_pm_driver_release(rpm); untrack_all_intel_runtime_pm_wakerefs(rpm); } diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.h b/drivers/gpu/drm/i915/intel_runtime_pm.h index e592e8d6499a1f..2f81d685bdb4d1 100644 --- a/drivers/gpu/drm/i915/intel_runtime_pm.h +++ b/drivers/gpu/drm/i915/intel_runtime_pm.h @@ -83,15 +83,7 @@ struct intel_runtime_pm { * paired rpm_put) we can remove corresponding pairs of and keep * the array trimmed to active wakerefs. */ - struct intel_runtime_pm_debug { - spinlock_t lock; - - depot_stack_handle_t last_acquire; - depot_stack_handle_t last_release; - - depot_stack_handle_t *owners; - unsigned long count; - } debug; + struct ref_tracker_dir debug; #endif }; @@ -195,6 +187,7 @@ void intel_runtime_pm_init_early(struct intel_runtime_pm *rpm); void intel_runtime_pm_enable(struct intel_runtime_pm *rpm); void intel_runtime_pm_disable(struct intel_runtime_pm *rpm); void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm); +void intel_runtime_pm_driver_last_release(struct intel_runtime_pm *rpm); intel_wakeref_t intel_runtime_pm_get(struct intel_runtime_pm *rpm); intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm); diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h index 71b8a63f6f104d..6556c68794800a 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.h +++ b/drivers/gpu/drm/i915/intel_wakeref.h @@ -7,16 +7,25 @@ #ifndef INTEL_WAKEREF_H #define INTEL_WAKEREF_H +#include + #include #include #include #include #include #include +#include +#include #include #include #include +typedef unsigned long intel_wakeref_t; + +#define INTEL_REFTRACK_DEAD_COUNT 16 +#define INTEL_REFTRACK_PRINT_LIMIT 16 + #if IS_ENABLED(CONFIG_DRM_I915_DEBUG) #define INTEL_WAKEREF_BUG_ON(expr) BUG_ON(expr) #else @@ -26,8 +35,6 @@ struct intel_runtime_pm; struct intel_wakeref; -typedef depot_stack_handle_t intel_wakeref_t; - struct intel_wakeref_ops { int (*get)(struct intel_wakeref *wf); int (*put)(struct intel_wakeref *wf); @@ -261,6 +268,56 @@ __intel_wakeref_defer_park(struct intel_wakeref *wf) */ int intel_wakeref_wait_for_idle(struct intel_wakeref *wf); +#define INTEL_WAKEREF_DEF ((intel_wakeref_t)(-1)) + +static inline intel_wakeref_t intel_ref_tracker_alloc(struct ref_tracker_dir *dir) +{ + struct ref_tracker *user = NULL; + + ref_tracker_alloc(dir, &user, GFP_NOWAIT); + + return (intel_wakeref_t)user ?: INTEL_WAKEREF_DEF; +} + +static inline void 
intel_ref_tracker_free(struct ref_tracker_dir *dir, + intel_wakeref_t handle) +{ + struct ref_tracker *user; + + user = (handle == INTEL_WAKEREF_DEF) ? NULL : (void *)handle; + + ref_tracker_free(dir, &user); +} + +static inline void +intel_wakeref_tracker_show(struct ref_tracker_dir *dir, + struct drm_printer *p) +{ + const size_t buf_size = PAGE_SIZE; + char *buf, *sb, *se; + size_t count; + + buf = kmalloc(buf_size, GFP_NOWAIT); + if (!buf) + return; + + count = ref_tracker_dir_snprint(dir, buf, buf_size); + if (!count) + goto free; + /* printk does not like big buffers, so we split it */ + for (sb = buf; *sb; sb = se + 1) { + se = strchrnul(sb, '\n'); + drm_printf(p, "%.*s", (int)(se - sb + 1), sb); + if (!*se) + break; + } + if (count >= buf_size) + drm_printf(p, "\n...dropped %zd extra bytes of leak report.\n", + count + 1 - buf_size); +free: + kfree(buf); +} + struct intel_wakeref_auto { struct intel_runtime_pm *rpm; struct timer_list timer; From patchwork Tue Mar 28 15:15:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Hajda X-Patchwork-Id: 76136 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp2311009vqo; Tue, 28 Mar 2023 08:36:02 -0700 (PDT) X-Google-Smtp-Source: AK7set+ViV9+PtF8pKBFQaPqF8WzNhVAlE9mbPVJE1/lZyoTp0oR/XBrRnxrXqs1iLIZRq/aV1tx X-Received: by 2002:a05:6a20:1e4d:b0:db:4fae:ad15 with SMTP id cy13-20020a056a201e4d00b000db4faead15mr15071952pzb.42.1680017762340; Tue, 28 Mar 2023 08:36:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680017762; cv=none; d=google.com; s=arc-20160816; b=xBxjGnUcaaM0gSkGtU35BBlRdD5/DO0msHirABwr9J/LD21K9eBuinlblDzVaQQ52d bKAvd1SrmD+9qeQQrQiIZyH1Cb96kuHilnNWiLEiq2DWvJkRxKynWkxqU3p6q6UquUF2 DD+o8uQ5yTX4FYxjjUmnjYobaoY3D1dGf1evdbEPL9cnPRmlUVJLZS1ELgo2cKT3injJ eaD7Vs/0NM86DX9WhmlNcAvT/U4FB5j4SYOUg2T2sDNcBKhzYCir4DUPcJUzYSenow8Z SR/4JcZmbQDFk9nuTpTIYg0CVOzL7kNRI5ossRGDnQfQR/MImqRv9BlMsNeLgs4qcpnR rthg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:in-reply-to:references:message-id :content-transfer-encoding:mime-version:subject:date:from :dkim-signature; bh=TsrxpuJ2GTD2tDIlIODG9I8Eu062Ij6ZNtPkM67PYx8=; b=k04XYfpwpYzhDwrZKoIsOcua0WU7KxKd1r2tN0PHK/FuAVMC+aQj3PyC2h9S5Y1HRO 5Mp0wiYR2ZyvEK5+km23NfWx11C8ITyIcO/1kpZhQEeOqVl6TEgyqyPY7ao7jj/aBBGK uk4cHdsJSASHqg6WOy01LET/zWmmSApd5pa8U9RyrI9xhPA0XzcbV4WgUG5Bzt8kEYyX mfcGSBkKGh9r/YQJfsFE6qRYUXXRTcIPwrIve6P92r2Y8/LxxrIQXP6tqfHTOqa//GaX EzdQ8Jpo3Bm0MnDoTm9Yqu8iQull9z4EmeukS53JtfKIIR25iYtOX6tyLjl6W2v1iw/G kgjQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="H/L/ssDv"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
From: Andrzej Hajda Date: Tue, 28 Mar 2023 17:15:30 +0200 Subject: [PATCH v5 7/8] drm/i915: track gt pm wakerefs MIME-Version: 1.0 Message-Id: <20230224-track_gt-v5-7-77be86f2c872@intel.com> References: <20230224-track_gt-v5-0-77be86f2c872@intel.com> In-Reply-To: <20230224-track_gt-v5-0-77be86f2c872@intel.com> To: Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , David Airlie , Daniel Vetter Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Chris Wilson , netdev@vger.kernel.org, Eric Dumazet , Jakub Kicinski , Dmitry Vyukov , "David S. 
Miller" , Andrzej Hajda , Andi Shyti X-Mailer: b4 0.11.1 X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,T_PDS_OTHER_BAD_TLD autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761626305558367007?= X-GMAIL-MSGID: =?utf-8?q?1761626305558367007?= Track every intel_gt_pm_get() until its corresponding release in intel_gt_pm_put() by returning a cookie to the caller for acquire that must be passed by on released. When there is an imbalance, we can see who either tried to free a stale wakeref, or who forgot to free theirs. Signed-off-by: Andrzej Hajda --- drivers/gpu/drm/i915/Kconfig.debug | 15 +++++++++ drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 7 ++-- .../drm/i915/gem/selftests/i915_gem_coherency.c | 10 +++--- drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 14 ++++---- drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 13 +++++--- drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h | 3 +- drivers/gpu/drm/i915/gt/intel_engine_pm.c | 6 ++-- drivers/gpu/drm/i915/gt/intel_engine_types.h | 2 ++ .../gpu/drm/i915/gt/intel_execlists_submission.c | 2 +- drivers/gpu/drm/i915/gt/intel_gt_pm.c | 12 ++++--- drivers/gpu/drm/i915/gt/intel_gt_pm.h | 38 +++++++++++++++++----- drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c | 4 +-- drivers/gpu/drm/i915/gt/selftest_engine_cs.c | 20 +++++++----- drivers/gpu/drm/i915/gt/selftest_gt_pm.c | 5 +-- drivers/gpu/drm/i915/gt/selftest_reset.c | 10 +++--- drivers/gpu/drm/i915/gt/selftest_rps.c | 17 ++++++---- drivers/gpu/drm/i915/gt/selftest_slpc.c | 5 +-- drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 9 ++--- drivers/gpu/drm/i915/i915_pmu.c | 16 +++++---- drivers/gpu/drm/i915/intel_wakeref.c | 7 +++- drivers/gpu/drm/i915/intel_wakeref.h | 38 ++++++++++++++++++++-- 21 files changed, 177 insertions(+), 76 deletions(-) diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug index e2da6fb04438b9..d26fb4569873ea 100644 --- a/drivers/gpu/drm/i915/Kconfig.debug +++ b/drivers/gpu/drm/i915/Kconfig.debug @@ -40,6 +40,7 @@ config DRM_I915_DEBUG select DRM_I915_DEBUG_GEM_ONCE select DRM_I915_DEBUG_MMIO select DRM_I915_DEBUG_RUNTIME_PM + select DRM_I915_DEBUG_WAKEREF select DRM_I915_SW_FENCE_DEBUG_OBJECTS select DRM_I915_SELFTEST select BROKEN # for prototype uAPI @@ -244,3 +245,17 @@ config DRM_I915_DEBUG_RUNTIME_PM Recommended for driver developers only. If in doubt, say "N" + +config DRM_I915_DEBUG_WAKEREF + bool "Enable extra tracking for wakerefs" + depends on DRM_I915 + default n + select REF_TRACKER + select STACKDEPOT + select STACKTRACE + help + Choose this option to turn on extra state checking and usage + tracking for the wakerefPM functionality. This may introduce + overhead during driver runtime. 
+ + If in doubt, say "N" diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 3aeede6aee4dcc..33a034a9c42f11 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -253,6 +253,7 @@ struct i915_execbuffer { struct intel_gt *gt; /* gt for the execbuf */ struct intel_context *context; /* logical state for the request */ struct i915_gem_context *gem_context; /** caller's context */ + intel_wakeref_t wakeref; /** our requests to build */ struct i915_request *requests[MAX_ENGINE_INSTANCE + 1]; @@ -2709,7 +2710,7 @@ eb_select_engine(struct i915_execbuffer *eb) for_each_child(ce, child) intel_context_get(child); - intel_gt_pm_get(ce->engine->gt); + eb->wakeref = intel_gt_pm_get(ce->engine->gt); if (!test_bit(CONTEXT_ALLOC_BIT, &ce->flags)) { err = intel_context_alloc_state(ce); @@ -2748,7 +2749,7 @@ eb_select_engine(struct i915_execbuffer *eb) return err; err: - intel_gt_pm_put(ce->engine->gt); + intel_gt_pm_put(ce->engine->gt, eb->wakeref); for_each_child(ce, child) intel_context_put(child); intel_context_put(ce); @@ -2761,7 +2762,7 @@ eb_put_engine(struct i915_execbuffer *eb) struct intel_context *child; i915_vm_put(eb->context->vm); - intel_gt_pm_put(eb->gt); + intel_gt_pm_put(eb->context->engine->gt, eb->wakeref); for_each_child(eb->context, child) intel_context_put(child); intel_context_put(eb->context); diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c index 3bef1beec7cbb5..3fd68a099a85ef 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c @@ -85,6 +85,7 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v) static int gtt_set(struct context *ctx, unsigned long offset, u32 v) { + intel_wakeref_t wakeref; struct i915_vma *vma; u32 __iomem *map; int err = 0; @@ -99,7 +100,7 @@ static int gtt_set(struct context *ctx, unsigned long offset, u32 v) if (IS_ERR(vma)) return PTR_ERR(vma); - intel_gt_pm_get(vma->vm->gt); + wakeref = intel_gt_pm_get(vma->vm->gt); map = i915_vma_pin_iomap(vma); i915_vma_unpin(vma); @@ -112,12 +113,13 @@ static int gtt_set(struct context *ctx, unsigned long offset, u32 v) i915_vma_unpin_iomap(vma); out_rpm: - intel_gt_pm_put(vma->vm->gt); + intel_gt_pm_put(vma->vm->gt, wakeref); return err; } static int gtt_get(struct context *ctx, unsigned long offset, u32 *v) { + intel_wakeref_t wakeref; struct i915_vma *vma; u32 __iomem *map; int err = 0; @@ -132,7 +134,7 @@ static int gtt_get(struct context *ctx, unsigned long offset, u32 *v) if (IS_ERR(vma)) return PTR_ERR(vma); - intel_gt_pm_get(vma->vm->gt); + wakeref = intel_gt_pm_get(vma->vm->gt); map = i915_vma_pin_iomap(vma); i915_vma_unpin(vma); @@ -145,7 +147,7 @@ static int gtt_get(struct context *ctx, unsigned long offset, u32 *v) i915_vma_unpin_iomap(vma); out_rpm: - intel_gt_pm_put(vma->vm->gt); + intel_gt_pm_put(vma->vm->gt, wakeref); return err; } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index 56279908ed305b..f6f36a85688814 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -630,14 +630,14 @@ static bool assert_mmap_offset(struct drm_i915_private *i915, static void disable_retire_worker(struct drm_i915_private *i915) { i915_gem_driver_unregister__shrinker(i915); - 
intel_gt_pm_get(to_gt(i915)); + intel_gt_pm_get_untracked(to_gt(i915)); cancel_delayed_work_sync(&to_gt(i915)->requests.retire_work); } static void restore_retire_worker(struct drm_i915_private *i915) { igt_flush_test(i915); - intel_gt_pm_put(to_gt(i915)); + intel_gt_pm_put_untracked(to_gt(i915)); i915_gem_driver_register__shrinker(i915); } @@ -778,6 +778,7 @@ static int igt_mmap_offset_exhaustion(void *arg) static int gtt_set(struct drm_i915_gem_object *obj) { + intel_wakeref_t wakeref; struct i915_vma *vma; void __iomem *map; int err = 0; @@ -786,7 +787,7 @@ static int gtt_set(struct drm_i915_gem_object *obj) if (IS_ERR(vma)) return PTR_ERR(vma); - intel_gt_pm_get(vma->vm->gt); + wakeref = intel_gt_pm_get(vma->vm->gt); map = i915_vma_pin_iomap(vma); i915_vma_unpin(vma); if (IS_ERR(map)) { @@ -798,12 +799,13 @@ static int gtt_set(struct drm_i915_gem_object *obj) i915_vma_unpin_iomap(vma); out: - intel_gt_pm_put(vma->vm->gt); + intel_gt_pm_put(vma->vm->gt, wakeref); return err; } static int gtt_check(struct drm_i915_gem_object *obj) { + intel_wakeref_t wakeref; struct i915_vma *vma; void __iomem *map; int err = 0; @@ -812,7 +814,7 @@ static int gtt_check(struct drm_i915_gem_object *obj) if (IS_ERR(vma)) return PTR_ERR(vma); - intel_gt_pm_get(vma->vm->gt); + wakeref = intel_gt_pm_get(vma->vm->gt); map = i915_vma_pin_iomap(vma); i915_vma_unpin(vma); if (IS_ERR(map)) { @@ -828,7 +830,7 @@ static int gtt_check(struct drm_i915_gem_object *obj) i915_vma_unpin_iomap(vma); out: - intel_gt_pm_put(vma->vm->gt); + intel_gt_pm_put(vma->vm->gt, wakeref); return err; } diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c index ecc990ec1b9526..d650beb8ed22f6 100644 --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c @@ -28,11 +28,14 @@ static void irq_disable(struct intel_breadcrumbs *b) static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b) { + intel_wakeref_t wakeref; + /* * Since we are waiting on a request, the GPU should be busy * and should have its own rpm reference. */ - if (GEM_WARN_ON(!intel_gt_pm_get_if_awake(b->irq_engine->gt))) + wakeref = intel_gt_pm_get_if_awake(b->irq_engine->gt); + if (GEM_WARN_ON(!wakeref)) return; /* @@ -41,7 +44,7 @@ static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b) * which we can add a new waiter and avoid the cost of re-enabling * the irq. */ - WRITE_ONCE(b->irq_armed, true); + WRITE_ONCE(b->irq_armed, wakeref); /* Requests may have completed before we could enable the interrupt. 
*/ if (!b->irq_enabled++ && b->irq_enable(b)) @@ -61,12 +64,14 @@ static void intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b) static void __intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b) { + intel_wakeref_t wakeref = b->irq_armed; + GEM_BUG_ON(!b->irq_enabled); if (!--b->irq_enabled) b->irq_disable(b); - WRITE_ONCE(b->irq_armed, false); - intel_gt_pm_put_async(b->irq_engine->gt); + WRITE_ONCE(b->irq_armed, 0); + intel_gt_pm_put_async(b->irq_engine->gt, wakeref); } static void intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b) diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h index 72dfd3748c4c33..bdf09fd67b6e70 100644 --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h @@ -13,6 +13,7 @@ #include #include "intel_engine_types.h" +#include "intel_wakeref.h" /* * Rather than have every client wait upon all user interrupts, @@ -43,7 +44,7 @@ struct intel_breadcrumbs { spinlock_t irq_lock; /* protects the interrupt from hardirq context */ struct irq_work irq_work; /* for use from inside irq_lock */ unsigned int irq_enabled; - bool irq_armed; + intel_wakeref_t irq_armed; /* Not all breadcrumbs are attached to physical HW */ intel_engine_mask_t engine_mask; diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c index e971b153fda976..7063dea2112943 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c @@ -63,7 +63,7 @@ static int __engine_unpark(struct intel_wakeref *wf) ENGINE_TRACE(engine, "\n"); - intel_gt_pm_get(engine->gt); + engine->wakeref_track = intel_gt_pm_get(engine->gt); /* Discard stale context state from across idling */ ce = engine->kernel_context; @@ -276,7 +276,7 @@ static int __engine_park(struct intel_wakeref *wf) engine->park(engine); /* While gt calls i915_vma_parked(), we have to break the lock cycle */ - intel_gt_pm_put_async(engine->gt); + intel_gt_pm_put_async(engine->gt, engine->wakeref_track); return 0; } @@ -289,7 +289,7 @@ void intel_engine_init__pm(struct intel_engine_cs *engine) { struct intel_runtime_pm *rpm = engine->uncore->rpm; - intel_wakeref_init(&engine->wakeref, rpm, &wf_ops); + intel_wakeref_init(&engine->wakeref, rpm, &wf_ops, engine->name); intel_engine_init_heartbeat(engine); intel_gsc_idle_msg_enable(engine); diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h index 0a071e5da1a85d..2791d487ab114c 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h @@ -431,7 +431,9 @@ struct intel_engine_cs { unsigned long serial; unsigned long wakeref_serial; + intel_wakeref_t wakeref_track; struct intel_wakeref wakeref; + struct file *default_state; struct { diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index 1bbe6708d0a7f4..43c70ea63f0ac4 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -630,7 +630,7 @@ static void __execlists_schedule_out(struct i915_request * const rq, execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT); if (engine->fw_domain && !--engine->fw_active) intel_uncore_forcewake_put(engine->uncore, engine->fw_domain); - intel_gt_pm_put_async(engine->gt); + intel_gt_pm_put_async_untracked(engine->gt); /* * If this is part of a virtual engine, its 
next request may diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c index e02cb90723ae03..76378dd2873dfc 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c @@ -27,19 +27,20 @@ static void user_forcewake(struct intel_gt *gt, bool suspend) { int count = atomic_read(>->user_wakeref); + intel_wakeref_t wakeref; /* Inside suspend/resume so single threaded, no races to worry about. */ if (likely(!count)) return; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); if (suspend) { GEM_BUG_ON(count > atomic_read(>->wakeref.count)); atomic_sub(count, >->wakeref.count); } else { atomic_add(count, >->wakeref.count); } - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); } static void runtime_begin(struct intel_gt *gt) @@ -137,7 +138,7 @@ void intel_gt_pm_init_early(struct intel_gt *gt) * runtime_pm is per-device rather than per-tile, so this is still the * correct structure. */ - intel_wakeref_init(>->wakeref, >->i915->runtime_pm, &wf_ops); + intel_wakeref_init(>->wakeref, >->i915->runtime_pm, &wf_ops, "GT"); seqcount_mutex_init(>->stats.lock, >->wakeref.mutex); } @@ -220,6 +221,7 @@ int intel_gt_resume(struct intel_gt *gt) { struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err; err = intel_gt_has_unrecoverable_error(gt); @@ -236,7 +238,7 @@ int intel_gt_resume(struct intel_gt *gt) */ gt_sanitize(gt, true); - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL); intel_rc6_sanitize(>->rc6); @@ -279,7 +281,7 @@ int intel_gt_resume(struct intel_gt *gt) out_fw: intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return err; err_wedged: diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h b/drivers/gpu/drm/i915/gt/intel_gt_pm.h index 6c9a4645236467..2ae5fbbede6ca8 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h @@ -16,19 +16,28 @@ static inline bool intel_gt_pm_is_awake(const struct intel_gt *gt) return intel_wakeref_is_active(>->wakeref); } -static inline void intel_gt_pm_get(struct intel_gt *gt) +static inline void intel_gt_pm_get_untracked(struct intel_gt *gt) { intel_wakeref_get(>->wakeref); } +static inline intel_wakeref_t intel_gt_pm_get(struct intel_gt *gt) +{ + intel_gt_pm_get_untracked(gt); + return intel_wakeref_track(>->wakeref); +} + static inline void __intel_gt_pm_get(struct intel_gt *gt) { __intel_wakeref_get(>->wakeref); } -static inline bool intel_gt_pm_get_if_awake(struct intel_gt *gt) +static inline intel_wakeref_t intel_gt_pm_get_if_awake(struct intel_gt *gt) { - return intel_wakeref_get_if_active(>->wakeref); + if (!intel_wakeref_get_if_active(>->wakeref)) + return 0; + + return intel_wakeref_track(>->wakeref); } static inline void intel_gt_pm_might_get(struct intel_gt *gt) @@ -36,12 +45,18 @@ static inline void intel_gt_pm_might_get(struct intel_gt *gt) intel_wakeref_might_get(>->wakeref); } -static inline void intel_gt_pm_put(struct intel_gt *gt) +static inline void intel_gt_pm_put_untracked(struct intel_gt *gt) { intel_wakeref_put(>->wakeref); } -static inline void intel_gt_pm_put_async(struct intel_gt *gt) +static inline void intel_gt_pm_put(struct intel_gt *gt, intel_wakeref_t handle) +{ + intel_wakeref_untrack(>->wakeref, handle); + intel_gt_pm_put_untracked(gt); +} + +static inline void intel_gt_pm_put_async_untracked(struct intel_gt *gt) { intel_wakeref_put_async(>->wakeref); } @@ -51,9 +66,14 @@ 
static inline void intel_gt_pm_might_put(struct intel_gt *gt) intel_wakeref_might_put(>->wakeref); } -#define with_intel_gt_pm(gt, tmp) \ - for (tmp = 1, intel_gt_pm_get(gt); tmp; \ - intel_gt_pm_put(gt), tmp = 0) +static inline void intel_gt_pm_put_async(struct intel_gt *gt, intel_wakeref_t handle) +{ + intel_wakeref_untrack(>->wakeref, handle); + intel_gt_pm_put_async_untracked(gt); +} + +#define with_intel_gt_pm(gt, wf) \ + for (wf = intel_gt_pm_get(gt); wf; intel_gt_pm_put(gt, wf), wf = 0) /** * with_intel_gt_pm_if_awake - if GT is PM awake, get a reference to prevent @@ -64,7 +84,7 @@ static inline void intel_gt_pm_might_put(struct intel_gt *gt) * @wf: pointer to a temporary wakeref. */ #define with_intel_gt_pm_if_awake(gt, wf) \ - for (wf = intel_gt_pm_get_if_awake(gt); wf; intel_gt_pm_put_async(gt), wf = 0) + for (wf = intel_gt_pm_get_if_awake(gt); wf; intel_gt_pm_put_async(gt, wf), wf = 0) static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt) { diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c b/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c index 80dbbef86b1dbf..0960fae5a0b205 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c @@ -27,7 +27,7 @@ void intel_gt_pm_debugfs_forcewake_user_open(struct intel_gt *gt) { atomic_inc(>->user_wakeref); - intel_gt_pm_get(gt); + intel_gt_pm_get_untracked(gt); if (GRAPHICS_VER(gt->i915) >= 6) intel_uncore_forcewake_user_get(gt->uncore); } @@ -36,7 +36,7 @@ void intel_gt_pm_debugfs_forcewake_user_release(struct intel_gt *gt) { if (GRAPHICS_VER(gt->i915) >= 6) intel_uncore_forcewake_user_put(gt->uncore); - intel_gt_pm_put(gt); + intel_gt_pm_put_untracked(gt); atomic_dec(>->user_wakeref); } diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c index 542ce6d2de1922..f3d21bdfd7f639 100644 --- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c @@ -21,20 +21,22 @@ static int cmp_u32(const void *A, const void *B) return *a - *b; } -static void perf_begin(struct intel_gt *gt) +static intel_wakeref_t perf_begin(struct intel_gt *gt) { - intel_gt_pm_get(gt); + intel_wakeref_t wakeref = intel_gt_pm_get(gt); /* Boost gpufreq to max [waitboost] and keep it fixed */ atomic_inc(>->rps.num_waiters); schedule_work(>->rps.work); flush_work(>->rps.work); + + return wakeref; } -static int perf_end(struct intel_gt *gt) +static int perf_end(struct intel_gt *gt, intel_wakeref_t wakeref) { atomic_dec(>->rps.num_waiters); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return igt_flush_test(gt->i915); } @@ -133,12 +135,13 @@ static int perf_mi_bb_start(void *arg) struct intel_gt *gt = arg; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err = 0; if (GRAPHICS_VER(gt->i915) < 4) /* Any CS_TIMESTAMP? */ return 0; - perf_begin(gt); + wakeref = perf_begin(gt); for_each_engine(engine, gt, id) { struct intel_context *ce = engine->kernel_context; struct i915_vma *batch; @@ -207,7 +210,7 @@ static int perf_mi_bb_start(void *arg) pr_info("%s: MI_BB_START cycles: %u\n", engine->name, trifilter(cycles)); } - if (perf_end(gt)) + if (perf_end(gt, wakeref)) err = -EIO; return err; @@ -260,12 +263,13 @@ static int perf_mi_noop(void *arg) struct intel_gt *gt = arg; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err = 0; if (GRAPHICS_VER(gt->i915) < 4) /* Any CS_TIMESTAMP? 
*/ return 0; - perf_begin(gt); + wakeref = perf_begin(gt); for_each_engine(engine, gt, id) { struct intel_context *ce = engine->kernel_context; struct i915_vma *base, *nop; @@ -364,7 +368,7 @@ static int perf_mi_noop(void *arg) pr_info("%s: 16K MI_NOOP cycles: %u\n", engine->name, trifilter(cycles)); } - if (perf_end(gt)) + if (perf_end(gt, wakeref)) err = -EIO; return err; diff --git a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c index 0971241707ce83..33351deeea4f0b 100644 --- a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c +++ b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c @@ -81,6 +81,7 @@ static int live_gt_clocks(void *arg) struct intel_gt *gt = arg; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err = 0; if (!gt->clock_frequency) { /* unknown */ @@ -91,7 +92,7 @@ static int live_gt_clocks(void *arg) if (GRAPHICS_VER(gt->i915) < 4) /* Any CS_TIMESTAMP? */ return 0; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL); for_each_engine(engine, gt, id) { @@ -128,7 +129,7 @@ static int live_gt_clocks(void *arg) } intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return err; } diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c index a9e0a91bc0e026..ab464cd72b8c39 100644 --- a/drivers/gpu/drm/i915/gt/selftest_reset.c +++ b/drivers/gpu/drm/i915/gt/selftest_reset.c @@ -257,11 +257,12 @@ static int igt_atomic_reset(void *arg) { struct intel_gt *gt = arg; const typeof(*igt_atomic_phases) *p; + intel_wakeref_t wakeref; int err = 0; /* Check that the resets are usable from atomic context */ - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); igt_global_reset_lock(gt); /* Flush any requests before we get started and check basics */ @@ -292,7 +293,7 @@ static int igt_atomic_reset(void *arg) unlock: igt_global_reset_unlock(gt); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return err; } @@ -303,6 +304,7 @@ static int igt_atomic_engine_reset(void *arg) const typeof(*igt_atomic_phases) *p; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; int err = 0; /* Check that the resets are usable from atomic context */ @@ -313,7 +315,7 @@ static int igt_atomic_engine_reset(void *arg) if (intel_uc_uses_guc_submission(>->uc)) return 0; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); igt_global_reset_lock(gt); /* Flush any requests before we get started and check basics */ @@ -361,7 +363,7 @@ static int igt_atomic_engine_reset(void *arg) out_unlock: igt_global_reset_unlock(gt); - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); return err; } diff --git a/drivers/gpu/drm/i915/gt/selftest_rps.c b/drivers/gpu/drm/i915/gt/selftest_rps.c index 84e77e8dbba19c..514ad27c3654ba 100644 --- a/drivers/gpu/drm/i915/gt/selftest_rps.c +++ b/drivers/gpu/drm/i915/gt/selftest_rps.c @@ -223,6 +223,7 @@ int live_rps_clock_interval(void *arg) struct intel_engine_cs *engine; enum intel_engine_id id; struct igt_spinner spin; + intel_wakeref_t wakeref; int err = 0; if (!intel_rps_is_enabled(rps) || GRAPHICS_VER(gt->i915) < 6) @@ -235,7 +236,7 @@ int live_rps_clock_interval(void *arg) saved_work = rps->work.func; rps->work.func = dummy_rps_work; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); intel_rps_disable(>->rps); intel_gt_check_clock_frequency(gt); @@ -354,7 +355,7 @@ int live_rps_clock_interval(void *arg) } intel_rps_enable(>->rps); - 
intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); igt_spinner_fini(&spin); @@ -375,6 +376,7 @@ int live_rps_control(void *arg) struct intel_engine_cs *engine; enum intel_engine_id id; struct igt_spinner spin; + intel_wakeref_t wakeref; int err = 0; /* @@ -397,7 +399,7 @@ int live_rps_control(void *arg) saved_work = rps->work.func; rps->work.func = dummy_rps_work; - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); for_each_engine(engine, gt, id) { struct i915_request *rq; ktime_t min_dt, max_dt; @@ -487,7 +489,7 @@ int live_rps_control(void *arg) break; } } - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); igt_spinner_fini(&spin); @@ -1022,6 +1024,7 @@ int live_rps_interrupt(void *arg) struct intel_engine_cs *engine; enum intel_engine_id id; struct igt_spinner spin; + intel_wakeref_t wakeref; u32 pm_events; int err = 0; @@ -1032,9 +1035,9 @@ int live_rps_interrupt(void *arg) if (!intel_rps_has_interrupts(rps) || GRAPHICS_VER(gt->i915) < 6) return 0; - intel_gt_pm_get(gt); - pm_events = rps->pm_events; - intel_gt_pm_put(gt); + pm_events = 0; + with_intel_gt_pm(gt, wakeref) + pm_events = rps->pm_events; if (!pm_events) { pr_err("No RPS PM events registered, but RPS is enabled?\n"); return -ENODEV; diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.c b/drivers/gpu/drm/i915/gt/selftest_slpc.c index bd44ce73a5044e..81fa891eac19fc 100644 --- a/drivers/gpu/drm/i915/gt/selftest_slpc.c +++ b/drivers/gpu/drm/i915/gt/selftest_slpc.c @@ -241,6 +241,7 @@ static int run_test(struct intel_gt *gt, int test_type) struct intel_rps *rps = >->rps; struct intel_engine_cs *engine; enum intel_engine_id id; + intel_wakeref_t wakeref; struct igt_spinner spin; u32 slpc_min_freq, slpc_max_freq; int err = 0; @@ -278,7 +279,7 @@ static int run_test(struct intel_gt *gt, int test_type) } intel_gt_pm_wait_for_idle(gt); - intel_gt_pm_get(gt); + wakeref = intel_gt_pm_get(gt); for_each_engine(engine, gt, id) { struct i915_request *rq; u32 max_act_freq; @@ -365,7 +366,7 @@ static int run_test(struct intel_gt *gt, int test_type) if (igt_flush_test(gt->i915)) err = -EIO; - intel_gt_pm_put(gt); + intel_gt_pm_put(gt, wakeref); igt_spinner_fini(&spin); intel_gt_pm_wait_for_idle(gt); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 74d28a3af2d57b..a1960d46a50902 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1106,7 +1106,7 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc) if (deregister) guc_signal_context_fence(ce); if (destroyed) { - intel_gt_pm_put_async(guc_to_gt(guc)); + intel_gt_pm_put_async_untracked(guc_to_gt(guc)); release_guc_id(guc, ce); __guc_context_destroy(ce); } @@ -1302,6 +1302,7 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now) unsigned long flags; u32 reset_count; bool in_reset; + intel_wakeref_t wakeref; spin_lock_irqsave(&guc->timestamp.lock, flags); @@ -1324,7 +1325,7 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now) * start_gt_clk is derived from GuC state. To get a consistent * view of activity, we query the GuC state only if gt is awake. 
*/ - if (!in_reset && intel_gt_pm_get_if_awake(gt)) { + if (!in_reset && (wakeref = intel_gt_pm_get_if_awake(gt))) { stats_saved = *stats; gt_stamp_saved = guc->timestamp.gt_stamp; /* @@ -1333,7 +1334,7 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now) */ guc_update_engine_gt_clks(engine); guc_update_pm_timestamp(guc, now); - intel_gt_pm_put_async(gt); + intel_gt_pm_put_async(gt, wakeref); if (i915_reset_count(gpu_error) != reset_count) { *stats = stats_saved; guc->timestamp.gt_stamp = gt_stamp_saved; @@ -4604,7 +4605,7 @@ int intel_guc_deregister_done_process_msg(struct intel_guc *guc, intel_context_put(ce); } else if (context_destroyed(ce)) { /* Context has been destroyed */ - intel_gt_pm_put_async(guc_to_gt(guc)); + intel_gt_pm_put_async_untracked(guc_to_gt(guc)); release_guc_id(guc, ce); __guc_context_destroy(ce); } diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c index 7ece883a7d9567..12d858e17902ec 100644 --- a/drivers/gpu/drm/i915/i915_pmu.c +++ b/drivers/gpu/drm/i915/i915_pmu.c @@ -167,19 +167,19 @@ static u64 get_rc6(struct intel_gt *gt) { struct drm_i915_private *i915 = gt->i915; struct i915_pmu *pmu = &i915->pmu; + intel_wakeref_t wakeref; unsigned long flags; - bool awake = false; u64 val; - if (intel_gt_pm_get_if_awake(gt)) { + wakeref = intel_gt_pm_get_if_awake(gt); + if (wakeref) { val = __get_rc6(gt); - intel_gt_pm_put_async(gt); - awake = true; + intel_gt_pm_put_async(gt, wakeref); } spin_lock_irqsave(&pmu->lock, flags); - if (awake) { + if (wakeref) { pmu->sample[__I915_SAMPLE_RC6].cur = val; } else { /* @@ -372,12 +372,14 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns) struct drm_i915_private *i915 = gt->i915; struct i915_pmu *pmu = &i915->pmu; struct intel_rps *rps = >->rps; + intel_wakeref_t wakeref; if (!frequency_sampling_enabled(pmu)) return; /* Report 0/0 (actual/requested) frequency while parked. 
*/ - if (!intel_gt_pm_get_if_awake(gt)) + wakeref = intel_gt_pm_get_if_awake(gt); + if (!wakeref) return; if (pmu->enable & config_mask(I915_PMU_ACTUAL_FREQUENCY)) { @@ -406,7 +408,7 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns) period_ns / 1000); } - intel_gt_pm_put_async(gt); + intel_gt_pm_put_async(gt, wakeref); } static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer) diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c index dfd87d08221807..61f974d97a3757 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.c +++ b/drivers/gpu/drm/i915/intel_wakeref.c @@ -96,7 +96,8 @@ static void __intel_wakeref_put_work(struct work_struct *wrk) void __intel_wakeref_init(struct intel_wakeref *wf, struct intel_runtime_pm *rpm, const struct intel_wakeref_ops *ops, - struct intel_wakeref_lockclass *key) + struct intel_wakeref_lockclass *key, + const char *name) { wf->rpm = rpm; wf->ops = ops; @@ -108,6 +109,10 @@ void __intel_wakeref_init(struct intel_wakeref *wf, INIT_DELAYED_WORK(&wf->work, __intel_wakeref_put_work); lockdep_init_map(&wf->work.work.lockdep_map, "wakeref.work", &key->work, 0); + +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF) + ref_tracker_dir_init(&wf->debug, INTEL_REFTRACK_DEAD_COUNT, name); +#endif } int intel_wakeref_wait_for_idle(struct intel_wakeref *wf) diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h index 6556c68794800a..8b8a55ff8d1496 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.h +++ b/drivers/gpu/drm/i915/intel_wakeref.h @@ -50,6 +50,10 @@ struct intel_wakeref { const struct intel_wakeref_ops *ops; struct delayed_work work; + +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF) + struct ref_tracker_dir debug; +#endif }; struct intel_wakeref_lockclass { @@ -60,11 +64,12 @@ struct intel_wakeref_lockclass { void __intel_wakeref_init(struct intel_wakeref *wf, struct intel_runtime_pm *rpm, const struct intel_wakeref_ops *ops, - struct intel_wakeref_lockclass *key); -#define intel_wakeref_init(wf, rpm, ops) do { \ + struct intel_wakeref_lockclass *key, + const char *name); +#define intel_wakeref_init(wf, rpm, ops, name) do { \ static struct intel_wakeref_lockclass __key; \ \ - __intel_wakeref_init((wf), (rpm), (ops), &__key); \ + __intel_wakeref_init((wf), (rpm), (ops), &__key, name); \ } while (0) int __intel_wakeref_get_first(struct intel_wakeref *wf); @@ -318,6 +323,33 @@ intel_wakeref_tracker_show(struct ref_tracker_dir *dir, kfree(buf); } +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF) + +static inline intel_wakeref_t intel_wakeref_track(struct intel_wakeref *wf) +{ + return intel_ref_tracker_alloc(&wf->debug); +} + +static inline void intel_wakeref_untrack(struct intel_wakeref *wf, + intel_wakeref_t handle) +{ + intel_ref_tracker_free(&wf->debug, handle); +} + +#else + +static inline intel_wakeref_t intel_wakeref_track(struct intel_wakeref *wf) +{ + return -1; +} + +static inline void intel_wakeref_untrack(struct intel_wakeref *wf, + intel_wakeref_t handle) +{ +} + +#endif + struct intel_wakeref_auto { struct intel_runtime_pm *rpm; struct timer_list timer; From patchwork Tue Mar 28 15:15:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Hajda X-Patchwork-Id: 76133 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp2307113vqo; Tue, 28 Mar 2023 08:30:31 -0700 (PDT) X-Google-Smtp-Source: 
E=McAfee;i="6600,9927,10663"; a="403208773" X-IronPort-AV: E=Sophos;i="5.98,297,1673942400"; d="scan'208";a="403208773" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Mar 2023 08:16:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10663"; a="773181781" X-IronPort-AV: E=Sophos;i="5.98,297,1673942400"; d="scan'208";a="773181781" Received: from lab-ah.igk.intel.com ([10.102.138.202]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Mar 2023 08:16:10 -0700 From: Andrzej Hajda Date: Tue, 28 Mar 2023 17:15:31 +0200 Subject: [PATCH v5 8/8] drm/i915/gt: Hold a wakeref for the active VM MIME-Version: 1.0 Message-Id: <20230224-track_gt-v5-8-77be86f2c872@intel.com> References: <20230224-track_gt-v5-0-77be86f2c872@intel.com> In-Reply-To: <20230224-track_gt-v5-0-77be86f2c872@intel.com> To: Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , David Airlie , Daniel Vetter Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Chris Wilson , netdev@vger.kernel.org, Eric Dumazet , Jakub Kicinski , Dmitry Vyukov , "David S. Miller" , Andrzej Hajda , Andi Shyti X-Mailer: b4 0.11.1 X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761625958154718584?= X-GMAIL-MSGID: =?utf-8?q?1761625958154718584?= From: Chris Wilson There may be a disconnect between the GT used by the engine and the GT used for the VM, requiring us to hold a wakeref on both while the GPU is active with this request. 
Signed-off-by: Chris Wilson Signed-off-by: Andrzej Hajda --- drivers/gpu/drm/i915/gt/intel_context.h | 15 +++++++++++---- drivers/gpu/drm/i915/gt/intel_context_types.h | 2 ++ drivers/gpu/drm/i915/gt/intel_engine_pm.c | 4 ++++ 3 files changed, 17 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index 0a8d553da3f439..582faa21181e58 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -14,6 +14,7 @@ #include "i915_drv.h" #include "intel_context_types.h" #include "intel_engine_types.h" +#include "intel_gt_pm.h" #include "intel_ring_types.h" #include "intel_timeline_types.h" #include "i915_trace.h" @@ -207,8 +208,11 @@ void intel_context_exit_engine(struct intel_context *ce); static inline void intel_context_enter(struct intel_context *ce) { lockdep_assert_held(&ce->timeline->mutex); - if (!ce->active_count++) - ce->ops->enter(ce); + if (ce->active_count++) + return; + + ce->ops->enter(ce); + ce->wakeref = intel_gt_pm_get(ce->vm->gt); } static inline void intel_context_mark_active(struct intel_context *ce) @@ -222,8 +226,11 @@ static inline void intel_context_exit(struct intel_context *ce) { lockdep_assert_held(&ce->timeline->mutex); GEM_BUG_ON(!ce->active_count); - if (!--ce->active_count) - ce->ops->exit(ce); + if (--ce->active_count) + return; + + intel_gt_pm_put_async(ce->vm->gt, ce->wakeref); + ce->ops->exit(ce); } static inline struct intel_context *intel_context_get(struct intel_context *ce) diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index e36670f2e6260b..5dc39a9d7a501c 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -17,6 +17,7 @@ #include "i915_utils.h" #include "intel_engine_types.h" #include "intel_sseu.h" +#include "intel_wakeref.h" #include "uc/intel_guc_fwif.h" @@ -110,6 +111,7 @@ struct intel_context { u32 ring_size; struct intel_ring *ring; struct intel_timeline *timeline; + intel_wakeref_t wakeref; unsigned long flags; #define CONTEXT_BARRIER_BIT 0 diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c index 7063dea2112943..c2d17c97bfe989 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c @@ -114,6 +114,10 @@ __queue_and_release_pm(struct i915_request *rq, ENGINE_TRACE(engine, "parking\n"); + GEM_BUG_ON(rq->context->active_count != 1); + __intel_gt_pm_get(engine->gt); + rq->context->wakeref = intel_wakeref_track(&engine->gt->wakeref); + /* * We have to serialise all potential retirement paths with our * submission, as we don't want to underflow either the