From patchwork Tue Jan 9 16:59:22 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Rafael J. Wysocki"
X-Patchwork-Id: 186470
From: "Rafael J. Wysocki"
Wysocki" To: Linux PM , Ulf Hansson Cc: LKML , Stanislaw Gruszka Subject: [PATCH v1] PM: sleep: Restore asynchronous device resume optimization Date: Tue, 09 Jan 2024 17:59:22 +0100 Message-ID: <10423008.nUPlyArG6x@kreacher> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgedvkedrvdehledgleefucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecujffqoffgrffnpdggtffipffknecuuegrihhlohhuthemucduhedtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjughrpefhvfevufffkfgggfgtsehtufertddttdejnecuhfhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqnecuggftrfgrthhtvghrnhepffffffekgfehheffleetieevfeefvefhleetjedvvdeijeejledvieehueevueffnecukfhppeduleehrddufeeirdduledrleegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehinhgvthepudelhedrudefiedrudelrdelgedphhgvlhhopehkrhgvrggthhgvrhdrlhhotggrlhhnvghtpdhmrghilhhfrhhomhepfdftrghfrggvlhculfdrucghhihsohgtkhhifdcuoehrjhifsehrjhifhihsohgtkhhirdhnvghtqedpnhgspghrtghpthhtohepgedprhgtphhtthhopehlihhnuhigqdhpmhesvhhgvghrrdhkvghrnhgvlhdrohhrghdprhgtphhtthhopehulhhfrdhhrghnshhsohhnsehlihhnrghrohdrohhrghdprhgtphhtthhopehlihhnuhigqdhkvghrnhgvlhesvhhgvghrrdhkvghrnhgvlhdrohhrghdprhgtphhtthhopehsthgrnhhishhlrgifrdhgrhhushiikhgrsehlihhnuhigrdhinhhtvghlrdgtohhm X-DCC--Metrics: v370.home.net.pl 1024; Body=4 Fuz1=4 Fuz2=4 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787632905862768504 X-GMAIL-MSGID: 1787632905862768504 From: Rafael J. Wysocki Before commit 7839d0078e0d ("PM: sleep: Fix possible deadlocks in core system-wide PM code"), the resume of devices that were allowed to resume asynchronously was scheduled before starting the resume of the other devices, so the former did not have to wait for the latter unless functional dependencies were present. Commit 7839d0078e0d removed that optimization in order to address a correctness issue, but it can be restored with the help of a new device power management flag, so do that now. Signed-off-by: Rafael J. Wysocki Reviewed-by: Stanislaw Gruszka --- I said I'd probably do this in 6.9, but then I thought more about it and now I think it would be nice to have 6.8-rc1 without a suspend performance regression and the change is relatively straightforward, so here it goes. --- drivers/base/power/main.c | 117 +++++++++++++++++++++++++--------------------- include/linux/pm.h | 1 2 files changed, 65 insertions(+), 53 deletions(-) Index: linux-pm/include/linux/pm.h =================================================================== --- linux-pm.orig/include/linux/pm.h +++ linux-pm/include/linux/pm.h @@ -681,6 +681,7 @@ struct dev_pm_info { bool wakeup_path:1; bool syscore:1; bool no_pm_callbacks:1; /* Owned by the PM core */ + bool in_progress:1; /* Owned by the PM core */ unsigned int must_resume:1; /* Owned by the PM core */ unsigned int may_skip_resume:1; /* Set by subsystems */ #else Index: linux-pm/drivers/base/power/main.c =================================================================== --- linux-pm.orig/drivers/base/power/main.c +++ linux-pm/drivers/base/power/main.c @@ -579,7 +579,7 @@ bool dev_pm_skip_resume(struct device *d } /** - * __device_resume_noirq - Execute a "noirq resume" callback for given device. + * device_resume_noirq - Execute a "noirq resume" callback for given device. * @dev: Device to handle. * @state: PM transition of the system being carried out. 
* @async: If true, the device is being resumed asynchronously. @@ -587,7 +587,7 @@ bool dev_pm_skip_resume(struct device *d * The driver of @dev will not receive interrupts while this function is being * executed. */ -static void __device_resume_noirq(struct device *dev, pm_message_t state, bool async) +static void device_resume_noirq(struct device *dev, pm_message_t state, bool async) { pm_callback_t callback = NULL; const char *info = NULL; @@ -674,16 +674,22 @@ static bool dpm_async_fn(struct device * { reinit_completion(&dev->power.completion); - if (!is_async(dev)) - return false; + if (is_async(dev)) { + dev->power.in_progress = true; - get_device(dev); - - if (async_schedule_dev_nocall(func, dev)) - return true; + get_device(dev); - put_device(dev); + if (async_schedule_dev_nocall(func, dev)) + return true; + put_device(dev); + } + /* + * Because async_schedule_dev_nocall() above has returned false or it + * has not been called at all, func() is not running and it safe to + * update the in_progress flag without additional synchronization. + */ + dev->power.in_progress = false; return false; } @@ -691,18 +697,10 @@ static void async_resume_noirq(void *dat { struct device *dev = data; - __device_resume_noirq(dev, pm_transition, true); + device_resume_noirq(dev, pm_transition, true); put_device(dev); } -static void device_resume_noirq(struct device *dev) -{ - if (dpm_async_fn(dev, async_resume_noirq)) - return; - - __device_resume_noirq(dev, pm_transition, false); -} - static void dpm_noirq_resume_devices(pm_message_t state) { struct device *dev; @@ -712,18 +710,28 @@ static void dpm_noirq_resume_devices(pm_ mutex_lock(&dpm_list_mtx); pm_transition = state; + /* + * Trigger the resume of "async" devices upfront so they don't have to + * wait for the "non-async" ones they don't depend on. + */ + list_for_each_entry(dev, &dpm_noirq_list, power.entry) + dpm_async_fn(dev, async_resume_noirq); + while (!list_empty(&dpm_noirq_list)) { dev = to_device(dpm_noirq_list.next); - get_device(dev); list_move_tail(&dev->power.entry, &dpm_late_early_list); - mutex_unlock(&dpm_list_mtx); + if (!dev->power.in_progress) { + get_device(dev); - device_resume_noirq(dev); + mutex_unlock(&dpm_list_mtx); - put_device(dev); + device_resume_noirq(dev, state, false); + + put_device(dev); - mutex_lock(&dpm_list_mtx); + mutex_lock(&dpm_list_mtx); + } } mutex_unlock(&dpm_list_mtx); async_synchronize_full(); @@ -747,14 +755,14 @@ void dpm_resume_noirq(pm_message_t state } /** - * __device_resume_early - Execute an "early resume" callback for given device. + * device_resume_early - Execute an "early resume" callback for given device. * @dev: Device to handle. * @state: PM transition of the system being carried out. * @async: If true, the device is being resumed asynchronously. * * Runtime PM is disabled for @dev while this function is being executed. 
*/ -static void __device_resume_early(struct device *dev, pm_message_t state, bool async) +static void device_resume_early(struct device *dev, pm_message_t state, bool async) { pm_callback_t callback = NULL; const char *info = NULL; @@ -820,18 +828,10 @@ static void async_resume_early(void *dat { struct device *dev = data; - __device_resume_early(dev, pm_transition, true); + device_resume_early(dev, pm_transition, true); put_device(dev); } -static void device_resume_early(struct device *dev) -{ - if (dpm_async_fn(dev, async_resume_early)) - return; - - __device_resume_early(dev, pm_transition, false); -} - /** * dpm_resume_early - Execute "early resume" callbacks for all devices. * @state: PM transition of the system being carried out. @@ -845,18 +845,28 @@ void dpm_resume_early(pm_message_t state mutex_lock(&dpm_list_mtx); pm_transition = state; + /* + * Trigger the resume of "async" devices upfront so they don't have to + * wait for the "non-async" ones they don't depend on. + */ + list_for_each_entry(dev, &dpm_late_early_list, power.entry) + dpm_async_fn(dev, async_resume_early); + while (!list_empty(&dpm_late_early_list)) { dev = to_device(dpm_late_early_list.next); - get_device(dev); list_move_tail(&dev->power.entry, &dpm_suspended_list); - mutex_unlock(&dpm_list_mtx); + if (!dev->power.in_progress) { + get_device(dev); - device_resume_early(dev); + mutex_unlock(&dpm_list_mtx); - put_device(dev); + device_resume_early(dev, state, false); + + put_device(dev); - mutex_lock(&dpm_list_mtx); + mutex_lock(&dpm_list_mtx); + } } mutex_unlock(&dpm_list_mtx); async_synchronize_full(); @@ -876,12 +886,12 @@ void dpm_resume_start(pm_message_t state EXPORT_SYMBOL_GPL(dpm_resume_start); /** - * __device_resume - Execute "resume" callbacks for given device. + * device_resume - Execute "resume" callbacks for given device. * @dev: Device to handle. * @state: PM transition of the system being carried out. * @async: If true, the device is being resumed asynchronously. */ -static void __device_resume(struct device *dev, pm_message_t state, bool async) +static void device_resume(struct device *dev, pm_message_t state, bool async) { pm_callback_t callback = NULL; const char *info = NULL; @@ -975,18 +985,10 @@ static void async_resume(void *data, asy { struct device *dev = data; - __device_resume(dev, pm_transition, true); + device_resume(dev, pm_transition, true); put_device(dev); } -static void device_resume(struct device *dev) -{ - if (dpm_async_fn(dev, async_resume)) - return; - - __device_resume(dev, pm_transition, false); -} - /** * dpm_resume - Execute "resume" callbacks for non-sysdev devices. * @state: PM transition of the system being carried out. @@ -1006,16 +1008,25 @@ void dpm_resume(pm_message_t state) pm_transition = state; async_error = 0; + /* + * Trigger the resume of "async" devices upfront so they don't have to + * wait for the "non-async" ones they don't depend on. + */ + list_for_each_entry(dev, &dpm_suspended_list, power.entry) + dpm_async_fn(dev, async_resume); + while (!list_empty(&dpm_suspended_list)) { dev = to_device(dpm_suspended_list.next); get_device(dev); - mutex_unlock(&dpm_list_mtx); + if (!dev->power.in_progress) { + mutex_unlock(&dpm_list_mtx); - device_resume(dev); + device_resume(dev, state, false); - mutex_lock(&dpm_list_mtx); + mutex_lock(&dpm_list_mtx); + } if (!list_empty(&dev->power.entry)) list_move_tail(&dev->power.entry, &dpm_prepared_list);
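
For reference, the pattern the patch restores (schedule the async handler for
every capable device upfront, then walk the whole list synchronously and skip
whatever is already in flight) can be sketched outside the kernel. The
following is a minimal userspace illustration, not the PM core code: pthreads
stand in for async_schedule_dev_nocall(), and all names in it, such as
fake_device, schedule_async() and async_resume_one(), are hypothetical.

/* build: gcc -pthread sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct fake_device {
	const char *name;
	bool async_allowed;	/* analogous to power.async_suspend */
	bool in_progress;	/* analogous to the new power.in_progress */
	pthread_t thread;
};

/* Stand-in for async_resume(): runs the device's resume path in a thread. */
static void *async_resume_one(void *data)
{
	struct fake_device *dev = data;

	usleep(1000);		/* pretend the resume callback takes time */
	printf("%s resumed asynchronously\n", dev->name);
	return NULL;
}

/* Stand-in for dpm_async_fn(): returns true only if async work was queued. */
static bool schedule_async(struct fake_device *dev)
{
	if (dev->async_allowed) {
		dev->in_progress = true;
		if (pthread_create(&dev->thread, NULL, async_resume_one, dev) == 0)
			return true;
	}
	/*
	 * Not scheduled (or scheduling failed): no handler is running, so
	 * the flag can be cleared without any synchronization.
	 */
	dev->in_progress = false;
	return false;
}

int main(void)
{
	struct fake_device devs[] = {
		{ .name = "devA", .async_allowed = true },
		{ .name = "devB", .async_allowed = false },
		{ .name = "devC", .async_allowed = true },
	};
	size_t i, n = sizeof(devs) / sizeof(devs[0]);

	/* Pass 1: kick off every async-capable device upfront, ... */
	for (i = 0; i < n; i++)
		schedule_async(&devs[i]);

	/* Pass 2: ... so the synchronous walk skips them instead of
	 * serializing everything in list order. */
	for (i = 0; i < n; i++) {
		if (!devs[i].in_progress)
			printf("%s resumed synchronously\n", devs[i].name);
	}

	/* Stand-in for async_synchronize_full(). */
	for (i = 0; i < n; i++) {
		if (devs[i].in_progress)
			pthread_join(devs[i].thread, NULL);
	}
	return 0;
}

As in dpm_async_fn() above, the in_progress flag needs no extra locking in
this sketch because it is only written by the single thread doing the
scheduling, either before the async work is queued or after queuing has
failed, so the synchronous pass never races with a writer.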