From patchwork Wed Dec 27 20:37:02 2023
X-Patchwork-Submitter: "Rafael J. Wysocki"
X-Patchwork-Id: 183558
From: "Rafael J. Wysocki"
Wysocki" To: Greg KH , linux-pm@vger.kernel.org Cc: Youngmin Nam , rafael@kernel.org, linux-kernel@vger.kernel.org, d7271.choe@samsung.com, janghyuck.kim@samsung.com, hyesoo.yu@samsung.com, Alan Stern , Ulf Hansson Subject: [PATCH v1 1/3] async: Split async_schedule_node_domain() Date: Wed, 27 Dec 2023 21:37:02 +0100 Message-ID: <4551962.LvFx2qVVIh@kreacher> In-Reply-To: <6019796.lOV4Wx5bFT@kreacher> References: <5754861.DvuYhMxLoT@kreacher> <6019796.lOV4Wx5bFT@kreacher> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgedvkedrvddvledgudegudcutefuodetggdotefrodftvfcurfhrohhfihhlvgemucfjqffogffrnfdpggftiffpkfenuceurghilhhouhhtmecuudehtdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhephffvvefufffkjghfggfgtgesthfuredttddtjeenucfhrhhomhepfdftrghfrggvlhculfdrucghhihsohgtkhhifdcuoehrjhifsehrjhifhihsohgtkhhirdhnvghtqeenucggtffrrghtthgvrhhnpedvffeuiedtgfdvtddugeeujedtffetteegfeekffdvfedttddtuefhgeefvdejhfenucfkphepudelhedrudefiedrudelrdelgeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepihhnvghtpeduleehrddufeeirdduledrleegpdhhvghlohepkhhrvggrtghhvghrrdhlohgtrghlnhgvthdpmhgrihhlfhhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqpdhnsggprhgtphhtthhopedutddprhgtphhtthhopehgrhgvghhkhheslhhinhhugihfohhunhgurghtihhonhdrohhrghdprhgtphhtthhopehlihhnuhigqdhpmhesvhhgvghrrdhkvghrnhgvlhdrohhrghdprhgtphhtthhopeihohhunhhgmhhinhdrnhgrmhesshgrmhhsuhhnghdrtghomhdprhgtphhtthhopehrrghfrggvlheskhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidq khgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtohepugejvdejuddrtghhohgvsehsrghmshhunhhgrdgtohhm X-DCC--Metrics: v370.home.net.pl 1024; Body=10 Fuz1=10 Fuz2=10 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786469198487508714 X-GMAIL-MSGID: 1786469198487508714 From: Rafael J. Wysocki In preparation for subsequent changes, split async_schedule_node_domain() in two pieces so as to allow the bottom part of it to be called from a somewhat different code path. No functional impact. Signed-off-by: Rafael J. 
---
 kernel/async.c |   56 ++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 34 insertions(+), 22 deletions(-)

Index: linux-pm/kernel/async.c
===================================================================
--- linux-pm.orig/kernel/async.c
+++ linux-pm/kernel/async.c
@@ -145,6 +145,39 @@ static void async_run_entry_fn(struct wo
 	wake_up(&async_done);
 }
 
+static async_cookie_t __async_schedule_node_domain(async_func_t func,
+						   void *data, int node,
+						   struct async_domain *domain,
+						   struct async_entry *entry)
+{
+	async_cookie_t newcookie;
+	unsigned long flags;
+
+	INIT_LIST_HEAD(&entry->domain_list);
+	INIT_LIST_HEAD(&entry->global_list);
+	INIT_WORK(&entry->work, async_run_entry_fn);
+	entry->func = func;
+	entry->data = data;
+	entry->domain = domain;
+
+	spin_lock_irqsave(&async_lock, flags);
+
+	/* allocate cookie and queue */
+	newcookie = entry->cookie = next_cookie++;
+
+	list_add_tail(&entry->domain_list, &domain->pending);
+	if (domain->registered)
+		list_add_tail(&entry->global_list, &async_global_pending);
+
+	atomic_inc(&entry_count);
+	spin_unlock_irqrestore(&async_lock, flags);
+
+	/* schedule for execution */
+	queue_work_node(node, system_unbound_wq, &entry->work);
+
+	return newcookie;
+}
+
 /**
  * async_schedule_node_domain - NUMA specific version of async_schedule_domain
  * @func: function to execute asynchronously
@@ -186,29 +219,8 @@ async_cookie_t async_schedule_node_domai
 		func(data, newcookie);
 		return newcookie;
 	}
 
-	INIT_LIST_HEAD(&entry->domain_list);
-	INIT_LIST_HEAD(&entry->global_list);
-	INIT_WORK(&entry->work, async_run_entry_fn);
-	entry->func = func;
-	entry->data = data;
-	entry->domain = domain;
-
-	spin_lock_irqsave(&async_lock, flags);
-	/* allocate cookie and queue */
-	newcookie = entry->cookie = next_cookie++;
-
-	list_add_tail(&entry->domain_list, &domain->pending);
-	if (domain->registered)
-		list_add_tail(&entry->global_list, &async_global_pending);
-
-	atomic_inc(&entry_count);
-	spin_unlock_irqrestore(&async_lock, flags);
-
-	/* schedule for execution */
-	queue_work_node(node, system_unbound_wq, &entry->work);
-
-	return newcookie;
+	return __async_schedule_node_domain(func, data, node, domain, entry);
 }
 EXPORT_SYMBOL_GPL(async_schedule_node_domain);
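What the split buys is a calling convention: the caller of __async_schedule_node_domain() owns the allocation of the async_entry and the policy for what happens when that allocation fails, while the helper only does the locking, cookie assignment and queuing. A minimal sketch of a caller built on top of it (hypothetical code, not part of the patch; it would have to live in kernel/async.c next to the helper, since struct async_entry and the helper itself are private to that file):

/*
 * Hypothetical caller of the new helper (illustrative only): allocate the
 * entry up front, pick a failure policy, and let the helper do the
 * bookkeeping and queuing.  Patch 2/3 adds the real user of this shape.
 */
static bool example_try_schedule(async_func_t func, void *data, int node,
				 struct async_domain *domain)
{
	struct async_entry *entry = kzalloc(sizeof(*entry), GFP_KERNEL);

	/* Caller-chosen policy: report failure instead of running func here. */
	if (!entry || atomic_read(&entry_count) > MAX_WORK) {
		kfree(entry);
		return false;
	}

	__async_schedule_node_domain(func, data, node, domain, entry);
	return true;
}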
Wysocki" X-Patchwork-Id: 183556 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:6f82:b0:100:9c79:88ff with SMTP id tb2csp1657781dyb; Wed, 27 Dec 2023 12:42:25 -0800 (PST) X-Google-Smtp-Source: AGHT+IEdlgwxF3pFZQWgynOYeQ7aheJ6kPS8cs5+aX/N5795sHcbB217V9kBdhXd7s1MyAPHUaSO X-Received: by 2002:a05:6870:6111:b0:203:c50f:6fc2 with SMTP id s17-20020a056870611100b00203c50f6fc2mr7535011oae.42.1703709745043; Wed, 27 Dec 2023 12:42:25 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703709745; cv=none; d=google.com; s=arc-20160816; b=jPyAe0OoC0u4Hjws5n9WXY3UUwQ1vt1k6vgYFx0VmPM6pdK8GLgIHXb3lSHYNWWUBB fdgzvCKTQWKzzwgFyhnej+L2QjMJuYPCsI3bU6N8u0leU2Rab7DPFLfMub0rDqUsSNVl J7R3bOC+J/zaT2z7WgI6acMfnRVPgu+TkQkAWyZy/R58u4VE+WkdgXYmS88k0wVkdup4 FCL/Iz47a0ei2kReoGsUJE9unoV0uh9Wl/9HgJ2jatr0Xl1s78q2CgV2u0mHazXlNtGz EtmR4kcyNCgqgMx3wN80zXaEFcLdkfccsfn6TSMqFxv1n0DwPrObxtU2JmGcQvlqN5St Kc1Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from; bh=E3nP6v5M1BrosgMS9iWuOP0mDo+ITDGa7At8cr+WIdk=; fh=GJObpfi22GEDsdUJ/UPF0sV9R5wqW+3fN2C4DnWtGgw=; b=HjsmKCLKu4te1ZXVdNOWfi4HuqoB1EDJjK+Nj+A8VFRTrO7C0v5abS2HYD33jh7pf1 TOt5P4cgY3P/slRnq5tl+4fyn0lqMQI5plPc07mflcZ0y9xr5C/ceZvmMgy27SpZaHFT +8b3qB5C4N2g2u/rRCj4NUy3jQLtxeLq23AJ4lMuZZbatBHdyu8AH0wPsfnTE0t0ed1Q LAx3qE9SEYC9euQwPppQnyCs0QNFfW2jsK+QM0Smsbt95fbkRumGCSvJmd+kdTbpMlYl QuMMXauPI6tTIWQSudM5JUjLD5ITDb0+o+yU8SwFCcW72JcY8rgRXg1JEFLDbJE5pKhI lP3g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel+bounces-12245-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-12245-ouuuleilei=gmail.com@vger.kernel.org" Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
From: "Rafael J. Wysocki"
Wysocki" To: Greg KH , linux-pm@vger.kernel.org Cc: Youngmin Nam , rafael@kernel.org, linux-kernel@vger.kernel.org, d7271.choe@samsung.com, janghyuck.kim@samsung.com, hyesoo.yu@samsung.com, Alan Stern , Ulf Hansson Subject: [PATCH v1 2/3] async: Introduce async_schedule_dev_nocall() Date: Wed, 27 Dec 2023 21:38:23 +0100 Message-ID: <4874693.GXAFRqVoOG@kreacher> In-Reply-To: <6019796.lOV4Wx5bFT@kreacher> References: <5754861.DvuYhMxLoT@kreacher> <6019796.lOV4Wx5bFT@kreacher> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgedvkedrvddvledgudegudcutefuodetggdotefrodftvfcurfhrohhfihhlvgemucfjqffogffrnfdpggftiffpkfenuceurghilhhouhhtmecuudehtdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhephffvvefufffkjghfggfgtgesthfuredttddtjeenucfhrhhomhepfdftrghfrggvlhculfdrucghhihsohgtkhhifdcuoehrjhifsehrjhifhihsohgtkhhirdhnvghtqeenucggtffrrghtthgvrhhnpedvffeuiedtgfdvtddugeeujedtffetteegfeekffdvfedttddtuefhgeefvdejhfenucfkphepudelhedrudefiedrudelrdelgeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepihhnvghtpeduleehrddufeeirdduledrleegpdhhvghlohepkhhrvggrtghhvghrrdhlohgtrghlnhgvthdpmhgrihhlfhhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqpdhnsggprhgtphhtthhopedutddprhgtphhtthhopehgrhgvghhkhheslhhinhhugihfohhunhgurghtihhonhdrohhrghdprhgtphhtthhopehlihhnuhigqdhpmhesvhhgvghrrdhkvghrnhgvlhdrohhrghdprhgtphhtthhopeihohhunhhgmhhinhdrnhgrmhesshgrmhhsuhhnghdrtghomhdprhgtphhtthhopehrrghfrggvlheskhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidq khgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtohepugejvdejuddrtghhohgvsehsrghmshhunhhgrdgtohhm X-DCC--Metrics: v370.home.net.pl 1024; Body=10 Fuz1=10 Fuz2=10 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786469149502994083 X-GMAIL-MSGID: 1786469149502994083 From: Rafael J. Wysocki In preparation for subsequent changes, introduce a specialized variant of async_schedule_dev() that will not invoke the argument function synchronously when it cannot be scheduled for asynchronous execution. The new function, async_schedule_dev_nocall(), will be used for fixing possible deadlocks in the system-wide power management core code. Signed-off-by: Rafael J. Wysocki Reviewed-by: Stanislaw Gruszka for the series. --- drivers/base/power/main.c | 12 ++++++++---- include/linux/async.h | 2 ++ kernel/async.c | 29 +++++++++++++++++++++++++++++ 3 files changed, 39 insertions(+), 4 deletions(-) Index: linux-pm/kernel/async.c =================================================================== --- linux-pm.orig/kernel/async.c +++ linux-pm/kernel/async.c @@ -244,6 +244,35 @@ async_cookie_t async_schedule_node(async EXPORT_SYMBOL_GPL(async_schedule_node); /** + * async_schedule_dev_nocall - A simplified variant of async_schedule_dev() + * @func: function to execute asynchronously + * @dev: device argument to be passed to function + * + * @dev is used as both the argument for the function and to provide NUMA + * context for where to run the function. + * + * If the asynchronous execution of @func is scheduled successfully, return + * true. Otherwise, do nothing and return false, unlike async_schedule_dev() + * that will run the function synchronously then. 
+ */
+bool async_schedule_dev_nocall(async_func_t func, struct device *dev)
+{
+	struct async_entry *entry;
+
+	entry = kzalloc(sizeof(struct async_entry), GFP_KERNEL);
+
+	/* Give up if there is no memory or too much work. */
+	if (!entry || atomic_read(&entry_count) > MAX_WORK) {
+		kfree(entry);
+		return false;
+	}
+
+	__async_schedule_node_domain(func, dev, dev_to_node(dev),
+				     &async_dfl_domain, entry);
+	return true;
+}
+
+/**
  * async_synchronize_full - synchronize all asynchronous function calls
  *
  * This function waits until all asynchronous function calls have been done.
Index: linux-pm/include/linux/async.h
===================================================================
--- linux-pm.orig/include/linux/async.h
+++ linux-pm/include/linux/async.h
@@ -90,6 +90,8 @@ async_schedule_dev(async_func_t func, st
 	return async_schedule_node(func, dev, dev_to_node(dev));
 }
 
+bool async_schedule_dev_nocall(async_func_t func, struct device *dev);
+
 /**
  * async_schedule_dev_domain - A device specific version of async_schedule_domain
  * @func: function to execute asynchronously
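The contract of async_schedule_dev_nocall() is deliberately narrower than that of async_schedule_dev(): it never runs @func in the caller's context, so reference counting and lock ordering stay entirely in the caller's hands when scheduling fails. A sketch of the intended usage pattern (hypothetical names; the real caller is dpm_async_fn() in the next patch):

#include <linux/async.h>
#include <linux/device.h>

/* Hypothetical synchronous operation used below (illustrative only). */
static void example_resume_op(struct device *dev, bool async)
{
	dev_dbg(dev, "resuming%s\n", async ? " (async)" : "");
}

static void example_async_cb(void *data, async_cookie_t cookie)
{
	struct device *dev = data;

	example_resume_op(dev, true);
	put_device(dev);	/* drop the reference taken by the scheduler below */
}

static void example_schedule_resume(struct device *dev)
{
	get_device(dev);

	if (async_schedule_dev_nocall(example_async_cb, dev))
		return;		/* queued; example_async_cb() drops the reference */

	/* Not queued, and the function was not run for us either. */
	put_device(dev);
	example_resume_op(dev, false);	/* fall back synchronously on our own terms */
}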
From patchwork Wed Dec 27 20:41:06 2023
X-Patchwork-Submitter: "Rafael J. Wysocki"
X-Patchwork-Id: 183557
From: "Rafael J. Wysocki"
Wysocki" To: Greg KH , linux-pm@vger.kernel.org Cc: Youngmin Nam , rafael@kernel.org, linux-kernel@vger.kernel.org, d7271.choe@samsung.com, janghyuck.kim@samsung.com, hyesoo.yu@samsung.com, Alan Stern , Ulf Hansson Subject: [PATCH v1 3/3] PM: sleep: Fix possible deadlocks in core system-wide PM code Date: Wed, 27 Dec 2023 21:41:06 +0100 Message-ID: <13435856.uLZWGnKmhe@kreacher> In-Reply-To: <6019796.lOV4Wx5bFT@kreacher> References: <5754861.DvuYhMxLoT@kreacher> <6019796.lOV4Wx5bFT@kreacher> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgedvkedrvddvledgudegudcutefuodetggdotefrodftvfcurfhrohhfihhlvgemucfjqffogffrnfdpggftiffpkfenuceurghilhhouhhtmecuudehtdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhephffvvefufffkjghfggfgtgesthfuredttddtjeenucfhrhhomhepfdftrghfrggvlhculfdrucghhihsohgtkhhifdcuoehrjhifsehrjhifhihsohgtkhhirdhnvghtqeenucggtffrrghtthgvrhhnpeefudduuedtuefgleffudeigeeitdeufeelvdejgefftdethffhhfethfeljefgteenucffohhmrghinhepkhgvrhhnvghlrdhorhhgnecukfhppeduleehrddufeeirdduledrleegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehinhgvthepudelhedrudefiedrudelrdelgedphhgvlhhopehkrhgvrggthhgvrhdrlhhotggrlhhnvghtpdhmrghilhhfrhhomhepfdftrghfrggvlhculfdrucghhihsohgtkhhifdcuoehrjhifsehrjhifhihsohgtkhhirdhnvghtqedpnhgspghrtghpthhtohepuddtpdhrtghpthhtohepghhrvghgkhhhsehlihhnuhigfhhouhhnuggrthhiohhnrdhorhhgpdhrtghpthhtoheplhhinhhugidqphhmsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtohephihouhhnghhmihhnrdhnrghmsehsrghmshhunhhgrdgtohhmpdhrtghpthhtoheprhgrfhgrvghlsehkvghrnhgv lhdrohhrghdprhgtphhtthhopehlihhnuhigqdhkvghrnhgvlhesvhhgvghrrdhkvghrnhgvlhdrohhrghdprhgtphhtthhopegujedvjedurdgthhhovgesshgrmhhsuhhnghdrtghomh X-DCC--Metrics: v370.home.net.pl 1024; Body=10 Fuz1=10 Fuz2=10 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786469160537441164 X-GMAIL-MSGID: 1786469160537441164 From: Rafael J. Wysocki It is reported that in low-memory situations the system-wide resume core code deadlocks, because async_schedule_dev() executes its argument function synchronously if it cannot allocate memory (an not only then) and that function attempts to acquire a mutex that is already held. Address this by changing the code in question to use async_schedule_dev_nocall() for scheduling the asynchronous execution of device suspend and resume functions and to directly run them synchronously if async_schedule_dev_nocall() returns false. Fixes: 09beebd8f93b ("PM: sleep: core: Switch back to async_schedule_dev()") Link: https://lore.kernel.org/linux-pm/ZYvjiqX6EsL15moe@perf/ Reported-by: Youngmin Nam Signed-off-by: Rafael J. Wysocki --- The commit pointed to by the Fixes: tag is the last one that modified the code in question, even though the bug had been there already before. Still, the fix will not apply to the code before that commit. --- drivers/base/power/main.c | 148 +++++++++++++++++++++------------------------- 1 file changed, 68 insertions(+), 80 deletions(-) Index: linux-pm/drivers/base/power/main.c =================================================================== --- linux-pm.orig/drivers/base/power/main.c +++ linux-pm/drivers/base/power/main.c @@ -579,7 +579,7 @@ bool dev_pm_skip_resume(struct device *d } /** - * device_resume_noirq - Execute a "noirq resume" callback for given device. + * __device_resume_noirq - Execute a "noirq resume" callback for given device. * @dev: Device to handle. 
  * @state: PM transition of the system being carried out.
  * @async: If true, the device is being resumed asynchronously.
@@ -587,7 +587,7 @@ bool dev_pm_skip_resume(struct device *d
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
+static void __device_resume_noirq(struct device *dev, pm_message_t state, bool async)
 {
 	pm_callback_t callback = NULL;
 	const char *info = NULL;
@@ -655,7 +655,13 @@ Skip:
 Out:
 	complete_all(&dev->power.completion);
 	TRACE_RESUME(error);
-	return error;
+
+	if (error) {
+		suspend_stats.failed_resume_noirq++;
+		dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+		dpm_save_failed_dev(dev_name(dev));
+		pm_dev_err(dev, state, async ? " async noirq" : " noirq", error);
+	}
 }
 
 static bool is_async(struct device *dev)
@@ -668,11 +674,15 @@ static bool dpm_async_fn(struct device *
 {
 	reinit_completion(&dev->power.completion);
 
-	if (is_async(dev)) {
-		get_device(dev);
-		async_schedule_dev(func, dev);
+	if (!is_async(dev))
+		return false;
+
+	get_device(dev);
+
+	if (async_schedule_dev_nocall(func, dev))
 		return true;
-	}
+
+	put_device(dev);
 
 	return false;
 }
@@ -680,15 +690,19 @@ static bool dpm_async_fn(struct device *
 static void async_resume_noirq(void *data, async_cookie_t cookie)
 {
 	struct device *dev = data;
-	int error;
-
-	error = device_resume_noirq(dev, pm_transition, true);
-	if (error)
-		pm_dev_err(dev, pm_transition, " async", error);
 
+	__device_resume_noirq(dev, pm_transition, true);
 	put_device(dev);
 }
 
+static void device_resume_noirq(struct device *dev)
+{
+	if (dpm_async_fn(dev, async_resume_noirq))
+		return;
+
+	__device_resume_noirq(dev, pm_transition, false);
+}
+
 static void dpm_noirq_resume_devices(pm_message_t state)
 {
 	struct device *dev;
@@ -698,14 +712,6 @@ static void dpm_noirq_resume_devices(pm_
 	mutex_lock(&dpm_list_mtx);
 	pm_transition = state;
 
-	/*
-	 * Advanced the async threads upfront,
-	 * in case the starting of async threads is
-	 * delayed by non-async resuming devices.
-	 */
-	list_for_each_entry(dev, &dpm_noirq_list, power.entry)
-		dpm_async_fn(dev, async_resume_noirq);
-
 	while (!list_empty(&dpm_noirq_list)) {
 		dev = to_device(dpm_noirq_list.next);
 		get_device(dev);
@@ -713,17 +719,7 @@ static void dpm_noirq_resume_devices(pm_
 
 		mutex_unlock(&dpm_list_mtx);
 
-		if (!is_async(dev)) {
-			int error;
-
-			error = device_resume_noirq(dev, state, false);
-			if (error) {
-				suspend_stats.failed_resume_noirq++;
-				dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
-				dpm_save_failed_dev(dev_name(dev));
-				pm_dev_err(dev, state, " noirq", error);
-			}
-		}
+		device_resume_noirq(dev);
 
 		put_device(dev);
 
@@ -751,14 +747,14 @@ void dpm_resume_noirq(pm_message_t state
 }
 
 /**
- * device_resume_early - Execute an "early resume" callback for given device.
+ * __device_resume_early - Execute an "early resume" callback for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
  * @async: If true, the device is being resumed asynchronously.
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_resume_early(struct device *dev, pm_message_t state, bool async)
+static void __device_resume_early(struct device *dev, pm_message_t state, bool async)
 {
 	pm_callback_t callback = NULL;
 	const char *info = NULL;
@@ -811,21 +807,31 @@ Out:
 
 	pm_runtime_enable(dev);
 	complete_all(&dev->power.completion);
-	return error;
+
+	if (error) {
+		suspend_stats.failed_resume_early++;
+		dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+		dpm_save_failed_dev(dev_name(dev));
+		pm_dev_err(dev, state, async ? " async early" : " early", error);
+	}
 }
 
 static void async_resume_early(void *data, async_cookie_t cookie)
 {
 	struct device *dev = data;
-	int error;
-
-	error = device_resume_early(dev, pm_transition, true);
-	if (error)
-		pm_dev_err(dev, pm_transition, " async", error);
 
+	__device_resume_early(dev, pm_transition, true);
 	put_device(dev);
 }
 
+static void device_resume_early(struct device *dev)
+{
+	if (dpm_async_fn(dev, async_resume_early))
+		return;
+
+	__device_resume_early(dev, pm_transition, false);
+}
+
 /**
  * dpm_resume_early - Execute "early resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -839,14 +845,6 @@ void dpm_resume_early(pm_message_t state
 	mutex_lock(&dpm_list_mtx);
 	pm_transition = state;
 
-	/*
-	 * Advanced the async threads upfront,
-	 * in case the starting of async threads is
-	 * delayed by non-async resuming devices.
-	 */
-	list_for_each_entry(dev, &dpm_late_early_list, power.entry)
-		dpm_async_fn(dev, async_resume_early);
-
 	while (!list_empty(&dpm_late_early_list)) {
 		dev = to_device(dpm_late_early_list.next);
 		get_device(dev);
@@ -854,17 +852,7 @@ void dpm_resume_early(pm_message_t state
 
 		mutex_unlock(&dpm_list_mtx);
 
-		if (!is_async(dev)) {
-			int error;
-
-			error = device_resume_early(dev, state, false);
-			if (error) {
-				suspend_stats.failed_resume_early++;
-				dpm_save_failed_step(SUSPEND_RESUME_EARLY);
-				dpm_save_failed_dev(dev_name(dev));
-				pm_dev_err(dev, state, " early", error);
-			}
-		}
+		device_resume_early(dev);
 
 		put_device(dev);
 
@@ -888,12 +876,12 @@ void dpm_resume_start(pm_message_t state
 EXPORT_SYMBOL_GPL(dpm_resume_start);
 
 /**
- * device_resume - Execute "resume" callbacks for given device.
+ * __device_resume - Execute "resume" callbacks for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
  * @async: If true, the device is being resumed asynchronously.
  */
-static int device_resume(struct device *dev, pm_message_t state, bool async)
+static void __device_resume(struct device *dev, pm_message_t state, bool async)
 {
 	pm_callback_t callback = NULL;
 	const char *info = NULL;
@@ -975,20 +963,30 @@ static int device_resume(struct device *
 
 	TRACE_RESUME(error);
 
-	return error;
+	if (error) {
+		suspend_stats.failed_resume++;
+		dpm_save_failed_step(SUSPEND_RESUME);
+		dpm_save_failed_dev(dev_name(dev));
+		pm_dev_err(dev, state, async ? " async" : "", error);
+	}
 }
 
 static void async_resume(void *data, async_cookie_t cookie)
 {
 	struct device *dev = data;
-	int error;
 
-	error = device_resume(dev, pm_transition, true);
-	if (error)
-		pm_dev_err(dev, pm_transition, " async", error);
+	__device_resume(dev, pm_transition, true);
 	put_device(dev);
 }
 
+static void device_resume(struct device *dev)
+{
+	if (dpm_async_fn(dev, async_resume))
+		return;
+
+	__device_resume(dev, pm_transition, false);
+}
+
 /**
  * dpm_resume - Execute "resume" callbacks for non-sysdev devices.
  * @state: PM transition of the system being carried out.
@@ -1008,27 +1006,17 @@ void dpm_resume(pm_message_t state)
 	pm_transition = state;
 	async_error = 0;
 
-	list_for_each_entry(dev, &dpm_suspended_list, power.entry)
-		dpm_async_fn(dev, async_resume);
-
 	while (!list_empty(&dpm_suspended_list)) {
 		dev = to_device(dpm_suspended_list.next);
+
 		get_device(dev);
-		if (!is_async(dev)) {
-			int error;
 
-			mutex_unlock(&dpm_list_mtx);
+		mutex_unlock(&dpm_list_mtx);
 
-			error = device_resume(dev, state, false);
-			if (error) {
-				suspend_stats.failed_resume++;
-				dpm_save_failed_step(SUSPEND_RESUME);
-				dpm_save_failed_dev(dev_name(dev));
-				pm_dev_err(dev, state, "", error);
-			}
+		device_resume(dev);
+
+		mutex_lock(&dpm_list_mtx);
 
-			mutex_lock(&dpm_list_mtx);
-		}
 		if (!list_empty(&dev->power.entry))
 			list_move_tail(&dev->power.entry, &dpm_prepared_list);
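The deadlock pattern the changelog describes is not specific to the PM core: it is the generic hazard of a scheduling helper that falls back to calling the work function synchronously while the caller still holds a lock that the work function needs. A standalone model of that hazard, in plain userspace C with made-up names (illustrative only; nothing here is kernel code):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* The work function needs list_lock, just as a resume callback may need a
 * lock that its caller holds. */
static void resume_fn(void *data)
{
	if (pthread_mutex_trylock(&list_lock) != 0) {
		/* A real blocking lock would hang forever right here. */
		printf("deadlock: %s ran synchronously under list_lock\n",
		       (const char *)data);
		return;
	}
	printf("%s resumed\n", (const char *)data);
	pthread_mutex_unlock(&list_lock);
}

/* Old-style helper: when it cannot queue, it runs the function right here,
 * in the caller's context - the behaviour this series moves away from. */
static void schedule_or_run(void (*fn)(void *), void *data, bool can_queue)
{
	if (!can_queue)
		fn(data);
	/* else: hand the function to a worker thread (omitted in this model) */
}

int main(void)
{
	pthread_mutex_lock(&list_lock);			/* caller holds its lock... */
	schedule_or_run(resume_fn, "dev0", false);	/* ...and cannot queue (low memory) */
	pthread_mutex_unlock(&list_lock);
	return 0;
}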