From patchwork Tue Feb 27 17:28:12 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 207344
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
    allen.lkml@gmail.com, kernel-team@meta.com, boqun.feng@gmail.com,
    tglx@linutronix.de, peterz@infradead.org, romain.perier@gmail.com,
    mingo@kernel.org, Tejun Heo
Subject: [PATCH 1/6] workqueue: Preserve OFFQ bits in cancel[_sync] paths
Date: Tue, 27 Feb 2024 07:28:12 -1000
Message-ID: <20240227172852.2386358-2-tj@kernel.org>
In-Reply-To: <20240227172852.2386358-1-tj@kernel.org>
References: <20240227172852.2386358-1-tj@kernel.org>

The cancel[_sync] paths acquire and release WORK_STRUCT_PENDING and
manipulate WORK_OFFQ_CANCELING. However, they assume that all the OFFQ bit
values except for the pool ID are statically known and don't preserve them,
which is not wrong in the current code as the pool ID and CANCELING are the
only information carried. However, the planned disable/enable support will
add more fields and needs them to be preserved.

This patch updates work data handling so that only the bits which need
updating are updated.

- struct work_offq_data is added along with work_offqd_unpack() and
  work_offqd_pack_flags() to help manipulate the multiple fields contained
  in work->data. Note that the helpers look a bit silly right now as there
  isn't that much to pack. The next patch will add more.
- mark_work_canceling() which is used only by __cancel_work_sync() is
  replaced by open-coded usage of work_offq_data and
  set_work_pool_and_keep_pending() in __cancel_work_sync().

- __cancel_work[_sync]() uses offq_data helpers to preserve other OFFQ bits
  when clearing WORK_STRUCT_PENDING and WORK_OFFQ_CANCELING at the end.

- This removes all users of get_work_pool_id() which is dropped. Note that
  get_work_pool_id() could handle both WORK_STRUCT_PWQ and !WORK_STRUCT_PWQ
  cases; however, it was only being called after try_to_grab_pending()
  succeeded, in which case WORK_STRUCT_PWQ is never set and thus it's safe
  to use work_offqd_unpack() instead.

No behavior changes intended.

Signed-off-by: Tejun Heo
---
 include/linux/workqueue.h |  1 +
 kernel/workqueue.c        | 51 ++++++++++++++++++++++++---------------
 2 files changed, 32 insertions(+), 20 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 0ad534fe6673..e15fc77bf2e2 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -97,6 +97,7 @@ enum wq_misc_consts {
 /* Convenience constants - of type 'unsigned long', not 'enum'! */
 #define WORK_OFFQ_CANCELING	(1ul << WORK_OFFQ_CANCELING_BIT)
+#define WORK_OFFQ_FLAG_MASK	(((1ul << WORK_OFFQ_FLAG_BITS) - 1) << WORK_OFFQ_FLAG_SHIFT)
 #define WORK_OFFQ_POOL_NONE	((1ul << WORK_OFFQ_POOL_BITS) - 1)
 #define WORK_STRUCT_NO_POOL	(WORK_OFFQ_POOL_NONE << WORK_OFFQ_POOL_SHIFT)
 #define WORK_STRUCT_PWQ_MASK	(~((1ul << WORK_STRUCT_PWQ_SHIFT) - 1))
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 38783e3a60bb..ecd46fbed60b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -391,6 +391,11 @@ struct wq_pod_type {
 	int *cpu_pod;		/* cpu -> pod */
 };
 
+struct work_offq_data {
+	u32 pool_id;
+	u32 flags;
+};
+
 static const char *wq_affn_names[WQ_AFFN_NR_TYPES] = {
 	[WQ_AFFN_DFL] = "default",
 	[WQ_AFFN_CPU] = "cpu",
@@ -891,29 +896,23 @@ static struct worker_pool *get_work_pool(struct work_struct *work)
 	return idr_find(&worker_pool_idr, pool_id);
 }
 
-/**
- * get_work_pool_id - return the worker pool ID a given work is associated with
- * @work: the work item of interest
- *
- * Return: The worker_pool ID @work was last associated with.
- * %WORK_OFFQ_POOL_NONE if none.
- */
-static int get_work_pool_id(struct work_struct *work)
+static unsigned long shift_and_mask(unsigned long v, u32 shift, u32 bits)
 {
-	unsigned long data = atomic_long_read(&work->data);
+	return (v >> shift) & ((1 << bits) - 1);
+}
 
-	if (data & WORK_STRUCT_PWQ)
-		return work_struct_pwq(data)->pool->id;
+static void work_offqd_unpack(struct work_offq_data *offqd, unsigned long data)
+{
+	WARN_ON_ONCE(data & WORK_STRUCT_PWQ);
 
-	return data >> WORK_OFFQ_POOL_SHIFT;
+	offqd->pool_id = shift_and_mask(data, WORK_OFFQ_POOL_SHIFT,
+					WORK_OFFQ_POOL_BITS);
+	offqd->flags = data & WORK_OFFQ_FLAG_MASK;
 }
 
-static void mark_work_canceling(struct work_struct *work)
+static unsigned long work_offqd_pack_flags(struct work_offq_data *offqd)
 {
-	unsigned long pool_id = get_work_pool_id(work);
-
-	pool_id <<= WORK_OFFQ_POOL_SHIFT;
-	set_work_data(work, pool_id | WORK_STRUCT_PENDING | WORK_OFFQ_CANCELING);
+	return (unsigned long)offqd->flags;
 }
 
 static bool work_is_canceling(struct work_struct *work)
@@ -4186,6 +4185,7 @@ EXPORT_SYMBOL(flush_rcu_work);
 
 static bool __cancel_work(struct work_struct *work, u32 cflags)
 {
+	struct work_offq_data offqd;
 	unsigned long irq_flags;
 	int ret;
 
@@ -4196,19 +4196,26 @@ static bool __cancel_work(struct work_struct *work, u32 cflags)
 	if (unlikely(ret < 0))
 		return false;
 
-	set_work_pool_and_clear_pending(work, get_work_pool_id(work), 0);
+	work_offqd_unpack(&offqd, *work_data_bits(work));
+	set_work_pool_and_clear_pending(work, offqd.pool_id,
+					work_offqd_pack_flags(&offqd));
 	local_irq_restore(irq_flags);
 	return ret;
 }
 
 static bool __cancel_work_sync(struct work_struct *work, u32 cflags)
 {
+	struct work_offq_data offqd;
 	unsigned long irq_flags;
 	bool ret;
 
 	/* claim @work and tell other tasks trying to grab @work to back off */
 	ret = work_grab_pending(work, cflags, &irq_flags);
-	mark_work_canceling(work);
+
+	work_offqd_unpack(&offqd, *work_data_bits(work));
+	offqd.flags |= WORK_OFFQ_CANCELING;
+	set_work_pool_and_keep_pending(work, offqd.pool_id,
+				       work_offqd_pack_flags(&offqd));
 	local_irq_restore(irq_flags);
 
 	/*
@@ -4218,12 +4225,16 @@ static bool __cancel_work_sync(struct work_struct *work, u32 cflags)
 	if (wq_online)
 		__flush_work(work, true);
 
+	work_offqd_unpack(&offqd, *work_data_bits(work));
+
 	/*
 	 * smp_mb() at the end of set_work_pool_and_clear_pending() is paired
 	 * with prepare_to_wait() above so that either waitqueue_active() is
 	 * visible here or !work_is_canceling() is visible there.
 	 */
-	set_work_pool_and_clear_pending(work, WORK_OFFQ_POOL_NONE, 0);
+	offqd.flags &= ~WORK_OFFQ_CANCELING;
+	set_work_pool_and_clear_pending(work, WORK_OFFQ_POOL_NONE,
+					work_offqd_pack_flags(&offqd));
 
 	if (waitqueue_active(&wq_cancel_waitq))
 		__wake_up(&wq_cancel_waitq, TASK_NORMAL, 1, work);

From patchwork Tue Feb 27 17:28:13 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 207345
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
    allen.lkml@gmail.com, kernel-team@meta.com, boqun.feng@gmail.com,
    tglx@linutronix.de, peterz@infradead.org, romain.perier@gmail.com,
    mingo@kernel.org, Tejun Heo
Subject: [PATCH 2/6] workqueue: Implement disable/enable for (delayed) work items
Date: Tue, 27 Feb 2024 07:28:13 -1000
Message-ID: <20240227172852.2386358-3-tj@kernel.org>
In-Reply-To: <20240227172852.2386358-1-tj@kernel.org>
References: <20240227172852.2386358-1-tj@kernel.org>

While (delayed) work items could be flushed and canceled, there was no way
to prevent them from being queued in the future. While this didn't lead to
functional deficiencies, it sometimes required a bit more effort from
workqueue users to e.g. sequence shutdown steps with more care.

Workqueue is currently in the process of replacing tasklet, which does
support disabling and enabling. The feature is used relatively widely to,
for example, temporarily suppress the main path while a control plane
operation (reset or config change) is in progress.

To enable easy conversion of tasklet users, and as it seems like an
inherently useful feature, this patch implements disabling and enabling of
work items.

- A work item carries a 16bit disable count in work->data while not queued.
  The access to the count is synchronized by the PENDING bit like all other
  parts of work->data.

- If the count is non-zero, the work item cannot be queued.
Any attempt to queue the work item fails and returns %false. - disable_work[_sync](), enable_work(), disable_delayed_work[_sync]() and enable_delayed_work() are added. v3: enable_work() was using local_irq_enable() instead of local_irq_restore() to undo IRQ-disable by work_grab_pending(). This is awkward now and will become incorrect as enable_work() will later be used from IRQ context too. (Lai) v2: Lai noticed that queue_work_node() wasn't checking the disable count. Fixed. queue_rcu_work() is updated to trigger warning if the inner work item is disabled. Signed-off-by: Tejun Heo Cc: Lai Jiangshan --- include/linux/workqueue.h | 18 +++- kernel/workqueue.c | 177 +++++++++++++++++++++++++++++++++++--- 2 files changed, 182 insertions(+), 13 deletions(-) diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h index e15fc77bf2e2..f25915e47efb 100644 --- a/include/linux/workqueue.h +++ b/include/linux/workqueue.h @@ -51,20 +51,23 @@ enum work_bits { * data contains off-queue information when !WORK_STRUCT_PWQ. * * MSB - * [ pool ID ] [ OFFQ flags ] [ STRUCT flags ] - * 1 bit 4 or 5 bits + * [ pool ID ] [ disable depth ] [ OFFQ flags ] [ STRUCT flags ] + * 16 bits 1 bit 4 or 5 bits */ WORK_OFFQ_FLAG_SHIFT = WORK_STRUCT_FLAG_BITS, WORK_OFFQ_CANCELING_BIT = WORK_OFFQ_FLAG_SHIFT, WORK_OFFQ_FLAG_END, WORK_OFFQ_FLAG_BITS = WORK_OFFQ_FLAG_END - WORK_OFFQ_FLAG_SHIFT, + WORK_OFFQ_DISABLE_SHIFT = WORK_OFFQ_FLAG_SHIFT + WORK_OFFQ_FLAG_BITS, + WORK_OFFQ_DISABLE_BITS = 16, + /* * When a work item is off queue, the high bits encode off-queue flags * and the last pool it was on. Cap pool ID to 31 bits and use the * highest number to indicate that no pool is associated. */ - WORK_OFFQ_POOL_SHIFT = WORK_OFFQ_FLAG_SHIFT + WORK_OFFQ_FLAG_BITS, + WORK_OFFQ_POOL_SHIFT = WORK_OFFQ_DISABLE_SHIFT + WORK_OFFQ_DISABLE_BITS, WORK_OFFQ_LEFT = BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT, WORK_OFFQ_POOL_BITS = WORK_OFFQ_LEFT <= 31 ? 
WORK_OFFQ_LEFT : 31, }; @@ -98,6 +101,7 @@ enum wq_misc_consts { /* Convenience constants - of type 'unsigned long', not 'enum'! */ #define WORK_OFFQ_CANCELING (1ul << WORK_OFFQ_CANCELING_BIT) #define WORK_OFFQ_FLAG_MASK (((1ul << WORK_OFFQ_FLAG_BITS) - 1) << WORK_OFFQ_FLAG_SHIFT) +#define WORK_OFFQ_DISABLE_MASK (((1ul << WORK_OFFQ_DISABLE_BITS) - 1) << WORK_OFFQ_DISABLE_SHIFT) #define WORK_OFFQ_POOL_NONE ((1ul << WORK_OFFQ_POOL_BITS) - 1) #define WORK_STRUCT_NO_POOL (WORK_OFFQ_POOL_NONE << WORK_OFFQ_POOL_SHIFT) #define WORK_STRUCT_PWQ_MASK (~((1ul << WORK_STRUCT_PWQ_SHIFT) - 1)) @@ -556,6 +560,14 @@ extern bool flush_delayed_work(struct delayed_work *dwork); extern bool cancel_delayed_work(struct delayed_work *dwork); extern bool cancel_delayed_work_sync(struct delayed_work *dwork); +extern bool disable_work(struct work_struct *work); +extern bool disable_work_sync(struct work_struct *work); +extern bool enable_work(struct work_struct *work); + +extern bool disable_delayed_work(struct delayed_work *dwork); +extern bool disable_delayed_work_sync(struct delayed_work *dwork); +extern bool enable_delayed_work(struct delayed_work *dwork); + extern bool flush_rcu_work(struct rcu_work *rwork); extern void workqueue_set_max_active(struct workqueue_struct *wq, diff --git a/kernel/workqueue.c b/kernel/workqueue.c index ecd46fbed60b..a2f2847d464b 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -98,6 +98,7 @@ enum worker_flags { enum work_cancel_flags { WORK_CANCEL_DELAYED = 1 << 0, /* canceling a delayed_work */ + WORK_CANCEL_DISABLE = 1 << 1, /* canceling to disable */ }; enum wq_internal_consts { @@ -393,6 +394,7 @@ struct wq_pod_type { struct work_offq_data { u32 pool_id; + u32 disable; u32 flags; }; @@ -907,12 +909,15 @@ static void work_offqd_unpack(struct work_offq_data *offqd, unsigned long data) offqd->pool_id = shift_and_mask(data, WORK_OFFQ_POOL_SHIFT, WORK_OFFQ_POOL_BITS); + offqd->disable = shift_and_mask(data, WORK_OFFQ_DISABLE_SHIFT, + 
WORK_OFFQ_DISABLE_BITS); offqd->flags = data & WORK_OFFQ_FLAG_MASK; } static unsigned long work_offqd_pack_flags(struct work_offq_data *offqd) { - return (unsigned long)offqd->flags; + return ((unsigned long)offqd->disable << WORK_OFFQ_DISABLE_SHIFT) | + ((unsigned long)offqd->flags); } static bool work_is_canceling(struct work_struct *work) @@ -2405,6 +2410,21 @@ static void __queue_work(int cpu, struct workqueue_struct *wq, rcu_read_unlock(); } +static bool clear_pending_if_disabled(struct work_struct *work) +{ + unsigned long data = *work_data_bits(work); + struct work_offq_data offqd; + + if (likely((data & WORK_STRUCT_PWQ) || + !(data & WORK_OFFQ_DISABLE_MASK))) + return false; + + work_offqd_unpack(&offqd, data); + set_work_pool_and_clear_pending(work, offqd.pool_id, + work_offqd_pack_flags(&offqd)); + return true; +} + /** * queue_work_on - queue work on specific cpu * @cpu: CPU number to execute work on @@ -2427,7 +2447,8 @@ bool queue_work_on(int cpu, struct workqueue_struct *wq, local_irq_save(irq_flags); - if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) { + if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work)) && + !clear_pending_if_disabled(work)) { __queue_work(cpu, wq, work); ret = true; } @@ -2505,7 +2526,8 @@ bool queue_work_node(int node, struct workqueue_struct *wq, local_irq_save(irq_flags); - if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) { + if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work)) && + !clear_pending_if_disabled(work)) { int cpu = select_numa_node_cpu(node); __queue_work(cpu, wq, work); @@ -2587,7 +2609,8 @@ bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq, /* read the comment in __queue_work() */ local_irq_save(irq_flags); - if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) { + if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work)) && + !clear_pending_if_disabled(work)) { __queue_delayed_work(cpu, wq, dwork, 
delay); ret = true; } @@ -2660,7 +2683,12 @@ bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork) { struct work_struct *work = &rwork->work; - if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) { + /* + * rcu_work can't be canceled or disabled. Warn if the user reached + * inside @rwork and disabled the inner work. + */ + if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work)) && + !WARN_ON_ONCE(clear_pending_if_disabled(work))) { rwork->wq = wq; call_rcu_hurry(&rwork->rcu, rcu_work_rcufn); return true; @@ -4183,20 +4211,46 @@ bool flush_rcu_work(struct rcu_work *rwork) } EXPORT_SYMBOL(flush_rcu_work); +static void work_offqd_disable(struct work_offq_data *offqd) +{ + const unsigned long max = (1lu << WORK_OFFQ_DISABLE_BITS) - 1; + + if (likely(offqd->disable < max)) + offqd->disable++; + else + WARN_ONCE(true, "workqueue: work disable count overflowed\n"); +} + +static void work_offqd_enable(struct work_offq_data *offqd) +{ + if (likely(offqd->disable > 0)) + offqd->disable--; + else + WARN_ONCE(true, "workqueue: work disable count underflowed\n"); +} + static bool __cancel_work(struct work_struct *work, u32 cflags) { struct work_offq_data offqd; unsigned long irq_flags; int ret; - do { - ret = try_to_grab_pending(work, cflags, &irq_flags); - } while (unlikely(ret == -EAGAIN)); + if (cflags & WORK_CANCEL_DISABLE) { + ret = work_grab_pending(work, cflags, &irq_flags); + } else { + do { + ret = try_to_grab_pending(work, cflags, &irq_flags); + } while (unlikely(ret == -EAGAIN)); - if (unlikely(ret < 0)) - return false; + if (unlikely(ret < 0)) + return false; + } work_offqd_unpack(&offqd, *work_data_bits(work)); + + if (cflags & WORK_CANCEL_DISABLE) + work_offqd_disable(&offqd); + set_work_pool_and_clear_pending(work, offqd.pool_id, work_offqd_pack_flags(&offqd)); local_irq_restore(irq_flags); @@ -4213,6 +4267,10 @@ static bool __cancel_work_sync(struct work_struct *work, u32 cflags) ret = work_grab_pending(work, 
cflags, &irq_flags); work_offqd_unpack(&offqd, *work_data_bits(work)); + + if (cflags & WORK_CANCEL_DISABLE) + work_offqd_disable(&offqd); + offqd.flags |= WORK_OFFQ_CANCELING; set_work_pool_and_keep_pending(work, offqd.pool_id, work_offqd_pack_flags(&offqd)); @@ -4312,6 +4370,105 @@ bool cancel_delayed_work_sync(struct delayed_work *dwork) } EXPORT_SYMBOL(cancel_delayed_work_sync); +/** + * disable_work - Disable and cancel a work item + * @work: work item to disable + * + * Disable @work by incrementing its disable count and cancel it if currently + * pending. As long as the disable count is non-zero, any attempt to queue @work + * will fail and return %false. The maximum supported disable depth is 2 to the + * power of %WORK_OFFQ_DISABLE_BITS, currently 65536. + * + * Must be called from a sleepable context. Returns %true if @work was pending, + * %false otherwise. + */ +bool disable_work(struct work_struct *work) +{ + return __cancel_work(work, WORK_CANCEL_DISABLE); +} +EXPORT_SYMBOL_GPL(disable_work); + +/** + * disable_work_sync - Disable, cancel and drain a work item + * @work: work item to disable + * + * Similar to disable_work() but also wait for @work to finish if currently + * executing. + * + * Must be called from a sleepable context. Returns %true if @work was pending, + * %false otherwise. + */ +bool disable_work_sync(struct work_struct *work) +{ + return __cancel_work_sync(work, WORK_CANCEL_DISABLE); +} +EXPORT_SYMBOL_GPL(disable_work_sync); + +/** + * enable_work - Enable a work item + * @work: work item to enable + * + * Undo disable_work[_sync]() by decrementing @work's disable count. @work can + * only be queued if its disable count is 0. + * + * Must be called from a sleepable context. Returns %true if the disable count + * reached 0. Otherwise, %false. 
+ */ +bool enable_work(struct work_struct *work) +{ + struct work_offq_data offqd; + unsigned long irq_flags; + + work_grab_pending(work, 0, &irq_flags); + + work_offqd_unpack(&offqd, *work_data_bits(work)); + work_offqd_enable(&offqd); + set_work_pool_and_clear_pending(work, offqd.pool_id, + work_offqd_pack_flags(&offqd)); + local_irq_restore(irq_flags); + + return !offqd.disable; +} +EXPORT_SYMBOL_GPL(enable_work); + +/** + * disable_delayed_work - Disable and cancel a delayed work item + * @dwork: delayed work item to disable + * + * disable_work() for delayed work items. + */ +bool disable_delayed_work(struct delayed_work *dwork) +{ + return __cancel_work(&dwork->work, + WORK_CANCEL_DELAYED | WORK_CANCEL_DISABLE); +} +EXPORT_SYMBOL_GPL(disable_delayed_work); + +/** + * disable_delayed_work_sync - Disable, cancel and drain a delayed work item + * @dwork: delayed work item to disable + * + * disable_work_sync() for delayed work items. + */ +bool disable_delayed_work_sync(struct delayed_work *dwork) +{ + return __cancel_work_sync(&dwork->work, + WORK_CANCEL_DELAYED | WORK_CANCEL_DISABLE); +} +EXPORT_SYMBOL_GPL(disable_delayed_work_sync); + +/** + * enable_delayed_work - Enable a delayed work item + * @dwork: delayed work item to enable + * + * enable_work() for delayed work items. 
+ */ +bool enable_delayed_work(struct delayed_work *dwork) +{ + return enable_work(&dwork->work); +} +EXPORT_SYMBOL_GPL(enable_delayed_work); + /** * schedule_on_each_cpu - execute a function synchronously on each online CPU * @func: the function to call

From patchwork Tue Feb 27 17:28:14 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 207347
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, boqun.feng@gmail.com, tglx@linutronix.de, peterz@infradead.org, romain.perier@gmail.com, mingo@kernel.org, Tejun Heo
Subject: [PATCH 3/6] workqueue: Remove WORK_OFFQ_CANCELING
Date: Tue, 27 Feb 2024 07:28:14 -1000
Message-ID: <20240227172852.2386358-4-tj@kernel.org>
In-Reply-To: <20240227172852.2386358-1-tj@kernel.org>
References: <20240227172852.2386358-1-tj@kernel.org>

cancel[_delayed]_work_sync() guarantees that it can shut down self-requeueing work items. To achieve that, it grabs and then holds the WORK_STRUCT_PENDING bit while flushing the currently executing instance. As the PENDING bit is set, all queueing attempts, including the self-requeueing ones, fail, and once the currently executing instance is flushed, the work item should be idle as long as someone else isn't actively queueing it.

This means that the cancel_work_sync path may hold the PENDING bit set while flushing the target work item. This isn't a problem for the queueing path - it can just fail, which is the desired effect. It doesn't affect flush. It doesn't matter to cancel_work either, as it can just report that the work item has been successfully canceled. However, if there's another cancel_work_sync attempt on the work item, it can't simply fail or report success, as that would breach the guarantee that it should provide.
cancel_work_sync has to wait for and grab that PENDING bit and go through the motions.

WORK_OFFQ_CANCELING and wq_cancel_waitq are what implement this cancel_work_sync to cancel_work_sync wait mechanism. When a work item is being canceled, WORK_OFFQ_CANCELING is also set on it, and other cancel_work_sync attempts wait on that bit to be cleared using the wait queue.

While this works, it's an isolated wart which doesn't fit with the rest of the flush and cancel mechanisms, and it forces enable_work() and disable_work() to require a sleepable context, which hampers their usability.

Now that a work item can be disabled, we can use that to block queueing while cancel_work_sync is in progress. Instead of holding the PENDING bit, it can temporarily disable the work item, flush it and then re-enable it, achieving the same end result of blocking queueing while canceling and thus enabling cancellation of self-requeueing work items.

- WORK_OFFQ_CANCELING and the surrounding mechanisms are removed.

- work_grab_pending() is now simpler, no longer has to wait for a blocking operation and thus can be called from any context.

- With work_grab_pending() simplified, there is no need to use try_to_grab_pending() directly. All users are converted to use work_grab_pending().

- __cancel_work_sync() is updated to call __cancel_work() with WORK_CANCEL_DISABLE to cancel and plug racing queueing attempts. It then flushes and re-enables the work item if necessary.

- These changes allow disable_work() and enable_work() to be called from any context.

v2: Lai pointed out that mod_delayed_work_on() needs to check the disable count before queueing the delayed work item. Added clear_pending_if_disabled() call.
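The disable-based scheme described above can be sketched as a minimal user-space model. This is an illustration only, not the kernel implementation: the mock_* names and the struct layout are invented for the example, and the real code packs the disable depth into the off-queue portion of work->data and must also handle the on-timer and on-worklist states.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a work item: a PENDING bit and a disable depth. */
struct mock_work {
	bool pending;		/* models WORK_STRUCT_PENDING */
	unsigned int disable;	/* models the disable depth, 0 == enabled */
};

/* Queueing fails while the item is disabled or already pending. */
static bool mock_queue_work(struct mock_work *w)
{
	if (w->disable > 0 || w->pending)
		return false;
	w->pending = true;
	return true;
}

/* Bump the disable depth and cancel a pending instance, like disable_work(). */
static bool mock_disable_work(struct mock_work *w)
{
	bool was_pending = w->pending;

	w->disable++;
	w->pending = false;
	return was_pending;
}

/* Drop the disable depth, like enable_work(). */
static void mock_enable_work(struct mock_work *w)
{
	if (w->disable > 0)
		w->disable--;
}

/*
 * cancel_sync without OFFQ_CANCELING: temporarily disable the item so that
 * racing (self-)queueing attempts fail, flush the running instance, then
 * re-enable it.
 */
static bool mock_cancel_work_sync(struct mock_work *w)
{
	bool was_pending = mock_disable_work(w);

	/* ... __flush_work() would wait for the executing instance here ... */
	mock_enable_work(w);
	return was_pending;
}
```

The key property the model shows is that nothing needs to wait for another canceler: any concurrent cancel attempt simply increments the disable depth, so the sleepable-context requirement disappears.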
Signed-off-by: Tejun Heo Cc: Lai Jiangshan --- include/linux/workqueue.h | 4 +- kernel/workqueue.c | 140 ++++++-------------------------------- 2 files changed, 20 insertions(+), 124 deletions(-) diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h index f25915e47efb..86483743ad28 100644 --- a/include/linux/workqueue.h +++ b/include/linux/workqueue.h @@ -52,10 +52,9 @@ enum work_bits { * * MSB * [ pool ID ] [ disable depth ] [ OFFQ flags ] [ STRUCT flags ] - * 16 bits 1 bit 4 or 5 bits + * 16 bits 0 bits 4 or 5 bits */ WORK_OFFQ_FLAG_SHIFT = WORK_STRUCT_FLAG_BITS, - WORK_OFFQ_CANCELING_BIT = WORK_OFFQ_FLAG_SHIFT, WORK_OFFQ_FLAG_END, WORK_OFFQ_FLAG_BITS = WORK_OFFQ_FLAG_END - WORK_OFFQ_FLAG_SHIFT, @@ -99,7 +98,6 @@ enum wq_misc_consts { }; /* Convenience constants - of type 'unsigned long', not 'enum'! */ -#define WORK_OFFQ_CANCELING (1ul << WORK_OFFQ_CANCELING_BIT) #define WORK_OFFQ_FLAG_MASK (((1ul << WORK_OFFQ_FLAG_BITS) - 1) << WORK_OFFQ_FLAG_SHIFT) #define WORK_OFFQ_DISABLE_MASK (((1ul << WORK_OFFQ_DISABLE_BITS) - 1) << WORK_OFFQ_DISABLE_SHIFT) #define WORK_OFFQ_POOL_NONE ((1ul << WORK_OFFQ_POOL_BITS) - 1) diff --git a/kernel/workqueue.c b/kernel/workqueue.c index a2f2847d464b..07e77130227c 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -495,12 +495,6 @@ static struct workqueue_attrs *unbound_std_wq_attrs[NR_STD_WORKER_POOLS]; /* I: attributes used when instantiating ordered pools on demand */ static struct workqueue_attrs *ordered_wq_attrs[NR_STD_WORKER_POOLS]; -/* - * Used to synchronize multiple cancel_sync attempts on the same work item. See - * work_grab_pending() and __cancel_work_sync(). - */ -static DECLARE_WAIT_QUEUE_HEAD(wq_cancel_waitq); - /* * I: kthread_worker to release pwq's. pwq release needs to be bounced to a * process context while holding a pool lock. Bounce to a dedicated kthread @@ -782,11 +776,6 @@ static int work_next_color(int color) * corresponding to a work. 
Pool is available once the work has been * queued anywhere after initialization until it is sync canceled. pwq is * available only while the work item is queued. - * - * %WORK_OFFQ_CANCELING is used to mark a work item which is being - * canceled. While being canceled, a work item may have its PENDING set - * but stay off timer and worklist for arbitrarily long and nobody should - * try to steal the PENDING bit. */ static inline void set_work_data(struct work_struct *work, unsigned long data) { @@ -920,13 +909,6 @@ static unsigned long work_offqd_pack_flags(struct work_offq_data *offqd) ((unsigned long)offqd->flags); } -static bool work_is_canceling(struct work_struct *work) -{ - unsigned long data = atomic_long_read(&work->data); - - return !(data & WORK_STRUCT_PWQ) && (data & WORK_OFFQ_CANCELING); -} - /* * Policy functions. These define the policies on how the global worker * pools are managed. Unless noted otherwise, these functions assume that @@ -2055,8 +2037,6 @@ static void pwq_dec_nr_in_flight(struct pool_workqueue *pwq, unsigned long work_ * 1 if @work was pending and we successfully stole PENDING * 0 if @work was idle and we claimed PENDING * -EAGAIN if PENDING couldn't be grabbed at the moment, safe to busy-retry - * -ENOENT if someone else is canceling @work, this state may persist - * for arbitrarily long * ======== ================================================================ * * Note: @@ -2152,26 +2132,9 @@ static int try_to_grab_pending(struct work_struct *work, u32 cflags, fail: rcu_read_unlock(); local_irq_restore(*irq_flags); - if (work_is_canceling(work)) - return -ENOENT; - cpu_relax(); return -EAGAIN; } -struct cwt_wait { - wait_queue_entry_t wait; - struct work_struct *work; -}; - -static int cwt_wakefn(wait_queue_entry_t *wait, unsigned mode, int sync, void *key) -{ - struct cwt_wait *cwait = container_of(wait, struct cwt_wait, wait); - - if (cwait->work != key) - return 0; - return autoremove_wake_function(wait, mode, sync, key); -} - 
/** * work_grab_pending - steal work item from worklist and disable irq * @work: work item to steal @@ -2181,7 +2144,7 @@ static int cwt_wakefn(wait_queue_entry_t *wait, unsigned mode, int sync, void *k * Grab PENDING bit of @work. @work can be in any stable state - idle, on timer * or on worklist. * - * Must be called in process context. IRQ is disabled on return with IRQ state + * Can be called from any context. IRQ is disabled on return with IRQ state * stored in *@irq_flags. The caller is responsible for re-enabling it using * local_irq_restore(). * @@ -2190,41 +2153,14 @@ static int cwt_wakefn(wait_queue_entry_t *wait, unsigned mode, int sync, void *k static bool work_grab_pending(struct work_struct *work, u32 cflags, unsigned long *irq_flags) { - struct cwt_wait cwait; int ret; - might_sleep(); -repeat: - ret = try_to_grab_pending(work, cflags, irq_flags); - if (likely(ret >= 0)) - return ret; - if (ret != -ENOENT) - goto repeat; - - /* - * Someone is already canceling. Wait for it to finish. flush_work() - * doesn't work for PREEMPT_NONE because we may get woken up between - * @work's completion and the other canceling task resuming and clearing - * CANCELING - flush_work() will return false immediately as @work is no - * longer busy, try_to_grab_pending() will return -ENOENT as @work is - * still being canceled and the other canceling task won't be able to - * clear CANCELING as we're hogging the CPU. - * - * Let's wait for completion using a waitqueue. As this may lead to the - * thundering herd problem, use a custom wake function which matches - * @work along with exclusive wait and wakeup. 
- */ - init_wait(&cwait.wait); - cwait.wait.func = cwt_wakefn; - cwait.work = work; - - prepare_to_wait_exclusive(&wq_cancel_waitq, &cwait.wait, - TASK_UNINTERRUPTIBLE); - if (work_is_canceling(work)) - schedule(); - finish_wait(&wq_cancel_waitq, &cwait.wait); - - goto repeat; + while (true) { + ret = try_to_grab_pending(work, cflags, irq_flags); + if (ret >= 0) + return ret; + cpu_relax(); + } } /** @@ -2642,19 +2578,14 @@ bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay) { unsigned long irq_flags; - int ret; + bool ret; - do { - ret = try_to_grab_pending(&dwork->work, WORK_CANCEL_DELAYED, - &irq_flags); - } while (unlikely(ret == -EAGAIN)); + ret = work_grab_pending(&dwork->work, WORK_CANCEL_DELAYED, &irq_flags); - if (likely(ret >= 0)) { + if (!clear_pending_if_disabled(&dwork->work)) __queue_delayed_work(cpu, wq, dwork, delay); - local_irq_restore(irq_flags); - } - /* -ENOENT from try_to_grab_pending() becomes %true */ + local_irq_restore(irq_flags); return ret; } EXPORT_SYMBOL_GPL(mod_delayed_work_on); @@ -4235,16 +4166,7 @@ static bool __cancel_work(struct work_struct *work, u32 cflags) unsigned long irq_flags; int ret; - if (cflags & WORK_CANCEL_DISABLE) { - ret = work_grab_pending(work, cflags, &irq_flags); - } else { - do { - ret = try_to_grab_pending(work, cflags, &irq_flags); - } while (unlikely(ret == -EAGAIN)); - - if (unlikely(ret < 0)) - return false; - } + ret = work_grab_pending(work, cflags, &irq_flags); work_offqd_unpack(&offqd, *work_data_bits(work)); @@ -4259,22 +4181,9 @@ static bool __cancel_work(struct work_struct *work, u32 cflags) static bool __cancel_work_sync(struct work_struct *work, u32 cflags) { - struct work_offq_data offqd; - unsigned long irq_flags; bool ret; - /* claim @work and tell other tasks trying to grab @work to back off */ - ret = work_grab_pending(work, cflags, &irq_flags); - - work_offqd_unpack(&offqd, *work_data_bits(work)); - - if (cflags & 
WORK_CANCEL_DISABLE) - work_offqd_disable(&offqd); - - offqd.flags |= WORK_OFFQ_CANCELING; - set_work_pool_and_keep_pending(work, offqd.pool_id, - work_offqd_pack_flags(&offqd)); - local_irq_restore(irq_flags); + ret = __cancel_work(work, cflags | WORK_CANCEL_DISABLE); /* * Skip __flush_work() during early boot when we know that @work isn't @@ -4283,19 +4192,8 @@ static bool __cancel_work_sync(struct work_struct *work, u32 cflags) if (wq_online) __flush_work(work, true); - work_offqd_unpack(&offqd, *work_data_bits(work)); - - /* - * smp_mb() at the end of set_work_pool_and_clear_pending() is paired - * with prepare_to_wait() above so that either waitqueue_active() is - * visible here or !work_is_canceling() is visible there. - */ - offqd.flags &= ~WORK_OFFQ_CANCELING; - set_work_pool_and_clear_pending(work, WORK_OFFQ_POOL_NONE, - work_offqd_pack_flags(&offqd)); - - if (waitqueue_active(&wq_cancel_waitq)) - __wake_up(&wq_cancel_waitq, TASK_NORMAL, 1, work); + if (!(cflags & WORK_CANCEL_DISABLE)) + enable_work(work); return ret; } @@ -4379,8 +4277,8 @@ EXPORT_SYMBOL(cancel_delayed_work_sync); * will fail and return %false. The maximum supported disable depth is 2 to the * power of %WORK_OFFQ_DISABLE_BITS, currently 65536. * - * Must be called from a sleepable context. Returns %true if @work was pending, - * %false otherwise. + * Can be called from any context. Returns %true if @work was pending, %false + * otherwise. */ bool disable_work(struct work_struct *work) { @@ -4411,8 +4309,8 @@ EXPORT_SYMBOL_GPL(disable_work_sync); * Undo disable_work[_sync]() by decrementing @work's disable count. @work can * only be queued if its disable count is 0. * - * Must be called from a sleepable context. Returns %true if the disable count - * reached 0. Otherwise, %false. + * Can be called from any context. Returns %true if the disable count reached 0. + * Otherwise, %false. 
+ */ bool enable_work(struct work_struct *work) {

From patchwork Tue Feb 27 17:28:15 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 207346
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, boqun.feng@gmail.com, tglx@linutronix.de, peterz@infradead.org, romain.perier@gmail.com, mingo@kernel.org, Tejun Heo
Subject: [PATCH 4/6] workqueue: Remember whether a work item was on a BH workqueue
Date: Tue, 27 Feb 2024 07:28:15 -1000
Message-ID: <20240227172852.2386358-5-tj@kernel.org>
In-Reply-To: <20240227172852.2386358-1-tj@kernel.org>
References: <20240227172852.2386358-1-tj@kernel.org>

Add an off-queue flag, WORK_OFFQ_BH, that indicates whether the last workqueue the work item was on was a BH one. This will be used in the cancel_sync path to test whether a work item is BH, in order to implement atomic cancel_sync'ing for BH work items.
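The off-queue data word this flag lives in can be illustrated with a small user-space packing sketch. The MOCK_* constants below are assumptions chosen for the example; the authoritative layout and values are the WORK_OFFQ_* enums in include/linux/workqueue.h and may differ.

```c
#include <assert.h>

/*
 * Illustrative layout modeled on the comment in workqueue.h:
 *   [ pool ID ] [ disable depth ] [ OFFQ flags ] [ STRUCT flags ]
 * The widths here are examples, not the kernel's real values.
 */
enum {
	MOCK_STRUCT_FLAG_BITS   = 4,			/* [ STRUCT flags ] */
	MOCK_OFFQ_FLAG_SHIFT    = MOCK_STRUCT_FLAG_BITS,
	MOCK_OFFQ_BH_BIT        = MOCK_OFFQ_FLAG_SHIFT,	/* the new BH flag */
	MOCK_OFFQ_FLAG_BITS     = 1,
	MOCK_OFFQ_DISABLE_SHIFT = MOCK_OFFQ_FLAG_SHIFT + MOCK_OFFQ_FLAG_BITS,
	MOCK_OFFQ_DISABLE_BITS  = 16,			/* [ disable depth ] */
	MOCK_OFFQ_POOL_SHIFT    = MOCK_OFFQ_DISABLE_SHIFT +
				  MOCK_OFFQ_DISABLE_BITS,
};

#define MOCK_OFFQ_BH		(1ul << MOCK_OFFQ_BH_BIT)
#define MOCK_OFFQ_DISABLE_MASK \
	(((1ul << MOCK_OFFQ_DISABLE_BITS) - 1) << MOCK_OFFQ_DISABLE_SHIFT)

/* Pack pool id, disable depth and off-queue flags into one data word. */
static unsigned long mock_pack(unsigned long pool_id, unsigned long disable,
			       unsigned long flags)
{
	return (pool_id << MOCK_OFFQ_POOL_SHIFT) |
	       (disable << MOCK_OFFQ_DISABLE_SHIFT) | flags;
}

/* Was the item last on a BH workqueue? */
static int mock_offq_bh(unsigned long data)
{
	return !!(data & MOCK_OFFQ_BH);
}
```

Because the flag survives in the off-queue word after the item is dequeued, a later cancel_sync can tell whether it is dealing with a BH work item without any reference to the (possibly gone) pool_workqueue.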
Signed-off-by: Tejun Heo
---
 include/linux/workqueue.h |  4 +++-
 kernel/workqueue.c        | 10 ++++++++--
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 86483743ad28..7710cd52f7f0 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -52,9 +52,10 @@ enum work_bits {
	 *
	 * MSB
	 * [ pool ID ] [ disable depth ] [ OFFQ flags ] [ STRUCT flags ]
-	 *     16 bits         0 bits       4 or 5 bits
+	 *     16 bits         1 bit        4 or 5 bits
	 */
	WORK_OFFQ_FLAG_SHIFT	= WORK_STRUCT_FLAG_BITS,
+	WORK_OFFQ_BH_BIT	= WORK_OFFQ_FLAG_SHIFT,
	WORK_OFFQ_FLAG_END,
	WORK_OFFQ_FLAG_BITS	= WORK_OFFQ_FLAG_END - WORK_OFFQ_FLAG_SHIFT,

@@ -98,6 +99,7 @@ enum wq_misc_consts {
 };

 /* Convenience constants - of type 'unsigned long', not 'enum'! */
+#define WORK_OFFQ_BH		(1ul << WORK_OFFQ_BH_BIT)
 #define WORK_OFFQ_FLAG_MASK	(((1ul << WORK_OFFQ_FLAG_BITS) - 1) << WORK_OFFQ_FLAG_SHIFT)
 #define WORK_OFFQ_DISABLE_MASK	(((1ul << WORK_OFFQ_DISABLE_BITS) - 1) << WORK_OFFQ_DISABLE_SHIFT)
 #define WORK_OFFQ_POOL_NONE	((1ul << WORK_OFFQ_POOL_BITS) - 1)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 07e77130227c..5c71fbd9d854 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -763,6 +763,11 @@ static int work_next_color(int color)
	return (color + 1) % WORK_NR_COLORS;
 }

+static unsigned long pool_offq_flags(struct worker_pool *pool)
+{
+	return (pool->flags & POOL_BH) ? WORK_OFFQ_BH : 0;
+}
+
 /*
  * While queued, %WORK_STRUCT_PWQ is set and non flag bits of a work's data
  * contain the pointer to the queued pwq.  Once execution starts, the flag
@@ -2119,7 +2124,8 @@ static int try_to_grab_pending(struct work_struct *work, u32 cflags,
		 * this destroys work->data needed by the next step, stash it.
		 */
		work_data = *work_data_bits(work);
-		set_work_pool_and_keep_pending(work, pool->id, 0);
+		set_work_pool_and_keep_pending(work, pool->id,
+					       pool_offq_flags(pool));

		/* must be the last step, see the function comment */
		pwq_dec_nr_in_flight(pwq, work_data);
@@ -3171,7 +3177,7 @@ __acquires(&pool->lock)
	 * PENDING and queued state changes happen together while IRQ is
	 * disabled.
	 */
-	set_work_pool_and_clear_pending(work, pool->id, 0);
+	set_work_pool_and_clear_pending(work, pool->id, pool_offq_flags(pool));

	pwq->stats[PWQ_STAT_STARTED]++;
	raw_spin_unlock_irq(&pool->lock);

From patchwork Tue Feb 27 17:28:16 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 207348
Sender: Tejun Heo
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, boqun.feng@gmail.com, tglx@linutronix.de, peterz@infradead.org, romain.perier@gmail.com, mingo@kernel.org, Tejun Heo
Subject: [PATCH 5/6] workqueue: Allow cancel_work_sync() and disable_work() from atomic contexts on BH work items
Date: Tue, 27 Feb 2024 07:28:16 -1000
Message-ID: <20240227172852.2386358-6-tj@kernel.org>
In-Reply-To: <20240227172852.2386358-1-tj@kernel.org>
References: <20240227172852.2386358-1-tj@kernel.org>

Now that work_grab_pending() can always grab the PENDING bit without
sleeping, the only thing that prevents allowing cancel_work_sync() of a BH
work item from an atomic context is the flushing of the in-flight instance.
When we're flushing a BH work item for cancel_work_sync(), we know that the
work item is not queued and must be executing in a BH context, which means
that it's safe to busy-wait for its completion from a non-hardirq atomic
context.

This patch updates __flush_work() so that it busy-waits when flushing a BH
work item for cancel_work_sync(). might_sleep() is pushed from
start_flush_work() to its callers - when operating on a BH work item,
__cancel_work_sync() now enforces !in_hardirq() instead of might_sleep().
This allows cancel_work_sync() and disable_work() to be called from
non-hardirq atomic contexts on BH work items.
v3: In __flush_work(), test WORK_OFFQ_BH to tell whether a work item being
    canceled can be busy waited instead of making start_flush_work() return
    the pool. (Lai)

v2: Lai pointed out that __flush_work() was accessing pool->flags outside
    the RCU critical section protecting the pool pointer. Fix it by testing
    and remembering the result inside the RCU critical section.

Signed-off-by: Tejun Heo
Cc: Lai Jiangshan
---
 kernel/workqueue.c | 74 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 55 insertions(+), 19 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5c71fbd9d854..7d8eaca294c9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4020,8 +4020,6 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
	struct pool_workqueue *pwq;
	struct workqueue_struct *wq;

-	might_sleep();
-
	rcu_read_lock();
	pool = get_work_pool(work);
	if (!pool) {
@@ -4073,6 +4071,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 static bool __flush_work(struct work_struct *work, bool from_cancel)
 {
	struct wq_barrier barr;
+	unsigned long data;

	if (WARN_ON(!wq_online))
		return false;
@@ -4080,13 +4079,41 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
	if (WARN_ON(!work->func))
		return false;

-	if (start_flush_work(work, &barr, from_cancel)) {
-		wait_for_completion(&barr.done);
-		destroy_work_on_stack(&barr.work);
-		return true;
-	} else {
+	if (!start_flush_work(work, &barr, from_cancel))
		return false;
+
+	/*
+	 * start_flush_work() returned %true. If @from_cancel is set, we know
+	 * that @work must have been executing during start_flush_work() and
+	 * can't currently be queued. Its data must contain OFFQ bits. If @work
+	 * was queued on a BH workqueue, we also know that it was running in the
+	 * BH context and thus can be busy-waited.
+	 */
+	data = *work_data_bits(work);
+	if (from_cancel &&
+	    !WARN_ON_ONCE(data & WORK_STRUCT_PWQ) && (data & WORK_OFFQ_BH)) {
+		/*
+		 * On RT, prevent a live lock when %current preempted soft
+		 * interrupt processing or prevents ksoftirqd from running by
+		 * keeping flipping BH. If the BH work item runs on a different
+		 * CPU then this has no effect other than doing the BH
+		 * disable/enable dance for nothing. This is copied from
+		 * kernel/softirq.c::tasklet_unlock_spin_wait().
+		 */
+		while (!try_wait_for_completion(&barr.done)) {
+			if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+				local_bh_disable();
+				local_bh_enable();
+			} else {
+				cpu_relax();
+			}
+		}
+	} else {
+		wait_for_completion(&barr.done);
	}
+
+	destroy_work_on_stack(&barr.work);
+	return true;
 }

 /**
@@ -4102,6 +4129,7 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
  */
 bool flush_work(struct work_struct *work)
 {
+	might_sleep();
	return __flush_work(work, false);
 }
 EXPORT_SYMBOL_GPL(flush_work);
@@ -4191,6 +4219,11 @@ static bool __cancel_work_sync(struct work_struct *work, u32 cflags)

	ret = __cancel_work(work, cflags | WORK_CANCEL_DISABLE);

+	if (*work_data_bits(work) & WORK_OFFQ_BH)
+		WARN_ON_ONCE(in_hardirq());
+	else
+		might_sleep();
+
	/*
	 * Skip __flush_work() during early boot when we know that @work isn't
	 * executing. This allows canceling during early boot.
@@ -4217,19 +4250,19 @@ EXPORT_SYMBOL(cancel_work);
  * cancel_work_sync - cancel a work and wait for it to finish
  * @work: the work to cancel
  *
- * Cancel @work and wait for its execution to finish. This function
- * can be used even if the work re-queues itself or migrates to
- * another workqueue. On return from this function, @work is
- * guaranteed to be not pending or executing on any CPU.
+ * Cancel @work and wait for its execution to finish. This function can be used
+ * even if the work re-queues itself or migrates to another workqueue. On return
+ * from this function, @work is guaranteed to be not pending or executing on any
+ * CPU as long as there aren't racing enqueues.
  *
- * cancel_work_sync(&delayed_work->work) must not be used for
- * delayed_work's. Use cancel_delayed_work_sync() instead.
+ * cancel_work_sync(&delayed_work->work) must not be used for delayed_work's.
+ * Use cancel_delayed_work_sync() instead.
  *
- * The caller must ensure that the workqueue on which @work was last
- * queued can't be destroyed before this function returns.
+ * Must be called from a sleepable context if @work was last queued on a non-BH
+ * workqueue. Can also be called from non-hardirq atomic contexts including BH
+ * if @work was last queued on a BH workqueue.
  *
- * Return:
- * %true if @work was pending, %false otherwise.
+ * Returns %true if @work was pending, %false otherwise.
  */
 bool cancel_work_sync(struct work_struct *work)
 {
@@ -4299,8 +4332,11 @@ EXPORT_SYMBOL_GPL(disable_work);
  * Similar to disable_work() but also wait for @work to finish if currently
  * executing.
  *
- * Must be called from a sleepable context. Returns %true if @work was pending,
- * %false otherwise.
+ * Must be called from a sleepable context if @work was last queued on a non-BH
+ * workqueue. Can also be called from non-hardirq atomic contexts including BH
+ * if @work was last queued on a BH workqueue.
+ *
+ * Returns %true if @work was pending, %false otherwise.
  */
 bool disable_work_sync(struct work_struct *work)
 {

From patchwork Tue Feb 27 17:28:17 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 207349
Sender: Tejun Heo
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, boqun.feng@gmail.com, tglx@linutronix.de, peterz@infradead.org, romain.perier@gmail.com, mingo@kernel.org, Tejun Heo
Subject: [PATCH 6/6] r8152: Convert from tasklet to BH workqueue
Date: Tue, 27 Feb 2024 07:28:17 -1000
Message-ID: <20240227172852.2386358-7-tj@kernel.org>
In-Reply-To: <20240227172852.2386358-1-tj@kernel.org>
References: <20240227172852.2386358-1-tj@kernel.org>

tasklet is being replaced by BH workqueue. No noticeable behavior or
performance changes are expected. The following is how the two APIs map:

- tasklet_setup/init()            -> INIT_WORK()
- tasklet_schedule()              -> queue_work(system_bh_wq, ...)
- tasklet_hi_schedule()           -> queue_work(system_bh_highpri_wq, ...)
- tasklet_disable_nosync()        -> disable_work()
- tasklet_disable[_in_atomic]()   -> disable_work_sync()
- tasklet_enable()                -> enable_work() + queue_work()
- tasklet_kill()                  -> cancel_work_sync()

Note that unlike tasklet_enable(), enable_work() doesn't queue the work
item automatically according to whether the work item was queued while
disabled. While the caller can track this separately, unconditionally
scheduling the work item after enable_work() returns %true should work for
most users.

The r8152 conversion has been tested by repeatedly forcing the device to go
through resets using usbreset under iperf3-generated traffic.
Signed-off-by: Tejun Heo
---
 drivers/net/usb/r8152.c | 44 ++++++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 20 deletions(-)

diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index 9bf2140fd0a1..24e284b9eb38 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -882,7 +882,7 @@ struct r8152 {
 #ifdef CONFIG_PM_SLEEP
	struct notifier_block pm_notifier;
 #endif
-	struct tasklet_struct tx_tl;
+	struct work_struct tx_work;

	struct rtl_ops {
		void (*init)(struct r8152 *tp);
@@ -1948,7 +1948,7 @@ static void write_bulk_callback(struct urb *urb)
		return;

	if (!skb_queue_empty(&tp->tx_queue))
-		tasklet_schedule(&tp->tx_tl);
+		queue_work(system_bh_wq, &tp->tx_work);
 }

 static void intr_callback(struct urb *urb)
@@ -2746,9 +2746,9 @@ static void tx_bottom(struct r8152 *tp)
	} while (res == 0);
 }

-static void bottom_half(struct tasklet_struct *t)
+static void bottom_half(struct work_struct *work)
 {
-	struct r8152 *tp = from_tasklet(tp, t, tx_tl);
+	struct r8152 *tp = container_of(work, struct r8152, tx_work);

	if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
		return;
@@ -2942,7 +2942,7 @@ static netdev_tx_t rtl8152_start_xmit(struct sk_buff *skb,
			schedule_delayed_work(&tp->schedule, 0);
		} else {
			usb_mark_last_busy(tp->udev);
-			tasklet_schedule(&tp->tx_tl);
+			queue_work(system_bh_wq, &tp->tx_work);
		}
	} else if (skb_queue_len(&tp->tx_queue) > tp->tx_qlen) {
		netif_stop_queue(netdev);
@@ -6824,11 +6824,12 @@ static void set_carrier(struct r8152 *tp)
	} else {
		if (netif_carrier_ok(netdev)) {
			netif_carrier_off(netdev);
-			tasklet_disable(&tp->tx_tl);
+			disable_work_sync(&tp->tx_work);
			napi_disable(napi);
			tp->rtl_ops.disable(tp);
			napi_enable(napi);
-			tasklet_enable(&tp->tx_tl);
+			enable_work(&tp->tx_work);
+			queue_work(system_bh_wq, &tp->tx_work);
			netif_info(tp, link, netdev, "carrier off\n");
		}
	}
@@ -6864,7 +6865,7 @@ static void rtl_work_func_t(struct work_struct *work)
	/* don't schedule tasket before linking */
	if (test_and_clear_bit(SCHEDULE_TASKLET, &tp->flags) &&
	    netif_carrier_ok(tp->netdev))
-		tasklet_schedule(&tp->tx_tl);
+		queue_work(system_bh_wq, &tp->tx_work);

	if (test_and_clear_bit(RX_EPROTO, &tp->flags) &&
	    !list_empty(&tp->rx_done))
@@ -6971,7 +6972,7 @@ static int rtl8152_open(struct net_device *netdev)
		goto out_unlock;
	}
	napi_enable(&tp->napi);
-	tasklet_enable(&tp->tx_tl);
+	enable_work(&tp->tx_work);

	mutex_unlock(&tp->control);
@@ -6999,7 +7000,7 @@ static int rtl8152_close(struct net_device *netdev)
 #ifdef CONFIG_PM_SLEEP
	unregister_pm_notifier(&tp->pm_notifier);
 #endif
-	tasklet_disable(&tp->tx_tl);
+	disable_work_sync(&tp->tx_work);
	clear_bit(WORK_ENABLE, &tp->flags);
	usb_kill_urb(tp->intr_urb);
	cancel_delayed_work_sync(&tp->schedule);
@@ -8421,7 +8422,7 @@ static int rtl8152_pre_reset(struct usb_interface *intf)
		return 0;

	netif_stop_queue(netdev);
-	tasklet_disable(&tp->tx_tl);
+	disable_work_sync(&tp->tx_work);
	clear_bit(WORK_ENABLE, &tp->flags);
	usb_kill_urb(tp->intr_urb);
	cancel_delayed_work_sync(&tp->schedule);
@@ -8466,7 +8467,8 @@ static int rtl8152_post_reset(struct usb_interface *intf)
	}

	napi_enable(&tp->napi);
-	tasklet_enable(&tp->tx_tl);
+	enable_work(&tp->tx_work);
+	queue_work(system_bh_wq, &tp->tx_work);
	netif_wake_queue(netdev);
	usb_submit_urb(tp->intr_urb, GFP_KERNEL);
@@ -8625,12 +8627,13 @@ static int rtl8152_system_suspend(struct r8152 *tp)

		clear_bit(WORK_ENABLE, &tp->flags);
		usb_kill_urb(tp->intr_urb);
-		tasklet_disable(&tp->tx_tl);
+		disable_work_sync(&tp->tx_work);
		napi_disable(napi);
		cancel_delayed_work_sync(&tp->schedule);
		tp->rtl_ops.down(tp);
		napi_enable(napi);
-		tasklet_enable(&tp->tx_tl);
+		enable_work(&tp->tx_work);
+		queue_work(system_bh_wq, &tp->tx_work);
	}

	return 0;
@@ -9387,11 +9390,12 @@ static int rtl8152_change_mtu(struct net_device *dev, int new_mtu)
	if (netif_carrier_ok(dev)) {
		netif_stop_queue(dev);
		napi_disable(&tp->napi);
-		tasklet_disable(&tp->tx_tl);
+		disable_work_sync(&tp->tx_work);
		tp->rtl_ops.disable(tp);
		tp->rtl_ops.enable(tp);
		rtl_start_rx(tp);
-		tasklet_enable(&tp->tx_tl);
+		enable_work(&tp->tx_work);
+		queue_work(system_bh_wq, &tp->tx_work);
		napi_enable(&tp->napi);
		rtl8152_set_rx_mode(dev);
		netif_wake_queue(dev);
@@ -9819,8 +9823,8 @@ static int rtl8152_probe_once(struct usb_interface *intf,
	mutex_init(&tp->control);
	INIT_DELAYED_WORK(&tp->schedule, rtl_work_func_t);
	INIT_DELAYED_WORK(&tp->hw_phy_work, rtl_hw_phy_work_func_t);
-	tasklet_setup(&tp->tx_tl, bottom_half);
-	tasklet_disable(&tp->tx_tl);
+	INIT_WORK(&tp->tx_work, bottom_half);
+	disable_work(&tp->tx_work);

	netdev->netdev_ops = &rtl8152_netdev_ops;
	netdev->watchdog_timeo = RTL8152_TX_TIMEOUT;
@@ -9954,7 +9958,7 @@ static int rtl8152_probe_once(struct usb_interface *intf,
	unregister_netdev(netdev);

 out1:
-	tasklet_kill(&tp->tx_tl);
+	cancel_work_sync(&tp->tx_work);
	cancel_delayed_work_sync(&tp->hw_phy_work);
	if (tp->rtl_ops.unload)
		tp->rtl_ops.unload(tp);
@@ -10010,7 +10014,7 @@ static void rtl8152_disconnect(struct usb_interface *intf)
		rtl_set_unplug(tp);

		unregister_netdev(tp->netdev);
-		tasklet_kill(&tp->tx_tl);
+		cancel_work_sync(&tp->tx_work);
		cancel_delayed_work_sync(&tp->hw_phy_work);
		if (tp->rtl_ops.unload)
			tp->rtl_ops.unload(tp);