From: Valentin Schneider <vschneid@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Peter Zijlstra, Frederic Weisbecker,
    Juri Lelli, Phil Auld, Marcelo Tosatti
Subject: [PATCH v7 0/4] workqueue: destroy_worker() vs isolated CPUs
Date: Mon, 9 Jan 2023 13:33:12 +0000
Message-Id: <20230109133316.4026472-1-vschneid@redhat.com>

Hi folks,

New year, new version! No major changes here, mainly some tidying up
from Tejun's comments and a bugfix spotted by Lai.
Revisions
=========

v6 -> v7
++++++++

o Rebased onto v6.2-rc3
o Dropped the work-pending check in worker_enter_idle() (Tejun)
o Overall comment cleanup (Tejun)
o put_unbound_pool() locking issue (Lai)

  Unfortunately the mutex cannot be acquired from within
  wq_manager_inactive(), as rcuwait_wait_event() sets the task state to
  TASK_UNINTERRUPTIBLE before invoking it, so grabbing the mutex there
  could clobber the task state. I've gone with dropping pool->lock and
  reacquiring the two locks in the right order after we've become the
  manager, see comments (and the first sketch appended at the end of
  this mail).

o Applied Lai's Reviewed-by to patches that only had cosmetic changes

v5 -> v6
++++++++

o Rebased onto v6.1-rc7
o Got rid of worker_pool.idle_cull_list; only do a minimal amount of
  work in the timer callback (Tejun)
o Dropped the too_many_workers() -> nr_workers_to_cull() change

v4 -> v5
++++++++

o Rebased onto v6.1-rc6
o Overall renaming from "reaping" to "cull"
  I somehow convinced myself this was more appropriate
o Split the dwork into a timer callback + work item (Tejun)

  I didn't want redundant operations happening in both the timer
  callback and the work item, so I made the timer callback detect which
  workers are "ripe" enough and then toss them to a worker for removal
  (see the second sketch appended at the end of this mail).

  This however means we release pool->lock before actually doing
  anything to those idle workers, so they can wake up in the meantime.
  The new worker_pool.idle_cull_list is there for that reason.

  The alternative was to have the timer callback detect whether any
  worker was ripe enough, kick the work item if so, and have the work
  item do the same detection again, which I didn't like.

RFCv3 -> v4
+++++++++++

o Rebased onto v6.0
o Split into more patches for reviewability
o Took dying workers out of pool->workers, as suggested by Lai

RFCv2 -> RFCv3
++++++++++++++

o Rebased onto v5.19
o Added a new patch (1/3) around accessing wq_unbound_cpumask
o Prevented WORKER_DIE workers from kfree()'ing themselves before the
  idle reaper gets to handle them (Tejun)

  Bit of an aside on that: I've been struggling to convince myself this
  can happen due to spurious wakeups, and would like some help here.

  Idle workers are TASK_UNINTERRUPTIBLE, so they can't be woken up by
  signals. That state is set *under* pool->lock, and all wakeups
  (before this patch) are also done while holding pool->lock.
  wake_up_worker() is done under pool->lock AND only wakes a worker on
  the pool->idle_list. Thus the to-be-woken worker *cannot* have
  WORKER_DIE, though it could gain it *after* being woken but *before*
  it runs, e.g.:

    LOCK pool->lock
    wake_up_worker(pool)
        wake_up_process(p)
    UNLOCK pool->lock
                        idle_reaper_fn()
                        LOCK pool->lock
                        destroy_worker(worker, list);
                        UNLOCK pool->lock
                                            worker_thread()
                                            goto woke_up;
                                            LOCK pool->lock
                                            READ worker->flags & WORKER_DIE
                                            UNLOCK pool->lock
                                            ...
                                            kfree(worker);
                        reap_worker(worker);
                        // Uh-oh

  But IMO that's not a spurious wakeup, that's a concurrency issue. I
  don't see any spurious/unexpected worker wakeup happening once a
  worker is off the pool->idle_list.

RFCv1 -> RFCv2
++++++++++++++

o Changed the pool->timer into a delayed_work to have a sleepable
  context for unbinding kworkers

Cheers,
Valentin

Lai Jiangshan (1):
  workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutex

Valentin Schneider (3):
  workqueue: Factorize unbind/rebind_workers() logic
  workqueue: Convert the idle_timer to a timer + work_struct
  workqueue: Unbind kworkers before sending them to exit()

 kernel/workqueue.c | 205 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 154 insertions(+), 51 deletions(-)
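
Sketches
========

As referenced in the v6 -> v7 notes above, here is a rough sketch of
the manager-acquisition ordering in put_unbound_pool() after the
locking fix. It illustrates the approach described in the changelog
rather than quoting the exact patch hunk; manager_wait,
POOL_MANAGER_ACTIVE and wq_pool_attach_mutex are the pre-existing
workqueue symbols:

    while (true) {
            /*
             * Sleep until nobody else is manager. rcuwait_wait_event()
             * sets TASK_UNINTERRUPTIBLE itself, which is why no mutex
             * can be taken from within the condition expression.
             */
            rcuwait_wait_event(&manager_wait,
                               !(pool->flags & POOL_MANAGER_ACTIVE),
                               TASK_UNINTERRUPTIBLE);

            /* Acquire both locks in the canonical order, outside the wait */
            mutex_lock(&wq_pool_attach_mutex);
            raw_spin_lock_irq(&pool->lock);
            if (!(pool->flags & POOL_MANAGER_ACTIVE)) {
                    pool->flags |= POOL_MANAGER_ACTIVE;
                    break;
            }
            /* Someone else became manager first; drop both and retry */
            raw_spin_unlock_irq(&pool->lock);
            mutex_unlock(&wq_pool_attach_mutex);
    }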
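
And a sketch of the timer-callback/work-item split described in the
v4 -> v5 notes, as later simplified in v6 (i.e. without
idle_cull_list): the timer callback only checks whether the oldest idle
worker is "ripe" and, if so, kicks a work item that does the actual
culling in a sleepable context. The shape below approximates
idle_worker_timeout() in the series, but details may differ from the
actual patches:

    static void idle_worker_timeout(struct timer_list *t)
    {
            struct worker_pool *pool = from_timer(pool, t, idle_timer);
            bool do_cull = false;

            /* A cull is already queued; it will re-evaluate anyway */
            if (work_pending(&pool->idle_cull_work))
                    return;

            raw_spin_lock_irq(&pool->lock);

            if (too_many_workers(pool)) {
                    struct worker *worker;
                    unsigned long expires;

                    /* idle_list is kept in LIFO order; the oldest idle
                     * worker sits at the tail */
                    worker = list_last_entry(&pool->idle_list,
                                             struct worker, entry);
                    expires = worker->last_active + IDLE_WORKER_TIMEOUT;
                    do_cull = !time_before(jiffies, expires);

                    if (!do_cull)
                            mod_timer(&pool->idle_timer, expires);
            }
            raw_spin_unlock_irq(&pool->lock);

            /* The culling itself runs in process context, where it is
             * safe to sleep, e.g. to unbind the dying kworkers */
            if (do_cull)
                    queue_work(system_unbound_wq, &pool->idle_cull_work);
    }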