From patchwork Sat Mar 11 09:31:44 2023
From: Yu Kuai
To: agk@redhat.com, snitzer@kernel.org, song@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 1/5] md: pass a md_thread pointer to md_register_thread()
Date: Sat, 11 Mar 2023 17:31:44 +0800
Message-Id: <20230311093148.2595222-2-yukuai1@huaweicloud.com>
In-Reply-To: <20230311093148.2595222-1-yukuai1@huaweicloud.com>

From: Yu Kuai

Prepare to use a disk-level spinlock to protect md_thread. There are no functional changes.
Signed-off-by: Yu Kuai
---
 drivers/md/md-cluster.c   | 11 +++++------
 drivers/md/md-multipath.c |  6 +++---
 drivers/md/md.c           | 27 ++++++++++++++-------------
 drivers/md/md.h           |  7 +++----
 drivers/md/raid1.c        |  5 ++---
 drivers/md/raid10.c       | 15 ++++++---------
 drivers/md/raid5-cache.c  |  5 ++---
 drivers/md/raid5.c        | 15 ++++++---------
 8 files changed, 41 insertions(+), 50 deletions(-)

diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 10e0c5381d01..c19e29cb73bf 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -362,9 +362,8 @@ static void __recover_slot(struct mddev *mddev, int slot)
 	set_bit(slot, &cinfo->recovery_map);
 	if (!cinfo->recovery_thread) {
-		cinfo->recovery_thread = md_register_thread(recover_bitmaps,
-				mddev, "recover");
-		if (!cinfo->recovery_thread) {
+		if (md_register_thread(&cinfo->recovery_thread, recover_bitmaps,
+				       mddev, "recover")) {
 			pr_warn("md-cluster: Could not create recovery thread\n");
 			return;
 		}
@@ -888,9 +887,9 @@ static int join(struct mddev *mddev, int nodes)
 		goto err;
 	}
 	/* Initiate the communication resources */
-	ret = -ENOMEM;
-	cinfo->recv_thread = md_register_thread(recv_daemon, mddev, "cluster_recv");
-	if (!cinfo->recv_thread) {
+	ret = md_register_thread(&cinfo->recv_thread, recv_daemon, mddev,
+				 "cluster_recv");
+	if (ret) {
 		pr_err("md-cluster: cannot allocate memory for recv_thread!\n");
 		goto err;
 	}
diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c
index 66edf5e72bd6..ceec9e4b2a60 100644
--- a/drivers/md/md-multipath.c
+++ b/drivers/md/md-multipath.c
@@ -400,9 +400,9 @@ static int multipath_run (struct mddev *mddev)
 	if (ret)
 		goto out_free_conf;
 
-	mddev->thread = md_register_thread(multipathd, mddev,
-					   "multipath");
-	if (!mddev->thread)
+	ret = md_register_thread(&mddev->thread, multipathd, mddev,
+				 "multipath");
+	if (ret)
 		goto out_free_conf;
 
 	pr_info("multipath: array %s active with %d out of %d IO paths\n",
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 98970bbe32bf..0bbdde29a41f 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -7896,29 +7896,32 @@ void md_wakeup_thread(struct md_thread *thread)
 }
 EXPORT_SYMBOL(md_wakeup_thread);
 
-struct md_thread *md_register_thread(void (*run) (struct md_thread *),
-		struct mddev *mddev, const char *name)
+int md_register_thread(struct md_thread **threadp,
+		       void (*run)(struct md_thread *),
+		       struct mddev *mddev, const char *name)
 {
 	struct md_thread *thread;
 
 	thread = kzalloc(sizeof(struct md_thread), GFP_KERNEL);
 	if (!thread)
-		return NULL;
+		return -ENOMEM;
 
 	init_waitqueue_head(&thread->wqueue);
 
 	thread->run = run;
 	thread->mddev = mddev;
 	thread->timeout = MAX_SCHEDULE_TIMEOUT;
-	thread->tsk = kthread_run(md_thread, thread,
-				  "%s_%s",
-				  mdname(thread->mddev),
-				  name);
+	thread->tsk = kthread_run(md_thread, thread, "%s_%s",
+				  mdname(thread->mddev), name);
 	if (IS_ERR(thread->tsk)) {
+		int err = PTR_ERR(thread->tsk);
+
 		kfree(thread);
-		return NULL;
+		return err;
 	}
-	return thread;
+
+	*threadp = thread;
+	return 0;
 }
 EXPORT_SYMBOL(md_register_thread);
 
@@ -9199,10 +9202,8 @@ static void md_start_sync(struct work_struct *ws)
 {
 	struct mddev *mddev = container_of(ws, struct mddev, del_work);
 
-	mddev->sync_thread = md_register_thread(md_do_sync,
-						mddev,
-						"resync");
-	if (!mddev->sync_thread) {
+	if (md_register_thread(&mddev->sync_thread, md_do_sync, mddev,
+			       "resync")) {
 		pr_warn("%s: could not start resync thread...\n",
 			mdname(mddev));
 		/* leave the spares where they are, it shouldn't hurt */
diff --git a/drivers/md/md.h b/drivers/md/md.h
index e148e3c83b0d..344e055e4d0f 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -730,10 +730,9 @@ extern int register_md_cluster_operations(struct md_cluster_operations *ops,
 extern int unregister_md_cluster_operations(void);
 extern int md_setup_cluster(struct mddev *mddev, int nodes);
 extern void md_cluster_stop(struct mddev *mddev);
-extern struct md_thread *md_register_thread(
-	void (*run)(struct md_thread *thread),
-	struct mddev *mddev,
-	const char *name);
+int md_register_thread(struct md_thread **threadp,
+		       void (*run)(struct md_thread *thread),
+		       struct mddev *mddev, const char *name);
 extern void md_unregister_thread(struct md_thread **threadp);
 extern void md_wakeup_thread(struct md_thread *thread);
 extern void md_check_recovery(struct mddev *mddev);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 68a9e2d9985b..1217c1db0a40 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3083,9 +3083,8 @@ static struct r1conf *setup_conf(struct mddev *mddev)
 		}
 	}
 
-	err = -ENOMEM;
-	conf->thread = md_register_thread(raid1d, mddev, "raid1");
-	if (!conf->thread)
+	err = md_register_thread(&conf->thread, raid1d, mddev, "raid1");
+	if (err)
 		goto abort;
 
 	return conf;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 2f1522bba80d..f1e54c62f930 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4096,9 +4096,8 @@ static struct r10conf *setup_conf(struct mddev *mddev)
 	init_waitqueue_head(&conf->wait_barrier);
 	atomic_set(&conf->nr_pending, 0);
 
-	err = -ENOMEM;
-	conf->thread = md_register_thread(raid10d, mddev, "raid10");
-	if (!conf->thread)
+	err = md_register_thread(&conf->thread, raid10d, mddev, "raid10");
+	if (err)
 		goto out;
 
 	conf->mddev = mddev;
@@ -4286,9 +4285,8 @@ static int raid10_run(struct mddev *mddev)
 		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
 		set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
-		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
-							"reshape");
-		if (!mddev->sync_thread)
+		if (md_register_thread(&mddev->sync_thread, md_do_sync, mddev,
+				       "reshape"))
 			goto out_free_conf;
 	}
 
@@ -4688,9 +4686,8 @@ static int raid10_start_reshape(struct mddev *mddev)
 
 	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
 	set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
-	mddev->sync_thread = md_register_thread(md_do_sync, mddev,
-						"reshape");
-	if (!mddev->sync_thread) {
+	if (md_register_thread(&mddev->sync_thread, md_do_sync, mddev,
+			       "reshape")) {
 		ret = -EAGAIN;
 		goto abort;
 	}
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 46182b955aef..0464d4d551fc 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -3121,9 +3121,8 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
 	spin_lock_init(&log->tree_lock);
 	INIT_RADIX_TREE(&log->big_stripe_tree, GFP_NOWAIT | __GFP_NOWARN);
 
-	log->reclaim_thread = md_register_thread(r5l_reclaim_thread,
-						 log->rdev->mddev, "reclaim");
-	if (!log->reclaim_thread)
+	if (md_register_thread(&log->reclaim_thread, r5l_reclaim_thread,
+			       log->rdev->mddev, "reclaim"))
 		goto reclaim_thread;
 
 	log->reclaim_thread->timeout = R5C_RECLAIM_WAKEUP_INTERVAL;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 7b820b81d8c2..04b1093195d0 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7665,11 +7665,10 @@ static struct r5conf *setup_conf(struct mddev *mddev)
 	}
 
 	sprintf(pers_name, "raid%d", mddev->new_level);
-	conf->thread = md_register_thread(raid5d, mddev, pers_name);
-	if (!conf->thread) {
+	ret = md_register_thread(&conf->thread, raid5d, mddev, pers_name);
+	if (ret) {
 		pr_warn("md/raid:%s: couldn't allocate thread.\n",
 			mdname(mddev));
-		ret = -ENOMEM;
 		goto abort;
 	}
 
@@ -7989,9 +7988,8 @@ static int raid5_run(struct mddev *mddev)
 		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
 		set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
-		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
-							"reshape");
-		if (!mddev->sync_thread)
+		if (md_register_thread(&mddev->sync_thread, md_do_sync, mddev,
+				       "reshape"))
 			goto abort;
 	}
 
@@ -8567,9 +8565,8 @@ static int raid5_start_reshape(struct mddev *mddev)
 	clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
 	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
 	set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
-	mddev->sync_thread = md_register_thread(md_do_sync, mddev,
-						"reshape");
-	if (!mddev->sync_thread) {
+	if (md_register_thread(&mddev->sync_thread, md_do_sync, mddev,
+			       "reshape")) {
 		mddev->recovery = 0;
 		spin_lock_irq(&conf->device_lock);
 		write_seqcount_begin(&conf->gen_lock);

From patchwork Sat Mar 11 09:31:45 2023
From: Yu Kuai
To: agk@redhat.com, snitzer@kernel.org, song@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 2/5] md: refactor md_wakeup_thread()
Date: Sat, 11 Mar 2023 17:31:45 +0800
Message-Id: <20230311093148.2595222-3-yukuai1@huaweicloud.com>
In-Reply-To: <20230311093148.2595222-1-yukuai1@huaweicloud.com>

From: Yu Kuai

Pass a md_thread pointer and a mddev to md_wakeup_thread(), to prepare to use a disk-level spinlock to protect md_thread. There are no functional changes.
Signed-off-by: Yu Kuai
---
 drivers/md/dm-raid.c      |  4 +--
 drivers/md/md-bitmap.c    |  6 ++--
 drivers/md/md-cluster.c   | 20 +++++------
 drivers/md/md-multipath.c |  2 +-
 drivers/md/md.c           | 76 ++++++++++++++++++++-------------------
 drivers/md/md.h           |  4 +--
 drivers/md/raid1.c        | 10 +++---
 drivers/md/raid10.c       | 14 ++++----
 drivers/md/raid5-cache.c  | 12 +++---
 drivers/md/raid5-ppl.c    |  2 +-
 drivers/md/raid5.c        | 31 ++++++++--------
 11 files changed, 93 insertions(+), 88 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 60632b409b80..257c9c9f2b4d 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3755,11 +3755,11 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
 		 */
 		mddev->ro = 0;
 		if (!mddev->suspended && mddev->sync_thread)
-			md_wakeup_thread(mddev->sync_thread);
+			md_wakeup_thread(&mddev->sync_thread, mddev);
 	}
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	if (!mddev->suspended && mddev->thread)
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 
 	return 0;
 }
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index e7cc6ba1b657..9489510405f7 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1942,7 +1942,7 @@ int md_bitmap_load(struct mddev *mddev)
 	set_bit(MD_RECOVERY_NEEDED, &bitmap->mddev->recovery);
 	mddev->thread->timeout = mddev->bitmap_info.daemon_sleep;
-	md_wakeup_thread(mddev->thread);
+	md_wakeup_thread(&mddev->thread, mddev);
 
 	md_bitmap_update_sb(bitmap);
 
@@ -2363,7 +2363,7 @@ location_store(struct mddev *mddev, const char *buf, size_t len)
 			 * metadata promptly.
 			 */
 			set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
-			md_wakeup_thread(mddev->thread);
+			md_wakeup_thread(&mddev->thread, mddev);
 		}
 	rv = 0;
 out:
@@ -2454,7 +2454,7 @@ timeout_store(struct mddev *mddev, const char *buf, size_t len)
 		 */
 		if (mddev->thread->timeout < MAX_SCHEDULE_TIMEOUT) {
 			mddev->thread->timeout = timeout;
-			md_wakeup_thread(mddev->thread);
+			md_wakeup_thread(&mddev->thread, mddev);
 		}
 	}
 	return len;
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index c19e29cb73bf..92b0e49b4e53 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -325,7 +325,7 @@ static void recover_bitmaps(struct md_thread *thread)
 		if (test_bit(MD_RESYNCING_REMOTE, &mddev->recovery) &&
 		    test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
 		    mddev->reshape_position != MaxSector)
-			md_wakeup_thread(mddev->sync_thread);
+			md_wakeup_thread(&mddev->sync_thread, mddev);
 
 		if (hi > 0) {
 			if (lo < mddev->recovery_cp)
@@ -340,7 +340,7 @@ static void recover_bitmaps(struct md_thread *thread)
 				clear_bit(MD_RESYNCING_REMOTE, &mddev->recovery);
 				set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-				md_wakeup_thread(mddev->thread);
+				md_wakeup_thread(&mddev->thread, mddev);
 			}
 		}
 clear_bit:
@@ -368,7 +368,7 @@ static void __recover_slot(struct mddev *mddev, int slot)
 			return;
 		}
 	}
-	md_wakeup_thread(cinfo->recovery_thread);
+	md_wakeup_thread(&cinfo->recovery_thread, mddev);
 }
 
 static void recover_slot(void *arg, struct dlm_slot *slot)
@@ -422,7 +422,7 @@ static void ack_bast(void *arg, int mode)
 	if (mode == DLM_LOCK_EX) {
 		if (test_bit(MD_CLUSTER_ALREADY_IN_CLUSTER, &cinfo->state))
-			md_wakeup_thread(cinfo->recv_thread);
+			md_wakeup_thread(&cinfo->recv_thread, mddev);
 		else
 			set_bit(MD_CLUSTER_PENDING_RECV_EVENT, &cinfo->state);
 	}
@@ -454,7 +454,7 @@ static void process_suspend_info(struct mddev *mddev,
 		clear_bit(MD_RESYNCING_REMOTE, &mddev->recovery);
 		remove_suspend_info(mddev, slot);
 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 		return;
 	}
 
@@ -546,7 +546,7 @@ static void process_remove_disk(struct mddev *mddev, struct cluster_msg *msg)
 	if (rdev) {
 		set_bit(ClusterRemove, &rdev->flags);
 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 	}
 	else
 		pr_warn("%s: %d Could not find disk(%d) to REMOVE\n",
@@ -696,7 +696,7 @@ static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
 		rv = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
 					      &cinfo->state);
 		WARN_ON_ONCE(rv);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 		set_bit = 1;
 	}
@@ -971,7 +971,7 @@ static void load_bitmaps(struct mddev *mddev, int total_slots)
 	set_bit(MD_CLUSTER_ALREADY_IN_CLUSTER, &cinfo->state);
 	/* wake up recv thread in case something need to be handled */
 	if (test_and_clear_bit(MD_CLUSTER_PENDING_RECV_EVENT, &cinfo->state))
-		md_wakeup_thread(cinfo->recv_thread);
+		md_wakeup_thread(&cinfo->recv_thread, mddev);
 }
 
 static void resync_bitmap(struct mddev *mddev)
@@ -1052,7 +1052,7 @@ static int metadata_update_start(struct mddev *mddev)
 		ret = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
 					       &cinfo->state);
 		WARN_ON_ONCE(ret);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 
 	wait_event(cinfo->wait,
 		   !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state) ||
@@ -1430,7 +1430,7 @@ static int add_new_disk(struct mddev *mddev, struct md_rdev *rdev)
 	/* Since MD_CHANGE_DEVS will be set in add_bound_rdev which
 	 * will run soon after add_new_disk, the below path will be
 	 * invoked:
-	 *   md_wakeup_thread(mddev->thread)
+	 *   md_wakeup_thread(&mddev->thread)
 	 *	-> conf->thread (raid1d)
 	 *	-> md_check_recovery -> md_update_sb
 	 *	-> metadata_update_start/finish
diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c
index ceec9e4b2a60..482536ec8850 100644
--- a/drivers/md/md-multipath.c
+++ b/drivers/md/md-multipath.c
@@ -57,7 +57,7 @@ static void multipath_reschedule_retry (struct multipath_bh *mp_bh)
 	spin_lock_irqsave(&conf->device_lock, flags);
 	list_add(&mp_bh->retry_list, &conf->retry_list);
 	spin_unlock_irqrestore(&conf->device_lock, flags);
-	md_wakeup_thread(mddev->thread);
+	md_wakeup_thread(&mddev->thread, mddev);
 }
 
 /*
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 0bbdde29a41f..97e87df4ee43 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -487,8 +487,9 @@ void mddev_resume(struct mddev *mddev)
 		mddev->pers->quiesce(mddev, 0);
 
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-	md_wakeup_thread(mddev->thread);
-	md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
+	md_wakeup_thread(&mddev->thread, mddev);
+	/* possibly kick off a reshape */
+	md_wakeup_thread(&mddev->sync_thread, mddev);
 }
 EXPORT_SYMBOL_GPL(mddev_resume);
 
@@ -804,7 +805,7 @@ void mddev_unlock(struct mddev *mddev)
 		 * make sure the thread doesn't disappear */
 		spin_lock(&pers_lock);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 		wake_up(&mddev->sb_wait);
 		spin_unlock(&pers_lock);
 	}
@@ -2814,7 +2815,7 @@ static int add_bound_rdev(struct md_rdev *rdev)
 	set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	md_new_event();
-	md_wakeup_thread(mddev->thread);
+	md_wakeup_thread(&mddev->thread, mddev);
 	return 0;
 }
 
@@ -2931,7 +2932,7 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
 		md_kick_rdev_from_array(rdev);
 		if (mddev->pers) {
 			set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
-			md_wakeup_thread(mddev->thread);
+			md_wakeup_thread(&mddev->thread, mddev);
 		}
 		md_new_event();
 	}
@@ -2962,7 +2963,7 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
 		clear_bit(BlockedBadBlocks, &rdev->flags);
 		wake_up(&rdev->blocked_wait);
 		set_bit(MD_RECOVERY_NEEDED, &rdev->mddev->recovery);
-		md_wakeup_thread(rdev->mddev->thread);
+		md_wakeup_thread(&rdev->mddev->thread, rdev->mddev);
 
 		err = 0;
 	} else if (cmd_match(buf, "insync") && rdev->raid_disk == -1) {
@@ -3000,7 +3001,7 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
 		    !test_bit(Replacement, &rdev->flags))
 			set_bit(WantReplacement, &rdev->flags);
 		set_bit(MD_RECOVERY_NEEDED, &rdev->mddev->recovery);
-		md_wakeup_thread(rdev->mddev->thread);
+		md_wakeup_thread(&rdev->mddev->thread, rdev->mddev);
 		err = 0;
 	} else if (cmd_match(buf, "-want_replacement")) {
 		/* Clearing 'want_replacement' is always allowed.
@@ -3127,7 +3128,7 @@ slot_store(struct md_rdev *rdev, const char *buf, size_t len)
 		if (rdev->raid_disk >= 0)
 			return -EBUSY;
 		set_bit(MD_RECOVERY_NEEDED, &rdev->mddev->recovery);
-		md_wakeup_thread(rdev->mddev->thread);
+		md_wakeup_thread(&rdev->mddev->thread, rdev->mddev);
 	} else if (rdev->mddev->pers) {
 		/* Activating a spare .. or possibly reactivating
 		 * if we ever get bitmaps working here.
@@ -4359,7 +4360,7 @@ array_state_store(struct mddev *mddev, const char *buf, size_t len)
 		if (st == active) {
 			restart_array(mddev);
 			clear_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags);
-			md_wakeup_thread(mddev->thread);
+			md_wakeup_thread(&mddev->thread, mddev);
 			wake_up(&mddev->sb_wait);
 		} else /* st == clean */ {
 			restart_array(mddev);
@@ -4826,10 +4827,10 @@ action_store(struct mddev *mddev, const char *page, size_t len)
 			 * canceling read-auto mode */
 			mddev->ro = MD_RDWR;
-			md_wakeup_thread(mddev->sync_thread);
+			md_wakeup_thread(&mddev->sync_thread, mddev);
 		}
 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 		sysfs_notify_dirent_safe(mddev->sysfs_action);
 	}
 	return len;
 }
@@ -5733,7 +5734,7 @@ static void md_safemode_timeout(struct timer_list *t)
 		if (mddev->external)
 			sysfs_notify_dirent_safe(mddev->sysfs_state);
 
-	md_wakeup_thread(mddev->thread);
+	md_wakeup_thread(&mddev->thread, mddev);
 }
 
 static int start_dirty_degraded;
@@ -6045,8 +6046,9 @@ int do_md_run(struct mddev *mddev)
 	/* run start up tasks that require md_thread */
 	md_start(mddev);
 
-	md_wakeup_thread(mddev->thread);
-	md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
+	md_wakeup_thread(&mddev->thread, mddev);
+	/* possibly kick off a reshape */
+	md_wakeup_thread(&mddev->sync_thread, mddev);
 
 	set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	clear_bit(MD_NOT_READY, &mddev->flags);
@@ -6066,10 +6068,10 @@ int md_start(struct mddev *mddev)
 
 	if (mddev->pers->start) {
 		set_bit(MD_RECOVERY_WAIT, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 		ret = mddev->pers->start(mddev);
 		clear_bit(MD_RECOVERY_WAIT, &mddev->recovery);
-		md_wakeup_thread(mddev->sync_thread);
+		md_wakeup_thread(&mddev->sync_thread, mddev);
 	}
 	return ret;
 }
@@ -6111,8 +6113,8 @@ static int restart_array(struct mddev *mddev)
 	pr_debug("md: %s switched to read-write mode.\n", mdname(mddev));
 	/* Kick recovery or resync if necessary */
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-	md_wakeup_thread(mddev->thread);
-	md_wakeup_thread(mddev->sync_thread);
+	md_wakeup_thread(&mddev->thread, mddev);
+	md_wakeup_thread(&mddev->sync_thread, mddev);
 	sysfs_notify_dirent_safe(mddev->sysfs_state);
 	return 0;
 }
@@ -6261,7 +6263,7 @@ static int md_set_readonly(struct mddev *mddev, struct block_device *bdev)
 	if (!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) {
 		did_freeze = 1;
 		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 	}
 	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
 		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
@@ -6287,7 +6289,7 @@ static int md_set_readonly(struct mddev *mddev, struct block_device *bdev)
 		if (did_freeze) {
 			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 			set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-			md_wakeup_thread(mddev->thread);
+			md_wakeup_thread(&mddev->thread, mddev);
 		}
 		err = -EBUSY;
 		goto out;
@@ -6302,7 +6304,7 @@ static int md_set_readonly(struct mddev *mddev, struct block_device *bdev)
 		set_disk_ro(mddev->gendisk, 1);
 		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 		sysfs_notify_dirent_safe(mddev->sysfs_state);
 		err = 0;
 	}
@@ -6325,7 +6327,7 @@ static int do_md_stop(struct mddev *mddev, int mode,
 	if (!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) {
 		did_freeze = 1;
 		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 	}
 	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
 		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
@@ -6350,7 +6352,7 @@ static int do_md_stop(struct mddev *mddev, int mode,
 		if (did_freeze) {
 			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 			set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-			md_wakeup_thread(mddev->thread);
+			md_wakeup_thread(&mddev->thread, mddev);
 		}
 		return -EBUSY;
 	}
@@ -6893,7 +6895,7 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
 	md_kick_rdev_from_array(rdev);
 	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
 	if (mddev->thread)
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 	else
 		md_update_sb(mddev, 1);
 	md_new_event();
@@ -6976,7 +6978,7 @@ static int hot_add_disk(struct mddev *mddev, dev_t dev)
 	 * array immediately.
 	 */
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-	md_wakeup_thread(mddev->thread);
+	md_wakeup_thread(&mddev->thread, mddev);
 	md_new_event();
 	return 0;
 
@@ -7886,8 +7888,10 @@ static int md_thread(void *arg)
 	return 0;
 }
 
-void md_wakeup_thread(struct md_thread *thread)
+void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev)
 {
+	struct md_thread *thread = *threadp;
+
 	if (thread) {
 		pr_debug("md: waking up MD thread %s.\n", thread->tsk->comm);
 		set_bit(THREAD_WAKEUP, &thread->flags);
@@ -7963,7 +7967,7 @@ void md_error(struct mddev *mddev, struct md_rdev *rdev)
 	set_bit(MD_RECOVERY_INTR, &mddev->recovery);
 	if (!test_bit(MD_BROKEN, &mddev->flags)) {
 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 	}
 	if (mddev->event_work.func)
 		queue_work(md_misc_wq, &mddev->event_work);
@@ -8474,7 +8478,7 @@ void md_done_sync(struct mddev *mddev, int blocks, int ok)
 	if (!ok) {
 		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
 		set_bit(MD_RECOVERY_ERROR, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 		// stop recovery, signal do_sync ....
 	}
 }
@@ -8499,8 +8503,8 @@ bool md_write_start(struct mddev *mddev, struct bio *bi)
 		/* need to switch to read/write */
 		mddev->ro = MD_RDWR;
 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-		md_wakeup_thread(mddev->thread);
-		md_wakeup_thread(mddev->sync_thread);
+		md_wakeup_thread(&mddev->thread, mddev);
+		md_wakeup_thread(&mddev->sync_thread, mddev);
 		did_change = 1;
 	}
 	rcu_read_lock();
@@ -8515,7 +8519,7 @@ bool md_write_start(struct mddev *mddev, struct bio *bi)
 			mddev->in_sync = 0;
 			set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags);
 			set_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags);
-			md_wakeup_thread(mddev->thread);
+			md_wakeup_thread(&mddev->thread, mddev);
 			did_change = 1;
 		}
 		spin_unlock(&mddev->lock);
@@ -8558,7 +8562,7 @@ void md_write_end(struct mddev *mddev)
 	percpu_ref_put(&mddev->writes_pending);
 
 	if (mddev->safemode == 2)
-		md_wakeup_thread(mddev->thread);
+		md_wakeup_thread(&mddev->thread, mddev);
 	else if (mddev->safemode_delay)
 		/* The roundup() ensures this only performs locking once
 		 * every ->safemode_delay jiffies
@@ -9100,7 +9104,7 @@ void md_do_sync(struct md_thread *thread)
 	spin_unlock(&mddev->lock);
 
 	wake_up(&resync_wait);
-	md_wakeup_thread(mddev->thread);
+	md_wakeup_thread(&mddev->thread, mddev);
 	return;
 }
 EXPORT_SYMBOL_GPL(md_do_sync);
@@ -9218,7 +9222,7 @@ static void md_start_sync(struct work_struct *ws)
 		if (mddev->sysfs_action)
 			sysfs_notify_dirent_safe(mddev->sysfs_action);
 	} else
-		md_wakeup_thread(mddev->sync_thread);
+		md_wakeup_thread(&mddev->sync_thread, mddev);
 	sysfs_notify_dirent_safe(mddev->sysfs_action);
 	md_new_event();
 }
@@ -9534,7 +9538,7 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
 		sysfs_notify_dirent_safe(rdev->sysfs_state);
 		set_mask_bits(&mddev->sb_flags, 0,
 			      BIT(MD_SB_CHANGE_CLEAN) | BIT(MD_SB_CHANGE_PENDING));
-		md_wakeup_thread(rdev->mddev->thread);
+		md_wakeup_thread(&rdev->mddev->thread, mddev);
 		return 1;
 	} else
 		return 0;
@@ -9699,7 +9703,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev
*rdev) /* wakeup mddev->thread here, so array could * perform resync with the new activated disk */ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); } /* device faulty * We just want to do the minimum to mark the disk diff --git a/drivers/md/md.h b/drivers/md/md.h index 344e055e4d0f..aeb2fc6b65c7 100644 --- a/drivers/md/md.h +++ b/drivers/md/md.h @@ -734,7 +734,7 @@ int md_register_thread(struct md_thread **threadp, void (*run)(struct md_thread *thread), struct mddev *mddev, const char *name); extern void md_unregister_thread(struct md_thread **threadp); -extern void md_wakeup_thread(struct md_thread *thread); +extern void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev); extern void md_check_recovery(struct mddev *mddev); extern void md_reap_sync_thread(struct mddev *mddev); extern int mddev_init_writes_pending(struct mddev *mddev); @@ -805,7 +805,7 @@ static inline void rdev_dec_pending(struct md_rdev *rdev, struct mddev *mddev) int faulty = test_bit(Faulty, &rdev->flags); if (atomic_dec_and_test(&rdev->nr_pending) && faulty) { set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); } } diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index 1217c1db0a40..391ff239c711 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -289,7 +289,7 @@ static void reschedule_retry(struct r1bio *r1_bio) spin_unlock_irqrestore(&conf->device_lock, flags); wake_up(&conf->wait_barrier); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); } /* @@ -1180,7 +1180,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule) bio_list_merge(&conf->pending_bio_list, &plug->pending); spin_unlock_irq(&conf->device_lock); wake_up(&conf->wait_barrier); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); kfree(plug); return; } @@ -1585,7 +1585,7 @@ static void 
raid1_write_request(struct mddev *mddev, struct bio *bio, spin_lock_irqsave(&conf->device_lock, flags); bio_list_add(&conf->pending_bio_list, mbio); spin_unlock_irqrestore(&conf->device_lock, flags); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); } } @@ -2501,7 +2501,7 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio) * get_unqueued_pending() == extra to be true. */ wake_up(&conf->wait_barrier); - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } else { if (test_bit(R1BIO_WriteError, &r1_bio->state)) close_write(r1_bio); @@ -3344,7 +3344,7 @@ static int raid1_reshape(struct mddev *mddev) set_bit(MD_RECOVERY_RECOVER, &mddev->recovery); set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); mempool_exit(&oldpool); return 0; diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index f1e54c62f930..920e5722040f 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -309,7 +309,7 @@ static void reschedule_retry(struct r10bio *r10_bio) /* wake up frozen array... 
*/ wake_up(&conf->wait_barrier); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); } /* @@ -1114,7 +1114,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule) bio_list_merge(&conf->pending_bio_list, &plug->pending); spin_unlock_irq(&conf->device_lock); wake_up(&conf->wait_barrier); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); kfree(plug); return; } @@ -1329,7 +1329,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio, spin_lock_irqsave(&conf->device_lock, flags); bio_list_add(&conf->pending_bio_list, mbio); spin_unlock_irqrestore(&conf->device_lock, flags); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); } } @@ -1441,7 +1441,7 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio, mddev->reshape_position = conf->reshape_progress; set_mask_bits(&mddev->sb_flags, 0, BIT(MD_SB_CHANGE_DEVS) | BIT(MD_SB_CHANGE_PENDING)); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); if (bio->bi_opf & REQ_NOWAIT) { allow_barrier(conf); bio_wouldblock_error(bio); @@ -3079,7 +3079,7 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio) * nr_pending == nr_queued + extra to be true. 
*/ wake_up(&conf->wait_barrier); - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } else { if (test_bit(R10BIO_WriteError, &r10_bio->state)) @@ -4692,7 +4692,7 @@ static int raid10_start_reshape(struct mddev *mddev) goto abort; } conf->reshape_checkpoint = jiffies; - md_wakeup_thread(mddev->sync_thread); + md_wakeup_thread(&mddev->sync_thread, mddev); md_new_event(); return 0; @@ -4874,7 +4874,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, mddev->curr_resync_completed = conf->reshape_progress; conf->reshape_checkpoint = jiffies; set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); wait_event(mddev->sb_wait, mddev->sb_flags == 0 || test_bit(MD_RECOVERY_INTR, &mddev->recovery)); if (test_bit(MD_RECOVERY_INTR, &mddev->recovery)) { diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c index 0464d4d551fc..d6ee6a7a83b7 100644 --- a/drivers/md/raid5-cache.c +++ b/drivers/md/raid5-cache.c @@ -600,7 +600,7 @@ static void r5l_log_endio(struct bio *bio) spin_unlock_irqrestore(&log->io_list_lock, flags); if (log->need_cache_flush) - md_wakeup_thread(log->rdev->mddev->thread); + md_wakeup_thread(&log->rdev->mddev->thread, log->rdev->mddev); /* finish flush only io_unit and PAYLOAD_FLUSH only io_unit */ if (has_null_flush) { @@ -1491,7 +1491,7 @@ static void r5c_do_reclaim(struct r5conf *conf) if (!test_bit(R5C_LOG_CRITICAL, &conf->cache_state)) r5l_run_no_space_stripes(log); - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } static void r5l_do_reclaim(struct r5l_log *log) @@ -1519,7 +1519,7 @@ static void r5l_do_reclaim(struct r5l_log *log) list_empty(&log->finished_ios))) break; - md_wakeup_thread(log->rdev->mddev->thread); + md_wakeup_thread(&log->rdev->mddev->thread, log->rdev->mddev); wait_event_lock_irq(log->iounit_wait, r5l_reclaimable_space(log) > reclaimable, 
log->io_list_lock); @@ -1571,7 +1571,7 @@ void r5l_wake_reclaim(struct r5l_log *log, sector_t space) if (new < target) return; } while (!try_cmpxchg(&log->reclaim_target, &target, new)); - md_wakeup_thread(log->reclaim_thread); + md_wakeup_thread(&log->reclaim_thread, log->rdev->mddev); } void r5l_quiesce(struct r5l_log *log, int quiesce) @@ -2776,7 +2776,7 @@ void r5c_release_extra_page(struct stripe_head *sh) if (using_disk_info_extra_page) { clear_bit(R5C_EXTRA_PAGE_IN_USE, &conf->cache_state); - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } } @@ -2832,7 +2832,7 @@ void r5c_finish_stripe_write_out(struct r5conf *conf, if (test_and_clear_bit(STRIPE_FULL_WRITE, &sh->state)) if (atomic_dec_and_test(&conf->pending_full_writes)) - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); if (do_wakeup) wake_up(&conf->wait_for_overlap); diff --git a/drivers/md/raid5-ppl.c b/drivers/md/raid5-ppl.c index e495939bb3e0..47cf1e85c48d 100644 --- a/drivers/md/raid5-ppl.c +++ b/drivers/md/raid5-ppl.c @@ -601,7 +601,7 @@ static void ppl_flush_endio(struct bio *bio) if (atomic_dec_and_test(&io->pending_flushes)) { ppl_io_unit_finished(io); - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } } diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 04b1093195d0..2c0695d41436 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -195,7 +195,7 @@ static void raid5_wakeup_stripe_thread(struct stripe_head *sh) } if (conf->worker_cnt_per_group == 0) { - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); return; } @@ -268,13 +268,14 @@ static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh, return; } } - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } else { BUG_ON(stripe_operations_active(sh)); if 
(test_and_clear_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) if (atomic_dec_return(&conf->preread_active_stripes) < IO_THRESHOLD) - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, + conf->mddev); atomic_dec(&conf->active_stripes); if (!test_bit(STRIPE_EXPANDING, &sh->state)) { if (!r5c_is_writeback(conf->log)) @@ -356,7 +357,7 @@ static void release_inactive_stripe_list(struct r5conf *conf, if (atomic_read(&conf->active_stripes) == 0) wake_up(&conf->wait_for_quiescent); if (conf->retry_read_aligned) - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } } @@ -407,7 +408,7 @@ void raid5_release_stripe(struct stripe_head *sh) goto slow_path; wakeup = llist_add(&sh->release_list, &conf->released_stripes); if (wakeup) - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); return; slow_path: /* we are ok here if STRIPE_ON_RELEASE_LIST is set or not */ @@ -981,7 +982,7 @@ static void stripe_add_to_batch_list(struct r5conf *conf, if (test_and_clear_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) if (atomic_dec_return(&conf->preread_active_stripes) < IO_THRESHOLD) - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); if (test_and_clear_bit(STRIPE_BIT_DELAY, &sh->state)) { int seq = sh->bm_seq; @@ -3759,7 +3760,7 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh, if (test_and_clear_bit(STRIPE_FULL_WRITE, &sh->state)) if (atomic_dec_and_test(&conf->pending_full_writes)) - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } static void @@ -4156,7 +4157,7 @@ static void handle_stripe_clean_event(struct r5conf *conf, if (test_and_clear_bit(STRIPE_FULL_WRITE, &sh->state)) if (atomic_dec_and_test(&conf->pending_full_writes)) - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); if (head_sh->batch_head && do_endio) 
break_stripe_batch_list(head_sh, STRIPE_EXPAND_SYNC_FLAGS); @@ -5369,7 +5370,7 @@ static void handle_stripe(struct stripe_head *sh) atomic_dec(&conf->preread_active_stripes); if (atomic_read(&conf->preread_active_stripes) < IO_THRESHOLD) - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } clear_bit_unlock(STRIPE_ACTIVE, &sh->state); @@ -5436,7 +5437,7 @@ static void add_bio_to_retry(struct bio *bi,struct r5conf *conf) conf->retry_read_aligned_list = bi; spin_unlock_irqrestore(&conf->device_lock, flags); - md_wakeup_thread(conf->mddev->thread); + md_wakeup_thread(&conf->mddev->thread, conf->mddev); } static struct bio *remove_bio_from_retry(struct r5conf *conf, @@ -6045,7 +6046,7 @@ static enum stripe_result make_stripe_request(struct mddev *mddev, * Stripe is busy expanding or add failed due to * overlap. Flush everything and wait a while. */ - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); ret = STRIPE_SCHEDULE_AND_RETRY; goto out_release; } @@ -6345,7 +6346,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk conf->reshape_checkpoint = jiffies; set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); wait_event(mddev->sb_wait, mddev->sb_flags == 0 || test_bit(MD_RECOVERY_INTR, &mddev->recovery)); if (test_bit(MD_RECOVERY_INTR, &mddev->recovery)) @@ -6453,7 +6454,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk rdev->recovery_offset = sector_nr; conf->reshape_checkpoint = jiffies; set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); wait_event(mddev->sb_wait, !test_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags) || test_bit(MD_RECOVERY_INTR, &mddev->recovery)); @@ -8585,7 +8586,7 @@ static int raid5_start_reshape(struct mddev *mddev) return -EAGAIN; } conf->reshape_checkpoint = jiffies; - 
md_wakeup_thread(mddev->sync_thread); + md_wakeup_thread(&mddev->sync_thread, mddev); md_new_event(); return 0; } @@ -8815,7 +8816,7 @@ static int raid5_check_reshape(struct mddev *mddev) mddev->chunk_sectors = new_chunk; } set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags); - md_wakeup_thread(mddev->thread); + md_wakeup_thread(&mddev->thread, mddev); } return check_reshape(mddev); }

From patchwork Sat Mar 11 09:31:46 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 68033
Subject: [PATCH -next 3/5] md: use md_thread api to wake up sync_thread
Date: Sat, 11 Mar 2023 17:31:46 +0800
Message-Id: <20230311093148.2595222-4-yukuai1@huaweicloud.com>
From: Yu
Kuai

Instead of calling wake_up_process() directly, convert to md_wakeup_thread().

Signed-off-by: Yu Kuai --- drivers/md/md.c | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/drivers/md/md.c b/drivers/md/md.c index 97e87df4ee43..4ecfd0508afb 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -6267,10 +6267,12 @@ static int md_set_readonly(struct mddev *mddev, struct block_device *bdev) } if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) set_bit(MD_RECOVERY_INTR, &mddev->recovery); - if (mddev->sync_thread) - /* Thread might be blocked waiting for metadata update - * which will now never happen */ - wake_up_process(mddev->sync_thread->tsk); + + /* + * Thread might be blocked waiting for metadata update + * which will now never happen + */ + md_wakeup_thread(&mddev->sync_thread, mddev); if (mddev->external && test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) return -EBUSY; @@ -6331,10 +6333,12 @@ static int do_md_stop(struct mddev *mddev, int mode, } if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) set_bit(MD_RECOVERY_INTR, &mddev->recovery); - if (mddev->sync_thread) - /* Thread might be blocked waiting for metadata update - * which will now never happen */ - wake_up_process(mddev->sync_thread->tsk); + + /* + * Thread might be blocked waiting for metadata update + * which will now never happen + */ + md_wakeup_thread(&mddev->sync_thread, mddev); mddev_unlock(mddev); wait_event(resync_wait, (mddev->sync_thread == NULL &&

From patchwork Sat Mar 11 09:31:47 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 68042
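The conversion in patch 3/5 above can drop the caller-side `if (mddev->sync_thread)` check in md_set_readonly() and do_md_stop() because an earlier patch in this series (shown further up) made md_wakeup_thread() take the address of the thread pointer and perform the NULL check itself. A minimal userspace sketch of that shape — not the kernel implementation; the structs and the wake_up_process() stub are simplified stand-ins for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, heavily simplified stand-ins for the kernel types. */
struct task_struct { int woken; };
struct md_thread  { struct task_struct *tsk; };
struct mddev      { struct md_thread *sync_thread; };

/* Stub: the real wake_up_process() makes the task runnable. */
static void wake_up_process(struct task_struct *tsk)
{
    tsk->woken = 1;
}

/*
 * New style: the callee receives &mddev->sync_thread, re-reads the
 * pointer, and tolerates NULL, so every call site collapses from
 *
 *     if (mddev->sync_thread)
 *             wake_up_process(mddev->sync_thread->tsk);
 *
 * to a single md_wakeup_thread() call.
 */
static void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev)
{
    struct md_thread *thread = *threadp;

    (void)mddev;    /* used by later patches in the series for locking */
    if (thread)
        wake_up_process(thread->tsk);
}
```

Centralizing the NULL check is what makes the later locking change possible: there is now exactly one place that dereferences the thread pointer on the wakeup path.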
Subject: [PATCH -next 4/5] md: pass a mddev to md_unregister_thread()
Date: Sat, 11 Mar 2023 17:31:47 +0800
Message-Id: <20230311093148.2595222-5-yukuai1@huaweicloud.com>
From: Yu Kuai

Prepare to use a disk-level spinlock to protect md_thread; there are no functional changes.
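The preparation described above — passing the containing mddev into both md_unregister_thread() and md_wakeup_thread() — is what lets a later patch resolve `*threadp` under a per-disk lock, so a waker can never dereference a thread that a concurrent unregister has already freed. A userspace sketch of that intended pattern; this is not the kernel code, and the pthread mutex is a hypothetical stand-in for the planned disk-level spinlock:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures. */
struct md_thread {
    int wakeup_count;            /* stands in for THREAD_WAKEUP + wake_up() */
};

struct mddev {
    pthread_mutex_t thread_lock; /* stand-in for the planned disk-level lock */
    struct md_thread *thread;
};

/* Because the caller passes &mddev->thread rather than mddev->thread,
 * the pointer can be re-read under the per-mddev lock. */
static void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev)
{
    struct md_thread *thread;

    pthread_mutex_lock(&mddev->thread_lock);
    thread = *threadp;           /* may have become NULL concurrently */
    if (thread)
        thread->wakeup_count++;
    pthread_mutex_unlock(&mddev->thread_lock);
}

static void md_unregister_thread(struct md_thread **threadp, struct mddev *mddev)
{
    struct md_thread *thread;

    pthread_mutex_lock(&mddev->thread_lock);
    thread = *threadp;
    *threadp = NULL;             /* wakers observe NULL under the lock */
    pthread_mutex_unlock(&mddev->thread_lock);
    free(thread);                /* safe: no waker can still hold it */
}
```

This also explains why the series threads the mddev argument through call sites like `md_unregister_thread(&log->reclaim_thread, conf->mddev)`: the lock lives in the mddev, not in the md_thread being torn down.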
Signed-off-by: Yu Kuai
---
 drivers/md/dm-raid.c     |  2 +-
 drivers/md/md-cluster.c  |  8 ++++----
 drivers/md/md.c          | 13 +++++++------
 drivers/md/md.h          |  3 ++-
 drivers/md/raid1.c       |  4 ++--
 drivers/md/raid10.c      |  2 +-
 drivers/md/raid5-cache.c |  2 +-
 drivers/md/raid5.c       |  2 +-
 8 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 257c9c9f2b4d..1393c80b083b 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3729,7 +3729,7 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
 	if (!strcasecmp(argv[0], "idle") || !strcasecmp(argv[0], "frozen")) {
 		if (mddev->sync_thread) {
 			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-			md_unregister_thread(&mddev->sync_thread);
+			md_unregister_thread(&mddev->sync_thread, mddev);
 			md_reap_sync_thread(mddev);
 		}
 	} else if (decipher_sync_action(mddev, mddev->recovery) != st_idle)
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 92b0e49b4e53..4d538c9ab7b3 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -946,8 +946,8 @@ static int join(struct mddev *mddev, int nodes)
 	return 0;
 err:
 	set_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
-	md_unregister_thread(&cinfo->recovery_thread);
-	md_unregister_thread(&cinfo->recv_thread);
+	md_unregister_thread(&cinfo->recovery_thread, mddev);
+	md_unregister_thread(&cinfo->recv_thread, mddev);
 	lockres_free(cinfo->message_lockres);
 	lockres_free(cinfo->token_lockres);
 	lockres_free(cinfo->ack_lockres);
@@ -1009,8 +1009,8 @@ static int leave(struct mddev *mddev)
 	resync_bitmap(mddev);

 	set_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
-	md_unregister_thread(&cinfo->recovery_thread);
-	md_unregister_thread(&cinfo->recv_thread);
+	md_unregister_thread(&cinfo->recovery_thread, mddev);
+	md_unregister_thread(&cinfo->recv_thread, mddev);
 	lockres_free(cinfo->message_lockres);
 	lockres_free(cinfo->token_lockres);
 	lockres_free(cinfo->ack_lockres);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 4ecfd0508afb..ab9299187cfe 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4775,7 +4775,8 @@ action_store(struct mddev *mddev, const char *page, size_t len)
 			mddev_unlock(mddev);
 			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-			md_unregister_thread(&mddev->sync_thread);
+			md_unregister_thread(&mddev->sync_thread,
+					     mddev);
 			mddev_lock_nointr(mddev);
 			/*
 			 * set RECOVERY_INTR again and restore reshape
@@ -6175,7 +6176,7 @@ static void __md_stop_writes(struct mddev *mddev)
 	flush_workqueue(md_misc_wq);
 	if (mddev->sync_thread) {
 		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-		md_unregister_thread(&mddev->sync_thread);
+		md_unregister_thread(&mddev->sync_thread, mddev);
 		md_reap_sync_thread(mddev);
 	}
@@ -6215,7 +6216,7 @@ static void mddev_detach(struct mddev *mddev)
 		mddev->pers->quiesce(mddev, 1);
 		mddev->pers->quiesce(mddev, 0);
 	}
-	md_unregister_thread(&mddev->thread);
+	md_unregister_thread(&mddev->thread, mddev);
 	if (mddev->queue)
 		blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
 }
@@ -7933,7 +7934,7 @@ int md_register_thread(struct md_thread **threadp,
 }
 EXPORT_SYMBOL(md_register_thread);

-void md_unregister_thread(struct md_thread **threadp)
+void md_unregister_thread(struct md_thread **threadp, struct mddev *mddev)
 {
 	struct md_thread *thread;

@@ -9324,7 +9325,7 @@ void md_check_recovery(struct mddev *mddev)
 		 * ->spare_active and clear saved_raid_disk
 		 */
 		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-		md_unregister_thread(&mddev->sync_thread);
+		md_unregister_thread(&mddev->sync_thread, mddev);
 		md_reap_sync_thread(mddev);
 		clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
 		clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
@@ -9360,7 +9361,7 @@ void md_check_recovery(struct mddev *mddev)
 			goto unlock;
 		}
 		if (mddev->sync_thread) {
-			md_unregister_thread(&mddev->sync_thread);
+			md_unregister_thread(&mddev->sync_thread, mddev);
 			md_reap_sync_thread(mddev);
 			goto unlock;
 		}
diff --git a/drivers/md/md.h b/drivers/md/md.h
index aeb2fc6b65c7..8f4137ad2dde 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -733,7 +733,8 @@ extern void md_cluster_stop(struct mddev *mddev);
 int md_register_thread(struct md_thread **threadp,
 		       void (*run)(struct md_thread *thread),
 		       struct mddev *mddev, const char *name);
-extern void md_unregister_thread(struct md_thread **threadp);
+extern void md_unregister_thread(struct md_thread **threadp,
+				 struct mddev *mddev);
 extern void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev);
 extern void md_check_recovery(struct mddev *mddev);
 extern void md_reap_sync_thread(struct mddev *mddev);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 391ff239c711..8329a1ba9d12 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3158,7 +3158,7 @@ static int raid1_run(struct mddev *mddev)
 	 * RAID1 needs at least one disk in active
 	 */
 	if (conf->raid_disks - mddev->degraded < 1) {
-		md_unregister_thread(&conf->thread);
+		md_unregister_thread(&conf->thread, mddev);
 		ret = -EINVAL;
 		goto abort;
 	}
@@ -3185,7 +3185,7 @@ static int raid1_run(struct mddev *mddev)

 	ret = md_integrity_register(mddev);
 	if (ret) {
-		md_unregister_thread(&mddev->thread);
+		md_unregister_thread(&mddev->thread, mddev);
 		goto abort;
 	}
 	return 0;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 920e5722040f..47d18d56000e 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4293,7 +4293,7 @@ static int raid10_run(struct mddev *mddev)
 	return 0;

out_free_conf:
-	md_unregister_thread(&mddev->thread);
+	md_unregister_thread(&mddev->thread, mddev);
 	raid10_free_conf(conf);
 	mddev->private = NULL;
out:
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index d6ee6a7a83b7..588c3d1f7467 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -3166,7 +3166,7 @@ void r5l_exit_log(struct r5conf *conf)
 	/* Ensure disable_writeback_work wakes up and exits */
 	wake_up(&conf->mddev->sb_wait);
 	flush_work(&log->disable_writeback_work);
-	md_unregister_thread(&log->reclaim_thread);
+	md_unregister_thread(&log->reclaim_thread, conf->mddev);

 	conf->log = NULL;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 2c0695d41436..b9f2688b141f 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -8070,7 +8070,7 @@ static int raid5_run(struct mddev *mddev)
 	return 0;
abort:
-	md_unregister_thread(&mddev->thread);
+	md_unregister_thread(&mddev->thread, mddev);
 	print_raid5_conf(conf);
 	free_conf(conf);
 	mddev->private = NULL;

From patchwork Sat Mar 11 09:31:48 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 68040
From: Yu Kuai
To: agk@redhat.com, snitzer@kernel.org, song@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next 5/5] md: protect md_thread with a new disk level spin lock
Date: Sat, 11 Mar 2023 17:31:48 +0800
Message-Id: <20230311093148.2595222-6-yukuai1@huaweicloud.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20230311093148.2595222-1-yukuai1@huaweicloud.com>
References: <20230311093148.2595222-1-yukuai1@huaweicloud.com>
MIME-Version: 1.0
From: Yu Kuai

Our test reports a use-after-free on 'mddev->sync_thread':

T1                              T2
md_start_sync
 md_register_thread
                                raid1d
                                 md_check_recovery
                                  md_reap_sync_thread
                                   md_unregister_thread
                                    kfree
 md_wakeup_thread
  wake_up
  -> sync_thread was freed

Currently, the global spinlock 'pers_lock' is borrowed to protect
'mddev->thread'. This problem can be fixed likewise; however, similar
problems may exist for other md_threads, and I really don't like the
idea of borrowing a global lock for per-device state. This patch uses
a new disk-level spinlock to protect md_thread in the relevant APIs.

Signed-off-by: Yu Kuai
---
 drivers/md/md.c | 23 ++++++++++-------------
 drivers/md/md.h |  1 +
 2 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index ab9299187cfe..a952978884a5 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -663,6 +663,7 @@ void mddev_init(struct mddev *mddev)
 	atomic_set(&mddev->active, 1);
 	atomic_set(&mddev->openers, 0);
 	spin_lock_init(&mddev->lock);
+	spin_lock_init(&mddev->thread_lock);
 	atomic_set(&mddev->flush_pending, 0);
 	init_waitqueue_head(&mddev->sb_wait);
 	init_waitqueue_head(&mddev->recovery_wait);
@@ -801,13 +802,8 @@ void mddev_unlock(struct mddev *mddev)
 	} else
 		mutex_unlock(&mddev->reconfig_mutex);

-	/* As we've dropped the mutex we need a spinlock to
-	 * make sure the thread doesn't disappear
-	 */
-	spin_lock(&pers_lock);
 	md_wakeup_thread(&mddev->thread, mddev);
 	wake_up(&mddev->sb_wait);
-	spin_unlock(&pers_lock);
 }
 EXPORT_SYMBOL_GPL(mddev_unlock);
@@ -7895,13 +7891,16 @@ static int md_thread(void *arg)

 void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev)
 {
-	struct md_thread *thread = *threadp;
+	struct md_thread *thread;

+	spin_lock(&mddev->thread_lock);
+	thread = *threadp;
 	if (thread) {
 		pr_debug("md: waking up MD thread %s.\n", thread->tsk->comm);
 		set_bit(THREAD_WAKEUP, &thread->flags);
 		wake_up(&thread->wqueue);
 	}
+	spin_unlock(&mddev->thread_lock);
 }
 EXPORT_SYMBOL(md_wakeup_thread);
@@ -7929,7 +7928,9 @@ int md_register_thread(struct md_thread **threadp,
 		return err;
 	}

+	spin_lock(&mddev->thread_lock);
 	*threadp = thread;
+	spin_unlock(&mddev->thread_lock);
 	return 0;
 }
 EXPORT_SYMBOL(md_register_thread);
@@ -7938,18 +7939,14 @@ void md_unregister_thread(struct md_thread **threadp, struct mddev *mddev)
 {
 	struct md_thread *thread;

-	/*
-	 * Locking ensures that mddev_unlock does not wake_up a
-	 * non-existent thread
-	 */
-	spin_lock(&pers_lock);
+	spin_lock(&mddev->thread_lock);
 	thread = *threadp;
 	if (!thread) {
-		spin_unlock(&pers_lock);
+		spin_unlock(&mddev->thread_lock);
 		return;
 	}
 	*threadp = NULL;
-	spin_unlock(&pers_lock);
+	spin_unlock(&mddev->thread_lock);

 	pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
 	kthread_stop(thread->tsk);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 8f4137ad2dde..ca182d21dd8d 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -367,6 +367,7 @@ struct mddev {
 	int				new_chunk_sectors;
 	int				reshape_backwards;

+	spinlock_t			thread_lock;
 	struct md_thread		*thread;	/* management thread */
 	struct md_thread		*sync_thread;	/* doing resync or reconstruct */