Message ID | 20231204031703.3102254-1-yukuai1@huaweicloud.com |
---|---|
State | New |
Headers |
From: Yu Kuai <yukuai1@huaweicloud.com>
To: agk@redhat.com, snitzer@kernel.org, mpatocka@redhat.com, dm-devel@lists.linux.dev, song@kernel.org, yukuai3@huawei.com
Cc: janpieter.sollie@edpnet.be, linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next] md: split MD_RECOVERY_NEEDED out of mddev_resume
Date: Mon, 4 Dec 2023 11:17:03 +0800
Message-Id: <20231204031703.3102254-1-yukuai1@huaweicloud.com> |
Series |
[-next] md: split MD_RECOVERY_NEEDED out of mddev_resume
|
|
Commit Message
Yu Kuai
Dec. 4, 2023, 3:17 a.m. UTC
From: Yu Kuai <yukuai3@huawei.com>

New mddev_resume() calls are added to synchroniza IO with array
reconfiguration, however, this introduce a regression while adding it in
md_start_sync():

1) someone set MD_RECOVERY_NEEDED first;
2) daemon thread grab reconfig_mutex, then clear MD_RECOVERY_NEEDED and
   queue a new sync work;
3) daemon thread release reconfig_mutex;
4) in md_start_sync
   a) check that there are spares that can be added/removed, then suspend
      the array;
   b) remove_and_add_spares may not be called, or called without really
      add/remove spares;
   c) resume the array, then set MD_RECOVERY_NEEDED again!

Loop between 2 - 4, then mddev_suspend() will be called quite often, for
consequence, normal IO will be quite slow.

Fix this problem by spliting MD_RECOVERY_NEEDED out of mddev_resume(), so
that md_start_sync() won't set such flag and hence the loop will be broken.

Fixes: bc08041b32ab ("md: suspend array in md_start_sync() if array need reconfiguration")
Reported-and-tested-by: Janpieter Sollie <janpieter.sollie@edpnet.be>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218200
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/dm-raid.c   | 1 +
 drivers/md/md-bitmap.c | 2 ++
 drivers/md/md.c        | 6 +++++-
 drivers/md/raid5.c     | 4 ++++
 4 files changed, 12 insertions(+), 1 deletion(-)
Comments
Dear Yu,

Thank you for your patch.

Am 04.12.23 um 04:17 schrieb Yu Kuai:
> From: Yu Kuai <yukuai3@huawei.com>
>
> New mddev_resume() calls are added to synchroniza IO with array

synchronize

> reconfiguration, however, this introduce a regression while adding it in

1. Maybe: … performance regression …
2. introduce*s*

> md_start_sync():
>
> 1) someone set MD_RECOVERY_NEEDED first;

set*s*

> 2) daemon thread grab reconfig_mutex, then clear MD_RECOVERY_NEEDED and
>    queue a new sync work;

grab*s*, clear*s*, queue*s*

> 3) daemon thread release reconfig_mutex;

release*s*

> 4) in md_start_sync
>    a) check that there are spares that can be added/removed, then suspend
>       the array;
>    b) remove_and_add_spares may not be called, or called without really
>       add/remove spares;
>    c) resume the array, then set MD_RECOVERY_NEEDED again!
>
> Loop between 2 - 4, then mddev_suspend() will be called quite often, for
> consequence, normal IO will be quite slow.

It’d be great if you could document the exact “test case”, and numbers.

> Fix this problem by spliting MD_RECOVERY_NEEDED out of mddev_resume(), so

split*t*ing

> that md_start_sync() won't set such flag and hence the loop will be broken.
>
> Fixes: bc08041b32ab ("md: suspend array in md_start_sync() if array need reconfiguration")
> Reported-and-tested-by: Janpieter Sollie <janpieter.sollie@edpnet.be>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218200
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  drivers/md/dm-raid.c   | 1 +
>  drivers/md/md-bitmap.c | 2 ++
>  drivers/md/md.c        | 6 +++++-
>  drivers/md/raid5.c     | 4 ++++
>  4 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
> index eb009d6bb03a..e9c0d70f7fe5 100644
> --- a/drivers/md/dm-raid.c
> +++ b/drivers/md/dm-raid.c
> @@ -4059,6 +4059,7 @@ static void raid_resume(struct dm_target *ti)
>  		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
>  		mddev->ro = 0;
>  		mddev->in_sync = 0;
> +		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  		mddev_unlock_and_resume(mddev);
>  	}
>  }
> diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
> index 9672f75c3050..16112750ee64 100644
> --- a/drivers/md/md-bitmap.c
> +++ b/drivers/md/md-bitmap.c
> @@ -2428,6 +2428,7 @@ location_store(struct mddev *mddev, const char *buf, size_t len)
>  	}
>  	rv = 0;
>  out:
> +	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>  	if (rv)
>  		return rv;
> @@ -2571,6 +2572,7 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len)
>  	if (old_mwb != backlog)
>  		md_bitmap_update_sb(mddev->bitmap);
>
> +	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>  	return len;
>  }
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 4b1e8007dd15..48a1b12f3c2c 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -515,7 +515,6 @@ void mddev_resume(struct mddev *mddev)
>  	percpu_ref_resurrect(&mddev->active_io);
>  	wake_up(&mddev->sb_wait);
>
> -	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	md_wakeup_thread(mddev->thread);
>  	md_wakeup_thread(mddev->sync_thread);	/* possibly kick off a reshape */
>
> @@ -4146,6 +4145,7 @@ level_store(struct mddev *mddev, const char *buf, size_t len)
>  	md_new_event();
>  	rv = len;
> out_unlock:
> +	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>  	return rv;
>  }
> @@ -4652,6 +4652,8 @@ new_dev_store(struct mddev *mddev, const char *buf, size_t len)
>  out:
>  	if (err)
>  		export_rdev(rdev, mddev);
> +	else
> +		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>  	if (!err)
>  		md_new_event();
> @@ -5533,6 +5535,7 @@ serialize_policy_store(struct mddev *mddev, const char *buf, size_t len)
>  		mddev_destroy_serial_pool(mddev, NULL);
>  	mddev->serialize_policy = value;
> unlock:
> +	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>  	return err ?: len;
>  }
> @@ -6593,6 +6596,7 @@ static void autorun_devices(int part)
>  				export_rdev(rdev, mddev);
>  		}
>  		autorun_array(mddev);
> +		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  		mddev_unlock_and_resume(mddev);
>  	}
>  	/* on success, candidates will be empty, on error
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 42ba3581cfea..f88f92517a18 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -6989,6 +6989,7 @@ raid5_store_stripe_size(struct mddev *mddev, const char *page, size_t len)
>  	mutex_unlock(&conf->cache_size_mutex);
>
> out_unlock:
> +	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>  	return err ?: len;
>  }
> @@ -7090,6 +7091,7 @@ raid5_store_skip_copy(struct mddev *mddev, const char *page, size_t len)
>  		else
>  			blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
>  	}
> +	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>  	return err ?: len;
>  }
> @@ -7169,6 +7171,7 @@ raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
>  			kfree(old_groups);
>  		}
>  	}
> +	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>
>  	return err ?: len;
> @@ -8920,6 +8923,7 @@ static int raid5_change_consistency_policy(struct mddev *mddev, const char *buf)
>  	if (!err)
>  		md_update_sb(mddev, 1);
>
> +	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	mddev_unlock_and_resume(mddev);
>
>  	return err;

Acked-by: Paul Menzel <pmenzel@molgen.mpg.de>

Kind regards,

Paul
On Sun, Dec 3, 2023 at 7:18 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> New mddev_resume() calls are added to synchroniza IO with array
> reconfiguration, however, this introduce a regression while adding it in
> md_start_sync():
>
> 1) someone set MD_RECOVERY_NEEDED first;
> 2) daemon thread grab reconfig_mutex, then clear MD_RECOVERY_NEEDED and
>    queue a new sync work;
> 3) daemon thread release reconfig_mutex;
> 4) in md_start_sync
>    a) check that there are spares that can be added/removed, then suspend
>       the array;
>    b) remove_and_add_spares may not be called, or called without really
>       add/remove spares;
>    c) resume the array, then set MD_RECOVERY_NEEDED again!
>
> Loop between 2 - 4, then mddev_suspend() will be called quite often, for
> consequence, normal IO will be quite slow.
>
> Fix this problem by spliting MD_RECOVERY_NEEDED out of mddev_resume(), so
> that md_start_sync() won't set such flag and hence the loop will be broken.

I hope we don't leak set_bit MD_RECOVERY_NEEDED to all call
sites of mddev_resume().

How about something like the following instead?

Please also incorporate feedback from Paul in the next version.

Thanks,
Song

diff --git i/drivers/md/md.c w/drivers/md/md.c
index c94373d64f2c..2d53e1b57070 100644
--- i/drivers/md/md.c
+++ w/drivers/md/md.c
@@ -490,7 +490,7 @@ int mddev_suspend(struct mddev *mddev, bool interruptible)
 }
 EXPORT_SYMBOL_GPL(mddev_suspend);

-void mddev_resume(struct mddev *mddev)
+static void __mddev_resume(struct mddev *mddev, bool recovery_needed)
 {
 	lockdep_assert_not_held(&mddev->reconfig_mutex);

@@ -507,12 +507,18 @@ void mddev_resume(struct mddev *mddev)
 	percpu_ref_resurrect(&mddev->active_io);
 	wake_up(&mddev->sb_wait);

-	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+	if (recovery_needed)
+		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	md_wakeup_thread(mddev->thread);
 	md_wakeup_thread(mddev->sync_thread);	/* possibly kick off a reshape */

 	mutex_unlock(&mddev->suspend_mutex);
 }
+
+void mddev_resume(struct mddev *mddev)
+{
+	__mddev_resume(mddev, true);
+}
 EXPORT_SYMBOL_GPL(mddev_resume);

 /*
@@ -9403,7 +9409,9 @@ static void md_start_sync(struct work_struct *ws)
 		goto not_running;
 	}

-	suspend ? mddev_unlock_and_resume(mddev) : mddev_unlock(mddev);
+	mddev_unlock(mddev);
+	if (suspend)
+		__mddev_resume(mddev, false);
 	md_wakeup_thread(mddev->sync_thread);
 	sysfs_notify_dirent_safe(mddev->sysfs_action);
 	md_new_event();
@@ -9415,7 +9423,9 @@ static void md_start_sync(struct work_struct *ws)
 	clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
 	clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
 	clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
-	suspend ? mddev_unlock_and_resume(mddev) : mddev_unlock(mddev);
+	mddev_unlock(mddev);
+	if (suspend)
+		__mddev_resume(mddev, false);

 	wake_up(&resync_wait);
 	if (test_and_clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery) &&
Hi,

On 2023/12/06 16:30, Song Liu wrote:
> On Sun, Dec 3, 2023 at 7:18 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> New mddev_resume() calls are added to synchroniza IO with array
>> reconfiguration, however, this introduce a regression while adding it in
>> md_start_sync():
>>
>> 1) someone set MD_RECOVERY_NEEDED first;
>> 2) daemon thread grab reconfig_mutex, then clear MD_RECOVERY_NEEDED and
>>    queue a new sync work;
>> 3) daemon thread release reconfig_mutex;
>> 4) in md_start_sync
>>    a) check that there are spares that can be added/removed, then suspend
>>       the array;
>>    b) remove_and_add_spares may not be called, or called without really
>>       add/remove spares;
>>    c) resume the array, then set MD_RECOVERY_NEEDED again!
>>
>> Loop between 2 - 4, then mddev_suspend() will be called quite often, for
>> consequence, normal IO will be quite slow.
>>
>> Fix this problem by spliting MD_RECOVERY_NEEDED out of mddev_resume(), so
>> that md_start_sync() won't set such flag and hence the loop will be broken.
>
> I hope we don't leak set_bit MD_RECOVERY_NEEDED to all call
> sites of mddev_resume().

There are also some other mddev_resume() that is added later and don't
need recovery, so md_start_sync() is not the only place:

- md_setup_drive
- rdev_attr_store
- suspend_lo_store
- suspend_hi_store
- autorun_devices
- md_ioct
- r5c_disable_writeback_async
- error path from new_dev_store(), ...

I'm not sure add a new helper is a good idea, because all above apis
should use new helper as well.

>
> How about something like the following instead?
>
> Please also incorporate feedback from Paul in the next version.

Of course.

Thanks,
Kuai

>
> Thanks,
> Song
>
> diff --git i/drivers/md/md.c w/drivers/md/md.c
> index c94373d64f2c..2d53e1b57070 100644
> --- i/drivers/md/md.c
> +++ w/drivers/md/md.c
> @@ -490,7 +490,7 @@ int mddev_suspend(struct mddev *mddev, bool interruptible)
>  }
>  EXPORT_SYMBOL_GPL(mddev_suspend);
>
> -void mddev_resume(struct mddev *mddev)
> +static void __mddev_resume(struct mddev *mddev, bool recovery_needed)
>  {
>  	lockdep_assert_not_held(&mddev->reconfig_mutex);
>
> @@ -507,12 +507,18 @@ void mddev_resume(struct mddev *mddev)
>  	percpu_ref_resurrect(&mddev->active_io);
>  	wake_up(&mddev->sb_wait);
>
> -	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
> +	if (recovery_needed)
> +		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>  	md_wakeup_thread(mddev->thread);
>  	md_wakeup_thread(mddev->sync_thread);	/* possibly kick off a reshape */
>
>  	mutex_unlock(&mddev->suspend_mutex);
>  }
> +
> +void mddev_resume(struct mddev *mddev)
> +{
> +	__mddev_resume(mddev, true);
> +}
>  EXPORT_SYMBOL_GPL(mddev_resume);
>
>  /*
> @@ -9403,7 +9409,9 @@ static void md_start_sync(struct work_struct *ws)
>  		goto not_running;
>  	}
>
> -	suspend ? mddev_unlock_and_resume(mddev) : mddev_unlock(mddev);
> +	mddev_unlock(mddev);
> +	if (suspend)
> +		__mddev_resume(mddev, false);
>  	md_wakeup_thread(mddev->sync_thread);
>  	sysfs_notify_dirent_safe(mddev->sysfs_action);
>  	md_new_event();
> @@ -9415,7 +9423,9 @@ static void md_start_sync(struct work_struct *ws)
>  	clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
>  	clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
>  	clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
> -	suspend ? mddev_unlock_and_resume(mddev) : mddev_unlock(mddev);
> +	mddev_unlock(mddev);
> +	if (suspend)
> +		__mddev_resume(mddev, false);
>
>  	wake_up(&resync_wait);
>  	if (test_and_clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery) &&
>
> .
>
On Wed, Dec 6, 2023 at 3:36 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> 在 2023/12/06 16:30, Song Liu 写道:
> > On Sun, Dec 3, 2023 at 7:18 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >>
> >> From: Yu Kuai <yukuai3@huawei.com>
> >>
> >> New mddev_resume() calls are added to synchroniza IO with array
> >> reconfiguration, however, this introduce a regression while adding it in
> >> md_start_sync():
> >>
> >> 1) someone set MD_RECOVERY_NEEDED first;
> >> 2) daemon thread grab reconfig_mutex, then clear MD_RECOVERY_NEEDED and
> >>    queue a new sync work;
> >> 3) daemon thread release reconfig_mutex;
> >> 4) in md_start_sync
> >>    a) check that there are spares that can be added/removed, then suspend
> >>       the array;
> >>    b) remove_and_add_spares may not be called, or called without really
> >>       add/remove spares;
> >>    c) resume the array, then set MD_RECOVERY_NEEDED again!
> >>
> >> Loop between 2 - 4, then mddev_suspend() will be called quite often, for
> >> consequence, normal IO will be quite slow.
> >>
> >> Fix this problem by spliting MD_RECOVERY_NEEDED out of mddev_resume(), so
> >> that md_start_sync() won't set such flag and hence the loop will be broken.
> >
> > I hope we don't leak set_bit MD_RECOVERY_NEEDED to all call
> > sites of mddev_resume().
>
> There are also some other mddev_resume() that is added later and don't
> need recovery, so md_start_sync() is not the only place:
>
> - md_setup_drive
> - rdev_attr_store
> - suspend_lo_store
> - suspend_hi_store
> - autorun_devices
> - md_ioct
> - r5c_disable_writeback_async
> - error path from new_dev_store(), ...
>
> I'm not sure add a new helper is a good idea, because all above apis
> should use new helper as well.

I think for most of these call sites, it is OK to set MD_RECOVERY_NEEDED
(although it is not needed), and md_start_sync() is the only one that may
trigger "loop between 2 - 4" scenario. Did I miss something?

It is already rc4, so we need to send the fix soon.

Thanks,
Song
Hi,

On 2023/12/07 1:24, Song Liu wrote:
> On Wed, Dec 6, 2023 at 3:36 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2023/12/06 16:30, Song Liu wrote:
>>> On Sun, Dec 3, 2023 at 7:18 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>
>>>> From: Yu Kuai <yukuai3@huawei.com>
>>>>
>>>> New mddev_resume() calls are added to synchroniza IO with array
>>>> reconfiguration, however, this introduce a regression while adding it in
>>>> md_start_sync():
>>>>
>>>> 1) someone set MD_RECOVERY_NEEDED first;
>>>> 2) daemon thread grab reconfig_mutex, then clear MD_RECOVERY_NEEDED and
>>>>    queue a new sync work;
>>>> 3) daemon thread release reconfig_mutex;
>>>> 4) in md_start_sync
>>>>    a) check that there are spares that can be added/removed, then suspend
>>>>       the array;
>>>>    b) remove_and_add_spares may not be called, or called without really
>>>>       add/remove spares;
>>>>    c) resume the array, then set MD_RECOVERY_NEEDED again!
>>>>
>>>> Loop between 2 - 4, then mddev_suspend() will be called quite often, for
>>>> consequence, normal IO will be quite slow.
>>>>
>>>> Fix this problem by spliting MD_RECOVERY_NEEDED out of mddev_resume(), so
>>>> that md_start_sync() won't set such flag and hence the loop will be broken.
>>>
>>> I hope we don't leak set_bit MD_RECOVERY_NEEDED to all call
>>> sites of mddev_resume().
>>
>> There are also some other mddev_resume() that is added later and don't
>> need recovery, so md_start_sync() is not the only place:
>>
>> - md_setup_drive
>> - rdev_attr_store
>> - suspend_lo_store
>> - suspend_hi_store
>> - autorun_devices
>> - md_ioct
>> - r5c_disable_writeback_async
>> - error path from new_dev_store(), ...
>>
>> I'm not sure add a new helper is a good idea, because all above apis
>> should use new helper as well.
>
> I think for most of these call sites, it is OK to set MD_RECOVERY_NEEDED
> (although it is not needed), and md_start_sync() is the only one that may
> trigger "loop between 2 - 4" scenario. Did I miss something?

Yes, it's the only problematic one. I'll send v2.

Thanks,
Kuai

>
> It is already rc4, so we need to send the fix soon.
>
> Thanks,
> Song
> .
>
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index eb009d6bb03a..e9c0d70f7fe5 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -4059,6 +4059,7 @@ static void raid_resume(struct dm_target *ti)
 		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 		mddev->ro = 0;
 		mddev->in_sync = 0;
+		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 		mddev_unlock_and_resume(mddev);
 	}
 }
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 9672f75c3050..16112750ee64 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -2428,6 +2428,7 @@ location_store(struct mddev *mddev, const char *buf, size_t len)
 	}
 	rv = 0;
 out:
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 	if (rv)
 		return rv;
@@ -2571,6 +2572,7 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len)
 	if (old_mwb != backlog)
 		md_bitmap_update_sb(mddev->bitmap);
 
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 	return len;
 }
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 4b1e8007dd15..48a1b12f3c2c 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -515,7 +515,6 @@ void mddev_resume(struct mddev *mddev)
 	percpu_ref_resurrect(&mddev->active_io);
 	wake_up(&mddev->sb_wait);
 
-	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	md_wakeup_thread(mddev->thread);
 	md_wakeup_thread(mddev->sync_thread);	/* possibly kick off a reshape */
 
@@ -4146,6 +4145,7 @@ level_store(struct mddev *mddev, const char *buf, size_t len)
 	md_new_event();
 	rv = len;
 out_unlock:
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 	return rv;
 }
@@ -4652,6 +4652,8 @@ new_dev_store(struct mddev *mddev, const char *buf, size_t len)
 out:
 	if (err)
 		export_rdev(rdev, mddev);
+	else
+		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 	if (!err)
 		md_new_event();
@@ -5533,6 +5535,7 @@ serialize_policy_store(struct mddev *mddev, const char *buf, size_t len)
 		mddev_destroy_serial_pool(mddev, NULL);
 	mddev->serialize_policy = value;
 unlock:
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 	return err ?: len;
 }
@@ -6593,6 +6596,7 @@ static void autorun_devices(int part)
 				export_rdev(rdev, mddev);
 		}
 		autorun_array(mddev);
+		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 		mddev_unlock_and_resume(mddev);
 	}
 	/* on success, candidates will be empty, on error
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 42ba3581cfea..f88f92517a18 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6989,6 +6989,7 @@ raid5_store_stripe_size(struct mddev *mddev, const char *page, size_t len)
 	mutex_unlock(&conf->cache_size_mutex);
 
 out_unlock:
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 	return err ?: len;
 }
@@ -7090,6 +7091,7 @@ raid5_store_skip_copy(struct mddev *mddev, const char *page, size_t len)
 		else
 			blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
 	}
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 	return err ?: len;
 }
@@ -7169,6 +7171,7 @@ raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
 			kfree(old_groups);
 		}
 	}
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 
 	return err ?: len;
@@ -8920,6 +8923,7 @@ static int raid5_change_consistency_policy(struct mddev *mddev, const char *buf)
 	if (!err)
 		md_update_sb(mddev, 1);
 
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	mddev_unlock_and_resume(mddev);
 
 	return err;