Message ID | 20230322064122.2384589-2-yukuai1@huaweicloud.com |
---|---|
State | New |
Headers |
From: Yu Kuai <yukuai1@huaweicloud.com>
To: guoqing.jiang@linux.dev, logang@deltatee.com, pmenzel@molgen.mpg.de, agk@redhat.com, snitzer@kernel.org, song@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 1/6] Revert "md: unlock mddev before reap sync_thread in action_store"
Date: Wed, 22 Mar 2023 14:41:17 +0800
Message-Id: <20230322064122.2384589-2-yukuai1@huaweicloud.com>
In-Reply-To: <20230322064122.2384589-1-yukuai1@huaweicloud.com>
References: <20230322064122.2384589-1-yukuai1@huaweicloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit |
Series |
md: fix that MD_RECOVERY_RUNNING can be cleared while sync_thread is still running
|
Commit Message
Yu Kuai
March 22, 2023, 6:41 a.m. UTC
From: Yu Kuai <yukuai3@huawei.com>

This reverts commit 9dfbdafda3b34e262e43e786077bab8e476a89d1.

Because it will introduce a defect that sync_thread can be running while
MD_RECOVERY_RUNNING is cleared, which will cause some unexpected problems,
for example:

list_add corruption. prev->next should be next (ffff0001ac1daba0), but was ffff0000ce1a02a0. (prev=ffff0000ce1a02a0).
Call trace:
 __list_add_valid+0xfc/0x140
 insert_work+0x78/0x1a0
 __queue_work+0x500/0xcf4
 queue_work_on+0xe8/0x12c
 md_check_recovery+0xa34/0xf30
 raid10d+0xb8/0x900 [raid10]
 md_thread+0x16c/0x2cc
 kthread+0x1a4/0x1ec
 ret_from_fork+0x10/0x18

This is because the work is requeued while it is still inside the workqueue:

t1:                             t2:
action_store
 mddev_lock
 if (mddev->sync_thread)
  mddev_unlock
  md_unregister_thread
  // first sync_thread is done
                                md_check_recovery
                                 mddev_try_lock
                                 /*
                                  * once MD_RECOVERY_DONE is set, new sync_thread
                                  * can start.
                                  */
                                 set_bit(MD_RECOVERY_RUNNING, &mddev->recovery)
                                 INIT_WORK(&mddev->del_work, md_start_sync)
                                 queue_work(md_misc_wq, &mddev->del_work)
                                  test_and_set_bit(WORK_STRUCT_PENDING_BIT, ...)
                                  // set pending bit
                                  insert_work
                                   list_add_tail
                                 mddev_unlock
  mddev_lock_nointr
  md_reap_sync_thread
  // MD_RECOVERY_RUNNING is cleared
 mddev_unlock

t3:

// before the queued work from t2 starts
md_check_recovery
 // MD_RECOVERY_RUNNING is not set, a new sync_thread can be started
 INIT_WORK(&mddev->del_work, md_start_sync)
  work->data = 0
  // work pending bit is cleared
 queue_work(md_misc_wq, &mddev->del_work)
  insert_work
   list_add_tail
   // list is corrupted

This patch reverts the commit to fix the problem; the deadlock the
reverted commit tried to fix will be fixed in the following patches.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/dm-raid.c |  1 -
 drivers/md/md.c      | 19 ++-----------------
 2 files changed, 2 insertions(+), 18 deletions(-)
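To see the corruption outside the kernel, here is a minimal user-space C model. It is an illustration only: struct work, init_work(), and queue_work() are simplified stand-ins that mimic what INIT_WORK() and queue_work() do to work->data (the pending bit) and work->entry (the list node), not the real workqueue implementation. Re-initializing a work item that is still queued clears the pending bit, so a second queue_work() links an already-linked node and trips the same check that __list_add_valid() reports in the splat above.

#include <stdio.h>
#include <stdbool.h>

/* An intrusive list node plus a pending flag, loosely mirroring the
 * kernel's work_struct and WORK_STRUCT_PENDING_BIT. */
struct work {
        struct work *next, *prev;
        bool pending;
};

static struct work head = { &head, &head, false };

/* Models INIT_WORK(): clears the pending bit and re-initializes the
 * list entry to point at itself, even if the node is still queued. */
static void init_work(struct work *w)
{
        w->next = w->prev = w;
        w->pending = false;
}

/* Models queue_work(): test_and_set_bit(PENDING) then list_add_tail(),
 * with __list_add_valid()'s sanity check. */
static bool queue_work(struct work *w)
{
        struct work *prev = head.prev;

        if (w->pending)
                return false;   /* already queued, nothing to do */
        w->pending = true;

        if (prev->next != &head)        /* __list_add_valid() */
                printf("list_add corruption. prev->next should be %p, but was %p. (prev=%p)\n",
                       (void *)&head, (void *)prev->next, (void *)prev);

        w->prev = prev;
        w->next = &head;
        prev->next = w;
        head.prev = w;
        return true;
}

int main(void)
{
        struct work del_work;

        init_work(&del_work);
        queue_work(&del_work);  /* t2: queued, still linked into the list */

        /* t3: md_check_recovery() runs again before the worker dequeues
         * the item; INIT_WORK() wipes the pending bit, so the second
         * queue_work() re-links a node that is already on the list. */
        init_work(&del_work);
        queue_work(&del_work);  /* prints the corruption message */
        return 0;
}

Compiled and run as-is, this prints the analogue of the kernel's "list_add corruption" message, with prev->next equal to prev, exactly the self-linked signature in the trace above.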
Comments
On 3/22/23 14:41, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> This reverts commit 9dfbdafda3b34e262e43e786077bab8e476a89d1.
>
> Because it will introduce a defect that sync_thread can be running while
> MD_RECOVERY_RUNNING is cleared, which will cause some unexpected problems,
> for example:
>
> list_add corruption. prev->next should be next (ffff0001ac1daba0), but was ffff0000ce1a02a0. (prev=ffff0000ce1a02a0).
> Call trace:
>  __list_add_valid+0xfc/0x140
>  insert_work+0x78/0x1a0
>  __queue_work+0x500/0xcf4
>  queue_work_on+0xe8/0x12c
>  md_check_recovery+0xa34/0xf30
>  raid10d+0xb8/0x900 [raid10]
>  md_thread+0x16c/0x2cc
>  kthread+0x1a4/0x1ec
>  ret_from_fork+0x10/0x18
>
> This is because the work is requeued while it is still inside the workqueue:

If the workqueue subsystem can have such a problem because of an md flag,
then I have to think workqueue is fragile.

> t1:                             t2:
> action_store
>  mddev_lock
>  if (mddev->sync_thread)
>   mddev_unlock
>   md_unregister_thread
>   // first sync_thread is done
>                                 md_check_recovery
>                                  mddev_try_lock
>                                  /*
>                                   * once MD_RECOVERY_DONE is set, new sync_thread
>                                   * can start.
>                                   */
>                                  set_bit(MD_RECOVERY_RUNNING, &mddev->recovery)
>                                  INIT_WORK(&mddev->del_work, md_start_sync)
>                                  queue_work(md_misc_wq, &mddev->del_work)
>                                   test_and_set_bit(WORK_STRUCT_PENDING_BIT, ...)

Assume you mean below,

1551         if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
1552                 __queue_work(cpu, wq, work);
1553                 ret = true;
1554         }

Could you explain how the same work can be re-queued? Isn't the PENDING_BIT
already set in t3? I believe queue_work shouldn't do that per the comment,
but I am not an expert ...

    Returns %false if @work was already on a queue, %true otherwise.

>                                   // set pending bit
>                                   insert_work
>                                    list_add_tail
>                                  mddev_unlock
>   mddev_lock_nointr
>   md_reap_sync_thread
>   // MD_RECOVERY_RUNNING is cleared
>  mddev_unlock
>
> t3:
>
> // before the queued work from t2 starts
> md_check_recovery
>  // MD_RECOVERY_RUNNING is not set, a new sync_thread can be started
>  INIT_WORK(&mddev->del_work, md_start_sync)
>   work->data = 0
>   // work pending bit is cleared
>  queue_work(md_misc_wq, &mddev->del_work)
>   insert_work
>    list_add_tail
>    // list is corrupted
>
> This patch reverts the commit to fix the problem; the deadlock the
> reverted commit tried to fix will be fixed in the following patches.

Please cc the previous users who encountered the problem so they can test
the second patch.

And can you share the test which can trigger the re-queue issue? I'd like
to try with the latest mainline such as 6.3-rc3, and your test is not only
run against the 5.10 kernel as you described before, right?

Thanks,
Guoqing
Hi,

On 2023/03/22 15:19, Guoqing Jiang wrote:
>
> On 3/22/23 14:41, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> This reverts commit 9dfbdafda3b34e262e43e786077bab8e476a89d1.
>>
>> [...]
>>
>> This is because the work is requeued while it is still inside the workqueue:
>
> If the workqueue subsystem can have such a problem because of an md flag,
> then I have to think workqueue is fragile.
>
>> [...]
>>                                  INIT_WORK(&mddev->del_work, md_start_sync)
>>                                  queue_work(md_misc_wq, &mddev->del_work)
>>                                   test_and_set_bit(WORK_STRUCT_PENDING_BIT, ...)
>
> Assume you mean below,
>
> 1551         if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
> 1552                 __queue_work(cpu, wq, work);
> 1553                 ret = true;
> 1554         }
>
> Could you explain how the same work can be re-queued? Isn't the PENDING_BIT
> already set in t3? I believe queue_work shouldn't do that per the comment,
> but I am not an expert ...

This is not related to workqueue, it is just because raid10 reinitializes
the work that is already queued, like I described later in t3:

t2:
md_check_recovery:
 INIT_WORK -> clear pending
 queue_work -> set pending
 list_add_tail
 ...

t3: -> work is still pending
md_check_recovery:
 INIT_WORK -> clear pending
 queue_work -> set pending
 list_add_tail -> list is corrupted

>
>     Returns %false if @work was already on a queue, %true otherwise.
>
>> [...]
>>
>> This patch reverts the commit to fix the problem; the deadlock the
>> reverted commit tried to fix will be fixed in the following patches.
>
> Please cc the previous users who encountered the problem so they can test
> the second patch.

Ok, cc Marc. Can you try whether this patchset fixes the problem you
reported in the following thread?

md_raid: mdX_raid6 looping after sync_action "check" to "idle" transition

>
> And can you share the test which can trigger the re-queue issue? I'd like
> to try with the latest mainline such as 6.3-rc3, and your test is not only
> run against the 5.10 kernel as you described before, right?

Of course, our 5.10 and mainline are the same; here are the tests:

First, the deadlock can be reproduced reliably, the test script is simple:

mdadm -Cv /dev/md0 -n 4 -l10 /dev/sd[abcd]

fio -filename=/dev/md0 -rw=randwrite -direct=1 -name=a -bs=4k -numjobs=16 -iodepth=16 &

echo -1 > /sys/kernel/debug/fail_make_request/times
echo 1 > /sys/kernel/debug/fail_make_request/probability
echo 1 > /sys/block/sda/make-it-fail

{
        while true; do
                mdadm -f /dev/md0 /dev/sda
                mdadm -r /dev/md0 /dev/sda
                mdadm --zero-superblock /dev/sda
                mdadm -a /dev/md0 /dev/sda
                sleep 2
        done
} &

{
        while true; do
                mdadm -f /dev/md0 /dev/sdd
                mdadm -r /dev/md0 /dev/sdd
                mdadm --zero-superblock /dev/sdd
                mdadm -a /dev/md0 /dev/sdd
                sleep 10
        done
} &

{
        while true; do
                echo frozen > /sys/block/md0/md/sync_action
                echo idle > /sys/block/md0/md/sync_action
                sleep 0.1
        done
} &

Then, the problem that MD_RECOVERY_RUNNING can be cleared can't be
reproduced reliably; usually it takes 2+ days to trigger, and each time
the symptoms can be different. I hacked the kernel and added some BUG_ON
checks on MD_RECOVERY_RUNNING in the attached patch; the following test
can trigger the BUG_ON:

mdadm -Cv /dev/md0 -e1.0 -n 4 -l 10 /dev/sd{a..d} --run
sleep 5
echo 1 > /sys/module/md_mod/parameters/set_delay
echo idle > /sys/block/md0/md/sync_action &
sleep 5
echo "want_replacement" > /sys/block/md0/md/dev-sdd/state

test result:

[  228.390237] md_check_recovery: running is set
[  228.391376] md_check_recovery: queue new sync thread
[  233.671041] action_store unregister success! delay 10s
[  233.689276] md_check_recovery: running is set
[  238.722448] md_check_recovery: running is set
[  238.723328] md_check_recovery: queue new sync thread
[  238.724851] md_do_sync: before new wor, sleep 10s
[  239.725818] md_do_sync: delay done
[  243.674828] action_store delay done
[  243.700102] md_reap_sync_thread: running is cleared!
[  243.748703] ------------[ cut here ]------------
[  243.749656] kernel BUG at drivers/md/md.c:9084!
[  243.750548] invalid opcode: 0000 [#1] PREEMPT SMP
[  243.752028] CPU: 6 PID: 1495 Comm: md0_resync Not tainted 6.3.0-rc1-next-20230310-00001-g4b3965bcb967-dirty #47
[  243.755030] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190727_073836-buildvm-ppc64le-16.ppc.fedoraproject.org-3.fc31 04/01/2014
[  243.758516] RIP: 0010:md_do_sync+0x16a9/0x1b00
[  243.759583] Code: ff 48 83 05 60 ce a7 0c 01 e9 8d f9 ff ff 48 83 05 13 ce a7 0c 01 48 c7 c6 e9 e0 29 83 e9 3b f9 ff ff 48 83 05 5f d0 a7 0c 01 <0f> 0b 48 83 05 5d d0 a7 0c 01 e8 f8 d5 0b0
[  243.763661] RSP: 0018:ffffc90003847d50 EFLAGS: 00010202
[  243.764212] RAX: 0000000000000028 RBX: ffff88817b529000 RCX: 0000000000000000
[  243.764936] RDX: 0000000000000000 RSI: 0000000000000206 RDI: ffff888100040740
[  243.765648] RBP: 00000000002d6780 R08: 0101010101010101 R09: ffff888165671d80
[  243.766352] R10: ffffffff8ad6096c R11: ffff88816fcfa9f0 R12: 0000000000000001
[  243.767066] R13: ffff888173920040 R14: ffff88817b529000 R15: 0000000000187100
[  243.767781] FS:  0000000000000000(0000) GS:ffff888ffef80000(0000) knlGS:0000000000000000
[  243.768588] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  243.769172] CR2: 00005599effa8451 CR3: 00000001663e6000 CR4: 00000000000006e0
[  243.769888] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  243.770598] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  243.771300] Call Trace:
[  243.771555]  <TASK>
[  243.771779]  ? kvm_clock_read+0x14/0x30
[  243.772169]  ? kvm_sched_clock_read+0x9/0x20
[  243.772611]  ? sched_clock_cpu+0x21/0x330
[  243.773023]  md_thread+0x2ec/0x300
[  243.773373]  ? md_write_start+0x420/0x420
[  243.773845]  kthread+0x13e/0x1a0
[  243.774210]  ? kthread_exit+0x50/0x50
[  243.774591]  ret_from_fork+0x1f/0x30

> Thanks,
> Guoqing

From 0f82a9298db4b3711863022dc0805c908db3bb98 Mon Sep 17 00:00:00 2001
From: Li Nan <linan122@huawei.com>
Date: Mon, 20 Mar 2023 16:53:08 +0800
Subject: [PATCH] echo idle

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 546b1b81eb28..196810067f29 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4753,6 +4753,7 @@ action_show(struct mddev *mddev, char *page)
         return sprintf(page, "%s\n", type);
 }
 
+static bool set_delay = false;
 static ssize_t
 action_store(struct mddev *mddev, const char *page, size_t len)
 {
@@ -4775,6 +4776,11 @@ action_store(struct mddev *mddev, const char *page, size_t len)
                         mddev_unlock(mddev);
                         set_bit(MD_RECOVERY_INTR, &mddev->recovery);
                         md_unregister_thread(&mddev->sync_thread);
+                        if (set_delay) {
+                                printk("%s unregister success! delay 10s\n", __func__);
+                                mdelay(10000);
+                                printk("%s delay done\n", __func__);
+                        }
                         mddev_lock_nointr(mddev);
                         /*
                          * set RECOVERY_INTR again and restore reshape
@@ -8700,6 +8706,13 @@ void md_do_sync(struct md_thread *thread)
         struct blk_plug plug;
         int ret;
 
+        if (set_delay) {
+                printk("%s: before new wor, sleep 10s\n", __func__);
+                mdelay(1000);
+                printk("%s: delay done\n", __func__);
+        }
+        BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
+
         /* just incase thread restarts... */
         if (test_bit(MD_RECOVERY_DONE, &mddev->recovery) ||
             test_bit(MD_RECOVERY_WAIT, &mddev->recovery))
@@ -8899,6 +8912,7 @@ void md_do_sync(struct md_thread *thread)
 
                 skipped = 0;
 
+                BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
                 if (!test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
                     ((mddev->curr_resync > mddev->curr_resync_completed &&
                       (mddev->curr_resync - mddev->curr_resync_completed)
@@ -9067,6 +9081,7 @@ void md_do_sync(struct md_thread *thread)
         /* set CHANGE_PENDING here since maybe another update is needed,
          * so other nodes are informed. It should be harmless for normal
          * raid */
+        BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
         set_mask_bits(&mddev->sb_flags, 0,
                       BIT(MD_SB_CHANGE_PENDING) | BIT(MD_SB_CHANGE_DEVS));
 
@@ -9361,6 +9376,7 @@ void md_check_recovery(struct mddev *mddev)
                         mddev->curr_resync_completed = 0;
                         spin_lock(&mddev->lock);
                         set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+                        printk("%s: running is set\n", __func__);
                         spin_unlock(&mddev->lock);
                         /* Clear some bits that don't mean anything, but
                          * might be left set
@@ -9405,6 +9421,9 @@ void md_check_recovery(struct mddev *mddev)
                                  */
                                 md_bitmap_write_all(mddev->bitmap);
                         }
+
+                        printk("%s: queue new sync thread\n", __func__);
+                        BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
                         INIT_WORK(&mddev->del_work, md_start_sync);
                         queue_work(md_misc_wq, &mddev->del_work);
                         goto unlock;
@@ -9463,6 +9482,7 @@ void md_reap_sync_thread(struct mddev *mddev)
         if (test_and_clear_bit(MD_CLUSTER_RESYNC_LOCKED, &mddev->flags))
                 md_cluster_ops->resync_finish(mddev);
         clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+        printk("%s: running is cleared!\n", __func__);
         clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
         clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
         clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
@@ -9956,6 +9976,7 @@ module_param_call(start_ro, set_ro, get_ro, NULL, S_IRUSR|S_IWUSR);
 module_param(start_dirty_degraded, int, S_IRUGO|S_IWUSR);
 module_param_call(new_array, add_named_array, NULL, NULL, S_IWUSR);
 module_param(create_on_open, bool, S_IRUSR|S_IWUSR);
+module_param(set_delay, bool, S_IRUSR|S_IWUSR);
 
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("MD RAID framework");
On 3/22/23 17:00, Yu Kuai wrote:
> Hi,
>
> On 2023/03/22 15:19, Guoqing Jiang wrote:
>> [...]
>> Could you explain how the same work can be re-queued? Isn't the PENDING_BIT
>> already set in t3? I believe queue_work shouldn't do that per the comment,
>> but I am not an expert ...
>
> This is not related to workqueue, it is just because raid10 reinitializes
> the work that is already queued,

I am trying to understand the possibility.

> like I described later in t3:
>
> t2:
> md_check_recovery:
>  INIT_WORK -> clear pending
>  queue_work -> set pending
>  list_add_tail
>  ...
>
> t3: -> work is still pending
> md_check_recovery:
>  INIT_WORK -> clear pending
>  queue_work -> set pending
>  list_add_tail -> list is corrupted

First, t2 and t3 can't run in parallel since reconfig_mutex must be held.
And if sync_thread existed, the second process would unregister and reap
sync_thread, which means the second process would call INIT_WORK and
queue_work again.

Maybe your description is valid; I would prefer to call work_pending and
flush_workqueue instead of INIT_WORK and queue_work.

> Ok, cc Marc. Can you try whether this patchset fixes the problem you
> reported in the following thread?
>
> md_raid: mdX_raid6 looping after sync_action "check" to "idle" transition
>
>> And can you share the test which can trigger the re-queue issue? I'd like
>> to try with the latest mainline such as 6.3-rc3, and your test is not only
>> run against the 5.10 kernel as you described before, right?
>
> Of course, our 5.10 and mainline are the same; here are the tests:
>
> First, the deadlock can be reproduced reliably, the test script is simple:
>
> mdadm -Cv /dev/md0 -n 4 -l10 /dev/sd[abcd]

So this is raid10 while the previous problem appeared in raid456; I am not
sure it is the same issue, but let's see.

> [...]
>
> Then, the problem that MD_RECOVERY_RUNNING can be cleared can't be
> reproduced reliably; usually it takes 2+ days to trigger, and each time
> the symptoms can be different. I hacked the kernel and added some BUG_ON
> checks on MD_RECOVERY_RUNNING in the attached patch; the following test
> can trigger the BUG_ON:

Also, your debug patch obviously adds a large delay which makes the
calltrace happen; I doubt a user can hit it in real life. Anyway, I will
try the below test from my side.

> mdadm -Cv /dev/md0 -e1.0 -n 4 -l 10 /dev/sd{a..d} --run
> sleep 5
> echo 1 > /sys/module/md_mod/parameters/set_delay
> echo idle > /sys/block/md0/md/sync_action &
> sleep 5
> echo "want_replacement" > /sys/block/md0/md/dev-sdd/state
>
> test result:
>
> [  228.390237] md_check_recovery: running is set
> [  228.391376] md_check_recovery: queue new sync thread
> [  233.671041] action_store unregister success! delay 10s
> [  233.689276] md_check_recovery: running is set
> [  238.722448] md_check_recovery: running is set
> [  238.723328] md_check_recovery: queue new sync thread
> [  238.724851] md_do_sync: before new wor, sleep 10s
> [  239.725818] md_do_sync: delay done
> [  243.674828] action_store delay done
> [  243.700102] md_reap_sync_thread: running is cleared!
> [  243.748703] ------------[ cut here ]------------
> [  243.749656] kernel BUG at drivers/md/md.c:9084!

After your debug patch is applied, does L9084 point to below?

9084                 mddev->curr_resync = MaxSector;

I don't understand how it triggers the below calltrace, and it has nothing
to do with list corruption, right?

> [  243.750548] invalid opcode: 0000 [#1] PREEMPT SMP
> [...]
> [  243.771300] Call Trace:
> [  243.771555]  <TASK>
> [  243.771779]  ? kvm_clock_read+0x14/0x30
> [  243.772169]  ? kvm_sched_clock_read+0x9/0x20
> [  243.772611]  ? sched_clock_cpu+0x21/0x330
> [  243.773023]  md_thread+0x2ec/0x300
> [  243.773373]  ? md_write_start+0x420/0x420
> [  243.773845]  kthread+0x13e/0x1a0
> [  243.774210]  ? kthread_exit+0x50/0x50
> [  243.774591]  ret_from_fork+0x1f/0x30

Thanks,
Guoqing
Hi,

On 2023/03/22 22:32, Guoqing Jiang wrote:
>>> Could you explain how the same work can be re-queued? Isn't the PENDING_BIT
>>> already set in t3? I believe queue_work shouldn't do that per the comment,
>>> but I am not an expert ...
>>
>> This is not related to workqueue, it is just because raid10 reinitializes
>> the work that is already queued,
>
> I am trying to understand the possibility.
>
>> like I described later in t3:
>>
>> t2:
>> md_check_recovery:
>>  INIT_WORK -> clear pending
>>  queue_work -> set pending
>>  list_add_tail
>>  ...
>>
>> t3: -> work is still pending
>> md_check_recovery:
>>  INIT_WORK -> clear pending
>>  queue_work -> set pending
>>  list_add_tail -> list is corrupted
>
> First, t2 and t3 can't run in parallel since reconfig_mutex must be held.
> And if sync_thread existed, the second process would unregister and reap
> sync_thread, which means the second process would call INIT_WORK and
> queue_work again.
>
> Maybe your description is valid; I would prefer to call work_pending and
> flush_workqueue instead of INIT_WORK and queue_work.

This is not enough. It's right that this can avoid the list corruption,
but the worker function md_start_sync just registers a sync_thread, and
md_do_sync() can still be in progress; hence this can't prevent a new
sync_thread from starting while the old one is not done, and some other
problems like deadlock can still be triggered.

>> Of course, our 5.10 and mainline are the same; here are the tests:
>>
>> First, the deadlock can be reproduced reliably, the test script is simple:
>>
>> mdadm -Cv /dev/md0 -n 4 -l10 /dev/sd[abcd]
>
> So this is raid10 while the previous problem appeared in raid456; I am not
> sure it is the same issue, but let's see.

Ok, I'm not quite familiar with raid456 yet; however, the problem is still
related to action_store holding the mutex to unregister sync_thread, right?

>> Then, the problem that MD_RECOVERY_RUNNING can be cleared can't be
>> reproduced reliably [...]
>
> Also, your debug patch obviously adds a large delay which makes the
> calltrace happen; I doubt a user can hit it in real life. Anyway, I will
> try the below test from my side.
>
>> mdadm -Cv /dev/md0 -e1.0 -n 4 -l 10 /dev/sd{a..d} --run
>> sleep 5
>> echo 1 > /sys/module/md_mod/parameters/set_delay
>> echo idle > /sys/block/md0/md/sync_action &
>> sleep 5
>> echo "want_replacement" > /sys/block/md0/md/dev-sdd/state
>>
>> test result:
>>
>> [...]
>> [  243.700102] md_reap_sync_thread: running is cleared!
>> [  243.748703] ------------[ cut here ]------------
>> [  243.749656] kernel BUG at drivers/md/md.c:9084!
>
> After your debug patch is applied, does L9084 point to below?
>
> 9084                 mddev->curr_resync = MaxSector;

In my environment, it's a BUG_ON() that I added in md_do_sync:

9080 skip:
9081         /* set CHANGE_PENDING here since maybe another update is needed,
9082          * so other nodes are informed. It should be harmless for normal
9083          * raid */
9084         BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
9085         set_mask_bits(&mddev->sb_flags, 0,
9086                       BIT(MD_SB_CHANGE_PENDING) | BIT(MD_SB_CHANGE_DEVS));

> I don't understand how it triggers the below calltrace, and it has nothing
> to do with list corruption, right?

Yes, this is just an early BUG_ON() to detect whether MD_RECOVERY_RUNNING
is cleared while sync_thread is still in progress.

Thanks,
Kuai
On 3/23/23 09:36, Yu Kuai wrote:
> Hi,
>
> On 2023/03/22 22:32, Guoqing Jiang wrote:
>> [...]
>> Maybe your description is valid; I would prefer to call work_pending and
>> flush_workqueue instead of INIT_WORK and queue_work.
>
> This is not enough. It's right that this can avoid the list corruption,
> but the worker function md_start_sync just registers a sync_thread, and
> md_do_sync() can still be in progress; hence this can't prevent a new
> sync_thread from starting while the old one is not done, and some other
> problems like deadlock can still be triggered.
>
>> So this is raid10 while the previous problem appeared in raid456; I am not
>> sure it is the same issue, but let's see.
>
> Ok, I'm not quite familiar with raid456 yet; however, the problem is still
> related to action_store holding the mutex to unregister sync_thread, right?

Yes and no. The previous raid456 bug also existed because it can't get a
stripe while the barrier is involved, as you mentioned in patch 4, which
is different.

>> Also, your debug patch obviously adds a large delay which makes the
>> calltrace happen; I doubt a user can hit it in real life. Anyway, I will
>> try the below test from my side.
>>
>>> mdadm -Cv /dev/md0 -e1.0 -n 4 -l 10 /dev/sd{a..d} --run
>>> sleep 5
>>> echo 1 > /sys/module/md_mod/parameters/set_delay
>>> echo idle > /sys/block/md0/md/sync_action &
>>> sleep 5
>>> echo "want_replacement" > /sys/block/md0/md/dev-sdd/state

Combining your debug patch with the above steps, it seems you are:

1. adding a delay to action_store, so it can't get the lock in time.
2. echo "want_replacement" triggers md_check_recovery, which can grab the
   lock to start a sync thread.
3. action_store finally holds the lock to clear RECOVERY_RUNNING in reap
   sync thread.
4. Then the newly added BUG_ON is invoked since RECOVERY_RUNNING was
   cleared in step 3.

>>> test result:
>>>
>>> [...]
>>> [  243.748703] ------------[ cut here ]------------
>>> [  243.749656] kernel BUG at drivers/md/md.c:9084!
>>
>> After your debug patch is applied, does L9084 point to below?
>>
>> 9084                 mddev->curr_resync = MaxSector;
>
> In my environment, it's a BUG_ON() that I added in md_do_sync:

Ok, so we are on different code bases ...

> 9080 skip:
> 9081         /* set CHANGE_PENDING here since maybe another update is needed,
> 9082          * so other nodes are informed. It should be harmless for normal
> 9083          * raid */
> 9084         BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
> 9085         set_mask_bits(&mddev->sb_flags, 0,
> 9086                       BIT(MD_SB_CHANGE_PENDING) | BIT(MD_SB_CHANGE_DEVS));
>
>> I don't understand how it triggers the below calltrace, and it has nothing
>> to do with list corruption, right?
>
> Yes, this is just an early BUG_ON() to detect whether MD_RECOVERY_RUNNING
> is cleared while sync_thread is still in progress.

sync_thread can be interrupted once MD_RECOVERY_INTR is set, which means
RUNNING can be cleared, so I am not sure the added BUG_ON is reasonable.
A BUG_ON like this makes more sense to me:

+        BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
+               !test_bit(MD_RECOVERY_INTR, &mddev->recovery));

I think there might be a racy window like you described, but it should be
really small. I prefer to just add a few lines like this instead of
reverting and introducing a new lock to resolve the same issue (if it is
the same issue).

@@ -4792,9 +4793,15 @@ action_store(struct mddev *mddev, const char *page, size_t len)
                 if (mddev->sync_thread) {
                         sector_t save_rp = mddev->reshape_position;
 
+                        set_bit(MD_RECOVERY_DONOT, &mddev->recovery);

@@ -4805,6 +4812,7 @@ action_store(struct mddev *mddev, const char *page, size_t len)
                         mddev->reshape_position = save_rp;
                         set_bit(MD_RECOVERY_INTR, &mddev->recovery);
                         md_reap_sync_thread(mddev);
+                        clear_bit(MD_RECOVERY_DONOT, &mddev->recovery);
                 }
                 mddev_unlock(mddev);

@@ -9296,6 +9313,9 @@ void md_check_recovery(struct mddev *mddev)
         if (!md_is_rdwr(mddev) &&
             !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
                 return;
+        /* action_store is in the middle of reaping the sync thread, let's wait */
+        if (test_bit(MD_RECOVERY_DONOT, &mddev->recovery))
+                return;

--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -553,6 +553,7 @@ enum recovery_flags {
         MD_RECOVERY_ERROR,      /* sync-action interrupted because io-error */
         MD_RECOVERY_WAIT,       /* waiting for pers->start() to finish */
         MD_RESYNCING_REMOTE,    /* remote node is running resync thread */
+        MD_RECOVERY_DONOT,      /* for a nasty racy issue */
 };

TBH, I am reluctant to see the changes in the series; they can only be
considered acceptable with two conditions:

1. the previous raid456 bug can be fixed in this way too; hopefully Marc
   or others can verify it.
2. it passes all the tests in mdadm.

Thanks,
Guoqing
Hi,

On 2023/03/23 11:50, Guoqing Jiang wrote:

> Combining your debug patch with the above steps, it seems you are:
>
> 1. adding a delay to action_store, so it can't get the lock in time.
> 2. echo "want_replacement" triggers md_check_recovery, which can grab the
>    lock to start a sync thread.
> 3. action_store finally holds the lock to clear RECOVERY_RUNNING in reap
>    sync thread.
> 4. Then the newly added BUG_ON is invoked since RECOVERY_RUNNING was
>    cleared in step 3.

Yes, this is exactly what I did.

> sync_thread can be interrupted once MD_RECOVERY_INTR is set, which means
> RUNNING can be cleared, so I am not sure the added BUG_ON is reasonable.

I think the BUG_ON() is reasonable because only md_reap_sync_thread can
clear it; md_do_sync will exit quickly if MD_RECOVERY_INTR is set, but
md_do_sync should never see MD_RECOVERY_RUNNING cleared, otherwise there
is no guarantee that only one sync_thread can be in progress.

> A BUG_ON like this makes more sense to me:
>
> +        BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
> +               !test_bit(MD_RECOVERY_INTR, &mddev->recovery));

I think this can be reproduced likewise: md_check_recovery clears
MD_RECOVERY_INTR, and the new sync_thread triggered by echo
"want_replacement" won't set this bit.

>
> I think there might be a racy window like you described, but it should be
> really small. I prefer to just add a few lines like this instead of
> reverting and introducing a new lock to resolve the same issue (if it is
> the same issue).

The new lock that I add in this patchset just tries to synchronize idle
and frozen from action_store (patch 3); I can drop it if you think it is
not necessary.

The main change is patch 4; the new lines are not many, and I really don't
like to add new flags unless we have to. The current code is already hard
to understand...

By the way, I'm concerned that dropping the mutex to unregister
sync_thread might not be safe, since the mutex protects lots of stuff, and
there might exist other implicit dependencies.

>
> TBH, I am reluctant to see the changes in the series; they can only be
> considered acceptable with two conditions:
>
> 1. the previous raid456 bug can be fixed in this way too; hopefully Marc
>    or others can verify it.
> 2. it passes all the tests in mdadm.

I already tested this patchset with mdadm. If there is a reproducer for
the raid456 bug, I can try to verify it myself.

Thanks,
Kuai
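Kuai's objection, that flushing the workqueue only waits for md_start_sync() and not for the sync thread it spawns, can be modeled in user space in a few lines (an analogue only; the function names mirror the md code but none of this is driver code):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* The queued work item (md_start_sync) only *spawns* the sync thread
 * (md_do_sync) and returns, so waiting for the work item to finish says
 * nothing about whether the sync thread itself has finished. */
static pthread_t sync_thread;

static void *md_do_sync(void *arg)
{
        sleep(2);                       /* long-running resync */
        puts("md_do_sync: done");
        return NULL;
}

static void *md_start_sync(void *arg)   /* the "work function" */
{
        pthread_create(&sync_thread, NULL, md_do_sync, NULL);
        return NULL;                    /* returns immediately */
}

int main(void)
{
        pthread_t work;

        pthread_create(&work, NULL, md_start_sync, NULL);
        pthread_join(work, NULL);       /* ~ flush_workqueue(): work is done */
        puts("work flushed, but the sync thread is still running");
        pthread_join(sync_thread, NULL);
        return 0;
}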
On Wed, Mar 22, 2023 at 11:32 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2023/03/23 11:50, Guoqing Jiang wrote:
>
> > Combining your debug patch with the above steps, it seems you are:
> >
> > 1. adding a delay to action_store, so it can't get the lock in time.
> > 2. echo "want_replacement" triggers md_check_recovery, which can grab the
> >    lock to start a sync thread.
> > 3. action_store finally holds the lock to clear RECOVERY_RUNNING in reap
> >    sync thread.
> > 4. Then the newly added BUG_ON is invoked since RECOVERY_RUNNING was
> >    cleared in step 3.
>
> Yes, this is exactly what I did.
>
> [...]
>
> By the way, I'm concerned that dropping the mutex to unregister
> sync_thread might not be safe, since the mutex protects lots of stuff, and
> there might exist other implicit dependencies.
>
> > TBH, I am reluctant to see the changes in the series; they can only be
> > considered acceptable with two conditions:
> >
> > 1. the previous raid456 bug can be fixed in this way too; hopefully Marc
> >    or others can verify it.
> > 2. it passes all the tests in mdadm.

AFAICT, this set looks like a better solution for this problem. But I agree
that we need to make sure it fixes the original bug. The mdadm tests are
not in very good shape at the moment. I will spend more time looking into
these tests.

Thanks,
Song
Hi,

On 2023/03/29 7:58, Song Liu wrote:
> On Wed, Mar 22, 2023 at 11:32 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> [...]
>>
>> By the way, I'm concerned that dropping the mutex to unregister
>> sync_thread might not be safe, since the mutex protects lots of stuff, and
>> there might exist other implicit dependencies.
>>
>>> TBH, I am reluctant to see the changes in the series; they can only be
>>> considered acceptable with two conditions:
>>>
>>> 1. the previous raid456 bug can be fixed in this way too; hopefully Marc
>>>    or others can verify it.
>>> 2. it passes all the tests in mdadm.
>
> AFAICT, this set looks like a better solution for this problem. But I agree
> that we need to make sure it fixes the original bug. The mdadm tests are
> not in very good shape at the moment. I will spend more time looking into
> these tests.

While I'm working on another thread to protect md_thread with rcu, I found
that the reverted patch has another defect that can cause a null-ptr-
dereference in theory, where md_unregister_thread(&mddev->sync_thread) can
run concurrently with another context that accesses sync_thread, for
example:

t1: md_set_readonly                     t2: action_store
md_unregister_thread
// 'reconfig_mutex' is not held
                                        // 'reconfig_mutex' is held by caller
                                        if (mddev->sync_thread)
 thread = *threadp
 *threadp = NULL
                                         wake_up_process(mddev->sync_thread->tsk)
                                         // null-ptr-dereference

So, I think this revert makes more sense. 😉

Thanks,
Kuai
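The window described above is a classic check-then-use race. A user-space sketch (again an analogue; md_thread, unregister_path, and store_path are hypothetical stand-ins for the kernel paths, not the md code) shows how the pointer can become NULL between the check and the dereference:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Shared pointer standing in for mddev->sync_thread; nothing protects it
 * against the unregister path in this model, which is exactly the bug. */
struct md_thread { int tsk; };
static struct md_thread *sync_thread;

/* Models md_unregister_thread(&mddev->sync_thread) called without
 * reconfig_mutex: snapshot, clear, free. */
static void *unregister_path(void *arg)
{
        struct md_thread *t = sync_thread;
        sync_thread = NULL;             /* *threadp = NULL */
        free(t);
        return NULL;
}

/* Models the action_store path: check-then-use under a lock that the
 * other side does not take. */
static void *store_path(void *arg)
{
        if (sync_thread) {
                usleep(1000);           /* widen the race window */
                /* wake_up_process(mddev->sync_thread->tsk): by now the
                 * pointer may already be NULL -> crash. */
                printf("tsk = %d\n", sync_thread->tsk);
        }
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        sync_thread = calloc(1, sizeof(*sync_thread));
        pthread_create(&a, NULL, store_path, NULL);
        pthread_create(&b, NULL, unregister_path, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

Run repeatedly, this either prints the value or segfaults, depending on which thread wins the race.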
Hi, Song and Guoqing

On 2023/04/06 16:53, Yu Kuai wrote:
> Hi,
>
> On 2023/03/29 7:58, Song Liu wrote:
>> On Wed, Mar 22, 2023 at 11:32 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>
>>> [...]
>>>
>>>> TBH, I am reluctant to see the changes in the series; they can only be
>>>> considered acceptable with two conditions:
>>>>
>>>> 1. the previous raid456 bug can be fixed in this way too; hopefully Marc
>>>>    or others can verify it.

After reading the thread:

https://lore.kernel.org/linux-raid/5ed54ffc-ce82-bf66-4eff-390cb23bc1ac@molgen.mpg.de/T/#t

the deadlock in raid456 has the same conditions as raid10:

1) echo idle holds the mutex to stop the sync thread;
2) the sync thread waits for io to complete;
3) io can't be handled by the daemon thread because the sb flag is set;
4) the sb flag can't be cleared because the daemon thread can't hold the
   mutex;

I tried to reproduce the deadlock with the reproducer provided in that
thread; however, the deadlock was not reproduced after running for more
than a day. I changed the reproducer to the following:

[root@fedora raid5]# cat test_deadlock.sh
#! /bin/bash

(
        while true; do
                echo check > /sys/block/md0/md/sync_action
                sleep 0.5
                echo idle > /sys/block/md0/md/sync_action
        done
) &

echo 0 > /proc/sys/vm/dirty_background_ratio

(
        while true; do
                fio -filename=/dev/md0 -bs=4k -rw=write -numjobs=1 -name=xxx
        done
) &

And I was finally able to reproduce the deadlock with this patch reverted
(running for about an hour):

[root@fedora raid5]# ps -elf | grep " D " | grep -v grep
1 D root         156       2 16  80   0 -     0 md_wri 06:51 ?        00:19:15 [kworker/u8:11+flush-9:0]
5 D root        2239       1  2  80   0 -   992 kthrea 06:57 pts/0    00:02:15 sh test_deadlock.sh
1 D root       42791       2  0  80   0 -     0 raid5_ 07:45 ?        00:00:00 [md0_resync]
5 D root       42803   42797  0  80   0 - 92175 balanc 07:45 ?        00:00:06 fio -filename=/dev/md0 -bs=4k -rw=write -numjobs=1 -name=xxx

[root@fedora raid5]# cat /proc/2239/stack
[<0>] kthread_stop+0x96/0x2b0
[<0>] md_unregister_thread+0x5e/0xd0
[<0>] md_reap_sync_thread+0x27/0x370
[<0>] action_store+0x1fa/0x490
[<0>] md_attr_store+0xa7/0x120
[<0>] sysfs_kf_write+0x3a/0x60
[<0>] kernfs_fop_write_iter+0x144/0x2b0
[<0>] new_sync_write+0x140/0x210
[<0>] vfs_write+0x21a/0x350
[<0>] ksys_write+0x77/0x150
[<0>] __x64_sys_write+0x1d/0x30
[<0>] do_syscall_64+0x45/0x70
[<0>] entry_SYSCALL_64_after_hwframe+0x61/0xc6

[root@fedora raid5]# cat /proc/42791/stack
[<0>] raid5_get_active_stripe+0x606/0x960
[<0>] raid5_sync_request+0x508/0x570
[<0>] md_do_sync.cold+0xaa6/0xee7
[<0>] md_thread+0x266/0x280
[<0>] kthread+0x151/0x1b0
[<0>] ret_from_fork+0x1f/0x30

And with this patchset applied, I have run the above reproducer for more
than a day now, so I think the deadlock in raid456 can be fixed.

Can this patchset be considered in the next merge window? If so, I'll
rebase it.

Thanks,
Kuai

>>>> 2. it passes all the tests in mdadm.
>>
>> AFAICT, this set looks like a better solution for this problem. But I agree
>> that we need to make sure it fixes the original bug. The mdadm tests are
>> not in very good shape at the moment. I will spend more time looking into
>> these tests.
>
> While I'm working on another thread to protect md_thread with rcu, I found
> that the reverted patch has another defect that can cause a null-ptr-
> dereference in theory, where md_unregister_thread(&mddev->sync_thread) can
> run concurrently with another context that accesses sync_thread, for
> example:
>
> [...]
>
> So, I think this revert makes more sense. 😉
>
> Thanks,
> Kuai
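The four conditions above form a cycle whose shape a small pthread program can mimic (purely an analogue: main plays action_store, worker plays the sync thread, and daemon_thread plays raid5d; in the kernel raid5d only trylocks and skips the superblock write, but the effect is the same since the io never completes while the mutex is held). Note the program deadlocks by design and has to be killed:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reconfig = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t io_done = PTHREAD_COND_INITIALIZER;
static int done;

static void *worker(void *arg)          /* ~ sync thread */
{
        pthread_mutex_lock(&m);
        while (!done)                   /* ~ raid5_get_active_stripe() */
                pthread_cond_wait(&io_done, &m);
        pthread_mutex_unlock(&m);
        return NULL;
}

static void *daemon_thread(void *arg)   /* ~ raid5d/md_check_recovery */
{
        pthread_mutex_lock(&reconfig);  /* needs the mutex to write sb: blocked */
        pthread_mutex_lock(&m);
        done = 1;                       /* ~ clear MD_SB_CHANGE_PENDING */
        pthread_cond_signal(&io_done);
        pthread_mutex_unlock(&m);
        pthread_mutex_unlock(&reconfig);
        return NULL;
}

int main(void)                          /* ~ echo idle > sync_action */
{
        pthread_t w, d;

        pthread_create(&w, NULL, worker, NULL);
        pthread_mutex_lock(&reconfig);  /* ~ mddev_lock() */
        pthread_create(&d, NULL, daemon_thread, NULL);
        pthread_join(w, NULL);          /* ~ md_unregister_thread(): hangs */
        pthread_mutex_unlock(&reconfig);
        pthread_join(d, NULL);
        puts("unreachable");
        return 0;
}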
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 60632b409b80..0601edbf579f 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3729,7 +3729,6 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
         if (!strcasecmp(argv[0], "idle") || !strcasecmp(argv[0], "frozen")) {
                 if (mddev->sync_thread) {
                         set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-                        md_unregister_thread(&mddev->sync_thread);
                         md_reap_sync_thread(mddev);
                 }
         } else if (decipher_sync_action(mddev, mddev->recovery) != st_idle)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 546b1b81eb28..acf57a5156c7 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4770,19 +4770,6 @@ action_store(struct mddev *mddev, const char *page, size_t len)
                 if (work_pending(&mddev->del_work))
                         flush_workqueue(md_misc_wq);
                 if (mddev->sync_thread) {
-                        sector_t save_rp = mddev->reshape_position;
-
-                        mddev_unlock(mddev);
-                        set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-                        md_unregister_thread(&mddev->sync_thread);
-                        mddev_lock_nointr(mddev);
-                        /*
-                         * set RECOVERY_INTR again and restore reshape
-                         * position in case others changed them after
-                         * got lock, eg, reshape_position_store and
-                         * md_check_recovery.
-                         */
-                        mddev->reshape_position = save_rp;
                         set_bit(MD_RECOVERY_INTR, &mddev->recovery);
                         md_reap_sync_thread(mddev);
                 }
@@ -6173,7 +6160,6 @@ static void __md_stop_writes(struct mddev *mddev)
                 flush_workqueue(md_misc_wq);
         if (mddev->sync_thread) {
                 set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-                md_unregister_thread(&mddev->sync_thread);
                 md_reap_sync_thread(mddev);
         }
@@ -9315,7 +9301,6 @@ void md_check_recovery(struct mddev *mddev)
                          * ->spare_active and clear saved_raid_disk
                          */
                         set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-                        md_unregister_thread(&mddev->sync_thread);
                         md_reap_sync_thread(mddev);
                         clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
                         clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
@@ -9351,7 +9336,6 @@ void md_check_recovery(struct mddev *mddev)
                         goto unlock;
                 }
                 if (mddev->sync_thread) {
-                        md_unregister_thread(&mddev->sync_thread);
                         md_reap_sync_thread(mddev);
                         goto unlock;
                 }
@@ -9431,7 +9415,8 @@ void md_reap_sync_thread(struct mddev *mddev)
         sector_t old_dev_sectors = mddev->dev_sectors;
         bool is_reshaped = false;
 
-        /* sync_thread should be unregistered, collect result */
+        /* resync has finished, collect result */
+        md_unregister_thread(&mddev->sync_thread);
         if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
             !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
             mddev->degraded != mddev->raid_disks) {