Message ID | 20240120103734.4155446-1-yukuai1@huaweicloud.com |
---|---
Headers |
From: Yu Kuai <yukuai1@huaweicloud.com>
To: mpatocka@redhat.com, dm-devel@lists.linux.dev, msnitzer@redhat.com, heinzm@redhat.com, song@kernel.org, yukuai3@huawei.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH 0/5] md: fix/prevent dm-raid regressions
Date: Sat, 20 Jan 2024 18:37:29 +0800
Message-Id: <20240120103734.4155446-1-yukuai1@huaweicloud.com>
Series |
md: fix/prevent dm-raid regressions
Message
Yu Kuai
Jan. 20, 2024, 10:37 a.m. UTC
From: Yu Kuai <yukuai3@huawei.com>
There are some problems that we fixed in md/raid recently, and some APIs
changed along the way. However, dm-raid still relies on the old APIs
(note that the old APIs are problematic in corner cases), and there are
now regressions in the lvm2 test suite.

This patchset fixes some regressions (patches 1-3) and reverts changes
to prevent further regressions (patches 4-5). Note that the problems
behind patches 4-5 are not understood yet, and I am not able to locate
the root cause quickly, hence the decision to revert those changes first
to prevent regressions.
Yu Kuai (5):
md: don't ignore suspended array in md_check_recovery()
md: don't ignore read-only array in md_check_recovery()
md: make sure md_do_sync() will set MD_RECOVERY_DONE
md: revert commit fa2bbff7b0b4 ("md: synchronize flush io with array
reconfiguration") for dm-raid
md: use md_reap_sync_thread() directly for dm-raid
drivers/md/md.c | 58 ++++++++++++++++++++++++++++++-------------------
1 file changed, 36 insertions(+), 22 deletions(-)
Comments
On Sat, Jan 20, 2024 at 2:41 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> There are some problems that we fixed in md/raid, and some apis is changed.
> However, dm-raid rely the old apis(noted that old apis is problematic in
> corner cases), and now there are regressions in lvm2 testsuite.
>
> This patchset fix some regressions(patch 1-3), and revert changes to
> prevent regressions(patch 4,5). Noted that the problems in patch 4,5 is
> not clear yet, and I'm not able to locate the root cause ASAP, hence I
> decide to revert changes to prevent regressions first.

Thanks for looking into this!

Patch 1-3 look good to me. But since we need to back port these fixes
to 6.7 kernels, let's make it very clear what issues are being fixed.
Please:

1. Test on both Linus' master branch and 6.7.y, and explain which tests
   are failing before the fixes. (From my tests, the two branches don't
   have the same test results.) We can put these results in the cover
   letter and include them in a merge commit.
2. If possible, add Fixes tag to all patches.
3. Add more details in the commit log, so it is clear what is being fixed.
4. Add "reported-by" and maybe also "closes" tag.

For patch 4-5, especially 5, I wonder whether the same issue also
happens with md. We can probably ship 4-5 now, with the same
improvements as patch 1-3.

I will run more tests on my side.

Mikulas, please also review and test these patches.

Thanks,
Song
Hi,

On 2024/01/21 12:41, Song Liu wrote:
> Patch 1-3 look good to me. But since we need to back port these fixes
> to 6.7 kernels, let's make it very clear what issues are being fixed.
> Please:
> 1. Test on both Linus' master branch and 6.7.y, and explain which tests
> are failing before the fixes. (From my tests, the two branches don't have
> the same test results). We can put these results in the cover letter and
> include them in a merge commit.
> 2. If possible, add Fixes tag to all patches.
> 3. Add more details in the commit log, so it is clear what is being fixed.
> 4. Add "reported-by" and maybe also "closes" tag.

Will do this in the next version. I verified that the following tests
pass now in my VM:

shell/integrity-caching.sh
shell/lvconvert-raid-reshape.sh

> For patch 4-5, especially 5, I wonder whether the same issue also
> happens with md. We can probably ship 4-5 now, with the same
> improvements as patch 1-3.

With patch 1-3, the test lvconvert-raid-reshape.sh won't hang anymore;
however, it still fails and complains that ext4 is corrupted, and I'm
still trying to understand how reshape works in dm-raid. :(

> I will run more tests on my side.

Note that the problem Mikulas mentioned in the patch
md: partially revert "md/raid6: use valid sector values to determine if
an I/O should wait on the reshape"
still exists. And again, I'm still trying to understand how raid5 works
in detail.

> Mikulas, please also review and test these patches.
>
> Thanks,
> Song
Hi,

On 2024/01/21 12:41, Song Liu wrote:
> Patch 1-3 look good to me. But since we need to back port these fixes
> to 6.7 kernels, let's make it very clear what issues are being fixed.
> Please:

I'm attaching my test results here before I send the next version.

The tested patches add the following change to patch 5:

@@ -9379,6 +9387,15 @@ static void md_start_sync(struct work_struct *ws)
 	suspend ? mddev_suspend_and_lock_nointr(mddev) :
 		  mddev_lock_nointr(mddev);

+	if (!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
+		/*
+		 * dm-raid calls md_reap_sync_thread() directly to unregister
+		 * sync_thread, and md/raid should never trigger this.
+		 */
+		WARN_ON_ONCE(mddev->gendisk);
+		goto not_running;
+	}
+
 	if (!md_is_rdwr(mddev)) {

Failed tests for v6.6:
### failed: [ndev-vanilla] shell/duplicate-vgid.sh
### failed: [ndev-vanilla] shell/fsadm-crypt.sh
### failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
### failed: [ndev-vanilla] shell/lvconvert-cache-abort.sh
### failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
### failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
### failed: [ndev-vanilla] shell/lvextend-raid.sh
### failed: [ndev-vanilla] shell/select-report.sh

Failed tests for next-20240117 (latest linux-next, between v6.7 and v6.8-rc1):
### failed: [ndev-vanilla] shell/duplicate-vgid.sh
### failed: [ndev-vanilla] shell/fsadm-crypt.sh
### failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
### failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
### failed: [ndev-vanilla] shell/lvextend-raid.sh
### failed: [ndev-vanilla] shell/select-report.sh

Please note that the test lvconvert-raid-reshape.sh can still fail due
to commit c467e97f079f ("md/raid6: use valid sector values to determine
if an I/O should wait on the reshape").

Thanks,
Kuai
On Mon, Jan 22, 2024 at 12:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> I'm attaching my test result here, before I send the next version.
>
> [...]
>
> Please noted that the test lvconvert-raid-reshape.sh is still possible
> to fail due to commit c467e97f079f ("md/raid6: use valid sector values
> to determine if an I/O should wait on the reshape").

Thanks for the information! I will look closer into the raid6 issue.

Song