From patchwork Mon Oct 24 11:31:55 2022
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 9975
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Logan Gunthorpe,
 Song Liu, Sasha Levin
Subject: [PATCH 4.14 198/210] md/raid5: Wait for MD_SB_CHANGE_PENDING in raid5d
Date: Mon, 24 Oct 2022 13:31:55 +0200
Message-Id: <20221024113003.395495277@linuxfoundation.org>
In-Reply-To: <20221024112956.797777597@linuxfoundation.org>
References: <20221024112956.797777597@linuxfoundation.org>
User-Agent: quilt/0.67

From: Logan Gunthorpe

[ Upstream commit 5e2cf333b7bd5d3e62595a44d598a254c697cd74 ]

A complicated deadlock exists when using the journal and an elevated
group_thread_cnt. It was found with loop devices, but it's not clear
whether it can be seen with real disks. The deadlock can occur simply
by writing data with an fio script.
When the deadlock occurs, multiple threads will hang in different ways:

1) The group threads will hang in the blk-wbt code with bios waiting
   to be submitted to the block layer:

        io_schedule+0x70/0xb0
        rq_qos_wait+0x153/0x210
        wbt_wait+0x115/0x1b0
        __rq_qos_throttle+0x38/0x60
        blk_mq_submit_bio+0x589/0xcd0
        __submit_bio+0xe6/0x100
        submit_bio_noacct_nocheck+0x42e/0x470
        submit_bio_noacct+0x4c2/0xbb0
        ops_run_io+0x46b/0x1a30
        handle_stripe+0xcd3/0x36b0
        handle_active_stripes.constprop.0+0x6f6/0xa60
        raid5_do_work+0x177/0x330

   Or:

        io_schedule+0x70/0xb0
        rq_qos_wait+0x153/0x210
        wbt_wait+0x115/0x1b0
        __rq_qos_throttle+0x38/0x60
        blk_mq_submit_bio+0x589/0xcd0
        __submit_bio+0xe6/0x100
        submit_bio_noacct_nocheck+0x42e/0x470
        submit_bio_noacct+0x4c2/0xbb0
        flush_deferred_bios+0x136/0x170
        raid5_do_work+0x262/0x330

2) The r5l_reclaim thread will hang in the same way, submitting a bio
   to the block layer:

        io_schedule+0x70/0xb0
        rq_qos_wait+0x153/0x210
        wbt_wait+0x115/0x1b0
        __rq_qos_throttle+0x38/0x60
        blk_mq_submit_bio+0x589/0xcd0
        __submit_bio+0xe6/0x100
        submit_bio_noacct_nocheck+0x42e/0x470
        submit_bio_noacct+0x4c2/0xbb0
        submit_bio+0x3f/0xf0
        md_super_write+0x12f/0x1b0
        md_update_sb.part.0+0x7c6/0xff0
        md_update_sb+0x30/0x60
        r5l_do_reclaim+0x4f9/0x5e0
        r5l_reclaim_thread+0x69/0x30b

   However, before hanging, the MD_SB_CHANGE_PENDING flag will have
   been set for sb_flags in r5l_write_super_and_discard_space(). This
   flag will never be cleared because the submit_bio() call never
   returns.

3) Because MD_SB_CHANGE_PENDING is set, handle_stripe() will do no
   processing on any pending stripes and will re-set STRIPE_HANDLE.
   This causes the raid5d thread to enter an infinite loop, constantly
   trying to handle the same stripes stuck in the queue.

   The raid5d thread has a blk_plug that holds a number of bios that
   are also stuck waiting, because the thread is in a loop that never
   schedules. These bios have been accounted for by blk-wbt, which
   prevents the other threads above from continuing when they try to
   submit bios. --Deadlock.

To fix this, add the same wait_event() that is used in raid5_do_work()
to raid5d() so that, if MD_SB_CHANGE_PENDING is set, the thread will
schedule and wait until the flag is cleared. The schedule action will
flush the plug, which will allow the r5l_reclaim thread to continue and
thus prevent the deadlock.

However, md_check_recovery() calls can also clear MD_SB_CHANGE_PENDING
from the same thread and can thus deadlock if the thread is put to
sleep. So avoid waiting if md_check_recovery() is being called in the
loop.
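
For reference, wait_event_lock_irq() releases the lock passed to it
(here conf->device_lock) while the thread sleeps and re-takes it before
returning, and going to sleep via schedule() is what flushes the
thread's blk_plug. A minimal userspace sketch of that wait pattern
follows, with pthreads as a stand-in: the mutex plays the role of
conf->device_lock, the condition variable that of the mddev->sb_wait
waitqueue, and a plain flag that of the MD_SB_CHANGE_PENDING bit. None
of these names are kernel API; this is an illustration only.

/*
 * Illustrative userspace analogue of the wait_event_lock_irq()
 * pattern added to raid5d() below.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t sb_wait = PTHREAD_COND_INITIALIZER;
static bool sb_change_pending = true;

/*
 * Analogue of the wait added to raid5d(): called with device_lock
 * held; the lock is dropped while sleeping and held again on return,
 * just as wait_event_lock_irq() drops and re-takes the spinlock.
 */
static void wait_sb_change_clear(void)
{
	while (sb_change_pending)
		pthread_cond_wait(&sb_wait, &device_lock);
}

/*
 * Analogue of the superblock write completing: md_update_sb() clears
 * MD_SB_CHANGE_PENDING and wakes mddev->sb_wait.
 */
static void *reclaim_thread(void *arg)
{
	sleep(1);	/* pretend the superblock write takes a while */
	pthread_mutex_lock(&device_lock);
	sb_change_pending = false;
	pthread_mutex_unlock(&device_lock);
	pthread_cond_broadcast(&sb_wait);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, reclaim_thread, NULL);

	pthread_mutex_lock(&device_lock);
	wait_sb_change_clear();	/* sleeps instead of busy-looping */
	pthread_mutex_unlock(&device_lock);

	pthread_join(t, NULL);
	printf("pending flag cleared; raid5d analogue resumes\n");
	return 0;
}

The continue added before the wait in the patch covers the case this
sketch leaves out: when using mdmon, raid5d() itself clears the flag
via md_check_recovery(), so it must not sleep on a condition that only
it can satisfy.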

It's not clear when the deadlock was introduced, but the similar
wait_event() call in raid5_do_work() was added in 2017 by commit
16d997b78b15 ("md/raid5: simplfy delaying of writes while metadata
is updated.").

Link: https://lore.kernel.org/r/7f3b87b6-b52a-f737-51d7-a4eec5c44112@deltatee.com
Signed-off-by: Logan Gunthorpe
Signed-off-by: Song Liu
Signed-off-by: Sasha Levin
---
 drivers/md/raid5.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 78b48dca3fda..dc053a43a3dc 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -44,6 +44,7 @@
  */
 
 #include <linux/blkdev.h>
+#include <linux/delay.h>
 #include <linux/kthread.h>
 #include <linux/raid/pq.h>
 #include <linux/async_tx.h>
@@ -6308,7 +6309,18 @@ static void raid5d(struct md_thread *thread)
 			spin_unlock_irq(&conf->device_lock);
 			md_check_recovery(mddev);
 			spin_lock_irq(&conf->device_lock);
+
+			/*
+			 * Waiting on MD_SB_CHANGE_PENDING below may deadlock
+			 * seeing md_check_recovery() is needed to clear
+			 * the flag when using mdmon.
+			 */
+			continue;
 		}
+
+		wait_event_lock_irq(mddev->sb_wait,
+			!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags),
+			conf->device_lock);
 	}
 	pr_debug("%d stripes handled\n", handled);