From patchwork Wed Apr 26 08:20:30 2023
From: Yu Kuai
To: song@kernel.org, akpm@osdl.org, neilb@suse.de
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next v2 6/7] md/raid1-10: don't handle plugged bio by daemon thread
Date: Wed, 26 Apr 2023 16:20:30 +0800
Message-Id: <20230426082031.1299149-7-yukuai1@huaweicloud.com>
In-Reply-To: <20230426082031.1299149-1-yukuai1@huaweicloud.com>
References: <20230426082031.1299149-1-yukuai1@huaweicloud.com>

From: Yu Kuai

current->bio_list will be set under the submit_bio() context. In this
case, bitmap io will be added to the list and will wait for the current
io submission to finish, while the current io submission must wait for
the bitmap io to be done.
Commit 874807a83139 ("md/raid1{,0}: fix deadlock in bitmap_unplug.")
fixed this deadlock by handling plugged bios in the daemon thread.

On the one hand, the deadlock no longer exists after commit a214b949d8e3
("blk-mq: only flush requests from the plug in blk_mq_submit_bio"). On
the other hand, the current solution makes it impossible to flush
plugged bios in raid1/10_make_request(), because it causes all writes to
go through the daemon thread.

In order to limit the number of plugged bios, revert commit 874807a83139
("md/raid1{,0}: fix deadlock in bitmap_unplug.") and fix the deadlock by
handling bitmap io asynchronously.

Signed-off-by: Yu Kuai
---
 drivers/md/raid1-10.c | 14 ++++++++++++++
 drivers/md/raid1.c    |  4 ++--
 drivers/md/raid10.c   |  8 +++-----
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
index 4167bda139b1..98d678b7df3f 100644
--- a/drivers/md/raid1-10.c
+++ b/drivers/md/raid1-10.c
@@ -150,3 +150,17 @@ static inline bool md_add_bio_to_plug(struct mddev *mddev, struct bio *bio,

 	return true;
 }
+
+/*
+ * current->bio_list will be set under submit_bio() context, in this case bitmap
+ * io will be added to the list and wait for current io submission to finish,
+ * while current io submission must wait for bitmap io to be done. In order to
+ * avoid such deadlock, submit bitmap io asynchronously.
+ */
+static inline void md_prepare_flush_writes(struct bitmap *bitmap)
+{
+	if (current->bio_list)
+		md_bitmap_unplug_async(bitmap);
+	else
+		md_bitmap_unplug(bitmap);
+}
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index d79525d6ec8f..639e09cecf01 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -794,7 +794,7 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
 static void flush_bio_list(struct r1conf *conf, struct bio *bio)
 {
 	/* flush any pending bitmap writes to disk before proceeding w/ I/O */
-	md_bitmap_unplug(conf->mddev->bitmap);
+	md_prepare_flush_writes(conf->mddev->bitmap);
 	wake_up(&conf->wait_barrier);

 	while (bio) { /* submit pending writes */
@@ -1166,7 +1166,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	struct r1conf *conf = mddev->private;
 	struct bio *bio;

-	if (from_schedule || current->bio_list) {
+	if (from_schedule) {
 		spin_lock_irq(&conf->device_lock);
 		bio_list_merge(&conf->pending_bio_list, &plug->pending);
 		spin_unlock_irq(&conf->device_lock);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 626cb8ed2099..bd9e655ca408 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -902,9 +902,7 @@ static void flush_pending_writes(struct r10conf *conf)
 		__set_current_state(TASK_RUNNING);

 		blk_start_plug(&plug);
-		/* flush any pending bitmap writes to disk
-		 * before proceeding w/ I/O */
-		md_bitmap_unplug(conf->mddev->bitmap);
+		md_prepare_flush_writes(conf->mddev->bitmap);
 		wake_up(&conf->wait_barrier);

 		while (bio) { /* submit pending writes */
@@ -1108,7 +1106,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	struct r10conf *conf = mddev->private;
 	struct bio *bio;

-	if (from_schedule || current->bio_list) {
+	if (from_schedule) {
 		spin_lock_irq(&conf->device_lock);
 		bio_list_merge(&conf->pending_bio_list, &plug->pending);
 		spin_unlock_irq(&conf->device_lock);
@@ -1120,7 +1118,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)

 	/* we aren't scheduling, so we can do the write-out directly. */
 	bio = bio_list_get(&plug->pending);
-	md_bitmap_unplug(mddev->bitmap);
+	md_prepare_flush_writes(mddev->bitmap);
 	wake_up(&conf->wait_barrier);
 	while (bio) { /* submit pending writes */