From patchwork Thu Apr 20 11:29:39 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 85873
From: Yu Kuai
To: song@kernel.org, neilb@suse.de, akpm@osdl.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 1/8] md/raid10: prevent soft lockup while flush writes
Date: Thu, 20 Apr 2023 19:29:39 +0800
Message-Id: <20230420112946.2869956-2-yukuai1@huaweicloud.com>
In-Reply-To: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
References: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
From: Yu Kuai

Currently there is no limit on raid1/raid10 plugged bios. While flushing
writes, raid1 calls cond_resched() but raid10 doesn't, so too many writes
can cause a soft lockup.

The following soft lockup can be triggered easily with a writeback test
for raid10 on ramdisks:

 watchdog: BUG: soft lockup - CPU#10 stuck for 27s! [md0_raid10:1293]
 Call Trace:
  call_rcu+0x16/0x20
  put_object+0x41/0x80
  __delete_object+0x50/0x90
  delete_object_full+0x2b/0x40
  kmemleak_free+0x46/0xa0
  slab_free_freelist_hook.constprop.0+0xed/0x1a0
  kmem_cache_free+0xfd/0x300
  mempool_free_slab+0x1f/0x30
  mempool_free+0x3a/0x100
  bio_free+0x59/0x80
  bio_put+0xcf/0x2c0
  free_r10bio+0xbf/0xf0
  raid_end_bio_io+0x78/0xb0
  one_write_done+0x8a/0xa0
  raid10_end_write_request+0x1b4/0x430
  bio_endio+0x175/0x320
  brd_submit_bio+0x3b9/0x9b7 [brd]
  __submit_bio+0x69/0xe0
  submit_bio_noacct_nocheck+0x1e6/0x5a0
  submit_bio_noacct+0x38c/0x7e0
  flush_pending_writes+0xf0/0x240
  raid10d+0xac/0x1ed0

This patch fixes the problem by adding cond_resched() to raid10, like
raid1 already does.

Note that unlimited plugged bios still need to be optimized: when lots of
dirty pages are written back, they consume a lot of memory and io latency
is quite bad.

Signed-off-by: Yu Kuai
---
 drivers/md/raid10.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 6590aa49598c..a116b7c9d9f3 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -921,6 +921,7 @@ static void flush_pending_writes(struct r10conf *conf)
 			else
 				submit_bio_noacct(bio);
 			bio = next;
+			cond_resched();
 		}
 		blk_finish_plug(&plug);
 	} else
@@ -1140,6 +1141,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
 		else
 			submit_bio_noacct(bio);
 		bio = next;
+		cond_resched();
 	}
 	kfree(plug);
 }
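
The fix follows the usual kernel pattern for long-running submission loops:
yield the CPU once per iteration so the scheduler can run other tasks. The
sketch below shows that pattern in isolation; flush_all_pending() and its
surroundings are invented for illustration and are not the driver code.

/*
 * Illustrative sketch (not the actual raid10 code): a kernel thread that
 * drains an unbounded bio list must give the scheduler a chance to run,
 * otherwise the watchdog reports a soft lockup after tens of seconds.
 */
static void flush_all_pending(struct bio *bio)
{
	while (bio) {
		struct bio *next = bio->bi_next;	/* save before submit */

		bio->bi_next = NULL;
		submit_bio_noacct(bio);		/* may complete inline on ramdisk */
		bio = next;
		cond_resched();			/* yield if a reschedule is pending */
	}
}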

From patchwork Thu Apr 20 11:29:40 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 85876
From: Yu Kuai
To: song@kernel.org, neilb@suse.de, akpm@osdl.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 2/8] md/raid1-10: rename raid1-10.c to raid1-10.h
Date: Thu, 20 Apr 2023 19:29:40 +0800
Message-Id: <20230420112946.2869956-3-yukuai1@huaweicloud.com>
In-Reply-To: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
References: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
From: Yu Kuai

raid1-10.c contains definitions that are used by both raid1 and raid10;
it's weird to use a ".c" suffix for such a file, so rename it to raid1-10.h.

Signed-off-by: Yu Kuai
---
 drivers/md/{raid1-10.c => raid1-10.h} | 10 +++++++---
 drivers/md/raid1.c                    |  2 --
 drivers/md/raid1.h                    |  2 ++
 drivers/md/raid10.c                   |  2 --
 drivers/md/raid10.h                   |  2 ++
 5 files changed, 11 insertions(+), 7 deletions(-)
 rename drivers/md/{raid1-10.c => raid1-10.h} (92%)

diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.h
similarity index 92%
rename from drivers/md/raid1-10.c
rename to drivers/md/raid1-10.h
index e61f6cad4e08..04beef35142d 100644
--- a/drivers/md/raid1-10.c
+++ b/drivers/md/raid1-10.h
@@ -1,4 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
+#ifndef _RAID1_10_H
+#define _RAID1_10_H
+
 /* Maximum size of each resync request */
 #define RESYNC_BLOCK_SIZE (64*1024)
 #define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE)
@@ -33,7 +36,7 @@ struct raid1_plug_cb {
 	struct bio_list pending;
 };

-static void rbio_pool_free(void *rbio, void *data)
+static inline void rbio_pool_free(void *rbio, void *data)
 {
 	kfree(rbio);
 }
@@ -91,8 +94,8 @@ static inline struct resync_pages *get_resync_pages(struct bio *bio)
 }

 /* generally called after bio_reset() for reseting bvec */
-static void md_bio_reset_resync_pages(struct bio *bio, struct resync_pages *rp,
-				      int size)
+static inline void md_bio_reset_resync_pages(struct bio *bio,
+					     struct resync_pages *rp, int size)
 {
 	int idx = 0;

@@ -109,3 +112,4 @@ static void md_bio_reset_resync_pages(struct bio *bio, struct resync_pages *rp,
 		size -= len;
 	} while (idx++ < RESYNC_PAGES && size > 0);
 }
+#endif
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 2f1011ffdf09..84724b9b20b8 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -49,8 +49,6 @@ static void lower_barrier(struct r1conf *conf, sector_t sector_nr);
 #define raid1_log(md, fmt, args...)				\
 	do { if ((md)->queue) blk_add_trace_msg((md)->queue, "raid1 " fmt, ##args); } while (0)

-#include "raid1-10.c"
-
 #define START(node) ((node)->start)
 #define LAST(node) ((node)->last)
 INTERVAL_TREE_DEFINE(struct serial_info, node, sector_t, _subtree_last,
diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
index 468f189da7a0..80de4d66f010 100644
--- a/drivers/md/raid1.h
+++ b/drivers/md/raid1.h
@@ -2,6 +2,8 @@
 #ifndef _RAID1_H
 #define _RAID1_H

+#include "raid1-10.h"
+
 /*
  * each barrier unit size is 64MB fow now
  * note: it must be larger than RESYNC_DEPTH
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a116b7c9d9f3..50d56b6af42f 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -77,8 +77,6 @@ static void end_reshape(struct r10conf *conf);
 #define raid10_log(md, fmt, args...)				\
 	do { if ((md)->queue) blk_add_trace_msg((md)->queue, "raid10 " fmt, ##args); } while (0)

-#include "raid1-10.c"
-
 #define NULL_CMD
 #define cmd_before(conf, cmd) \
 	do { \
diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
index 63e48b11b552..63e88dd774f7 100644
--- a/drivers/md/raid10.h
+++ b/drivers/md/raid10.h
@@ -2,6 +2,8 @@
 #ifndef _RAID10_H
 #define _RAID10_H

+#include "raid1-10.h"
+
 /* Note: raid10_info.rdev can be set to NULL asynchronously by
  * raid10_remove_disk.
  * There are three safe ways to access raid10_info.rdev.
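
For readers who wonder why the rename also turns the helpers into
'static inline': a header included from both raid1 and raid10 must not emit
duplicate symbols at link time. The minimal sketch below, with invented
names, shows the layout the new raid1-10.h follows (include guard plus
static inline definitions), not the actual kernel header.

/* shared-helpers.h -- illustrative sketch only */
#ifndef _SHARED_HELPERS_H
#define _SHARED_HELPERS_H

/*
 * 'static inline' gives each including translation unit its own copy
 * (or none, if unused), so two .c files can include this header without
 * the linker seeing duplicate definitions.
 */
static inline int shared_min(int a, int b)
{
	return a < b ? a : b;
}

#endif /* _SHARED_HELPERS_H */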

From patchwork Thu Apr 20 11:29:41 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 85874
From: Yu Kuai
To: song@kernel.org, neilb@suse.de, akpm@osdl.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 3/8] md/raid1-10: factor out a helper to add bio to plug
Date: Thu, 20 Apr 2023 19:29:41 +0800
Message-Id: <20230420112946.2869956-4-yukuai1@huaweicloud.com>
In-Reply-To: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
References: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
From: Yu Kuai

The code in raid1 and raid10 is identical; factor it out in preparation
for limiting the number of plugged bios.

Signed-off-by: Yu Kuai
---
 drivers/md/raid1-10.h | 16 ++++++++++++++++
 drivers/md/raid1.c    | 12 +-----------
 drivers/md/raid10.c   | 11 +----------
 3 files changed, 18 insertions(+), 21 deletions(-)

diff --git a/drivers/md/raid1-10.h b/drivers/md/raid1-10.h
index 04beef35142d..664646a3591a 100644
--- a/drivers/md/raid1-10.h
+++ b/drivers/md/raid1-10.h
@@ -112,4 +112,20 @@ static inline void md_bio_reset_resync_pages(struct bio *bio,
 		size -= len;
 	} while (idx++ < RESYNC_PAGES && size > 0);
 }
+
+static inline bool md_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
+				      blk_plug_cb_fn unplug)
+{
+	struct raid1_plug_cb *plug = NULL;
+	struct blk_plug_cb *cb = blk_check_plugged(unplug, mddev,
+						   sizeof(*plug));
+
+	if (!cb)
+		return false;
+
+	plug = container_of(cb, struct raid1_plug_cb, cb);
+	bio_list_add(&plug->pending, bio);
+
+	return true;
+}
 #endif
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 84724b9b20b8..44c8d113621f 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1341,8 +1341,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 	struct bitmap *bitmap = mddev->bitmap;
 	unsigned long flags;
 	struct md_rdev *blocked_rdev;
-	struct blk_plug_cb *cb;
-	struct raid1_plug_cb *plug = NULL;
 	int first_clone;
 	int max_sectors;
 	bool write_behind = false;
@@ -1571,15 +1569,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 					      r1_bio->sector);
 		/* flush_pending_writes() needs access to the rdev so...*/
 		mbio->bi_bdev = (void *)rdev;
-
-		cb = blk_check_plugged(raid1_unplug, mddev, sizeof(*plug));
-		if (cb)
-			plug = container_of(cb, struct raid1_plug_cb, cb);
-		else
-			plug = NULL;
-		if (plug) {
-			bio_list_add(&plug->pending, mbio);
-		} else {
+		if (!md_add_bio_to_plug(mddev, mbio, raid1_unplug)) {
 			spin_lock_irqsave(&conf->device_lock, flags);
 			bio_list_add(&conf->pending_bio_list, mbio);
 			spin_unlock_irqrestore(&conf->device_lock, flags);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 50d56b6af42f..d67c5672933c 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1279,8 +1279,6 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 	const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
 	const blk_opf_t do_fua = bio->bi_opf & REQ_FUA;
 	unsigned long flags;
-	struct blk_plug_cb *cb;
-	struct raid1_plug_cb *plug = NULL;
 	struct r10conf *conf = mddev->private;
 	struct md_rdev *rdev;
 	int devnum = r10_bio->devs[n_copy].devnum;
@@ -1320,14 +1318,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,

 	atomic_inc(&r10_bio->remaining);

-	cb = blk_check_plugged(raid10_unplug, mddev, sizeof(*plug));
-	if (cb)
-		plug = container_of(cb, struct raid1_plug_cb, cb);
-	else
-		plug = NULL;
-	if (plug) {
-		bio_list_add(&plug->pending, mbio);
-	} else {
+	if (!md_add_bio_to_plug(mddev, mbio, raid10_unplug)) {
 		spin_lock_irqsave(&conf->device_lock, flags);
 		bio_list_add(&conf->pending_bio_list, mbio);
 		spin_unlock_irqrestore(&conf->device_lock, flags);
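
The new helper is built on the common kernel idiom of embedding the generic
struct blk_plug_cb inside a driver-private structure and recovering the
outer object with container_of(). A generic sketch of that idiom follows;
the names my_plug_ctx and my_use_plug are invented for illustration, while
blk_check_plugged(), container_of() and bio_list_add() are the real APIs
the helper uses.

/* Illustrative sketch of the embedded-callback pattern, not driver code. */
struct my_plug_ctx {
	struct blk_plug_cb cb;		/* embedded generic callback object */
	struct bio_list pending;
};

static void my_use_plug(struct mddev *mddev, struct bio *bio,
			blk_plug_cb_fn unplug)
{
	/* blk_check_plugged() allocates sizeof(struct my_plug_ctx) ... */
	struct blk_plug_cb *cb = blk_check_plugged(unplug, mddev,
						   sizeof(struct my_plug_ctx));

	if (cb) {
		/* ... so the surrounding private context can be recovered. */
		struct my_plug_ctx *ctx = container_of(cb, struct my_plug_ctx, cb);

		bio_list_add(&ctx->pending, bio);
	}
}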

From patchwork Thu Apr 20 11:29:42 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 85877
From: Yu Kuai
To: song@kernel.org, neilb@suse.de, akpm@osdl.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 4/8] md/raid1-10: factor out a helper to submit normal write
Date: Thu, 20 Apr 2023 19:29:42 +0800
Message-Id: <20230420112946.2869956-5-yukuai1@huaweicloud.com>
In-Reply-To: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
References: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
From: Yu Kuai

There are multiple places that do the same thing; factor out a helper to
prevent redundant code. The helper will also be used in a following patch.
Signed-off-by: Yu Kuai
---
 drivers/md/raid1-10.h | 17 +++++++++++++++++
 drivers/md/raid1.c    | 13 ++-----------
 drivers/md/raid10.c   | 26 ++++----------------------
 3 files changed, 23 insertions(+), 33 deletions(-)

diff --git a/drivers/md/raid1-10.h b/drivers/md/raid1-10.h
index 664646a3591a..9dc53d8a8129 100644
--- a/drivers/md/raid1-10.h
+++ b/drivers/md/raid1-10.h
@@ -113,6 +113,22 @@ static inline void md_bio_reset_resync_pages(struct bio *bio,
 	} while (idx++ < RESYNC_PAGES && size > 0);
 }

+static inline void md_submit_write(struct bio *bio)
+{
+	struct md_rdev *rdev = (struct md_rdev *)bio->bi_bdev;
+
+	bio->bi_next = NULL;
+	bio_set_dev(bio, rdev->bdev);
+	if (test_bit(Faulty, &rdev->flags))
+		bio_io_error(bio);
+	else if (unlikely(bio_op(bio) == REQ_OP_DISCARD &&
+			  !bdev_max_discard_sectors(bio->bi_bdev)))
+		/* Just ignore it */
+		bio_endio(bio);
+	else
+		submit_bio_noacct(bio);
+}
+
 static inline bool md_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
 				      blk_plug_cb_fn unplug)
 {
@@ -128,4 +144,5 @@ static inline bool md_add_bio_to_plug(struct mddev *mddev, struct bio *bio,

 	return true;
 }
+
 #endif
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 44c8d113621f..c068ed3e6c96 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -797,17 +797,8 @@ static void flush_bio_list(struct r1conf *conf, struct bio *bio)
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
-		struct md_rdev *rdev = (void *)bio->bi_bdev;
-		bio->bi_next = NULL;
-		bio_set_dev(bio, rdev->bdev);
-		if (test_bit(Faulty, &rdev->flags)) {
-			bio_io_error(bio);
-		} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
-				    !bdev_max_discard_sectors(bio->bi_bdev)))
-			/* Just ignore it */
-			bio_endio(bio);
-		else
-			submit_bio_noacct(bio);
+
+		md_submit_write(bio);
 		bio = next;
 		cond_resched();
 	}
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index d67c5672933c..fd625026c97b 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -907,17 +907,8 @@ static void flush_pending_writes(struct r10conf *conf)
 		while (bio) { /* submit pending writes */
 			struct bio *next = bio->bi_next;
-			struct md_rdev *rdev = (void*)bio->bi_bdev;
-			bio->bi_next = NULL;
-			bio_set_dev(bio, rdev->bdev);
-			if (test_bit(Faulty, &rdev->flags)) {
-				bio_io_error(bio);
-			} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
-					    !bdev_max_discard_sectors(bio->bi_bdev)))
-				/* Just ignore it */
-				bio_endio(bio);
-			else
-				submit_bio_noacct(bio);
+
+			md_submit_write(bio);
 			bio = next;
 			cond_resched();
 		}
@@ -1127,17 +1118,8 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
-		struct md_rdev *rdev = (void*)bio->bi_bdev;
-		bio->bi_next = NULL;
-		bio_set_dev(bio, rdev->bdev);
-		if (test_bit(Faulty, &rdev->flags)) {
-			bio_io_error(bio);
-		} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
-				    !bdev_max_discard_sectors(bio->bi_bdev)))
-			/* Just ignore it */
-			bio_endio(bio);
-		else
-			submit_bio_noacct(bio);
+
+		md_submit_write(bio);
 		bio = next;
 		cond_resched();
 	}
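
One subtlety the helper encapsulates: while a write waits on a pending
list, md temporarily stores the target struct md_rdev pointer in
bio->bi_bdev (see the raid1 comment "flush_pending_writes() needs access
to the rdev so..." quoted in the previous patch), and md_submit_write() is
where the real block device is restored with bio_set_dev() before
submission. A minimal sketch of that hand-off, with invented function
names, follows; it is an illustration, not the driver code.

/* Illustrative sketch of the rdev-in-bi_bdev hand-off. */
static void queue_write(struct bio *mbio, struct md_rdev *rdev,
			struct bio_list *pending)
{
	/* Park the rdev pointer where md_submit_write() expects to find it. */
	mbio->bi_bdev = (void *)rdev;
	bio_list_add(pending, mbio);
}

static void flush_writes(struct bio_list *pending)
{
	struct bio *bio;

	while ((bio = bio_list_pop(pending)))
		md_submit_write(bio);	/* restores the real bdev, then submits */
}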

From patchwork Thu Apr 20 11:29:43 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 85875
From: Yu Kuai
To: song@kernel.org, neilb@suse.de, akpm@osdl.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 5/8] md/raid1-10: submit write io directly if bitmap is not enabled
Date: Thu, 20 Apr 2023 19:29:43 +0800
Message-Id: <20230420112946.2869956-6-yukuai1@huaweicloud.com>
In-Reply-To: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
References: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
From: Yu Kuai

Commit 6cce3b23f6f8 ("[PATCH] md: write intent bitmap support for raid10")
added bitmap support and changed write io to be submitted through the
daemon thread, because the bitmap needs to be updated before the write io.
Later, plugging was introduced to fix the resulting performance regression:
with all write io going through the daemon thread, io can't be issued
concurrently.

However, if the bitmap is not enabled, write io should not go through the
daemon thread in the first place, and plugging is not needed either.
Fixes: 6cce3b23f6f8 ("[PATCH] md: write intent bitmap support for raid10")
Signed-off-by: Yu Kuai
---
 drivers/md/md-bitmap.c |  4 +---
 drivers/md/md-bitmap.h |  7 +++++++
 drivers/md/raid1-10.h  | 15 +++++++++++++--
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index ab27f66dbb1f..4bd980b272ef 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1000,7 +1000,6 @@ static int md_bitmap_file_test_bit(struct bitmap *bitmap, sector_t block)
 	return set;
 }

-
 /* this gets called when the md device is ready to unplug its underlying
  * (slave) device queues -- before we let any writes go down, we need to
  * sync the dirty pages of the bitmap file to disk */
@@ -1010,8 +1009,7 @@ void md_bitmap_unplug(struct bitmap *bitmap)
 	int dirty, need_write;
 	int writing = 0;

-	if (!bitmap || !bitmap->storage.filemap ||
-	    test_bit(BITMAP_STALE, &bitmap->flags))
+	if (!md_bitmap_enabled(bitmap))
 		return;

 	/* look at each page to see if there are any set bits that need to be
diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
index cfd7395de8fd..3a4750952b3a 100644
--- a/drivers/md/md-bitmap.h
+++ b/drivers/md/md-bitmap.h
@@ -273,6 +273,13 @@ int md_bitmap_copy_from_slot(struct mddev *mddev, int slot,
 			     sector_t *lo, sector_t *hi, bool clear_bits);
 void md_bitmap_free(struct bitmap *bitmap);
 void md_bitmap_wait_behind_writes(struct mddev *mddev);
+
+static inline bool md_bitmap_enabled(struct bitmap *bitmap)
+{
+	return bitmap && bitmap->storage.filemap &&
+	       !test_bit(BITMAP_STALE, &bitmap->flags);
+}
+
 #endif

 #endif
diff --git a/drivers/md/raid1-10.h b/drivers/md/raid1-10.h
index 9dc53d8a8129..95b2fb4dd9aa 100644
--- a/drivers/md/raid1-10.h
+++ b/drivers/md/raid1-10.h
@@ -2,6 +2,8 @@
 #ifndef _RAID1_10_H
 #define _RAID1_10_H

+#include "md-bitmap.h"
+
 /* Maximum size of each resync request */
 #define RESYNC_BLOCK_SIZE (64*1024)
 #define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE)
@@ -133,9 +135,18 @@ static inline bool md_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
 				      blk_plug_cb_fn unplug)
 {
 	struct raid1_plug_cb *plug = NULL;
-	struct blk_plug_cb *cb = blk_check_plugged(unplug, mddev,
-						   sizeof(*plug));
+	struct blk_plug_cb *cb;
+
+	/*
+	 * If bitmap is not enabled, it's safe to submit the io directly, and
+	 * this can get optimal performance.
+	 */
+	if (!md_bitmap_enabled(mddev->bitmap)) {
+		md_submit_write(bio);
+		return true;
+	}

+	cb = blk_check_plugged(unplug, mddev, sizeof(*plug));
 	if (!cb)
 		return false;
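
Taken together with patches 3 and 4, the effective per-bio decision in the
write path now looks roughly like the sketch below. This is a restructured
illustration rather than the actual driver code (after this patch the
bitmap check actually lives inside md_add_bio_to_plug()), and
queue_to_daemon() is an invented stand-in for queueing the bio onto
conf->pending_bio_list.

/* Illustrative decision flow only; queue_to_daemon() is made up. */
static void dispatch_write(struct mddev *mddev, struct bio *mbio,
			   blk_plug_cb_fn unplug)
{
	if (!md_bitmap_enabled(mddev->bitmap)) {
		/* No bitmap to flush first: issue the write immediately. */
		md_submit_write(mbio);
		return;
	}

	/* Bitmap enabled: try to park the bio on the current task's plug. */
	if (!md_add_bio_to_plug(mddev, mbio, unplug))
		queue_to_daemon(mddev, mbio);	/* hypothetical slow path */
}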

From patchwork Thu Apr 20 11:29:44 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 85879
From: Yu Kuai
To: song@kernel.org, neilb@suse.de, akpm@osdl.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 6/8] md/md-bitmap: support to unplug bitmap asynchrously
Date: Thu, 20 Apr 2023 19:29:44 +0800
Message-Id: <20230420112946.2869956-7-yukuai1@huaweicloud.com>
In-Reply-To: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
References: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
From: Yu Kuai

If the bitmap is enabled, the bitmap must be updated before submitting
write io; this is why the unplug callback must move these io to
'conf->pending_io_list' when 'current->bio_list' is not empty, which causes
a performance degradation.

This patch adds a new helper md_bitmap_unplug_async() to submit bitmap io
from a kworker, so that submitting bitmap io in raid10_unplug() no longer
requires 'current->bio_list' to be empty.

This prepares for limiting the number of plugged bios.

Signed-off-by: Yu Kuai
---
 drivers/md/md-bitmap.c | 59 +++++++++++++++++++++++++++++++++++++++---
 drivers/md/md-bitmap.h |  3 +++
 drivers/md/raid1.c     |  3 ++-
 drivers/md/raid10.c    |  2 +-
 4 files changed, 62 insertions(+), 5 deletions(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 4bd980b272ef..da8ad2e95e88 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1000,10 +1000,18 @@ static int md_bitmap_file_test_bit(struct bitmap *bitmap, sector_t block)
 	return set;
 }

-/* this gets called when the md device is ready to unplug its underlying
+struct bitmap_unplug_work {
+	struct work_struct work;
+	struct bitmap *bitmap;
+	struct completion *done;
+};
+
+/*
+ * This gets called when the md device is ready to unplug its underlying
  * (slave) device queues -- before we let any writes go down, we need to
- * sync the dirty pages of the bitmap file to disk */
-void md_bitmap_unplug(struct bitmap *bitmap)
+ * sync the dirty pages of the bitmap file to disk.
+ */
+static void md_do_bitmap_unplug(struct bitmap *bitmap)
 {
 	unsigned long i;
 	int dirty, need_write;
@@ -1035,9 +1043,45 @@ void md_bitmap_unplug(struct bitmap *bitmap)

 	if (test_bit(BITMAP_WRITE_ERROR, &bitmap->flags))
 		md_bitmap_file_kick(bitmap);
+
+}
+static void md_bitmap_unplug_fn(struct work_struct *work)
+{
+	struct bitmap_unplug_work *unplug_work =
+		container_of(work, struct bitmap_unplug_work, work);
+
+	md_do_bitmap_unplug(unplug_work->bitmap);
+	complete(unplug_work->done);
+}
+
+static void __md_bitmap_unplug(struct bitmap *bitmap, bool async)
+{
+	DECLARE_COMPLETION_ONSTACK(done);
+	struct bitmap_unplug_work unplug_work;
+
+	if (!async)
+		return md_do_bitmap_unplug(bitmap);
+
+	INIT_WORK(&unplug_work.work, md_bitmap_unplug_fn);
+	unplug_work.bitmap = bitmap;
+	unplug_work.done = &done;
+
+	queue_work(bitmap->unplug_wq, &unplug_work.work);
+	wait_for_completion(&done);
+}
+
+void md_bitmap_unplug(struct bitmap *bitmap)
+{
+	return __md_bitmap_unplug(bitmap, false);
 }
 EXPORT_SYMBOL(md_bitmap_unplug);

+void md_bitmap_unplug_async(struct bitmap *bitmap)
+{
+	return __md_bitmap_unplug(bitmap, true);
+}
+EXPORT_SYMBOL(md_bitmap_unplug_async);
+
 static void md_bitmap_set_memory_bits(struct bitmap *bitmap, sector_t offset, int needed);
 /*
  * bitmap_init_from_disk -- called at bitmap_create time to initialize
  * the in-memory bitmap from the on-disk bitmap -- also, sets up the
@@ -1753,6 +1797,9 @@ void md_bitmap_free(struct bitmap *bitmap)
 	if (!bitmap) /* there was no bitmap */
 		return;

+	if (bitmap->unplug_wq)
+		destroy_workqueue(bitmap->unplug_wq);
+
 	if (bitmap->sysfs_can_clear)
 		sysfs_put(bitmap->sysfs_can_clear);

@@ -1843,6 +1890,12 @@ struct bitmap *md_bitmap_create(struct mddev *mddev, int slot)
 	if (!bitmap)
 		return ERR_PTR(-ENOMEM);

+	bitmap->unplug_wq = create_workqueue("md_bitmap");
+	if (!bitmap->unplug_wq) {
+		err = -ENOMEM;
+		goto error;
+	}
+
 	spin_lock_init(&bitmap->counts.lock);
 	atomic_set(&bitmap->pending_writes, 0);
 	init_waitqueue_head(&bitmap->write_wait);
diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
index 3a4750952b3a..55531669db24 100644
--- a/drivers/md/md-bitmap.h
+++ b/drivers/md/md-bitmap.h
@@ -231,6 +231,8 @@ struct bitmap {
 	struct kernfs_node *sysfs_can_clear;

 	int cluster_slot;		/* Slot offset for clustered env */
+
+	struct workqueue_struct *unplug_wq;
 };

 /* the bitmap API */
@@ -264,6 +266,7 @@ void md_bitmap_sync_with_cluster(struct mddev *mddev,
 				 sector_t new_lo, sector_t new_hi);

 void md_bitmap_unplug(struct bitmap *bitmap);
+void md_bitmap_unplug_async(struct bitmap *bitmap);
 void md_bitmap_daemon_work(struct mddev *mddev);

 int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index c068ed3e6c96..7389e599f34e 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -792,7 +792,6 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
 static void flush_bio_list(struct r1conf *conf, struct bio *bio)
 {
 	/* flush any pending bitmap writes to disk before proceeding w/ I/O */
-	md_bitmap_unplug(conf->mddev->bitmap);
 	wake_up(&conf->wait_barrier);

 	while (bio) { /* submit pending writes */
@@ -829,6 +828,7 @@ static void flush_pending_writes(struct r1conf *conf)
 		 */
 		__set_current_state(TASK_RUNNING);
 		blk_start_plug(&plug);
+		md_bitmap_unplug(conf->mddev->bitmap);
 		flush_bio_list(conf, bio);
 		blk_finish_plug(&plug);
 	} else
@@ -1176,6 +1176,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)

 	/* we aren't scheduling, so we can do the write-out directly. */
 	bio = bio_list_get(&plug->pending);
+	md_bitmap_unplug_async(conf->mddev->bitmap);
 	flush_bio_list(conf, bio);
 	kfree(plug);
 }
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index fd625026c97b..9f307ff5d4f6 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1113,7 +1113,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)

 	/* we aren't scheduling, so we can do the write-out directly.
 	 */
 	bio = bio_list_get(&plug->pending);
-	md_bitmap_unplug(mddev->bitmap);
+	md_bitmap_unplug_async(mddev->bitmap);
 	wake_up(&conf->wait_barrier);

 	while (bio) { /* submit pending writes */
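
The async unplug added here is an instance of a standard kernel idiom:
place a work item and a completion on the caller's stack, queue the work
to a dedicated workqueue, and block until the worker signals completion.
The standalone sketch below shows that idiom with invented names
(demo_work, demo_work_fn, demo_run_and_wait); the APIs used are the same
ones the patch relies on.

/* Illustrative sketch of the queue-and-wait idiom, not the md code. */
struct demo_work {
	struct work_struct work;
	struct completion *done;
};

static void demo_work_fn(struct work_struct *work)
{
	struct demo_work *dw = container_of(work, struct demo_work, work);

	/* ... do the actual job in worker context ... */
	complete(dw->done);		/* wake the waiting caller */
}

static void demo_run_and_wait(struct workqueue_struct *wq)
{
	DECLARE_COMPLETION_ONSTACK(done);
	struct demo_work dw = { .done = &done };

	INIT_WORK(&dw.work, demo_work_fn);
	queue_work(wq, &dw.work);
	wait_for_completion(&done);	/* dw stays valid on this stack until here */
}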

From patchwork Thu Apr 20 11:29:45 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 85880
From: Yu Kuai
To: song@kernel.org, neilb@suse.de, akpm@osdl.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 7/8] md/raid1{,0}: Revert "md/raid1{,0}: fix deadlock in bitmap_unplug."
Date: Thu, 20 Apr 2023 19:29:45 +0800
Message-Id: <20230420112946.2869956-8-yukuai1@huaweicloud.com>
In-Reply-To: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
References: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
From: Yu Kuai

This reverts commit 874807a83139abc094f939e93623c5623573d543.

The deadlock that the reverted commit worked around no longer exists after
commit a214b949d8e3 ("blk-mq: only flush requests from the plug in
blk_mq_submit_bio"), and with bitmap I/O now handled asynchronously it can
no longer be triggered. In addition, for performance reasons, plugged bios
should not be handled asynchronously.

Signed-off-by: Yu Kuai
---
 drivers/md/raid1.c  | 2 +-
 drivers/md/raid10.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 7389e599f34e..91e1dbc48228 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1164,7 +1164,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	struct r1conf *conf = mddev->private;
 	struct bio *bio;
 
-	if (from_schedule || current->bio_list) {
+	if (from_schedule) {
 		spin_lock_irq(&conf->device_lock);
 		bio_list_merge(&conf->pending_bio_list, &plug->pending);
 		spin_unlock_irq(&conf->device_lock);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 9f307ff5d4f6..d92b1efe9eee 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1101,7 +1101,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	struct r10conf *conf = mddev->private;
 	struct bio *bio;
 
-	if (from_schedule || current->bio_list) {
+	if (from_schedule) {
 		spin_lock_irq(&conf->device_lock);
 		bio_list_merge(&conf->pending_bio_list, &plug->pending);
 		spin_unlock_irq(&conf->device_lock);

From patchwork Thu Apr 20 11:29:46 2023
From: Yu Kuai
To: song@kernel.org, neilb@suse.de, akpm@osdl.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH -next 8/8] md/raid1-10: limit the number of plugged bio
Date: Thu, 20 Apr 2023 19:29:46 +0800
Message-Id: <20230420112946.2869956-9-yukuai1@huaweicloud.com>
In-Reply-To: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
References: <20230420112946.2869956-1-yukuai1@huaweicloud.com>
From: Yu Kuai

bios can be added to the plug without bound, and the following writeback
test triggers a huge number of plugged bios:

Test script:

modprobe brd rd_nr=4 rd_size=10485760
mdadm -CR /dev/md0 -l10 -n4 /dev/ram[0123] --assume-clean
echo 0 > /proc/sys/vm/dirty_background_ratio
echo 60 > /proc/sys/vm/dirty_ratio
fio -filename=/dev/md0 -ioengine=libaio -rw=write -thread -bs=1k-8k \
    -numjobs=1 -iodepth=128 -name=xxx

Test result:

Monitoring /sys/block/md0/inflight shows that the inflight count keeps
increasing until fio finishes writing; after running for about 2 minutes:

[root@fedora ~]# cat /sys/block/md0/inflight
       0  4474191

Fix the problem by limiting the number of plugged bios based on the number
of copies for the original bio.

Signed-off-by: Yu Kuai
---
 drivers/md/raid1-10.h | 9 ++++++++-
 drivers/md/raid1.c    | 2 +-
 drivers/md/raid10.c   | 2 +-
 3 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/md/raid1-10.h b/drivers/md/raid1-10.h
index 95b2fb4dd9aa..2785ae805953 100644
--- a/drivers/md/raid1-10.h
+++ b/drivers/md/raid1-10.h
@@ -33,9 +33,12 @@ struct resync_pages {
 	struct page	*pages[RESYNC_PAGES];
 };
 
+#define MAX_PLUG_BIO 32
+
 struct raid1_plug_cb {
 	struct blk_plug_cb	cb;
 	struct bio_list		pending;
+	unsigned int		count;
 };
 
 static inline void rbio_pool_free(void *rbio, void *data)
@@ -132,7 +135,7 @@ static inline void md_submit_write(struct bio *bio)
 }
 
 static inline bool md_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
-				      blk_plug_cb_fn unplug)
+				      blk_plug_cb_fn unplug, int copies)
 {
 	struct raid1_plug_cb *plug = NULL;
 	struct blk_plug_cb *cb;
@@ -152,6 +155,10 @@ static inline bool md_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
 
 	plug = container_of(cb, struct raid1_plug_cb, cb);
 	bio_list_add(&plug->pending, bio);
+	if (++plug->count / MAX_PLUG_BIO >= copies) {
+		list_del(&cb->list);
+		cb->callback(cb, false);
+	}
 
 	return true;
 }
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 91e1dbc48228..6a38104a7b89 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1561,7 +1561,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 					      r1_bio->sector);
 		/* flush_pending_writes() needs access to the rdev so...*/
 		mbio->bi_bdev = (void *)rdev;
-		if (!md_add_bio_to_plug(mddev, mbio, raid1_unplug)) {
+		if (!md_add_bio_to_plug(mddev, mbio, raid1_unplug, disks)) {
 			spin_lock_irqsave(&conf->device_lock, flags);
 			bio_list_add(&conf->pending_bio_list, mbio);
 			spin_unlock_irqrestore(&conf->device_lock, flags);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index d92b1efe9eee..721d50646043 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1300,7 +1300,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 
 	atomic_inc(&r10_bio->remaining);
 
-	if (!md_add_bio_to_plug(mddev, mbio, raid10_unplug)) {
+	if (!md_add_bio_to_plug(mddev, mbio, raid10_unplug, conf->copies)) {
 		spin_lock_irqsave(&conf->device_lock, flags);
 		bio_list_add(&conf->pending_bio_list, mbio);
 		spin_unlock_irqrestore(&conf->device_lock, flags);
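
[Editor's note: a minimal user-space sketch of the counting scheme above, not the
kernel code itself; struct plug_sketch, queue_bio_should_flush(), copies = 2 and
the explicit count reset are illustrative stand-ins for the real plug callback
life cycle. It only demonstrates the arithmetic of the check
"++plug->count / MAX_PLUG_BIO >= copies": with roughly 32 plugged bios allowed
per copy of the original bio, two copies mean the plug is flushed once 64 bios
have been queued.]

#include <stdio.h>

/* Mirrors MAX_PLUG_BIO from the patch: at most ~32 plugged bios per copy. */
#define MAX_PLUG_BIO 32

/* Illustrative stand-in for struct raid1_plug_cb (sketch only). */
struct plug_sketch {
	unsigned int count;	/* bios currently queued in this plug */
};

/*
 * Same check as the patch: after queueing one more bio, flush once
 * count / MAX_PLUG_BIO reaches the number of copies written per
 * original bio.
 */
static int queue_bio_should_flush(struct plug_sketch *plug, int copies)
{
	return ++plug->count / MAX_PLUG_BIO >= copies;
}

int main(void)
{
	struct plug_sketch plug = { 0 };
	int copies = 2;		/* assumed: two copies per original bio */
	unsigned int i;

	for (i = 1; i <= 200; i++) {
		if (queue_bio_should_flush(&plug, copies)) {
			printf("flush after %u plugged bios\n", plug.count);
			/* In the kernel a fresh plug_cb is allocated on the
			 * next write, so the count restarts from zero. */
			plug.count = 0;
		}
	}
	return 0;
}

[With copies == 2 the sketch flushes every 64 queued bios, which is the bound
that keeps the inflight backlog from growing past four million as in the test
result above.]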