Message ID | 20230104142259.2673013-9-shikemeng@huaweicloud.com |
---|---|
State | New |
Headers |
From: Kemeng Shi <shikemeng@huaweicloud.com>
To: axboe@kernel.dk, dwagner@suse.de, hare@suse.de, ming.lei@redhat.com, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: hch@lst.de, john.garry@huawei.com, jack@suse.cz
Subject: [PATCH v2 08/13] blk-mq: simplify flush check in blk_mq_dispatch_rq_list
Date: Wed, 4 Jan 2023 22:22:54 +0800
Message-Id: <20230104142259.2673013-9-shikemeng@huaweicloud.com>
In-Reply-To: <20230104142259.2673013-1-shikemeng@huaweicloud.com>
References: <20230104142259.2673013-1-shikemeng@huaweicloud.com> |
Series | A few bugfix and cleanup patches for blk-mq |
Commit Message
Kemeng Shi
Jan. 4, 2023, 2:22 p.m. UTC
For the busy statuses BLK_STS_RESOURCE and BLK_STS_DEV_RESOURCE, the
request is always added back to the list, so if the list is empty,
needs_resource cannot be true and ret cannot be BLK_STS_DEV_RESOURCE.
Remove these dead checks.

If the list is empty, we only need to send an extra flush when the
error happened for the last request in the list, whose status is
stored in ret. So send the extra flush when ret is not BLK_STS_OK
instead of when errors is non-zero, to avoid an unnecessary flush for
an error on a request in the middle of the list.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
block/blk-mq.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Comments
I think we need to come up with a clear rule on when commit_rqs needs to be called, and follow that. In this case I'd be confused if there were any case where we need to call it when the list is empty.
Hi Christoph, thank you for the review.

on 1/9/2023 2:06 AM, Christoph Hellwig wrote:
> I think we need to come up with a clear rule on when commit_rqs
> needs to be called, and follow that. In this case I'd be confused
> if there was any case where we need to call it if list was empty.

After we queue request(s) to one driver queue, we need to notify the
driver that there are no more requests for the queue, or the driver
will keep waiting for the last request to be queued and an IO hang
could happen. Normally we notify this by setting .last in struct
blk_mq_queue_data along with the last request in .rq. The extra commit
is only needed if the normal "last" information in .last is lost (see
the comment on commit_rqs in struct blk_mq_ops).

The loss can occur if an error happens while sending the last request
with .last set, or if an error happens in the middle of the list and
we never even send the request with .last set.
On Mon, Jan 09, 2023 at 10:27:33AM +0800, Kemeng Shi wrote:
> After we queue request(s) to one driver queue, we need to notify the
> driver that there are no more requests for the queue, or the driver
> will keep waiting for the last request to be queued and an IO hang
> could happen.

Yes.

> Normally we notify this by setting .last in struct blk_mq_queue_data
> along with the last request in .rq. The extra commit is only needed
> if the normal "last" information in .last is lost (see the comment on
> commit_rqs in struct blk_mq_ops).
>
> The loss can occur if an error happens while sending the last request
> with .last set, or if an error happens in the middle of the list and
> we never even send the request with .last set.

Yes. So the rule is:

 1) we did not queue everything we initially scheduled to queue

OR

 2) the last attempt to queue a request failed

I think we need to find a way to clearly document that and make all
callers match it. For most this becomes a

	if (ret || !list_empty(list))

or even just

	if (ret)

as an error is often the only way to break out of the submission loop.

I wonder if we need to split the clearing of queued out of
blk_mq_commit_rqs and just clear it in the existing callers, so that we
can use that helper for all commits, nicely hiding the ->commit_rqs
presence check, and then move the call to where it is needed directly.
Something like this untested patch (which needs to be split up), which
also makes sure we trace these calls consistently:

---
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c5cf0dbca1db8d..436ca56a0b7172 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2001,6 +2001,15 @@ static void blk_mq_release_budgets(struct request_queue *q,
 	}
 }
 
+static void blk_mq_commit_rqs(struct blk_mq_hw_ctx *hctx, int queued,
+		bool from_schedule)
+{
+	if (queued && hctx->queue->mq_ops->commit_rqs) {
+		trace_block_unplug(hctx->queue, queued, !from_schedule);
+		hctx->queue->mq_ops->commit_rqs(hctx);
+	}
+}
+
 /*
  * Returns true if we did some work AND can potentially do more.
  */
@@ -2082,12 +2091,9 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
 	if (!list_empty(&zone_list))
 		list_splice_tail_init(&zone_list, list);
 
-	/* If we didn't flush the entire list, we could have told the driver
-	 * there was more coming, but that turned out to be a lie.
-	 */
-	if ((!list_empty(list) || errors || needs_resource ||
-	     ret == BLK_STS_DEV_RESOURCE) && q->mq_ops->commit_rqs && queued)
-		q->mq_ops->commit_rqs(hctx);
+	if (!list_empty(list) || ret)
+		blk_mq_commit_rqs(hctx, queued, false);
+
 	/*
 	 * Any items that need requeuing? Stuff them into hctx->dispatch,
 	 * that is where we will continue on next queue run.
@@ -2548,16 +2554,6 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 	spin_unlock(&ctx->lock);
 }
 
-static void blk_mq_commit_rqs(struct blk_mq_hw_ctx *hctx, int *queued,
-		bool from_schedule)
-{
-	if (hctx->queue->mq_ops->commit_rqs) {
-		trace_block_unplug(hctx->queue, *queued, !from_schedule);
-		hctx->queue->mq_ops->commit_rqs(hctx);
-	}
-	*queued = 0;
-}
-
 static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
 		unsigned int nr_segs)
 {
@@ -2684,17 +2680,17 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
 static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
 {
 	struct blk_mq_hw_ctx *hctx = NULL;
+	blk_status_t ret = BLK_STS_OK;
 	struct request *rq;
 	int queued = 0;
-	int errors = 0;
 
 	while ((rq = rq_list_pop(&plug->mq_list))) {
 		bool last = rq_list_empty(plug->mq_list);
-		blk_status_t ret;
 
 		if (hctx != rq->mq_hctx) {
 			if (hctx)
-				blk_mq_commit_rqs(hctx, &queued, from_schedule);
+				blk_mq_commit_rqs(hctx, queued, from_schedule);
+			queued = 0;
 			hctx = rq->mq_hctx;
 		}
 
@@ -2706,21 +2702,15 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
 		case BLK_STS_RESOURCE:
 		case BLK_STS_DEV_RESOURCE:
 			blk_mq_request_bypass_insert(rq, false, true);
-			blk_mq_commit_rqs(hctx, &queued, from_schedule);
-			return;
+			goto out;
 		default:
 			blk_mq_end_request(rq, ret);
-			errors++;
 			break;
 		}
 	}
-
-	/*
-	 * If we didn't flush the entire list, we could have told the driver
-	 * there was more coming, but that turned out to be a lie.
-	 */
-	if (errors)
-		blk_mq_commit_rqs(hctx, &queued, from_schedule);
+out:
+	if (ret)
+		blk_mq_commit_rqs(hctx, queued, from_schedule);
 }
 
 static void __blk_mq_flush_plug_list(struct request_queue *q,
@@ -2804,37 +2794,33 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 		struct list_head *list)
 {
+	blk_status_t ret = BLK_STS_OK;
+	struct request *rq;
 	int queued = 0;
-	int errors = 0;
-
-	while (!list_empty(list)) {
-		blk_status_t ret;
-		struct request *rq = list_first_entry(list, struct request,
-				queuelist);
+	bool last;
 
+	while ((rq = list_first_entry_or_null(list, struct request,
+			queuelist))) {
 		list_del_init(&rq->queuelist);
-		ret = blk_mq_request_issue_directly(rq, list_empty(list));
-		if (ret != BLK_STS_OK) {
-			errors++;
-			if (ret == BLK_STS_RESOURCE ||
-					ret == BLK_STS_DEV_RESOURCE) {
-				blk_mq_request_bypass_insert(rq, false,
-						list_empty(list));
-				break;
-			}
-			blk_mq_end_request(rq, ret);
-		} else
+		last = list_empty(list);
+
+		ret = blk_mq_request_issue_directly(rq, last);
+		switch (ret) {
+		case BLK_STS_OK:
 			queued++;
+			break;
+		case BLK_STS_RESOURCE:
+		case BLK_STS_DEV_RESOURCE:
+			blk_mq_request_bypass_insert(rq, false, last);
+			goto out;
+		default:
+			blk_mq_end_request(rq, ret);
+			break;
+		}
 	}
-
-	/*
-	 * If we didn't flush the entire list, we could have told
-	 * the driver there was more coming, but that turned out to
-	 * be a lie.
-	 */
-	if ((!list_empty(list) || errors) &&
-	    hctx->queue->mq_ops->commit_rqs && queued)
-		hctx->queue->mq_ops->commit_rqs(hctx);
+out:
+	if (ret)
+		blk_mq_commit_rqs(hctx, queued, false);
 }
 
 static bool blk_mq_attempt_bio_merge(struct request_queue *q,
on 1/10/2023 4:09 PM, Christoph Hellwig wrote:
> [...]
> Yes. So the rule is:
>
> 1) did not queue everything initially scheduled to queue
>
> OR
>
> 2) the last attempt to queue a request failed
>
> I think we need to find a way to clearly document that and make all
> callers match it.
> [...]
> I wonder if we need to split the clearing of queued out of
> blk_mq_commit_rqs and just clear it in the existing callers, so that
> we can use that helper for all commits, nicely hiding the
> ->commit_rqs presence check, and then move the call to where it is
> needed directly. Something like this untested patch (which needs to
> be split up), which also makes sure we trace these calls
> consistently:

Yes, using the helper also makes the queued check consistent.
Currently, most code only calls commit_rqs if any request was queued;
one exception is that blk_mq_plug_issue_direct calls commit_rqs
without a queued check. Besides, we can document the rule above
blk_mq_commit_rqs, so any future caller will notice the rule and match
it. I will send the next version based on the suggested helper.
Thanks.
> [untested patch quoted in full above; snipped]
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a9e88037550b..c543c14fdb47 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2085,8 +2085,8 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
 	/* If we didn't flush the entire list, we could have told the driver
 	 * there was more coming, but that turned out to be a lie.
 	 */
-	if ((!list_empty(list) || errors || needs_resource ||
-	     ret == BLK_STS_DEV_RESOURCE) && q->mq_ops->commit_rqs && queued)
+	if ((!list_empty(list) || ret != BLK_STS_OK) &&
+	    q->mq_ops->commit_rqs && queued)
 		q->mq_ops->commit_rqs(hctx);
 	/*
 	 * Any items that need requeuing? Stuff them into hctx->dispatch,