[12/13] blk-mq: use switch/case to improve readability in blk_mq_try_issue_list_directly
| Field | Value |
|---|---|
| Message ID | 20221223125223.1687670-13-shikemeng@huaweicloud.com |
| State | New |
From: Kemeng Shi <shikemeng@huaweicloud.com>
To: axboe@kernel.dk, dwagner@suse.de, hare@suse.de, ming.lei@redhat.com, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: hch@lst.de, john.garry@huawei.com, shikemeng@huaweicloud.com
Date: Fri, 23 Dec 2022 20:52:22 +0800
Series: A few bugfix and cleanup patches for blk-mq
Commit Message
Kemeng Shi
Dec. 23, 2022, 12:52 p.m. UTC
Use switch/case to handle errors, as other functions do, to improve
readability in blk_mq_try_issue_list_directly.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
block/blk-mq.c | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
Comments
On Fri, Dec 23, 2022 at 08:52:22PM +0800, Kemeng Shi wrote:
> + blk_mq_request_bypass_insert(rq, false, list_empty(list));
Please try to avoid the overly long line here.
That being said, blk_mq_request_bypass_insert is simply a horrible
API. I think we should do something like this:
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 53202eff545efb..b6157ae11df651 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -432,7 +432,8 @@ void blk_insert_flush(struct request *rq)
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq);
+ blk_mq_run_hw_queue(rq->mq_hctx, false);
return;
}
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 23d1a90fec4271..d49fe4503b09d7 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -437,12 +437,13 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
- blk_mq_request_bypass_insert(rq, at_head, false);
- goto run;
- }
-
- if (e) {
+ spin_lock(&hctx->lock);
+ if ((rq->rq_flags & RQF_FLUSH_SEQ) || at_head)
+ list_add(&rq->queuelist, &hctx->dispatch);
+ else
+ list_add_tail(&rq->queuelist, &hctx->dispatch);
+ spin_unlock(&hctx->lock);
+ } else if (e) {
LIST_HEAD(list);
list_add(&rq->queuelist, &list);
@@ -453,7 +454,6 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
spin_unlock(&ctx->lock);
}
-run:
if (run_queue)
blk_mq_run_hw_queue(hctx, async);
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c5cf0dbca1db8d..43bb9b36c90da7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1467,7 +1467,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
* merge.
*/
if (rq->rq_flags & RQF_DONTPREP)
- blk_mq_request_bypass_insert(rq, false, false);
+ blk_mq_request_bypass_insert(rq);
else
blk_mq_sched_insert_request(rq, true, false, false);
}
@@ -2504,26 +2504,17 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
- * @at_head: true if the request should be inserted at the head of the list.
- * @run_queue: If we should run the hardware queue after inserting the request.
*
* Should only be used carefully, when the caller knows we want to
* bypass a potential IO scheduler on the target device.
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
- bool run_queue)
+void blk_mq_request_bypass_insert(struct request *rq)
{
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
spin_lock(&hctx->lock);
- if (at_head)
- list_add(&rq->queuelist, &hctx->dispatch);
- else
- list_add_tail(&rq->queuelist, &hctx->dispatch);
+ list_add_tail(&rq->queuelist, &hctx->dispatch);
spin_unlock(&hctx->lock);
-
- if (run_queue)
- blk_mq_run_hw_queue(hctx, false);
}
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
@@ -2670,10 +2661,17 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
blk_status_t ret =
__blk_mq_try_issue_directly(hctx, rq, false, true);
- if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
- blk_mq_request_bypass_insert(rq, false, true);
- else if (ret != BLK_STS_OK)
+ switch (ret) {
+ case BLK_STS_OK:
+ break;
+ case BLK_STS_RESOURCE:
+ case BLK_STS_DEV_RESOURCE:
+ blk_mq_request_bypass_insert(rq);
+ blk_mq_run_hw_queue(rq->mq_hctx, false);
+ break;
+ default:
blk_mq_end_request(rq, ret);
+ }
}
static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
@@ -2705,7 +2703,8 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq);
+ blk_mq_run_hw_queue(rq->mq_hctx, false);
blk_mq_commit_rqs(hctx, &queued, from_schedule);
return;
default:
@@ -2818,8 +2817,9 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
errors++;
if (ret == BLK_STS_RESOURCE ||
ret == BLK_STS_DEV_RESOURCE) {
- blk_mq_request_bypass_insert(rq, false,
- list_empty(list));
+ blk_mq_request_bypass_insert(rq);
+ if (list_empty(list))
+ blk_mq_run_hw_queue(rq->mq_hctx, false);
break;
}
blk_mq_end_request(rq, ret);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index ef59fee62780d3..3733429561e1eb 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -61,8 +61,7 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
*/
void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
bool at_head);
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
- bool run_queue);
+void blk_mq_request_bypass_insert(struct request *rq);
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
struct list_head *list);
void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
on 12/23/2022 1:53 PM, Christoph Hellwig wrote:
> On Fri, Dec 23, 2022 at 08:52:22PM +0800, Kemeng Shi wrote:
>> + blk_mq_request_bypass_insert(rq, false, list_empty(list));
>
> Please try to avoid the overly long line here.

Got it, and I will fix this in the next version. Thanks!

> That being said blk_mq_request_bypass_insert is simply a horrible
> API. I think we should do something like this:

I don't quite follow this. I guess this API is horrible for two possible reasons:
1. It accepts two bool parameters which may be confused with each other.
2. It adds additional checks for whether we need to insert at head and whether we need to run the queue, which the caller has already decided.

Anyway, it seems another patch is needed for this, but I don't know the proper way to send it: should I add your patch to this patchset, or do you want to send it separately after this patchset?
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a48f2a913295..2a3db9524974 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2789,16 +2789,20 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 		list_del_init(&rq->queuelist);
 		ret = blk_mq_request_issue_directly(rq, list_empty(list));
-		if (ret != BLK_STS_OK) {
-			if (ret == BLK_STS_RESOURCE ||
-					ret == BLK_STS_DEV_RESOURCE) {
-				blk_mq_request_bypass_insert(rq, false,
-							list_empty(list));
-				break;
-			}
-			blk_mq_end_request(rq, ret);
-		} else
+		switch (ret) {
+		case BLK_STS_OK:
 			queued++;
+			break;
+		case BLK_STS_RESOURCE:
+		case BLK_STS_DEV_RESOURCE:
+			blk_mq_request_bypass_insert(rq, false, list_empty(list));
+			if (hctx->queue->mq_ops->commit_rqs && queued)
+				hctx->queue->mq_ops->commit_rqs(hctx);
+			return;
+		default:
+			blk_mq_end_request(rq, ret);
+			break;
+		}
 	}

 	/*