[v4,12/14] blk-mq: remove set of bd->last when get driver tag for next request fails

Message ID 20230118093726.3939160-12-shikemeng@huaweicloud.com
State New
Series A few bugfix and cleanup patches for blk-mq

Commit Message

Kemeng Shi Jan. 18, 2023, 9:37 a.m. UTC
Commit 113285b473824 ("blk-mq: ensure that bd->last is always set
correctly") sets last if we fail to get a driver tag for the next
request, to avoid missing a flush: we break out of the list walk and
never send the last request in the list, which would normally be sent
with last set.
This code seems stale now because the flush it introduces is always
redundant:
If tags are really exhausted, we send an extra flush when we find the
list is not empty after the list walk.
If some tag is freed before the retry in blk_mq_prep_dispatch_rq for
the next request, we can get a tag for it on retry, and the flush
already signalled is unnecessary.

Just remove this stale code.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 block/blk-mq.c | 24 ++----------------------
 1 file changed, 2 insertions(+), 22 deletions(-)
  

Comments

Christoph Hellwig Jan. 18, 2023, 5:42 p.m. UTC | #1
On Wed, Jan 18, 2023 at 05:37:24PM +0800, Kemeng Shi wrote:
> Commit 113285b473824 ("blk-mq: ensure that bd->last is always set
> correctly") sets last if we fail to get a driver tag for the next
> request, to avoid missing a flush: we break out of the list walk and
> never send the last request in the list, which would normally be sent
> with last set.
> This code seems stale now because the flush it introduces is always
> redundant:
> If tags are really exhausted, we send an extra flush when we find the
> list is not empty after the list walk.
> If some tag is freed before the retry in blk_mq_prep_dispatch_rq for
> the next request, we can get a tag for it on retry, and the flush
> already signalled is unnecessary.

I think Ming will know this code better than me, but aren't we
losing the blk_mq_get_driver_tag call entirely here now?  Where
is getting the driver tag covered now?
  
Kemeng Shi Jan. 19, 2023, 1:45 a.m. UTC | #2
on 1/19/2023 1:42 AM, Christoph Hellwig wrote:
> On Wed, Jan 18, 2023 at 05:37:24PM +0800, Kemeng Shi wrote:
>> Commit 113285b473824 ("blk-mq: ensure that bd->last is always set
>> correctly") sets last if we fail to get a driver tag for the next
>> request, to avoid missing a flush: we break out of the list walk and
>> never send the last request in the list, which would normally be sent
>> with last set.
>> This code seems stale now because the flush it introduces is always
>> redundant:
>> If tags are really exhausted, we send an extra flush when we find the
>> list is not empty after the list walk.
>> If some tag is freed before the retry in blk_mq_prep_dispatch_rq for
>> the next request, we can get a tag for it on retry, and the flush
>> already signalled is unnecessary.
> 
> I think Ming will know this code better than me, but aren't we
> losing the blk_mq_get_driver_tag call entirely here now?  Where
> is getting the driver tag covered now?
> 
We get the driver tag in blk_mq_prep_dispatch_rq at the beginning of the
dispatch loop, so it's fine to remove blk_mq_get_driver_tag here. Thanks.
  
Kemeng Shi Jan. 28, 2023, 2:03 a.m. UTC | #3
on 1/19/2023 9:45 AM, Kemeng Shi wrote:
> 
> 
> on 1/19/2023 1:42 AM, Christoph Hellwig wrote:
>> On Wed, Jan 18, 2023 at 05:37:24PM +0800, Kemeng Shi wrote:
>>> Commit 113285b473824 ("blk-mq: ensure that bd->last is always set
>>> correctly") sets last if we fail to get a driver tag for the next
>>> request, to avoid missing a flush: we break out of the list walk and
>>> never send the last request in the list, which would normally be sent
>>> with last set.
>>> This code seems stale now because the flush it introduces is always
>>> redundant:
>>> If tags are really exhausted, we send an extra flush when we find the
>>> list is not empty after the list walk.
>>> If some tag is freed before the retry in blk_mq_prep_dispatch_rq for
>>> the next request, we can get a tag for it on retry, and the flush
>>> already signalled is unnecessary.
>>
>> I think Ming will know this code better than me, but aren't we
>> losing the blk_mq_get_driver_tag call entirely here now?  Where
>> is getting the driver tag covered now?
>>
> We get the driver tag in blk_mq_prep_dispatch_rq at the beginning of the
> dispatch loop, so it's fine to remove blk_mq_get_driver_tag here. Thanks.
> 

Hi Ming and everyone familiar with the code involved, could you help
review this patch and patch "[PATCH v4 04/14] blk-mq: Fix potential
io hung for shared sbitmap per tagset" in the same patchset?
Thanks.
  
Kemeng Shi Feb. 6, 2023, 1:05 a.m. UTC | #4
on 1/28/2023 10:03 AM, Kemeng Shi wrote:
> 
> 
> on 1/19/2023 9:45 AM, Kemeng Shi wrote:
>>
>>
>> on 1/19/2023 1:42 AM, Christoph Hellwig wrote:
>>> On Wed, Jan 18, 2023 at 05:37:24PM +0800, Kemeng Shi wrote:
>>>> Commit 113285b473824 ("blk-mq: ensure that bd->last is always set
>>>> correctly") sets last if we fail to get a driver tag for the next
>>>> request, to avoid missing a flush: we break out of the list walk and
>>>> never send the last request in the list, which would normally be sent
>>>> with last set.
>>>> This code seems stale now because the flush it introduces is always
>>>> redundant:
>>>> If tags are really exhausted, we send an extra flush when we find the
>>>> list is not empty after the list walk.
>>>> If some tag is freed before the retry in blk_mq_prep_dispatch_rq for
>>>> the next request, we can get a tag for it on retry, and the flush
>>>> already signalled is unnecessary.
>>>
>>> I think Ming will know this code better than me, but aren't we
>>> losing the blk_mq_get_driver_tag call entirely here now?  Where
>>> is getting the driver tag covered now?
>>>
>> We get the driver tag in blk_mq_prep_dispatch_rq at the beginning of the
>> dispatch loop, so it's fine to remove blk_mq_get_driver_tag here. Thanks.
>>
> 
> Hi Ming and everyone familiar with the code involved, could you help
> review this patch and patch "[PATCH v4 04/14] blk-mq: Fix potential
> io hung for shared sbitmap per tagset" in the same patchset?
> Thanks.
> 
Could anyone please help review the last two patches that are still
without a Reviewed-by? Thanks.
  
Christoph Hellwig Feb. 6, 2023, 4:06 p.m. UTC | #5
On Mon, Feb 06, 2023 at 09:05:25AM +0800, Kemeng Shi wrote:
> Could anyone please help review the last two patches that are still
> without a Reviewed-by?

I'm still not really convinced by this one patch.

The rest of the series looks like great small cleanups, but this one
isn't quite as trivial.  Maybe you can split it from the series and
resend it with a more detailed explanation for the next merge window?
  

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 90471b5c868f..afd4f0c3166b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1923,16 +1923,6 @@  static void blk_mq_update_dispatch_busy(struct blk_mq_hw_ctx *hctx, bool busy)
 static void blk_mq_handle_dev_resource(struct request *rq,
 				       struct list_head *list)
 {
-	struct request *next =
-		list_first_entry_or_null(list, struct request, queuelist);
-
-	/*
-	 * If an I/O scheduler has been configured and we got a driver tag for
-	 * the next request already, free it.
-	 */
-	if (next)
-		blk_mq_put_driver_tag(next);
-
 	list_add(&rq->queuelist, list);
 	__blk_mq_requeue_request(rq);
 }
@@ -2032,7 +2022,7 @@  bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
 {
 	enum prep_dispatch prep;
 	struct request_queue *q = hctx->queue;
-	struct request *rq, *nxt;
+	struct request *rq;
 	int queued;
 	blk_status_t ret = BLK_STS_OK;
 	LIST_HEAD(zone_list);
@@ -2058,17 +2048,7 @@  bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
 		list_del_init(&rq->queuelist);
 
 		bd.rq = rq;
-
-		/*
-		 * Flag last if we have no more requests, or if we have more
-		 * but can't assign a driver tag to it.
-		 */
-		if (list_empty(list))
-			bd.last = true;
-		else {
-			nxt = list_first_entry(list, struct request, queuelist);
-			bd.last = !blk_mq_get_driver_tag(nxt);
-		}
+		bd.last = list_empty(list);
 
 		/*
 		 * once the request is queued to lld, no need to cover the