From patchwork Tue Nov 1 09:34:10 2022
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13580
From: Kemeng Shi
Subject: [PATCH 13/20] block,bfq: remove redundant nonrot_with_queueing check in bfq_setup_cooperator
Date: Tue, 1 Nov 2022 17:34:10 +0800
Message-ID: <20221101093417.10540-14-shikemeng@huawei.com>
In-Reply-To: <20221101093417.10540-1-shikemeng@huawei.com>
References: <20221101093417.10540-1-shikemeng@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Commit 430a67f9d6169 ("block, bfq: merge bursts of newly-created queues") added stable-merge logic to bfq_setup_cooperator(), guarded so that it only executes for !nonrot_with_queueing devices. However, bfq_setup_cooperator() is designed to do real work only for !nonrot_with_queueing devices in the first place, and it already returns NULL before doing anything when the device is nonrot_with_queueing. Move the stable-merge code after that existing nonrot_with_queueing check, so there is no need to re-check nonrot_with_queueing.

Signed-off-by: Kemeng Shi
---
 block/bfq-iosched.c | 97 ++++++++++++++++++++++-----------------------
 1 file changed, 47 insertions(+), 50 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index a46e49de895a..b8af0bb98d66 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2886,56 +2886,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	if (bfqq->new_bfqq)
 		return bfqq->new_bfqq;
 
-	/*
-	 * Check delayed stable merge for rotational or non-queueing
-	 * devs. For this branch to be executed, bfqq must not be
-	 * currently merged with some other queue (i.e., bfqq->bic
-	 * must be non null). If we considered also merged queues,
-	 * then we should also check whether bfqq has already been
-	 * merged with bic->stable_merge_bfqq. But this would be
-	 * costly and complicated.
-	 */
-	if (unlikely(!bfqd->nonrot_with_queueing)) {
-		/*
-		 * Make sure also that bfqq is sync, because
-		 * bic->stable_merge_bfqq may point to some queue (for
-		 * stable merging) also if bic is associated with a
-		 * sync queue, but this bfqq is async
-		 */
-		if (bfq_bfqq_sync(bfqq) && bic->stable_merge_bfqq &&
-		    !bfq_bfqq_just_created(bfqq) &&
-		    time_is_before_jiffies(bfqq->split_time +
-					   msecs_to_jiffies(bfq_late_stable_merging)) &&
-		    time_is_before_jiffies(bfqq->creation_time +
-					   msecs_to_jiffies(bfq_late_stable_merging))) {
-			struct bfq_queue *stable_merge_bfqq =
-				bic->stable_merge_bfqq;
-			int proc_ref = min(bfqq_process_refs(bfqq),
-					   bfqq_process_refs(stable_merge_bfqq));
-
-			/* deschedule stable merge, because done or aborted here */
-			bfq_put_stable_ref(stable_merge_bfqq);
-
-			bic->stable_merge_bfqq = NULL;
-
-			if (!idling_boosts_thr_without_issues(bfqd, bfqq) &&
-			    proc_ref > 0) {
-				/* next function will take at least one ref */
-				struct bfq_queue *new_bfqq =
-					bfq_setup_merge(bfqq, stable_merge_bfqq);
-
-				if (new_bfqq) {
-					bic->stably_merged = true;
-					if (new_bfqq->bic)
-						new_bfqq->bic->stably_merged =
-									true;
-				}
-				return new_bfqq;
-			} else
-				return NULL;
-		}
-	}
-
 	/*
 	 * Do not perform queue merging if the device is non
 	 * rotational and performs internal queueing. In fact, such a
@@ -2976,6 +2926,53 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	if (likely(bfqd->nonrot_with_queueing))
 		return NULL;
 
+	/*
+	 * Check delayed stable merge for rotational or non-queueing
+	 * devs. For this branch to be executed, bfqq must not be
+	 * currently merged with some other queue (i.e., bfqq->bic
+	 * must be non null). If we considered also merged queues,
+	 * then we should also check whether bfqq has already been
+	 * merged with bic->stable_merge_bfqq. But this would be
+	 * costly and complicated.
+	 * Make sure also that bfqq is sync, because
+	 * bic->stable_merge_bfqq may point to some queue (for
+	 * stable merging) also if bic is associated with a
+	 * sync queue, but this bfqq is async
+	 */
+	if (bfq_bfqq_sync(bfqq) && bic->stable_merge_bfqq &&
+	    !bfq_bfqq_just_created(bfqq) &&
+	    time_is_before_jiffies(bfqq->split_time +
+				   msecs_to_jiffies(bfq_late_stable_merging)) &&
+	    time_is_before_jiffies(bfqq->creation_time +
+				   msecs_to_jiffies(bfq_late_stable_merging))) {
+		struct bfq_queue *stable_merge_bfqq =
+			bic->stable_merge_bfqq;
+		int proc_ref = min(bfqq_process_refs(bfqq),
+				   bfqq_process_refs(stable_merge_bfqq));
+
+		/* deschedule stable merge, because done or aborted here */
+		bfq_put_stable_ref(stable_merge_bfqq);
+
+		bic->stable_merge_bfqq = NULL;
+
+		if (!idling_boosts_thr_without_issues(bfqd, bfqq) &&
+		    proc_ref > 0) {
+			/* next function will take at least one ref */
+			struct bfq_queue *new_bfqq =
+				bfq_setup_merge(bfqq, stable_merge_bfqq);
+
+			if (new_bfqq) {
+				bic->stably_merged = true;
+				if (new_bfqq->bic)
+					new_bfqq->bic->stably_merged =
+								true;
+			}
+			return new_bfqq;
+		} else
+			return NULL;
+	}
+
 	/*
 	 * Prevent bfqq from being merged if it has been created too
 	 * long ago. The idea is that true cooperating processes, and