Message ID | 202312131510+0800-wangjinchao@xfusion.com |
---|---|
State | New |
Series | [v2] sched/fair: merge same code in enqueue_task_fair |
Commit Message
Wang Jinchao
Dec. 13, 2023, 7:12 a.m. UTC
The code below is duplicated in two for loops and needs to be
consolidated.
Signed-off-by: WangJinchao <wangjinchao@xfusion.com>
---
kernel/sched/fair.c | 31 ++++++++-----------------------
1 file changed, 8 insertions(+), 23 deletions(-)
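
The patch collapses two traversals of the scheduling-entity hierarchy into a single loop that branches on whether the entity is already enqueued. The following is a minimal, self-contained C sketch of that pattern only; the struct fields and helper functions are placeholders, not the real kernel types or APIs (the actual change is the diff below).

```c
#include <stdbool.h>

/*
 * Placeholder hierarchy node standing in for the kernel's sched_entity /
 * cfs_rq pair; everything here is illustrative, not the real kernel API.
 */
struct entity {
	struct entity *parent;   /* next level up the hierarchy */
	bool on_rq;              /* already enqueued at this level? */
	int h_nr_running;        /* per-level counter touched by both old loops */
};

static void do_enqueue(struct entity *e) { e->on_rq = true; } /* stand-in for the enqueue work */
static void do_update(struct entity *e)  { (void)e; }         /* stand-in for the stats refresh */

/*
 * Before the patch: one loop enqueued entities until it reached one already
 * on the runqueue, a second loop refreshed statistics for the remaining
 * ancestors, and both repeated the same per-level bookkeeping. After: a
 * single walk branches on on_rq and does the bookkeeping exactly once.
 */
static void enqueue_hierarchy(struct entity *e)
{
	for (; e; e = e->parent) {
		if (e->on_rq)
			do_update(e);    /* was the body of the second loop */
		else
			do_enqueue(e);   /* was the body of the first loop */

		e->h_nr_running++;       /* bookkeeping shared by both branches */
	}
}
```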
Comments
Hi Jinchao,

On 12/13/23 3:12 PM, WangJinchao Wrote:
> The code below is duplicated in two for loops and needs to be
> consolidated

It doesn't need to, but it can actually bring some benefit from the
point of view of text size, especially in warehouse-scale computers
where icache is extremely contended.

add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-56 (-56)
Function                 old     new   delta
enqueue_task_fair        936     880     -56
Total: Before=64899, After=64843, chg -0.09%

>
> Signed-off-by: WangJinchao <wangjinchao@xfusion.com>
> ---
>  kernel/sched/fair.c | 31 ++++++++-----------------------
>  1 file changed, 8 insertions(+), 23 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d7a3c63a2171..e1373bfd4f2e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6681,30 +6681,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
>
>  	for_each_sched_entity(se) {
> -		if (se->on_rq)
> -			break;
>  		cfs_rq = cfs_rq_of(se);
> -		enqueue_entity(cfs_rq, se, flags);
> -
> -		cfs_rq->h_nr_running++;
> -		cfs_rq->idle_h_nr_running += idle_h_nr_running;
> -
> -		if (cfs_rq_is_idle(cfs_rq))
> -			idle_h_nr_running = 1;
> -
> -		/* end evaluation on encountering a throttled cfs_rq */
> -		if (cfs_rq_throttled(cfs_rq))
> -			goto enqueue_throttle;
> -
> -		flags = ENQUEUE_WAKEUP;
> -	}
> -
> -	for_each_sched_entity(se) {
> -		cfs_rq = cfs_rq_of(se);
> -
> -		update_load_avg(cfs_rq, se, UPDATE_TG);
> -		se_update_runnable(se);
> -		update_cfs_group(se);
> +		if (se->on_rq) {
> +			update_load_avg(cfs_rq, se, UPDATE_TG);
> +			se_update_runnable(se);
> +			update_cfs_group(se);
> +		} else {
> +			enqueue_entity(cfs_rq, se, flags);
> +			flags = ENQUEUE_WAKEUP;
> +		}
>
>  		cfs_rq->h_nr_running++;
>  		cfs_rq->idle_h_nr_running += idle_h_nr_running;

I have no strong opinion about this 'cleanup', but the same pattern can
also be found in dequeue_task_fair() and I think it would be better to
get them synchronized.

Thanks,
	Abel
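
Abel's point is that dequeue_task_fair() has the same two-loop shape and could get the same treatment. Continuing the placeholder sketch above (same hypothetical struct entity and helpers, not the real kernel code, which also handles throttling and idle accounting omitted here), a merged dequeue side might look roughly like this:

```c
static void do_dequeue(struct entity *e) { e->on_rq = false; } /* stand-in for the dequeue work */

/*
 * Rough mirror of the enqueue sketch: walk the hierarchy once, dequeue
 * until a level still has other runnable entities, then only refresh
 * statistics for the remaining ancestors, doing the shared bookkeeping
 * exactly once per level.
 */
static void dequeue_hierarchy(struct entity *e)
{
	bool keep_queued = false;    /* set once a level keeps other runnable entities */

	for (; e; e = e->parent) {
		if (keep_queued) {
			do_update(e);                /* ancestors stay queued: stats only */
		} else {
			do_dequeue(e);               /* remove from this level */
			if (e->h_nr_running > 1)     /* others remain here, so... */
				keep_queued = true;  /* ...ancestors must stay queued */
		}

		e->h_nr_running--;                   /* shared bookkeeping, done once */
	}
}
```

Whether that churn is worth it is exactly what the next reply in the thread questions.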
On Wed, 13 Dec 2023 at 09:19, Abel Wu <wuyun.abel@bytedance.com> wrote:
>
> Hi Jinchao,
>
> On 12/13/23 3:12 PM, WangJinchao Wrote:
> > The code below is duplicated in two for loops and needs to be
> > consolidated
>
> It doesn't need to, but it can actually bring some benefit from the
> point of view of text size, especially in warehouse-scale computers
> where icache is extremely contended.
>
> add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-56 (-56)
> Function                 old     new   delta
> enqueue_task_fair        936     880     -56
> Total: Before=64899, After=64843, chg -0.09%
>
> >
> > Signed-off-by: WangJinchao <wangjinchao@xfusion.com>
> > ---
> >  kernel/sched/fair.c | 31 ++++++++-----------------------
> >  1 file changed, 8 insertions(+), 23 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index d7a3c63a2171..e1373bfd4f2e 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6681,30 +6681,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >  		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
> >
> >  	for_each_sched_entity(se) {
> > -		if (se->on_rq)
> > -			break;
> >  		cfs_rq = cfs_rq_of(se);
> > -		enqueue_entity(cfs_rq, se, flags);
> > -
> > -		cfs_rq->h_nr_running++;
> > -		cfs_rq->idle_h_nr_running += idle_h_nr_running;
> > -
> > -		if (cfs_rq_is_idle(cfs_rq))
> > -			idle_h_nr_running = 1;
> > -
> > -		/* end evaluation on encountering a throttled cfs_rq */
> > -		if (cfs_rq_throttled(cfs_rq))
> > -			goto enqueue_throttle;
> > -
> > -		flags = ENQUEUE_WAKEUP;
> > -	}
> > -
> > -	for_each_sched_entity(se) {
> > -		cfs_rq = cfs_rq_of(se);
> > -
> > -		update_load_avg(cfs_rq, se, UPDATE_TG);
> > -		se_update_runnable(se);
> > -		update_cfs_group(se);
> > +		if (se->on_rq) {
> > +			update_load_avg(cfs_rq, se, UPDATE_TG);
> > +			se_update_runnable(se);
> > +			update_cfs_group(se);
> > +		} else {
> > +			enqueue_entity(cfs_rq, se, flags);
> > +			flags = ENQUEUE_WAKEUP;
> > +		}
> >
> >  		cfs_rq->h_nr_running++;
> >  		cfs_rq->idle_h_nr_running += idle_h_nr_running;
>
> I have no strong opinion about this 'cleanup', but the same pattern can
> also be found in dequeue_task_fair() and I think it would be better to
> get them synchronized.

I agree, I don't see any benefit from this change.

>
> Thanks,
> 	Abel
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d7a3c63a2171..e1373bfd4f2e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6681,30 +6681,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
 
 	for_each_sched_entity(se) {
-		if (se->on_rq)
-			break;
 		cfs_rq = cfs_rq_of(se);
-		enqueue_entity(cfs_rq, se, flags);
-
-		cfs_rq->h_nr_running++;
-		cfs_rq->idle_h_nr_running += idle_h_nr_running;
-
-		if (cfs_rq_is_idle(cfs_rq))
-			idle_h_nr_running = 1;
-
-		/* end evaluation on encountering a throttled cfs_rq */
-		if (cfs_rq_throttled(cfs_rq))
-			goto enqueue_throttle;
-
-		flags = ENQUEUE_WAKEUP;
-	}
-
-	for_each_sched_entity(se) {
-		cfs_rq = cfs_rq_of(se);
-
-		update_load_avg(cfs_rq, se, UPDATE_TG);
-		se_update_runnable(se);
-		update_cfs_group(se);
+		if (se->on_rq) {
+			update_load_avg(cfs_rq, se, UPDATE_TG);
+			se_update_runnable(se);
+			update_cfs_group(se);
+		} else {
+			enqueue_entity(cfs_rq, se, flags);
+			flags = ENQUEUE_WAKEUP;
+		}
 
 		cfs_rq->h_nr_running++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;