Message ID | 20230719130527.8074-1-xuewen.yan@unisoc.com |
---|---|
State | New |
Headers |
Series | cpufreq: schedutil: next_freq need update when cpufreq_limits changed |
Commit Message
Xuewen Yan
July 19, 2023, 1:05 p.m. UTC
When cpufreq's policy is single, there is a scenario that will
cause sg_policy's next_freq to be unable to update.

When the cpu's util is always max, the cpufreq will be max, and then,
if we change the policy's scaling_max_freq to a lower freq, the
sg_policy's next_freq needs to change to that lower freq. However,
because the CPU is busy (sugov_cpu_is_busy()), next_freq keeps the
max_freq.

For example, cpu7 is the only CPU in its policy:

unisoc:/sys/devices/system/cpu/cpufreq/policy7 # while true;do done&
[1] 4737
unisoc:/sys/devices/system/cpu/cpufreq/policy7 # taskset -p 80 4737
pid 4737's current affinity mask: ff
pid 4737's new affinity mask: 80
unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
2301000
unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_cur_freq
2301000
unisoc:/sys/devices/system/cpu/cpufreq/policy7 # echo 2171000 > scaling_max_freq
unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
2171000

At this time, the sg_policy's next_freq stays at 2301000.

To prevent this case, also check the need_freq_update flag before
restoring the cached frequency.

Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
Co-developed-by: Guohua Yan <guohua.yan@unisoc.com>
Signed-off-by: Guohua Yan <guohua.yan@unisoc.com>
---
 kernel/sched/cpufreq_schedutil.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
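For reference, the hunk behind the diffstat above (reproduced from the full patch as quoted in the review thread below) is:

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 4492608b7d7f..458d359f5991 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -350,7 +350,8 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
 	 * Except when the rq is capped by uclamp_max.
 	 */
 	if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
-	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
+	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq &&
+	    !sg_policy->need_freq_update) {
 		next_f = sg_policy->next_freq;
 
 		/* Restore cached freq as next_freq has changed */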
Comments
On 07/19/23 21:05, Xuewen Yan wrote:
> When cpufreq's policy is single, there is a scenario that will
> cause sg_policy's next_freq to be unable to update.
>
> [...]
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 4492608b7d7f..458d359f5991 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -350,7 +350,8 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
>  	 * Except when the rq is capped by uclamp_max.
>  	 */
>  	if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
> -	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
> +	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq &&
> +	    !sg_policy->need_freq_update) {

What about sugov_update_single_perf()? It seems to have the same problem, no?

LGTM otherwise.


Cheers

--
Qais Yousef

> 		next_f = sg_policy->next_freq;
>
> 		/* Restore cached freq as next_freq has changed */
> --
> 2.25.1
>
On Sat, Jul 22, 2023 at 7:02 AM Qais Yousef <qyousef@layalina.io> wrote:
>
> On 07/19/23 21:05, Xuewen Yan wrote:
> > When cpufreq's policy is single, there is a scenario that will
> > cause sg_policy's next_freq to be unable to update.
> >
> > [...]
> >
> >  	if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
> > -	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
> > +	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq &&
> > +	    !sg_policy->need_freq_update) {
>
> What about sugov_update_single_perf()? It seems to have the same problem, no?

There is no problem in sugov_update_single_perf(), because next_freq
is updated by the driver there; next_freq may not be used at all when
sugov_update_single_perf() is in use.

But for last_freq_update_time, I think there are some problems when
using sugov_update_single_perf():
Right now there is no condition guarding the update of
last_freq_update_time, which means last_freq_update_time is always
updated in sugov_update_single_perf(). And sugov_should_update_freq()
checks freq_update_delay_ns against it. As a result, if we use
sugov_update_single_perf(), the cpu frequency would only be updated
periodically according to freq_update_delay_ns.
Maybe we should check cpufreq_driver_adjust_perf()'s return value: if
the frequency was not actually updated, last_freq_update_time does not
have to be updated either.

Just like:
---
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 458d359f5991..10f18b054f01 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -381,6 +381,7 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
 	struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
 	unsigned long prev_util = sg_cpu->util;
 	unsigned long max_cap;
+	bool freq_updated;
 
 	/*
 	 * Fall back to the "frequency" path if frequency invariance is not
@@ -407,10 +408,11 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
 	    sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
 		sg_cpu->util = prev_util;
 
-	cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
+	freq_updated = cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
 				   map_util_perf(sg_cpu->util), max_cap);
 
-	sg_cpu->sg_policy->last_freq_update_time = time;
+	if (freq_updated)
+		sg_cpu->sg_policy->last_freq_update_time = time;
 }


BR
Thanks!

---
xuewen

>
> LGTM otherwise.
>
>
> Cheers
>
> --
> Qais Yousef
>
> > 		next_f = sg_policy->next_freq;
> >
> > 		/* Restore cached freq as next_freq has changed */
> > --
> > 2.25.1
> >
On 7/24/23 05:36, Xuewen Yan wrote:
> On Sat, Jul 22, 2023 at 7:02 AM Qais Yousef <qyousef@layalina.io> wrote:
>>
>> [...]
>>
>> What about sugov_update_single_perf()? It seems to have the same problem, no?
>
> There is no problem in sugov_update_single_perf(), because next_freq
> is updated by the driver there [...]
>
> Maybe we should check cpufreq_driver_adjust_perf()'s return value: if
> the frequency was not actually updated, last_freq_update_time does not
> have to be updated either.
>
> Just like:
> ---
> [...]
> -	cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
> +	freq_updated = cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
> 				   map_util_perf(sg_cpu->util), max_cap);
>
> -	sg_cpu->sg_policy->last_freq_update_time = time;
> +	if (freq_updated)
> +		sg_cpu->sg_policy->last_freq_update_time = time;
> }

Hello Xuewen,
FWIW, the patch and explanation for sugov_update_single_perf() seem sensible to
me. Just a comment about cpufreq_driver_adjust_perf() and
(struct cpufreq_driver)->adjust_perf(): wouldn't their prototypes need to be
updated (i.e. not return void) to make the change suggested above?

Regards,
Pierre

> BR
> Thanks!
>
> ---
> xuewen
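[Editor's note: a minimal sketch of the prototype change Pierre is alluding to could look like the following. This is only an illustration, assuming the callback and helper currently take (cpu, min_perf, target_perf, capacity) and return void; it is not part of the posted patch, and every driver implementing ->adjust_perf() would have to be converted as well.]

/* include/linux/cpufreq.h (sketch): let the callback report whether it acted */
struct cpufreq_driver {
	...
	bool	(*adjust_perf)(unsigned int cpu, unsigned long min_perf,
			       unsigned long target_perf, unsigned long capacity);
	...
};

/* drivers/cpufreq/cpufreq.c (sketch): propagate the driver's answer to schedutil */
bool cpufreq_driver_adjust_perf(unsigned int cpu, unsigned long min_perf,
				unsigned long target_perf, unsigned long capacity)
{
	return cpufreq_driver->adjust_perf(cpu, min_perf, target_perf, capacity);
}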
On 07/24/23 11:36, Xuewen Yan wrote:
> On Sat, Jul 22, 2023 at 7:02 AM Qais Yousef <qyousef@layalina.io> wrote:
> >
> > [...]
> >
> > What about sugov_update_single_perf()? It seems to have the same problem, no?
>
> There is no problem in sugov_update_single_perf(), because next_freq
> is updated by the driver there; next_freq may not be used at all when
> sugov_update_single_perf() is in use.

Ah I see; we just use prev_util but the request will go through and the driver
should observe the new limit regardless of what util value we pass to it. Got
ya.

>
> But for last_freq_update_time, I think there are some problems when
> using sugov_update_single_perf():
> [...]
> Maybe we should check cpufreq_driver_adjust_perf()'s return value: if
> the frequency was not actually updated, last_freq_update_time does not
> have to be updated either.
>
> Just like:
> ---
> [...]
> -	sg_cpu->sg_policy->last_freq_update_time = time;
> +	if (freq_updated)
> +		sg_cpu->sg_policy->last_freq_update_time = time;
> }

Sounds reasonable in principle, but it could lead to overhead; for example, when
the system is busy and maxed out, last_freq_update_time will never be
updated and we will end up continuously calling into the driver to change the
frequency without any rate limit AFAICS. That might not be an acceptable
overhead, I don't know. Logically these are wasted cycles preventing the tasks
from doing useful work. I think we need to look at such corner cases and treat
them appropriately so that the driver is not called, if we go with this approach.


Cheers

--
Qais Yousef

> [...]
Hi Pierre,

On Mon, Jul 24, 2023 at 11:33 PM Pierre Gondois <pierre.gondois@arm.com> wrote:
>
> [...]
>
> Hello Xuewen,
> FWIW, the patch and explanation for sugov_update_single_perf() seem sensible to
> me. Just a comment about cpufreq_driver_adjust_perf() and
> (struct cpufreq_driver)->adjust_perf(): wouldn't their prototypes need to be
> updated (i.e. not return void) to make the change suggested above?

Yes, their return type would have to be changed from void to bool or int.
With this patch I just wanted to raise the question for everyone to discuss;
if it turns out to be a real problem, an official patch will have to follow
later.

BR
xuewen

> Regards,
> Pierre
On Mon, Jul 24, 2023 at 11:53 PM Qais Yousef <qyousef@layalina.io> wrote:
>
> [...]
>
> Sounds reasonable in principle, but it could lead to overhead; for example, when
> the system is busy and maxed out, last_freq_update_time will never be
> updated and we will end up continuously calling into the driver to change the
> frequency without any rate limit AFAICS. [...] I think we need to look at such
> corner cases and treat them appropriately so that the driver is not called, if
> we go with this approach.

Hi Qais,

I can understand what you mean, but I don't think this is a problem.
For the driver, deciding whether to update the frequency is probably not
the main cost; the main cost is likely the hardware's frequency switch
itself. If the hardware does not need to change frequency, computing the
target frequency takes very little time.
If calling into the driver that frequently is unacceptable, could prev_util
be used instead?

---
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 4492608b7d7f..3febfd032eee 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -379,7 +379,9 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
 {
 	struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
 	unsigned long prev_util = sg_cpu->util;
+	unsigned long prev_bw_dl = sg_cpu->bw_dl;
 	unsigned long max_cap;
+	bool freq_updated;
 
 	/*
 	 * Fall back to the "frequency" path if frequency invariance is not
@@ -406,10 +408,14 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
 	    sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
 		sg_cpu->util = prev_util;
 
-	cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
+	if (prev_util == sg_cpu->util && prev_bw_dl == sg_cpu->bw_dl)
+		return;
+
+	freq_updated = cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
 				   map_util_perf(sg_cpu->util), max_cap);
 
-	sg_cpu->sg_policy->last_freq_update_time = time;
+	if (freq_updated)
+		sg_cpu->sg_policy->last_freq_update_time = time;
 }
 
 static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)


BR
---
xuewen

>
> Cheers
>
> --
> Qais Yousef
On Tue, Jul 25, 2023 at 4:21 AM Xuewen Yan <xuewen.yan94@gmail.com> wrote:
>
> On Mon, Jul 24, 2023 at 11:53 PM Qais Yousef <qyousef@layalina.io> wrote:
> >
> > [...]
> >
> > Sounds reasonable in principle, but it could lead to overhead [...] I think we
> > need to look at such corner cases and treat them appropriately so that the
> > driver is not called, if we go with this approach.
>
> Hi Qais,
>
> I can understand what you mean, but I don't think this is a problem.
> For the driver, deciding whether to update the frequency is probably not
> the main cost; the main cost is likely the hardware's frequency switch
> itself. If the hardware does not need to change frequency, computing the
> target frequency takes very little time.
> If calling into the driver that frequently is unacceptable, could prev_util
> be used instead?

No, it's better to pass the data to the driver directly and let it
sort that out in this particular case.

> [...]
Hi Rafael,

On Tue, Jul 25, 2023 at 4:51 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Tue, Jul 25, 2023 at 4:21 AM Xuewen Yan <xuewen.yan94@gmail.com> wrote:
> >
> > [...]
> >
> > If calling into the driver that frequently is unacceptable, could prev_util
> > be used instead?
>
> No, it's better to pass the data to the driver directly and let it
> sort that out in this particular case.

Yes, I know; we should not interfere with the driver's behavior.

By the way, what do you think of the patch fixing sugov_update_single_freq()?

Thanks!
---
xuewen

> [...]
On Tue, Jul 25, 2023 at 2:09 PM Xuewen Yan <xuewen.yan94@gmail.com> wrote:
>
> Hi Rafael
>
> On Tue, Jul 25, 2023 at 4:51 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Tue, Jul 25, 2023 at 4:21 AM Xuewen Yan <xuewen.yan94@gmail.com> wrote:
> > >
> > > On Mon, Jul 24, 2023 at 11:53 PM Qais Yousef <qyousef@layalina.io> wrote:
> > > >
> > > > On 07/24/23 11:36, Xuewen Yan wrote:
> > > > > On Sat, Jul 22, 2023 at 7:02 AM Qais Yousef <qyousef@layalina.io> wrote:
> > > > > >
> > > > > > On 07/19/23 21:05, Xuewen Yan wrote:
> > > > > > > When cpufreq's policy is single, there is a scenario that will
> > > > > > > cause sg_policy's next_freq to be unable to update.
> > > > > > >
> > > > > > > When the cpu's util is always max, the cpufreq will be max,
> > > > > > > and then if we change the policy's scaling_max_freq to be a
> > > > > > > lower freq, indeed, the sg_policy's next_freq need change to
> > > > > > > be the lower freq, however, because the cpu_is_busy, the next_freq
> > > > > > > would keep the max_freq.
> > > > > > >
> > > > > > > For example:
> > > > > > > The cpu7 is single cpu:
> > > > > > >
> > > > > > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # while true;do done&
> > > > > > > [1] 4737
> > > > > > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # taskset -p 80 4737
> > > > > > > pid 4737's current affinity mask: ff
> > > > > > > pid 4737's new affinity mask: 80
> > > > > > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
> > > > > > > 2301000
> > > > > > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_cur_freq
> > > > > > > 2301000
> > > > > > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # echo 2171000 > scaling_max_freq
> > > > > > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
> > > > > > > 2171000
> > > > > > >
> > > > > > > At this time, the sg_policy's next_freq would keep 2301000.
> > > > > > >
> > > > > > > To prevent the case happen, add the judgment of the need_freq_update flag.
> > > > > > >
> > > > > > > Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> > > > > > > Co-developed-by: Guohua Yan <guohua.yan@unisoc.com>
> > > > > > > Signed-off-by: Guohua Yan <guohua.yan@unisoc.com>
> > > > > > > ---
> > > > > > >  kernel/sched/cpufreq_schedutil.c | 3 ++-
> > > > > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > > > > >
> > > > > > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > > > > > > index 4492608b7d7f..458d359f5991 100644
> > > > > > > --- a/kernel/sched/cpufreq_schedutil.c
> > > > > > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > > > > > @@ -350,7 +350,8 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
> > > > > > >  	 * Except when the rq is capped by uclamp_max.
> > > > > > >  	 */
> > > > > > >  	if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
> > > > > > > -	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
> > > > > > > +	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq &&
> > > > > > > +	    !sg_policy->need_freq_update) {
> > > > > >
> > > > > > What about sugov_update_single_perf()? It seems to have the same problem, no?
> > > > >
> > > > > There is no problem in sugov_update_single_perf, because the next_freq
> > > > > is updated by drivers, maybe the next_freq is not used when using
> > > > > sugov_update_single_perf..
> > > >
> > > > Ah I see; we just use prev_util but the request will go through and the driver
> > > > should observe the new limit regardless of what util value we pass to it. Got
> > > > ya.
> > > >
> > > > >
> > > > > But for the last_freq_update_time, I think there are some problems
> > > > > when using sugov_update_single_perf:
> > > > > Now, there is no judgment condition for the update of the
> > > > > last_freq_update_time. That means the last_freq_update_time is always
> > > > > updated in sugov_update_single_perf.
> > > > > And in sugov_should_update_freq: it would judge the
> > > > > freq_update_delay_ns. As a result, If we use the
> > > > > sugov_update_single_perf, the cpu frequency would only be periodically
> > > > > updated according to freq_update_delay_ns.
> > > > > Maybe we should judge the cpufreq_driver_adjust_perf's return value,
> > > > > if the freq is not updated, the last_freq_update_time also does not
> > > > > have to update.
> > > > >
> > > > > Just like:
> > > > > ---
> > > > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > > > > index 458d359f5991..10f18b054f01 100644
> > > > > --- a/kernel/sched/cpufreq_schedutil.c
> > > > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > > > @@ -381,6 +381,7 @@ static void sugov_update_single_perf(struct
> > > > > update_util_data *hook, u64 time,
> > > > >         struct sugov_cpu *sg_cpu = container_of(hook, struct
> > > > > sugov_cpu, update_util);
> > > > >         unsigned long prev_util = sg_cpu->util;
> > > > >         unsigned long max_cap;
> > > > > +       bool freq_updated;
> > > > >
> > > > >         /*
> > > > >          * Fall back to the "frequency" path if frequency invariance is not
> > > > > @@ -407,10 +408,11 @@ static void sugov_update_single_perf(struct
> > > > > update_util_data *hook, u64 time,
> > > > >             sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
> > > > >                 sg_cpu->util = prev_util;
> > > > >
> > > > > -       cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
> > > > > +       freq_updated = cpufreq_driver_adjust_perf(sg_cpu->cpu,
> > > > > map_util_perf(sg_cpu->bw_dl),
> > > > >                                    map_util_perf(sg_cpu->util), max_cap);
> > > > >
> > > > > -       sg_cpu->sg_policy->last_freq_update_time = time;
> > > > > +       if (freq_updated)
> > > > > +               sg_cpu->sg_policy->last_freq_update_time = time;
> > > > > }
> > > >
> > > > Sound reasonable in principle, but it could lead to overhead; for example when
> > > > the system is busy and maxed out, the last_freq_update_time will never be
> > > > updated and will end up continuously calling to the driver to change frequency
> > > > without any rate limit AFAICS. Which might not be an acceptable overhead,
> > > > I don't know. Logically this is wasted cycles preventing the tasks from doing
> > > > useful work. I think we need to look at such corner cases and treat them
> > > > appropriately to not call the driver if we go with this approach.
> > >
> > > Hi Qais,
> > >
> > > I can understand what you mean, but I don't think this is a problem.
> > > For the driver, the calculation of whether to update the frequency may
> > > not be the main time-consuming, but the main time-consuming may be the
> > > frequency conversion time of the hardware. If the hardware does not
> > > need frequency conversion, the operation of calculating the frequency
> > > takes a very short time.
> > > If the operation of calling the driver frequently is unacceptable, can
> > > prev_util be used?
> >
> > No, it's better to pass the data to the driver directly and let it
> > sort that out in this particular case.
>
> Yes, I know. we should not interfere with the driver's behavior.
>
> By the way, What do you think of the patch fixing the sugov_update_single_freq?

IIUC, you have found a genuine issue and the patch should address it.
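For readers following the thread without the kernel sources at hand, the toy program below models only the decision logic being debated: the busy-CPU shortcut in sugov_update_single_freq() and the need_freq_update flag. It is a made-up userspace sketch, not kernel code; toy_policy, toy_get_next_freq and toy_update_single_freq are hypothetical names, and the real state lives in struct sugov_policy and struct sugov_cpu.

```c
#include <stdbool.h>
#include <stdio.h>

/* Minimal stand-ins for the pieces of governor state the logic needs. */
struct toy_policy {
	unsigned int next_freq;		/* last frequency requested */
	unsigned int max_freq;		/* current scaling_max_freq */
	bool need_freq_update;		/* set when the policy limits change */
};

/* With util pinned at max, the governor keeps asking for the highest
 * frequency the current limits allow. */
static unsigned int toy_get_next_freq(struct toy_policy *p)
{
	unsigned int raw = 2301000;	/* fully loaded CPU */

	return raw > p->max_freq ? p->max_freq : raw;
}

/* Models the busy-CPU shortcut in sugov_update_single_freq(). */
static void toy_update_single_freq(struct toy_policy *p, bool cpu_is_busy,
				   bool with_fix)
{
	unsigned int next_f = toy_get_next_freq(p);

	/*
	 * Busy CPU: do not lower the frequency.  Without the extra
	 * need_freq_update test this also discards the new, lower
	 * scaling_max_freq, so next_freq never comes down.
	 */
	if (cpu_is_busy && next_f < p->next_freq &&
	    !(with_fix && p->need_freq_update))
		next_f = p->next_freq;

	p->next_freq = next_f;
	p->need_freq_update = false;
}

int main(void)
{
	struct toy_policy p = { .next_freq = 2301000, .max_freq = 2301000 };

	/* echo 2171000 > scaling_max_freq while the CPU stays 100% busy */
	p.max_freq = 2171000;
	p.need_freq_update = true;
	toy_update_single_freq(&p, true, false);
	printf("without the patch: next_freq = %u\n", p.next_freq); /* stays 2301000 */

	p.next_freq = 2301000;
	p.need_freq_update = true;
	toy_update_single_freq(&p, true, true);
	printf("with the patch:    next_freq = %u\n", p.next_freq); /* drops to 2171000 */

	return 0;
}
```

The second run mirrors the sysfs session in the changelog: once need_freq_update is honoured, the cached next_freq is recomputed and clamped to the new 2171000 kHz limit instead of being restored to the old maximum.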
* Xuewen Yan <xuewen.yan@unisoc.com> wrote:

> When cpufreq's policy is single, there is a scenario that will
> cause sg_policy's next_freq to be unable to update.
>
> When the cpu's util is always max, the cpufreq will be max,
> and then if we change the policy's scaling_max_freq to be a
> lower freq, indeed, the sg_policy's next_freq need change to
> be the lower freq, however, because the cpu_is_busy, the next_freq
> would keep the max_freq.
>
> For example:
> The cpu7 is single cpu:
>
> unisoc:/sys/devices/system/cpu/cpufreq/policy7 # while true;do done&
> [1] 4737
> unisoc:/sys/devices/system/cpu/cpufreq/policy7 # taskset -p 80 4737
> pid 4737's current affinity mask: ff
> pid 4737's new affinity mask: 80
> unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
> 2301000
> unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_cur_freq
> 2301000
> unisoc:/sys/devices/system/cpu/cpufreq/policy7 # echo 2171000 > scaling_max_freq
> unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
> 2171000
>
> At this time, the sg_policy's next_freq would keep 2301000.
>
> To prevent the case happen, add the judgment of the need_freq_update flag.
>
> Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> Co-developed-by: Guohua Yan <guohua.yan@unisoc.com>
> Signed-off-by: Guohua Yan <guohua.yan@unisoc.com>
> ---
>  kernel/sched/cpufreq_schedutil.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 4492608b7d7f..458d359f5991 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -350,7 +350,8 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
>  	 * Except when the rq is capped by uclamp_max.
>  	 */
>  	if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
> -	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
> +	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq &&
> +	    !sg_policy->need_freq_update) {
>  		next_f = sg_policy->next_freq;
>
>  		/* Restore cached freq as next_freq has changed */

Just wondering about the status of this fix - is it pending in
some tree, or should we apply it to the scheduler tree?

Thanks,

	Ingo
On Thu, Oct 5, 2023 at 1:26 PM Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Xuewen Yan <xuewen.yan@unisoc.com> wrote:
>
> > When cpufreq's policy is single, there is a scenario that will
> > cause sg_policy's next_freq to be unable to update.
> >
> > When the cpu's util is always max, the cpufreq will be max,
> > and then if we change the policy's scaling_max_freq to be a
> > lower freq, indeed, the sg_policy's next_freq need change to
> > be the lower freq, however, because the cpu_is_busy, the next_freq
> > would keep the max_freq.
> >
> > For example:
> > The cpu7 is single cpu:
> >
> > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # while true;do done&
> > [1] 4737
> > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # taskset -p 80 4737
> > pid 4737's current affinity mask: ff
> > pid 4737's new affinity mask: 80
> > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
> > 2301000
> > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_cur_freq
> > 2301000
> > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # echo 2171000 > scaling_max_freq
> > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
> > 2171000
> >
> > At this time, the sg_policy's next_freq would keep 2301000.
> >
> > To prevent the case happen, add the judgment of the need_freq_update flag.
> >
> > Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> > Co-developed-by: Guohua Yan <guohua.yan@unisoc.com>
> > Signed-off-by: Guohua Yan <guohua.yan@unisoc.com>
> > ---
> >  kernel/sched/cpufreq_schedutil.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 4492608b7d7f..458d359f5991 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -350,7 +350,8 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
> >  	 * Except when the rq is capped by uclamp_max.
> >  	 */
> >  	if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
> > -	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
> > +	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq &&
> > +	    !sg_policy->need_freq_update) {
> >  		next_f = sg_policy->next_freq;
> >
> >  		/* Restore cached freq as next_freq has changed */
>
> Just wondering about the status of this fix - is it pending in
> some tree, or should we apply it to the scheduler tree?

I have not queued it up yet, so it can be applied to the scheduler tree.

Thanks!
* Rafael J. Wysocki <rafael@kernel.org> wrote:

> On Thu, Oct 5, 2023 at 1:26 PM Ingo Molnar <mingo@kernel.org> wrote:
> >
> >
> > * Xuewen Yan <xuewen.yan@unisoc.com> wrote:
> >
> > > When cpufreq's policy is single, there is a scenario that will
> > > cause sg_policy's next_freq to be unable to update.
> > >
> > > When the cpu's util is always max, the cpufreq will be max,
> > > and then if we change the policy's scaling_max_freq to be a
> > > lower freq, indeed, the sg_policy's next_freq need change to
> > > be the lower freq, however, because the cpu_is_busy, the next_freq
> > > would keep the max_freq.
> > >
> > > For example:
> > > The cpu7 is single cpu:
> > >
> > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # while true;do done&
> > > [1] 4737
> > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # taskset -p 80 4737
> > > pid 4737's current affinity mask: ff
> > > pid 4737's new affinity mask: 80
> > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
> > > 2301000
> > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_cur_freq
> > > 2301000
> > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # echo 2171000 > scaling_max_freq
> > > unisoc:/sys/devices/system/cpu/cpufreq/policy7 # cat scaling_max_freq
> > > 2171000
> > >
> > > At this time, the sg_policy's next_freq would keep 2301000.
> > >
> > > To prevent the case happen, add the judgment of the need_freq_update flag.
> > >
> > > Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> > > Co-developed-by: Guohua Yan <guohua.yan@unisoc.com>
> > > Signed-off-by: Guohua Yan <guohua.yan@unisoc.com>
> > > ---
> > >  kernel/sched/cpufreq_schedutil.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > > index 4492608b7d7f..458d359f5991 100644
> > > --- a/kernel/sched/cpufreq_schedutil.c
> > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > @@ -350,7 +350,8 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
> > >  	 * Except when the rq is capped by uclamp_max.
> > >  	 */
> > >  	if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
> > > -	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
> > > +	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq &&
> > > +	    !sg_policy->need_freq_update) {
> > >  		next_f = sg_policy->next_freq;
> > >
> > >  		/* Restore cached freq as next_freq has changed */
> >
> > Just wondering about the status of this fix - is it pending in
> > some tree, or should we apply it to the scheduler tree?
>
> I have not queued it up yet, so it can be applied to the scheduler tree.

Ok, I've applied it - and I've added your Acked-by.

Thanks,

	Ingo
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 4492608b7d7f..458d359f5991 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -350,7 +350,8 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
 	 * Except when the rq is capped by uclamp_max.
 	 */
 	if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
-	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
+	    sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq &&
+	    !sg_policy->need_freq_update) {
 		next_f = sg_policy->next_freq;
 
 		/* Restore cached freq as next_freq has changed */
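For context on why need_freq_update is the right signal to test here, the sketch below is a rough, abridged paraphrase of how the mainline schedutil governor propagates a limits change in kernels of this era. sugov_limits() and sugov_should_update_freq() do exist in kernel/sched/cpufreq_schedutil.c, but the bodies shown are simplified from memory (locking, fast-switch handling and flag clearing are omitted), so treat the details as assumptions to check against the tree the patch lands in.

```c
/*
 * Abridged illustration, not the verbatim kernel code.
 */

/* The cpufreq core calls this when e.g. scaling_max_freq is written. */
static void sugov_limits(struct cpufreq_policy *policy)
{
	struct sugov_policy *sg_policy = policy->governor_data;

	/* (drivers without fast switching also re-apply the limits here) */
	sg_policy->limits_changed = true;
}

/* Called from the scheduler's update hook before computing a request. */
static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
{
	if (unlikely(sg_policy->limits_changed)) {
		sg_policy->limits_changed = false;
		sg_policy->need_freq_update = true;	/* consumed by the new check */
		return true;				/* bypass the rate limit */
	}

	return time - sg_policy->last_freq_update_time >=
	       sg_policy->freq_update_delay_ns;
}
```

In other words, by the time sugov_update_single_freq() runs after a scaling_max_freq write, need_freq_update is already set; the applied one-line change simply stops the busy-CPU shortcut from restoring the stale, now-illegal next_freq in that window.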