[tip:,sched/core] sched/fair: remove util_est boosting

Message ID 169079878708.28540.11161051369114712527.tip-bot2@tip-bot2
State New
Series [tip:,sched/core] sched/fair: remove util_est boosting

Commit Message

tip-bot2 for Thomas Gleixner July 31, 2023, 10:19 a.m. UTC
  The following commit has been merged into the sched/core branch of tip:

Commit-ID:     c2e164ac33f75e0acb93004960c73bd9166d3d35
Gitweb:        https://git.kernel.org/tip/c2e164ac33f75e0acb93004960c73bd9166d3d35
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Thu, 06 Jul 2023 15:51:44 +02:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Wed, 26 Jul 2023 12:28:50 +02:00

sched/fair: remove util_est boosting

There is no need to use runnable_avg when estimating util_est, and doing so
even generates wrong behavior, because runnable_avg includes blocked tasks
whereas util_est does not. This can lead to accounting the waking task p
twice: once via its blocked contribution still present in runnable_avg, and
again when adding its util_est.

The CPU's runnable_avg is already used when computing util_avg, which is then
compared with util_est.
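
For intuition, here is a toy userspace sketch (not kernel code) of the
cpu_util() arithmetic for a waking task p; all numbers and variable values
are made-up assumptions, only the max()/addition structure mirrors the
kernel function:

	/*
	 * Toy model of the pre-/post-fix cpu_util() estimate for a waking task p.
	 * Not kernel code; all numbers are invented for illustration.
	 */
	#include <stdio.h>

	#define max(a, b) ((a) > (b) ? (a) : (b))

	int main(void)
	{
		unsigned long util_avg = 100;	/* cfs_rq->avg.util_avg */
		unsigned long util_est = 100;	/* cfs_rq->avg.util_est.enqueued */
		unsigned long runnable = 400;	/* runnable_avg, still holds p's blocked part */
		unsigned long p_est    = 150;	/* _task_util_est(p) of the waking task */

		/* Boosting util_avg with runnable_avg is kept by this patch. */
		unsigned long util = max(util_avg, runnable);

		/* Before the fix: util_est is boosted with runnable_avg, which already
		 * contains p, and then p's util_est is added once more at wake-up. */
		unsigned long before = max(util, max(util_est, runnable) + p_est);

		/* After the fix: p only contributes through its own util_est. */
		unsigned long after = max(util, util_est + p_est);

		printf("estimate with boost:    %lu (p counted twice)\n", before);
		printf("estimate without boost: %lu\n", after);
		return 0;
	}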

In some situations, feec() will then not select prev_cpu but another CPU in
the same performance domain because of a higher max_util (see the sketch
below).
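
A similarly hedged sketch of that feec()-level consequence: with the boost,
placing p back on prev_cpu (whose runnable_avg still carries p's blocked
contribution) can report a higher max_util than placing it on a sibling CPU
in the same performance domain, flipping the choice. The helper and all
numbers below are illustrative assumptions, not kernel code:

	/* Toy comparison of the cpu_util() estimates feec() would see for two
	 * candidate CPUs in the same performance domain. Not kernel code. */
	#include <stdio.h>

	#define max(a, b) ((a) > (b) ? (a) : (b))

	/* Mirrors the max()/addition structure of cpu_util() for a destination CPU. */
	static unsigned long est_util(unsigned long util_avg, unsigned long util_est,
				      unsigned long runnable, unsigned long p_est,
				      int boost)
	{
		unsigned long util = max(util_avg, runnable);
		unsigned long est = util_est;

		if (boost)
			est = max(est, runnable);	/* removed by this patch */

		return max(util, est + p_est);
	}

	int main(void)
	{
		unsigned long p_est = 150;

		/* prev_cpu: runnable_avg (400) still includes p's blocked contribution;
		 * the other CPU sits in the same performance domain. */
		printf("with boost:    prev_cpu=%lu other=%lu\n",
		       est_util(100, 100, 400, p_est, 1), est_util(300, 300, 300, p_est, 1));
		printf("without boost: prev_cpu=%lu other=%lu\n",
		       est_util(100, 100, 400, p_est, 0), est_util(300, 300, 300, p_est, 0));
		/* With the boost prev_cpu looks busier (550 vs 450) and loses;
		 * without it (400 vs 450) prev_cpu is correctly preferred. */
		return 0;
	}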

Fixes: 7d0583cf9ec7 ("sched/fair, cpufreq: Introduce 'runnable boosting'")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Link: https://lore.kernel.org/r/20230706135144.324311-1-vincent.guittot@linaro.org
---
 kernel/sched/fair.c | 3 ---
 1 file changed, 3 deletions(-)
  

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d3df5b1..f55b0a7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7320,9 +7320,6 @@  cpu_util(int cpu, struct task_struct *p, int dst_cpu, int boost)
 
 		util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
 
-		if (boost)
-			util_est = max(util_est, runnable);
-
 		/*
 		 * During wake-up @p isn't enqueued yet and doesn't contribute
 		 * to any cpu_rq(cpu)->cfs.avg.util_est.enqueued.