[v2,2/2] sched/core: Adjusting the order of scanning CPU

Message ID 20221026064300.78869-3-jiahao.os@bytedance.com
State New
Series Clean up the process of scanning the CPU for some functions

Commit Message

Hao Jia Oct. 26, 2022, 6:43 a.m. UTC
  When select_idle_capacity() starts scanning for an idle CPU, it starts
with the target CPU, which has already been checked in
select_idle_sibling(). So we start checking from the next CPU and try
the target CPU at the end. Similarly for task_numa_assign(), we have
just checked numa_migrate_on of dst_cpu, so we start from the next CPU.
The same applies to steal_cookie_task(): the loop always skips the
current CPU (i == cpu), so we start directly from the next one.

Signed-off-by: Hao Jia <jiahao.os@bytedance.com>
---
 kernel/sched/core.c | 2 +-
 kernel/sched/fair.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
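
For illustration only, here is a minimal userspace sketch (not kernel code) of
the change in scan order. The 8-CPU range and the scan_wrap() helper are
assumptions made purely to mimic the visiting order of for_each_cpu_wrap():

	#include <stdio.h>

	#define NR_CPUS 8

	/*
	 * Toy stand-in for for_each_cpu_wrap(): visit every CPU once,
	 * starting at @start and wrapping around the end of the range.
	 */
	static void scan_wrap(int start, const char *label)
	{
		int i;

		printf("%s:", label);
		for (i = 0; i < NR_CPUS; i++)
			printf(" %d", (start + i) % NR_CPUS);
		printf("\n");
	}

	int main(void)
	{
		int target = 2;	/* CPU already checked by the caller */

		scan_wrap(target, "old, start at target    ");	/* 2 3 4 5 6 7 0 1 */
		scan_wrap(target + 1, "new, start at target + 1");	/* 3 4 5 6 7 0 1 2 */

		return 0;
	}

With the patch, the already-checked CPU is still visited, but only on the last
iteration of the wrap, so the same set of CPUs is covered and only the order
changes.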
  

Comments

Mel Gorman Nov. 14, 2022, 12:15 p.m. UTC | #1
On Wed, Oct 26, 2022 at 02:43:00PM +0800, Hao Jia wrote:
> When select_idle_capacity() starts scanning for an idle CPU, it starts
> with target CPU that has already been checked in select_idle_sibling().
> So we start checking from the next CPU and try the target CPU at the end.
> Similarly for task_numa_assign(), we have just checked numa_migrate_on
> of dst_cpu, so start from the next CPU. This also works for
> steal_cookie_task(), the first scan must fail and start directly
> from the next one.
> 
> Signed-off-by: Hao Jia <jiahao.os@bytedance.com>

Test results in general look ok so

Acked-by: Mel Gorman <mgorman@techsingularity.net>
  
Hao Jia Nov. 16, 2022, 8:48 a.m. UTC | #2
On 2022/11/14 Mel Gorman wrote:
> On Wed, Oct 26, 2022 at 02:43:00PM +0800, Hao Jia wrote:
>> When select_idle_capacity() starts scanning for an idle CPU, it starts
>> with target CPU that has already been checked in select_idle_sibling().
>> So we start checking from the next CPU and try the target CPU at the end.
>> Similarly for task_numa_assign(), we have just checked numa_migrate_on
>> of dst_cpu, so start from the next CPU. This also works for
>> steal_cookie_task(), the first scan must fail and start directly
>> from the next one.
>>
>> Signed-off-by: Hao Jia <jiahao.os@bytedance.com>
> 
> Test results in general look ok so
> 
> Acked-by: Mel Gorman <mgorman@techsingularity.net>
> 

Thanks for your review and feedback.

Thanks,
Hao
  
Hao Jia Nov. 30, 2022, 6:35 a.m. UTC | #3
On 2022/11/14 Mel Gorman wrote:
> On Wed, Oct 26, 2022 at 02:43:00PM +0800, Hao Jia wrote:
>> When select_idle_capacity() starts scanning for an idle CPU, it starts
>> with target CPU that has already been checked in select_idle_sibling().
>> So we start checking from the next CPU and try the target CPU at the end.
>> Similarly for task_numa_assign(), we have just checked numa_migrate_on
>> of dst_cpu, so start from the next CPU. This also works for
>> steal_cookie_task(), the first scan must fail and start directly
>> from the next one.
>>
>> Signed-off-by: Hao Jia <jiahao.os@bytedance.com>
> 
> Test results in general look ok so
> 
> Acked-by: Mel Gorman <mgorman@techsingularity.net>
> 

Hi Peter,
These two patches have been acked by Mel Gorman.
If you have time, please review them.

Thanks,
Hao
  
Vincent Guittot Nov. 30, 2022, 7:59 a.m. UTC | #4
On Wed, 30 Nov 2022 at 07:35, Hao Jia <jiahao.os@bytedance.com> wrote:
>
>
>
> On 2022/11/14 Mel Gorman wrote:
> > On Wed, Oct 26, 2022 at 02:43:00PM +0800, Hao Jia wrote:
> >> When select_idle_capacity() starts scanning for an idle CPU, it starts
> >> with target CPU that has already been checked in select_idle_sibling().
> >> So we start checking from the next CPU and try the target CPU at the end.
> >> Similarly for task_numa_assign(), we have just checked numa_migrate_on
> >> of dst_cpu, so start from the next CPU. This also works for
> >> steal_cookie_task(), the first scan must fail and start directly
> >> from the next one.
> >>
> >> Signed-off-by: Hao Jia <jiahao.os@bytedance.com>
> >
> > Test results in general look ok so
> >
> > Acked-by: Mel Gorman <mgorman@techsingularity.net>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

> >
>
> Hi, Peter
> These two patches have been Acked-by Mel Gorman.
> If you have time, please review these two patches.
>
> Thanks,
> Hao
  

Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cb2aa2b54c7a..5c3c539e1712 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6154,7 +6154,7 @@  static bool steal_cookie_task(int cpu, struct sched_domain *sd)
 {
 	int i;
 
-	for_each_cpu_wrap(i, sched_domain_span(sd), cpu) {
+	for_each_cpu_wrap(i, sched_domain_span(sd), cpu + 1) {
 		if (i == cpu)
 			continue;
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index dfcb620bfe50..ba91d4478260 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1824,7 +1824,7 @@  static void task_numa_assign(struct task_numa_env *env,
 		int start = env->dst_cpu;
 
 		/* Find alternative idle CPU. */
-		for_each_cpu_wrap(cpu, cpumask_of_node(env->dst_nid), start) {
+		for_each_cpu_wrap(cpu, cpumask_of_node(env->dst_nid), start + 1) {
 			if (cpu == env->best_cpu || !idle_cpu(cpu) ||
 			    !cpumask_test_cpu(cpu, env->p->cpus_ptr)) {
 				continue;
@@ -6663,7 +6663,7 @@  select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 
 	task_util = uclamp_task_util(p);
 
-	for_each_cpu_wrap(cpu, cpus, target) {
+	for_each_cpu_wrap(cpu, cpus, target + 1) {
 		unsigned long cpu_cap = capacity_of(cpu);
 
 		if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu))
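
As a concrete example (CPU numbers chosen only for illustration): if
sched_domain_span(sd) covers CPUs 0-3 and cpu == 1, the old loop in
steal_cookie_task() visits 1, 2, 3, 0 and wastes its first iteration on the
i == cpu check, while the new loop visits 2, 3, 0, 1 and only hits that check
on the final iteration.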