[1/1] sched/fair: Update min_vruntime in more relaxed way

Message ID 20231116105425.84773-2-s921975628@gmail.com
State New
Series sched/fair: Update min_vruntime in more relaxed way

Commit Message

Yiwei Lin Nov. 16, 2023, 10:54 a.m. UTC
As EEVDF adopts a lag-based approach that, unlike CFS before it, does
not depend on min_vruntime, min_vruntime is now only used as an offset
to avoid overflow when evaluating avg_vruntime. Relying on the fact
that we always update_curr() before changing the cfs_rq, it seems to
make sense to only update_min_vruntime() from update_curr() and reduce
the cost.

Signed-off-by: Yiwei Lin <s921975628@gmail.com>
---
 kernel/sched/fair.c | 20 +-------------------
 1 file changed, 1 insertion(+), 19 deletions(-)
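
For context (not part of the posted diff, which only shows the removals):
the change relies on update_curr() already refreshing min_vruntime after it
advances curr->vruntime, leaving min_vruntime purely as the base that
avg_vruntime()/entity_key() subtract to keep their sums small. An abridged,
approximate sketch of that path in kernel/sched/fair.c:

static void update_curr(struct cfs_rq *cfs_rq)
{
	struct sched_entity *curr = cfs_rq->curr;
	u64 now = rq_clock_task(rq_of(cfs_rq));
	u64 delta_exec;

	if (unlikely(!curr))
		return;

	delta_exec = now - curr->exec_start;
	if (unlikely((s64)delta_exec <= 0))
		return;

	curr->exec_start = now;
	curr->sum_exec_runtime += delta_exec;

	curr->vruntime += calc_delta_fair(delta_exec, curr);
	update_deadline(cfs_rq, curr);
	/*
	 * min_vruntime is refreshed on every accounting pass, which is
	 * what would make the calls removed by this patch redundant.
	 */
	update_min_vruntime(cfs_rq);

	/* stats, cgroup and bandwidth details elided */
}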
  

Comments

Abel Wu Nov. 16, 2023, 11:16 a.m. UTC | #1
On 11/16/23 6:54 PM, Yiwei Lin Wrote:
> As EEVDF adopts a lag-based approach that, unlike CFS before it, does
> not depend on min_vruntime, min_vruntime is now only used as an offset
> to avoid overflow when evaluating avg_vruntime. Relying on the fact
> that we always update_curr() before changing the cfs_rq, it seems to
> make sense to only update_min_vruntime() from update_curr() and reduce
> the cost.
> 
> Signed-off-by: Yiwei Lin <s921975628@gmail.com>
> ---
>   kernel/sched/fair.c | 20 +-------------------
>   1 file changed, 1 insertion(+), 19 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 07f555857..5c40adfae 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3815,17 +3815,8 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
>   	enqueue_load_avg(cfs_rq, se);
>   	if (se->on_rq) {
>   		update_load_add(&cfs_rq->load, se->load.weight);
> -		if (!curr) {
> -			/*
> -			 * The entity's vruntime has been adjusted, so let's check
> -			 * whether the rq-wide min_vruntime needs updated too. Since
> -			 * the calculations above require stable min_vruntime rather
> -			 * than up-to-date one, we do the update at the end of the
> -			 * reweight process.
> -			 */
> +		if (!curr)
>   			__enqueue_entity(cfs_rq, se);
> -			update_min_vruntime(cfs_rq);
> -		}
>   	}
>   }
>   
> @@ -5347,15 +5338,6 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>   
>   	update_cfs_group(se);
>   
> -	/*
> -	 * Now advance min_vruntime if @se was the entity holding it back,
> -	 * except when: DEQUEUE_SAVE && !DEQUEUE_MOVE, in this case we'll be
> -	 * put back on, and if we advance min_vruntime, we'll be placed back
> -	 * further than we started -- ie. we'll be penalized.
> -	 */
> -	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
> -		update_min_vruntime(cfs_rq);
> -
>   	if (cfs_rq->nr_running == 0)
>   		update_idle_cfs_rq_clock_pelt(cfs_rq);
>   }

For now, the core pick logic of core scheduling relies on min_vruntime
being fresh, so please just fix commit eab03c23c2a1 to preserve its
original behavior by moving update_min_vruntime() to the proper position.
The behavior change can be posted separately.

BTW it seems unnecessary to include a cover-letter for a single patch.

Thanks,
	Abel
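
(To make the dependency concrete: when core scheduling picks across SMT
siblings it normalizes each task's vruntime against a per-cfs_rq snapshot
of min_vruntime, roughly as in this abridged sketch of cfs_prio_less() in
kernel/sched/fair.c, so a stale min_vruntime skews the cross-CPU
comparison.)

bool cfs_prio_less(const struct task_struct *a, const struct task_struct *b,
		   bool in_fi)
{
	const struct sched_entity *sea = &a->se;
	const struct sched_entity *seb = &b->se;
	struct cfs_rq *cfs_rqa = &task_rq(a)->cfs;
	struct cfs_rq *cfs_rqb = &task_rq(b)->cfs;
	s64 delta;

	/*
	 * Group-scheduling hierarchy matching and se_fi_update() elided;
	 * min_vruntime_fi is a snapshot of each cfs_rq's min_vruntime.
	 */
	delta = (s64)(sea->vruntime - seb->vruntime) +
		(s64)(cfs_rqb->min_vruntime_fi - cfs_rqa->min_vruntime_fi);

	return delta > 0;
}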
  
Yiwei Lin Nov. 16, 2023, 11:33 a.m. UTC | #2
On 11/16/23 19:16, Abel Wu wrote:
> On 11/16/23 6:54 PM, Yiwei Lin Wrote:
>> As EEVDF adopts a lag-based approach that, unlike CFS before it, does
>> not depend on min_vruntime, min_vruntime is now only used as an offset
>> to avoid overflow when evaluating avg_vruntime. Relying on the fact
>> that we always update_curr() before changing the cfs_rq, it seems to
>> make sense to only update_min_vruntime() from update_curr() and reduce
>> the cost.
>>
>> Signed-off-by: Yiwei Lin <s921975628@gmail.com>
>> ---
>>   kernel/sched/fair.c | 20 +-------------------
>>   1 file changed, 1 insertion(+), 19 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 07f555857..5c40adfae 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3815,17 +3815,8 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
>>   	enqueue_load_avg(cfs_rq, se);
>>   	if (se->on_rq) {
>>   		update_load_add(&cfs_rq->load, se->load.weight);
>> -		if (!curr) {
>> -			/*
>> -			 * The entity's vruntime has been adjusted, so let's check
>> -			 * whether the rq-wide min_vruntime needs updated too. Since
>> -			 * the calculations above require stable min_vruntime rather
>> -			 * than up-to-date one, we do the update at the end of the
>> -			 * reweight process.
>> -			 */
>> +		if (!curr)
>>   			__enqueue_entity(cfs_rq, se);
>> -			update_min_vruntime(cfs_rq);
>> -		}
>>   	}
>>   }
>>
>> @@ -5347,15 +5338,6 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>
>>   	update_cfs_group(se);
>>
>> -	/*
>> -	 * Now advance min_vruntime if @se was the entity holding it back,
>> -	 * except when: DEQUEUE_SAVE && !DEQUEUE_MOVE, in this case we'll be
>> -	 * put back on, and if we advance min_vruntime, we'll be placed back
>> -	 * further than we started -- ie. we'll be penalized.
>> -	 */
>> -	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
>> -		update_min_vruntime(cfs_rq);
>> -
>>   	if (cfs_rq->nr_running == 0)
>>   		update_idle_cfs_rq_clock_pelt(cfs_rq);
>>   }
>
> For now, the core pick logic of core scheduling relies on min_vruntime
> being fresh, so please just fix commit eab03c23c2a1 to preserve its
> original behavior by moving update_min_vruntime() to the proper
> position. The behavior change can be posted separately.

Sorry for not noticing the core scheduling requirement and applying a
bad solution. I should have taken a closer look at the impact before
changing the approach to update_min_vruntime().

I'll send another patch later that just moves update_min_vruntime() to
the right place.
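
(Presumably something along the lines of the hunk below, shown here only as
a sketch of the intended fix rather than the actual resend: reweight_entity()
keeps its update_min_vruntime() call, but no longer only for the !curr case.)

	if (se->on_rq) {
		update_load_add(&cfs_rq->load, se->load.weight);
		if (!curr)
			__enqueue_entity(cfs_rq, se);

		/*
		 * The entity's vruntime was adjusted above whether or not it
		 * is current, so the rq-wide min_vruntime may need to advance
		 * too; do it last so the reweight math above still sees a
		 * stable min_vruntime.
		 */
		update_min_vruntime(cfs_rq);
	}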

>
> BTW it seems unnecessary to include a cover-letter for a single patch.
>
Got it! Still learning how to work with the kernel mailing list. Thanks 
for the kind suggestion!
> Thanks,
>     Abel
  

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 07f555857..5c40adfae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3815,17 +3815,8 @@  static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
 	enqueue_load_avg(cfs_rq, se);
 	if (se->on_rq) {
 		update_load_add(&cfs_rq->load, se->load.weight);
-		if (!curr) {
-			/*
-			 * The entity's vruntime has been adjusted, so let's check
-			 * whether the rq-wide min_vruntime needs updated too. Since
-			 * the calculations above require stable min_vruntime rather
-			 * than up-to-date one, we do the update at the end of the
-			 * reweight process.
-			 */
+		if (!curr)
 			__enqueue_entity(cfs_rq, se);
-			update_min_vruntime(cfs_rq);
-		}
 	}
 }
 
@@ -5347,15 +5338,6 @@  dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 
 	update_cfs_group(se);
 
-	/*
-	 * Now advance min_vruntime if @se was the entity holding it back,
-	 * except when: DEQUEUE_SAVE && !DEQUEUE_MOVE, in this case we'll be
-	 * put back on, and if we advance min_vruntime, we'll be placed back
-	 * further than we started -- ie. we'll be penalized.
-	 */
-	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
-		update_min_vruntime(cfs_rq);
-
 	if (cfs_rq->nr_running == 0)
 		update_idle_cfs_rq_clock_pelt(cfs_rq);
 }