[v8,1/9] sched/fair: fix unfairness at wakeup

Message ID 20221110175009.18458-2-vincent.guittot@linaro.org
State New
Series [v8,1/9] sched/fair: fix unfairness at wakeup

Commit Message

Vincent Guittot Nov. 10, 2022, 5:50 p.m. UTC
  At wakeup, the vruntime of a task is updated so that it is not older than
a sched_latency period behind min_vruntime. This prevents a long-sleeping
task from getting unlimited credit at wakeup.
Such a waking task should preempt the current one to use its share of CPU
bandwidth, but wakeup_gran() can be larger than sched_latency, filter out
the wakeup preemption and, as a result, steal some CPU bandwidth from the
waking task.

Make sure that a task whose vruntime has been capped will preempt the
current task and use its CPU bandwidth even if wakeup_gran() is in the same
range as sched_latency.

If the waking task fails to preempt current, it could wait up to
sysctl_sched_min_granularity before preempting it during the next tick.

Strictly speaking, we should use cfs->min_vruntime instead of
curr->vruntime, but it isn't worth the additional overhead and complexity,
as the vruntime of current should be close to min_vruntime, if not equal.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c  | 46 ++++++++++++++++++++------------------------
 kernel/sched/sched.h | 30 ++++++++++++++++++++++++++++-
 2 files changed, 50 insertions(+), 26 deletions(-)
  

Comments

Joel Fernandes Nov. 14, 2022, 3:06 a.m. UTC | #1
Hi Vincent,

On Thu, Nov 10, 2022 at 06:50:01PM +0100, Vincent Guittot wrote:
> At wake up, the vruntime of a task is updated to not be more older than
> a sched_latency period behind the min_vruntime. This prevents long sleeping
> task to get unlimited credit at wakeup.
> Such waking task should preempt current one to use its CPU bandwidth but
> wakeup_gran() can be larger than sched_latency, filter out the
> wakeup preemption and as a results steals some CPU bandwidth to
> the waking task.

Just a thought: one can argue that this also hurts the running task because
wakeup_gran() is expected to not preempt the running task for a certain
minimum amount of time right?

So for example, if I set sysctl_sched_wakeup_granularity to a high value, I
expect the current task to not be preempted for that long, even if the
sched_latency cap in place_entity() makes the delta smaller than
wakeup_gran(). The place_entity() in current code is used to cap the sleep
credit, it does not really talk about preemption.

I don't mind this change, but it does change the meaning a bit of
sysctl_sched_wakeup_granularity I think.

> Make sure that a task, which vruntime has been capped, will preempt current
> task and use its CPU bandwidth even if wakeup_gran() is in the same range
> as sched_latency.

nit: I would prefer we say, instead of "is in the same range", "is greater
than". Because it got confusing a bit for me.

> If the waking task failed to preempt current it could to wait up to
> sysctl_sched_min_granularity before preempting it during next tick.
> 
> Strictly speaking, we should use cfs->min_vruntime instead of
> curr->vruntime but it doesn't worth the additional overhead and complexity
> as the vruntime of current should be close to min_vruntime if not equal.

Could we add here,
Reported-by: Youssef Esmat <youssefesmat@chromium.org>

> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

Just a few more comments below:

> ---
>  kernel/sched/fair.c  | 46 ++++++++++++++++++++------------------------
>  kernel/sched/sched.h | 30 ++++++++++++++++++++++++++++-
>  2 files changed, 50 insertions(+), 26 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 5ffec4370602..eb04c83112a0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4345,33 +4345,17 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
>  {
>  	u64 vruntime = cfs_rq->min_vruntime;
>  
> -	/*
> -	 * The 'current' period is already promised to the current tasks,
> -	 * however the extra weight of the new task will slow them down a
> -	 * little, place the new task so that it fits in the slot that
> -	 * stays open at the end.
> -	 */
> -	if (initial && sched_feat(START_DEBIT))
> -		vruntime += sched_vslice(cfs_rq, se);
> -
> -	/* sleeps up to a single latency don't count. */
> -	if (!initial) {
> -		unsigned long thresh;
> -
> -		if (se_is_idle(se))
> -			thresh = sysctl_sched_min_granularity;
> -		else
> -			thresh = sysctl_sched_latency;
> -
> +	if (!initial)
> +		/* sleeps up to a single latency don't count. */
> +		vruntime -= get_sched_latency(se_is_idle(se));
> +	else if (sched_feat(START_DEBIT))
>  		/*
> -		 * Halve their sleep time's effect, to allow
> -		 * for a gentler effect of sleepers:
> +		 * The 'current' period is already promised to the current tasks,
> +		 * however the extra weight of the new task will slow them down a
> +		 * little, place the new task so that it fits in the slot that
> +		 * stays open at the end.
>  		 */
> -		if (sched_feat(GENTLE_FAIR_SLEEPERS))
> -			thresh >>= 1;
> -
> -		vruntime -= thresh;
> -	}
> +		vruntime += sched_vslice(cfs_rq, se);
>  
>  	/* ensure we never gain time by being placed backwards. */
>  	se->vruntime = max_vruntime(se->vruntime, vruntime);
> @@ -7187,6 +7171,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
>  		return -1;
>  
>  	gran = wakeup_gran(se);
> +
> +	/*
> +	 * At wake up, the vruntime of a task is capped to not be older than
> +	 * a sched_latency period compared to min_vruntime. This prevents long
> +	 * sleeping task to get unlimited credit at wakeup. Such waking up task
> +	 * has to preempt current in order to not lose its share of CPU
> +	 * bandwidth but wakeup_gran() can become higher than scheduling period
> +	 * for low priority task. Make sure that long sleeping task will get a
> +	 * chance to preempt current.
> +	 */
> +	gran = min_t(s64, gran, get_latency_max());
> +

Can we move this to wakeup_gran(se)? IMO, it belongs there because you are
adjusting the wakeup_gran().

>  	if (vdiff > gran)
>  		return 1;
>  
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 1fc198be1ffd..14879d429919 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2432,9 +2432,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
>  extern const_debug unsigned int sysctl_sched_nr_migrate;
>  extern const_debug unsigned int sysctl_sched_migration_cost;
>  
> -#ifdef CONFIG_SCHED_DEBUG
>  extern unsigned int sysctl_sched_latency;
>  extern unsigned int sysctl_sched_min_granularity;
> +#ifdef CONFIG_SCHED_DEBUG
>  extern unsigned int sysctl_sched_idle_min_granularity;
>  extern unsigned int sysctl_sched_wakeup_granularity;
>  extern int sysctl_resched_latency_warn_ms;
> @@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
>  extern unsigned int sysctl_numa_balancing_scan_size;
>  #endif
>  
> +static inline unsigned long  get_sched_latency(bool idle)
> +{

IMO, since there are other users of sysctl_sched_latency, it would be better
to call this get_max_sleep_credit() or something.

> +	unsigned long thresh;
> +
> +	if (idle)
> +		thresh = sysctl_sched_min_granularity;
> +	else
> +		thresh = sysctl_sched_latency;
> +
> +	/*
> +	 * Halve their sleep time's effect, to allow
> +	 * for a gentler effect of sleepers:
> +	 */
> +	if (sched_feat(GENTLE_FAIR_SLEEPERS))
> +		thresh >>= 1;
> +
> +	return thresh;
> +}
> +
> +static inline unsigned long  get_latency_max(void)
> +{
> +	unsigned long thresh = get_sched_latency(false);
> +
> +	thresh -= sysctl_sched_min_granularity;

Could you clarify, why are you subtracting sched_min_granularity here? Could
you add some comments here to make it clear?

thanks,

 - Joel


> +
> +	return thresh;
> +}
> +
>  #ifdef CONFIG_SCHED_HRTICK
>  
>  /*
> -- 
> 2.17.1
>
  
Vincent Guittot Nov. 14, 2022, 11:05 a.m. UTC | #2
On Mon, 14 Nov 2022 at 04:06, Joel Fernandes <joel@joelfernandes.org> wrote:
>
> Hi Vincent,
>
> On Thu, Nov 10, 2022 at 06:50:01PM +0100, Vincent Guittot wrote:
> > At wake up, the vruntime of a task is updated to not be more older than
> > a sched_latency period behind the min_vruntime. This prevents long sleeping
> > task to get unlimited credit at wakeup.
> > Such waking task should preempt current one to use its CPU bandwidth but
> > wakeup_gran() can be larger than sched_latency, filter out the
> > wakeup preemption and as a results steals some CPU bandwidth to
> > the waking task.
>
> Just a thought: one can argue that this also hurts the running task because
> wakeup_gran() is expected to not preempt the running task for a certain
> minimum amount of time right?

No because you should not make wakeup_gran() higher than sched_latency.

>
> So for example, if I set sysctl_sched_wakeup_granularity to a high value, I
> expect the current task to not be preempted for that long, even if the
> sched_latency cap in place_entity() makes the delta smaller than
> wakeup_gran(). The place_entity() in current code is used to cap the sleep
> credit, it does not really talk about preemption.

But one should never set such nonsense values.

>
> I don't mind this change, but it does change the meaning a bit of
> sysctl_sched_wakeup_granularity I think.
>
> > Make sure that a task, which vruntime has been capped, will preempt current
> > task and use its CPU bandwidth even if wakeup_gran() is in the same range
> > as sched_latency.
>
> nit: I would prefer we say, instead of "is in the same range", "is greater
> than". Because it got confusing a bit for me.

I prefer keeping current description because the sentence below gives
the reason why it's not strictly greater than

>
> > If the waking task failed to preempt current it could to wait up to
> > sysctl_sched_min_granularity before preempting it during next tick.
> >
> > Strictly speaking, we should use cfs->min_vruntime instead of
> > curr->vruntime but it doesn't worth the additional overhead and complexity
> > as the vruntime of current should be close to min_vruntime if not equal.
>
> Could we add here,
> Reported-by: Youssef Esmat <youssefesmat@chromium.org>

yes

>
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>
> Just a few more comments below:
>
> > ---
> >  kernel/sched/fair.c  | 46 ++++++++++++++++++++------------------------
> >  kernel/sched/sched.h | 30 ++++++++++++++++++++++++++++-
> >  2 files changed, 50 insertions(+), 26 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 5ffec4370602..eb04c83112a0 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -4345,33 +4345,17 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
> >  {
> >       u64 vruntime = cfs_rq->min_vruntime;
> >
> > -     /*
> > -      * The 'current' period is already promised to the current tasks,
> > -      * however the extra weight of the new task will slow them down a
> > -      * little, place the new task so that it fits in the slot that
> > -      * stays open at the end.
> > -      */
> > -     if (initial && sched_feat(START_DEBIT))
> > -             vruntime += sched_vslice(cfs_rq, se);
> > -
> > -     /* sleeps up to a single latency don't count. */
> > -     if (!initial) {
> > -             unsigned long thresh;
> > -
> > -             if (se_is_idle(se))
> > -                     thresh = sysctl_sched_min_granularity;
> > -             else
> > -                     thresh = sysctl_sched_latency;
> > -
> > +     if (!initial)
> > +             /* sleeps up to a single latency don't count. */
> > +             vruntime -= get_sched_latency(se_is_idle(se));
> > +     else if (sched_feat(START_DEBIT))
> >               /*
> > -              * Halve their sleep time's effect, to allow
> > -              * for a gentler effect of sleepers:
> > +              * The 'current' period is already promised to the current tasks,
> > +              * however the extra weight of the new task will slow them down a
> > +              * little, place the new task so that it fits in the slot that
> > +              * stays open at the end.
> >                */
> > -             if (sched_feat(GENTLE_FAIR_SLEEPERS))
> > -                     thresh >>= 1;
> > -
> > -             vruntime -= thresh;
> > -     }
> > +             vruntime += sched_vslice(cfs_rq, se);
> >
> >       /* ensure we never gain time by being placed backwards. */
> >       se->vruntime = max_vruntime(se->vruntime, vruntime);
> > @@ -7187,6 +7171,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
> >               return -1;
> >
> >       gran = wakeup_gran(se);
> > +
> > +     /*
> > +      * At wake up, the vruntime of a task is capped to not be older than
> > +      * a sched_latency period compared to min_vruntime. This prevents long
> > +      * sleeping task to get unlimited credit at wakeup. Such waking up task
> > +      * has to preempt current in order to not lose its share of CPU
> > +      * bandwidth but wakeup_gran() can become higher than scheduling period
> > +      * for low priority task. Make sure that long sleeping task will get a
> > +      * chance to preempt current.
> > +      */
> > +     gran = min_t(s64, gran, get_latency_max());
> > +
>
> Can we move this to wakeup_gran(se)? IMO, it belongs there because you are
> adjusting the wakeup_gran().

I prefer keeping the current code because patch 8 adds an offset to the equation

>
> >       if (vdiff > gran)
> >               return 1;
> >
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 1fc198be1ffd..14879d429919 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -2432,9 +2432,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
> >  extern const_debug unsigned int sysctl_sched_nr_migrate;
> >  extern const_debug unsigned int sysctl_sched_migration_cost;
> >
> > -#ifdef CONFIG_SCHED_DEBUG
> >  extern unsigned int sysctl_sched_latency;
> >  extern unsigned int sysctl_sched_min_granularity;
> > +#ifdef CONFIG_SCHED_DEBUG
> >  extern unsigned int sysctl_sched_idle_min_granularity;
> >  extern unsigned int sysctl_sched_wakeup_granularity;
> >  extern int sysctl_resched_latency_warn_ms;
> > @@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
> >  extern unsigned int sysctl_numa_balancing_scan_size;
> >  #endif
> >
> > +static inline unsigned long  get_sched_latency(bool idle)
> > +{
>
> IMO, since there are other users of sysctl_sched_latency, it would be better
> to call this get_max_sleep_credit() or something.

get_sleep_latency()

>
> > +     unsigned long thresh;
> > +
> > +     if (idle)
> > +             thresh = sysctl_sched_min_granularity;
> > +     else
> > +             thresh = sysctl_sched_latency;
> > +
> > +     /*
> > +      * Halve their sleep time's effect, to allow
> > +      * for a gentler effect of sleepers:
> > +      */
> > +     if (sched_feat(GENTLE_FAIR_SLEEPERS))
> > +             thresh >>= 1;
> > +
> > +     return thresh;
> > +}
> > +
> > +static inline unsigned long  get_latency_max(void)
> > +{
> > +     unsigned long thresh = get_sched_latency(false);
> > +
> > +     thresh -= sysctl_sched_min_granularity;
>
> Could you clarify, why are you subtracting sched_min_granularity here? Could
> you add some comments here to make it clear?

If the waking task fails to preempt current, it could wait up to
sysctl_sched_min_granularity before preempting it during the next tick.

>
> thanks,
>
>  - Joel
>
>
> > +
> > +     return thresh;
> > +}
> > +
> >  #ifdef CONFIG_SCHED_HRTICK
> >
> >  /*
> > --
> > 2.17.1
> >
  
Patrick Bellasi Nov. 14, 2022, 4:19 p.m. UTC | #3
Hi Vincent!

On 10-Nov 18:50, Vincent Guittot wrote:

[...]
  
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 1fc198be1ffd..14879d429919 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2432,9 +2432,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
>  extern const_debug unsigned int sysctl_sched_nr_migrate;
>  extern const_debug unsigned int sysctl_sched_migration_cost;
>  
> -#ifdef CONFIG_SCHED_DEBUG
>  extern unsigned int sysctl_sched_latency;
>  extern unsigned int sysctl_sched_min_granularity;
> +#ifdef CONFIG_SCHED_DEBUG
>  extern unsigned int sysctl_sched_idle_min_granularity;
>  extern unsigned int sysctl_sched_wakeup_granularity;
>  extern int sysctl_resched_latency_warn_ms;
> @@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
>  extern unsigned int sysctl_numa_balancing_scan_size;
>  #endif
>  
> +static inline unsigned long  get_sched_latency(bool idle)
                                ^^^^^^^^^^^^^^^^^

This can be confusing since it's not always returning the sysctl_sched_latency
value. It's also being used to tune the vruntime at wakeup time.

Thus, what about renaming this to something closer to what it's used for, e.g.
   get_wakeup_latency(se)
?

Also, in the following patches we always call this with a false parameter.
Thus, perhaps in a following patch, we can better add something like:
   #define max_wakeup_latency get_wakeup_latency(false)
?

> +{
> +	unsigned long thresh;
> +
> +	if (idle)
> +		thresh = sysctl_sched_min_granularity;
> +	else
> +		thresh = sysctl_sched_latency;
> +
> +	/*
> +	 * Halve their sleep time's effect, to allow
> +	 * for a gentler effect of sleepers:
> +	 */
> +	if (sched_feat(GENTLE_FAIR_SLEEPERS))
> +		thresh >>= 1;
> +
> +	return thresh;
> +}
> +
> +static inline unsigned long  get_latency_max(void)
                                ^^^^^^^^^^^^^^^

This is always used to cap some form of vruntime deltas in:
 - check_preempt_tick()
 - wakeup_latency_gran()
 - wakeup_preempt_entity()
It's always smaller than the max_wakeup_latency (as defined above).

Thus, wouldn't something like:
   wakeup_latency_threshold()
be a better, more self-documenting name?

> +{
> +	unsigned long thresh = get_sched_latency(false);
> +
> +	thresh -= sysctl_sched_min_granularity;
> +
> +	return thresh;
> +}

[...]

Best,
Patrick
  
Vincent Guittot Nov. 14, 2022, 4:46 p.m. UTC | #4
On Mon, 14 Nov 2022 at 17:20, Patrick Bellasi
<patrick.bellasi@matbug.net> wrote:
>
> Hi Vincent!
>
> On 10-Nov 18:50, Vincent Guittot wrote:
>
> [...]
>
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 1fc198be1ffd..14879d429919 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -2432,9 +2432,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
> >  extern const_debug unsigned int sysctl_sched_nr_migrate;
> >  extern const_debug unsigned int sysctl_sched_migration_cost;
> >
> > -#ifdef CONFIG_SCHED_DEBUG
> >  extern unsigned int sysctl_sched_latency;
> >  extern unsigned int sysctl_sched_min_granularity;
> > +#ifdef CONFIG_SCHED_DEBUG
> >  extern unsigned int sysctl_sched_idle_min_granularity;
> >  extern unsigned int sysctl_sched_wakeup_granularity;
> >  extern int sysctl_resched_latency_warn_ms;
> > @@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
> >  extern unsigned int sysctl_numa_balancing_scan_size;
> >  #endif
> >
> > +static inline unsigned long  get_sched_latency(bool idle)
>                                 ^^^^^^^^^^^^^^^^^
>
> This can be confusing since it's not always returning the sysctl_sched_latency
> value. It's also being used to tune the vruntime at wakeup time.
>
> Thus, what about renaming this to something closer to what it's used for, e.g.
>    get_wakeup_latency(se)
> ?
>
> Also, in the following patches we always call this with a false parameter.
> Thus, perhaps in a following patch, we can better add something like:
>    #define max_wakeup_latency get_wakeup_latency(false)
> ?

I'm going to rename get_wakeup_latency to get_sleep_latency() as
proposed earlier.

I don't see the benefit of adding a macro on top, so I will keep the parameter

>
> > +{
> > +     unsigned long thresh;
> > +
> > +     if (idle)
> > +             thresh = sysctl_sched_min_granularity;
> > +     else
> > +             thresh = sysctl_sched_latency;
> > +
> > +     /*
> > +      * Halve their sleep time's effect, to allow
> > +      * for a gentler effect of sleepers:
> > +      */
> > +     if (sched_feat(GENTLE_FAIR_SLEEPERS))
> > +             thresh >>= 1;
> > +
> > +     return thresh;
> > +}
> > +
> > +static inline unsigned long  get_latency_max(void)
>                                 ^^^^^^^^^^^^^^^
>
> This is always used to cap some form of vruntime deltas in:
>  - check_preempt_tick()
>  - wakeup_latency_gran()
>  - wakeup_preempt_entity()
> It's always smaller than the max_wakeup_latency (as defined above).
>
> Thus, wouldn't something like:
>    wakeup_latency_threshold()
> be a better, more self-documenting name?
>
> > +{
> > +     unsigned long thresh = get_sched_latency(false);
> > +
> > +     thresh -= sysctl_sched_min_granularity;
> > +
> > +     return thresh;
> > +}
>
> [...]
>
> Best,
> Patrick
>
> --
> #include <best/regards.h>
>
> Patrick Bellasi
>
  
Dietmar Eggemann Nov. 14, 2022, 7:13 p.m. UTC | #5
On 10/11/2022 18:50, Vincent Guittot wrote:
> At wake up, the vruntime of a task is updated to not be more older than
> a sched_latency period behind the min_vruntime. This prevents long sleeping
> task to get unlimited credit at wakeup.
> Such waking task should preempt current one to use its CPU bandwidth but
> wakeup_gran() can be larger than sched_latency, filter out the
> wakeup preemption and as a results steals some CPU bandwidth to
> the waking task.
> 
> Make sure that a task, which vruntime has been capped, will preempt current
> task and use its CPU bandwidth even if wakeup_gran() is in the same range
> as sched_latency.

Looks like that gran can be much higher than sched_latency for extreme
cases?

> 
> If the waking task failed to preempt current it could to wait up to
> sysctl_sched_min_granularity before preempting it during next tick.
> 
> Strictly speaking, we should use cfs->min_vruntime instead of
> curr->vruntime but it doesn't worth the additional overhead and complexity
> as the vruntime of current should be close to min_vruntime if not equal.

^^^ Does this relate to the `if (vdiff > gran) return 1` condition in
wakeup_preempt_entity()?

[...]

> @@ -7187,6 +7171,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
>  		return -1;
>  
>  	gran = wakeup_gran(se);
> +
> +	/*
> +	 * At wake up, the vruntime of a task is capped to not be older than
> +	 * a sched_latency period compared to min_vruntime. This prevents long
> +	 * sleeping task to get unlimited credit at wakeup. Such waking up task
> +	 * has to preempt current in order to not lose its share of CPU
> +	 * bandwidth but wakeup_gran() can become higher than scheduling period
> +	 * for low priority task. Make sure that long sleeping task will get a

low priority task or taskgroup with low cpu.shares, right?

6 CPUs

sysctl_sched
  .sysctl_sched_latency              : 18.000000
  .sysctl_sched_min_granularity      : 2.250000
  .sysctl_sched_idle_min_granularity : 0.750000
  .sysctl_sched_wakeup_granularity   : 3.000000
  ...

p1 & p2 affine to CPUX

     '/'
     /\
   p1  p2

p1 & p2	nice=0	      - vdiff=9ms gran=3ms lat_max=6.75ms
p1 & p2	nice=4	      - vdiff=9ms gran=7.26ms lat_max=6.75ms
p1 & p2	nice=19	      - vdiff=9ms gran=204.79ms lat_max=6.75ms


     '/'
     /\
    A  B
   /    \
  p1    p2

A & B cpu.shares=1024 - vdiff=9ms gran=3ms lat_max=6.75ms
A & B cpu.shares=448  - vdiff=9ms gran=6.86ms lat_max=6.75ms
A & B cpu.shares=2    - vdiff=9ms gran=1536ms lat_max=6.75ms

> +	 * chance to preempt current.
> +	 */
> +	gran = min_t(s64, gran, get_latency_max());
> +

[...]

> @@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
>  extern unsigned int sysctl_numa_balancing_scan_size;
>  #endif
>  
> +static inline unsigned long  get_sched_latency(bool idle)
                              ^^
2 white-spaces

[...]

> +
> +static inline unsigned long  get_latency_max(void)
                              ^^

[...]
  
Vincent Guittot Nov. 15, 2022, 7:26 a.m. UTC | #6
On Mon, 14 Nov 2022 at 20:13, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>
> On 10/11/2022 18:50, Vincent Guittot wrote:
> > At wake up, the vruntime of a task is updated to not be more older than
> > a sched_latency period behind the min_vruntime. This prevents long sleeping
> > task to get unlimited credit at wakeup.
> > Such waking task should preempt current one to use its CPU bandwidth but
> > wakeup_gran() can be larger than sched_latency, filter out the
> > wakeup preemption and as a results steals some CPU bandwidth to
> > the waking task.
> >
> > Make sure that a task, which vruntime has been capped, will preempt current
> > task and use its CPU bandwidth even if wakeup_gran() is in the same range
> > as sched_latency.
>
> Looks like that gran can be much higher than sched_latency for extreme
> cases?

It's not that extreme; all tasks with nice prio 5 and above will face
the problem

>
> >
> > If the waking task failed to preempt current it could to wait up to
> > sysctl_sched_min_granularity before preempting it during next tick.
> >
> > Strictly speaking, we should use cfs->min_vruntime instead of
> > curr->vruntime but it doesn't worth the additional overhead and complexity
> > as the vruntime of current should be close to min_vruntime if not equal.
>
> ^^^ Does this relate to the `if (vdiff > gran) return 1` condition in
> wakeup_preempt_entity()?

yes

>
> [...]
>
> > @@ -7187,6 +7171,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
> >               return -1;
> >
> >       gran = wakeup_gran(se);
> > +
> > +     /*
> > +      * At wake up, the vruntime of a task is capped to not be older than
> > +      * a sched_latency period compared to min_vruntime. This prevents long
> > +      * sleeping task to get unlimited credit at wakeup. Such waking up task
> > +      * has to preempt current in order to not lose its share of CPU
> > +      * bandwidth but wakeup_gran() can become higher than scheduling period
> > +      * for low priority task. Make sure that long sleeping task will get a
>
> low priority task or taskgroup with low cpu.shares, right?

yes

>
> 6 CPUs
>
> sysctl_sched
>   .sysctl_sched_latency              : 18.000000
>   .sysctl_sched_min_granularity      : 2.250000
>   .sysctl_sched_idle_min_granularity : 0.750000
>   .sysctl_sched_wakeup_granularity   : 3.000000
>   ...
>
> p1 & p2 affine to CPUX
>
>      '/'
>      /\
>    p1  p2
>
> p1 & p2 nice=0        - vdiff=9ms gran=3ms lat_max=6.75ms
> p1 & p2 nice=4        - vdiff=9ms gran=7.26ms lat_max=6.75ms

p1 & p2 nice = 5        - vdiff=9ms gran=9.17ms lat_max=6.75ms

> p1 & p2 nice=19       - vdiff=9ms gran=204.79ms lat_max=6.75ms
>
>
>      '/'
>      /\
>     A  B
>    /    \
>   p1    p2
>
> A & B cpu.shares=1024 - vdiff=9ms gran=3ms lat_max=6.75ms
> A & B cpu.shares=448  - vdiff=9ms gran=6.86ms lat_max=6.75ms
> A & B cpu.shares=2    - vdiff=9ms gran=1536ms lat_max=6.75ms
>
> > +      * chance to preempt current.
> > +      */
> > +     gran = min_t(s64, gran, get_latency_max());
> > +
>
> [...]
>
> > @@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
> >  extern unsigned int sysctl_numa_balancing_scan_size;
> >  #endif
> >
> > +static inline unsigned long  get_sched_latency(bool idle)
>                               ^^
> 2 white-spaces

ok

>
> [...]
>
> > +
> > +static inline unsigned long  get_latency_max(void)
>                               ^^

ok

>
> [...]
  
Joel Fernandes Nov. 16, 2022, 2:10 a.m. UTC | #7
Hi Vincent,

On Mon, Nov 14, 2022 at 11:05 AM Vincent Guittot
<vincent.guittot@linaro.org> wrote:
[...]
> >
> > On Thu, Nov 10, 2022 at 06:50:01PM +0100, Vincent Guittot wrote:
> > > At wake up, the vruntime of a task is updated to not be more older than
> > > a sched_latency period behind the min_vruntime. This prevents long sleeping
> > > task to get unlimited credit at wakeup.
> > > Such waking task should preempt current one to use its CPU bandwidth but
> > > wakeup_gran() can be larger than sched_latency, filter out the
> > > wakeup preemption and as a results steals some CPU bandwidth to
> > > the waking task.
> >
> > Just a thought: one can argue that this also hurts the running task because
> > wakeup_gran() is expected to not preempt the running task for a certain
> > minimum amount of time right?
>
> No because you should not make wakeup_gran() higher than sched_latency.
>
> >
> > So for example, if I set sysctl_sched_wakeup_granularity to a high value, I
> > expect the current task to not be preempted for that long, even if the
> > sched_latency cap in place_entity() makes the delta smaller than
> > wakeup_gran(). The place_entity() in current code is used to cap the sleep
> > credit, it does not really talk about preemption.
>
> But one should never set such nonsense values.

It is not about the user setting nonsense sysctl value. Even if you do
not change sysctl_sched_wakeup_granularity, wakeup_gran() can be large
due to NICE scaling.
wakeup_gran() scales the sysctl by the ratio of the nice-load of the
se, with the NICE_0_LOAD.

On my system, by default sysctl_sched_wakeup_granularity is 3ms, and
sysctl_sched_latency is 18ms.

However, if you set the task to nice +10, the wakeup_gran() scaling
can easily make the gran exceed sysctl_sched_latency. And also, just
to note (per my experience) sysctl_sched_latency does not really hold
anyway when nice values are not default. IOW, all tasks are not
guaranteed to run within the sched_latency window always.

Again, like I said I don't mind this change (and I think it is OK to
do) but I was just preparing you/us for someone who might say they
don't much like the aggressive preemption.

> > I don't mind this change, but it does change the meaning a bit of
> > sysctl_sched_wakeup_granularity I think.
> >
> > > Make sure that a task, which vruntime has been capped, will preempt current
> > > task and use its CPU bandwidth even if wakeup_gran() is in the same range
> > > as sched_latency.
> >
> > nit: I would prefer we say, instead of "is in the same range", "is greater
> > than". Because it got confusing a bit for me.
>
> I prefer keeping current description because the sentence below gives
> the reason why it's not strictly greater than

Honestly saying "is in the same range" is ambiguous and confusing. I
prefer the commit messages to be clear, but I leave it up to you.

> > Just a few more comments below:
[...]
> > > +
> > > +     /*
> > > +      * At wake up, the vruntime of a task is capped to not be older than
> > > +      * a sched_latency period compared to min_vruntime. This prevents long
> > > +      * sleeping task to get unlimited credit at wakeup. Such waking up task
> > > +      * has to preempt current in order to not lose its share of CPU
> > > +      * bandwidth but wakeup_gran() can become higher than scheduling period
> > > +      * for low priority task. Make sure that long sleeping task will get a
> > > +      * chance to preempt current.
> > > +      */
> > > +     gran = min_t(s64, gran, get_latency_max());
> > > +
> >
> > Can we move this to wakeup_gran(se)? IMO, it belongs there because you are
> > adjusting the wakeup_gran().
>
> I prefer keeping the current code because patch 8 adds an offset to the equation

Ack.

> > >       if (vdiff > gran)
> > >               return 1;
> > >
> > > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > > index 1fc198be1ffd..14879d429919 100644
> > > --- a/kernel/sched/sched.h
> > > +++ b/kernel/sched/sched.h
> > > @@ -2432,9 +2432,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
> > >  extern const_debug unsigned int sysctl_sched_nr_migrate;
> > >  extern const_debug unsigned int sysctl_sched_migration_cost;
> > >
> > > -#ifdef CONFIG_SCHED_DEBUG
> > >  extern unsigned int sysctl_sched_latency;
> > >  extern unsigned int sysctl_sched_min_granularity;
> > > +#ifdef CONFIG_SCHED_DEBUG
> > >  extern unsigned int sysctl_sched_idle_min_granularity;
> > >  extern unsigned int sysctl_sched_wakeup_granularity;
> > >  extern int sysctl_resched_latency_warn_ms;
> > > @@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
> > >  extern unsigned int sysctl_numa_balancing_scan_size;
> > >  #endif
> > >
> > > +static inline unsigned long  get_sched_latency(bool idle)
> > > +{
> >
> > IMO, since there are other users of sysctl_sched_latency, it would be better
> > to call this get_max_sleep_credit() or something.
>
> get_sleep_latency()

Ack.

> >
> > > +     unsigned long thresh;
> > > +
> > > +     if (idle)
> > > +             thresh = sysctl_sched_min_granularity;
> > > +     else
> > > +             thresh = sysctl_sched_latency;
> > > +
> > > +     /*
> > > +      * Halve their sleep time's effect, to allow
> > > +      * for a gentler effect of sleepers:
> > > +      */
> > > +     if (sched_feat(GENTLE_FAIR_SLEEPERS))
> > > +             thresh >>= 1;
> > > +
> > > +     return thresh;
> > > +}
> > > +
> > > +static inline unsigned long  get_latency_max(void)
> > > +{
> > > +     unsigned long thresh = get_sched_latency(false);
> > > +
> > > +     thresh -= sysctl_sched_min_granularity;
> >
> > Could you clarify, why are you subtracting sched_min_granularity here? Could
> > you add some comments here to make it clear?
>
> If the waking task failed to preempt current it could have to wait up to
> sysctl_sched_min_granularity before preempting it at the next tick.

Ok, makes sense, thanks.

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

 - Joel
  
Aaron Lu Nov. 16, 2022, 8:25 a.m. UTC | #8
On Mon, Nov 14, 2022 at 12:05:18PM +0100, Vincent Guittot wrote:
> On Mon, 14 Nov 2022 at 04:06, Joel Fernandes <joel@joelfernandes.org> wrote:
> >
> > Hi Vincent,
> >
> > On Thu, Nov 10, 2022 at 06:50:01PM +0100, Vincent Guittot wrote:

... ...

> > > +static inline unsigned long  get_latency_max(void)
> > > +{
> > > +     unsigned long thresh = get_sched_latency(false);
> > > +
> > > +     thresh -= sysctl_sched_min_granularity;
> >
> > Could you clarify, why are you subtracting sched_min_granularity here? Could
> > you add some comments here to make it clear?
> 
> If the waking task failed to preempt current it could have to wait up to
> sysctl_sched_min_granularity before preempting it at the next tick.

check_preempt_tick() compares the vdiff/delta between the leftmost se and
curr against curr's ideal_runtime; it doesn't use thresh or the adjusted
wakeup_gran here, so I don't see why reducing thresh can help the se
preempt curr at the next tick if it failed to preempt curr in its
wakeup path.

I can see that reducing thresh here by any amount helps the waking se
preempt curr in wakeup_preempt_entity() though: most likely the waking
se's vruntime is cfs_rq->min_vruntime - sysctl_sched_latency/2 and
curr->vruntime is near cfs_rq->min_vruntime, so vdiff is about
sysctl_sched_latency/2, which is the same value as
get_sched_latency(false). Once thresh is reduced a bit, vdiff in
wakeup_preempt_entity() becomes larger than gran, making preemption
possible.

So I'm confused by your comment, or I might have misread the code.
  
Vincent Guittot Nov. 17, 2022, 9:18 a.m. UTC | #9
On Wed, 16 Nov 2022 at 09:26, Aaron Lu <aaron.lu@intel.com> wrote:
>
> On Mon, Nov 14, 2022 at 12:05:18PM +0100, Vincent Guittot wrote:
> > On Mon, 14 Nov 2022 at 04:06, Joel Fernandes <joel@joelfernandes.org> wrote:
> > >
> > > Hi Vincent,
> > >
> > > On Thu, Nov 10, 2022 at 06:50:01PM +0100, Vincent Guittot wrote:
>
> ... ...
>
> > > > +static inline unsigned long  get_latency_max(void)
> > > > +{
> > > > +     unsigned long thresh = get_sched_latency(false);
> > > > +
> > > > +     thresh -= sysctl_sched_min_granularity;
> > >
> > > Could you clarify, why are you subtracting sched_min_granularity here? Could
> > > you add some comments here to make it clear?
> >
> > If the waking task failed to preempt current it could have to wait up to
> > sysctl_sched_min_granularity before preempting it at the next tick.
>
> check_preempt_tick() compares the vdiff/delta between the leftmost se and
> curr against curr's ideal_runtime; it doesn't use thresh or the adjusted
> wakeup_gran here, so I don't see why reducing thresh can help the se
> preempt curr at the next tick if it failed to preempt curr in its
> wakeup path.

If the waking task doesn't preempt curr, it will wait for the next
check_preempt_tick(), but check_preempt_tick() guarantees curr a minimum
runtime of sysctl_sched_min_granularity before comparing the vruntimes.
Thresh doesn't help in check_preempt_tick() itself, but it anticipates
the fact that if the waking task fails to preempt now, current can get
an additional sysctl_sched_min_granularity of runtime before being
preempted.

>
> I can see that reducing thresh here by any amount helps the waking se
> preempt curr in wakeup_preempt_entity() though: most likely the waking
> se's vruntime is cfs_rq->min_vruntime - sysctl_sched_latency/2 and
> curr->vruntime is near cfs_rq->min_vruntime, so vdiff is about
> sysctl_sched_latency/2, which is the same value as
> get_sched_latency(false). Once thresh is reduced a bit, vdiff in
> wakeup_preempt_entity() becomes larger than gran, making preemption
> possible.
>
> So I'm confused by your comment, or I might have misread the code.
  

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5ffec4370602..eb04c83112a0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4345,33 +4345,17 @@  place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
 	u64 vruntime = cfs_rq->min_vruntime;
 
-	/*
-	 * The 'current' period is already promised to the current tasks,
-	 * however the extra weight of the new task will slow them down a
-	 * little, place the new task so that it fits in the slot that
-	 * stays open at the end.
-	 */
-	if (initial && sched_feat(START_DEBIT))
-		vruntime += sched_vslice(cfs_rq, se);
-
-	/* sleeps up to a single latency don't count. */
-	if (!initial) {
-		unsigned long thresh;
-
-		if (se_is_idle(se))
-			thresh = sysctl_sched_min_granularity;
-		else
-			thresh = sysctl_sched_latency;
-
+	if (!initial)
+		/* sleeps up to a single latency don't count. */
+		vruntime -= get_sched_latency(se_is_idle(se));
+	else if (sched_feat(START_DEBIT))
 		/*
-		 * Halve their sleep time's effect, to allow
-		 * for a gentler effect of sleepers:
+		 * The 'current' period is already promised to the current tasks,
+		 * however the extra weight of the new task will slow them down a
+		 * little, place the new task so that it fits in the slot that
+		 * stays open at the end.
 		 */
-		if (sched_feat(GENTLE_FAIR_SLEEPERS))
-			thresh >>= 1;
-
-		vruntime -= thresh;
-	}
+		vruntime += sched_vslice(cfs_rq, se);
 
 	/* ensure we never gain time by being placed backwards. */
 	se->vruntime = max_vruntime(se->vruntime, vruntime);
@@ -7187,6 +7171,18 @@  wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 		return -1;
 
 	gran = wakeup_gran(se);
+
+	/*
+	 * At wake up, the vruntime of a task is capped to not be older than
+	 * a sched_latency period compared to min_vruntime. This prevents long
+	 * sleeping task to get unlimited credit at wakeup. Such waking up task
+	 * has to preempt current in order to not lose its share of CPU
+	 * bandwidth but wakeup_gran() can become higher than scheduling period
+	 * for low priority task. Make sure that long sleeping task will get a
+	 * chance to preempt current.
+	 */
+	gran = min_t(s64, gran, get_latency_max());
+
 	if (vdiff > gran)
 		return 1;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1fc198be1ffd..14879d429919 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2432,9 +2432,9 @@  extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
 extern const_debug unsigned int sysctl_sched_nr_migrate;
 extern const_debug unsigned int sysctl_sched_migration_cost;
 
-#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_latency;
 extern unsigned int sysctl_sched_min_granularity;
+#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_idle_min_granularity;
 extern unsigned int sysctl_sched_wakeup_granularity;
 extern int sysctl_resched_latency_warn_ms;
@@ -2448,6 +2448,34 @@  extern unsigned int sysctl_numa_balancing_scan_period_max;
 extern unsigned int sysctl_numa_balancing_scan_size;
 #endif
 
+static inline unsigned long  get_sched_latency(bool idle)
+{
+	unsigned long thresh;
+
+	if (idle)
+		thresh = sysctl_sched_min_granularity;
+	else
+		thresh = sysctl_sched_latency;
+
+	/*
+	 * Halve their sleep time's effect, to allow
+	 * for a gentler effect of sleepers:
+	 */
+	if (sched_feat(GENTLE_FAIR_SLEEPERS))
+		thresh >>= 1;
+
+	return thresh;
+}
+
+static inline unsigned long  get_latency_max(void)
+{
+	unsigned long thresh = get_sched_latency(false);
+
+	thresh -= sysctl_sched_min_granularity;
+
+	return thresh;
+}
+
 #ifdef CONFIG_SCHED_HRTICK
 
 /*