sched: move access of avg_rt and avg_dl into existing helper functions

Message ID 20231220065522.351915-1-sshegde@linux.vnet.ibm.com

Commit Message

Shrikanth Hegde Dec. 20, 2023, 6:55 a.m. UTC
This is a minor code simplification. There are helper functions called
cpu_util_dl and cpu_util_rt which give the average utilization of DL
and RT respectively, but there are a few places in the code where the
underlying variables are accessed directly.

Use the helper functions instead, so that the code becomes simpler and
easier to maintain later on.

Signed-off-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
---
 kernel/sched/fair.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

--
2.39.3
  

Comments

Vincent Guittot Dec. 20, 2023, 1:59 p.m. UTC | #1
On Wed, 20 Dec 2023 at 07:55, Shrikanth Hegde
<sshegde@linux.vnet.ibm.com> wrote:
>
> This is a minor code simplification. There are helper functions called
> cpu_util_dl and cpu_util_rt which gives the average utilization of DL
> and RT respectively. But there are few places in code where these
> variables are used directly.
>
> Instead use the helper function so that code becomes simpler and easy to
> maintain later on.
>
> Signed-off-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
> ---
>  kernel/sched/fair.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bcea3d55d95d..02631060ca7e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9212,19 +9212,17 @@ static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
>
>  static inline bool others_have_blocked(struct rq *rq)
>  {
> -       if (READ_ONCE(rq->avg_rt.util_avg))
> +       if (cpu_util_rt(rq))
>                 return true;
>
> -       if (READ_ONCE(rq->avg_dl.util_avg))
> +       if (cpu_util_dl(rq))
>                 return true;
>
>         if (thermal_load_avg(rq))
>                 return true;
>
> -#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
> -       if (READ_ONCE(rq->avg_irq.util_avg))
> +       if (cpu_util_irq(rq))

cpu_util_irq doesn't call READ_ONCE()


>                 return true;
> -#endif
>
>         return false;
>  }
> @@ -9481,8 +9479,8 @@ static unsigned long scale_rt_capacity(int cpu)
>          * avg_thermal.load_avg tracks thermal pressure and the weighted
>          * average uses the actual delta max capacity(load).
>          */
> -       used = READ_ONCE(rq->avg_rt.util_avg);
> -       used += READ_ONCE(rq->avg_dl.util_avg);
> +       used = cpu_util_rt(rq);
> +       used += cpu_util_dl(rq);
>         used += thermal_load_avg(rq);
>
>         if (unlikely(used >= max))
> --
> 2.39.3
>
  
Shrikanth Hegde Dec. 20, 2023, 2:48 p.m. UTC | #2
On 12/20/23 7:29 PM, Vincent Guittot wrote:

Hi Vincent, thanks for taking a look.

> On Wed, 20 Dec 2023 at 07:55, Shrikanth Hegde
> <sshegde@linux.vnet.ibm.com> wrote:
>>
>> This is a minor code simplification. There are helper functions called
>> cpu_util_dl and cpu_util_rt which gives the average utilization of DL
>> and RT respectively. But there are few places in code where these
>> variables are used directly.
>>
>> Instead use the helper function so that code becomes simpler and easy to
>> maintain later on.
>>
>> Signed-off-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
>> ---
>>  kernel/sched/fair.c | 12 +++++-------
>>  1 file changed, 5 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index bcea3d55d95d..02631060ca7e 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -9212,19 +9212,17 @@ static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
>>
>>  static inline bool others_have_blocked(struct rq *rq)
>>  {
>> -       if (READ_ONCE(rq->avg_rt.util_avg))
>> +       if (cpu_util_rt(rq))
>>                 return true;
>>
>> -       if (READ_ONCE(rq->avg_dl.util_avg))
>> +       if (cpu_util_dl(rq))
>>                 return true;
>>
>>         if (thermal_load_avg(rq))
>>                 return true;
>>
>> -#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
>> -       if (READ_ONCE(rq->avg_irq.util_avg))
>> +       if (cpu_util_irq(rq))
> 
> cpu_util_irq doesn't call READ_ONCE()
> 


I see. Actually, wouldn't the right fix be for cpu_util_irq to call READ_ONCE?

Sorry, I haven't yet understood the memory barriers in detail. Please correct me
if I am wrong here: since ___update_load_avg(&rq->avg_irq, 1) does use WRITE_ONCE,
reading this value out via cpu_util_irq on a different CPU should use READ_ONCE, no?

> 
>>                 return true;
>> -#endif
>>
>>         return false;
>>  }
>> @@ -9481,8 +9479,8 @@ static unsigned long scale_rt_capacity(int cpu)
>>          * avg_thermal.load_avg tracks thermal pressure and the weighted
>>          * average uses the actual delta max capacity(load).
>>          */
>> -       used = READ_ONCE(rq->avg_rt.util_avg);
>> -       used += READ_ONCE(rq->avg_dl.util_avg);
>> +       used = cpu_util_rt(rq);
>> +       used += cpu_util_dl(rq);
>>         used += thermal_load_avg(rq);
>>
>>         if (unlikely(used >= max))
>> --
>> 2.39.3
>>
  
Ingo Molnar Dec. 20, 2023, 7:53 p.m. UTC | #3
* Vincent Guittot <vincent.guittot@linaro.org> wrote:

> On Wed, 20 Dec 2023 at 07:55, Shrikanth Hegde
> <sshegde@linux.vnet.ibm.com> wrote:
> >
> > This is a minor code simplification. There are helper functions called
> > cpu_util_dl and cpu_util_rt which gives the average utilization of DL
> > and RT respectively. But there are few places in code where these
> > variables are used directly.
> >
> > Instead use the helper function so that code becomes simpler and easy to
> > maintain later on.
> >
> > Signed-off-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
> > ---
> >  kernel/sched/fair.c | 12 +++++-------
> >  1 file changed, 5 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index bcea3d55d95d..02631060ca7e 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -9212,19 +9212,17 @@ static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
> >
> >  static inline bool others_have_blocked(struct rq *rq)
> >  {
> > -       if (READ_ONCE(rq->avg_rt.util_avg))
> > +       if (cpu_util_rt(rq))
> >                 return true;
> >
> > -       if (READ_ONCE(rq->avg_dl.util_avg))
> > +       if (cpu_util_dl(rq))
> >                 return true;
> >
> >         if (thermal_load_avg(rq))
> >                 return true;
> >
> > -#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
> > -       if (READ_ONCE(rq->avg_irq.util_avg))
> > +       if (cpu_util_irq(rq))
> 
> cpu_util_irq doesn't call READ_ONCE()

Oh, that's nasty - according to the title only avg_rt and avg_dl were 
changed, which I double checked, but the patch indeed does more ...

I've removed this patch from tip:sched/core.

Thanks,

	Ingo
  
Vincent Guittot Dec. 21, 2023, 4:16 p.m. UTC | #4
Hi Shrikanth,

On Wed, 20 Dec 2023 at 15:49, Shrikanth Hegde
<sshegde@linux.vnet.ibm.com> wrote:
>
>
>
> On 12/20/23 7:29 PM, Vincent Guittot wrote:
>
> Hi Vincent, thanks for taking a look.
>
> > On Wed, 20 Dec 2023 at 07:55, Shrikanth Hegde
> > <sshegde@linux.vnet.ibm.com> wrote:
> >>

[...]

> >> -#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
> >> -       if (READ_ONCE(rq->avg_irq.util_avg))
> >> +       if (cpu_util_irq(rq))
> >
> > cpu_util_irq doesn't call READ_ONCE()
> >
>
>
> I see. Actually it would be right if cpu_util_irq does call READ_ONCE no?

Yes, cpu_util_irq should call READ_ONCE()

>
> Sorry i havent yet understood the memory barriers in details. Please correct me
> if i am wrong here,
> since ___update_load_avg(&rq->avg_irq, 1) does use WRITE_ONCE and reading out this
> value using cpu_util_irq on a different CPU should use READ_ONCE no?

Yes

>
> >
> >>                 return true;
> >> -#endif
> >>
> >>         return false;
> >>  }
> >> @@ -9481,8 +9479,8 @@ static unsigned long scale_rt_capacity(int cpu)
> >>          * avg_thermal.load_avg tracks thermal pressure and the weighted
> >>          * average uses the actual delta max capacity(load).
> >>          */
> >> -       used = READ_ONCE(rq->avg_rt.util_avg);
> >> -       used += READ_ONCE(rq->avg_dl.util_avg);
> >> +       used = cpu_util_rt(rq);
> >> +       used += cpu_util_dl(rq);
> >>         used += thermal_load_avg(rq);
> >>
> >>         if (unlikely(used >= max))
> >> --
> >> 2.39.3
> >>
  
Shrikanth Hegde Dec. 22, 2023, 8:02 a.m. UTC | #5
On 12/21/23 9:46 PM, Vincent Guittot wrote:
> Hi Shrikanth,
> 
> On Wed, 20 Dec 2023 at 15:49, Shrikanth Hegde
> <sshegde@linux.vnet.ibm.com> wrote:
>>
>>
>>
>> On 12/20/23 7:29 PM, Vincent Guittot wrote:
>>
>> Hi Vincent, thanks for taking a look.
>>
>>> On Wed, 20 Dec 2023 at 07:55, Shrikanth Hegde
>>> <sshegde@linux.vnet.ibm.com> wrote:
>>>>
> 
> [...]
> 
>>>> -#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
>>>> -       if (READ_ONCE(rq->avg_irq.util_avg))
>>>> +       if (cpu_util_irq(rq))
>>>
>>> cpu_util_irq doesn't call READ_ONCE()
>>>
>>
>>
>> I see. Actually it would be right if cpu_util_irq does call READ_ONCE no?
> 
> Yes, cpu_util_irq should call READ_ONCE()
> 

OK.

Sorry, I forgot to mention avg_irq.
I will send out a v2 soon with the READ_ONCE change above added.

>>
>> Sorry i havent yet understood the memory barriers in details. Please correct me
>> if i am wrong here,
>> since ___update_load_avg(&rq->avg_irq, 1) does use WRITE_ONCE and reading out this
>> value using cpu_util_irq on a different CPU should use READ_ONCE no?
> 
> Yes
> 
>>
>>>
>>>>                 return true;
>>>> -#endif
>>>>
>>>>         return false;
>>>>  }
>>>> @@ -9481,8 +9479,8 @@ static unsigned long scale_rt_capacity(int cpu)
>>>>          * avg_thermal.load_avg tracks thermal pressure and the weighted
>>>>          * average uses the actual delta max capacity(load).
>>>>          */
>>>> -       used = READ_ONCE(rq->avg_rt.util_avg);
>>>> -       used += READ_ONCE(rq->avg_dl.util_avg);
>>>> +       used = cpu_util_rt(rq);
>>>> +       used += cpu_util_dl(rq);
>>>>         used += thermal_load_avg(rq);
>>>>
>>>>         if (unlikely(used >= max))
>>>> --
>>>> 2.39.3
>>>>
  

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bcea3d55d95d..02631060ca7e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9212,19 +9212,17 @@  static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)

 static inline bool others_have_blocked(struct rq *rq)
 {
-	if (READ_ONCE(rq->avg_rt.util_avg))
+	if (cpu_util_rt(rq))
 		return true;

-	if (READ_ONCE(rq->avg_dl.util_avg))
+	if (cpu_util_dl(rq))
 		return true;

 	if (thermal_load_avg(rq))
 		return true;

-#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-	if (READ_ONCE(rq->avg_irq.util_avg))
+	if (cpu_util_irq(rq))
 		return true;
-#endif

 	return false;
 }
@@ -9481,8 +9479,8 @@  static unsigned long scale_rt_capacity(int cpu)
 	 * avg_thermal.load_avg tracks thermal pressure and the weighted
 	 * average uses the actual delta max capacity(load).
 	 */
-	used = READ_ONCE(rq->avg_rt.util_avg);
-	used += READ_ONCE(rq->avg_dl.util_avg);
+	used = cpu_util_rt(rq);
+	used += cpu_util_dl(rq);
 	used += thermal_load_avg(rq);

 	if (unlikely(used >= max))