[v3,0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()

Message ID 20240206043921.850302-1-void@manifault.com
Series sched/fair: Simplify and optimize update_sd_pick_busiest()

Message

David Vernet Feb. 6, 2024, 4:39 a.m. UTC
update_sd_pick_busiest() (and its caller) has some room for small
optimizations and some improvements in readability.

- In update_sd_lb_stats(), we're using a goto to skip a single if check.
  Let's remove the goto and just add another condition to the if.

- In update_sd_pick_busiest(), only update a group_misfit_task group to
  be the busiest if it has strictly more load than the current busiest
  group, rather than >= the load.

- When comparing the current struct sched_group against the busiest
  group found so far in update_sd_pick_busiest(), if the two groups have
  the same group type, we currently do a bit of unnecessary work for any
  group >= group_misfit_task. We compare the two groups, and return only
  if the comparison is false (i.e. the group in question is not the
  busiest). Otherwise, we break out of the switch, do an extra
  conditional check that's vacuously false for any group type >
  group_fully_busy, and then always return true. This series instead
  simply returns directly from the switch statement (sketched below),
  saving some bytes in load_balance().
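
For readers without kernel/sched/fair.c open, here is a rough, schematic
sketch of the shape of these changes (paraphrased from memory, not the
literal diffs; surrounding code elided, identifiers as in fair.c):

  /*
   * 1) update_sd_lb_stats(): fold the local_group check into the
   *    existing condition instead of jumping over it with a goto.
   */

  /* before (schematic) */
  if (local_group)
          goto next_group;

  if (update_sd_pick_busiest(env, sds, sg, sgs)) {
          sds->busiest = sg;
          sds->busiest_stat = *sgs;
  }
  next_group:
          ...

  /* after (schematic) */
  if (!local_group && update_sd_pick_busiest(env, sds, sg, sgs)) {
          sds->busiest = sg;
          sds->busiest_stat = *sgs;
  }

  /*
   * 2) update_sd_pick_busiest(): use a strict inequality for the misfit
   *    load comparison, and return the result directly from the switch
   *    rather than breaking out and falling through to a check that is
   *    vacuously decided for these group types.
   */

  /* before (schematic) */
  switch (sgs->group_type) {
  ...
  case group_misfit_task:
          if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
                  return false;
          break;
  ...
  }
  /* extra conditional check, then return true */

  /* after (schematic) */
  switch (sgs->group_type) {
  ...
  case group_misfit_task:
          return sgs->group_misfit_task_load >
                 busiest->group_misfit_task_load;
  ...
  }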

---

v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/
v2: https://lore.kernel.org/all/20240204044618.46100-1-void@manifault.com/

v2 -> v3:
- Add Vincent's Reviewed-by tags
- Fix stale commit summary sentence (Vincent)

v1 -> v2 changes:

- Split the patch series into separate patches (Valentin)
- Update the group_misfit_task busiest check to use strict inequality

David Vernet (3):
  sched/fair: Remove unnecessary goto in update_sd_lb_stats()
  sched/fair: Do strict inequality check for busiest misfit task group
  sched/fair: Simplify some logic in update_sd_pick_busiest()

 kernel/sched/fair.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)
  

Comments

David Vernet Feb. 16, 2024, 7:44 p.m. UTC | #1
On Mon, Feb 05, 2024 at 10:39:18PM -0600, David Vernet wrote:
> update_sd_pick_busiest() (and its caller) has some room for small
> optimizations and some improvements in readability.

Hello Peter, hello Ingo,

Friendly ping. Is there anything else required for this to land?

Thanks,
David

> 
> - In update_sd_lb_stats(), we're using a goto to skip a single if check.
>   Let's remove the goto and just add another condition to the if.
> 
> - In update_sd_pick_busiest(), only update a group_misfit_task group to
>   be the busiest if it has strictly more load than the current busiest
>   group, rather than >= the load.
> 
> - When comparing the current struct sched_group against the busiest
>   group found so far in update_sd_pick_busiest(), if the two groups have
>   the same group type, we currently do a bit of unnecessary work for any
>   group >= group_misfit_task. We compare the two groups, and return only
>   if the comparison is false (i.e. the group in question is not the
>   busiest). Otherwise, we break out of the switch, do an extra
>   conditional check that's vacuously false for any group type >
>   group_fully_busy, and then always return true. This series instead
>   simply returns directly from the switch statement, saving some bytes
>   in load_balance().
> 
> ---
> 
> v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/
> v2: https://lore.kernel.org/all/20240204044618.46100-1-void@manifault.com/
> 
> v2 -> v3:
> - Add Vincent's Reviewed-by tags
> - Fix stale commit summary sentence (Vincent)
> 
> v1 -> v2 changes:
> 
> - Split the patch series into separate patches (Valentin)
> - Update the group_misfit_task busiest check to use strict inequality
> 
> David Vernet (3):
>   sched/fair: Remove unnecessary goto in update_sd_lb_stats()
>   sched/fair: Do strict inequality check for busiest misfit task group
>   sched/fair: Simplify some logic in update_sd_pick_busiest()
> 
>  kernel/sched/fair.c | 19 ++++---------------
>  1 file changed, 4 insertions(+), 15 deletions(-)
> 
> -- 
> 2.43.0
>
  
David Vernet Feb. 26, 2024, 5:50 p.m. UTC | #2
On Fri, Feb 16, 2024 at 01:44:40PM -0600, David Vernet wrote:
> Hello Peter, hello Ingo,
> 
> Friendly ping. Is there anything else required for this to land?

Hello,

Sending another ping.

Thanks,
David

> 
> Thanks,
> David
> 
> > 
> > - In update_sd_lb_stats(), we're using a goto to skip a single if check.
> >   Let's remove the goto and just add another condition to the if.
> > 
> > - In update_sd_pick_busiest(), only update a group_misfit_task group to
> >   be the busiest if it has strictly more load than the current busiest
> >   group, rather than >= the load.
> > 
> > - When comparing the current struct sched_group against the busiest
> >   group found so far in update_sd_pick_busiest(), if the two groups have
> >   the same group type, we currently do a bit of unnecessary work for any
> >   group >= group_misfit_task. We compare the two groups, and return only
> >   if the comparison is false (i.e. the group in question is not the
> >   busiest). Otherwise, we break out of the switch, do an extra
> >   conditional check that's vacuously false for any group type >
> >   group_fully_busy, and then always return true. This series instead
> >   simply returns directly from the switch statement, saving some bytes
> >   in load_balance().
> > 
> > ---
> > 
> > v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/
> > v2: https://lore.kernel.org/all/20240204044618.46100-1-void@manifault.com/
> > 
> > v2 -> v3:
> > - Add Vincent's Reviewed-by tags
> > - Fix stale commit summary sentence (Vincent)
> > 
> > v1 -> v2 changes:
> > 
> > - Split the patch series into separate patches (Valentin)
> > - Update the group_misfit_task busiest check to use strict inequality
> > 
> > David Vernet (3):
> >   sched/fair: Remove unnecessary goto in update_sd_lb_stats()
> >   sched/fair: Do strict inequality check for busiest misfit task group
> >   sched/fair: Simplify some logic in update_sd_pick_busiest()
> > 
> >  kernel/sched/fair.c | 19 ++++---------------
> >  1 file changed, 4 insertions(+), 15 deletions(-)
> > 
> > -- 
> > 2.43.0
> >