[PATCH v2 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()

Message ID 20240204044618.46100-1-void@manifault.com

David Vernet Feb. 4, 2024, 4:46 a.m. UTC
update_sd_pick_busiest() and its caller have room for some small
optimizations and readability improvements.

- In update_sd_lb_stats(), we're using a goto to skip a single if check.
  Let's remove the goto and just add another condition to the if.

- In update_sd_pick_busiest(), only update a group_misfit_task group to
  be the busiest if its misfit task load is strictly greater than that
  of the current busiest group, rather than greater than or equal to it.

- When comparing the current struct sched_group with the busiest group
  found so far in update_sd_pick_busiest(), if the two groups have the
  same group type, we currently do a bit of unnecessary work for any
  group >= group_misfit_task. We compare the two groups, and return only
  if the comparison shows that the group in question is not the busiest.
  Otherwise, we break out of the switch, perform an extra conditional
  check that is vacuously false for any group type > group_fully_busy,
  and then always return true. This series instead simply returns
  directly from the switch statement, saving some bytes in
  load_balance(). A rough sketch of the changes follows this list.

---

v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/

v1 -> v2 changes:

- Split the patch series into separate patches (Valentin)
- Update the group_misfit_task busiest check to use strict inequality


David Vernet (3):
  sched/fair: Remove unnecessary goto in update_sd_lb_stats()
  sched/fair: Do strict inequality check for busiest misfit task group
  sched/fair: Simplify some logic in update_sd_pick_busiest()

 kernel/sched/fair.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)