[v2] sched/fair: Fixes for capacity inversion detection
Commit Message
Traversing the Perf Domains requires rcu_read_lock() to be held and is
conditional on sched_energy_enabled(). rcu_read_lock() is held while in
load_balance(); add an assert to ensure this is always the case.
Also skip capacity inversion detection for our own pd; checking against it
was an error.
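
For reference, the traversal pattern the patch enforces looks roughly like
this (a sketch distilled from the diff below, not the verbatim kernel code):

	if (sched_energy_enabled()) {
		struct perf_domain *pd;

		rcu_read_lock();
		pd = rcu_dereference(rq->rd->pd);
		for (; pd; pd = pd->next) {
			/* We can't be inverted against our own pd */
			if (cpumask_test_cpu(cpu_of(rq), perf_domain_span(pd)))
				continue;
			/* ... capacity inversion checks ... */
		}
		rcu_read_unlock();
	}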
Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
---
Changes in v2:
* Make sure to hold rcu_read_lock() as we need it; it's not held in all
  paths (thanks Dietmar!)
kernel/sched/fair.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
Comments
On 12/08/22 14:54, Qais Yousef wrote:
> Traversing the Perf Domains requires rcu_read_lock() to be held and is
> conditional on sched_energy_enabled(). rcu_read_lock() is held while in
> load_balance(); add an assert to ensure this is always the case.
Err, that should say instead:
Traversing the Perf Domains requires rcu_read_lock() to be held and is
conditional on sched_energy_enabled(). Ensure the right protections are applied.
Peter, let me know if you want me to resend with that fixed, or feel free to
apply the fix yourself.
Thanks!
--
Qais Yousef
>
> Also skip capacity inversion detection for our own pd; checking against
> it was an error.
>
> Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
> Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
> ---
>
> Changes in v2:
>
> * Make sure to hold rcu_read_lock() as we need it; it's not held in all
>   paths (thanks Dietmar!)
@@ -8856,16 +8856,23 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 	 *   * Thermal pressure will impact all cpus in this perf domain
 	 *     equally.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_energy_enabled()) {
 		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
-		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
+		struct perf_domain *pd;
+
+		rcu_read_lock();
 
+		pd = rcu_dereference(rq->rd->pd);
 		rq->cpu_capacity_inverted = 0;
 
 		for (; pd; pd = pd->next) {
 			struct cpumask *pd_span = perf_domain_span(pd);
 			unsigned long pd_cap_orig, pd_cap;
 
+			/* We can't be inverted against our own pd */
+			if (cpumask_test_cpu(cpu_of(rq), pd_span))
+				continue;
+
 			cpu = cpumask_any(pd_span);
 			pd_cap_orig = arch_scale_cpu_capacity(cpu);
 
@@ -8890,6 +8897,8 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 				break;
 			}
 		}
+
+		rcu_read_unlock();
 	}
 
 	trace_sched_cpu_capacity_tp(rq);
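
For completeness, the inversion test inside the loop reduces to comparing this
CPU's thermally-pressured capacity against another pd's thermally-adjusted
capacity. Below is a minimal userspace sketch of that condition;
cpu_cap_orig() and thermal_pressure() are hypothetical stand-ins for the
kernel's arch_scale_cpu_capacity() and thermal_load_avg():

	#include <stdio.h>

	/* Hypothetical stand-ins for the kernel's capacity/thermal accessors. */
	static unsigned long cpu_cap_orig(int cpu)     { return cpu < 4 ? 512 : 1024; }
	static unsigned long thermal_pressure(int cpu) { return cpu >= 4 ? 600 : 0; }

	/*
	 * A CPU is capacity-inverted when its thermally-pressured capacity
	 * drops below the thermally-adjusted capacity of a *different* perf
	 * domain whose original capacity is not higher than ours.
	 * Returns the inverted capacity, or 0 if there is no inversion.
	 */
	static unsigned long check_inversion(int cpu, int other_pd_cpu)
	{
		unsigned long cap_orig = cpu_cap_orig(cpu);
		unsigned long inv_cap = cap_orig - thermal_pressure(cpu);
		unsigned long pd_cap_orig = cpu_cap_orig(other_pd_cpu);
		unsigned long pd_cap;

		/* pds with higher original capacity can't invert us. */
		if (cap_orig < pd_cap_orig)
			return 0;

		pd_cap = pd_cap_orig - thermal_pressure(other_pd_cpu);
		return pd_cap > inv_cap ? inv_cap : 0;
	}

	int main(void)
	{
		/* Big CPU 4 under heavy thermal pressure vs. little CPU 0. */
		printf("inverted capacity: %lu\n", check_inversion(4, 0));
		return 0;
	}

The real kernel code splits this into two cases (pds with equal capacity_orig
but different thermal pressure vs. pds with strictly smaller capacity_orig),
but the single comparison above captures the idea.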