[RFC,v2,0/5] sched/fair: Defer CFS throttle to user entry

Message ID 20240202080920.3337862-1-vschneid@redhat.com
Series sched/fair: Defer CFS throttle to user entry

Message

Valentin Schneider Feb. 2, 2024, 8:09 a.m. UTC
  Hi folks,

Problem statement
=================

CFS tasks can end up throttled while holding locks that other, non-throttled
tasks are blocking on.

For !PREEMPT_RT, this can be a source of latency: tasks blocked on the lock
cannot make progress until the throttled lock holder gets its runtime
replenished and releases the lock.

For PREEMPT_RT, this is worse and can lead to a deadlock:
o A CFS task p0 gets throttled while holding read_lock(&lock)
o A task p1 blocks on write_lock(&lock), making further readers enter the
  slowpath
o A ktimers or ksoftirqd task blocks on read_lock(&lock)

If the cfs_bandwidth.period_timer that would replenish p0's runtime is enqueued
on the same CPU as the one where ktimers/ksoftirqd is blocked on
read_lock(&lock), this creates a circular dependency: p0 cannot release the
lock until its runtime is replenished, but the replenishment cannot happen
because the thread that would run the timer is itself blocked on the lock.

This has been observed to happen with:
o fs/eventpoll.c::ep->lock
o net/netlink/af_netlink.c::nl_table_lock (after hand-fixing the above)
but can trigger with any rwlock that can be acquired in both process and
softirq contexts.
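
For illustration, here's a minimal sketch of the pattern (the lock and
function names below are made up; the structure mirrors the cases above):

  /* Illustration only; all names are made up. */
  static DEFINE_RWLOCK(example_lock);

  /* p0: runs in a bandwidth-limited cgroup */
  static void p0_reader(void)
  {
          read_lock(&example_lock);
          /* quota runs out here: p0 gets throttled while holding the lock */
          read_unlock(&example_lock);
  }

  /* p1: a pending writer pushes subsequent readers into the slowpath */
  static void p1_writer(void)
  {
          write_lock(&example_lock);
          write_unlock(&example_lock);
  }

  /*
   * Also acquired in softirq context, i.e. from ktimers/ksoftirqd on
   * PREEMPT_RT; this read_lock() now blocks behind p1. If the
   * cfs_bandwidth.period_timer that would replenish p0 is queued on this
   * CPU, nothing makes progress.
   */
  static void softirq_reader(void)
  {
          read_lock(&example_lock);
          read_unlock(&example_lock);
  }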

The linux-rt tree has had
  1ea50f9636f0 ("softirq: Use a dedicated thread for timer wakeups.")
which helped this scenario for non-rwlock locks by ensuring the throttled
task would get PI'd to FIFO1 (ktimers' default priority). Unfortunately,
rwlocks cannot sanely do PI as they allow multiple readers.

Proposed approach
=================

Peter mentioned [1] that there have been discussions on changing /when/ the
throttling happens: rather than have it be done immediately upon updating
the runtime statistics and realizing the cfs_rq has depleted its quota, we wait
for the task to be about to return to userspace: if it's in userspace, it can't
hold any in-kernel lock.
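
In (very) simplified form, the mechanism looks something like the sketch
below. This is only meant to illustrate the flow, not the actual patch;
throttle_cfs_rq_deferred(), throttle_cfs_rq_work() and the
sched_throttle_work field are illustrative names:

  /*
   * Sketch only: when we notice the quota is depleted, don't dequeue
   * anything yet; flag the cfs_rq and queue a task_work so the current
   * task throttles itself on its way back to userspace, where it cannot
   * be holding any in-kernel lock.
   */
  static void throttle_cfs_rq_deferred(struct cfs_rq *cfs_rq)
  {
          cfs_rq->throttle_pending = true;
          /* Runs from the exit-to-user path, see exit_to_user_mode_loop() */
          task_work_add(current, &current->sched_throttle_work, TWA_RESUME);
  }

  static void throttle_cfs_rq_work(struct callback_head *work)
  {
          /*
           * Do the actual throttling here: the task is dequeued onto a
           * separate "throttled" tree, and gets put back when the
           * bandwidth is replenished.
           */
  }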

I submitted an initial jab at this [2] and Ben Segall added his own version to
the conversation [3]. This series contains Ben's patch plus my additions. The
main change here is updating the .h_nr_running counts throughout the cfs_rq
hierarchies to improve the picture given to load_balance().

The main thing that remains to be done for this series is making the second cfs_rq
tree an actual RB tree (it's just a plain list ATM).

This also doesn't touch rq.nr_running yet; I'm not entirely sure whether we want
to expose this outside of CFS, but it is another field that's used by load balance.

Testing
=======

Tested on QEMU via:

  # cgroup v1 cpu controller; 1ms of quota per 10ms period to make
  # throttling frequent
  mount -t cgroup -o cpu none /root/cpu

  mkdir /root/cpu/cg0
  echo 10000 > /root/cpu/cg0/cpu.cfs_period_us
  echo 1000 > /root/cpu/cg0/cpu.cfs_quota_us

  mkdir /root/cpu/cg0/cg00
  mkdir /root/cpu/cg0/cg01

  mkdir /root/cpu/cg0/cg00/cg000
  mkdir /root/cpu/cg0/cg00/cg001

  # Start a busy loop that repeatedly enters the kernel (sysfs read) and
  # attach it to the cgroup whose tasks file is passed as $1
  spawn() {
      while true; do cat /sys/devices/system/cpu/smt/active &>/dev/null; done &
      PID=$!
      echo "Starting PID${PID}"
      echo $PID > $1
  }

  spawn cpu/cg0/tasks
  spawn cpu/cg0/tasks
  spawn cpu/cg0/tasks
  spawn cpu/cg0/tasks

  spawn cpu/cg0/cg01/tasks

  spawn cpu/cg0/cg00/cg000/tasks
  spawn cpu/cg0/cg00/cg001/tasks

  sleep 120

  kill $(jobs -p)  

Links
=====
  
[1]: https://lore.kernel.org/all/20231031160120.GE15024@noisy.programming.kicks-ass.net/
[2]: http://lore.kernel.org/r/20231130161245.3894682-1-vschneid@redhat.com
[3]: http://lore.kernel.org/r/xm26edfxpock.fsf@bsegall-linux.svl.corp.google.com

Benjamin Segall (1):
  sched/fair: Only throttle CFS tasks on return to userspace

Valentin Schneider (4):
  sched: Note schedule() invocations at return-to-user with SM_USER
  sched/fair: Delete cfs_rq_throttled_loose(), use
    cfs_rq->throttle_pending instead
  sched/fair: Track count of tasks running in userspace
  sched/fair: Assert user/kernel/total nr invariants

 include/linux/sched.h |   7 +
 kernel/entry/common.c |   2 +-
 kernel/entry/kvm.c    |   2 +-
 kernel/sched/core.c   |  45 ++++-
 kernel/sched/debug.c  |  28 +++
 kernel/sched/fair.c   | 399 ++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h  |   5 +
 7 files changed, 466 insertions(+), 22 deletions(-)

--
2.43.0
  

Comments

Benjamin Segall Feb. 6, 2024, 9:55 p.m. UTC | #1
Valentin Schneider <vschneid@redhat.com> writes:


> Proposed approach
> =================
>
> Peter mentioned [1] that there have been discussions on changing /when/ the
> throttling happens: rather than have it be done immediately upon updating
> the runtime statistics and realizing the cfs_rq has depleted its quota, we wait
> for the task to be about to return to userspace: if it's in userspace, it can't
> hold any in-kernel lock.
>
> I submitted an initial jab at this [2] and Ben Segall added his own version to
> the conversation [3]. This series contains Ben's patch plus my additions. The
> main change here is updating the .h_nr_running counts throughout the cfs_rq
> hierarchies to improve the picture given to load_balance().
>
> The main thing that remains to be done for this series is making the second cfs_rq
> tree an actual RB tree (it's just a plain list ATM).
>
> This also doesn't touch rq.nr_running yet; I'm not entirely sure whether we want
> to expose this outside of CFS, but it is another field that's used by load balance.

Then there are also all the load values; I don't know the load
balance code well, but it looks like the main thing would be
runnable_avg and that it isn't doing anything that would particularly
care about h_nr_running and runnable_avg being out of sync.

Maybe pulling a pending-throttle user task and then not seeing the
update in h_nr_running could be a bit of trouble?
  
Valentin Schneider Feb. 7, 2024, 1:34 p.m. UTC | #2
On 06/02/24 13:55, Benjamin Segall wrote:
> Valentin Schneider <vschneid@redhat.com> writes:
>
>
>> Proposed approach
>> =================
>>
>> Peter mentioned [1] that there have been discussions on changing /when/ the
>> throttling happens: rather than have it be done immediately upon updating
>> the runtime statistics and realizing the cfs_rq has depleted its quota, we wait
>> for the task to be about to return to userspace: if it's in userspace, it can't
>> hold any in-kernel lock.
>>
>> I submitted an initial jab at this [2] and Ben Segall added his own version to
>> the conversation [3]. This series contains Ben's patch plus my additions. The
>> main change here is updating the .h_nr_running counts throughout the cfs_rq
>> hierarchies to improve the picture given to load_balance().
>>
>> The main thing that remains to be done for this series is making the second cfs_rq
>> tree an actual RB tree (it's just a plain list ATM).
>>
>> This also doesn't touch rq.nr_running yet; I'm not entirely sure whether we want
>> to expose this outside of CFS, but it is another field that's used by load balance.
>
> Then there are also all the load values; I don't know the load
> balance code well, but it looks like the main thing would be
> runnable_avg and that it isn't doing anything that would particularly
> care about h_nr_running and runnable_avg being out of sync.
>

Yes, all of the runnable, load and util averages are still going to be an
issue unfortunately. AFAICT tackling this would imply pretty much dequeuing
the throttle_pending user tasks, which was my earlier attempt.

> Maybe pulling a pending-throttle user task and then not seeing the
> update in h_nr_running could be a bit of trouble?

That too is something I hadn't considered. Given the h_nr_running count is
updated accordingly, we could change can_migrate_task() to only allow
kernel tasks to be pulled if the hierarchy is ->throttle_pending. That
would probably require implementing a throttle_pending_count (as you
suggested in the other email) so we don't waste too much time checking up
the hierarchy for every task.
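
Something like the below, perhaps (rough sketch, completely untested;
throttle_pending_hierarchy(), throttle_pending_count and task_in_user()
are made-up names):

  static inline bool throttle_pending_hierarchy(struct cfs_rq *cfs_rq)
  {
          /*
           * throttle_pending_count would be maintained per cfs_rq and
           * propagated up the hierarchy, so we don't walk the hierarchy
           * for every candidate task during load balance.
           */
          return READ_ONCE(cfs_rq->throttle_pending_count) > 0;
  }

  /* in can_migrate_task(): */
          /*
           * A user task of a throttle_pending hierarchy is about to be
           * dequeued anyway; only let kernel tasks be pulled out of it.
           */
          if (throttle_pending_hierarchy(cfs_rq_of(&p->se)) && task_in_user(p))
                  return 0;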