sched/fair: Make the BW replenish timer expire in hardirq context for PREEMPT_RT

Message ID 20231030145104.4107573-1-vschneid@redhat.com
State New
Series sched/fair: Make the BW replenish timer expire in hardirq context for PREEMPT_RT

Commit Message

Valentin Schneider Oct. 30, 2023, 2:51 p.m. UTC
  Consider the following scenario under PREEMPT_RT:
o A CFS task p0 gets throttled while holding read_lock(&lock)
o A task p1 blocks on write_lock(&lock), making further readers enter the
  slowpath
o A ktimers or ksoftirqd task blocks on read_lock(&lock)

If the cfs_bandwidth.period_timer that would replenish p0's runtime is
enqueued on the same CPU as the one where ktimers/ksoftirqd is blocked on
read_lock(&lock), this creates a circular dependency: the timer callback
runs in the blocked ktimers/ksoftirqd thread, which waits on p1's
write_lock(), which waits on p0's read_lock(), which p0 cannot release
until the timer replenishes its runtime.

This has been observed to happen with:
o fs/eventpoll.c::ep->lock
o net/netlink/af_netlink.c::nl_table_lock (after hand-fixing the above)
but it can trigger with any rwlock that can be acquired in both process
and softirq contexts.
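
To make the cycle concrete, here is a minimal sketch of the lock pattern
(hypothetical names, not the code from the actual reports):

#include <linux/spinlock.h>

static DEFINE_RWLOCK(example_lock);	/* stands in for ep->lock etc. */

/* p0: CFS task; may run out of bandwidth while holding the read lock */
static void p0_reader(void)
{
	read_lock(&example_lock);
	/* ... p0 is throttled here, still holding the lock ... */
	read_unlock(&example_lock);
}

/* p1: blocks on the write lock, pushing later readers into the slowpath */
static void p1_writer(void)
{
	write_lock(&example_lock);
	write_unlock(&example_lock);
}

/*
 * Softirq path, run in the ktimers/ksoftirqd thread on PREEMPT_RT: it
 * blocks behind p1, and the period_timer callback that would unthrottle
 * p0 is serviced by this very thread, closing the cycle.
 */
static void softirq_reader(void)
{
	read_lock(&example_lock);
	read_unlock(&example_lock);
}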

The linux-rt tree has had
  1ea50f9636f0 ("softirq: Use a dedicated thread for timer wakeups.")
which helped this scenario for non-rwlock locks by ensuring the throttled
task would get PI'd to FIFO1 (ktimers' default priority). Unfortunately,
rwlocks cannot sanely do PI as they allow multiple readers.

Make the period_timer expire in hardirq context under PREEMPT_RT. The
callback for this timer can end up doing a lot of work, but this is
mitigated somewhat when using nohz_full / CPU isolation: the timers *are*
pinned, but to the CPUs the taskgroups are created on, which are usually
going to be housekeeping (HK) CPUs.

Link: https://lore.kernel.org/all/xhsmhttqvnall.mognet@vschneid.remote.csb/
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
 kernel/sched/fair.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
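
For readers unfamiliar with the hrtimer modes involved, here is a minimal
sketch of the distinction the patch below relies on (illustrative only;
the example_* names are made up):

#include <linux/hrtimer.h>

static struct hrtimer example_timer;

static enum hrtimer_restart example_cb(struct hrtimer *timer)
{
	/*
	 * With a HRTIMER_MODE_*_HARD mode this runs straight from the
	 * timer hardirq. Without it, on PREEMPT_RT, it runs from the
	 * HRTIMER softirq, i.e. the ktimers/ksoftirqd thread that the
	 * scenario above shows can be blocked on a rwlock.
	 */
	return HRTIMER_NORESTART;
}

static void example_setup(void)
{
	/* _PINNED: stay on the arming CPU; _HARD: expire in hardirq
	 * context even on PREEMPT_RT. */
	hrtimer_init(&example_timer, CLOCK_MONOTONIC,
		     HRTIMER_MODE_ABS_PINNED_HARD);
	example_timer.function = example_cb;
}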
  

Comments

Peter Zijlstra Oct. 31, 2023, 4:01 p.m. UTC | #1
On Mon, Oct 30, 2023 at 03:51:04PM +0100, Valentin Schneider wrote:
> Consider the following scenario under PREEMPT_RT:
> o A CFS task p0 gets throttled while holding read_lock(&lock)
> o A task p1 blocks on write_lock(&lock), making further readers enter the
>   slowpath
> o A ktimers or ksoftirqd task blocks on read_lock(&lock)
> [...]
> 
> Make the period_timer expire in hardirq context under PREEMPT_RT. The
> callback for this timer can end up doing a lot of work, but this is
> mitigated somewhat when using nohz_full / CPU isolation: the timers *are*
> pinned, but on the CPUs the taskgroups are created on, which is usually
> going to be HK CPUs.

Moo... so I think 'people' have been pushing towards changing the
bandwidth thing to only throttle on the return-to-user path. This solves
the kernel side of the lock holder 'preemption' issue.

I'm thinking working on that is saner than adding this O(n) cgroup loop
to hard-irq context. Hmm?
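
For context, the O(n) work in question is the walk over every throttled
group performed from the period timer; a simplified sketch of the shape of
that loop (not the verbatim mainline code):

/*
 * Simplified sketch: the period timer eventually walks every throttled
 * cfs_rq of the bandwidth group, so the callback's work grows with the
 * number of throttled groups.
 */
static void distribute_cfs_runtime_sketch(struct cfs_bandwidth *cfs_b)
{
	struct cfs_rq *cfs_rq;

	rcu_read_lock();
	list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
				throttled_list) {
		/* grant runtime to cfs_rq and unthrottle it (elided) */
	}
	rcu_read_unlock();
}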
  
Sebastian Andrzej Siewior Nov. 2, 2023, 4:19 p.m. UTC | #2
On 2023-10-31 17:01:20 [+0100], Peter Zijlstra wrote:
> On Mon, Oct 30, 2023 at 03:51:04PM +0100, Valentin Schneider wrote:
> > task would get PI'd to FIFO1 (ktimers' default priority). Unfortunately,
> > rwlocks cannot sanely do PI as they allow multiple readers.
> I'm thinking working on that is saner than adding this O(n) cgroup loop
> to hard-irq context. Hmm?

I have plans to get rid of the softirq issue, and the usual response to
"bad" or inefficient rwlocks is "get rid of rwlocks then". So…

Then I looked at the patch and it only swaps the flag, nothing else,
which hardly seems like it could be enough. So I looked at
sched_cfs_period_timer():
| static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
| {
…
|         raw_spin_lock_irqsave(&cfs_b->lock, flags);
…
|         raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
| 
|         return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
| }

Judging by this, the whole callback already runs with interrupts
disabled. At least with softirq expiry, interrupts get enabled between
callbacks if multiple of them are invoked…

Sebastian
  

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8767988242ee3..15cf7de865a97 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6236,7 +6236,7 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b, struct cfs_bandwidth *parent)
 	cfs_b->hierarchical_quota = parent ? parent->hierarchical_quota : RUNTIME_INF;
 
 	INIT_LIST_HEAD(&cfs_b->throttled_cfs_rq);
-	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
 	cfs_b->period_timer.function = sched_cfs_period_timer;
 
 	/* Add a random offset so that timers interleave */
@@ -6263,7 +6263,7 @@ void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 
 	cfs_b->period_active = 1;
 	hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
-	hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);
+	hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED_HARD);
 }
 
 static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)