[RFC,v1] sched/deadline: Add more reschedule cases to prio_changed_dl()
Commit Message
I've been tracking down an issue on a ~5.17ish kernel where:
   CPUx                                       CPUy

   <DL task p0 owns an rtmutex M>
   <p0 depletes its runtime, gets throttled>
   <rq switches to the idle task>
                                              <DL task p1 blocks on M, boost/replenish p0>
                                              <No call to resched_curr() happens here>

   [idle task keeps running here until *something*
    accidentally sets TIF_NEED_RESCHED]
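For reference, the boost side of the above ends up in prio_changed_dl() via
check_class_changed(); a heavily abridged sketch of that path (based on
kernel/sched/core.c, not a verbatim quote):

  /*
   * rt_mutex_setprio() re-enqueues the boosted task (with ENQUEUE_REPLENISH
   * for a deadline boost) and then lets the scheduling class react:
   */
  static inline void check_class_changed(struct rq *rq, struct task_struct *p,
                                         const struct sched_class *prev_class,
                                         int oldprio)
  {
          if (prev_class != p->sched_class) {
                  if (prev_class->switched_from)
                          prev_class->switched_from(rq, p);

                  p->sched_class->switched_to(rq, p);
          } else if (oldprio != p->prio || dl_task(p))
                  /* DL task boosted by a DL waiter: same class, so we land here */
                  p->sched_class->prio_changed(rq, p, oldprio);
  }

With the idle task as rq->curr and p0 now the earliest queued deadline task,
the SMP side of the current prio_changed_dl() (see the hunk below) finds no
queued task with an earlier deadline than p0's and never calls resched_curr().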
On that kernel, it is quite easy to trigger using rt-tests's deadline_test
[1] with the test running on isolated CPUs (this reduces the chance of
something unrelated setting TIF_NEED_RESCHED on the idle tasks, making the
issue even more obvious as the hung task detector chimes in).
I haven't been able to reproduce this using a mainline kernel, even if I
revert
2972e3050e35 ("tracing: Make trace_marker{,_raw} stream-like")
which gets rid of the lock involved in the above test, *but* I cannot
convince myself the issue isn't there from looking at the code.
Make prio_changed_dl() issue a reschedule if the current task isn't a
deadline one. While at it, ensure a reschedule is emitted when a
queued-but-not-current task gets boosted with an earlier deadline than
current's.
[1]: https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
kernel/sched/deadline.c | 45 ++++++++++++++++++++++++++---------------
1 file changed, 29 insertions(+), 16 deletions(-)
Comments
Hi,
On 02/02/23 18:28, Valentin Schneider wrote:
> I've been tracking down an issue on a ~5.17ish kernel where:
>
>    CPUx                                       CPUy
>
>    <DL task p0 owns an rtmutex M>
>    <p0 depletes its runtime, gets throttled>
>    <rq switches to the idle task>
>                                               <DL task p1 blocks on M, boost/replenish p0>
>                                               <No call to resched_curr() happens here>
>
>    [idle task keeps running here until *something*
>     accidentally sets TIF_NEED_RESCHED]
>
> On that kernel, it is quite easy to trigger using rt-tests's deadline_test
> [1] with the test running on isolated CPUs (this reduces the chance of
> something unrelated setting TIF_NEED_RESCHED on the idle tasks, making the
> issue even more obvious as the hung task detector chimes in).
>
> I haven't been able to reproduce this using a mainline kernel, even if I
> revert
>
> 2972e3050e35 ("tracing: Make trace_marker{,_raw} stream-like")
>
> which gets rid of the lock involved in the above test, *but* I cannot
> convince myself the issue isn't there from looking at the code.
>
> Make prio_changed_dl() issue a reschedule if the current task isn't a
> deadline one. While at it, ensure a reschedule is emitted when a
> queued-but-not-current task gets boosted with an earlier deadline than
> current's.
As discussed offline I agree this needs fixing, but .. :)
> [1]: https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git
> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
> ---
> kernel/sched/deadline.c | 45 ++++++++++++++++++++++++++---------------
> 1 file changed, 29 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 0d97d54276cc8..faa382ea084c1 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2663,17 +2663,28 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
>  static void prio_changed_dl(struct rq *rq, struct task_struct *p,
>                              int oldprio)
>  {
> -        if (task_on_rq_queued(p) || task_current(rq, p)) {
> -#ifdef CONFIG_SMP
Doesn't this break UP? Don't think earliest_dl etc. are defined in UP.
Thanks,
Juri
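For reference, the fields prio_changed_dl() now touches unconditionally only
exist on SMP builds; a heavily abridged sketch of struct dl_rq
(kernel/sched/sched.h, layout and types approximate):

  struct dl_rq {
          /* runqueue is an rbtree, ordered by deadline */
          struct rb_root_cached   root;

          unsigned int            dl_nr_running;

  #ifdef CONFIG_SMP
          /* deadlines of the currently running and earliest queued tasks */
          struct {
                  u64             curr;
                  u64             next;
          } earliest_dl;

          int                     overloaded;
          ...
  #else
          struct dl_bw            dl_bw;
  #endif
          ...
  };

So on !CONFIG_SMP, rq->dl.earliest_dl and rq->dl.overloaded are simply not
there; code naming them has to be compiled out, not merely branched around.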
On 03/02/23 08:06, Juri Lelli wrote:
> Hi,
>
> On 02/02/23 18:28, Valentin Schneider wrote:
>> I've been tracking down an issue on a ~5.17ish kernel where:
>>
>>    CPUx                                       CPUy
>>
>>    <DL task p0 owns an rtmutex M>
>>    <p0 depletes its runtime, gets throttled>
>>    <rq switches to the idle task>
>>                                               <DL task p1 blocks on M, boost/replenish p0>
>>                                               <No call to resched_curr() happens here>
>>
>>    [idle task keeps running here until *something*
>>     accidentally sets TIF_NEED_RESCHED]
>>
>> On that kernel, it is quite easy to trigger using rt-tests's deadline_test
>> [1] with the test running on isolated CPUs (this reduces the chance of
>> something unrelated setting TIF_NEED_RESCHED on the idle tasks, making the
>> issue even more obvious as the hung task detector chimes in).
>>
>> I haven't been able to reproduce this using a mainline kernel, even if I
>> revert
>>
>> 2972e3050e35 ("tracing: Make trace_marker{,_raw} stream-like")
>>
>> which gets rid of the lock involved in the above test, *but* I cannot
>> convince myself the issue isn't there from looking at the code.
>>
>> Make prio_changed_dl() issue a reschedule if the current task isn't a
>> deadline one. While at it, ensure a reschedule is emitted when a
>> queued-but-not-current task gets boosted with an earlier deadline than
>> current's.
>
> As discussed offline I agree this needs fixing, but .. :)
>
>> [1]: https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git
>> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
>> ---
>> kernel/sched/deadline.c | 45 ++++++++++++++++++++++++++---------------
>> 1 file changed, 29 insertions(+), 16 deletions(-)
>>
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index 0d97d54276cc8..faa382ea084c1 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -2663,17 +2663,28 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
>>  static void prio_changed_dl(struct rq *rq, struct task_struct *p,
>>                              int oldprio)
>>  {
>> -        if (task_on_rq_queued(p) || task_current(rq, p)) {
>> -#ifdef CONFIG_SMP
>
> Doesn't this break UP? Don't think earliest_dl etc. are defined in UP.
>
Indeed, I thought myself clever by getting rid of the ifdefs...
> Thanks,
> Juri
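The catch being pointed out: IS_ENABLED(CONFIG_SMP) only evaluates to a
compile-time constant, so the dead branch is discarded after the whole
function has been parsed and type-checked. References to SMP-only members
therefore still have to compile on UP, which a real #ifdef avoids. A minimal
illustration (not taken from the patch):

  /* Fine on UP: the member reference is removed before it is ever compiled. */
  #ifdef CONFIG_SMP
          if (!rq->dl.overloaded)
                  deadline_queue_pull_task(rq);
  #endif

  /*
   * Broken on UP: the branch is statically dead, but rq->dl.overloaded must
   * still name an existing member for this to build.
   */
  if (IS_ENABLED(CONFIG_SMP)) {
          if (!rq->dl.overloaded)
                  deadline_queue_pull_task(rq);
  }

The v1 hunk as posted: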
@@ -2663,17 +2663,28 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 static void prio_changed_dl(struct rq *rq, struct task_struct *p,
                             int oldprio)
 {
-        if (task_on_rq_queued(p) || task_current(rq, p)) {
-#ifdef CONFIG_SMP
-                /*
-                 * This might be too much, but unfortunately
-                 * we don't have the old deadline value, and
-                 * we can't argue if the task is increasing
-                 * or lowering its prio, so...
-                 */
-                if (!rq->dl.overloaded)
-                        deadline_queue_pull_task(rq);
+        if (!task_on_rq_queued(p))
+                return;
+
+        /*
+         * We don't know if p has a earlier or later deadline, so let's blindly
+         * set a (maybe not needed) rescheduling point.
+         */
+        if (!IS_ENABLED(CONFIG_SMP)) {
+                resched_curr(rq);
+                return;
+        }
 
+        /*
+         * This might be too much, but unfortunately
+         * we don't have the old deadline value, and
+         * we can't argue if the task is increasing
+         * or lowering its prio, so...
+         */
+        if (!rq->dl.overloaded)
+                deadline_queue_pull_task(rq);
+
+        if (task_current(rq, p)) {
                 /*
                  * If we now have a earlier deadline task than p,
                  * then reschedule, provided p is still on this
@@ -2681,14 +2692,16 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
                  */
                 if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline))
                         resched_curr(rq);
-#else
+        } else {
                 /*
-                 * Again, we don't know if p has a earlier
-                 * or later deadline, so let's blindly set a
-                 * (maybe not needed) rescheduling point.
+                 * Current may not be deadline in case p was throttled but we
+                 * have just replenished it (e.g. rt_mutex_setprio()).
+                 *
+                 * Otherwise, if p was given an earlier deadline, reschedule.
                  */
-                resched_curr(rq);
-#endif /* CONFIG_SMP */
+                if (!dl_task(rq->curr) ||
+                    dl_time_before(p->dl.deadline, rq->curr->dl.deadline))
+                        resched_curr(rq);
         }
 }