[v7,12/23] sched: Fix proxy/current (push,pull)ability

Message ID 20231220001856.3710363-13-jstultz@google.com
State New
Series Proxy Execution: A generalized form of Priority Inheritance v7

Commit Message

John Stultz Dec. 20, 2023, 12:18 a.m. UTC
  From: Valentin Schneider <valentin.schneider@arm.com>

Proxy execution forms atomic pairs of tasks: The selected task
(scheduling context) and a proxy (execution context). The
selected task, along with the rest of the blocked chain,
follows the proxy wrt CPU placement.
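
As a rough mental model, the pairing can be pictured with a
hypothetical, stripped-down C sketch (not the kernel code; in this
series find_proxy_task() performs the real chain walk under the rq
lock):

struct task;

struct mutex_model {
	struct task *owner;		/* execution context for the chain */
};

struct task {
	struct mutex_model *blocked_on;	/* NULL when runnable */
	struct task *blocked_donor;	/* task donating its context to us */
};

/*
 * Walk from the selected task down the blocked-on chain to the
 * runnable mutex owner: that owner is the proxy.
 */
static struct task *find_proxy_model(struct task *selected)
{
	struct task *p = selected;

	while (p->blocked_on) {
		struct task *owner = p->blocked_on->owner;

		owner->blocked_donor = p;
		p = owner;
	}
	return p;	/* == selected when nothing is blocked */
}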

They can be the same task, in which case push/pull doesn't need any
modification. When they are different, however, care must be taken.
Consider two RT tasks, FIFO1 & FIFO42:

	      ,->  RT42
	      |     | blocked-on
	      |     v
blocked_donor |   mutex
	      |     | owner
	      |     v
	      `--  RT1

   RT1
   RT42

  CPU0            CPU1
   ^                ^
   |                |
  overloaded    !overloaded
  rq prio = 42  rq prio = 0

RT1 is eligible to be pushed to CPU1, but should that happen it will
"carry" RT42 along. Clearly here neither RT1 nor RT42 must be seen as
push/pullable.

Unfortunately, only the selected task is usually dequeued from the
rq, while the proxied execution context (rq->curr) remains on the rq.
This can cause RT1 to be selected for migration by logic such as the
rt pushable_tasks list.

This patch adds a dequeue/enqueue cycle on the proxy task before
__schedule() returns, which lets the sched class logic avoid adding
the now-current task to the pushable list.
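
For reference, the RT class only marks a task pushable when it is
not rq->curr, which is the check the dequeue/enqueue cycle leverages
once rq->curr has been updated to point at the proxy (abridged from
mainline kernel/sched/rt.c at the time of writing; stats updates
elided):

static void enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
{
	struct sched_rt_entity *rt_se = &p->rt;

	if (flags & ENQUEUE_WAKEUP)
		rt_se->timeout = 0;

	enqueue_rt_entity(rt_se, flags);

	/*
	 * Re-enqueueing the proxy after rq->curr is updated fails the
	 * !task_current() test, so it stays off the pushable list.
	 */
	if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
		enqueue_pushable_task(rq, p);
}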

Furthermore, tasks becoming blocked on a mutex don't need an explicit
dequeue/enqueue cycle to be made (push/pull)able: they have to be running
to block on a mutex, thus they will eventually hit put_prev_task().
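
This works because the RT class's put_prev_task() implementation
(again abridged from mainline kernel/sched/rt.c) puts a task that is
still queued back on the pushable list once it stops being current:

static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
{
	update_curr_rt(rq);

	update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 1);

	/*
	 * The previous task needs to be made eligible for pushing
	 * if it is still active
	 */
	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
		enqueue_pushable_task(rq, p);
}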

XXX: pinned tasks becoming unblocked should be removed from the push/pull
lists, but those don't get to see __schedule() straight away.

Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Youssef Esmat <youssefesmat@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-team@android.com
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: John Stultz <jstultz@google.com>
---
v3:
* Tweaked comments & commit message
v5:
* Minor simplifications to utilize the fix earlier
  in the patch series.
* Rework the wording of the commit message to match selected/
  proxy terminology and expand a bit to make it more clear how
  it works.
v6:
* Dropped now-unused proxied value, to be re-added later in the
  series when it is used, as caught by Dietmar
v7:
* Unused function argument fixup
* Commit message nit pointed out by Metin Kaya
* Dropped unproven unlikely() and used sched_proxy_exec()
  in proxy_tag_curr, suggested by Metin Kaya
---
 kernel/sched/core.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)
  

Comments

Metin Kaya Dec. 21, 2023, 3:03 p.m. UTC | #1
On 20/12/2023 12:18 am, John Stultz wrote:
> [...]
> v6:
> * Droped now-unused proxied value, to be re-added later in the

   Dropped

>    series when it is used, as caught by Dietmar
> v7:
> * Unused function argument fixup
> * Commit message nit pointed out by Metin Kaya
> * Droped unproven unlikely() and use sched_proxy_exec()

   ditto

>    in proxy_tag_curr, suggested by Metin Kaya
> ---
>   kernel/sched/core.c | 25 +++++++++++++++++++++++++
>   1 file changed, 25 insertions(+)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 12f5a0618328..f6bf3b62194c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6674,6 +6674,23 @@ find_proxy_task(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
>   }
>   #endif /* SCHED_PROXY_EXEC */
>   
> +static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next)
> +{
> +	if (sched_proxy_exec()) {

Should we immediately return in !sched_proxy_exec() case to save one 
level of indentation?

> +		/*
> +		 * pick_next_task() calls set_next_task() on the selected task
> +		 * at some point, which ensures it is not push/pullable.
> +		 * However, the selected task *and* the ,mutex owner form an

Super-nit: , before mutex should be dropped.

> +		 * atomic pair wrt push/pull.
> +		 *
> +		 * Make sure owner is not pushable. Unfortunately we can only
> +		 * deal with that by means of a dequeue/enqueue cycle. :-/
> +		 */
> +		dequeue_task(rq, next, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
> +		enqueue_task(rq, next, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
> +	}
> +}
> +
>   /*
>    * __schedule() is the main scheduler function.
>    *
> @@ -6796,6 +6813,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
>   		 * changes to task_struct made by pick_next_task().
>   		 */
>   		RCU_INIT_POINTER(rq->curr, next);
> +
> +		if (!task_current_selected(rq, next))
> +			proxy_tag_curr(rq, next);
> +
>   		/*
>   		 * The membarrier system call requires each architecture
>   		 * to have a full memory barrier after updating
> @@ -6820,6 +6841,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
>   		/* Also unlocks the rq: */
>   		rq = context_switch(rq, prev, next, &rf);
>   	} else {
> +		/* In case next was already curr but just got blocked_donor*/

Super-nit: please keep a space before */.

> +		if (!task_current_selected(rq, next))
> +			proxy_tag_curr(rq, next);
> +
>   		rq_unpin_lock(rq, &rf);
>   		__balance_callbacks(rq);
>   		raw_spin_rq_unlock_irq(rq);
  
John Stultz Dec. 21, 2023, 9:02 p.m. UTC | #2
On Thu, Dec 21, 2023 at 7:03 AM Metin Kaya <metin.kaya@arm.com> wrote:
> On 20/12/2023 12:18 am, John Stultz wrote:
> > +static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next)
> > +{
> > +     if (sched_proxy_exec()) {
>
> Should we immediately return in !sched_proxy_exec() case to save one
> level of indentation?

Sure.
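
Something like this, then (untested sketch for the next version):

static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next)
{
	if (!sched_proxy_exec())
		return;

	/*
	 * pick_next_task() calls set_next_task() on the selected task
	 * at some point, which ensures it is not push/pullable.
	 * However, the selected task *and* the mutex owner form an
	 * atomic pair wrt push/pull.
	 *
	 * Make sure owner is not pushable. Unfortunately we can only
	 * deal with that by means of a dequeue/enqueue cycle. :-/
	 */
	dequeue_task(rq, next, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
	enqueue_task(rq, next, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
}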

> > +             /*
> > +              * pick_next_task() calls set_next_task() on the selected task
> > +              * at some point, which ensures it is not push/pullable.
> > +              * However, the selected task *and* the ,mutex owner form an
>
> Super-nit: , before mutex should be dropped.
>
> > +              * atomic pair wrt push/pull.
> > +              *
> > +              * Make sure owner is not pushable. Unfortunately we can only
> > +              * deal with that by means of a dequeue/enqueue cycle. :-/
> > +              */
> > +             dequeue_task(rq, next, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
> > +             enqueue_task(rq, next, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
> > +     }
> > +}
> > +
> >   /*
> >    * __schedule() is the main scheduler function.
> >    *
> > @@ -6796,6 +6813,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
> >                * changes to task_struct made by pick_next_task().
> >                */
> >               RCU_INIT_POINTER(rq->curr, next);
> > +
> > +             if (!task_current_selected(rq, next))
> > +                     proxy_tag_curr(rq, next);
> > +
> >               /*
> >                * The membarrier system call requires each architecture
> >                * to have a full memory barrier after updating
> > @@ -6820,6 +6841,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
> >               /* Also unlocks the rq: */
> >               rq = context_switch(rq, prev, next, &rf);
> >       } else {
> > +             /* In case next was already curr but just got blocked_donor*/
>
> Super-nit: please keep a space before */.

Fixed up.

Thanks for continuing to provide so much detailed feedback!
-john
  

Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 12f5a0618328..f6bf3b62194c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6674,6 +6674,23 @@  find_proxy_task(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
 }
 #endif /* SCHED_PROXY_EXEC */
 
+static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next)
+{
+	if (sched_proxy_exec()) {
+		/*
+		 * pick_next_task() calls set_next_task() on the selected task
+		 * at some point, which ensures it is not push/pullable.
+		 * However, the selected task *and* the mutex owner form an
+		 * atomic pair wrt push/pull.
+		 *
+		 * Make sure owner is not pushable. Unfortunately we can only
+		 * deal with that by means of a dequeue/enqueue cycle. :-/
+		 */
+		dequeue_task(rq, next, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
+		enqueue_task(rq, next, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
+	}
+}
+
 /*
  * __schedule() is the main scheduler function.
  *
@@ -6796,6 +6813,10 @@  static void __sched notrace __schedule(unsigned int sched_mode)
 		 * changes to task_struct made by pick_next_task().
 		 */
 		RCU_INIT_POINTER(rq->curr, next);
+
+		if (!task_current_selected(rq, next))
+			proxy_tag_curr(rq, next);
+
 		/*
 		 * The membarrier system call requires each architecture
 		 * to have a full memory barrier after updating
@@ -6820,6 +6841,10 @@  static void __sched notrace __schedule(unsigned int sched_mode)
 		/* Also unlocks the rq: */
 		rq = context_switch(rq, prev, next, &rf);
 	} else {
+		/* In case next was already curr but just got blocked_donor */
+		if (!task_current_selected(rq, next))
+			proxy_tag_curr(rq, next);
+
 		rq_unpin_lock(rq, &rf);
 		__balance_callbacks(rq);
 		raw_spin_rq_unlock_irq(rq);