[v2,3/4] sched/rt: use put_task_struct_atomic_safe() to avoid potential splat

Message ID 20230120150246.20797-4-wander@redhat.com
State New
Series Fix put_task_struct() calls under PREEMPT_RT

Commit Message

Wander Lairson Costa Jan. 20, 2023, 3:02 p.m. UTC
  rto_push_irq_work_func() is called in hardirq context, and it calls
push_rt_task(), which calls put_task_struct().

If the kernel is compiled with PREEMPT_RT and put_task_struct() reaches
zero usage count, it triggers a splat because __put_task_struct()
indirectly acquires sleeping locks.

The put_task_struct() call pairs with an earlier get_task_struct(),
which makes the probability that the usage count reaches zero pretty
low. In any case, let's play it safe and use the atomic-safe version.

Signed-off-by: Wander Lairson Costa <wander@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/sched/rt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
  

Comments

Steven Rostedt Jan. 25, 2023, 12:16 a.m. UTC | #1
On Fri, 20 Jan 2023 12:02:41 -0300
Wander Lairson Costa <wander@redhat.com> wrote:

> rto_push_irq_work_func() is called in hardirq context, and it calls
> push_rt_task(), which calls put_task_struct().
> 
> If the kernel is compiled with PREEMPT_RT and put_task_struct() reaches
> zero usage count, it triggers a splat because __put_task_struct()
> indirectly acquires sleeping locks.
> 
> The put_task_struct() call pairs with an earlier get_task_struct(),
> which makes the probability that the usage count reaches zero pretty
> low. In any case, let's play it safe and use the atomic-safe version.
> 
> Signed-off-by: Wander Lairson Costa <wander@redhat.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> ---
>  kernel/sched/rt.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 

For what it's worth:

Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

-- Steve

> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index ed2a47e4ddae..30a4e9607bec 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -2147,7 +2147,7 @@ static int push_rt_task(struct rq *rq, bool pull)
>  		/*
>  		 * Something has shifted, try again.
>  		 */
> -		put_task_struct(next_task);
> +		put_task_struct_atomic_safe(next_task);
>  		next_task = task;
>  		goto retry;
>  	}
> @@ -2160,7 +2160,7 @@ static int push_rt_task(struct rq *rq, bool pull)
>  
>  	double_unlock_balance(rq, lowest_rq);
>  out:
> -	put_task_struct(next_task);
> +	put_task_struct_atomic_safe(next_task);
>  
>  	return ret;
>  }
  

Patch

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ed2a47e4ddae..30a4e9607bec 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2147,7 +2147,7 @@ static int push_rt_task(struct rq *rq, bool pull)
 		/*
 		 * Something has shifted, try again.
 		 */
-		put_task_struct(next_task);
+		put_task_struct_atomic_safe(next_task);
 		next_task = task;
 		goto retry;
 	}
@@ -2160,7 +2160,7 @@ static int push_rt_task(struct rq *rq, bool pull)
 
 	double_unlock_balance(rq, lowest_rq);
 out:
-	put_task_struct(next_task);
+	put_task_struct_atomic_safe(next_task);
 
 	return ret;
 }