[v7,0/3] Introduce put_task_struct_atomic_safe()

Message ID 20230425114307.36889-1-wander@redhat.com
Series Introduce put_task_struct_atomic_safe()

Message

Wander Lairson Costa April 25, 2023, 11:43 a.m. UTC
  The put_task_struct() function reduces a usage counter and invokes
__put_task_struct() when the counter reaches zero.

In the case of __put_task_struct(), it indirectly acquires a spinlock,
which operates as a sleeping lock under the PREEMPT_RT configuration.
As a result, invoking put_task_struct() within an atomic context is
not feasible for real-time (RT) kernels.
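As a rough illustration of the mechanism described above, here is a minimal userspace model of the refcounting (names such as task_model are hypothetical stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Simplified model: the last put (counter hitting zero) triggers the
 * cleanup. In the kernel, that cleanup path may acquire a spinlock
 * that sleeps under PREEMPT_RT. */
struct task_model {
    atomic_int usage;
};

static void __put_task_model(struct task_model *t)
{
    /* stands in for __put_task_struct(): releases task resources */
    free(t);
}

static void put_task_model(struct task_model *t)
{
    /* atomic_fetch_sub() returns the value before the decrement,
     * so a return value of 1 means the counter just reached zero */
    if (atomic_fetch_sub(&t->usage, 1) == 1)
        __put_task_model(t);
}
```

The point of the sketch is only that the caller of the *last* put, whatever context it runs in, is the one that executes the cleanup.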

One practical example is a splat inside inactive_task_timer(), which is
called in an interrupt context:

CPU: 1 PID: 2848 Comm: life Kdump: loaded Tainted: G W ---------
Hardware name: HP ProLiant DL388p Gen8, BIOS P70 07/15/2012
Call Trace:
 dump_stack_lvl+0x57/0x7d
 mark_lock_irq.cold+0x33/0xba
 ? stack_trace_save+0x4b/0x70
 ? save_trace+0x55/0x150
 mark_lock+0x1e7/0x400
 mark_usage+0x11d/0x140
 __lock_acquire+0x30d/0x930
 lock_acquire.part.0+0x9c/0x210
 ? refill_obj_stock+0x3d/0x3a0
 ? rcu_read_lock_sched_held+0x3f/0x70
 ? trace_lock_acquire+0x38/0x140
 ? lock_acquire+0x30/0x80
 ? refill_obj_stock+0x3d/0x3a0
 rt_spin_lock+0x27/0xe0
 ? refill_obj_stock+0x3d/0x3a0
 refill_obj_stock+0x3d/0x3a0
 ? inactive_task_timer+0x1ad/0x340
 kmem_cache_free+0x357/0x560
 inactive_task_timer+0x1ad/0x340
 ? switched_from_dl+0x2d0/0x2d0
 __run_hrtimer+0x8a/0x1a0
 __hrtimer_run_queues+0x91/0x130
 hrtimer_interrupt+0x10f/0x220
 __sysvec_apic_timer_interrupt+0x7b/0xd0
 sysvec_apic_timer_interrupt+0x4f/0xd0
 ? asm_sysvec_apic_timer_interrupt+0xa/0x20
 asm_sysvec_apic_timer_interrupt+0x12/0x20
RIP: 0033:0x7fff196bf6f5

To address this issue, this patch series introduces a new function
called put_task_struct_atomic_safe(). When compiled with the
PREEMPT_RT configuration, this function defers the call to
__put_task_struct() to a process context.
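The deferral idea can be sketched in a simplified userspace model. Everything here is illustrative (the actual series defers via call_rcu() so that __put_task_struct() runs in a sleepable context; the explicit list below merely models "run the cleanup later, elsewhere"):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct task_model {
    atomic_int usage;
    struct task_model *next;   /* deferred-cleanup list linkage */
};

static struct task_model *deferred_head;

static void __put_task_model(struct task_model *t) { free(t); }

static void put_task_model_atomic_safe(struct task_model *t, bool preempt_rt)
{
    if (atomic_fetch_sub(&t->usage, 1) != 1)
        return;                    /* not the last reference */
    if (preempt_rt) {
        /* cannot sleep here: defer the cleanup instead of running it */
        t->next = deferred_head;
        deferred_head = t;
    } else {
        __put_task_model(t);       /* non-RT: clean up immediately */
    }
}

/* runs later, in a context where sleeping is allowed */
static void process_deferred(void)
{
    while (deferred_head) {
        struct task_model *t = deferred_head;
        deferred_head = t->next;
        __put_task_model(t);
    }
}
```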

Additionally, the patch series converts the known problematic call
sites to the new function.

Changelog
=========

v1:
* Initial implementation fixing the splat.

v2:
* Isolate the logic in its own function.
* Fix two more cases caught in review.

v3:
* Change __put_task_struct() to handle the issue internally.

v4:
* Explain why call_rcu() is safe to call from interrupt context.

v5:
* Explain why __put_task_struct() doesn't conflict with
  put_task_struct_rcu_user().

v6:
* As per Sebastian's review, revert to the v2 implementation, which
  uses a distinct function.
* Add a check in put_task_struct() to warn when called from a
  non-sleepable context.
* Address more call sites.

v7:
* Fix typos.
* Add an explanation why the new function doesn't conflict with
  delayed_free_task().

Wander Lairson Costa (3):
  sched/core: warn on call put_task_struct in invalid context
  sched/task: Add the put_task_struct_atomic_safe() function
  treewide: replace put_task_struct() with the atomic safe version

 include/linux/sched/task.h | 49 ++++++++++++++++++++++++++++++++++++++
 kernel/events/core.c       |  6 ++---
 kernel/fork.c              |  8 +++++++
 kernel/locking/rtmutex.c   | 10 ++++----
 kernel/sched/core.c        |  6 ++---
 kernel/sched/deadline.c    | 16 ++++++-------
 kernel/sched/rt.c          |  4 ++--
 7 files changed, 78 insertions(+), 21 deletions(-)
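For illustration, the context check added by the first patch can be modeled roughly as follows (preemptible_now and put_task_ref are hypothetical stand-ins; the real patch relies on the kernel's own context checks, not a flag like this):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* stand-in for "are we in a context where sleeping is allowed?" */
static bool preemptible_now = true;
static int warnings;

struct task_ref { atomic_int usage; };

static void put_task_ref(struct task_ref *t)
{
    /* model of the new warning: a put that might run the cleanup
     * from a non-sleepable context is flagged */
    if (!preemptible_now) {
        warnings++;
        fprintf(stderr, "WARNING: put_task_ref() in non-sleepable context\n");
    }
    if (atomic_fetch_sub(&t->usage, 1) == 1)
        free(t);
}
```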
  

Comments

Valentin Schneider April 26, 2023, 12:05 p.m. UTC | #1
On 25/04/23 08:43, Wander Lairson Costa wrote:
> The put_task_struct() function reduces a usage counter and invokes
> __put_task_struct() when the counter reaches zero.
>
> [...]

It took me a bit of time to grok the put_task_struct_rcu_user() vs
delayed_free_task() vs put_task_struct_atomic_safe() situation, but other
than that the patches LGTM.

Reviewed-by: Valentin Schneider <vschneid@redhat.com>
  
Waiman Long April 26, 2023, 5:44 p.m. UTC | #2
On 4/25/23 07:43, Wander Lairson Costa wrote:
> The put_task_struct() function reduces a usage counter and invokes
> __put_task_struct() when the counter reaches zero.
>
> [...]

This patch series looks good to me.

Acked-by: Waiman Long <longman@redhat.com>

I notice that __put_task_struct() invokes quite a bit of cleanup work
from different subsystems, so it may burn quite a few CPU cycles to
complete. That may not be something we want in an atomic context; maybe
we should call call_rcu() irrespective of the PREEMPT_RT setting.
Anyway, that can be a follow-up patch if we want to do it.

Cheers,
Longman