[v1,1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

Message ID: 20240117031212.1104034-2-nunes.erico@gmail.com
State: New
Series: [v1,1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

Commit Message

Erico Nunes Jan. 17, 2024, 3:12 a.m. UTC
In case a task manages to complete but it took just long enough to also
trigger the timeout handler, the current code results in a refcount
imbalance on lima_pm_idle.

While this can be a rare occurrence, when it happens it may fill user
logs with stack traces such as:

[10136.669170] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/lima/lima_devfreq.c:205 lima_devfreq_record_idle+0xa0/0xb0
...
[10136.669459] pc : lima_devfreq_record_idle+0xa0/0xb0
...
[10136.669628] Call trace:
[10136.669634]  lima_devfreq_record_idle+0xa0/0xb0
[10136.669646]  lima_sched_pipe_task_done+0x5c/0xb0
[10136.669656]  lima_gp_irq_handler+0xa8/0x120
[10136.669666]  __handle_irq_event_percpu+0x48/0x160
[10136.669679]  handle_irq_event+0x4c/0xc0

The imbalance happens because lima_sched_pipe_task_done() already calls
lima_pm_idle for this case if there was no error.
Check the error flag in the timeout handler to ensure we can never run
into this case.

Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
---
 drivers/gpu/drm/lima/lima_sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
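
For reference, the done path referred to above looks roughly like the sketch
below. It is a simplified reconstruction for illustration, not verbatim driver
code: apart from lima_pm_idle(), pipe->error and pipe->current_task, which
appear in the trace and the diff, the field and helper names are approximate.
The relevant point is that the interrupt handler drops the devfreq busy
refcount whenever no error was recorded.

/*
 * Simplified sketch of the done path, reconstructed from the call trace
 * and commit message above; not verbatim driver code.
 */
void lima_sched_pipe_task_done(struct lima_sched_pipe *pipe)
{
	struct lima_sched_task *task = pipe->current_task;
	struct lima_device *ldev = pipe->ldev;        /* approximate field */

	if (pipe->error) {
		/* error case: left to the error/recovery handling */
		schedule_work(&pipe->recover_work);   /* approximate work item */
		return;
	}

	/* no error: drop the devfreq busy refcount taken at job start ... */
	lima_pm_idle(ldev);

	pipe->current_task = NULL;
	pipe->task_fini(pipe);                        /* approximate hook */

	/* ... and signal the task fence; if the timeout handler also runs
	 * for this job, it used to call lima_pm_idle() a second time, which
	 * is the refcount underflow warned about in the trace above.
	 */
	dma_fence_signal(task->fence);
}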
  

Comments

Vasily Khoruzhick Jan. 17, 2024, 6:13 p.m. UTC | #1
On Tue, Jan 16, 2024 at 7:12 PM Erico Nunes <nunes.erico@gmail.com> wrote:
>
> In case a task manages to complete but it took just long enough to also
> trigger the timeout handler, the current code results in a refcount
> imbalance on lima_pm_idle.
>
> While this can be a rare occurrence, when it happens it may fill user
> logs with stack traces such as:
>
> [10136.669170] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/lima/lima_devfreq.c:205 lima_devfreq_record_idle+0xa0/0xb0
> ...
> [10136.669459] pc : lima_devfreq_record_idle+0xa0/0xb0
> ...
> [10136.669628] Call trace:
> [10136.669634]  lima_devfreq_record_idle+0xa0/0xb0
> [10136.669646]  lima_sched_pipe_task_done+0x5c/0xb0
> [10136.669656]  lima_gp_irq_handler+0xa8/0x120
> [10136.669666]  __handle_irq_event_percpu+0x48/0x160
> [10136.669679]  handle_irq_event+0x4c/0xc0
>
> The imbalance happens because lima_sched_pipe_task_done() already calls
> lima_pm_idle for this case if there was no error.
> Check the error flag in the timeout handler to ensure we can never run
> into this case.
>
> Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>

  
Qiang Yu Jan. 18, 2024, 1:36 a.m. UTC | #2
So this is caused by the same job triggering both done and timeout handling?
I think a better way to solve this is to make sure only one handler
(done or timeout) processes the job, instead of just making the
lima_pm_idle() call unique.

Regards,
Qiang

  
Erico Nunes Jan. 18, 2024, 11:14 a.m. UTC | #3
On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <yuq825@gmail.com> wrote:
>
> So this is caused by the same job triggering both done and timeout handling?
> I think a better way to solve this is to make sure only one handler
> (done or timeout) processes the job, instead of just making the
> lima_pm_idle() call unique.

It's not very clear to me how to best ensure that, with the drm_sched
software timeout and the irq happening potentially at the same time.
I think patch 4 in this series describes and covers the most common
case that this would be hit. So maybe now this patch could be dropped
in favour of just that one.
But since this was a bit hard to reproduce and I'm not sure the issue
is entirely covered by that, I just decided to keep this small change
as it prevented all the stack trace reproducers I was able to come up
with.

Erico
  
Qiang Yu Jan. 19, 2024, 1:50 a.m. UTC | #4
On Thu, Jan 18, 2024 at 7:14 PM Erico Nunes <nunes.erico@gmail.com> wrote:
>
> On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <yuq825@gmail.com> wrote:
> >
> > So this is caused by the same job triggering both done and timeout handling?
> > I think a better way to solve this is to make sure only one handler
> > (done or timeout) processes the job, instead of just making the
> > lima_pm_idle() call unique.
>
> It's not very clear to me how to best ensure that, with the drm_sched
> software timeout and the irq happening potentially at the same time.
This could be done by stopping the scheduler from running more jobs and
disabling the GP/PP interrupt. Then, after syncing the irq, no new irq
can get in while we are handling the timeout.
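
For illustration, that ordering would look roughly like the sketch below.
The task_mask_irq() hook and the pipe->irq field are hypothetical names
here, and the other helper names are approximate; the actual plumbing
would be part of such a new patch.

static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job)
{
	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
	struct lima_sched_task *task = to_lima_task(job);

	/* stop the scheduler so no new job reaches the hardware */
	drm_sched_stop(&pipe->base, &task->base);

	/*
	 * Mask the GP/PP interrupt, then wait for any handler that is
	 * already running to finish. After this point the done path can
	 * no longer race with the reset below.
	 */
	pipe->task_mask_irq(pipe);      /* hypothetical hook */
	synchronize_irq(pipe->irq);     /* assumes the pipe stores its irq line */

	/* ... existing reset/resubmit logic continues unchanged ... */

	return DRM_GPU_SCHED_STAT_NOMINAL;
}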

> I think patch 4 in this series describes and covers the most common
> case that this would be hit. So maybe now this patch could be dropped
> in favour of just that one.
Yes.

> But since this was a bit hard to reproduce and I'm not sure the issue
> is entirely covered by that, I just decided to keep this small change
> as it prevented all the stack trace reproducers I was able to come up
> with.
>
> Erico
  
Erico Nunes Jan. 23, 2024, 11:19 p.m. UTC | #5
On Fri, Jan 19, 2024 at 2:50 AM Qiang Yu <yuq825@gmail.com> wrote:
>
> On Thu, Jan 18, 2024 at 7:14 PM Erico Nunes <nunes.erico@gmail.com> wrote:
> >
> > On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <yuq825@gmail.com> wrote:
> > >
> > > So this is caused by the same job triggering both done and timeout handling?
> > > I think a better way to solve this is to make sure only one handler
> > > (done or timeout) processes the job, instead of just making the
> > > lima_pm_idle() call unique.
> >
> > It's not very clear to me how to best ensure that, with the drm_sched
> > software timeout and the irq happening potentially at the same time.
> This could be done by stopping the scheduler from running more jobs and
> disabling the GP/PP interrupt. Then, after syncing the irq, no new irq
> can get in while we are handling the timeout.
>
> > I think patch 4 in this series describes and covers the most common
> > case that this would be hit. So maybe now this patch could be dropped
> > in favour of just that one.
> Yes.

After dropping the patch while testing to send v2, I managed to
reproduce this issue again.
Looking at a trace it seems that this actually happened with the test workload:
lima_sched_timedout_job -> (fence is not signaled and the new fence
check passes) -> irq happens, preempting lima_sched_timedout_job; the
fence is signaled and lima_pm_idle is called once from
lima_sched_pipe_task_done -> lima_sched_timedout_job continues and
calls lima_pm_idle again
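
To make the window concrete, below is a simplified sketch of the timeout
handler with the race point marked. The early fence check is roughly what
patch 4 of the series adds; only the lines that also appear in the diff
below are exact, the rest is approximate or elided.

static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job)
{
	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
	struct lima_sched_task *task = to_lima_task(job);
	struct lima_device *ldev = pipe->ldev;

	/* roughly the check added by patch 4: bail out on spurious timeouts */
	if (dma_fence_is_signaled(task->fence))
		return DRM_GPU_SCHED_STAT_NOMINAL;

	/*
	 * Race window: the GP/PP irq can still fire here, signal the fence
	 * and call lima_pm_idle() from lima_sched_pipe_task_done(), which
	 * only happens when no error was recorded (!pipe->error).
	 */

	/* ... existing stop/reset logic elided ... */

	/*
	 * This patch: only drop the refcount when the done path did not,
	 * i.e. when the pipe really is in the error state, so the two
	 * lima_pm_idle() calls become mutually exclusive.
	 */
	if (pipe->error)
		lima_pm_idle(ldev);

	drm_sched_resubmit_jobs(&pipe->base);
	drm_sched_start(&pipe->base, true);

	return DRM_GPU_SCHED_STAT_NOMINAL;
}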

So I think we still need this patch to at least prevent the bug with
the current software timeout. If we move to the hardware watchdog
timeout later we might be able to remove it anyway, but that will
still require separate work and testing.

Erico
  
Qiang Yu Jan. 24, 2024, 1:03 a.m. UTC | #6
On Wed, Jan 24, 2024 at 7:19 AM Erico Nunes <nunes.erico@gmail.com> wrote:
>
> On Fri, Jan 19, 2024 at 2:50 AM Qiang Yu <yuq825@gmail.com> wrote:
> >
> > On Thu, Jan 18, 2024 at 7:14 PM Erico Nunes <nunes.erico@gmail.com> wrote:
> > >
> > > On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <yuq825@gmail.com> wrote:
> > > >
> > > > So this is caused by the same job triggering both done and timeout handling?
> > > > I think a better way to solve this is to make sure only one handler
> > > > (done or timeout) processes the job, instead of just making the
> > > > lima_pm_idle() call unique.
> > >
> > > It's not very clear to me how to best ensure that, with the drm_sched
> > > software timeout and the irq happening potentially at the same time.
> > This could be done by stopping the scheduler from running more jobs and
> > disabling the GP/PP interrupt. Then, after syncing the irq, no new irq
> > can get in while we are handling the timeout.
> >
> > > I think patch 4 in this series describes and covers the most common
> > > case that this would be hit. So maybe now this patch could be dropped
> > > in favour of just that one.
> > Yes.
>
> After dropping the patch while testing to send v2, I managed to
> reproduce this issue again.
> Looking at a trace it seems that this actually happened with the test workload:
> lima_sched_timedout_job -> (fence is not signaled and the new fence
> check passes) -> irq happens, preempting lima_sched_timedout_job; the
> fence is signaled and lima_pm_idle is called once from
> lima_sched_pipe_task_done -> lima_sched_timedout_job continues and
> calls lima_pm_idle again
>
> So I think we still need this patch to at least prevent the bug with
> the current software timeout. If we move to the hardware watchdog
> timeout later we might be able to remove it anyway, but that will
> still require separate work and testing.
>
Then you do need to solve the IRQ and reset race by disabling the GP/PP
IRQ and syncing the irq after drm_sched_stop(). I mean you may keep
patch 4 for the spurious timeout, and add a new patch to solve the IRQ
and reset race.

Making the lima_pm_idle() call conditional does not solve the race
completely; other reset logic may also conflict if the IRQ and reset
are not sequenced.
  

Patch

diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index c3bf8cda8498..66317296d831 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -427,7 +427,8 @@  static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
 	pipe->current_vm = NULL;
 	pipe->current_task = NULL;
 
-	lima_pm_idle(ldev);
+	if (pipe->error)
+		lima_pm_idle(ldev);
 
 	drm_sched_resubmit_jobs(&pipe->base);
 	drm_sched_start(&pipe->base, true);