[-next,V7,1/7] riscv: ftrace: Fixup panic by disabling preemption

Message ID 20230112090603.1295340-2-guoren@kernel.org
State New
Series riscv: Optimize function trace

Commit Message

Guo Ren Jan. 12, 2023, 9:05 a.m. UTC
  From: Andy Chiu <andy.chiu@sifive.com>

On RISC-V, an AUIPC + JALR pair is needed to encode a jump to a target
more than 4K away. This becomes unsafe if we want to enable kernel
preemption and drop the dependency of code patching on stop_machine():
a task may be preempted right after executing the auipc. If the ftrace
destination is changed before the task is switched back in, the jalr
combines the new bits 11:0 with the stale bits XLEN-1:12 from the old
auipc, and the jump lands at a bogus address.

p: patched area performed by dynamic ftrace
ftrace_prologue:
p|      REG_S   ra, -SZREG(sp)
p|      auipc   ra, 0x? ------------> preempted
					...
				change ftrace function
					...
p|      jalr    -?(ra) <------------- switched back
p|      REG_L   ra, -SZREG(sp)
func:
	xxx
	ret
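
To make the failure concrete, here is a minimal user-space sketch (not
part of the patch itself; all addresses are invented for illustration)
of how the executed target mixes the two halves:

#include <stdio.h>
#include <stdint.h>

/* Illustration only, not kernel code: a stale auipc result combined
 * with a freshly patched jalr offset. All addresses are invented. */
int main(void)
{
	uint64_t pc      = 0xffffffff80002000ULL; /* address of the auipc */
	uint64_t old_tgt = 0xffffffff80a01234ULL; /* target before patching */
	uint64_t new_tgt = 0xffffffff80b04321ULL; /* target after patching */

	/* auipc ra, hi20: executed before the task was preempted */
	uint64_t ra = pc + (((old_tgt - pc) + 0x800) & ~0xfffULL);

	/* ...task preempted; ftrace rewrites both instructions... */

	/* jalr lo12(ra): executed after the switch back, new bits 11:0 */
	int64_t lo12 = (int64_t)((new_tgt - pc) << 52) >> 52;

	printf("meant %llx (old) or %llx (new), jumped to %llx\n",
	       (unsigned long long)old_tgt, (unsigned long long)new_tgt,
	       (unsigned long long)(ra + lo12));
	return 0;
}

The printed result is neither the old nor the new trampoline address.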

Fixes: afc76b8b8011 ("riscv: Using PATCHABLE_FUNCTION_ENTRY instead of MCOUNT")
Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
---
 arch/riscv/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  

Comments

Mark Rutland Jan. 12, 2023, 12:16 p.m. UTC | #1
Hi Guo,

On Thu, Jan 12, 2023 at 04:05:57AM -0500, guoren@kernel.org wrote:
> From: Andy Chiu <andy.chiu@sifive.com>
> 
> On RISC-V, an AUIPC + JALR pair is needed to encode a jump to a target
> more than 4K away. This becomes unsafe if we want to enable kernel
> preemption and drop the dependency of code patching on stop_machine():
> a task may be preempted right after executing the auipc. If the ftrace
> destination is changed before the task is switched back in, the jalr
> combines the new bits 11:0 with the stale bits XLEN-1:12 from the old
> auipc, and the jump lands at a bogus address.
> 
> p: patched area performed by dynamic ftrace
> ftrace_prologue:
> p|      REG_S   ra, -SZREG(sp)
> p|      auipc   ra, 0x? ------------> preempted
> 					...
> 				change ftrace function
> 					...
> p|      jalr    -?(ra) <------------- switched back
> p|      REG_L   ra, -SZREG(sp)
> func:
> 	xxx
> 	ret

As mentioned on the last posting, I don't think this is sufficient to fix the
issue. I've replied with more detail there:

  https://lore.kernel.org/lkml/Y7%2F3hoFjS49yy52W@FVFF77S0Q05N/

Even in a non-preemptible SMP kernel, if one CPU can be in the middle of
executing the ftrace_prologue while another CPU is patching the
ftrace_prologue, you have the exact same issue.

For example, if CPU X, while in the prologue, fetches the old AUIPC and
the new JALR (because it races with CPU Y modifying those), CPU X will
branch to the wrong address. The race window is much smaller in the
absence of preemption, but it's still there (and will be exacerbated in
virtual machines, since the hypervisor can preempt a vCPU at any time).

Note that the above is even assuming that instruction fetches are atomic, which
I'm not sure is the case; for example arm64 has special CMODX / "Concurrent
MODification and eXecution of instructions" rules, which mean only certain
instructions can be patched atomically.

Either I'm missing something that provides mutual exclusion between the
patching and execution of the ftrace_prologue, or this patch is not sufficient.

Thanks,
Mark.

> Fixes: afc76b8b8011 ("riscv: Using PATCHABLE_FUNCTION_ENTRY instead of MCOUNT")
> Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> ---
>  arch/riscv/Kconfig | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index e2b656043abf..ee0d39b26794 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -138,7 +138,7 @@ config RISCV
>  	select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
>  	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
>  	select HAVE_FUNCTION_GRAPH_TRACER
> -	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
> +	select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
>  
>  config ARCH_MMAP_RND_BITS_MIN
>  	default 18 if 64BIT
> -- 
> 2.36.1
>
  
Mark Rutland Jan. 12, 2023, 12:57 p.m. UTC | #2
On Thu, Jan 12, 2023 at 12:16:02PM +0000, Mark Rutland wrote:
> Hi Guo,
> 
> On Thu, Jan 12, 2023 at 04:05:57AM -0500, guoren@kernel.org wrote:
> > From: Andy Chiu <andy.chiu@sifive.com>
> > 
> > On RISC-V, an AUIPC + JALR pair is needed to encode a jump to a target
> > more than 4K away. This becomes unsafe if we want to enable kernel
> > preemption and drop the dependency of code patching on stop_machine():
> > a task may be preempted right after executing the auipc. If the ftrace
> > destination is changed before the task is switched back in, the jalr
> > combines the new bits 11:0 with the stale bits XLEN-1:12 from the old
> > auipc, and the jump lands at a bogus address.
> > 
> > p: patched area performed by dynamic ftrace
> > ftrace_prologue:
> > p|      REG_S   ra, -SZREG(sp)
> > p|      auipc   ra, 0x? ------------> preempted
> > 					...
> > 				change ftrace function
> > 					...
> > p|      jalr    -?(ra) <------------- switched back
> > p|      REG_L   ra, -SZREG(sp)
> > func:
> > 	xxx
> > 	ret
> 
> As mentioned on the last posting, I don't think this is sufficient to fix the
> issue. I've replied with more detail there:
> 
>   https://lore.kernel.org/lkml/Y7%2F3hoFjS49yy52W@FVFF77S0Q05N/
> 
> Even in a non-preemptible SMP kernel, if one CPU can be in the middle of
> executing the ftrace_prologue while another CPU is patching the
> ftrace_prologue, you have the exact same issue.
> 
> For example, if CPU X, while in the prologue, fetches the old AUIPC and
> the new JALR (because it races with CPU Y modifying those), CPU X will
> branch to the wrong address. The race window is much smaller in the
> absence of preemption, but it's still there (and will be exacerbated in
> virtual machines, since the hypervisor can preempt a vCPU at any time).

With that in mind, I think your current implementations of ftrace_make_call()
and ftrace_make_nop() have a similar bug. A caller might execute:

	NOP	// not yet patched to AUIPC

				< AUIPC and JALR instructions both patched >

	JALR

... and go to the wrong place.

Assuming individual instruction fetches are atomic, and that you only ever
branch to the same trampoline, you could fix that by always leaving the AUIPC
in place, so that you only patch the JALR to enable/disable the callsite.

Depending on your calling convention, if you have two free GPRs, you might be
able to avoid the stacking of RA by always saving it to a GPR in the callsite,
using a different GPR for the address generation, and having the ftrace
trampoline restore the original RA value, e.g.

	MV	GPR1, ra
	AUIPC	GPR2, high_bits_of(ftrace_caller)
	JALR	ra, low_bits(GPR2)			// only patch this

... which'd save an instruction per callsite.
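
A rough sketch of how that could look (hypothetical helper name, not
the actual riscv ftrace code; it assumes a single aligned 32-bit
instruction store is fetch-atomic): the AUIPC stays resident and only
the JALR word is ever rewritten.

#include <stdint.h>

#define RISCV_INSN_NOP	0x00000013u	/* addi x0, x0, 0 */

/* Hypothetical stand-in for the arch's text-patching primitive. */
extern void patch_insn_write(uint32_t *addr, uint32_t insn);

/*
 * The AUIPC into GPR2 is written once and left in place; GPR2 is a
 * scratch register, so executing it with the JALR NOPped out is
 * harmless. Only the JALR word ever changes.
 */
static void callsite_enable(uint32_t *jalr_slot, uint32_t jalr_insn)
{
	patch_insn_write(jalr_slot, jalr_insn);
}

static void callsite_disable(uint32_t *jalr_slot)
{
	patch_insn_write(jalr_slot, RISCV_INSN_NOP);
}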

Thanks,
Mark.
  
Guo Ren Jan. 28, 2023, 9:37 a.m. UTC | #3
On Thu, Jan 12, 2023 at 8:16 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> Hi Guo,
>
> On Thu, Jan 12, 2023 at 04:05:57AM -0500, guoren@kernel.org wrote:
> > From: Andy Chiu <andy.chiu@sifive.com>
> >
> > On RISC-V, an AUIPC + JALR pair is needed to encode a jump to a target
> > more than 4K away. This becomes unsafe if we want to enable kernel
> > preemption and drop the dependency of code patching on stop_machine():
> > a task may be preempted right after executing the auipc. If the ftrace
> > destination is changed before the task is switched back in, the jalr
> > combines the new bits 11:0 with the stale bits XLEN-1:12 from the old
> > auipc, and the jump lands at a bogus address.
> >
> > p: patched area performed by dynamic ftrace
> > ftrace_prologue:
> > p|      REG_S   ra, -SZREG(sp)
> > p|      auipc   ra, 0x? ------------> preempted
> >                                       ...
> >                               change ftrace function
> >                                       ...
> > p|      jalr    -?(ra) <------------- switched back
> > p|      REG_L   ra, -SZREG(sp)
> > func:
> >       xxx
> >       ret
>
> As mentioned on the last posting, I don't think this is sufficient to fix the
> issue. I've replied with more detail there:
>
>   https://lore.kernel.org/lkml/Y7%2F3hoFjS49yy52W@FVFF77S0Q05N/
>
> Even in a non-preemptible SMP kernel, if one CPU can be in the middle of
> executing the ftrace_prologue while another CPU is patching the
> ftrace_prologue, you have the exact same issue.
>
> For example, if CPU X, while in the prologue, fetches the old AUIPC and
> the new JALR (because it races with CPU Y modifying those), CPU X will
> branch to the wrong address. The race window is much smaller in the
> absence of preemption, but it's still there (and will be exacerbated in
> virtual machines, since the hypervisor can preempt a vCPU at any time).
>
> Note that the above is even assuming that instruction fetches are atomic, which
> I'm not sure is the case; for example arm64 has special CMODX / "Concurrent
> MODification and eXecution of instructions" rules, which mean only certain
> instructions can be patched atomically.
>
> Either I'm missing something that provides mutual exclusion between the
> patching and execution of the ftrace_prologue, or this patch is not sufficient.
This patch is sufficient because riscv isn't the same as arm64: it
uses the default arch_ftrace_update_code(), which goes through stop_machine().
See kernel/trace/ftrace.c:
void __weak arch_ftrace_update_code(int command)
{
        ftrace_run_stop_machine(command);
}
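
For reference, ftrace_run_stop_machine() in the same file (as of this
thread) funnels the whole update through stop_machine():

static int __ftrace_modify_code(void *data)
{
        int *command = data;

        ftrace_modify_all_code(*command);

        return 0;
}

void ftrace_run_stop_machine(int command)
{
        stop_machine(__ftrace_modify_code, &command, NULL);
}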

ps:
 Yes, it's not good, and it's expensive.

>
> Thanks,
> Mark.
>
> > Fixes: afc76b8b8011 ("riscv: Using PATCHABLE_FUNCTION_ENTRY instead of MCOUNT")
> > Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > ---
> >  arch/riscv/Kconfig | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index e2b656043abf..ee0d39b26794 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -138,7 +138,7 @@ config RISCV
> >       select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
> >       select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
> >       select HAVE_FUNCTION_GRAPH_TRACER
> > -     select HAVE_FUNCTION_TRACER if !XIP_KERNEL
> > +     select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
> >
> >  config ARCH_MMAP_RND_BITS_MIN
> >       default 18 if 64BIT
> > --
> > 2.36.1
> >
  
Guo Ren Jan. 28, 2023, 9:45 a.m. UTC | #4
On Thu, Jan 12, 2023 at 8:57 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Thu, Jan 12, 2023 at 12:16:02PM +0000, Mark Rutland wrote:
> > Hi Guo,
> >
> > On Thu, Jan 12, 2023 at 04:05:57AM -0500, guoren@kernel.org wrote:
> > > From: Andy Chiu <andy.chiu@sifive.com>
> > >
> > > On RISC-V, an AUIPC + JALR pair is needed to encode a jump to a target
> > > more than 4K away. This becomes unsafe if we want to enable kernel
> > > preemption and drop the dependency of code patching on stop_machine():
> > > a task may be preempted right after executing the auipc. If the ftrace
> > > destination is changed before the task is switched back in, the jalr
> > > combines the new bits 11:0 with the stale bits XLEN-1:12 from the old
> > > auipc, and the jump lands at a bogus address.
> > >
> > > p: patched area performed by dynamic ftrace
> > > ftrace_prologue:
> > > p|      REG_S   ra, -SZREG(sp)
> > > p|      auipc   ra, 0x? ------------> preempted
> > >                                     ...
> > >                             change ftrace function
> > >                                     ...
> > > p|      jalr    -?(ra) <------------- switched back
> > > p|      REG_L   ra, -SZREG(sp)
> > > func:
> > >     xxx
> > >     ret
> >
> > As mentioned on the last posting, I don't think this is sufficient to fix the
> > issue. I've replied with more detail there:
> >
> >   https://lore.kernel.org/lkml/Y7%2F3hoFjS49yy52W@FVFF77S0Q05N/
> >
> > Even in a non-preemptible SMP kernel, if one CPU can be in the middle of
> > executing the ftrace_prologue while another CPU is patching the
> > ftrace_prologue, you have the exact same issue.
> >
> > For example, if CPU X, while in the prologue, fetches the old AUIPC and
> > the new JALR (because it races with CPU Y modifying those), CPU X will
> > branch to the wrong address. The race window is much smaller in the
> > absence of preemption, but it's still there (and will be exacerbated in
> > virtual machines, since the hypervisor can preempt a vCPU at any time).
>
> With that in mind, I think your current implementations of ftrace_make_call()
> and ftrace_make_nop() have a similar bug. A caller might execute:
>
>         NOP     // not yet patched to AUIPC
>
>                                 < AUIPC and JALR instructions both patched >
>
>         JALR
>
> ... and go to the wrong place.
>
> Assuming individual instruction fetches are atomic, and that you only ever
> branch to the same trampoline, you could fix that by always leaving the AUIPC
> in place, so that you only patch the JALR to enable/disable the callsite.
Yes, always branching to the same trampoline is one of the antidotes.

>
> Depending on your calling convention, if you have two free GPRs, you might be
> able to avoid the stacking of RA by always saving it to a GPR in the callsite,
> using a different GPR for the address generation, and having the ftrace
> trampoline restore the original RA value, e.g.
>
>         MV      GPR1, ra
>         AUIPC   GPR2, high_bits_of(ftrace_caller)
>         JALR    ra, low_bits(GPR2)                     // only patch this
I think you mean temporary registers here. We are at the prologue of a
function, so all of them are available.

But why do you need the extra "MV      GPR1, ra"?

         AUIPC   GPR2, high_bits_of(ftrace_caller)
         JALR    GPR2, low_bits(GPR2)                     // only patch this

We could preserve ra in the trampoline instead:
        MV      XX, ra

>
> ... which'd save an instruction per callsite.
>
> Thanks,
> Mark.
  
Mark Rutland Jan. 30, 2023, 10:54 a.m. UTC | #5
On Sat, Jan 28, 2023 at 05:37:46PM +0800, Guo Ren wrote:
> On Thu, Jan 12, 2023 at 8:16 PM Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > Hi Guo,
> >
> > On Thu, Jan 12, 2023 at 04:05:57AM -0500, guoren@kernel.org wrote:
> > > From: Andy Chiu <andy.chiu@sifive.com>
> > >
> > > On RISC-V, an AUIPC + JALR pair is needed to encode a jump to a target
> > > more than 4K away. This becomes unsafe if we want to enable kernel
> > > preemption and drop the dependency of code patching on stop_machine():
> > > a task may be preempted right after executing the auipc. If the ftrace
> > > destination is changed before the task is switched back in, the jalr
> > > combines the new bits 11:0 with the stale bits XLEN-1:12 from the old
> > > auipc, and the jump lands at a bogus address.
> > >
> > > p: patched area performed by dynamic ftrace
> > > ftrace_prologue:
> > > p|      REG_S   ra, -SZREG(sp)
> > > p|      auipc   ra, 0x? ------------> preempted
> > >                                       ...
> > >                               change ftrace function
> > >                                       ...
> > > p|      jalr    -?(ra) <------------- switched back
> > > p|      REG_L   ra, -SZREG(sp)
> > > func:
> > >       xxx
> > >       ret
> >
> > As mentioned on the last posting, I don't think this is sufficient to fix the
> > issue. I've replied with more detail there:
> >
> >   https://lore.kernel.org/lkml/Y7%2F3hoFjS49yy52W@FVFF77S0Q05N/
> >
> > Even in a non-preemptible SMP kernel, if one CPU can be in the middle of
> > executing the ftrace_prologue while another CPU is patching the
> > ftrace_prologue, you have the exact same issue.
> >
> > For example, if CPU X, while in the prologue, fetches the old AUIPC and
> > the new JALR (because it races with CPU Y modifying those), CPU X will
> > branch to the wrong address. The race window is much smaller in the
> > absence of preemption, but it's still there (and will be exacerbated in
> > virtual machines, since the hypervisor can preempt a vCPU at any time).
> >
> > Note that the above is even assuming that instruction fetches are atomic, which
> > I'm not sure is the case; for example arm64 has special CMODX / "Concurrent
> > MODification and eXecution of instructions" rules, which mean only certain
> > instructions can be patched atomically.
> >
> > Either I'm missing something that provides mutual exclusion between the
> > patching and execution of the ftrace_prologue, or this patch is not sufficient.
> This patch is sufficient because riscv isn't the same as arm64: it
> uses the default arch_ftrace_update_code(), which goes through stop_machine().
> See kernel/trace/ftrace.c:
> void __weak arch_ftrace_update_code(int command)
> {
>         ftrace_run_stop_machine(command);
> }

Ah; sorry, I had misunderstood here, since the commit message spoke in terms of
removing that.

As long as stop_machine() is used, I agree this is safe; sorry for the noise.

> ps:
>  Yes, it's not good, and it's expensive.

We can't have everything! :)

Thanks,
Mark.
  
Guo Ren Feb. 4, 2023, 1:19 a.m. UTC | #6
On Mon, Jan 30, 2023 at 6:54 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Sat, Jan 28, 2023 at 05:37:46PM +0800, Guo Ren wrote:
> > On Thu, Jan 12, 2023 at 8:16 PM Mark Rutland <mark.rutland@arm.com> wrote:
> > >
> > > Hi Guo,
> > >
> > > On Thu, Jan 12, 2023 at 04:05:57AM -0500, guoren@kernel.org wrote:
> > > > From: Andy Chiu <andy.chiu@sifive.com>
> > > >
> > > > On RISC-V, an AUIPC + JALR pair is needed to encode a jump to a target
> > > > more than 4K away. This becomes unsafe if we want to enable kernel
> > > > preemption and drop the dependency of code patching on stop_machine():
> > > > a task may be preempted right after executing the auipc. If the ftrace
> > > > destination is changed before the task is switched back in, the jalr
> > > > combines the new bits 11:0 with the stale bits XLEN-1:12 from the old
> > > > auipc, and the jump lands at a bogus address.
> > > >
> > > > p: patched area performed by dynamic ftrace
> > > > ftrace_prologue:
> > > > p|      REG_S   ra, -SZREG(sp)
> > > > p|      auipc   ra, 0x? ------------> preempted
> > > >                                       ...
> > > >                               change ftrace function
> > > >                                       ...
> > > > p|      jalr    -?(ra) <------------- switched back
> > > > p|      REG_L   ra, -SZREG(sp)
> > > > func:
> > > >       xxx
> > > >       ret
> > >
> > > As mentioned on the last posting, I don't think this is sufficient to fix the
> > > issue. I've replied with more detail there:
> > >
> > >   https://lore.kernel.org/lkml/Y7%2F3hoFjS49yy52W@FVFF77S0Q05N/
> > >
> > > Even in a non-preemptible SMP kernel, if one CPU can be in the middle of
> > > executing the ftrace_prologue while another CPU is patching the
> > > ftrace_prologue, you have the exact same issue.
> > >
> > > For example, if CPU X, while in the prologue, fetches the old AUIPC and
> > > the new JALR (because it races with CPU Y modifying those), CPU X will
> > > branch to the wrong address. The race window is much smaller in the
> > > absence of preemption, but it's still there (and will be exacerbated in
> > > virtual machines, since the hypervisor can preempt a vCPU at any time).
> > >
> > > Note that the above is even assuming that instruction fetches are atomic, which
> > > I'm not sure is the case; for example arm64 has special CMODX / "Concurrent
> > > MODification and eXecution of instructions" rules, which mean only certain
> > > instructions can be patched atomically.
> > >
> > > Either I'm missing something that provides mutual exclusion between the
> > > patching and execution of the ftrace_prologue, or this patch is not sufficient.
> > This patch is sufficient because riscv isn't the same as arm64: it
> > uses the default arch_ftrace_update_code(), which goes through stop_machine().
> > See kernel/trace/ftrace.c:
> > void __weak arch_ftrace_update_code(int command)
> > {
> >         ftrace_run_stop_machine(command);
> > }
>
> Ah; sorry, I had misunderstood here, since the commit message spoke in terms of
> removing that.
>
> As long as stop_machine() is used I agree this is safe; sorry for the noise.
Okay.

Hi Palmer,

Please take Andy's fixup patch. We will keep looking for a way to support PREEMPTION.

>
> > ps:
> >  Yes, it's not good, and it's expensive.
>
> We can't have everything! :)
>
> Thanks,
> Mark.
  

Patch

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index e2b656043abf..ee0d39b26794 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -138,7 +138,7 @@ config RISCV
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
 	select HAVE_FUNCTION_GRAPH_TRACER
-	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
+	select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
 
 config ARCH_MMAP_RND_BITS_MIN
 	default 18 if 64BIT