[1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler

Message ID a17a14abfb81cb0eea77c2ee10d7fc98d5d5a73e.1684120990.git.zegao@tencent.com
State New
Series Make fprobe + rethook immune to recursion

Commit Message

Ze Gao May 15, 2023, 3:26 a.m. UTC
  This patch replaces preempt_{disable, enable} with their notrace
counterparts in rethook_trampoline_handler() so there is no risk of stack
recursion or overflow introduced by preempt_count_{add, sub} in
fprobe + rethook context.

Signed-off-by: Ze Gao <zegao@tencent.com>
---
 kernel/trace/rethook.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
  

Comments

Masami Hiramatsu (Google) May 16, 2023, 4:25 a.m. UTC | #1
Hi Ze Gao,

Thanks for the patch.

On Mon, 15 May 2023 11:26:38 +0800
Ze Gao <zegao2021@gmail.com> wrote:

> This patch replace preempt_{disable, enable} with its corresponding
> notrace version in rethook_trampoline_handler so no worries about stack
> recursion or overflow introduced by preempt_count_{add, sub} under
> fprobe + rethook context.

So, have you ever seen such a recursion or preempt_count overflow case?

I intended to use the normal preempt_disable() here because it does NOT
prohibit any function-trace call (note that both kprobes and
fprobe check recursive calls by themselves), but it is used by the
preempt_onoff tracer.
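
For reference, the difference between the two is roughly the following
(a simplified paraphrase of include/linux/preempt.h; the exact definitions
vary with CONFIG_DEBUG_PREEMPT and CONFIG_PREEMPT_TRACER):

```c
/* Simplified sketch: with preemption debugging/tracing enabled,
 * preempt_count_add() is an out-of-line, traceable function, while
 * __preempt_count_add() is an inline notrace helper. */
#define preempt_disable() \
do { \
	preempt_count_add(1); \
	barrier(); \
} while (0)

#define preempt_disable_notrace() \
do { \
	__preempt_count_add(1); \
	barrier(); \
} while (0)
```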

Thanks,

> 
> Signed-off-by: Ze Gao <zegao@tencent.com>
> ---
>  kernel/trace/rethook.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
> index 32c3dfdb4d6a..60f6cb2b486b 100644
> --- a/kernel/trace/rethook.c
> +++ b/kernel/trace/rethook.c
> @@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
>  	 * These loops must be protected from rethook_free_rcu() because those
>  	 * are accessing 'rhn->rethook'.
>  	 */
> -	preempt_disable();
> +	preempt_disable_notrace();
>  
>  	/*
>  	 * Run the handler on the shadow stack. Do not unlink the list here because
> @@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
>  		first = first->next;
>  		rethook_recycle(rhn);
>  	}
> -	preempt_enable();
> +	preempt_enable_notrace();
>  
>  	return correct_ret_addr;
>  }
> -- 
> 2.40.1
>
  
Masami Hiramatsu (Google) May 16, 2023, 5:33 a.m. UTC | #2
On Tue, 16 May 2023 13:25:02 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> Hi Ze Gao,
> 
> Thanks for the patch.
> 
> On Mon, 15 May 2023 11:26:38 +0800
> Ze Gao <zegao2021@gmail.com> wrote:
> 
> > This patch replace preempt_{disable, enable} with its corresponding
> > notrace version in rethook_trampoline_handler so no worries about stack
> > recursion or overflow introduced by preempt_count_{add, sub} under
> > fprobe + rethook context.
> 
> So, have you ever seen such a recursion or preempt_count overflow case?
> 
> I intended to use the normal preempt_disable() here because it does NOT
> prohibit any function-trace call (note that both kprobes and
> fprobe check recursive calls by themselves), but it is used by the
> preempt_onoff tracer.

OK, I got the point.

  rethook_trampoline_handler() {
    preempt_disable() {
      preempt_count_add() { => fprobe and set rethook
      } => rethook_trampoline_handler() {
        preempt_disable() {
          ...   

So the problem is that the preempt_disable() macro calls preempt_count_add(),
which itself can be traced.

So, let's make it notrace.

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

and

Fixes: 54ecbe6f1ed5 ("rethook: Add a generic return hook")
Cc: stable@vger.kernel.org

Thank you,

> 
> Thanks,
> 
> > 
> > Signed-off-by: Ze Gao <zegao@tencent.com>
> > ---
> >  kernel/trace/rethook.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
> > index 32c3dfdb4d6a..60f6cb2b486b 100644
> > --- a/kernel/trace/rethook.c
> > +++ b/kernel/trace/rethook.c
> > @@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
> >  	 * These loops must be protected from rethook_free_rcu() because those
> >  	 * are accessing 'rhn->rethook'.
> >  	 */
> > -	preempt_disable();
> > +	preempt_disable_notrace();
> >  
> >  	/*
> >  	 * Run the handler on the shadow stack. Do not unlink the list here because
> > @@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
> >  		first = first->next;
> >  		rethook_recycle(rhn);
> >  	}
> > -	preempt_enable();
> > +	preempt_enable_notrace();
> >  
> >  	return correct_ret_addr;
> >  }
> > -- 
> > 2.40.1
> > 
> 
> 
> -- 
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
  

Patch

diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
index 32c3dfdb4d6a..60f6cb2b486b 100644
--- a/kernel/trace/rethook.c
+++ b/kernel/trace/rethook.c
@@ -288,7 +288,7 @@  unsigned long rethook_trampoline_handler(struct pt_regs *regs,
 	 * These loops must be protected from rethook_free_rcu() because those
 	 * are accessing 'rhn->rethook'.
 	 */
-	preempt_disable();
+	preempt_disable_notrace();
 
 	/*
 	 * Run the handler on the shadow stack. Do not unlink the list here because
@@ -321,7 +321,7 @@  unsigned long rethook_trampoline_handler(struct pt_regs *regs,
 		first = first->next;
 		rethook_recycle(rhn);
 	}
-	preempt_enable();
+	preempt_enable_notrace();
 
 	return correct_ret_addr;
 }