x86/kprobes: Fix 1 byte conditional jump target

Message ID 20230204210807.3930-1-namit@vmware.com
State New
Series x86/kprobes: Fix 1 byte conditional jump target

Commit Message

Nadav Amit Feb. 4, 2023, 9:08 p.m. UTC
  From: Nadav Amit <namit@vmware.com>

Commit 3bc753c06dd0 ("kbuild: treat char as always unsigned") broke
kprobes.  Setting a probe-point on a 1 byte conditional jump can cause the
kernel to crash, as the branch target is not sign-extended.

Fix this by using s8 instead of char, and switch to immediate.value instead
of immediate.bytes for consistency.

Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/kprobes/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
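
[ For context, a minimal standalone sketch of the breakage, not taken from
  the kernel tree; build it with gcc -funsigned-char to mirror the
  kernel-wide flag added by 3bc753c06dd0: ]

#include <stdio.h>
#include <stdint.h>

typedef int8_t s8;

int main(void)
{
	/* raw immediate byte of a backward Jcc: 0xf6 == -10 */
	unsigned char bytes[4] = { 0xf6, 0, 0, 0 };
	int rel32;

	/*
	 * Old code: when built with -funsigned-char (the kernel-wide
	 * default since 3bc753c06dd0), "char" is unsigned and the byte
	 * is zero-extended, so this prints 246 instead of -10.
	 */
	rel32 = *(char *)bytes;
	printf("via char: %d\n", rel32);

	/* Fixed code: s8 is always signed, so this always prints -10. */
	rel32 = *(s8 *)bytes;
	printf("via s8:   %d\n", rel32);

	return 0;
}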
  

Comments

Nadav Amit Feb. 5, 2023, 7:49 a.m. UTC | #1
> On Feb 4, 2023, at 11:08 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
> 
> From: Nadav Amit <namit@vmware.com>
> 
> Commit 3bc753c06dd0 ("kbuild: treat char as always unsigned") broke
> kprobes.  Setting a probe-point on 1 byte conditional jump can cause the
> kernel to crash, as the branch target is not sign extended.
> 
> Fix by using s8 instead of char and use immediate.value instead of
> immediate.bytes for consistency.

I guess I forgot to put a “Fixes” tag, since it is still not a real
regression. (The bug was introduced in 6.2).

Still, this fix should not fall between the cracks… Please let me know
whether v2 is needed with a “fixes” tag.
  
Masami Hiramatsu (Google) Feb. 6, 2023, 2:18 p.m. UTC | #2
On Sat,  4 Feb 2023 21:08:07 +0000
Nadav Amit <nadav.amit@gmail.com> wrote:

> From: Nadav Amit <namit@vmware.com>
> 
> Commit 3bc753c06dd0 ("kbuild: treat char as always unsigned") broke
> kprobes.  Setting a probe-point on 1 byte conditional jump can cause the
> kernel to crash, as the branch target is not sign extended.

Oops, indeed!

> 
> Fix by using s8 instead of char and use immediate.value instead of
> immediate.bytes for consistency.

Looks good to me. Thanks for finding this bug!

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Fixes: 3bc753c06dd0 ("kbuild: treat char as always unsigned")

Thank you!
> 
> Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  arch/x86/kernel/kprobes/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index b36f3c367cb2..6a56d56b3817 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -625,7 +625,7 @@ static int prepare_emulation(struct kprobe *p, struct insn *insn)
>  		/* 1 byte conditional jump */
>  		p->ainsn.emulate_op = kprobe_emulate_jcc;
>  		p->ainsn.jcc.type = opcode & 0xf;
> -		p->ainsn.rel32 = *(char *)insn->immediate.bytes;
> +		p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
>  		break;
>  	case 0x0f:
>  		opcode = insn->opcode.bytes[1];
> -- 
> 2.34.1
>
  
Masami Hiramatsu (Google) Feb. 6, 2023, 2:19 p.m. UTC | #3
On Sun, 5 Feb 2023 09:49:56 +0200
Nadav Amit <nadav.amit@gmail.com> wrote:

> 
> 
> > On Feb 4, 2023, at 11:08 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
> > 
> > From: Nadav Amit <namit@vmware.com>
> > 
> > Commit 3bc753c06dd0 ("kbuild: treat char as always unsigned") broke
> > kprobes.  Setting a probe-point on 1 byte conditional jump can cause the
> > kernel to crash, as the branch target is not sign extended.
> > 
> > Fix by using s8 instead of char and use immediate.value instead of
> > immediate.bytes for consistency.
> 
> I guess I forgot to put a “Fixes” tag, since it is still not a real
> regression. (The bug was introduced in 6.2).
> 
> Still, this fix should not fall between the cracks… Please let me know
> whether v2 is needed with a “fixes” tag.

No problem, I added the Fixes tag with my Ack :)

Thank you!

>
  
Dave Hansen Feb. 6, 2023, 6:42 p.m. UTC | #4
On 2/4/23 13:08, Nadav Amit wrote:
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -625,7 +625,7 @@ static int prepare_emulation(struct kprobe *p, struct insn *insn)
>  		/* 1 byte conditional jump */
>  		p->ainsn.emulate_op = kprobe_emulate_jcc;
>  		p->ainsn.jcc.type = opcode & 0xf;
> -		p->ainsn.rel32 = *(char *)insn->immediate.bytes;
> +		p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
>  		break;

This new code is at least consistent with what the other code in that
function does with 1-byte immediates.  But, I'm curious what the point
is about going through the 's8' type.

What's wrong with:

	p->ainsn.rel32 = insn->immediate.value;

?  Am I missing something subtle?
  
Nadav Amit Feb. 6, 2023, 7:05 p.m. UTC | #5
> On Feb 6, 2023, at 8:42 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 2/4/23 13:08, Nadav Amit wrote:
>> --- a/arch/x86/kernel/kprobes/core.c
>> +++ b/arch/x86/kernel/kprobes/core.c
>> @@ -625,7 +625,7 @@ static int prepare_emulation(struct kprobe *p, struct insn *insn)
>>              /* 1 byte conditional jump */
>>              p->ainsn.emulate_op = kprobe_emulate_jcc;
>>              p->ainsn.jcc.type = opcode & 0xf;
>> -             p->ainsn.rel32 = *(char *)insn->immediate.bytes;
>> +             p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
>>              break;
> 
> This new code is at least consistent with what the other code in that
> function does with 1-byte immediates.  But, I'm curious what the point
> is about going through the 's8' type.
> 
> What's wrong with:
> 
>        p->ainsn.rel32 = insn->immediate.value;
> 
> ?  Am I missing something subtle?

I am not sure why this is considered safe, insn->immediate.value has a
type of insn_value_t, which is signed int, so such casting seems wrong
to me. Do you imply that during decoding the sign-extension should have
been done correctly? Or am I missing something else?

Anyhow, after spending too much time on debugging kprobes failures,
I prefer to be more defensive, and not require the code to be “aware”
or rely on member types or the order of implicit casting in C.
  
Dave Hansen Feb. 6, 2023, 10:38 p.m. UTC | #6
On 2/6/23 11:05, Nadav Amit wrote:
>> On 2/4/23 13:08, Nadav Amit wrote:
>>> --- a/arch/x86/kernel/kprobes/core.c
>>> +++ b/arch/x86/kernel/kprobes/core.c
>>> @@ -625,7 +625,7 @@ static int prepare_emulation(struct kprobe *p, struct insn *insn)
>>>              /* 1 byte conditional jump */
>>>              p->ainsn.emulate_op = kprobe_emulate_jcc;
>>>              p->ainsn.jcc.type = opcode & 0xf;
>>> -             p->ainsn.rel32 = *(char *)insn->immediate.bytes;
>>> +             p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
>>>              break;
>>
>> This new code is at least consistent with what the other code in that
>> function does with 1-byte immediates.  But, I'm curious what the point
>> is about going through the 's8' type.
>>
>> What's wrong with:
>>
>>        p->ainsn.rel32 = insn->immediate.value;
>>
>> ?  Am I missing something subtle?
> 
> I am not sure why this is considered safe, insn->immediate.value has a
> type of insn_value_t, which is signed int, so such casting seems wrong
> to me. Do you imply that during decoding the sign-extension should have
> been done correctly? Or am I missing something else?

OK, so we've got an assignment which on the left hand side is
p->ainsn.rel32 which is a 32-bit signed integer:

struct arch_specific_insn {
	...
        s32 rel32;      /* relative offset must be s32, s16, or s8 */

The right hand side is insn->immediate.value.  Its real type is a couple
of layers deep, but it boils down to a 'signed int', also 32-bit:

Struct #1:
struct insn {
	...
        union {
                struct insn_field immediate;
		...
        };

Struct #2
struct insn_field {
        union {
                insn_value_t value;
                insn_byte_t bytes[4];
        };
	...

And a typedef:
typedef signed int insn_value_t;

So, the proposed code above is effectively this:

	s32 foo;
	signed int bar;

	foo = *(s8 *)&bar;

That works just fine as long as the value being represented fits in a
single byte.  But, it *certainly* wouldn't work for:

	s32 foo;
	signed int bar = 128;

	foo = *(s8 *)&bar;

In this specific case, I think the conditional jump offsets are all from
the (entire) second byte of the instruction, so this is _somewhat_ academic.
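
[ The two fragments above, combined into a runnable sketch; little-endian
  x86 is assumed, so the cast picks up only the low byte of bar: ]

#include <stdio.h>

typedef signed char s8;
typedef int s32;

int main(void)
{
	s32 foo;
	signed int bar = 128;		/* does not fit in an s8 */

	foo = *(s8 *)&bar;		/* reads only the low byte (0x80) */
	printf("%d\n", foo);		/* prints -128 */

	foo = bar;			/* plain assignment keeps the value */
	printf("%d\n", foo);		/* prints 128 */

	return 0;
}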

> Anyhow, after spending too much time on debugging kprobes failures,
> I prefer to be more defensive, and not require the code to be “aware”
> or rely on member types or the order of implicit casting in C.

Well, the code in the fix requires some awareness of the range of the
data type.  The simpler direct assignment:

	p->ainsn.rel32 = insn->immediate.value;

doesn't require much and works for a wider range of values -- *ALL*
32-bit signed integer values on x86.

I figured I must be missing something.  It would not be the first time
that C's type system rules tripped me up.  Why this:

	foo = *(s8 *)&bar;

Instead of this:

	foo = bar;

?  I'm having a hard time of seeing what the advantage is of the 's8'
version.
  
Masami Hiramatsu (Google) Feb. 7, 2023, 12:54 a.m. UTC | #7
On Mon, 6 Feb 2023 14:38:16 -0800
Dave Hansen <dave.hansen@intel.com> wrote:

> On 2/6/23 11:05, Nadav Amit wrote:
> >> On 2/4/23 13:08, Nadav Amit wrote:
> >>> --- a/arch/x86/kernel/kprobes/core.c
> >>> +++ b/arch/x86/kernel/kprobes/core.c
> >>> @@ -625,7 +625,7 @@ static int prepare_emulation(struct kprobe *p, struct insn *insn)
> >>>              /* 1 byte conditional jump */
> >>>              p->ainsn.emulate_op = kprobe_emulate_jcc;
> >>>              p->ainsn.jcc.type = opcode & 0xf;
> >>> -             p->ainsn.rel32 = *(char *)insn->immediate.bytes;
> >>> +             p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
> >>>              break;
> >>
> >> This new code is at least consistent with what the other code in that
> >> function does with 1-byte immediates.  But, I'm curious what the point
> >> is about going through the 's8' type.
> >>
> >> What's wrong with:
> >>
> >>        p->ainsn.rel32 = insn->immediate.value;
> >>
> >> ?  Am I missing something subtle?
> > 
> > I am not sure why this is considered safe, insn->immediate.value has a
> > type of insn_value_t, which is signed int, so such casting seems wrong
> > to me. Do you imply that during decoding the sign-extension should have
> > been done correctly? Or am I missing something else?
> 
> OK, so we've got an assignment which on the left hand side is
> p->ainsn.rel32 which is a 32-bit signed integer:
> 
> struct arch_specific_insn {
> 	...
>         s32 rel32;      /* relative offset must be s32, s16, or s8 */
> 
> The right hand side is insn->immediate.value.  Its real type is a couple
> of layers deep, but it boils down to a 'signed int', also 32-bit:
> 
> Struct #1:
> struct insn {
> 	...
>         union {
>                 struct insn_field immediate;
> 		...
>         };
> 
> Struct #2
> struct insn_field {
>         union {
>                 insn_value_t value;
>                 insn_byte_t bytes[4];
>         };
> 	...
> 
> And a typedef:
> typedef signed int insn_value_t;
> 
> So, the proposed code above is effectively this:
> 
> 	s32 foo;
> 	signed int bar;
> 
> 	foo = *(s8 *)&bar;
> 
> That works just fine as long as the value being represented fits in a
> single byte.  But, it *certainly* wouldn't work for:
> 
> 	s32 foo;
> 	signed int bar = 128;
> 
> 	foo = *(s8 *)&bar;
> 
> In this specific case, I think the conditional jump offsets are all from
> the (entire) second byte of the instruction, so this is _somewhat_ academic.

NOTE: Since we have checked the opcode is Jcc (0x70 to 0x7f) we ensured that
the immediate value is 1 byte (rel8 = -128 to +127).

        case 0x70 ... 0x7f:
                /* 1 byte conditional jump */
                p->ainsn.emulate_op = kprobe_emulate_jcc;
                p->ainsn.jcc.type = opcode & 0xf;
                p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
                break;

But I think you have a point. I missed that Nadav is using immediate.value
instead of immediate.bytes[0]. And from the instruction decoder code, it is
better to use immediate.value without casting.

In arch/x86/lib/insn.c:

int insn_get_immediate(struct insn *insn)
{
...
        switch (inat_immediate_size(insn->attr)) {
        case INAT_IMM_BYTE:
                insn_field_set(&insn->immediate, get_next(signed char, insn), 1);
                break;

And 

In arch/x86/include/asm/insn.h:

static inline void insn_field_set(struct insn_field *p, insn_value_t v,
                                  unsigned char n)
{
        p->value = v;
        p->nbytes = n;
}

Thus the immediate.value should be set correctly. (means we don't have to
pick up the 1st byte from the value)
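
[ The same path condensed into a standalone sketch, with the types
  simplified from the kernel headers, to show that the sign extension
  already happens at decode time: ]

#include <stdio.h>

typedef signed int insn_value_t;

struct insn_field {
	union {
		insn_value_t value;
		unsigned char bytes[4];
	};
	unsigned char nbytes;
};

static void insn_field_set(struct insn_field *p, insn_value_t v,
			   unsigned char n)
{
	p->value = v;
	p->nbytes = n;
}

int main(void)
{
	struct insn_field imm;
	unsigned char raw = 0xf6;	/* rel8 == -10 in the opcode stream */

	/*
	 * Mirrors the INAT_IMM_BYTE case above: the byte is read as a
	 * signed char, which is sign-extended when passed as insn_value_t.
	 */
	insn_field_set(&imm, (signed char)raw, 1);

	printf("%d\n", imm.value);	/* prints -10 */
	return 0;
}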

Nadav, can you update your patch to assign immediate.value directly?

Thank you,

> 
> > Anyhow, after spending too much time on debugging kprobes failures,
> > I prefer to be more defensive, and not require the code to be “aware”
> > or rely on member types or the order of implicit casting in C.
> 
> Well, the code in the fix requires some awareness of the range of the
> data type.  The simpler direct assignment:
> 
> 	p->ainsn.rel32 = insn->immediate.value;
> 
> doesn't require much and works for a wider range of values -- *ALL*
> 32-bit signed integer values on x86.
> 
> I figured I must be missing something.  It would not be the first time
> that C's type system rules tripped me up.  Why this:
> 
> 	foo = *(s8 *)&bar;
> 
> Instead of this:
> 
> 	foo = bar;
> 
> ?  I'm having a hard time of seeing what the advantage is of the 's8'
> version.
  
Masami Hiramatsu (Google) Feb. 7, 2023, 3:21 p.m. UTC | #8
On Tue, 7 Feb 2023 09:54:24 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> On Mon, 6 Feb 2023 14:38:16 -0800
> Dave Hansen <dave.hansen@intel.com> wrote:
> 
> > On 2/6/23 11:05, Nadav Amit wrote:
> > >> On 2/4/23 13:08, Nadav Amit wrote:
> > >>> --- a/arch/x86/kernel/kprobes/core.c
> > >>> +++ b/arch/x86/kernel/kprobes/core.c
> > >>> @@ -625,7 +625,7 @@ static int prepare_emulation(struct kprobe *p, struct insn *insn)
> > >>>              /* 1 byte conditional jump */
> > >>>              p->ainsn.emulate_op = kprobe_emulate_jcc;
> > >>>              p->ainsn.jcc.type = opcode & 0xf;
> > >>> -             p->ainsn.rel32 = *(char *)insn->immediate.bytes;
> > >>> +             p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
> > >>>              break;
> > >>
> > >> This new code is at least consistent with what the other code in that
> > >> function does with 1-byte immediates.  But, I'm curious what the point
> > >> is about going through the 's8' type.
> > >>
> > >> What's wrong with:
> > >>
> > >>        p->ainsn.rel32 = insn->immediate.value;
> > >>
> > >> ?  Am I missing something subtle?
> > > 
> > > I am not sure why this is considered safe, insn->immediate.value has a
> > > type of insn_value_t, which is signed int, so such casting seems wrong
> > > to me. Do you imply that during decoding the sign-extension should have
> > > been done correctly? Or am I missing something else?
> > 
> > OK, so we've got an assignment which on the left hand side is
> > p->ainsn.rel32 which is a 32-bit signed integer:
> > 
> > struct arch_specific_insn {
> > 	...
> >         s32 rel32;      /* relative offset must be s32, s16, or s8 */
> > 
> > The right hand side is insn->immediate.value.  Its real type is a couple
> > of layers deep, but it boils down to a 'signed int', also 32-bit:
> > 
> > Struct #1:
> > struct insn {
> > 	...
> >         union {
> >                 struct insn_field immediate;
> > 		...
> >         };
> > 
> > Struct #2
> > struct insn_field {
> >         union {
> >                 insn_value_t value;
> >                 insn_byte_t bytes[4];
> >         };
> > 	...
> > 
> > And a typedef:
> > typedef signed int insn_value_t;
> > 
> > So, the proposed code above is effectively this:
> > 
> > 	s32 foo;
> > 	signed int bar;
> > 
> > 	foo = *(s8 *)&bar;
> > 
> > That works just fine as long as the value being represented fits in a
> > single byte.  But, it *certainly* wouldn't work for:
> > 
> > 	s32 foo;
> > 	signed int bar = 128;
> > 
> > 	foo = *(s8 *)&bar;
> > 
> > In this specific case, I think the conditional jump offsets are all from
> > the (entire) second byte of the instruction, so this is _somewhat_ academic.
> 
> NOTE: Since we have checked the opcode is Jcc (0x70 to 0x7f) we ensured that
> the immediate value is 1 byte (rel8 = -128 to +127).
> 
>         case 0x70 ... 0x7f:
>                 /* 1 byte conditional jump */
>                 p->ainsn.emulate_op = kprobe_emulate_jcc;
>                 p->ainsn.jcc.type = opcode & 0xf;
>                 p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
>                 break;
> 
> But I think you have a point. I missed that Nadav is using immediate.value
> instead of immediate.bytes[0]. And from the instruction decoder code, it is
> better to use immediate.value without casting.
> 
> In arch/x86/lib/insn.c:
> 
> int insn_get_immediate(struct insn *insn)
> {
> ...
>         switch (inat_immediate_size(insn->attr)) {
>         case INAT_IMM_BYTE:
>                 insn_field_set(&insn->immediate, get_next(signed char, insn), 1);
>                 break;
> 
> And 
> 
> In arch/x86/include/asm/insn.h:
> 
> static inline void insn_field_set(struct insn_field *p, insn_value_t v,
>                                   unsigned char n)
> {
>         p->value = v;
>         p->nbytes = n;
> }
> 
> Thus the immediate.value should be set correctly. (means we don't have to
> pick up the 1st byte from the value)
> 
> Nadav, can you update your patch to assign immediate.value directly?

BTW, there are many similar casts around there. I'll fix those too.
If we need to be more conservative, 

 p->ainsn.rel32 = (s8)insn->immediate.value;

should work, right?
Or, maybe we can add WARN_ON_ONCE() as

 WARN_ON_ONCE(insn.immediate.nbytes != 1)

Thank you, 

> 
> Thank you,
> 
> > 
> > > Anyhow, after spending too much time on debugging kprobes failures,
> > > I prefer to be more defensive, and not require the code to be “aware”
> > > or rely on member types or the order of implicit casting in C.
> > 
> > Well, the code in the fix requires some awareness of the range of the
> > data type.  The simpler direct assignment:
> > 
> > 	p->ainsn.rel32 = insn->immediate.value;
> > 
> > doesn't require much and works for a wider range of values -- *ALL*
> > 32-bit signed integer values on x86.
> > 
> > I figured I must be missing something.  It would not be the first time
> > that C's type system rules tripped me up.  Why this:
> > 
> > 	foo = *(s8 *)&bar;
> > 
> > Instead of this:
> > 
> > 	foo = bar;
> > 
> > ?  I'm having a hard time of seeing what the advantage is of the 's8'
> > version.
> 
> 
> -- 
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
  
Dave Hansen Feb. 7, 2023, 3:33 p.m. UTC | #9
On 2/7/23 07:21, Masami Hiramatsu (Google) wrote:
>> Nadav, can you update your patch to assign immediate.value directly?
> BTW, there are many similar casts around there. I'll fix those too.
> If we need to be more conservative, 

Let's focus on fixing the known bug first, please.  Cleanups can come later.
  
Nadav Amit Feb. 8, 2023, 6:34 a.m. UTC | #10
> On Feb 7, 2023, at 5:33 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 2/7/23 07:21, Masami Hiramatsu (Google) wrote:
>>> Nadav, can you update your patch to assign immediate.value directly?
>> BTW, there are many similar casts around there. I'll fix those too.
>> If we need to be more conservative,
> 
> Let's focus on fixing the known bug first, please.  Cleanups can come later.

Thank you Dave. That was my take too following your email.

I certainly did not pay attention to the fact that sign extension has already
been done in insn_get_immediate() before Masami pointed it out. So, the comment
in insn_get_immediate() should also be updated to note that the immediate
is sign-*extended* in *all* cases (instead of sign-expanded in most cases. :) )

As you said, I guess the change you and Masami proposed can be done on top of
this patch, which is (and was) only intended to fix the bug, and should
therefore go into 6.2.
  
Dave Hansen Feb. 8, 2023, 6:56 a.m. UTC | #11
On 2/7/23 22:34, Nadav Amit wrote:
> As you said, I guess the change you and Masami proposed can be done on top of
> this patch, which is (and was) only intended to fix the bug, and should
> therefore go into 6.2.

Nadav,
	
To fix this issue, you proposed:

	p->ainsn.rel32 = *(s8 *)&insn->immediate.value;

But, this is, um, rather obfuscated and potentially less correct
compared to:

	p->ainsn.rel32 = insn->immediate.value;

I'd appreciate it if you could update your patch to do this simpler
thing and resend, unless there is a strong reason to do what you
originally proposed.
  
Nadav Amit Feb. 8, 2023, 6:58 a.m. UTC | #12
> On Feb 8, 2023, at 8:56 AM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 2/7/23 22:34, Nadav Amit wrote:
>> As you said, I guess the change you and Masami proposed can be done on top of
>> this patch, which is (and was) only intended to fix the bug, and should
>> therefore go into 6.2.
> 
> Nadav,
> 
> To fix this issue, you proposed:
> 
>        p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
> 
> But, this is, um, rather obfuscated and potentially less correct
> compared to:
> 
>        p->ainsn.rel32 = insn->immediate.value;
> 
> I'd appreciate it if you could update your patch to do this simpler
> thing and resend, unless there is a strong reason to do what you
> originally proposed.

I thought it makes sense to make this change for all the other cases
in one pass, but whatever makes you happy.

I will send v2 soon.

Thanks again,
Nadav
  

Patch

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index b36f3c367cb2..6a56d56b3817 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -625,7 +625,7 @@ static int prepare_emulation(struct kprobe *p, struct insn *insn)
 		/* 1 byte conditional jump */
 		p->ainsn.emulate_op = kprobe_emulate_jcc;
 		p->ainsn.jcc.type = opcode & 0xf;
-		p->ainsn.rel32 = *(char *)insn->immediate.bytes;
+		p->ainsn.rel32 = *(s8 *)&insn->immediate.value;
 		break;
 	case 0x0f:
 		opcode = insn->opcode.bytes[1];