[bpf-next,1/2] bpf: Fix check_stack_write_fixed_off() to correctly spill imm

Message ID 20231026-fix-check-stack-write-v1-1-6b325ef3ce7e@gmail.com
State New
Series bpf: Fix incorrect immediate spill

Commit Message

Hao Sun Oct. 26, 2023, 3:13 p.m. UTC
  In check_stack_write_fixed_off(), the imm value is cast to u32 before being
spilled to the stack. The sign information is therefore lost, and the range
information is incorrect when the value is loaded back from the stack.

For the following prog:
0: r2 = r10
1: *(u64*)(r2 -40) = -44
2: r0 = *(u64*)(r2 - 40)
3: if r0 s< 0xa goto +2
4: r0 = 1
5: exit
6: r0  = 0
7: exit

The verifier gives:
func#0 @0
0: R1=ctx(off=0,imm=0) R10=fp0
0: (bf) r2 = r10                      ; R2_w=fp0 R10=fp0
1: (7a) *(u64 *)(r2 -40) = -44        ; R2_w=fp0 fp-40_w=4294967252
2: (79) r0 = *(u64 *)(r2 -40)         ; R0_w=4294967252 R2_w=fp0 fp-40_w=4294967252
3: (c5) if r0 s< 0xa goto pc+2
mark_precise: frame0: last_idx 3 first_idx 0 subseq_idx -1
mark_precise: frame0: regs=r0 stack= before 2: (79) r0 = *(u64 *)(r2 -40)
3: R0_w=4294967252
4: (b7) r0 = 1                        ; R0_w=1
5: (95) exit
verification time 7971 usec
stack depth 40
processed 6 insns (limit 1000000) max_states_per_insn 0 total_states 0
peak_states 0 mark_read 0

Because of the cast, the verifier tracks fp-40 and r0 as 4294967252 instead
of -44, concludes that the signed check at insn 3 never branches, and never
explores insns 6-7, even though at runtime the store writes the sign-extended
-44 and the branch is taken.

So remove the incorrect cast: the imm field is declared as s32 and
__mark_reg_known() takes u64, so imm is correctly sign-extended by the
compiler.
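
The effect of the cast can be reproduced with a small standalone C sketch
(userspace only, not verifier code; mark_known() is a made-up stand-in for
__mark_reg_known()):

#include <stdint.h>
#include <stdio.h>

/* Made-up stand-in for __mark_reg_known(), which takes a u64 value. */
static void mark_known(uint64_t val)
{
	printf("0x%016llx (%llu)\n",
	       (unsigned long long)val, (unsigned long long)val);
}

int main(void)
{
	int32_t imm = -44;		/* insn->imm is declared as s32 */

	mark_known((uint32_t)imm);	/* 0x00000000ffffffd4 (4294967252): sign lost */
	mark_known(imm);		/* 0xffffffffffffffd4: correctly sign extended */
	return 0;
}

The first call is what the current code does and matches the
fp-40_w=4294967252 in the log above; the second is what the fix produces.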

Signed-off-by: Hao Sun <sunhao.th@gmail.com>
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  

Comments

Shung-Hsi Yu Oct. 27, 2023, 7:14 a.m. UTC | #1
On Thu, Oct 26, 2023 at 05:13:10PM +0200, Hao Sun wrote:
> In check_stack_write_fixed_off(), imm value is cast to u32 before being
> spilled to the stack. Therefore, the sign information is lost, and the
> range information is incorrect when load from the stack again.
> 
> For the following prog:
> 0: r2 = r10
> 1: *(u64*)(r2 -40) = -44
> 2: r0 = *(u64*)(r2 - 40)
> 3: if r0 s<= 0xa goto +2
> 4: r0 = 1
> 5: exit
> 6: r0  = 0
> 7: exit
> 
> The verifier gives:
> func#0 @0
> 0: R1=ctx(off=0,imm=0) R10=fp0
> 0: (bf) r2 = r10                      ; R2_w=fp0 R10=fp0
> 1: (7a) *(u64 *)(r2 -40) = -44        ; R2_w=fp0 fp-40_w=4294967252
> 2: (79) r0 = *(u64 *)(r2 -40)         ; R0_w=4294967252 R2_w=fp0
> fp-40_w=4294967252
> 3: (c5) if r0 s< 0xa goto pc+2
> mark_precise: frame0: last_idx 3 first_idx 0 subseq_idx -1
> mark_precise: frame0: regs=r0 stack= before 2: (79) r0 = *(u64 *)(r2 -40)
> 3: R0_w=4294967252
> 4: (b7) r0 = 1                        ; R0_w=1
> 5: (95) exit
> verification time 7971 usec
> stack depth 40
> processed 6 insns (limit 1000000) max_states_per_insn 0 total_states 0
> peak_states 0 mark_read 0
> 
> So remove the incorrect cast, since imm field is declared as s32, and
> __mark_reg_known() takes u64, so imm would be correctly sign extended
> by compiler.
> 
> Signed-off-by: Hao Sun <sunhao.th@gmail.com>

Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>

The acked-by applies to future versions of the patchset as well.

FWIW I think we'd also need the same treatment for the (BPF_ALU | BPF_MOV |
BPF_K) case in check_alu_op().

> ---
>  kernel/bpf/verifier.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 857d76694517..44af69ce1301 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4674,7 +4674,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>  		   insn->imm != 0 && env->bpf_capable) {
>  		struct bpf_reg_state fake_reg = {};
>  
> -		__mark_reg_known(&fake_reg, (u32)insn->imm);
> +		__mark_reg_known(&fake_reg, insn->imm);
>  		fake_reg.type = SCALAR_VALUE;
>  		save_register_state(state, spi, &fake_reg, size);
>  	} else if (reg && is_spillable_regtype(reg->type)) {
> 
> -- 
> 2.34.1
>
  
Shung-Hsi Yu Oct. 27, 2023, 7:44 a.m. UTC | #2
On Fri, Oct 27, 2023 at 03:14:10PM +0800, Shung-Hsi Yu wrote:
> On Thu, Oct 26, 2023 at 05:13:10PM +0200, Hao Sun wrote:
> > In check_stack_write_fixed_off(), imm value is cast to u32 before being
> > spilled to the stack. Therefore, the sign information is lost, and the
> > range information is incorrect when load from the stack again.
> > 
> > For the following prog:
> > 0: r2 = r10
> > 1: *(u64*)(r2 -40) = -44
> > 2: r0 = *(u64*)(r2 - 40)
> > 3: if r0 s<= 0xa goto +2
> > 4: r0 = 1
> > 5: exit
> > 6: r0  = 0
> > 7: exit
> > 
> > The verifier gives:
> > func#0 @0
> > 0: R1=ctx(off=0,imm=0) R10=fp0
> > 0: (bf) r2 = r10                      ; R2_w=fp0 R10=fp0
> > 1: (7a) *(u64 *)(r2 -40) = -44        ; R2_w=fp0 fp-40_w=4294967252
> > 2: (79) r0 = *(u64 *)(r2 -40)         ; R0_w=4294967252 R2_w=fp0
> > fp-40_w=4294967252
> > 3: (c5) if r0 s< 0xa goto pc+2
> > mark_precise: frame0: last_idx 3 first_idx 0 subseq_idx -1
> > mark_precise: frame0: regs=r0 stack= before 2: (79) r0 = *(u64 *)(r2 -40)
> > 3: R0_w=4294967252
> > 4: (b7) r0 = 1                        ; R0_w=1
> > 5: (95) exit
> > verification time 7971 usec
> > stack depth 40
> > processed 6 insns (limit 1000000) max_states_per_insn 0 total_states 0
> > peak_states 0 mark_read 0
> > 
> > So remove the incorrect cast, since imm field is declared as s32, and
> > __mark_reg_known() takes u64, so imm would be correctly sign extended
> > by compiler.
> > 
> > Signed-off-by: Hao Sun <sunhao.th@gmail.com>
> 
> Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> 
> The acked-by applies to future version of the patchset as well.

Oh, and since this is a fix it would be great to have a Fixes tag[1] to
specify when the bug was introduced:

Fixes: ecdf985d7615 ("bpf: track immediate values written to stack by BPF_ST instruction")

Also add a Cc tag for stable[2] so stable kernels pick up the fix as well:

Cc: stable@vger.kernel.org

And ideally specify that the patch should be applied to the bpf tree rather
than bpf-next[3] (though the BPF maintainers have the final say on which tree
the patch is applied to).

I owe you a big thanks as well, since this helps with our internal process
at my company. So thank you in advance!

1: https://docs.kernel.org/process/submitting-patches.html#describe-your-changes
2: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html#option-1
3: https://docs.kernel.org/bpf/bpf_devel_QA.html#q-how-do-the-changes-make-their-way-into-linux
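
Put together, the tag block of a respin could look roughly like this (only a
sketch; the Fixes line is the one quoted above, and targeting the bpf tree
goes in the subject prefix):

[PATCH bpf v2 1/2] bpf: Fix check_stack_write_fixed_off() to correctly spill imm

...

Fixes: ecdf985d7615 ("bpf: track immediate values written to stack by BPF_ST instruction")
Cc: stable@vger.kernel.org
Signed-off-by: Hao Sun <sunhao.th@gmail.com>
Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>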

> FWIW I think we'd also need the same treatment for the (BPF_ALU | BPF_MOV |
> BPF_K) case in check_alu_op().
> 
> > ---
> >  kernel/bpf/verifier.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 857d76694517..44af69ce1301 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -4674,7 +4674,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >  		   insn->imm != 0 && env->bpf_capable) {
> >  		struct bpf_reg_state fake_reg = {};
> >  
> > -		__mark_reg_known(&fake_reg, (u32)insn->imm);
> > +		__mark_reg_known(&fake_reg, insn->imm);
> >  		fake_reg.type = SCALAR_VALUE;
> >  		save_register_state(state, spi, &fake_reg, size);
> >  	} else if (reg && is_spillable_regtype(reg->type)) {
> > 
> > -- 
> > 2.34.1
> >
  
Hao Sun Oct. 27, 2023, 7:51 a.m. UTC | #3
On Fri, Oct 27, 2023 at 9:44 AM Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
>
> On Fri, Oct 27, 2023 at 03:14:10PM +0800, Shung-Hsi Yu wrote:
> > On Thu, Oct 26, 2023 at 05:13:10PM +0200, Hao Sun wrote:
> > > In check_stack_write_fixed_off(), imm value is cast to u32 before being
> > > spilled to the stack. Therefore, the sign information is lost, and the
> > > range information is incorrect when load from the stack again.
> > >
> > > For the following prog:
> > > 0: r2 = r10
> > > 1: *(u64*)(r2 -40) = -44
> > > 2: r0 = *(u64*)(r2 - 40)
> > > 3: if r0 s<= 0xa goto +2
> > > 4: r0 = 1
> > > 5: exit
> > > 6: r0  = 0
> > > 7: exit
> > >
> > > The verifier gives:
> > > func#0 @0
> > > 0: R1=ctx(off=0,imm=0) R10=fp0
> > > 0: (bf) r2 = r10                      ; R2_w=fp0 R10=fp0
> > > 1: (7a) *(u64 *)(r2 -40) = -44        ; R2_w=fp0 fp-40_w=4294967252
> > > 2: (79) r0 = *(u64 *)(r2 -40)         ; R0_w=4294967252 R2_w=fp0
> > > fp-40_w=4294967252
> > > 3: (c5) if r0 s< 0xa goto pc+2
> > > mark_precise: frame0: last_idx 3 first_idx 0 subseq_idx -1
> > > mark_precise: frame0: regs=r0 stack= before 2: (79) r0 = *(u64 *)(r2 -40)
> > > 3: R0_w=4294967252
> > > 4: (b7) r0 = 1                        ; R0_w=1
> > > 5: (95) exit
> > > verification time 7971 usec
> > > stack depth 40
> > > processed 6 insns (limit 1000000) max_states_per_insn 0 total_states 0
> > > peak_states 0 mark_read 0
> > >
> > > So remove the incorrect cast, since imm field is declared as s32, and
> > > __mark_reg_known() takes u64, so imm would be correctly sign extended
> > > by compiler.
> > >
> > > Signed-off-by: Hao Sun <sunhao.th@gmail.com>
> >
> > Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> >
> > The acked-by applies to future version of the patchset as well.
>

(BPF_ALU | BPF_MOV | BPF_K) is handled correctly in the current code: there
is no cast in the BPF_ALU64 case, so the sign is extended, and the cast in
the BPF_ALU case correctly zero-extends the register.
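
A simplified sketch of that distinction (illustrative only, not the actual
check_alu_op() code; mov_imm() is a made-up helper):

#include <stdint.h>
#include <stdio.h>

/* How a MOV_K immediate ends up in the 64-bit register state:
 * BPF_ALU64 sign-extends insn->imm, BPF_ALU (32-bit) zero-extends it.
 */
static uint64_t mov_imm(int32_t imm, int alu64)
{
	if (alu64)
		return (int64_t)imm;	/* -44 -> 0xffffffffffffffd4 */
	return (uint32_t)imm;		/* -44 -> 0x00000000ffffffd4 */
}

int main(void)
{
	printf("alu64 mov: 0x%llx\n", (unsigned long long)mov_imm(-44, 1));
	printf("alu32 mov: 0x%llx\n", (unsigned long long)mov_imm(-44, 0));
	return 0;
}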

> Oh and since this is a fix it would be great to have the fixes tag[1] to
> specify when the bug was introduced
>
> Fixes: ecdf985d7615 ("bpf: track immediate values written to stack by BPF_ST instruction")
>

Noted, thanks.

> Add Cc tag for stable[2] so stable kernels pick up the fix as well
>
> Cc: stable@vger.kernel.org
>
> And ideally specify that the patch should be applied to the bpf tree rather
> than bpf-next[3] (though the BPF maintainers has the final say on which tree
> this patch should be applied).
>
> I'd owe you a big thank as well since this helps with our internal process
> at my company. So thank you in advance!
>
> 1: https://docs.kernel.org/process/submitting-patches.html#describe-your-changes
> 2: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html#option-1
> 3: https://docs.kernel.org/bpf/bpf_devel_QA.html#q-how-do-the-changes-make-their-way-into-linux
>
> > FWIW I think we'd also need the same treatment for the (BPF_ALU | BPF_MOV |
> > BPF_K) case in check_alu_op().
> >
> > > ---
> > >  kernel/bpf/verifier.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 857d76694517..44af69ce1301 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -4674,7 +4674,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> > >                insn->imm != 0 && env->bpf_capable) {
> > >             struct bpf_reg_state fake_reg = {};
> > >
> > > -           __mark_reg_known(&fake_reg, (u32)insn->imm);
> > > +           __mark_reg_known(&fake_reg, insn->imm);
> > >             fake_reg.type = SCALAR_VALUE;
> > >             save_register_state(state, spi, &fake_reg, size);
> > >     } else if (reg && is_spillable_regtype(reg->type)) {
> > >
> > > --
> > > 2.34.1
> > >
  
Shung-Hsi Yu Oct. 27, 2023, 8:01 a.m. UTC | #4
On Fri, Oct 27, 2023 at 09:51:58AM +0200, Hao Sun wrote:
> On Fri, Oct 27, 2023 at 9:44 AM Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
> >
> > On Fri, Oct 27, 2023 at 03:14:10PM +0800, Shung-Hsi Yu wrote:
> > > On Thu, Oct 26, 2023 at 05:13:10PM +0200, Hao Sun wrote:
> > > > In check_stack_write_fixed_off(), imm value is cast to u32 before being
> > > > spilled to the stack. Therefore, the sign information is lost, and the
> > > > range information is incorrect when load from the stack again.
> > > >
> > > > For the following prog:
> > > > 0: r2 = r10
> > > > 1: *(u64*)(r2 -40) = -44
> > > > 2: r0 = *(u64*)(r2 - 40)
> > > > 3: if r0 s<= 0xa goto +2
> > > > 4: r0 = 1
> > > > 5: exit
> > > > 6: r0  = 0
> > > > 7: exit
> > > >
> > > > The verifier gives:
> > > > func#0 @0
> > > > 0: R1=ctx(off=0,imm=0) R10=fp0
> > > > 0: (bf) r2 = r10                      ; R2_w=fp0 R10=fp0
> > > > 1: (7a) *(u64 *)(r2 -40) = -44        ; R2_w=fp0 fp-40_w=4294967252
> > > > 2: (79) r0 = *(u64 *)(r2 -40)         ; R0_w=4294967252 R2_w=fp0
> > > > fp-40_w=4294967252
> > > > 3: (c5) if r0 s< 0xa goto pc+2
> > > > mark_precise: frame0: last_idx 3 first_idx 0 subseq_idx -1
> > > > mark_precise: frame0: regs=r0 stack= before 2: (79) r0 = *(u64 *)(r2 -40)
> > > > 3: R0_w=4294967252
> > > > 4: (b7) r0 = 1                        ; R0_w=1
> > > > 5: (95) exit
> > > > verification time 7971 usec
> > > > stack depth 40
> > > > processed 6 insns (limit 1000000) max_states_per_insn 0 total_states 0
> > > > peak_states 0 mark_read 0
> > > >
> > > > So remove the incorrect cast, since imm field is declared as s32, and
> > > > __mark_reg_known() takes u64, so imm would be correctly sign extended
> > > > by compiler.
> > > >
> > > > Signed-off-by: Hao Sun <sunhao.th@gmail.com>
> > >
> > > Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> > >
> > > The acked-by applies to future version of the patchset as well.
> 
> (BPF_ALU | BPF_MOV | BPF_K) is handled correctly in the current
> code, i.e., no cast in BPF_ALU64 so that the sign is extended, and
> the cast in BPF_ALU correctly zero extend the reg.

My mistake, you're right. Thank you for the explanation.

> > Oh and since this is a fix it would be great to have the fixes tag[1] to
> > specify when the bug was introduced
> >
> > Fixes: ecdf985d7615 ("bpf: track immediate values written to stack by BPF_ST instruction")
> 
> Noted, thanks.
> 
> > Add Cc tag for stable[2] so stable kernels pick up the fix as well
> >
> > Cc: stable@vger.kernel.org
> >
> > And ideally specify that the patch should be applied to the bpf tree rather
> > than bpf-next[3] (though the BPF maintainers has the final say on which tree
> > this patch should be applied).
> >
> > I'd owe you a big thank as well since this helps with our internal process
> > at my company. So thank you in advance!
> >
> > 1: https://docs.kernel.org/process/submitting-patches.html#describe-your-changes
> > 2: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html#option-1
> > 3: https://docs.kernel.org/bpf/bpf_devel_QA.html#q-how-do-the-changes-make-their-way-into-linux
> >
> > > FWIW I think we'd also need the same treatment for the (BPF_ALU | BPF_MOV |
> > > BPF_K) case in check_alu_op().

^ This statement is incorrect as Hao has explained above.

> > > > ---
> > > >  kernel/bpf/verifier.c | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > > index 857d76694517..44af69ce1301 100644
> > > > --- a/kernel/bpf/verifier.c
> > > > +++ b/kernel/bpf/verifier.c
> > > > @@ -4674,7 +4674,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> > > >                insn->imm != 0 && env->bpf_capable) {
> > > >             struct bpf_reg_state fake_reg = {};
> > > >
> > > > -           __mark_reg_known(&fake_reg, (u32)insn->imm);
> > > > +           __mark_reg_known(&fake_reg, insn->imm);
> > > >             fake_reg.type = SCALAR_VALUE;
> > > >             save_register_state(state, spi, &fake_reg, size);
> > > >     } else if (reg && is_spillable_regtype(reg->type)) {
> > > >
> > > > --
> > > > 2.34.1
> > > >
  

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 857d76694517..44af69ce1301 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4674,7 +4674,7 @@  static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 		   insn->imm != 0 && env->bpf_capable) {
 		struct bpf_reg_state fake_reg = {};
 
-		__mark_reg_known(&fake_reg, (u32)insn->imm);
+		__mark_reg_known(&fake_reg, insn->imm);
 		fake_reg.type = SCALAR_VALUE;
 		save_register_state(state, spi, &fake_reg, size);
 	} else if (reg && is_spillable_regtype(reg->type)) {