Message ID | 20221202094837.3872444-1-pulehui@huaweicloud.com |
---|---|
State | New |
Headers |
From: Pu Lehui <pulehui@huaweicloud.com>
To: bpf@vger.kernel.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Björn Töpel <bjorn@kernel.org>, Alexei Starovoitov <ast@kernel.org>, Daniel Borkmann <daniel@iogearbox.net>, Andrii Nakryiko <andrii@kernel.org>, Martin KaFai Lau <martin.lau@linux.dev>, Song Liu <song@kernel.org>, Yonghong Song <yhs@fb.com>, John Fastabend <john.fastabend@gmail.com>, KP Singh <kpsingh@kernel.org>, Stanislav Fomichev <sdf@google.com>, Hao Luo <haoluo@google.com>, Jiri Olsa <jolsa@kernel.org>, Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, Pu Lehui <pulehui@huawei.com>, Pu Lehui <pulehui@huaweicloud.com>
Subject: [PATCH bpf v2] riscv, bpf: Emit fixed-length instructions for BPF_PSEUDO_FUNC
Date: Fri, 2 Dec 2022 17:48:37 +0800
Message-Id: <20221202094837.3872444-1-pulehui@huaweicloud.com>
X-Mailing-List: linux-kernel@vger.kernel.org |
Series |
[bpf,v2] riscv, bpf: Emit fixed-length instructions for BPF_PSEUDO_FUNC
|
|
Commit Message
Pu Lehui
Dec. 2, 2022, 9:48 a.m. UTC
From: Pu Lehui <pulehui@huawei.com>

For the BPF_PSEUDO_FUNC instruction, the verifier refills imm with
the correct addresses of bpf_calls and then runs the last pass of
the JIT. Since emit_imm on RV64 is variable-length, emitting an
appropriate number of instructions according to the imm, it may
break ctx->offset and lead to unpredictable problems, such as
inaccurate jumps. So let's fix it with fixed-length instructions.

Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper")
Signed-off-by: Pu Lehui <pulehui@huawei.com>
Suggested-by: Björn Töpel <bjorn@rivosinc.com>
---
 arch/riscv/net/bpf_jit_comp64.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)
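The failure mode described in the commit message can be made concrete outside the kernel. The sketch below is hypothetical user-space code, not the kernel's emit_imm(): it only approximates how many instructions a variable-length RV64 immediate load (lui/addi plus slli/addi chunks) would produce, so we can compare the placeholder value the verifier installs before the extra pass against a plausible real kernel address. The sample address 0xffffffffc0101000 is an illustrative assumption.

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch: approximate the instruction count of a
 * variable-length RV64 immediate load.  Modeled loosely on the JIT's
 * emit_imm() decomposition; not the exact kernel algorithm. */
static bool is_32b_int(int64_t val)
{
	return val >= INT32_MIN && val <= INT32_MAX;
}

static int count_imm_insns(int64_t val)
{
	int64_t upper = (val + (1 << 11)) >> 12;
	int64_t lower = val & 0xfff;
	int shift, n;

	if (is_32b_int(val))
		return upper ? 2 : 1;	/* lui+addiw, or a lone addi */

	/* strip trailing zero bits of upper, load the rest recursively */
	shift = __builtin_ctzll((uint64_t)upper);
	n = count_imm_insns(upper >> shift);
	n++;				/* slli to shift upper into place */
	if (lower)
		n++;			/* addi for the low 12 bits */
	return n;
}
```

Under this model the pre-extra-pass placeholder 0x100000001 takes three instructions while a sign-extended 32-bit kernel address such as 0xffffffffc0101000 takes two, so an offset table computed in the first pass no longer matches the code emitted in the last pass.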
Comments
Pu Lehui <pulehui@huaweicloud.com> writes:

> From: Pu Lehui <pulehui@huawei.com>
>
> For the BPF_PSEUDO_FUNC instruction, the verifier refills imm with
> the correct addresses of bpf_calls and then runs the last pass of
> the JIT. Since emit_imm on RV64 is variable-length, emitting an
> appropriate number of instructions according to the imm, it may
> break ctx->offset and lead to unpredictable problems, such as
> inaccurate jumps. So let's fix it with fixed-length instructions.
>
> Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper")
> Signed-off-by: Pu Lehui <pulehui@huawei.com>
> Suggested-by: Björn Töpel <bjorn@rivosinc.com>
> ---
>  arch/riscv/net/bpf_jit_comp64.c | 20 +++++++++++++++++++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
> index eb99df41fa33..9723f34f7a06 100644
> --- a/arch/riscv/net/bpf_jit_comp64.c
> +++ b/arch/riscv/net/bpf_jit_comp64.c
> @@ -139,6 +139,19 @@ static bool in_auipc_jalr_range(s64 val)
>  		val < ((1L << 31) - (1L << 11));
>  }
>
> +/* Emit fixed-length instructions for address */
> +static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
> +{
> +	u64 ip = (u64)(ctx->insns + ctx->ninsns);
> +	s64 off = addr - ip;
> +	s64 upper = (off + (1 << 11)) >> 12;
> +	s64 lower = ((off & 0xfff) << 52) >> 52;
> +
> +	emit(rv_auipc(rd, upper), ctx);
> +	emit(rv_addi(rd, rd, lower), ctx);
> +}

Nice! Two instructions are better than 6! :-)

One final thing. Please add a sanity check, that the range is correct,
e.g.:

  if (!(addr && in_auipc_addi_range(off)))
          return -1;

Have a look at emit_jump_and_link().


Thanks!
Björn
On 2022/12/2 18:54, Björn Töpel wrote:
> Pu Lehui <pulehui@huaweicloud.com> writes:
>
>> From: Pu Lehui <pulehui@huawei.com>
>>
>> For the BPF_PSEUDO_FUNC instruction, the verifier refills imm with
>> the correct addresses of bpf_calls and then runs the last pass of
>> the JIT. Since emit_imm on RV64 is variable-length, emitting an
>> appropriate number of instructions according to the imm, it may
>> break ctx->offset and lead to unpredictable problems, such as
>> inaccurate jumps. So let's fix it with fixed-length instructions.
>>
>> Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper")
>> Signed-off-by: Pu Lehui <pulehui@huawei.com>
>> Suggested-by: Björn Töpel <bjorn@rivosinc.com>
>> ---
>>  arch/riscv/net/bpf_jit_comp64.c | 20 +++++++++++++++++++-
>>  1 file changed, 19 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
>> index eb99df41fa33..9723f34f7a06 100644
>> --- a/arch/riscv/net/bpf_jit_comp64.c
>> +++ b/arch/riscv/net/bpf_jit_comp64.c
>> @@ -139,6 +139,19 @@ static bool in_auipc_jalr_range(s64 val)
>>  		val < ((1L << 31) - (1L << 11));
>>  }
>>
>> +/* Emit fixed-length instructions for address */
>> +static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
>> +{
>> +	u64 ip = (u64)(ctx->insns + ctx->ninsns);
>> +	s64 off = addr - ip;
>> +	s64 upper = (off + (1 << 11)) >> 12;
>> +	s64 lower = ((off & 0xfff) << 52) >> 52;
>> +
>> +	emit(rv_auipc(rd, upper), ctx);
>> +	emit(rv_addi(rd, rd, lower), ctx);
>> +}
>
> Nice! Two instructions are better than 6! :-)
>
> One final thing. Please add a sanity check, that the range is correct,
> e.g.:
>
>   if (!(addr && in_auipc_addi_range(off)))
>           return -1;
>

Hi Björn,

Sorry for replying so late. For the BPF_PSEUDO_FUNC instruction, the
verifier sets insn[0].imm and insn[1].imm to 1, which makes addr
0x100000001 before the extra pass; ctx->insns is also NULL in the
iteration stage. All of this puts off outside the AUIPC-ADDI range,
and the function would return failure.

We could add some special handling at the different stages, but that
seems a little weird. Besides, I do not really like giving the
emit_addr function a return value. Since a proper address is at least
2-byte aligned, and valid addresses run from 0xffffffff00000000 to
0xffffffffffffffff, we can shift the address one place to the right,
and addr >> 1 will always be within the AUIPC-ADDI range. That way we
can get rid of the range detection. The implementation is as follows:

static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
{
	s64 imm = addr >> 1;
	s64 upper = (imm + (1 << 11)) >> 12;
	s64 lower = imm & 0xfff;

	emit(rv_lui(rd, upper), ctx);
	emit(rv_addi(rd, rd, lower), ctx);
	emit(rv_slli(rd, rd, 1), ctx);
}

What do you think?

Regards,
Lehui

> Have a look at emit_jump_and_link().
>
>
> Thanks!
> Björn
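The shift-by-one trick proposed above can be sanity-checked in user space. The sketch below is a hypothetical simulation, not kernel code: it models RV64 LUI (which sign-extends its 32-bit result) and ADDI (which sign-extends its 12-bit immediate) in plain C, and confirms the three-instruction sequence reconstructs 2-byte-aligned addresses in the 0xffffffff00000000 to 0xffffffffffffffff range.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical simulation of the proposed lui+addi+slli sequence.
 * LUI on RV64 produces (imm20 << 12) sign-extended from bit 31;
 * ADDI adds a sign-extended 12-bit immediate; SLLI shifts left. */
static uint64_t lui_addi_slli(uint64_t addr)
{
	uint64_t imm = addr >> 1;
	uint64_t upper = (imm + (1 << 11)) >> 12;
	uint64_t lower = imm & 0xfff;
	uint64_t rd;

	/* lui rd, upper: keep 32 bits of upper << 12, sign-extend */
	rd = (uint64_t)(int64_t)(int32_t)(uint32_t)(upper << 12);
	/* addi rd, rd, lower: 12-bit immediate is sign-extended */
	rd += (lower < 0x800) ? lower : lower - 0x1000;
	/* slli rd, rd, 1 */
	return rd << 1;
}
```

The (1 << 11) rounding in upper compensates for the cases where the low 12 bits go negative under sign extension, and the final shift both restores the dropped alignment bit and relies on LUI's sign extension to recreate the all-ones upper half of the address.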
Pu Lehui <pulehui@huaweicloud.com> writes:

> Sorry for replying so late. For the BPF_PSEUDO_FUNC instruction, the
> verifier sets insn[0].imm and insn[1].imm to 1, which makes addr
> 0x100000001 before the extra pass; ctx->insns is also NULL in the
> iteration stage. All of this puts off outside the AUIPC-ADDI range,
> and the function would return failure.
> We could add some special handling at the different stages, but that
> seems a little weird. Besides, I do not really like giving the
> emit_addr function a return value.

My rationale is that *if* for some reason the jit is passed an address
that auipc/addi can't represent, we'd like to catch that and not emit
broken code.

> Since a proper address is at least 2-byte aligned, and valid
> addresses run from 0xffffffff00000000 to 0xffffffffffffffff, we can
> shift the address one place to the right, and addr >> 1 will always
> be within the AUIPC-ADDI range. That way we can get rid of the range
> detection. The implementation is as follows:
>
> static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
> {
> 	s64 imm = addr >> 1;
> 	s64 upper = (imm + (1 << 11)) >> 12;
> 	s64 lower = imm & 0xfff;
>
> 	emit(rv_lui(rd, upper), ctx);
> 	emit(rv_addi(rd, rd, lower), ctx);
> 	emit(rv_slli(rd, rd, 1), ctx);
> }
>
> What do you think?

That's a code generation penalty, instead of catching it at code
gen. Don't like! :-) I much prefer the auipc/addi version.

What do you think about the diff (on-top of your work) below?
--8<--
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index aa9410eef77c..7acaf28cb3be 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -137,15 +137,21 @@ static bool in_auipc_jalr_range(s64 val)
 }
 
 /* Emit fixed-length instructions for address */
-static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
+static int emit_addr(u8 rd, u64 addr, bool extra_pass, struct rv_jit_context *ctx)
 {
 	u64 ip = (u64)(ctx->insns + ctx->ninsns);
 	s64 off = addr - ip;
 	s64 upper = (off + (1 << 11)) >> 12;
 	s64 lower = ((off & 0xfff) << 52) >> 52;
 
+	if (extra_pass && !in_auipc_jalr_range(off)) {
+		pr_err("bpf-jit: target offset 0x%llx is out of range\n", off);
+		return -ERANGE;
+	}
+
 	emit(rv_auipc(rd, upper), ctx);
 	emit(rv_addi(rd, rd, lower), ctx);
+	return 0;
 }
 
 /* Emit variable-length instructions for 32-bit and 64-bit imm */
@@ -1061,13 +1067,17 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 {
 	struct bpf_insn insn1 = insn[1];
 	u64 imm64;
+	int ret;
 
 	imm64 = (u64)insn1.imm << 32 | (u32)imm;
-	if (bpf_pseudo_func(insn))
+	if (bpf_pseudo_func(insn)) {
 		/* fixed-length insns for extra jit pass */
-		emit_addr(rd, imm64, ctx);
-	else
+		ret = emit_addr(rd, imm64, extra_pass, ctx);
+		if (ret)
+			return ret;
+	} else {
 		emit_imm(rd, imm64, ctx);
+	}
 
 	return 1;
 }
--8<--

Wouldn't that work?


Björn
On 2022/12/6 15:55, Björn Töpel wrote:
> Pu Lehui <pulehui@huaweicloud.com> writes:
>
>> Sorry for replying so late. For the BPF_PSEUDO_FUNC instruction, the
>> verifier sets insn[0].imm and insn[1].imm to 1, which makes addr
>> 0x100000001 before the extra pass; ctx->insns is also NULL in the
>> iteration stage. All of this puts off outside the AUIPC-ADDI range,
>> and the function would return failure.
>> We could add some special handling at the different stages, but that
>> seems a little weird. Besides, I do not really like giving the
>> emit_addr function a return value.
>
> My rationale is that *if* for some reason the jit is passed an address
> that auipc/addi can't represent, we'd like to catch that and not emit
> broken code.
>
>> Since a proper address is at least 2-byte aligned, and valid
>> addresses run from 0xffffffff00000000 to 0xffffffffffffffff, we can
>> shift the address one place to the right, and addr >> 1 will always
>> be within the AUIPC-ADDI range. That way we can get rid of the range
>> detection. The implementation is as follows:
>>
>> static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
>> {
>> 	s64 imm = addr >> 1;
>> 	s64 upper = (imm + (1 << 11)) >> 12;
>> 	s64 lower = imm & 0xfff;
>>
>> 	emit(rv_lui(rd, upper), ctx);
>> 	emit(rv_addi(rd, rd, lower), ctx);
>> 	emit(rv_slli(rd, rd, 1), ctx);
>> }
>>
>> What do you think?
>
> That's a code generation penalty, instead of catching it at code
> gen. Don't like! :-) I much prefer the auipc/addi version.
>
> What do you think about the diff (on-top of your work) below?
>
> --8<--
> diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
> index aa9410eef77c..7acaf28cb3be 100644
> --- a/arch/riscv/net/bpf_jit_comp64.c
> +++ b/arch/riscv/net/bpf_jit_comp64.c
> @@ -137,15 +137,21 @@ static bool in_auipc_jalr_range(s64 val)
>  }
>  
>  /* Emit fixed-length instructions for address */
> -static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
> +static int emit_addr(u8 rd, u64 addr, bool extra_pass, struct rv_jit_context *ctx)
>  {
>  	u64 ip = (u64)(ctx->insns + ctx->ninsns);
>  	s64 off = addr - ip;
>  	s64 upper = (off + (1 << 11)) >> 12;
>  	s64 lower = ((off & 0xfff) << 52) >> 52;
>  
> +	if (extra_pass && !in_auipc_jalr_range(off)) {
> +		pr_err("bpf-jit: target offset 0x%llx is out of range\n", off);
> +		return -ERANGE;
> +	}
> +
>  	emit(rv_auipc(rd, upper), ctx);
>  	emit(rv_addi(rd, rd, lower), ctx);
> +	return 0;
>  }
>  
>  /* Emit variable-length instructions for 32-bit and 64-bit imm */
> @@ -1061,13 +1067,17 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
>  {
>  	struct bpf_insn insn1 = insn[1];
>  	u64 imm64;
> +	int ret;
>  
>  	imm64 = (u64)insn1.imm << 32 | (u32)imm;
> -	if (bpf_pseudo_func(insn))
> +	if (bpf_pseudo_func(insn)) {
>  		/* fixed-length insns for extra jit pass */
> -		emit_addr(rd, imm64, ctx);
> -	else
> +		ret = emit_addr(rd, imm64, extra_pass, ctx);
> +		if (ret)
> +			return ret;
> +	} else {
>  		emit_imm(rd, imm64, ctx);
> +	}
>  
>  	return 1;
>  }
> --8<--
>
> Wouldn't that work?
>

It definitely works. But auipc+addi may leave some holes in the address
range, while lui+addi+slli supports all kernel and module addresses.
And this might help with future feature porting.

>
> Björn
Pu Lehui <pulehui@huaweicloud.com> writes:

>> Wouldn't that work?
>>
>
> It definitely works. But auipc+addi may leave some holes in the address
> range, while lui+addi+slli supports all kernel and module addresses.
> And this might help with future feature porting.

We're already using auipc/jalr for calls, and I'd say it's *very*
unlikely that we'll hit the non-covered range. I'd say go with
auipc/addi + error, and we can change if this really is a problem.
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index eb99df41fa33..9723f34f7a06 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -139,6 +139,19 @@ static bool in_auipc_jalr_range(s64 val)
 		val < ((1L << 31) - (1L << 11));
 }
 
+/* Emit fixed-length instructions for address */
+static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
+{
+	u64 ip = (u64)(ctx->insns + ctx->ninsns);
+	s64 off = addr - ip;
+	s64 upper = (off + (1 << 11)) >> 12;
+	s64 lower = ((off & 0xfff) << 52) >> 52;
+
+	emit(rv_auipc(rd, upper), ctx);
+	emit(rv_addi(rd, rd, lower), ctx);
+}
+
+/* Emit variable-length instructions for 32-bit and 64-bit imm */
 static void emit_imm(u8 rd, s64 val, struct rv_jit_context *ctx)
 {
 	/* Note that the immediate from the add is sign-extended,
@@ -1053,7 +1066,12 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		u64 imm64;
 
 		imm64 = (u64)insn1.imm << 32 | (u32)imm;
-		emit_imm(rd, imm64, ctx);
+		if (bpf_pseudo_func(insn))
+			/* fixed-length insns for extra jit pass */
+			emit_addr(rd, imm64, ctx);
+		else
+			emit_imm(rd, imm64, ctx);
+
 		return 1;
 	}
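As a sanity check on the arithmetic in the patch's emit_addr(), the following hypothetical user-space sketch (not kernel code) recombines the AUIPC and ADDI halves and confirms they sum back to the original pc-relative offset. An explicit sext12() helper stands in for the kernel's shift-based sign extension.

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend the low 12 bits, as ADDI interprets its immediate */
static int64_t sext12(int64_t v)
{
	v &= 0xfff;
	return v < 0x800 ? v : v - 0x1000;
}

/* What auipc (upper << 12, pc-relative) plus addi (lower) compute
 * together for a given offset */
static int64_t recombine(int64_t off)
{
	int64_t upper = (off + (1 << 11)) >> 12;
	int64_t lower = sext12(off);

	/* multiply instead of shifting, so negative upper is well-defined */
	return upper * (1 << 12) + lower;
}
```

For any offset inside the roughly ±2 GiB AUIPC+ADDI window, recombine(off) equals off: the (1 << 11) rounding bumps upper by one exactly when the low 12 bits sign-extend negative, which is why the range check Björn suggested is the only extra guard the two-instruction sequence needs.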