From patchwork Wed Nov 23 14:15:44 2022
X-Patchwork-Submitter: Hao Sun
X-Patchwork-Id: 25007
From: Hao Sun
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com,
    andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org,
    davem@davemloft.net, linux-kernel@vger.kernel.org, Hao Sun
Subject: [PATCH bpf-next 1/3] bpf: Sanitize STX/ST in jited BPF progs with KASAN
Date: Wed, 23 Nov 2022 22:15:44 +0800
Message-Id: <20221123141546.238297-2-sunhao.th@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123141546.238297-1-sunhao.th@gmail.com>
References: <20221123141546.238297-1-sunhao.th@gmail.com>

Make the verifier sanitize STX/ST insns in jited BPF programs by
dispatching their target addresses to kernel functions that are
instrumented by KASAN. Only STX/ST insns that are not part of patches
added by other passes using REG_AX, and whose dst_reg is not R10, are
sanitized: the former conflict with our use of REG_AX, and the latter
are trivial for the verifier to check, so both are skipped to reduce
the footprint.

The instrumentation happens in two places: fixup and jit. During fixup,
R0 and R1 are backed up or exchanged with dst_reg, the address to check
is stored into R1, and a call to the corresponding bpf_asan_storeN() is
inserted. In the jit, R1~R5 are pushed on the stack before calling the
sanitize function. The sanitize functions are instrumented with KASAN
and simply write the given number of bytes to the target addr; KASAN
performs the actual checking. An extra Kconfig option enables this.

Signed-off-by: Hao Sun
---
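As an illustration of the rewrite described above (a review-only sketch,
not part of the patch): for a store through a register other than
R0/R1/R10, e.g. *(u64 *)(r2 - 8) = 1, the fixup below ends up emitting
the sequence sketched here. REG_AX is the verifier-internal scratch
register, and the register shuffling and call target follow the
"dst is another reg" branch below; the same sequence is asserted by the
selftests in patch 3/3:

	/* original insn BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1) becomes: */
	BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1),   /* back up R1 in AX           */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),    /* R1 = dst_reg (addr base)   */
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),    /* park R0 in dst_reg         */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),  /* apply insn->off            */
	BPF_EMIT_CALL(bpf_asan_store64),        /* KASAN-instrumented check   */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),   /* undo insn->off             */
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),    /* restore R0                 */
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),    /* restore dst_reg            */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX),   /* restore R1                 */
	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),   /* original store             */

The call itself is safe for the surrounding program because the jit
change below pushes rdi/rsi/rdx/rcx/r8 (BPF R1~R5) around it.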
 arch/x86/net/bpf_jit_comp.c |  32 +++++++++++
 include/linux/bpf.h         |   9 ++++
 kernel/bpf/Kconfig          |  14 +++++
 kernel/bpf/verifier.c       | 102 ++++++++++++++++++++++++++++++++++++
 4 files changed, 157 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index cec5195602bc..ceaef69adc49 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -338,7 +338,39 @@ static int emit_patch(u8 **pprog, void *func, void *ip, u8 opcode)
 
 static int emit_call(u8 **pprog, void *func, void *ip)
 {
+#ifdef CONFIG_BPF_PROG_KASAN
+	s64 offset;
+	u8 *prog = *pprog;
+	bool is_sanitize =
+		func == bpf_asan_store8 || func == bpf_asan_store16 ||
+		func == bpf_asan_store32 || func == bpf_asan_store64;
+
+	if (!is_sanitize)
+		return emit_patch(pprog, func, ip, 0xE8);
+
+	/* Six extra bytes from push insns */
+	offset = func - (ip + X86_PATCH_SIZE + 6);
+	BUG_ON(!is_simm32(offset));
+
+	/* R1 has the addr to check, back up R1~R5 here, we don't
+	 * have free regs during the fixup.
+	 */
+	EMIT1(0x57);		/* push rdi */
+	EMIT1(0x56);		/* push rsi */
+	EMIT1(0x52);		/* push rdx */
+	EMIT1(0x51);		/* push rcx */
+	EMIT2(0x41, 0x50);	/* push r8 */
+	EMIT1_off32(0xE8, offset);
+	EMIT2(0x41, 0x58);	/* pop r8 */
+	EMIT1(0x59);		/* pop rcx */
+	EMIT1(0x5a);		/* pop rdx */
+	EMIT1(0x5e);		/* pop rsi */
+	EMIT1(0x5f);		/* pop rdi */
+	*pprog = prog;
+	return 0;
+#else
 	return emit_patch(pprog, func, ip, 0xE8);
+#endif
 }
 
 static int emit_jump(u8 **pprog, void *func, void *ip)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index c9eafa67f2a2..a7eb99928fee 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2835,4 +2835,13 @@ static inline bool type_is_alloc(u32 type)
 	return type & MEM_ALLOC;
 }
 
+#ifdef CONFIG_BPF_PROG_KASAN
+
+u64 bpf_asan_store8(u8 *addr);
+u64 bpf_asan_store16(u16 *addr);
+u64 bpf_asan_store32(u32 *addr);
+u64 bpf_asan_store64(u64 *addr);
+
+#endif /* CONFIG_BPF_PROG_KASAN */
+
 #endif /* _LINUX_BPF_H */

diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index 2dfe1079f772..aeba6059b9e2 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -99,4 +99,18 @@ config BPF_LSM
 
 	  If you are unsure how to answer this question, answer N.
 
+config BPF_PROG_KASAN
+	bool "Enable BPF Program Address Sanitizer"
+	depends on BPF_JIT
+	depends on KASAN
+	depends on X86_64
+	help
+	  Enables instrumentation of LDX/STX/ST insns to capture memory
+	  access errors in BPF programs that the verifier misses.
+
+	  The actual checking is done by KASAN; this feature adds overhead
+	  and is mainly intended for testing.
+
+	  If you are unsure how to answer this question, answer N.
+
 endmenu # "BPF subsystem"

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9528a066cfa5..af214f0191e0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15221,6 +15221,25 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
+#ifdef CONFIG_BPF_PROG_KASAN
+
+/* These are functions instrumented with KASAN for the actual sanitizing. */
+
+#define BPF_ASAN_STORE(n)				\
+	notrace u64 bpf_asan_store##n(u##n *addr)	\
+	{						\
+		u##n ret = *addr;			\
+		*addr = ret;				\
+		return ret;				\
+	}
+
+BPF_ASAN_STORE(8);
+BPF_ASAN_STORE(16);
+BPF_ASAN_STORE(32);
+BPF_ASAN_STORE(64);
+
+#endif
+
 /* Do various post-verification rewrites in a single program pass.
  * These rewrites simplify JIT and interpreter implementations.
  */
@@ -15238,6 +15257,9 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 	struct bpf_prog *new_prog;
 	struct bpf_map *map_ptr;
 	int i, ret, cnt, delta = 0;
+#ifdef CONFIG_BPF_PROG_KASAN
+	bool in_patch_use_ax = false;
+#endif
 
 	for (i = 0; i < insn_cnt; i++, insn++) {
 		/* Make divide-by-zero exceptions impossible. */
@@ -15354,6 +15376,86 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+#ifdef CONFIG_BPF_PROG_KASAN
+		/* Patches that use REG_AX conflict with us, skip them.
+		 * This starts with the first use of REG_AX, and stops only
+		 * when we see the next ldx/stx/st insn with valid aux info.
+		 */
+		aux = &env->insn_aux_data[i + delta];
+		if (in_patch_use_ax && (int)aux->ptr_type != 0)
+			in_patch_use_ax = false;
+		if (insn->dst_reg == BPF_REG_AX || insn->src_reg == BPF_REG_AX)
+			in_patch_use_ax = true;
+
+		/* Sanitize ST/STX operation. */
+		if (BPF_CLASS(insn->code) == BPF_ST ||
+		    BPF_CLASS(insn->code) == BPF_STX) {
+			struct bpf_insn sanitize_fn;
+			struct bpf_insn *patch = &insn_buf[0];
+
+			/* Skip st/stx to R10, they're trivial to check.
+			 */
+			if (in_patch_use_ax || insn->dst_reg == BPF_REG_10 ||
+			    BPF_MODE(insn->code) == BPF_NOSPEC)
+				continue;
+
+			switch (BPF_SIZE(insn->code)) {
+			case BPF_B:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store8);
+				break;
+			case BPF_H:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store16);
+				break;
+			case BPF_W:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store32);
+				break;
+			case BPF_DW:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store64);
+				break;
+			}
+
+			/* Backup R0 and R1, store `dst + off` to R1, invoke the
+			 * sanitize fn, and then restore each reg.
+			 */
+			if (insn->dst_reg == BPF_REG_1) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_0);
+			} else if (insn->dst_reg == BPF_REG_0) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_0);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, insn->dst_reg);
+				*patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_0);
+			}
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, insn->off);
+			/* Call sanitize fn, R1~R5 are saved to stack during jit. */
+			*patch++ = sanitize_fn;
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -insn->off);
+			if (insn->dst_reg == BPF_REG_1) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_AX);
+			} else if (insn->dst_reg == BPF_REG_0) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, insn->dst_reg);
+				*patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			}
+			*patch++ = *insn;
+			cnt = patch - insn_buf;
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
+#endif
+
 		if (insn->code != (BPF_JMP | BPF_CALL))
 			continue;
 		if (insn->src_reg == BPF_PSEUDO_CALL)

From patchwork Wed Nov 23 14:15:45 2022
X-Patchwork-Submitter: Hao Sun
X-Patchwork-Id: 25011
From: Hao Sun
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com,
    andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org,
    davem@davemloft.net, linux-kernel@vger.kernel.org, Hao Sun
Subject: [PATCH bpf-next 2/3] bpf: Sanitize LDX in jited BPF progs with KASAN
Date: Wed, 23 Nov 2022 22:15:45 +0800
Message-Id: <20221123141546.238297-3-sunhao.th@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123141546.238297-1-sunhao.th@gmail.com>
References: <20221123141546.238297-1-sunhao.th@gmail.com>

Make the verifier sanitize LDX insns in jited BPF programs as well;
this is a little more involved than STX/ST. Here dst_reg and REG_AX are
both free, so different insns that back up R0 and R1 are inserted
before the checking function is called, depending on how dst_reg and
src_reg relate to R0/R1; the call to the checking function is then
inserted, and finally the registers are restored.

Signed-off-by: Hao Sun
---
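As with patch 1/3, a sketch of the resulting rewrite may help review
(illustrative only, not part of the patch): for a load such as
r3 = *(u64 *)(r2 - 8), where neither src_reg nor dst_reg is R0 or R1,
the fixup below emits roughly the following, with R0 parked in the dead
dst_reg instead of in REG_AX; this matches the "src is other regs, dst
is other regs" case in the selftests of patch 3/3:

	BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1),    /* back up R1 in AX            */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),     /* R1 = src_reg (addr base)    */
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),     /* park R0 in the free dst_reg */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),   /* apply insn->off             */
	BPF_EMIT_CALL(bpf_asan_load64),          /* KASAN-instrumented check    */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),    /* undo insn->off              */
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),     /* restore R0                  */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX),    /* restore R1                  */
	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_2, -8), /* original load         */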
 arch/x86/net/bpf_jit_comp.c |  4 +-
 include/linux/bpf.h         |  5 +++
 kernel/bpf/verifier.c       | 88 +++++++++++++++++++++++++++++++++++++
 3 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index ceaef69adc49..0fc67383ffa8 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -343,7 +343,9 @@ static int emit_call(u8 **pprog, void *func, void *ip)
 	u8 *prog = *pprog;
 	bool is_sanitize =
 		func == bpf_asan_store8 || func == bpf_asan_store16 ||
-		func == bpf_asan_store32 || func == bpf_asan_store64;
+		func == bpf_asan_store32 || func == bpf_asan_store64 ||
+		func == bpf_asan_load8 || func == bpf_asan_load16 ||
+		func == bpf_asan_load32 || func == bpf_asan_load64;
 
 	if (!is_sanitize)
 		return emit_patch(pprog, func, ip, 0xE8);

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a7eb99928fee..350d890a39ac 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2842,6 +2842,11 @@ u64 bpf_asan_store16(u16 *addr);
 u64 bpf_asan_store32(u32 *addr);
 u64 bpf_asan_store64(u64 *addr);
 
+u64 bpf_asan_load8(u8 *addr);
+u64 bpf_asan_load16(u16 *addr);
+u64 bpf_asan_load32(u32 *addr);
+u64 bpf_asan_load64(u64 *addr);
+
 #endif /* CONFIG_BPF_PROG_KASAN */
 
 #endif /* _LINUX_BPF_H */

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index af214f0191e0..c0c11d24dc7b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15238,6 +15238,17 @@ BPF_ASAN_STORE(16);
 BPF_ASAN_STORE(32);
 BPF_ASAN_STORE(64);
 
+#define BPF_ASAN_LOAD(n)				\
+	notrace u64 bpf_asan_load##n(u##n *addr)	\
+	{						\
+		return *addr;				\
+	}
+
+BPF_ASAN_LOAD(8);
+BPF_ASAN_LOAD(16);
+BPF_ASAN_LOAD(32);
+BPF_ASAN_LOAD(64);
+
 #endif
 
 /* Do various post-verification rewrites in a single program pass.
@@ -15454,6 +15465,83 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			insn = new_prog->insnsi + i + delta;
 			continue;
 		}
+
+		/* Sanitize LDX operation. */
+		if (BPF_CLASS(insn->code) == BPF_LDX) {
+			struct bpf_insn sanitize_fn;
+			struct bpf_insn *patch = &insn_buf[0];
+			bool dst_is_r0 = insn->dst_reg == BPF_REG_0;
+			bool dst_is_r1 = insn->dst_reg == BPF_REG_1;
+
+			if (in_patch_use_ax || insn->src_reg == BPF_REG_10)
+				continue;
+
+			switch (BPF_SIZE(insn->code)) {
+			case BPF_B:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_load8);
+				break;
+			case BPF_H:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_load16);
+				break;
+			case BPF_W:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_load32);
+				break;
+			case BPF_DW:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_load64);
+				break;
+			}
+
+			/* Backup R0 and R1, REG_AX and dst_reg are free. */
+			if (insn->src_reg == BPF_REG_1) {
+				if (!dst_is_r0)
+					*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_0);
+			} else if (insn->src_reg == BPF_REG_0) {
+				if (!dst_is_r1)
+					*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_0);
+			} else if (!dst_is_r1) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, insn->src_reg);
+				if (!dst_is_r0)
+					*patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_0);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, insn->src_reg);
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_0);
+			}
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, insn->off);
+			/* Invoke sanitize fn, R1~R5 are stored to stack during jit. */
+			*patch++ = sanitize_fn;
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -insn->off);
+			if (insn->src_reg == BPF_REG_1) {
+				if (!dst_is_r0)
+					*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_AX);
+			} else if (insn->src_reg == BPF_REG_0) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
+				if (!dst_is_r1)
+					*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			} else if (!dst_is_r1) {
+				if (!dst_is_r0)
+					*patch++ = BPF_MOV64_REG(BPF_REG_0, insn->dst_reg);
+				if (insn->src_reg == insn->dst_reg)
+					*patch++ = BPF_MOV64_REG(insn->src_reg, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_AX);
+			}
+			*patch++ = *insn;
+			cnt = patch - insn_buf;
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
 #endif
 
 		if (insn->code != (BPF_JMP | BPF_CALL))

From patchwork Wed Nov 23 14:15:46 2022
X-Patchwork-Submitter: Hao Sun
X-Patchwork-Id: 24998
From: Hao Sun
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com,
    andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org,
    davem@davemloft.net, linux-kernel@vger.kernel.org, Hao Sun
Subject: [PATCH bpf-next 3/3] selftests/bpf: Add tests for LDX/STX/ST sanitize
Date: Wed, 23 Nov 2022 22:15:46 +0800
Message-Id: <20221123141546.238297-4-sunhao.th@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123141546.238297-1-sunhao.th@gmail.com>
References: <20221123141546.238297-1-sunhao.th@gmail.com>

Add tests for the LDX/STX/ST instrumentation, covering each possible
case: four cases for STX/ST (dst_reg is R0, R1, R10, or another reg)
and ten cases for LDX. All new and existing selftests pass. An example
slab-out-of-bounds read report is also available; it was obtained by
exploiting CVE-2022-23222 and can be reproduced on Linux v5.10:
https://pastebin.com/raw/Ee1Cw492.
Signed-off-by: Hao Sun
---
 .../selftests/bpf/verifier/sanitize_st_ldx.c | 323 ++++++++++++++++++
 1 file changed, 323 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c

diff --git a/tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c b/tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c
new file mode 100644
index 000000000000..3fd571abc5cc
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c
@@ -0,0 +1,323 @@
+#ifdef CONFIG_BPF_PROG_KASAN
+{
+	"sanitize stx: dst is R1",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, MAX_BPF_REG),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	},
+},
+{
+	"sanitize stx: dst is R0",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	},
+},
+{
+	"sanitize stx: dst is R10",
+	.insns = {
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.unexpected_insns = {
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	},
+},
+{
+	"sanitize stx: dst is other regs",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	},
+},
+{
+	"sanitize ldx: src is R1, dst is R0",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
+	},
+},
+{
+	"sanitize ldx: src is R1, dst is R1",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, -8),
+	},
+},
+{
+	"sanitize ldx: src is R1, dst is other regs",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -8),
+	},
+},
+{
+	"sanitize ldx: src is R0, dst is R1",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+	},
+},
+{
+	"sanitize ldx: src is R0, dst is R0",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -8),
+	},
+},
+{
+	"sanitize ldx: src is R0, dst is other regs",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, -8),
+	},
+},
+{
+	"sanitize ldx: src is other regs, dst is R0",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8),
+	},
+},
+{
+	"sanitize ldx: src is other regs, dst is R1",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -8),
+	},
+},
+{
+	"sanitize ldx: src is other regs, dst is self",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
+	},
+},
+{
+	"sanitize ldx: src is other regs, dst is other regs",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_2, -8),
+	},
+},
+#endif /* CONFIG_BPF_PROG_KASAN */