From patchwork Mon Oct 24 11:28:18 2022
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 9777
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Kumar Kartikeya Dwivedi,
    Alexei Starovoitov, Sasha Levin
Subject: [PATCH 5.15 154/530] bpf: Fix reference state management for synchronous callbacks
Date: Mon, 24 Oct 2022 13:28:18 +0200
Message-Id: <20221024113052.035709377@linuxfoundation.org>
In-Reply-To: <20221024113044.976326639@linuxfoundation.org>
References: <20221024113044.976326639@linuxfoundation.org>
X-Mailer: git-send-email 2.38.1
User-Agent: quilt/0.67
X-Mailing-List: linux-kernel@vger.kernel.org

From: Kumar Kartikeya Dwivedi

[ Upstream commit 9d9d00ac29d0ef7ce426964de46fa6b380357d0a ]

Currently, verifier verifies callback functions (sync and async) as if
they will be executed once, (i.e. it explores execution state as if the
function was being called once). The next insn to explore is set to
start of subprog and the exit from nested frame is handled using
curframe > 0 and prepare_func_exit.
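For readers less familiar with BPF callbacks: a "sync" callback here means a
BPF function that a helper such as bpf_for_each_map_elem() invokes during the
same program run. A minimal sketch of that pattern follows; the map, section
and function names are illustrative and not taken from this patch:

/* Illustrative sketch of a for_each-style sync callback. At runtime
 * check_elem() runs once per map element, but the verifier explores it
 * as a single symbolic call from the bpf_for_each_map_elem() call site.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} vals SEC(".maps");

static __u64 check_elem(struct bpf_map *map, __u32 *key, __u64 *val,
			void *callback_ctx)
{
	/* returning non-zero stops the iteration early */
	return *val == 0;
}

SEC("tc")
int scan_vals(struct __sk_buff *skb)
{
	bpf_for_each_map_elem(&vals, check_elem, NULL, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

That single-call exploration is the assumption the rest of this log, and the
fix below, are about.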
In case of async callback it uses a customized variant of push_stack
simulating a kind of branch to set up custom state and execution
context for the async callback.

While this approach is simple and works when callback really will be
executed only once, it is unsafe for all of our current helpers which
are for_each style, i.e. they execute the callback multiple times.

A callback releasing acquired references of the caller may do so
multiple times, but currently verifier sees it as one call inside the
frame, which then returns to caller. Hence, it thinks it released some
reference that the cb e.g. got access through callback_ctx (register
filled inside cb from spilled typed register on stack).

Similarly, it may see that an acquire call is unpaired inside the
callback, so the caller will copy the reference state of callback and
then will have to release the register with new ref_obj_ids. But again,
the callback may execute multiple times, but the verifier will only
account for acquired references for a single symbolic execution of the
callback, which will cause leaks.

Note that for async callback case, things are different. While currently
we have bpf_timer_set_callback which only executes it once, even for
multiple executions it would be safe, as reference state is NULL and
check_reference_leak would force program to release state before
BPF_EXIT. The state is also unaffected by analysis for the caller frame.
Hence async callback is safe.

Since we want the reference state to be accessible, e.g. for pointers
loaded from stack through callback_ctx's PTR_TO_STACK, we still have to
copy caller's reference_state to callback's bpf_func_state, but we
enforce that whatever references it adds to that reference_state has
been released before it hits BPF_EXIT. This requires introducing a new
callback_ref member in the reference state to distinguish between caller
vs callee references. Hence, check_reference_leak now errors out if it
sees we are in callback_fn and we have not released callback_ref refs.

Since there can be multiple nested callbacks, like frame 0 -> cb1 -> cb2
etc. we need to also distinguish between whether this particular ref
belongs to this callback frame or parent, and only error for our own, so
we store state->frameno (which is always non-zero for callbacks).

In short, callbacks can read parent reference_state, but cannot mutate
it, to be able to use pointers acquired by the caller. They must only
undo their changes (by releasing their own acquired_refs before
BPF_EXIT) on top of caller reference_state before returning (at which
point the caller and callback state will match anyway, so no need to
copy it back to caller).

Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper")
Signed-off-by: Kumar Kartikeya Dwivedi
Link: https://lore.kernel.org/r/20220823013125.24938-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov
Signed-off-by: Sasha Levin
---
 include/linux/bpf_verifier.h | 11 ++++++++++
 kernel/bpf/verifier.c        | 42 ++++++++++++++++++++++++++++--------
 2 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index bb1cc3fbc4ba..5625e19ae95b 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -192,6 +192,17 @@ struct bpf_reference_state {
 	 * is used purely to inform the user of a reference leak.
 	 */
 	int insn_idx;
+	/* There can be a case like:
+	 * main (frame 0)
+	 *  cb (frame 1)
+	 *   func (frame 3)
+	 *    cb (frame 4)
+	 * Hence for frame 4, if callback_ref just stored boolean, it would be
+	 * impossible to distinguish nested callback refs. Hence store the
+	 * frameno and compare that to callback_ref in check_reference_leak when
+	 * exiting a callback function.
+	 */
+	int callback_ref;
 };
 
 /* state of the program:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5c9ebcbf6f5f..c3a4158e838e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -851,6 +851,7 @@ static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
 	id = ++env->id_gen;
 	state->refs[new_ofs].id = id;
 	state->refs[new_ofs].insn_idx = insn_idx;
+	state->refs[new_ofs].callback_ref = state->in_callback_fn ? state->frameno : 0;
 
 	return id;
 }
@@ -863,6 +864,9 @@ static int release_reference_state(struct bpf_func_state *state, int ptr_id)
 	last_idx = state->acquired_refs - 1;
 	for (i = 0; i < state->acquired_refs; i++) {
 		if (state->refs[i].id == ptr_id) {
+			/* Cannot release caller references in callbacks */
+			if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
+				return -EINVAL;
 			if (last_idx && i != last_idx)
 				memcpy(&state->refs[i], &state->refs[last_idx],
 				       sizeof(*state->refs));
@@ -6005,10 +6009,17 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
 		caller->regs[BPF_REG_0] = *r0;
 	}
 
-	/* Transfer references to the caller */
-	err = copy_reference_state(caller, callee);
-	if (err)
-		return err;
+	/* callback_fn frame should have released its own additions to parent's
+	 * reference state at this point, or check_reference_leak would
+	 * complain, hence it must be the same as the caller. There is no need
+	 * to copy it back.
+	 */
+	if (!callee->in_callback_fn) {
+		/* Transfer references to the caller */
+		err = copy_reference_state(caller, callee);
+		if (err)
+			return err;
+	}
 
 	*insn_idx = callee->callsite + 1;
 	if (env->log.level & BPF_LOG_LEVEL) {
@@ -6129,13 +6140,20 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 static int check_reference_leak(struct bpf_verifier_env *env)
 {
 	struct bpf_func_state *state = cur_func(env);
+	bool refs_lingering = false;
 	int i;
 
+	if (state->frameno && !state->in_callback_fn)
+		return 0;
+
 	for (i = 0; i < state->acquired_refs; i++) {
+		if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
+			continue;
 		verbose(env, "Unreleased reference id=%d alloc_insn=%d\n",
 			state->refs[i].id, state->refs[i].insn_idx);
+		refs_lingering = true;
 	}
-	return state->acquired_refs ? -EINVAL : 0;
+	return refs_lingering ? -EINVAL : 0;
 }
 
 static int check_bpf_snprintf_call(struct bpf_verifier_env *env,
@@ -11150,6 +11168,16 @@ static int do_check(struct bpf_verifier_env *env)
 					return -EINVAL;
 				}
 
+				/* We must do check_reference_leak here before
+				 * prepare_func_exit to handle the case when
+				 * state->curframe > 0, it may be a callback
+				 * function, for which reference_state must
+				 * match caller reference state when it exits.
+				 */
+				err = check_reference_leak(env);
+				if (err)
+					return err;
+
 				if (state->curframe) {
 					/* exit from nested function */
 					err = prepare_func_exit(env, &env->insn_idx);
@@ -11159,10 +11187,6 @@ static int do_check(struct bpf_verifier_env *env)
 					continue;
 				}
 
-				err = check_reference_leak(env);
-				if (err)
-					return err;
-
 				err = check_return_code(env);
 				if (err)
 					return err;
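Seen from the program author's side, the rule enforced here is that a callback
must release every reference it acquires before returning, since a
for_each-style callback may run once per element at runtime. Below is a hedged
sketch of the shape that remains acceptable, using
bpf_ringbuf_reserve()/bpf_ringbuf_submit() as the acquire/release pair; the
map, section and function names are illustrative and not taken from this patch
or its selftests:

/* Illustrative sketch: the callback acquires and releases its own
 * reference on every invocation.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 4096);
} rb SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} vals SEC(".maps");

static __u64 copy_elem(struct bpf_map *map, __u32 *key, __u64 *val,
		       void *callback_ctx)
{
	__u64 *slot;

	slot = bpf_ringbuf_reserve(&rb, sizeof(*slot), 0);	/* acquire */
	if (!slot)
		return 1;	/* stop iterating on failure */
	*slot = *val;
	bpf_ringbuf_submit(slot, 0);				/* release */
	return 0;
}

SEC("tc")
int copy_all(struct __sk_buff *skb)
{
	bpf_for_each_map_elem(&vals, copy_elem, NULL, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

If the bpf_ringbuf_submit() (or a bpf_ringbuf_discard()) were left out, the
patched verifier should reject the program with the "Unreleased reference
id=%d alloc_insn=%d" message printed by check_reference_leak(), instead of
accounting for only a single acquisition as before.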
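Conversely, references acquired by the caller remain usable inside the
callback, which is why the caller's reference_state is still copied into the
callback frame, but the callback may not release them: with this patch
release_reference_state() returns -EINVAL for such an attempt. A hedged sketch
of that direction, again with illustrative names and using
bpf_sk_lookup_tcp()/bpf_sk_release() as the reference pair:

/* Illustrative sketch: the caller acquires a socket reference, the
 * callback only reads through it, and only the caller releases it.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} vals SEC(".maps");

struct cb_ctx {
	struct bpf_sock *sk;	/* caller-acquired reference, spilled to stack */
};

static __u64 tag_elem(struct bpf_map *map, __u32 *key, __u64 *val,
		      void *callback_ctx)
{
	struct cb_ctx *c = callback_ctx;

	if (c->sk)
		*val = c->sk->protocol;	/* reading the caller's ref is fine */
	/* calling bpf_sk_release(c->sk) here is what the patch now rejects */
	return 0;
}

SEC("tc")
int tag_all(struct __sk_buff *skb)
{
	struct bpf_sock_tuple tuple = {};
	struct cb_ctx ctx = {};

	ctx.sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
				   BPF_F_CURRENT_NETNS, 0);
	if (!ctx.sk)
		return 0;

	bpf_for_each_map_elem(&vals, tag_elem, &ctx, 0);

	bpf_sk_release(ctx.sk);	/* the caller pairs its own acquire/release */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Whether a given kernel accepts passing the referenced socket through
callback_ctx like this depends on the spilled-register handling described in
the log above; the point of the sketch is only the split of responsibilities:
the callback reads, the caller releases.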