From patchwork Fri Oct 14 20:28:50 2022
X-Patchwork-Submitter: David Vernet <void@manifault.com>
X-Patchwork-Id: 2817
From: David Vernet <void@manifault.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH v4 1/3] bpf: Allow trusted pointers to be passed to
 KF_TRUSTED_ARGS kfuncs
Date: Fri, 14 Oct 2022 15:28:50 -0500
Message-Id: <20221014202852.2491657-2-void@manifault.com>
In-Reply-To: <20221014202852.2491657-1-void@manifault.com>
References: <20221014202852.2491657-1-void@manifault.com>
Kfuncs currently support specifying the KF_TRUSTED_ARGS flag to signal
to the verifier that it should enforce that a BPF program passes it a
"safe", trusted pointer. Currently, "safe" means that the pointer is
either PTR_TO_CTX, or is refcounted. There may be cases, however, where
the kernel passes a BPF program a safe / trusted pointer to an object
that the BPF program wishes to use as a kptr, but because the object
does not yet have a ref_obj_id from the perspective of the verifier,
the program would be unable to pass it to a
KF_ACQUIRE | KF_TRUSTED_ARGS kfunc.

The solution is to expand the set of pointers that are considered
trusted according to KF_TRUSTED_ARGS, so that programs can invoke
kfuncs with these pointers without getting rejected by the verifier.

There is already a PTR_UNTRUSTED flag that is set in some scenarios,
such as when a BPF program reads a kptr directly from a map without
performing a bpf_kptr_xchg() call. These pointers of course can and
should be rejected by the verifier. Unfortunately, however,
PTR_UNTRUSTED does not cover all the cases for safety that need to be
addressed to adequately protect kfuncs. Specifically, pointers obtained
by a BPF program "walking" a struct are _not_ considered PTR_UNTRUSTED
according to BPF. For example, say that we were to add a kfunc called
bpf_task_acquire(), with KF_ACQUIRE | KF_TRUSTED_ARGS, to acquire a
struct task_struct *. If we only used PTR_UNTRUSTED to signal that a
task was unsafe to pass to a kfunc, the verifier would mistakenly allow
the following unsafe BPF program to be loaded:

SEC("tp_btf/task_newtask")
int BPF_PROG(unsafe_acquire_task,
	     struct task_struct *task,
	     u64 clone_flags)
{
	struct task_struct *acquired, *nested;

	nested = task->last_wakee;

	/* Would not be rejected by the verifier. */
	acquired = bpf_task_acquire(nested);
	if (!acquired)
		return 0;

	bpf_task_release(acquired);
	return 0;
}

To address this, this patch defines a new type flag called PTR_NESTED
which tracks whether a PTR_TO_BTF_ID pointer was retrieved from walking
a struct. A pointer passed directly from the kernel begins with
(PTR_NESTED & type) == 0, meaning of course that it is not nested. Any
pointer received from walking that object, however, would inherit that
flag and become a nested pointer.

With that flag, this patch also updates btf_check_func_arg_match() to
only flag a PTR_TO_BTF_ID object as requiring a refcount if it has any
type modifiers (which of course includes both PTR_UNTRUSTED and
PTR_NESTED). Otherwise, the pointer passes this check and continues
onto the others in btf_check_func_arg_match().

A subsequent patch will add kfuncs for storing a struct task_struct *
as a kptr, and then another patch will validate this feature by
ensuring that the verifier rejects a kfunc invocation with a nested
pointer.

Signed-off-by: David Vernet <void@manifault.com>
---
 include/linux/bpf.h                          |  6 ++++++
 kernel/bpf/btf.c                             | 11 ++++++++++-
 kernel/bpf/verifier.c                        | 12 +++++++++++-
 tools/testing/selftests/bpf/verifier/calls.c |  4 ++--
 4 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9e7d46d16032..b624024edb4e 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -457,6 +457,12 @@ enum bpf_type_flag {
 	/* Size is known at compile time. */
 	MEM_FIXED_SIZE		= BIT(10 + BPF_BASE_TYPE_BITS),
 
+	/* PTR was obtained from walking a struct. This is used with
+	 * PTR_TO_BTF_ID to determine whether the pointer is safe to pass to a
+	 * kfunc with KF_TRUSTED_ARGS.
+	 */
+	PTR_NESTED		= BIT(11 + BPF_BASE_TYPE_BITS),
+
 	__BPF_TYPE_FLAG_MAX,
 	__BPF_TYPE_LAST_FLAG	= __BPF_TYPE_FLAG_MAX - 1,
 };
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index eba603cec2c5..3d7bad11b10b 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6333,8 +6333,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
 	/* Check if argument must be a referenced pointer, args + i has
 	 * been verified to be a pointer (after skipping modifiers).
 	 * PTR_TO_CTX is ok without having non-zero ref_obj_id.
+	 *
+	 * All object pointers must be refcounted, other than:
+	 * - PTR_TO_CTX
+	 * - Trusted pointers (i.e. pointers with no type modifiers)
 	 */
-	if (is_kfunc && trusted_args && (obj_ptr && reg->type != PTR_TO_CTX) && !reg->ref_obj_id) {
+	if (is_kfunc &&
+	    trusted_args &&
+	    obj_ptr &&
+	    base_type(reg->type) != PTR_TO_CTX &&
+	    type_flag(reg->type) &&
+	    !reg->ref_obj_id) {
 		bpf_log(log, "R%d must be referenced\n", regno);
 		return -EINVAL;
 	}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6f6d2d511c06..d16a08ca507b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -581,6 +581,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
 		strncpy(prefix, "user_", 32);
 	if (type & MEM_PERCPU)
 		strncpy(prefix, "percpu_", 32);
+	if (type & PTR_NESTED)
+		strncpy(prefix, "nested_", 32);
 	if (type & PTR_UNTRUSTED)
 		strncpy(prefix, "untrusted_", 32);
 
@@ -4558,6 +4560,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 	if (type_flag(reg->type) & PTR_UNTRUSTED)
 		flag |= PTR_UNTRUSTED;
 
+	/* All pointers obtained by walking a struct are nested. */
+	flag |= PTR_NESTED;
+
 	if (atype == BPF_READ && value_regno >= 0)
 		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
 
@@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
 static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
 static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
 static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
-static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
+static const struct bpf_reg_types btf_ptr_types = {
+	.types = {
+		PTR_TO_BTF_ID,
+		PTR_TO_BTF_ID | PTR_NESTED
+	},
+};
 static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } };
 static const struct bpf_reg_types percpu_btf_ptr_types = { .types = { PTR_TO_BTF_ID | MEM_PERCPU } };
 static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index e1a937277b54..496c29b1a298 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -181,7 +181,7 @@
 	},
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "negative offset ptr_ ptr R1 off=-4 disallowed",
+	.errstr = "negative offset nested_ptr_ ptr R1 off=-4 disallowed",
 },
 {
 	"calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset",
@@ -243,7 +243,7 @@
 	},
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "R1 must be referenced",
+	.errstr = "arg#0 pointer type STRUCT prog_test_ref_kfunc must point to scalar",
 },
 {
 	"calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
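For contrast with the unsafe program in the commit message, a minimal
sketch of the accepted case (assuming the bpf_task_acquire() /
bpf_task_release() kfuncs added later in this series): the tracepoint
argument is passed directly by the kernel, so it carries no type
modifiers and remains trusted under KF_TRUSTED_ARGS.

SEC("tp_btf/task_newtask")
int BPF_PROG(safe_acquire_task, struct task_struct *task, u64 clone_flags)
{
	struct task_struct *acquired;

	/* 'task' comes straight from the kernel, so it has neither
	 * PTR_NESTED nor PTR_UNTRUSTED set, and passes the
	 * KF_TRUSTED_ARGS check despite having no ref_obj_id.
	 */
	acquired = bpf_task_acquire(task);
	if (!acquired)
		return 0;

	bpf_task_release(acquired);
	return 0;
}

The selftests in patch 3/3 exercise exactly this accepted case
(test_task_acquire_release_argument) alongside the rejected nested case
(task_kfunc_acquire_trusted_nested).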
From patchwork Fri Oct 14 20:28:51 2022
X-Patchwork-Submitter: David Vernet <void@manifault.com>
X-Patchwork-Id: 2818
From: David Vernet <void@manifault.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH v4 2/3] bpf: Add kfuncs for storing struct task_struct *
 as a kptr
Date: Fri, 14 Oct 2022 15:28:51 -0500
Message-Id: <20221014202852.2491657-3-void@manifault.com>
In-Reply-To: <20221014202852.2491657-1-void@manifault.com>
References: <20221014202852.2491657-1-void@manifault.com>

Now that BPF supports adding new kernel functions with kfuncs, and
storing kernel objects in maps with kptrs, we can add a set of kfuncs
which allow struct task_struct objects to be stored in maps as
referenced kptrs.

The possible use cases for doing this are plentiful. During tracing,
for example, it would be useful to be able to collect some tasks that
performed a certain operation, and then periodically summarize who they
are, which cgroup they're in, how much CPU time they've utilized, etc.

In order to enable this, this patch adds three new kfuncs:

struct task_struct *bpf_task_acquire(struct task_struct *p);
struct task_struct *bpf_task_kptr_get(struct task_struct **pp);
void bpf_task_release(struct task_struct *p);

A follow-on patch will add selftests validating these kfuncs.

Signed-off-by: David Vernet <void@manifault.com>
---
 kernel/bpf/helpers.c | 86 +++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 81 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a6b04faed282..9d0307969977 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1700,20 +1700,96 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 	}
 }
 
-BTF_SET8_START(tracing_btf_ids)
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+		  "Global functions as their definitions will be in vmlinux BTF");
+
+/**
+ * bpf_task_acquire - Acquire a reference to a task. A task acquired by this
+ * kfunc which is not stored in a map as a kptr, must be released by calling
+ * bpf_task_release().
+ * @p: The task on which a reference is being acquired.
+ */
+__used noinline
+struct task_struct *bpf_task_acquire(struct task_struct *p)
+{
+	if (!p)
+		return NULL;
+
+	refcount_inc(&p->rcu_users);
+	return p;
+}
+
+/**
+ * bpf_task_kptr_get - Acquire a reference on a struct task_struct kptr. A task
+ * kptr acquired by this kfunc which is not subsequently stored in a map, must
+ * be released by calling bpf_task_release().
+ * @pp: A pointer to a task kptr on which a reference is being acquired.
+ */
+__used noinline
+struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
+{
+	struct task_struct *p;
+
+	rcu_read_lock();
+	p = READ_ONCE(*pp);
+	if (p && !refcount_inc_not_zero(&p->rcu_users))
+		p = NULL;
+	rcu_read_unlock();
+
+	return p;
+}
+
+/**
+ * bpf_task_release - Release the reference acquired on a struct task_struct *.
+ * If this kfunc is invoked in an RCU read region, the task_struct is
+ * guaranteed to not be freed until the current grace period has ended, even if
+ * its refcount drops to 0.
+ * @p: The task on which a reference is being released.
+ */
+__used noinline void bpf_task_release(struct task_struct *p)
+{
+	if (!p)
+		return;
+
+	put_task_struct_rcu_user(p);
+}
+
+__diag_pop();
+
+BTF_SET8_START(generic_kfunc_btf_ids)
 #ifdef CONFIG_KEXEC_CORE
 BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
 #endif
-BTF_SET8_END(tracing_btf_ids)
+BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RET_NULL | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_task_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE | KF_TRUSTED_ARGS)
+BTF_SET8_END(generic_kfunc_btf_ids)
 
-static const struct btf_kfunc_id_set tracing_kfunc_set = {
+static const struct btf_kfunc_id_set generic_kfunc_set = {
 	.owner = THIS_MODULE,
-	.set = &tracing_btf_ids,
+	.set = &generic_kfunc_btf_ids,
 };
 
+BTF_ID_LIST(generic_kfunc_dtor_ids)
+BTF_ID(struct, task_struct)
+BTF_ID(func, bpf_task_release)
+
 static int __init kfunc_init(void)
 {
-	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &tracing_kfunc_set);
+	int ret;
+	const struct btf_id_dtor_kfunc generic_kfunc_dtors[] = {
+		{
+			.btf_id = generic_kfunc_dtor_ids[0],
+			.kfunc_btf_id = generic_kfunc_dtor_ids[1]
+		},
+	};
+
+	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &generic_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &generic_kfunc_set);
+	return ret ?: register_btf_id_dtor_kfuncs(generic_kfunc_dtors,
+						  ARRAY_SIZE(generic_kfunc_dtors),
+						  THIS_MODULE);
 }
 
 late_initcall(kfunc_init);
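To illustrate how the three kfuncs compose, here is a minimal sketch of
the store / retrieve / release flow, mirroring the
tasks_kfunc_map_insert() and test_task_get_release() helpers in the
selftests patch that follows. The map value v is assumed to come from
bpf_map_lookup_elem() on a map whose value embeds a
struct task_struct __kptr_ref field:

struct map_value {
	struct task_struct __kptr_ref *task;
};

static int store_and_get(struct map_value *v, struct task_struct *p)
{
	struct task_struct *acquired, *old, *kptr;

	/* Acquire a reference and move it into the map. */
	acquired = bpf_task_acquire(p);
	if (!acquired)
		return 0;

	old = bpf_kptr_xchg(&v->task, acquired);
	if (old)
		/* A previously stored task was displaced; release it. */
		bpf_task_release(old);

	/* Later: take a new reference on the stored kptr. */
	kptr = bpf_task_kptr_get(&v->task);
	if (!kptr)
		return 0;

	bpf_task_release(kptr);
	return 0;
}

Note that bpf_kptr_xchg() transfers ownership of the acquired reference
into the map, while bpf_task_kptr_get() takes its own reference, which
is why both paths that return a non-NULL task must end in
bpf_task_release().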
From patchwork Fri Oct 14 20:28:52 2022
X-Patchwork-Submitter: David Vernet <void@manifault.com>
X-Patchwork-Id: 2820
From: David Vernet <void@manifault.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH v4 3/3] bpf/selftests: Add selftests for new task kfuncs
Date: Fri, 14 Oct 2022 15:28:52 -0500
Message-Id: <20221014202852.2491657-4-void@manifault.com>
In-Reply-To: <20221014202852.2491657-1-void@manifault.com>
References: <20221014202852.2491657-1-void@manifault.com>
A previous change added a series of kfuncs for storing struct
task_struct objects as referenced kptrs. This patch adds a new
task_kfunc test suite for validating their expected behavior.

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/bpf/DENYLIST.s390x    |   1 +
 .../selftests/bpf/prog_tests/task_kfunc.c     | 160 +++++++++
 .../selftests/bpf/progs/task_kfunc_common.h   |  83 +++++
 .../selftests/bpf/progs/task_kfunc_failure.c  | 315 ++++++++++++++++++
 .../selftests/bpf/progs/task_kfunc_success.c  | 132 ++++++++
 5 files changed, 691 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/task_kfunc.c
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_common.h
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_failure.c
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_success.c

diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index 520f12229b98..323a0e312b3d 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -52,6 +52,7 @@ skc_to_unix_sock                # could not attach BPF object unexpecte
 socket_cookie                   # prog_attach unexpected error: -524 (trampoline)
 stacktrace_build_id             # compare_map_keys stackid_hmap vs. stackmap err -2 errno 2 (?)
 tailcalls                       # tail_calls are not allowed in non-JITed programs with bpf-to-bpf calls (?)
+task_kfunc                      # JIT does not support calling kernel function
 task_local_storage              # failed to auto-attach program 'trace_exit_creds': -524 (trampoline)
 test_bpffs                      # bpffs test failed 255 (iterator)
 test_bprm_opts                  # failed to auto-attach program 'secure_exec': -524 (trampoline)
diff --git a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
new file mode 100644
index 000000000000..18492d010c45
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#define _GNU_SOURCE
+#include <sys/wait.h>
+#include <test_progs.h>
+#include <unistd.h>
+
+#include "task_kfunc_failure.skel.h"
+#include "task_kfunc_success.skel.h"
+
+static size_t log_buf_sz = 1 << 20; /* 1 MB */
+static char obj_log_buf[1048576];
+
+static struct task_kfunc_success *open_load_task_kfunc_skel(void)
+{
+	struct task_kfunc_success *skel;
+	int err;
+
+	skel = task_kfunc_success__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return NULL;
+
+	skel->bss->pid = getpid();
+
+	err = task_kfunc_success__load(skel);
+	if (!ASSERT_OK(err, "skel_load"))
+		goto cleanup;
+
+	return skel;
+
+cleanup:
+	task_kfunc_success__destroy(skel);
+	return NULL;
+}
+
+static void run_success_test(const char *prog_name)
+{
+	struct task_kfunc_success *skel;
+	int status;
+	pid_t child_pid;
+	struct bpf_program *prog;
+	struct bpf_link *link = NULL;
+
+	skel = open_load_task_kfunc_skel();
+	if (!ASSERT_OK_PTR(skel, "open_load_skel"))
+		return;
+
+	if (!ASSERT_OK(skel->bss->err, "pre_spawn_err"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+	if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+		goto cleanup;
+
+	link = bpf_program__attach(prog);
+	if (!ASSERT_OK_PTR(link, "attached_link"))
+		goto cleanup;
+
+	child_pid = fork();
+	if (!ASSERT_GT(child_pid, -1, "child_pid"))
+		goto cleanup;
+	if (child_pid == 0)
+		_exit(0);
+	waitpid(child_pid, &status, 0);
+
+	ASSERT_OK(skel->bss->err, "post_wait_err");
+
+cleanup:
+	bpf_link__destroy(link);
+	task_kfunc_success__destroy(skel);
+}
+
+static const char * const success_tests[] = {
+	"test_task_acquire_release_argument",
+	"test_task_acquire_release_current",
+	"test_task_acquire_leave_in_map",
+	"test_task_xchg_release",
+	"test_task_get_release",
+};
+
+static struct {
+	const char *prog_name;
+	const char *expected_err_msg;
+} failure_tests[] = {
+	{"task_kfunc_acquire_untrusted", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_fp", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_no_null_check", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_trusted_nested", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_null", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_unreleased", "Unreleased reference"},
+	{"task_kfunc_get_non_kptr_param", "arg#0 expected pointer to map value"},
+	{"task_kfunc_get_non_kptr_acquired", "arg#0 expected pointer to map value"},
+	{"task_kfunc_get_null", "arg#0 expected pointer to map value"},
+	{"task_kfunc_xchg_unreleased", "Unreleased reference"},
+	{"task_kfunc_get_unreleased", "Unreleased reference"},
+	{"task_kfunc_release_untrusted", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_fp", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_null", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_unacquired", "release kernel function bpf_task_release expects refcounted PTR_TO_BTF_ID"},
+};
+
+static void verify_fail(const char *prog_name, const char *expected_err_msg)
+{
+	LIBBPF_OPTS(bpf_object_open_opts, opts);
+	struct task_kfunc_failure *skel;
+	int err, i;
+
+	opts.kernel_log_buf = obj_log_buf;
+	opts.kernel_log_size = log_buf_sz;
+	opts.kernel_log_level = 1;
+
+	skel = task_kfunc_failure__open_opts(&opts);
+	if (!ASSERT_OK_PTR(skel, "task_kfunc_failure__open_opts"))
+		goto cleanup;
+
+	skel->bss->pid = getpid();
+
+	for (i = 0; i < ARRAY_SIZE(failure_tests); i++) {
+		struct bpf_program *prog;
+		const char *curr_name = failure_tests[i].prog_name;
+
+		prog = bpf_object__find_program_by_name(skel->obj, curr_name);
+		if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+			goto cleanup;
+
+		bpf_program__set_autoload(prog, !strcmp(curr_name, prog_name));
+	}
+
+	err = task_kfunc_failure__load(skel);
+	if (!ASSERT_ERR(err, "unexpected load success"))
+		goto cleanup;
+
+	if (!ASSERT_OK_PTR(strstr(obj_log_buf, expected_err_msg), "expected_err_msg")) {
+		fprintf(stderr, "Expected err_msg: %s\n", expected_err_msg);
+		fprintf(stderr, "Verifier output: %s\n", obj_log_buf);
+	}
+
+cleanup:
+	task_kfunc_failure__destroy(skel);
+}
+
+void test_task_kfunc(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(success_tests); i++) {
+		if (!test__start_subtest(success_tests[i]))
+			continue;
+
+		run_success_test(success_tests[i]);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(failure_tests); i++) {
+		if (!test__start_subtest(failure_tests[i].prog_name))
+			continue;
+
+		verify_fail(failure_tests[i].prog_name, failure_tests[i].expected_err_msg);
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_common.h b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
new file mode 100644
index 000000000000..f51257bdd695
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#ifndef _TASK_KFUNC_COMMON_H
+#define _TASK_KFUNC_COMMON_H
+
+#include <errno.h>
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+struct __tasks_kfunc_map_value {
+	struct task_struct __kptr_ref * task;
+};
+
+struct hash_map {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, int);
+	__type(value, struct __tasks_kfunc_map_value);
+	__uint(max_entries, 1);
+} __tasks_kfunc_map SEC(".maps");
+
+struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
+struct task_struct *bpf_task_kptr_get(struct task_struct **pp) __ksym;
+void bpf_task_release(struct task_struct *p) __ksym;
+
+#define TEST_NAME_SZ 128
+
+/* The pid of the test process used to determine if a newly created task is the test task. */
+int pid;
+
+static inline struct __tasks_kfunc_map_value *tasks_kfunc_map_value_lookup(struct task_struct *p)
+{
+	s32 pid;
+	long status;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &p->pid);
+	if (status)
+		return NULL;
+
+	return bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+}
+
+static inline int tasks_kfunc_map_insert(struct task_struct *p)
+{
+	struct __tasks_kfunc_map_value local, *v;
+	long status;
+	struct task_struct *acquired, *old;
+	s32 pid;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &p->pid);
+	if (status)
+		return status;
+
+	local.task = NULL;
+	status = bpf_map_update_elem(&__tasks_kfunc_map, &pid, &local, BPF_NOEXIST);
+	if (status)
+		return status;
+
+	v = bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+	if (!v) {
+		bpf_map_delete_elem(&__tasks_kfunc_map, &pid);
+		return status;
+	}
+
+	acquired = bpf_task_acquire(p);
+	old = bpf_kptr_xchg(&v->task, acquired);
+	if (old) {
+		bpf_task_release(old);
+		return -EEXIST;
+	}
+
+	return 0;
+}
+
+static inline bool is_test_kfunc_task(void)
+{
+	int cur_pid = bpf_get_current_pid_tgid() >> 32;
+
+	return pid == cur_pid;
+}
+
+#endif /* _TASK_KFUNC_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
new file mode 100644
index 000000000000..74d2f176a2de
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#include "task_kfunc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+/* Prototype for all of the program trace events below:
+ *
+ * TRACE_EVENT(task_newtask,
+ *	       TP_PROTO(struct task_struct *p, u64 clone_flags)
+ */
+
+static struct __tasks_kfunc_map_value *insert_lookup_task(struct task_struct *task)
+{
+	int status;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status)
+		return NULL;
+
+	return tasks_kfunc_map_value_lookup(task);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on an untrusted pointer. */
+	acquired = bpf_task_acquire(v->task);
+	if (!acquired)
+		return 0;
+
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_fp, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired, *stack_task = (struct task_struct *)&clone_flags;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a random frame pointer. */
+	acquired = bpf_task_acquire((struct task_struct *)&stack_task);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_no_null_check, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+	/* Can't release a bpf_task_acquire()'d task without a NULL check. */
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_trusted_nested, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a trusted pointer at a nonzero offset. */
+	acquired = bpf_task_acquire(task->last_wakee);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_null, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a NULL pointer. */
+	acquired = bpf_task_acquire(NULL);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+
+	/* Acquired task is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_non_kptr_param, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a non-kptr, even on a valid task. */
+	kptr = bpf_task_kptr_get(&task);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_non_kptr_acquired, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr, *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired)
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a non-kptr, even if it was acquired. */
+	kptr = bpf_task_kptr_get(&acquired);
+	bpf_task_release(acquired);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_null, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a NULL pointer. */
+	kptr = bpf_task_kptr_get(NULL);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	kptr = bpf_kptr_xchg(&v->task, NULL);
+	if (!kptr)
+		return 0;
+
+	/* Kptr retrieved from map is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	kptr = bpf_task_kptr_get(&v->task);
+	if (!kptr)
+		return 0;
+
+	/* Kptr acquired above is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_flags)
+{
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	/* Can't invoke bpf_task_release() on an untrusted pointer. */
+	bpf_task_release(v->task);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired = (struct task_struct *)&clone_flags;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot release random frame pointer. */
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags)
+{
+	struct __tasks_kfunc_map_value local, *v;
+	long status;
+	struct task_struct *acquired, *old;
+	s32 pid;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &task->pid);
+	if (status)
+		return 0;
+
+	local.task = NULL;
+	status = bpf_map_update_elem(&__tasks_kfunc_map, &pid, &local, BPF_NOEXIST);
+	if (status)
+		return status;
+
+	v = bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+	if (!v)
+		return status;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired)
+		return 0;
+
+	old = bpf_kptr_xchg(&v->task, acquired);
+
+	/* old cannot be passed to bpf_task_release() without a NULL check. */
+	bpf_task_release(old);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_unacquired, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot release trusted task pointer which was not acquired. */
+	bpf_task_release(task);
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_success.c b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
new file mode 100644
index 000000000000..8d5c05b41d53
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
@@ -0,0 +1,132 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#include "task_kfunc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int err;
+
+/* Prototype for all of the program trace events below:
+ *
+ * TRACE_EVENT(task_newtask,
+ *	       TP_PROTO(struct task_struct *p, u64 clone_flags)
+ */
+
+static int test_acquire_release(struct task_struct *task)
+{
+	struct task_struct *acquired;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired) {
+		err = 1;
+		return 0;
+	}
+
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_release_argument, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	return test_acquire_release(task);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_release_current, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	return test_acquire_release(bpf_get_current_task_btf());
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_leave_in_map, struct task_struct *task, u64 clone_flags)
+{
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status)
+		err = 1;
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_xchg_release, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status) {
+		err = 1;
+		return 0;
+	}
+
+	v = tasks_kfunc_map_value_lookup(task);
+	if (!v) {
+		err = 2;
+		return 0;
+	}
+
+	kptr = bpf_kptr_xchg(&v->task, NULL);
+	if (!kptr) {
+		err = 3;
+		return 0;
+	}
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_get_release, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status) {
+		err = 1;
+		return 0;
+	}
+
+	v = tasks_kfunc_map_value_lookup(task);
+	if (!v) {
+		err = 2;
+		return 0;
+	}
+
+	kptr = bpf_task_kptr_get(&v->task);
+	if (!kptr) {
+		err = 3;
+		return 0;
+	}
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
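Assuming a kernel built with this series applied (and a JIT that
supports calling kernel functions; the DENYLIST.s390x entry above skips
the suite where that support is missing), the new tests can be run with
the standard BPF selftests runner:

  $ cd tools/testing/selftests/bpf
  $ make
  $ sudo ./test_progs -t task_kfunc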