From patchwork Fri Oct 14 21:21:31 2022
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
Date: Fri, 14 Oct 2022 16:21:31 -0500
Message-Id: <20221014212133.2520531-2-void@manifault.com>
In-Reply-To: <20221014212133.2520531-1-void@manifault.com>
References: <20221014212133.2520531-1-void@manifault.com>

Kfuncs currently support specifying the KF_TRUSTED_ARGS flag to signal to the
verifier that it should enforce that a BPF program passes it a "safe", trusted
pointer. Currently, "safe" means that the pointer is either PTR_TO_CTX, or is
refcounted. There may be cases, however, where the kernel passes a BPF program
a safe / trusted pointer to an object that the BPF program wishes to use as a
kptr, but because the object does not yet have a ref_obj_id from the
perspective of the verifier, the program would be unable to pass it to a
KF_ACQUIRE | KF_TRUSTED_ARGS kfunc.

The solution is to expand the set of pointers that are considered trusted
according to KF_TRUSTED_ARGS, so that programs can invoke kfuncs with these
pointers without getting rejected by the verifier.

There is already a PTR_UNTRUSTED flag that is set in some scenarios, such as
when a BPF program reads a kptr directly from a map without performing a
bpf_kptr_xchg() call. These pointers of course can and should be rejected by
the verifier. Unfortunately, however, PTR_UNTRUSTED does not cover all of the
cases that need to be addressed to adequately protect kfuncs. Specifically,
pointers obtained by a BPF program "walking" a struct are _not_ considered
PTR_UNTRUSTED according to BPF. For example, say that we were to add a kfunc
called bpf_task_acquire(), with KF_ACQUIRE | KF_TRUSTED_ARGS, to acquire a
struct task_struct *. If we only used PTR_UNTRUSTED to signal that a task was
unsafe to pass to a kfunc, the verifier would mistakenly allow the following
unsafe BPF program to be loaded:

SEC("tp_btf/task_newtask")
int BPF_PROG(unsafe_acquire_task, struct task_struct *task, u64 clone_flags)
{
	struct task_struct *acquired, *nested;

	nested = task->last_wakee;

	/* Would not be rejected by the verifier. */
	acquired = bpf_task_acquire(nested);
	if (!acquired)
		return 0;

	bpf_task_release(acquired);
	return 0;
}

To address this, this patch defines a new type flag called PTR_NESTED which
tracks whether a PTR_TO_BTF_ID pointer was retrieved from walking a struct. A
pointer passed directly from the kernel begins with (PTR_NESTED & type) == 0,
meaning of course that it is not nested. Any pointer received from walking
that object, however, inherits the flag and becomes a nested pointer.

With that flag, this patch also updates btf_check_func_arg_match() to only
flag a PTR_TO_BTF_ID object as requiring a refcount if it has any type
modifiers (which of course includes both PTR_UNTRUSTED and PTR_NESTED).
Otherwise, the pointer passes this check and continues on to the other checks
in btf_check_func_arg_match().

A subsequent patch will add kfuncs for storing a task as a kptr, and then
another patch will validate this feature by ensuring that the verifier rejects
a kfunc invocation with a nested pointer.

Signed-off-by: David Vernet
---
 include/linux/bpf.h                          |  6 ++++++
 kernel/bpf/btf.c                             | 11 ++++++++++-
 kernel/bpf/verifier.c                        | 12 +++++++++++-
 tools/testing/selftests/bpf/verifier/calls.c |  4 ++--
 4 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9e7d46d16032..b624024edb4e 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -457,6 +457,12 @@ enum bpf_type_flag {
 	/* Size is known at compile time. */
 	MEM_FIXED_SIZE		= BIT(10 + BPF_BASE_TYPE_BITS),
 
+	/* PTR was obtained from walking a struct. This is used with
+	 * PTR_TO_BTF_ID to determine whether the pointer is safe to pass to a
+	 * kfunc with KF_TRUSTED_ARGS.
+	 */
+	PTR_NESTED		= BIT(11 + BPF_BASE_TYPE_BITS),
+
 	__BPF_TYPE_FLAG_MAX,
 	__BPF_TYPE_LAST_FLAG	= __BPF_TYPE_FLAG_MAX - 1,
 };

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index eba603cec2c5..3d7bad11b10b 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6333,8 +6333,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
 		/* Check if argument must be a referenced pointer, args + i has
 		 * been verified to be a pointer (after skipping modifiers).
 		 * PTR_TO_CTX is ok without having non-zero ref_obj_id.
+		 *
+		 * All object pointers must be refcounted, other than:
+		 * - PTR_TO_CTX
+		 * - Trusted pointers (i.e. pointers with no type modifiers)
 		 */
-		if (is_kfunc && trusted_args && (obj_ptr && reg->type != PTR_TO_CTX) && !reg->ref_obj_id) {
+		if (is_kfunc &&
+		    trusted_args &&
+		    obj_ptr &&
+		    base_type(reg->type) != PTR_TO_CTX &&
+		    type_flag(reg->type) &&
+		    !reg->ref_obj_id) {
 			bpf_log(log, "R%d must be referenced\n", regno);
 			return -EINVAL;
 		}

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6f6d2d511c06..d16a08ca507b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -581,6 +581,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
 		strncpy(prefix, "user_", 32);
 	if (type & MEM_PERCPU)
 		strncpy(prefix, "percpu_", 32);
+	if (type & PTR_NESTED)
+		strncpy(prefix, "nested_", 32);
 	if (type & PTR_UNTRUSTED)
 		strncpy(prefix, "untrusted_", 32);
 
@@ -4558,6 +4560,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 	if (type_flag(reg->type) & PTR_UNTRUSTED)
 		flag |= PTR_UNTRUSTED;
 
+	/* All pointers obtained by walking a struct are nested. */
+	flag |= PTR_NESTED;
+
 	if (atype == BPF_READ && value_regno >= 0)
 		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
 
@@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
 static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
 static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
 static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
-static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
+static const struct bpf_reg_types btf_ptr_types = {
+	.types = {
+		PTR_TO_BTF_ID,
+		PTR_TO_BTF_ID | PTR_NESTED
+	},
+};
 static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } };
 static const struct bpf_reg_types percpu_btf_ptr_types = { .types = { PTR_TO_BTF_ID | MEM_PERCPU } };
 static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };

diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index e1a937277b54..496c29b1a298 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -181,7 +181,7 @@
 	},
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "negative offset ptr_ ptr R1 off=-4 disallowed",
+	.errstr = "negative offset nested_ptr_ ptr R1 off=-4 disallowed",
 },
 {
 	"calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset",
@@ -243,7 +243,7 @@
 	},
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "R1 must be referenced",
+	.errstr = "arg#0 pointer type STRUCT prog_test_ref_kfunc must point to scalar",
 },
 {
 	"calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
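
As an illustrative aside, not part of the patch itself: the safe counterpart
of the unsafe program in the commit message simply passes the tracepoint
argument straight through. The sketch below assumes the bpf_task_acquire() /
bpf_task_release() kfuncs added in patch 2/3 of this series; patch 3/3
exercises this same pattern in its selftests.

struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
void bpf_task_release(struct task_struct *p) __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(safe_acquire_task, struct task_struct *task, u64 clone_flags)
{
	struct task_struct *acquired;

	/* task comes directly from the kernel, so it carries no type
	 * modifiers: (type & PTR_NESTED) == 0, and the verifier accepts it
	 * as a trusted argument.
	 */
	acquired = bpf_task_acquire(task);
	if (!acquired)
		return 0;

	bpf_task_release(acquired);
	return 0;
}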
From patchwork Fri Oct 14 21:21:32 2022
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH v5 2/3] bpf: Add kfuncs for storing struct task_struct * as a kptr
Date: Fri, 14 Oct 2022 16:21:32 -0500
Message-Id: <20221014212133.2520531-3-void@manifault.com>
In-Reply-To: <20221014212133.2520531-1-void@manifault.com>
References: <20221014212133.2520531-1-void@manifault.com>

Now that BPF supports adding new kernel functions with kfuncs, and storing
kernel objects in maps with kptrs, we can add a set of kfuncs which allow
struct task_struct objects to be stored in maps as referenced kptrs.

The possible use cases for doing this are plentiful. During tracing, for
example, it would be useful to be able to collect some tasks that performed a
certain operation, and then periodically summarize who they are, which cgroup
they're in, how much CPU time they've utilized, etc. In order to enable this,
this patch adds three new kfuncs:

struct task_struct *bpf_task_acquire(struct task_struct *p);
struct task_struct *bpf_task_kptr_get(struct task_struct **pp);
void bpf_task_release(struct task_struct *p);

A follow-on patch will add selftests validating these kfuncs.

Signed-off-by: David Vernet
---
 kernel/bpf/helpers.c | 86 +++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 81 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a6b04faed282..9d0307969977 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1700,20 +1700,96 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 	}
 }
 
-BTF_SET8_START(tracing_btf_ids)
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+		  "Global functions as their definitions will be in vmlinux BTF");
+
+/**
+ * bpf_task_acquire - Acquire a reference to a task. A task acquired by this
+ * kfunc which is not stored in a map as a kptr must be released by calling
+ * bpf_task_release().
+ * @p: The task on which a reference is being acquired.
+ */
+__used noinline
+struct task_struct *bpf_task_acquire(struct task_struct *p)
+{
+	if (!p)
+		return NULL;
+
+	refcount_inc(&p->rcu_users);
+	return p;
+}
+
+/**
+ * bpf_task_kptr_get - Acquire a reference on a struct task_struct kptr. A task
+ * kptr acquired by this kfunc which is not subsequently stored in a map must
+ * be released by calling bpf_task_release().
+ * @pp: A pointer to a task kptr on which a reference is being acquired.
+ */
+__used noinline
+struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
+{
+	struct task_struct *p;
+
+	rcu_read_lock();
+	p = READ_ONCE(*pp);
+	if (p && !refcount_inc_not_zero(&p->rcu_users))
+		p = NULL;
+	rcu_read_unlock();
+
+	return p;
+}
+
+/**
+ * bpf_task_release - Release the reference acquired on a struct task_struct *.
+ * If this kfunc is invoked in an RCU read region, the task_struct is
+ * guaranteed to not be freed until the current grace period has ended, even if
+ * its refcount drops to 0.
+ * @p: The task on which a reference is being released.
+ */
+__used noinline void bpf_task_release(struct task_struct *p)
+{
+	if (!p)
+		return;
+
+	put_task_struct_rcu_user(p);
+}
+
+__diag_pop();
+
+BTF_SET8_START(generic_kfunc_btf_ids)
 #ifdef CONFIG_KEXEC_CORE
 BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
 #endif
-BTF_SET8_END(tracing_btf_ids)
+BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RET_NULL | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_task_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE | KF_TRUSTED_ARGS)
+BTF_SET8_END(generic_kfunc_btf_ids)
 
-static const struct btf_kfunc_id_set tracing_kfunc_set = {
+static const struct btf_kfunc_id_set generic_kfunc_set = {
 	.owner = THIS_MODULE,
-	.set = &tracing_btf_ids,
+	.set = &generic_kfunc_btf_ids,
 };
 
+BTF_ID_LIST(generic_kfunc_dtor_ids)
+BTF_ID(struct, task_struct)
+BTF_ID(func, bpf_task_release)
+
 static int __init kfunc_init(void)
 {
-	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &tracing_kfunc_set);
+	int ret;
+	const struct btf_id_dtor_kfunc generic_kfunc_dtors[] = {
+		{
+			.btf_id = generic_kfunc_dtor_ids[0],
+			.kfunc_btf_id = generic_kfunc_dtor_ids[1]
+		},
+	};
+
+	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &generic_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &generic_kfunc_set);
+	return ret ?: register_btf_id_dtor_kfuncs(generic_kfunc_dtors,
+						  ARRAY_SIZE(generic_kfunc_dtors),
+						  THIS_MODULE);
 }
 
 late_initcall(kfunc_init);
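
As an illustrative aside, not part of the patch itself: a minimal sketch of
the stash-in-a-map pattern these kfuncs enable. The map and program names
here are hypothetical; the selftests added in patch 3/3 exercise the same
flow.

#include <vmlinux.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_helpers.h>

struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
void bpf_task_release(struct task_struct *p) __ksym;

struct task_map_value {
	struct task_struct __kptr_ref *task;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__type(key, int);
	__type(value, struct task_map_value);
	__uint(max_entries, 1);
} task_map SEC(".maps");

SEC("tp_btf/task_newtask")
int BPF_PROG(stash_task, struct task_struct *task, u64 clone_flags)
{
	struct task_map_value *v;
	struct task_struct *acquired, *old;
	int key = 0;

	v = bpf_map_lookup_elem(&task_map, &key);
	if (!v)
		return 0;

	/* Take a reference on the task, then move that reference into the
	 * map with bpf_kptr_xchg().
	 */
	acquired = bpf_task_acquire(task);
	if (!acquired)
		return 0;

	old = bpf_kptr_xchg(&v->task, acquired);
	if (old) {
		/* Drop the reference on whatever task was stored before. */
		bpf_task_release(old);
	}

	return 0;
}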
From patchwork Fri Oct 14 21:21:33 2022
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs
Date: Fri, 14 Oct 2022 16:21:33 -0500
Message-Id: <20221014212133.2520531-4-void@manifault.com>
In-Reply-To: <20221014212133.2520531-1-void@manifault.com>
References: <20221014212133.2520531-1-void@manifault.com>

A previous change added a series of kfuncs for storing struct task_struct
objects as referenced kptrs. This patch adds a new task_kfunc test suite for
validating their expected behavior.

Signed-off-by: David Vernet
---
 tools/testing/selftests/bpf/DENYLIST.s390x    |   1 +
 .../selftests/bpf/prog_tests/task_kfunc.c     | 160 +++++++++
 .../selftests/bpf/progs/task_kfunc_common.h   |  83 +++++
 .../selftests/bpf/progs/task_kfunc_failure.c  | 315 ++++++++++++++++++
 .../selftests/bpf/progs/task_kfunc_success.c  | 132 ++++++++
 5 files changed, 691 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/task_kfunc.c
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_common.h
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_failure.c
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_success.c

diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index 520f12229b98..323a0e312b3d 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -52,6 +52,7 @@ skc_to_unix_sock        # could not attach BPF object unexpecte
 socket_cookie           # prog_attach unexpected error: -524 (trampoline)
 stacktrace_build_id     # compare_map_keys stackid_hmap vs. stackmap err -2 errno 2 (?)
 tailcalls               # tail_calls are not allowed in non-JITed programs with bpf-to-bpf calls (?)
+task_kfunc              # JIT does not support calling kernel function
 task_local_storage      # failed to auto-attach program 'trace_exit_creds': -524 (trampoline)
 test_bpffs              # bpffs test failed 255 (iterator)
 test_bprm_opts          # failed to auto-attach program 'secure_exec': -524 (trampoline)

diff --git a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
new file mode 100644
index 000000000000..18492d010c45
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#define _GNU_SOURCE
+#include <sys/wait.h>
+#include <test_progs.h>
+#include <unistd.h>
+
+#include "task_kfunc_failure.skel.h"
+#include "task_kfunc_success.skel.h"
+
+static size_t log_buf_sz = 1 << 20; /* 1 MB */
+static char obj_log_buf[1048576];
+
+static struct task_kfunc_success *open_load_task_kfunc_skel(void)
+{
+	struct task_kfunc_success *skel;
+	int err;
+
+	skel = task_kfunc_success__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return NULL;
+
+	skel->bss->pid = getpid();
+
+	err = task_kfunc_success__load(skel);
+	if (!ASSERT_OK(err, "skel_load"))
+		goto cleanup;
+
+	return skel;
+
+cleanup:
+	task_kfunc_success__destroy(skel);
+	return NULL;
+}
+
+static void run_success_test(const char *prog_name)
+{
+	struct task_kfunc_success *skel;
+	int status;
+	pid_t child_pid;
+	struct bpf_program *prog;
+	struct bpf_link *link = NULL;
+
+	skel = open_load_task_kfunc_skel();
+	if (!ASSERT_OK_PTR(skel, "open_load_skel"))
+		return;
+
+	if (!ASSERT_OK(skel->bss->err, "pre_spawn_err"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+	if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+		goto cleanup;
+
+	link = bpf_program__attach(prog);
+	if (!ASSERT_OK_PTR(link, "attached_link"))
+		goto cleanup;
+
+	child_pid = fork();
+	if (!ASSERT_GT(child_pid, -1, "child_pid"))
+		goto cleanup;
+	if (child_pid == 0)
+		_exit(0);
+	waitpid(child_pid, &status, 0);
+
+	ASSERT_OK(skel->bss->err, "post_wait_err");
+
+cleanup:
+	bpf_link__destroy(link);
+	task_kfunc_success__destroy(skel);
+}
+
+static const char * const success_tests[] = {
+	"test_task_acquire_release_argument",
+	"test_task_acquire_release_current",
+	"test_task_acquire_leave_in_map",
+	"test_task_xchg_release",
+	"test_task_get_release",
+};
+
+static struct {
+	const char *prog_name;
+	const char *expected_err_msg;
+} failure_tests[] = {
+	{"task_kfunc_acquire_untrusted", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_fp", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_no_null_check", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_trusted_nested", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_null", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_unreleased", "Unreleased reference"},
+	{"task_kfunc_get_non_kptr_param", "arg#0 expected pointer to map value"},
+	{"task_kfunc_get_non_kptr_acquired", "arg#0 expected pointer to map value"},
+	{"task_kfunc_get_null", "arg#0 expected pointer to map value"},
+	{"task_kfunc_xchg_unreleased", "Unreleased reference"},
+	{"task_kfunc_get_unreleased", "Unreleased reference"},
+	{"task_kfunc_release_untrusted", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_fp", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_null", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_unacquired", "release kernel function bpf_task_release expects refcounted PTR_TO_BTF_ID"},
+};
+
+static void verify_fail(const char *prog_name, const char *expected_err_msg)
+{
+	LIBBPF_OPTS(bpf_object_open_opts, opts);
+	struct task_kfunc_failure *skel;
+	int err, i;
+
+	opts.kernel_log_buf = obj_log_buf;
+	opts.kernel_log_size = log_buf_sz;
+	opts.kernel_log_level = 1;
+
+	skel = task_kfunc_failure__open_opts(&opts);
+	if (!ASSERT_OK_PTR(skel, "task_kfunc_failure__open_opts"))
+		goto cleanup;
+
+	skel->bss->pid = getpid();
+
+	for (i = 0; i < ARRAY_SIZE(failure_tests); i++) {
+		struct bpf_program *prog;
+		const char *curr_name = failure_tests[i].prog_name;
+
+		prog = bpf_object__find_program_by_name(skel->obj, curr_name);
+		if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+			goto cleanup;
+
+		bpf_program__set_autoload(prog, !strcmp(curr_name, prog_name));
+	}
+
+	err = task_kfunc_failure__load(skel);
+	if (!ASSERT_ERR(err, "unexpected load success"))
+		goto cleanup;
+
+	if (!ASSERT_OK_PTR(strstr(obj_log_buf, expected_err_msg), "expected_err_msg")) {
+		fprintf(stderr, "Expected err_msg: %s\n", expected_err_msg);
+		fprintf(stderr, "Verifier output: %s\n", obj_log_buf);
+	}
+
+cleanup:
+	task_kfunc_failure__destroy(skel);
+}
+
+void test_task_kfunc(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(success_tests); i++) {
+		if (!test__start_subtest(success_tests[i]))
+			continue;
+
+		run_success_test(success_tests[i]);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(failure_tests); i++) {
+		if (!test__start_subtest(failure_tests[i].prog_name))
+			continue;
+
+		verify_fail(failure_tests[i].prog_name, failure_tests[i].expected_err_msg);
+	}
+}

diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_common.h b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
new file mode 100644
index 000000000000..f51257bdd695
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#ifndef _TASK_KFUNC_COMMON_H
+#define _TASK_KFUNC_COMMON_H
+
+#include <errno.h>
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+struct __tasks_kfunc_map_value {
+	struct task_struct __kptr_ref *task;
+};
+
+struct hash_map {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, int);
+	__type(value, struct __tasks_kfunc_map_value);
+	__uint(max_entries, 1);
+} __tasks_kfunc_map SEC(".maps");
+
+struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
+struct task_struct *bpf_task_kptr_get(struct task_struct **pp) __ksym;
+void bpf_task_release(struct task_struct *p) __ksym;
+
+#define TEST_NAME_SZ 128
+
+/* The pid of the test process used to determine if a newly created task is
+ * the test task.
+ */
+int pid;
+
+static inline struct __tasks_kfunc_map_value *tasks_kfunc_map_value_lookup(struct task_struct *p)
+{
+	s32 pid;
+	long status;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &p->pid);
+	if (status)
+		return NULL;
+
+	return bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+}
+
+static inline int tasks_kfunc_map_insert(struct task_struct *p)
+{
+	struct __tasks_kfunc_map_value local, *v;
+	long status;
+	struct task_struct *acquired, *old;
+	s32 pid;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &p->pid);
+	if (status)
+		return status;
+
+	local.task = NULL;
+	status = bpf_map_update_elem(&__tasks_kfunc_map, &pid, &local, BPF_NOEXIST);
+	if (status)
+		return status;
+
+	v = bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+	if (!v) {
+		bpf_map_delete_elem(&__tasks_kfunc_map, &pid);
+		return status;
+	}
+
+	acquired = bpf_task_acquire(p);
+	old = bpf_kptr_xchg(&v->task, acquired);
+	if (old) {
+		bpf_task_release(old);
+		return -EEXIST;
+	}
+
+	return 0;
+}
+
+static inline bool is_test_kfunc_task(void)
+{
+	int cur_pid = bpf_get_current_pid_tgid() >> 32;
+
+	return pid == cur_pid;
+}
+
+#endif /* _TASK_KFUNC_COMMON_H */

diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
new file mode 100644
index 000000000000..74d2f176a2de
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#include "task_kfunc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+/* Prototype for all of the program trace events below:
+ *
+ * TRACE_EVENT(task_newtask,
+ *	TP_PROTO(struct task_struct *p, u64 clone_flags)
+ */
+
+static struct __tasks_kfunc_map_value *insert_lookup_task(struct task_struct *task)
+{
+	int status;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status)
+		return NULL;
+
+	return tasks_kfunc_map_value_lookup(task);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on an untrusted pointer. */
+	acquired = bpf_task_acquire(v->task);
+	if (!acquired)
+		return 0;
+
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_fp, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired, *stack_task = (struct task_struct *)&clone_flags;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a random frame pointer. */
+	acquired = bpf_task_acquire((struct task_struct *)&stack_task);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_no_null_check, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+	/* Can't release a bpf_task_acquire()'d task without a NULL check. */
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_trusted_nested, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a trusted pointer at a nonzero offset. */
+	acquired = bpf_task_acquire(task->last_wakee);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_null, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a NULL pointer. */
+	acquired = bpf_task_acquire(NULL);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+
+	/* Acquired task is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_non_kptr_param, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a non-kptr, even on a valid task. */
+	kptr = bpf_task_kptr_get(&task);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_non_kptr_acquired, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr, *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired)
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a non-kptr, even if it was acquired. */
+	kptr = bpf_task_kptr_get(&acquired);
+	bpf_task_release(acquired);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_null, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a NULL pointer. */
+	kptr = bpf_task_kptr_get(NULL);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	kptr = bpf_kptr_xchg(&v->task, NULL);
+	if (!kptr)
+		return 0;
+
+	/* Kptr retrieved from map is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	kptr = bpf_task_kptr_get(&v->task);
+	if (!kptr)
+		return 0;
+
+	/* Kptr acquired above is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_flags)
+{
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	/* Can't invoke bpf_task_release() on an untrusted pointer. */
+	bpf_task_release(v->task);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired = (struct task_struct *)&clone_flags;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot release random frame pointer. */
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags)
+{
+	struct __tasks_kfunc_map_value local, *v;
+	long status;
+	struct task_struct *acquired, *old;
+	s32 pid;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &task->pid);
+	if (status)
+		return 0;
+
+	local.task = NULL;
+	status = bpf_map_update_elem(&__tasks_kfunc_map, &pid, &local, BPF_NOEXIST);
+	if (status)
+		return status;
+
+	v = bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+	if (!v)
+		return status;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired)
+		return 0;
+
+	old = bpf_kptr_xchg(&v->task, acquired);
+
+	/* old cannot be passed to bpf_task_release() without a NULL check. */
+	bpf_task_release(old);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_unacquired, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot release trusted task pointer which was not acquired. */
+	bpf_task_release(task);
+
+	return 0;
+}

diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_success.c b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
new file mode 100644
index 000000000000..8d5c05b41d53
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
@@ -0,0 +1,132 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#include "task_kfunc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int err;
+
+/* Prototype for all of the program trace events below:
+ *
+ * TRACE_EVENT(task_newtask,
+ *	TP_PROTO(struct task_struct *p, u64 clone_flags)
+ */
+
+static int test_acquire_release(struct task_struct *task)
+{
+	struct task_struct *acquired;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired) {
+		err = 1;
+		return 0;
+	}
+
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_release_argument, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	return test_acquire_release(task);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_release_current, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	return test_acquire_release(bpf_get_current_task_btf());
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_leave_in_map, struct task_struct *task, u64 clone_flags)
+{
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status)
+		err = 1;
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_xchg_release, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status) {
+		err = 1;
+		return 0;
+	}
+
+	v = tasks_kfunc_map_value_lookup(task);
+	if (!v) {
+		err = 2;
+		return 0;
+	}
+
+	kptr = bpf_kptr_xchg(&v->task, NULL);
+	if (!kptr) {
+		err = 3;
+		return 0;
+	}
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_get_release, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status) {
+		err = 1;
+		return 0;
+	}
+
+	v = tasks_kfunc_map_value_lookup(task);
+	if (!v) {
+		err = 2;
+		return 0;
+	}
+
+	kptr = bpf_task_kptr_get(&v->task);
+	if (!kptr) {
+		err = 3;
+		return 0;
+	}
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
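
As a usage note: the new suite is driven by the standard test_progs runner.
Assuming a kernel and selftests tree built with this series applied, something
like the following should run all of the task_kfunc subtests above:

  $ cd tools/testing/selftests/bpf
  $ make
  $ sudo ./test_progs -t task_kfunc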