From patchwork Fri Mar 31 19:57:31 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 77957
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH bpf-next v2 1/3] bpf: Make struct task_struct an RCU-safe type
Date: Fri, 31 Mar 2023 14:57:31 -0500
Message-Id: <20230331195733.699708-2-void@manifault.com>
In-Reply-To: <20230331195733.699708-1-void@manifault.com>
References: <20230331195733.699708-1-void@manifault.com>

struct task_struct objects are a bit interesting in terms of how their
lifetime is protected by refcounts. task structs have two refcount
fields:

1. refcount_t usage: Protects the memory backing the task struct. When
   this refcount drops to 0, the task is immediately freed, without
   waiting for an RCU grace period to elapse. This is the field that
   most callers in the kernel currently use to ensure that a task
   remains valid while it's being referenced, and is what's currently
   tracked with bpf_task_acquire() and bpf_task_release().

2. refcount_t rcu_users: A refcount field which, when it drops to 0,
   schedules an RCU callback that drops a reference held on the 'usage'
   field above (which is acquired when the task is first created). This
   field therefore provides a form of RCU protection on the task by
   ensuring that at least one 'usage' refcount will be held until an
   RCU grace period has elapsed. The qualifier "a form of" is important
   here, as a task can remain valid after task->rcu_users has dropped
   to 0 and the subsequent RCU grace period has elapsed.

In terms of BPF, we want to use task->rcu_users to protect tasks that
function as referenced kptrs, and to allow tasks stored as referenced
kptrs in maps to be accessed with RCU protection.

Let's first determine whether we can safely use task->rcu_users to
protect tasks stored in maps. All of the bpf_task* kfuncs can only be
called from tracepoint, struct_ops, or BPF_PROG_TYPE_SCHED_CLS program
types. For tracepoint and struct_ops programs, the struct task_struct
passed to a program handler will always be trusted, so it will always
be safe to call bpf_task_acquire() with any task passed to a program.
Note, however, that we must update bpf_task_acquire() to be
KF_RET_NULL, as it is possible that the task has exited by the time the
program is invoked, even if the pointer is still currently valid
because the main kernel holds a task->usage refcount. For
BPF_PROG_TYPE_SCHED_CLS, tasks should never be passed as an argument to
any of the program handlers, so it should not be relevant.

The second question is whether it's safe to use RCU to access a task
that was acquired with bpf_task_acquire(), and stored in a map. Because
bpf_task_acquire() now uses task->rcu_users, it follows that if the
task is present in the map, it must have had at least one
task->rcu_users refcount by the time the current RCU read-side critical
section was started. Therefore, it's safe to access that task until the
end of the current RCU read-side critical section.

With all that said, this patch makes struct task_struct an
RCU-protected object. In doing so, we also change bpf_task_acquire() to
be KF_ACQUIRE | KF_RCU | KF_RET_NULL, and adjust any selftests as
necessary. A subsequent patch will remove bpf_task_kptr_get() and
bpf_task_acquire_not_zero().
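
To make the new contract concrete, the following is a minimal
illustrative sketch (not part of this patch; the program name is
hypothetical) of how a tracepoint program is expected to use the kfunc
now that it is KF_RET_NULL, mirroring the selftest updates below:

  /* Illustrative sketch only -- not part of this patch. */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
  void bpf_task_release(struct task_struct *p) __ksym;

  char _license[] SEC("license") = "GPL";

  SEC("tp_btf/task_newtask")
  int BPF_PROG(example_task_acquire, struct task_struct *task, u64 clone_flags)
  {
      struct task_struct *acquired;

      /* bpf_task_acquire() is KF_RET_NULL: the task may have already
       * exited, so the result must be NULL-checked before use.
       */
      acquired = bpf_task_acquire(task);
      if (!acquired)
          return 0;

      /* Typically the reference would be stored in a map as a kptr;
       * here it is simply released again.
       */
      bpf_task_release(acquired);

      return 0;
  }
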
Signed-off-by: David Vernet
---
 kernel/bpf/helpers.c                          | 11 ++-
 kernel/bpf/verifier.c                         |  1 +
 .../selftests/bpf/prog_tests/task_kfunc.c     |  1 +
 .../selftests/bpf/progs/task_kfunc_common.h   |  5 ++
 .../selftests/bpf/progs/task_kfunc_failure.c  | 80 ++++++++++++++++---
 .../selftests/bpf/progs/task_kfunc_success.c  | 26 +++++-
 6 files changed, 108 insertions(+), 16 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 8980f6859443..e71a4a54ce99 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2013,7 +2014,9 @@ __bpf_kfunc struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root)
  */
 __bpf_kfunc struct task_struct *bpf_task_acquire(struct task_struct *p)
 {
-    return get_task_struct(p);
+    if (refcount_inc_not_zero(&p->rcu_users))
+        return p;
+    return NULL;
 }

 /**
@@ -2089,7 +2092,7 @@ __bpf_kfunc struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
  */
 __bpf_kfunc void bpf_task_release(struct task_struct *p)
 {
-    put_task_struct(p);
+    put_task_struct_rcu_user(p);
 }

 #ifdef CONFIG_CGROUPS
@@ -2199,7 +2202,7 @@ __bpf_kfunc struct task_struct *bpf_task_from_pid(s32 pid)
     rcu_read_lock();
     p = find_task_by_pid_ns(pid, &init_pid_ns);
     if (p)
-        bpf_task_acquire(p);
+        p = bpf_task_acquire(p);
     rcu_read_unlock();

     return p;
@@ -2371,7 +2374,7 @@ BTF_ID_FLAGS(func, bpf_list_push_front)
 BTF_ID_FLAGS(func, bpf_list_push_back)
 BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
-BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_acquire_not_zero, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 52738f9dcb15..92ae4e8ab87b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4600,6 +4600,7 @@ BTF_SET_START(rcu_protected_types)
 BTF_ID(struct, prog_test_ref_kfunc)
 BTF_ID(struct, cgroup)
 BTF_ID(struct, bpf_cpumask)
+BTF_ID(struct, task_struct)
 BTF_SET_END(rcu_protected_types)

 static bool rcu_protected_object(const struct btf *btf, u32 btf_id)
diff --git a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
index f79fa5bc9a8d..330133ece3f6 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
@@ -78,6 +78,7 @@ static const char * const success_tests[] = {
     "test_task_from_pid_arg",
     "test_task_from_pid_current",
     "test_task_from_pid_invalid",
+    "task_kfunc_acquire_trusted_walked",
 };

 void test_task_kfunc(void)
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_common.h b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
index 4c2a4b0e3a25..bf0d1da9aff8 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_common.h
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
@@ -24,6 +24,8 @@ struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
 struct task_struct *bpf_task_kptr_get(struct task_struct **pp) __ksym;
 void bpf_task_release(struct task_struct *p) __ksym;
 struct task_struct *bpf_task_from_pid(s32 pid) __ksym;
+void bpf_rcu_read_lock(void) __ksym;
+void bpf_rcu_read_unlock(void) __ksym;

 static inline struct __tasks_kfunc_map_value *tasks_kfunc_map_value_lookup(struct task_struct *p)
 {
@@ -60,6 +62,9 @@ static inline int tasks_kfunc_map_insert(struct task_struct *p)
     }

     acquired = bpf_task_acquire(p);
+    if (!acquired)
+        return -ENOENT;
+
     old = bpf_kptr_xchg(&v->task, acquired);
     if (old) {
         bpf_task_release(old);
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
index 2c374a7ffece..63aef547da87 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
@@ -40,6 +40,9 @@ int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_f
     /* Can't invoke bpf_task_acquire() on an untrusted pointer. */
     acquired = bpf_task_acquire(v->task);
+    if (!acquired)
+        return 0;
+
     bpf_task_release(acquired);

     return 0;
@@ -53,38 +56,49 @@ int BPF_PROG(task_kfunc_acquire_fp, struct task_struct *task, u64 clone_flags)

     /* Can't invoke bpf_task_acquire() on a random frame pointer. */
     acquired = bpf_task_acquire((struct task_struct *)&stack_task);
+    if (!acquired)
+        return 0;
+
     bpf_task_release(acquired);

     return 0;
 }

 SEC("kretprobe/free_task")
-__failure __msg("reg type unsupported for arg#0 function")
+__failure __msg("calling kernel function bpf_task_acquire is not allowed")
 int BPF_PROG(task_kfunc_acquire_unsafe_kretprobe, struct task_struct *task, u64 clone_flags)
 {
     struct task_struct *acquired;

+    /* Can't call bpf_task_acquire() or bpf_task_release() in an untrusted prog. */
     acquired = bpf_task_acquire(task);
-    /* Can't release a bpf_task_acquire()'d task without a NULL check. */
+    if (!acquired)
+        return 0;
     bpf_task_release(acquired);

     return 0;
 }

-SEC("tp_btf/task_newtask")
-__failure __msg("R1 must be referenced or trusted")
-int BPF_PROG(task_kfunc_acquire_trusted_walked, struct task_struct *task, u64 clone_flags)
+SEC("kretprobe/free_task")
+__failure __msg("calling kernel function bpf_task_acquire is not allowed")
+int BPF_PROG(task_kfunc_acquire_unsafe_kretprobe_rcu, struct task_struct *task, u64 clone_flags)
 {
     struct task_struct *acquired;

-    /* Can't invoke bpf_task_acquire() on a trusted pointer obtained from walking a struct. */
-    acquired = bpf_task_acquire(task->group_leader);
-    bpf_task_release(acquired);
+    bpf_rcu_read_lock();
+    if (!task) {
+        bpf_rcu_read_unlock();
+        return 0;
+    }
+    /* Can't call bpf_task_acquire() or bpf_task_release() in an untrusted prog. */
+    acquired = bpf_task_acquire(task);
+    if (acquired)
+        bpf_task_release(acquired);
+    bpf_rcu_read_unlock();

     return 0;
 }
-
 SEC("tp_btf/task_newtask")
 __failure __msg("Possibly NULL pointer passed to trusted arg0")
 int BPF_PROG(task_kfunc_acquire_null, struct task_struct *task, u64 clone_flags)
@@ -137,6 +151,8 @@ int BPF_PROG(task_kfunc_get_non_kptr_acquired, struct task_struct *task, u64 clo
     struct task_struct *kptr, *acquired;

     acquired = bpf_task_acquire(task);
+    if (!acquired)
+        return 0;

     /* Cannot use bpf_task_kptr_get() on a non-kptr, even if it was acquired. */
     kptr = bpf_task_kptr_get(&acquired);
@@ -185,6 +201,19 @@ int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_fla
     return 0;
 }

+SEC("tp_btf/task_newtask")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task, u64 clone_flags)
+{
+    struct task_struct *acquired;
+
+    acquired = bpf_task_acquire(task);
+    /* Can't invoke bpf_task_release() on an acquired task without a NULL check. */
+    bpf_task_release(acquired);
+
+    return 0;
+}
+
 SEC("tp_btf/task_newtask")
 __failure __msg("Unreleased reference")
 int BPF_PROG(task_kfunc_get_unreleased, struct task_struct *task, u64 clone_flags)
@@ -256,12 +285,13 @@ int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags)
         return -ENOENT;

     acquired = bpf_task_acquire(task);
+    if (!acquired)
+        return -EEXIST;

     old = bpf_kptr_xchg(&v->task, acquired);

     /* old cannot be passed to bpf_task_release() without a NULL check. */
     bpf_task_release(old);
-    bpf_task_release(old);

     return 0;
 }
@@ -298,6 +328,9 @@ int BPF_PROG(task_kfunc_from_lsm_task_free, struct task_struct *task)

     /* the argument of lsm task_free hook is untrusted. */
     acquired = bpf_task_acquire(task);
+    if (!acquired)
+        return 0;
+
     bpf_task_release(acquired);
     return 0;
 }
@@ -337,3 +370,30 @@ int BPF_PROG(task_access_comm4, struct task_struct *task, const char *buf, bool
     bpf_strncmp(task->comm, 16, "foo");
     return 0;
 }
+
+SEC("tp_btf/task_newtask")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(task_kfunc_release_in_map, struct task_struct *task, u64 clone_flags)
+{
+    struct task_struct *local;
+    struct __tasks_kfunc_map_value *v;
+
+    if (tasks_kfunc_map_insert(task))
+        return 0;
+
+    v = tasks_kfunc_map_value_lookup(task);
+    if (!v)
+        return 0;
+
+    bpf_rcu_read_lock();
+    local = v->task;
+    if (!local) {
+        bpf_rcu_read_unlock();
+        return 0;
+    }
+    /* Can't release a kptr that's still stored in a map. */
+    bpf_task_release(local);
+    bpf_rcu_read_unlock();
+
+    return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_success.c b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
index cfa7f12b84e8..a75304a5e860 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_success.c
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
@@ -47,7 +47,10 @@ static int test_acquire_release(struct task_struct *task)
     }

     acquired = bpf_task_acquire(task);
-    bpf_task_release(acquired);
+    if (acquired)
+        bpf_task_release(acquired);
+    else
+        err = 6;

     return 0;
 }
@@ -166,7 +169,10 @@ int BPF_PROG(test_task_current_acquire_release, struct task_struct *task, u64 cl
     current = bpf_get_current_task_btf();
     acquired = bpf_task_acquire(current);
-    bpf_task_release(acquired);
+    if (acquired)
+        bpf_task_release(acquired);
+    else
+        err = 1;

     return 0;
 }
@@ -241,3 +247,19 @@ int BPF_PROG(test_task_from_pid_invalid, struct task_struct *task, u64 clone_fla

     return 0;
 }
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_trusted_walked, struct task_struct *task, u64 clone_flags)
+{
+    struct task_struct *acquired;
+
+    /* task->group_leader is listed as a trusted, non-NULL field of task struct. */
+    acquired = bpf_task_acquire(task->group_leader);
+    if (acquired)
+        bpf_task_release(acquired);
+    else
+        err = 1;
+
+
+    return 0;
+}

From patchwork Fri Mar 31 19:57:32 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 77962
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH bpf-next v2 2/3] bpf: Remove now-defunct task kfuncs
Date: Fri, 31 Mar 2023 14:57:32 -0500
Message-Id: <20230331195733.699708-3-void@manifault.com>
In-Reply-To: <20230331195733.699708-1-void@manifault.com>
References: <20230331195733.699708-1-void@manifault.com>

In commit 22df776a9a86 ("tasks: Extract rcu_users out of union"), the
'refcount_t rcu_users' field was extracted out of a union with the
'struct rcu_head rcu' field. This allows us to safely perform a
refcount_inc_not_zero() on task->rcu_users when acquiring a reference
on a task struct. A prior patch leveraged this by making struct
task_struct an RCU-protected object in the verifier, and by changing
bpf_task_acquire() to use the task->rcu_users field for
synchronization.

Now that we can use RCU to protect tasks, we no longer need
bpf_task_kptr_get() or bpf_task_acquire_not_zero().
bpf_task_kptr_get() is completely unnecessary, as we can just use RCU
to get the object. bpf_task_acquire_not_zero() is now equivalent to
bpf_task_acquire().

In addition to these changes, this patch also updates the associated
selftests to no longer use these kfuncs.

Signed-off-by: David Vernet
---
 kernel/bpf/helpers.c                          | 69 ------------------
 .../selftests/bpf/prog_tests/task_kfunc.c     |  2 +-
 .../selftests/bpf/progs/rcu_read_lock.c       |  9 +--
 .../selftests/bpf/progs/task_kfunc_common.h   |  1 -
 .../selftests/bpf/progs/task_kfunc_failure.c  | 73 ------------------
 .../selftests/bpf/progs/task_kfunc_success.c  | 22 +++---
 6 files changed, 14 insertions(+), 162 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index e71a4a54ce99..6be16db9f188 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2019,73 +2019,6 @@ __bpf_kfunc struct task_struct *bpf_task_acquire(struct task_struct *p)
     return NULL;
 }

-/**
- * bpf_task_acquire_not_zero - Acquire a reference to a rcu task object. A task
- * acquired by this kfunc which is not stored in a map as a kptr, must be
- * released by calling bpf_task_release().
- * @p: The task on which a reference is being acquired.
- */
-__bpf_kfunc struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p)
-{
-    /* For the time being this function returns NULL, as it's not currently
-     * possible to safely acquire a reference to a task with RCU protection
-     * using get_task_struct() and put_task_struct(). This is due to the
-     * slightly odd mechanics of p->rcu_users, and how task RCU protection
-     * works.
-     *
-     * A struct task_struct is refcounted by two different refcount_t
-     * fields:
-     *
-     * 1. p->usage:     The "true" refcount field which tracks a task's
-     *                  lifetime. The task is freed as soon as this
-     *                  refcount drops to 0.
-     *
-     * 2. p->rcu_users: An "RCU users" refcount field which is statically
-     *                  initialized to 2, and is co-located in a union with
-     *                  a struct rcu_head field (p->rcu). p->rcu_users
-     *                  essentially encapsulates a single p->usage
-     *                  refcount, and when p->rcu_users goes to 0, an RCU
-     *                  callback is scheduled on the struct rcu_head which
-     *                  decrements the p->usage refcount.
-     *
-     * There are two important implications to this task refcounting logic
-     * described above. The first is that
-     * refcount_inc_not_zero(&p->rcu_users) cannot be used anywhere, as
-     * after the refcount goes to 0, the RCU callback being scheduled will
-     * cause the memory backing the refcount to again be nonzero due to the
-     * fields sharing a union. The other is that we can't rely on RCU to
-     * guarantee that a task is valid in a BPF program. This is because a
-     * task could have already transitioned to being in the TASK_DEAD
-     * state, had its rcu_users refcount go to 0, and its rcu callback
-     * invoked in which it drops its single p->usage reference. At this
-     * point the task will be freed as soon as the last p->usage reference
-     * goes to 0, without waiting for another RCU gp to elapse. The only
-     * way that a BPF program can guarantee that a task is valid is in this
-     * scenario is to hold a p->usage refcount itself.
-     *
-     * Until we're able to resolve this issue, either by pulling
-     * p->rcu_users and p->rcu out of the union, or by getting rid of
-     * p->usage and just using p->rcu_users for refcounting, we'll just
-     * return NULL here.
-     */
-    return NULL;
-}
-
-/**
- * bpf_task_kptr_get - Acquire a reference on a struct task_struct kptr. A task
- * kptr acquired by this kfunc which is not subsequently stored in a map, must
- * be released by calling bpf_task_release().
- * @pp: A pointer to a task kptr on which a reference is being acquired.
- */
-__bpf_kfunc struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
-{
-    /* We must return NULL here until we have clarity on how to properly
-     * leverage RCU for ensuring a task's lifetime. See the comment above
-     * in bpf_task_acquire_not_zero() for more details.
-     */
-    return NULL;
-}
-
 /**
  * bpf_task_release - Release the reference acquired on a task.
  * @p: The task on which a reference is being released.
@@ -2375,8 +2308,6 @@ BTF_ID_FLAGS(func, bpf_list_push_back)
 BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
-BTF_ID_FLAGS(func, bpf_task_acquire_not_zero, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
-BTF_ID_FLAGS(func, bpf_task_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
 BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE)
 BTF_ID_FLAGS(func, bpf_rbtree_add)
diff --git a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
index 330133ece3f6..740d5f644b40 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
@@ -73,7 +73,7 @@ static const char * const success_tests[] = {
     "test_task_acquire_release_current",
     "test_task_acquire_leave_in_map",
     "test_task_xchg_release",
-    "test_task_get_release",
+    "test_task_map_acquire_release",
     "test_task_current_acquire_release",
     "test_task_from_pid_arg",
     "test_task_from_pid_current",
diff --git a/tools/testing/selftests/bpf/progs/rcu_read_lock.c b/tools/testing/selftests/bpf/progs/rcu_read_lock.c
index 6a8c88e58df2..14fb01437fb8 100644
--- a/tools/testing/selftests/bpf/progs/rcu_read_lock.c
+++ b/tools/testing/selftests/bpf/progs/rcu_read_lock.c
@@ -23,7 +23,7 @@ struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym;
 void bpf_key_put(struct bpf_key *key) __ksym;
 void bpf_rcu_read_lock(void) __ksym;
 void bpf_rcu_read_unlock(void) __ksym;
-struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p) __ksym;
+struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
 void bpf_task_release(struct task_struct *p) __ksym;

 SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
@@ -159,13 +159,8 @@ int task_acquire(void *ctx)
         goto out;

     /* acquire a reference which can be used outside rcu read lock region */
-    gparent = bpf_task_acquire_not_zero(gparent);
+    gparent = bpf_task_acquire(gparent);
     if (!gparent)
-        /* Until we resolve the issues with using task->rcu_users, we
-         * expect bpf_task_acquire_not_zero() to return a NULL task.
-         * See the comment at the definition of
-         * bpf_task_acquire_not_zero() for more details.
-         */
         goto out;

     (void)bpf_task_storage_get(&map_a, gparent, 0, 0);
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_common.h b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
index bf0d1da9aff8..41f2d44f49cb 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_common.h
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
@@ -21,7 +21,6 @@ struct hash_map {
 } __tasks_kfunc_map SEC(".maps");

 struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
-struct task_struct *bpf_task_kptr_get(struct task_struct **pp) __ksym;
 void bpf_task_release(struct task_struct *p) __ksym;
 struct task_struct *bpf_task_from_pid(s32 pid) __ksym;
 void bpf_rcu_read_lock(void) __ksym;
 void bpf_rcu_read_unlock(void) __ksym;
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
index 63aef547da87..dcdea3127086 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
@@ -128,59 +128,6 @@ int BPF_PROG(task_kfunc_acquire_unreleased, struct task_struct *task, u64 clone_
     return 0;
 }

-SEC("tp_btf/task_newtask")
-__failure __msg("arg#0 expected pointer to map value")
-int BPF_PROG(task_kfunc_get_non_kptr_param, struct task_struct *task, u64 clone_flags)
-{
-    struct task_struct *kptr;
-
-    /* Cannot use bpf_task_kptr_get() on a non-kptr, even on a valid task. */
-    kptr = bpf_task_kptr_get(&task);
-    if (!kptr)
-        return 0;
-
-    bpf_task_release(kptr);
-
-    return 0;
-}
-
-SEC("tp_btf/task_newtask")
-__failure __msg("arg#0 expected pointer to map value")
-int BPF_PROG(task_kfunc_get_non_kptr_acquired, struct task_struct *task, u64 clone_flags)
-{
-    struct task_struct *kptr, *acquired;
-
-    acquired = bpf_task_acquire(task);
-    if (!acquired)
-        return 0;
-
-    /* Cannot use bpf_task_kptr_get() on a non-kptr, even if it was acquired. */
-    kptr = bpf_task_kptr_get(&acquired);
-    bpf_task_release(acquired);
-    if (!kptr)
-        return 0;
-
-    bpf_task_release(kptr);
-
-    return 0;
-}
-
-SEC("tp_btf/task_newtask")
-__failure __msg("arg#0 expected pointer to map value")
-int BPF_PROG(task_kfunc_get_null, struct task_struct *task, u64 clone_flags)
-{
-    struct task_struct *kptr;
-
-    /* Cannot use bpf_task_kptr_get() on a NULL pointer. */
-    kptr = bpf_task_kptr_get(NULL);
-    if (!kptr)
-        return 0;
-
-    bpf_task_release(kptr);
-
-    return 0;
-}
-
 SEC("tp_btf/task_newtask")
 __failure __msg("Unreleased reference")
 int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_flags)
@@ -214,26 +161,6 @@ int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task,
     return 0;
 }

-SEC("tp_btf/task_newtask")
-__failure __msg("Unreleased reference")
-int BPF_PROG(task_kfunc_get_unreleased, struct task_struct *task, u64 clone_flags)
-{
-    struct task_struct *kptr;
-    struct __tasks_kfunc_map_value *v;
-
-    v = insert_lookup_task(task);
-    if (!v)
-        return 0;
-
-    kptr = bpf_task_kptr_get(&v->task);
-    if (!kptr)
-        return 0;
-
-    /* Kptr acquired above is never released. */
-
-    return 0;
-}
-
 SEC("tp_btf/task_newtask")
 __failure __msg("Possibly NULL pointer passed to trusted arg0")
 int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_flags)
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_success.c b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
index a75304a5e860..b09371bba204 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_success.c
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
@@ -122,7 +122,7 @@ int BPF_PROG(test_task_xchg_release, struct task_struct *task, u64 clone_flags)
 }

 SEC("tp_btf/task_newtask")
-int BPF_PROG(test_task_get_release, struct task_struct *task, u64 clone_flags)
+int BPF_PROG(test_task_map_acquire_release, struct task_struct *task, u64 clone_flags)
 {
     struct task_struct *kptr;
     struct __tasks_kfunc_map_value *v;
@@ -143,18 +143,18 @@ int BPF_PROG(test_task_get_release, struct task_struct *task, u64 clone_flags)
         return 0;
     }

-    kptr = bpf_task_kptr_get(&v->task);
-    if (kptr) {
-        /* Until we resolve the issues with using task->rcu_users, we
-         * expect bpf_task_kptr_get() to return a NULL task. See the
-         * comment at the definition of bpf_task_acquire_not_zero() for
-         * more details.
-         */
-        bpf_task_release(kptr);
+    bpf_rcu_read_lock();
+    kptr = v->task;
+    if (!kptr) {
         err = 3;
-        return 0;
+    } else {
+        kptr = bpf_task_acquire(kptr);
+        if (!kptr)
+            err = 4;
+        else
+            bpf_task_release(kptr);
     }
-
+    bpf_rcu_read_unlock();
     return 0;
 }

From patchwork Fri Mar 31 19:57:33 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 77963
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH bpf-next v2 3/3] bpf,docs: Update documentation to reflect new task kfuncs
Date: Fri, 31 Mar 2023 14:57:33 -0500
Message-Id: <20230331195733.699708-4-void@manifault.com>
In-Reply-To: <20230331195733.699708-1-void@manifault.com>
References: <20230331195733.699708-1-void@manifault.com>

Now that struct task_struct objects are RCU safe, and bpf_task_acquire()
can return NULL, we should update the BPF task kfunc documentation to
reflect the current state of the API.

Signed-off-by: David Vernet
---
 Documentation/bpf/kfuncs.rst | 49 +++++++++++++++++++++++++++++++-----
 1 file changed, 43 insertions(+), 6 deletions(-)

diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst
index bf1b85941452..d8a16c4bef7f 100644
--- a/Documentation/bpf/kfuncs.rst
+++ b/Documentation/bpf/kfuncs.rst
@@ -471,13 +471,50 @@ struct_ops callback arg. For example:
         struct task_struct *acquired;

         acquired = bpf_task_acquire(task);
+        if (acquired)
+            /*
+             * In a typical program you'd do something like store
+             * the task in a map, and the map will automatically
+             * release it later. Here, we release it manually.
+             */
+            bpf_task_release(acquired);
+        return 0;
+    }
+
+
+References acquired on ``struct task_struct *`` objects are RCU protected.
+Therefore, when in an RCU read region, you can obtain a pointer to a task
+embedded in a map value without having to acquire a reference:
+
+.. code-block:: c
+
+    #define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))
+    private(TASK) static struct task_struct *global;
+
+    /**
+     * A trivial example showing how to access a task stored
+     * in a map using RCU.
+     */
+    SEC("tp_btf/task_newtask")
+    int BPF_PROG(task_rcu_read_example, struct task_struct *task, u64 clone_flags)
+    {
+        struct task_struct *local_copy;
+
+        bpf_rcu_read_lock();
+        local_copy = global;
+        if (local_copy)
+            /*
+             * We could also pass local_copy to kfuncs or helper functions here,
+             * as we're guaranteed that local_copy will be valid until we exit
+             * the RCU read region below.
+             */
+            bpf_printk("Global task %s is valid", local_copy->comm);
+        else
+            bpf_printk("No global task found");
+        bpf_rcu_read_unlock();
+
+        /* At this point we can no longer reference local_copy. */
-        /*
-         * In a typical program you'd do something like store
-         * the task in a map, and the map will automatically
-         * release it later. Here, we release it manually.
-         */
-        bpf_task_release(acquired);
         return 0;
     }
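
As a closing note for the series, below is an illustrative sketch (not
part of any of the patches above) that combines the two patterns the
selftests exercise: reading a task kptr out of a map value under
bpf_rcu_read_lock(), upgrading it to a full reference with
bpf_task_acquire(), and releasing it after the RCU read-side critical
section ends. The map layout mirrors the selftests'
__tasks_kfunc_map; the map, key, and program names here are assumptions
made for the example only.

  /* Illustrative sketch only -- not part of this series. Map and
   * program names are hypothetical; the value layout mirrors
   * task_kfunc_common.h.
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct task_map_value {
      struct task_struct __kptr *task;
  };

  struct {
      __uint(type, BPF_MAP_TYPE_HASH);
      __type(key, int);
      __type(value, struct task_map_value);
      __uint(max_entries, 1);
  } task_map SEC(".maps");

  struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
  void bpf_task_release(struct task_struct *p) __ksym;
  void bpf_rcu_read_lock(void) __ksym;
  void bpf_rcu_read_unlock(void) __ksym;

  char _license[] SEC("license") = "GPL";

  SEC("tp_btf/task_newtask")
  int BPF_PROG(task_map_rcu_acquire_example, struct task_struct *task, u64 clone_flags)
  {
      struct task_map_value *v;
      struct task_struct *kptr;
      int key = 0;

      v = bpf_map_lookup_elem(&task_map, &key);
      if (!v)
          return 0;

      /* Under bpf_rcu_read_lock(), the kptr stored in the map value can
       * be read and passed to KF_RCU kfuncs without taking a reference
       * first.
       */
      bpf_rcu_read_lock();
      kptr = v->task;
      if (!kptr) {
          bpf_rcu_read_unlock();
          return 0;
      }

      /* Upgrade the RCU-protected pointer to a full reference so that it
       * can outlive the RCU read-side critical section.
       */
      kptr = bpf_task_acquire(kptr);
      bpf_rcu_read_unlock();

      if (!kptr)
          return 0;

      /* The acquired reference remains valid here and must be released. */
      bpf_task_release(kptr);

      return 0;
  }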