[bpf-next,v2,0/3] Remove KF_KPTR_GET kfunc flag

Message ID 20230416084928.326135-1-void@manifault.com
Series Remove KF_KPTR_GET kfunc flag

Message

David Vernet April 16, 2023, 8:49 a.m. UTC
  We've managed to improve the UX for kptrs significantly over the last 9
months. All of the existing use cases which previously had KF_KPTR_GET
kfuncs (struct bpf_cpumask *, struct task_struct *, and struct cgroup *)
have been updated to be synchronized using RCU. In other words,
their KF_KPTR_GET kfuncs have been removed in favor of KF_RCU |
KF_ACQUIRE kfuncs, with the pointers themselves also being readable from
maps in an RCU read region thanks to the types being RCU safe.
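
For reference, the pattern that replaces KF_KPTR_GET now looks roughly
like the sketch below. This is illustrative only (not code from this
series); the map_value layout, the 'task' field name, and the 'v' map
value pointer are hypothetical:

struct map_value {
	struct task_struct __kptr *task;	/* referenced kptr field */
};

/* 'v' is a pointer to a struct map_value looked up from a map. */
struct task_struct *p, *acquired = NULL;

bpf_rcu_read_lock();
p = v->task;			/* readable under RCU; may be NULL */
if (p)
	acquired = bpf_task_acquire(p);	/* KF_RCU | KF_ACQUIRE kfunc */
bpf_rcu_read_unlock();

if (acquired)
	bpf_task_release(acquired);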

While KF_KPTR_GET was a logical starting point for kptrs, it's become
clear that they're not the correct abstraction. KF_KPTR_GET is a flag
that essentially does nothing other than enforcing that the argument to
a function is a pointer to a referenced kptr map value. At first glance,
that's a useful thing to guarantee to a kfunc. It gives kfuncs the
ability to try and acquire a reference on that kptr without requiring
the BPF prog to do something like this:

struct kptr_type *in_map, *new = NULL;

/* Take ownership of the kptr by swapping it out of the map. */
in_map = bpf_kptr_xchg(&map->value, NULL);
if (in_map) {
	new = bpf_kptr_type_acquire(in_map);
	/* Try to put the original pointer back into the map. */
	in_map = bpf_kptr_xchg(&map->value, in_map);
	if (in_map)
		/* Another program stored its own kptr in the window;
		 * we now own whatever was swapped back out, so release it.
		 */
		bpf_kptr_type_release(in_map);
}

That's clearly a pretty ugly (and racy) UX, and if using KF_KPTR_GET is
the only alternative, it's better than nothing. However, the problem
with any KF_KPTR_GET kfunc lies in the fact that it always requires some
kind of synchronization in order to safely do an opportunistic acquire
of the kptr in the map. This is because a BPF program running on another
CPU could do a bpf_kptr_xchg() on that map value, and free the kptr
after it's been read by the KF_KPTR_GET kfunc. For example, the
now-removed bpf_task_kptr_get() kfunc did the following:

struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
{
	struct task_struct *p;

	rcu_read_lock();
	p = READ_ONCE(*pp);
	/* If p is non-NULL, it could still be freed by another CPU,
	 * so we have to do an opportunistic refcount_inc_not_zero()
	 * and return NULL if the task will be freed after the
	 * current RCU read region.
	 */
	if (p && !refcount_inc_not_zero(&p->rcu_users))
		p = NULL;
	rcu_read_unlock();

	return p;
}
    
In other words, the kfunc uses RCU to ensure that the task remains valid
after it's been peeked from the map. However, this is completely
redundant with just defining a KF_RCU kfunc that itself does a
refcount_inc_not_zero(), which is exactly what bpf_task_acquire() now
does.
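
For comparison, bpf_task_acquire() as a KF_RCU | KF_ACQUIRE kfunc is
roughly the following (a sketch based on the description above, not a
verbatim copy from the tree):

__bpf_kfunc struct task_struct *bpf_task_acquire(struct task_struct *p)
{
	/* The verifier guarantees 'p' remains valid for the duration of
	 * the RCU read region (KF_RCU), so no rcu_read_lock() is needed
	 * inside the kfunc itself.
	 */
	if (refcount_inc_not_zero(&p->rcu_users))
		return p;
	return NULL;
}

It is registered with KF_ACQUIRE | KF_RCU | KF_RET_NULL, so callers must
check the return value for NULL and eventually pass any non-NULL pointer
to bpf_task_release().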

So, the question of whether KF_KPTR_GET is useful is actually, "Are
there any synchronization mechanisms / safety flags that are required by
certain kptrs, but which are not provided by the verifier to kfuncs?"
The answer to that question today is "No", because every kptr we
currently care about is RCU protected.

Even if the answer ever became "yes", the proper way to support that
referenced kptr type would be to add support for whatever
synchronization mechanism it requires in the verifier, rather than
giving kfuncs a flag that says, "Here's a pointer to a referenced kptr
in a map, do whatever you need to do."

With all that said, this patchset removes the KF_KPTR_GET kfunc flag so
as to consolidate the kfunc API and simplify the verifier.

---

This is v2 of this patchset.

v1: https://lore.kernel.org/all/20230415103231.236063-1-void@manifault.com/

Changelog:
----------

v1 -> v2:
- Fix KF_RU -> KF_RCU typo in commit summary for patch 2/3, and in cover
  letter (Alexei)
- In order to reduce churn, don't shift all KF_* flags down by 1. We'll
  just fill the now-empty slot the next time we add a flag (Alexei)


David Vernet (3):
  bpf: Remove bpf_kfunc_call_test_kptr_get() test kfunc
  bpf: Remove KF_KPTR_GET kfunc flag
  bpf,docs: Remove KF_KPTR_GET from documentation

 Documentation/bpf/kfuncs.rst                  | 21 ++---
 include/linux/btf.h                           |  1 -
 kernel/bpf/verifier.c                         | 65 ----------------
 net/bpf/test_run.c                            | 12 ---
 tools/testing/selftests/bpf/progs/map_kptr.c  | 40 ++--------
 .../selftests/bpf/progs/map_kptr_fail.c       | 78 -------------------
 .../testing/selftests/bpf/verifier/map_kptr.c | 27 -------
 7 files changed, 11 insertions(+), 233 deletions(-)
  

Comments

patchwork-bot+netdevbpf@kernel.org April 16, 2023, 4 p.m. UTC | #1
Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Sun, 16 Apr 2023 03:49:25 -0500 you wrote:
> We've managed to improve the UX for kptrs significantly over the last 9
> months. All of the existing use cases which previously had KF_KPTR_GET
> kfuncs (struct bpf_cpumask *, struct task_struct *, and struct cgroup *)
> have all been updated to be synchronized using RCU. In other words,
> their KF_KPTR_GET kfuncs have been removed in favor of KF_RCU |
> KF_ACQUIRE kfuncs, with the pointers themselves also being readable from
> maps in an RCU read region thanks to the types being RCU safe.
> 
> [...]

Here is the summary with links:
  - [bpf-next,v2,1/3] bpf: Remove bpf_kfunc_call_test_kptr_get() test kfunc
    https://git.kernel.org/bpf/bpf-next/c/09b501d90521
  - [bpf-next,v2,2/3] bpf: Remove KF_KPTR_GET kfunc flag
    https://git.kernel.org/bpf/bpf-next/c/7b4ddf3920d2
  - [bpf-next,v2,3/3] bpf,docs: Remove KF_KPTR_GET from documentation
    https://git.kernel.org/bpf/bpf-next/c/530474e6d044

You are awesome, thank you!