[bpf-next,v2,2/3] bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage

Message ID 20221029025433.2533810-2-keescook@chromium.org
State New
Series bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage

Commit Message

Kees Cook Oct. 29, 2022, 2:54 a.m. UTC
  Round up allocations with kmalloc_size_roundup() so that the verifier's
use of ksize() is always accurate and no special handling of the memory
is needed by KASAN, UBSAN_BOUNDS, or FORTIFY_SOURCE. Pass the new size
information back up to callers so they can use the space immediately;
this allows array resizing to happen less frequently as well.

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: Song Liu <song@kernel.org>
Cc: Yonghong Song <yhs@fb.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Stanislav Fomichev <sdf@google.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: bpf@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 kernel/bpf/verifier.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
  

Comments

Daniel Borkmann Nov. 1, 2022, 1:52 p.m. UTC | #1
On 10/29/22 4:54 AM, Kees Cook wrote:
> Round up allocations with kmalloc_size_roundup() so that the verifier's
> use of ksize() is always accurate and no special handling of the memory
> is needed by KASAN, UBSAN_BOUNDS, or FORTIFY_SOURCE. Pass the new size
> information back up to callers so they can use the space immediately;
> this allows array resizing to happen less frequently as well.
> 
[...]

The commit message is a bit cryptic here without further context. Is this
a bug fix or an improvement? I read it as the latter, but it would be good to have
more context here for reviewers (maybe a Link tag pointing to some discussion
or the like). Also, why is kmalloc_size_roundup() not hidden from kmalloc
callers; isn't this a tree-wide issue?

Thanks,
Daniel

>   kernel/bpf/verifier.c | 12 ++++++++----
>   1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index eb8c34db74c7..1c040d27b8f6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1008,9 +1008,9 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
>   	if (unlikely(check_mul_overflow(n, size, &bytes)))
>   		return NULL;
>   
> -	if (ksize(dst) < bytes) {
> +	if (ksize(dst) < ksize(src)) {
>   		kfree(dst);
> -		dst = kmalloc_track_caller(bytes, flags);
> +		dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
>   		if (!dst)
>   			return NULL;
>   	}
> @@ -1027,12 +1027,14 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
>    */
>   static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
>   {
> +	size_t alloc_size;
>   	void *new_arr;
>   
>   	if (!new_n || old_n == new_n)
>   		goto out;
>   
> -	new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
> +	alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
> +	new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
>   	if (!new_arr) {
>   		kfree(arr);
>   		return NULL;
> @@ -2504,9 +2506,11 @@ static int push_jmp_history(struct bpf_verifier_env *env,
>   {
>   	u32 cnt = cur->jmp_history_cnt;
>   	struct bpf_idx_pair *p;
> +	size_t alloc_size;
>   
>   	cnt++;
> -	p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
> +	alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
> +	p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
>   	if (!p)
>   		return -ENOMEM;
>   	p[cnt - 1].idx = env->insn_idx;
>
  
Kees Cook Nov. 1, 2022, 5:01 p.m. UTC | #2
On Tue, Nov 01, 2022 at 02:52:16PM +0100, Daniel Borkmann wrote:
> On 10/29/22 4:54 AM, Kees Cook wrote:
> > Round up allocations with kmalloc_size_roundup() so that the verifier's
> > use of ksize() is always accurate and no special handling of the memory
> > is needed by KASAN, UBSAN_BOUNDS, or FORTIFY_SOURCE. Pass the new size
> > information back up to callers so they can use the space immediately;
> > this allows array resizing to happen less frequently as well.
> > 
> [...]
> 
> The commit message is a bit cryptic here without further context. Is this
> a bug fix or an improvement? I read it as the latter, but it would be good to have

It's an improvement -- for one, it depends on the recently added
kmalloc_size_roundup() helper.
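
To make the helper's effect concrete, a minimal sketch (the function and
variable names here are illustrative, not taken from the patch):

	#include <linux/slab.h>

	/* Illustrative only -- not part of the patch. */
	static void *alloc_exact_usable(size_t want, size_t *usable)
	{
		size_t alloc = kmalloc_size_roundup(want); /* e.g. 100 -> 128 */
		void *buf = kmalloc(alloc, GFP_KERNEL);

		if (buf)
			*usable = alloc; /* ksize(buf) == alloc: no hidden slack */
		return buf;
	}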

> more context here for reviewers (maybe a Link tag pointing to some discussion
> or the like). Also, why is kmalloc_size_roundup() not hidden from kmalloc
> callers; isn't this a tree-wide issue?

The main issue is that _most_ allocation callers want an explicitly sized
allocation (and not "more"), and that dynamic runtime analysis tools
(e.g. KASAN, UBSAN_BOUNDS, FORTIFY_SOURCE, etc.) are looking for precise
bounds checking (i.e. not something that is rounded up). A tiny handful
of allocations were doing an implicit alloc/realloc loop that actually
depended on ksize(), and didn't always call realloc. This has
created a long series of bugs and problems over many years related to the
runtime bounds checking, so these callers are finally being adjusted to
_not_ depend on the ksize() side-effect, by doing one of several things:

- tracking the allocation size precisely and just never calling ksize()
  at all[1].

- always calling realloc and not using ksize() at all. (This solution
  actually ends up being a subset of the next solution.)

- using kmalloc_size_roundup() to explicitly round up the desired
  allocation size immediately[2].

The bpf/verifier case is another instance of this latter approach.
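
To make the contrast concrete, a sketch of the two patterns (hypothetical
helpers, for illustration only -- this is not verifier code):

	/* Old pattern: implicitly rely on ksize() slack, maybe skip realloc. */
	static void *grow_old(void *buf, size_t need)
	{
		if (ksize(buf) >= need)
			return buf; /* silently uses space beyond the last request */
		return krealloc(buf, need, GFP_KERNEL);
	}

	/* New pattern: round up explicitly and report the real capacity. */
	static void *grow_new(void *buf, size_t need, size_t *cap)
	{
		size_t alloc_size = kmalloc_size_roundup(need);
		void *p = krealloc(buf, alloc_size, GFP_KERNEL);

		if (p)
			*cap = alloc_size; /* grow into this without calling ksize() */
		return p;
	}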

Because some of the dynamic bounds checking depends on the size being an
_argument_ to an allocator function (i.e. see the __alloc_size attribute),
because the ksize() users are rare, and because it could waste local
variables, it was deemed better to explicitly separate the rounding up
from the allocation itself[3].
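
For reference, kmalloc() is annotated roughly like this in <linux/slab.h>
(simplified here; the real declaration carries more attributes):

	void *kmalloc(size_t size, gfp_t flags) __alloc_size(1);

With __alloc_size(1), the compiler and the fortified helpers treat the
"size" argument as the object's bound. If the allocator silently rounded
up internally, legitimate accesses into the slack would exceed that
declared bound, so the rounding has to appear in the argument itself:

	p = kmalloc(kmalloc_size_roundup(sz), GFP_KERNEL); /* bound == usable */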

Hopefully that helps clarify! :)

-Kees

[1] e.g.:
    https://git.kernel.org/linus/712f210a457d
    https://git.kernel.org/linus/72c08d9f4c72

[2] e.g.:
    https://git.kernel.org/netdev/net-next/c/12d6c1d3a2ad
    https://git.kernel.org/netdev/net-next/c/ab3f7828c979
    https://git.kernel.org/netdev/net-next/c/d6dd508080a3

[3] https://lore.kernel.org/lkml/0ea1fc165a6c6117f982f4f135093e69cb884930.camel@redhat.com/
  

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index eb8c34db74c7..1c040d27b8f6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1008,9 +1008,9 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 
-	if (ksize(dst) < bytes) {
+	if (ksize(dst) < ksize(src)) {
 		kfree(dst);
-		dst = kmalloc_track_caller(bytes, flags);
+		dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
 		if (!dst)
 			return NULL;
 	}
@@ -1027,12 +1027,14 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
  */
 static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
 {
+	size_t alloc_size;
 	void *new_arr;
 
 	if (!new_n || old_n == new_n)
 		goto out;
 
-	new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
+	alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
+	new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
 	if (!new_arr) {
 		kfree(arr);
 		return NULL;
@@ -2504,9 +2506,11 @@ static int push_jmp_history(struct bpf_verifier_env *env,
 {
 	u32 cnt = cur->jmp_history_cnt;
 	struct bpf_idx_pair *p;
+	size_t alloc_size;
 
 	cnt++;
-	p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
+	alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
+	p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
 	if (!p)
 		return -ENOMEM;
 	p[cnt - 1].idx = env->insn_idx;
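
For completeness, a sketch (hypothetical helper, not part of this patch) of
how a caller can turn the rounded-up byte count back into usable element
capacity, which is what lets these arrays resize less frequently:

	#include <linux/overflow.h>
	#include <linux/slab.h>

	/* Hypothetical helper, for illustration only. */
	static void *grow_elems(void *arr, size_t n, size_t size, size_t *cap)
	{
		/* size_mul() saturates to SIZE_MAX on overflow, so krealloc() fails. */
		size_t alloc_size = kmalloc_size_roundup(size_mul(n, size));
		void *p = krealloc(arr, alloc_size, GFP_KERNEL);

		if (p)
			*cap = alloc_size / size; /* often > n: free headroom */
		return p;
	}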