[RFC,v2] kasan: add atomic tests

Message ID 20240131210041.686657-1-paul.heidekrueger@tum.de
State New
Series [RFC,v2] kasan: add atomic tests

Commit Message

Paul Heidekrüger Jan. 31, 2024, 9 p.m. UTC
  Hi!

This RFC patch adds tests that detect whether KASan is able to catch
unsafe atomic accesses.

Since v1, which can be found on Bugzilla (see "Closes:" tag), I've made
the following suggested changes:

* Adjust size of allocations to make kasan_atomics() work with all KASan modes
* Remove comments and move tests closer to the bitops tests
* For functions taking two addresses as an input, test each address in a separate function call.
* Rename variables for clarity
* Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()

I'm still unclear on which kinds of atomic accesses we should be testing
though. The patch below only covers a subset, and I don't know if it
would be feasible to just manually add all atomics of interest. Which
ones would those be exactly? As Andrey pointed out on Bugzilla, if we
were to include all of the atomic64_* ones, that would make a lot of
function calls.

Also, the availability of atomics varies between architectures; I did my
testing on arm64. Is something like gen-atomic-instrumented.sh required?

Many thanks,
Paul

CC: Marco Elver <elver@google.com>
CC: Andrey Konovalov <andreyknvl@gmail.com>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
Signed-off-by: Paul Heidekrüger <paul.heidekrueger@tum.de>
---
 mm/kasan/kasan_test.c | 50 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)
  

Comments

Marco Elver Feb. 1, 2024, 9:38 a.m. UTC | #1
On Wed, 31 Jan 2024 at 22:01, Paul Heidekrüger <paul.heidekrueger@tum.de> wrote:
>
> Hi!
>
> This RFC patch adds tests that detect whether KASan is able to catch
> unsafe atomic accesses.
>
> Since v1, which can be found on Bugzilla (see "Closes:" tag), I've made
> the following suggested changes:
>
> * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> * Remove comments and move tests closer to the bitops tests
> * For functions taking two addresses as an input, test each address in a separate function call.
> * Rename variables for clarity
> * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
>
> I'm still unclear on which kinds of atomic accesses we should be testing
> though. The patch below only covers a subset, and I don't know if it
> would be feasible to just manually add all atomics of interest. Which
> ones would those be exactly?

The atomics wrappers are generated by a script. An exhaustive test
case would, if generated by hand, be difficult to keep in sync if some
variants are removed or renamed (although that's probably a relatively
rare occurrence).

I would probably just cover some of the most common ones that all
architectures (that support KASAN) provide. I think you are already
covering some of the most important ones, and I'd just say it's good
enough for the test.

> As Andrey pointed out on Bugzilla, if we
> were to include all of the atomic64_* ones, that would make a lot of
> function calls.

Just include a few atomic64_ cases, similar to the ones you already
include for atomic_. Although beware that the atomic64_t helpers are
likely not available on 32-bit architectures, so you need an #ifdef
CONFIG_64BIT.
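
For illustration, the atomic64_ additions inside kasan_atomics_helper()
could look roughly like this (only a sketch; which ops to include is
your call):

#ifdef CONFIG_64BIT
	KUNIT_EXPECT_KASAN_FAIL(test, atomic64_read(unsafe));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic64_set(unsafe, 42));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic64_add(42, unsafe));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic64_xchg(unsafe, 42));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic64_cmpxchg(unsafe, 21, 42));
#endif /* CONFIG_64BIT */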

Alternatively, there is also atomic_long_t, which (on 64-bit
architectures) just wraps atomic64_t helpers, and on 32-bit the
atomic_t ones. I'd probably opt for the atomic_long_t variants, just
to keep it simpler and get some additional coverage on 32-bit
architectures.
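
If you go that route, the helper would only need a handful of extra
lines along these lines (again just a sketch, reusing the existing
unsafe pointer):

	KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));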

> Also, the availability of atomics varies between architectures; I did my
> testing on arm64. Is something like gen-atomic-instrumented.sh required?

I would not touch gen-atomic-instrumented.sh for the test.

> Many thanks,
> Paul
>
> CC: Marco Elver <elver@google.com>
> CC: Andrey Konovalov <andreyknvl@gmail.com>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> Signed-off-by: Paul Heidekrüger <paul.heidekrueger@tum.de>
> ---
>  mm/kasan/kasan_test.c | 50 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
>
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 8281eb42464b..1ab4444fe4a0 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -1150,6 +1150,55 @@ static void kasan_bitops_tags(struct kunit *test)
>         kfree(bits);
>  }
>
> +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> +{
> +       int *i_safe = (int *)safe;
> +       int *i_unsafe = (int *)unsafe;
> +
> +       KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> +       KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> +}
> +
> +static void kasan_atomics(struct kunit *test)
> +{
> +       int *a1, *a2;

If you're casting it to void* below and never using it as an int* in this
function, just make these void* (the sizeof can just be sizeof(int)).

> +       a1 = kzalloc(48, GFP_KERNEL);
> +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> +       a2 = kzalloc(sizeof(*a1), GFP_KERNEL);
> +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> +
> +       kasan_atomics_helper(test, (void *)a1 + 48, (void *)a2);

We try to ensure (where possible) that the KASAN tests are not
destructive to the rest of the kernel. I think the size of "48" was
chosen to fall into the 64-byte size class, similar to the bitops. I
would just copy that comment, so nobody attempts to change it in
future. :-)
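
Putting the two suggestions (void * plus the size-class comment)
together, the allocation part could look something like this (just a
sketch, not a tested v3):

static void kasan_atomics(struct kunit *test)
{
	void *a1, *a2;

	/*
	 * Just as with the bitops tests, allocate 48 bytes so the object
	 * falls into the 64-byte size class and the accesses at offset 48
	 * land in the object's own redzone rather than in another object.
	 */
	a1 = kzalloc(48, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
	a2 = kzalloc(sizeof(int), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);

	/* Point the unsafe address just past the 48 allocated bytes. */
	kasan_atomics_helper(test, a1 + 48, a2);

	kfree(a1);
	kfree(a2);
}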

> +       kfree(a1);
> +       kfree(a2);

Thanks,
-- Marco
  
Paul Heidekrüger Feb. 2, 2024, 10:03 a.m. UTC | #2
On 01.02.2024 10:38, Marco Elver wrote:
> On Wed, 31 Jan 2024 at 22:01, Paul Heidekrüger <paul.heidekrueger@tum.de> wrote:
> >
> > Hi!
> >
> > This RFC patch adds tests that detect whether KASan is able to catch
> > unsafe atomic accesses.
> >
> > Since v1, which can be found on Bugzilla (see "Closes:" tag), I've made
> > the following suggested changes:
> >
> > * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> > * Remove comments and move tests closer to the bitops tests
> > * For functions taking two addresses as an input, test each address in a separate function call.
> > * Rename variables for clarity
> > * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
> >
> > I'm still unclear on which kinds of atomic accesses we should be testing
> > though. The patch below only covers a subset, and I don't know if it
> > would be feasible to just manually add all atomics of interest. Which
> > ones would those be exactly?
> 
> The atomics wrappers are generated by a script. An exhaustive test
> case would, if generated by hand, be difficult to keep in sync if some
> variants are removed or renamed (although that's probably a relatively
> rare occurrence).
> 
> I would probably just cover some of the most common ones that all
> architectures (that support KASAN) provide. I think you are already
> covering some of the most important ones, and I'd just say it's good
> enough for the test.
> 
> > As Andrey pointed out on Bugzilla, if we
> > were to include all of the atomic64_* ones, that would make a lot of
> > function calls.
> 
> Just include a few atomic64_ cases, similar to the ones you already
> include for atomic_. Although beware that the atomic64_t helpers are
> likely not available on 32-bit architectures, so you need an #ifdef
> CONFIG_64BIT.
> 
> Alternatively, there is also atomic_long_t, which (on 64-bit
> architectures) just wraps atomic64_t helpers, and on 32-bit the
> atomic_t ones. I'd probably opt for the atomic_long_t variants, just
> to keep it simpler and get some additional coverage on 32-bit
> architectures.

If I were to add some atomic_long_* cases, e.g. atomic_long_read() or 
atomic_long_set(), in addition to the test cases I already have, wouldn't that 
mean that on 32-bit architectures we would have the same test case twice, 
because atomic_read() and atomic_long_read() both boil down to raw_atomic_read(), 
and likewise atomic_set() and atomic_long_set() to raw_atomic_set()? Or did I 
misunderstand, and should I only be covering the atomic_long_* functions whose 
atomic_* counterparts aren't in the test cases already?

> > Also, the availability of atomics varies between architectures; I did my
> > testing on arm64. Is something like gen-atomic-instrumented.sh required?
> 
> I would not touch gen-atomic-instrumented.sh for the test.
> 
> > Many thanks,
> > Paul
> >
> > CC: Marco Elver <elver@google.com>
> > CC: Andrey Konovalov <andreyknvl@gmail.com>
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> > Signed-off-by: Paul Heidekrüger <paul.heidekrueger@tum.de>
> > ---
> >  mm/kasan/kasan_test.c | 50 +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 50 insertions(+)
> >
> > diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> > index 8281eb42464b..1ab4444fe4a0 100644
> > --- a/mm/kasan/kasan_test.c
> > +++ b/mm/kasan/kasan_test.c
> > @@ -1150,6 +1150,55 @@ static void kasan_bitops_tags(struct kunit *test)
> >         kfree(bits);
> >  }
> >
> > +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> > +{
> > +       int *i_safe = (int *)safe;
> > +       int *i_unsafe = (int *)unsafe;
> > +
> > +       KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> > +}
> > +
> > +static void kasan_atomics(struct kunit *test)
> > +{
> > +       int *a1, *a2;
> 
> If you're casting it to void* below and never using it as an int* in this
> function, just make these void* (the sizeof can just be sizeof(int)).
> 
> > +       a1 = kzalloc(48, GFP_KERNEL);
> > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> > +       a2 = kzalloc(sizeof(*a1), GFP_KERNEL);
> > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> > +
> > +       kasan_atomics_helper(test, (void *)a1 + 48, (void *)a2);
> 
> We try to ensure (where possible) that the KASAN tests are not
> destructive to the rest of the kernel. I think the size of "48" was
> chosen to fall into the 64-byte size class, similar to the bitops. I
> would just copy that comment, so nobody attempts to change it in
> future. :-)

And yes to all the rest - thanks for the feedback!

> > +       kfree(a1);
> > +       kfree(a2);
> 
> Thanks,
> -- Marco

Many thanks,
Paul
  
Marco Elver Feb. 2, 2024, 10:12 a.m. UTC | #3
On Fri, 2 Feb 2024 at 11:03, Paul Heidekrüger <paul.heidekrueger@tum.de> wrote:
>
> On 01.02.2024 10:38, Marco Elver wrote:
> > On Wed, 31 Jan 2024 at 22:01, Paul Heidekrüger <paul.heidekrueger@tum.de> wrote:
> > >
> > > Hi!
> > >
> > > This RFC patch adds tests that detect whether KASan is able to catch
> > > unsafe atomic accesses.
> > >
> > > Since v1, which can be found on Bugzilla (see "Closes:" tag), I've made
> > > the following suggested changes:
> > >
> > > * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> > > * Remove comments and move tests closer to the bitops tests
> > > * For functions taking two addresses as an input, test each address in a separate function call.
> > > * Rename variables for clarity
> > > * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
> > >
> > > I'm still unclear on which kinds of atomic accesses we should be testing
> > > though. The patch below only covers a subset, and I don't know if it
> > > would be feasible to just manually add all atomics of interest. Which
> > > ones would those be exactly?
> >
> > The atomics wrappers are generated by a script. An exhaustive test
> > case would, if generated by hand, be difficult to keep in sync if some
> > variants are removed or renamed (although that's probably a relatively
> > rare occurrence).
> >
> > I would probably just cover some of the most common ones that all
> > architectures (that support KASAN) provide. I think you are already
> > covering some of the most important ones, and I'd just say it's good
> > enough for the test.
> >
> > > As Andrey pointed out on Bugzilla, if we
> > > were to include all of the atomic64_* ones, that would make a lot of
> > > function calls.
> >
> > Just include a few atomic64_ cases, similar to the ones you already
> > include for atomic_. Although beware that the atomic64_t helpers are
> > likely not available on 32-bit architectures, so you need an #ifdef
> > CONFIG_64BIT.
> >
> > Alternatively, there is also atomic_long_t, which (on 64-bit
> > architectures) just wraps atomic64_t helpers, and on 32-bit the
> > atomic_t ones. I'd probably opt for the atomic_long_t variants, just
> > to keep it simpler and get some additional coverage on 32-bit
> > architectures.
>
> If I were to add some atomic_long_* cases, e.g. atomic_long_read() or
> atomic_long_set(), in addition to the test cases I already have, wouldn't that
> mean that on 32-bit architectures we would have the same test case twice,
> because atomic_read() and atomic_long_read() both boil down to raw_atomic_read(),
> and likewise atomic_set() and atomic_long_set() to raw_atomic_set()? Or did I
> misunderstand, and should I only be covering the atomic_long_* functions whose
> atomic_* counterparts aren't in the test cases already?

Sure, on 32-bit this would be a little redundant, but we don't care so
much about which underlying atomic is actually executed as about the
instrumentation being correct.

From a KASAN point of view, the fact that atomic_read() works doesn't
tell me that atomic_long_read() also works.

On top of that, we don't care all that much about 32-bit architectures
anymore (I think KASAN should work on some 32-bit architectures, but I
haven't tested that in a long time). ;-)
  

Patch

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 8281eb42464b..1ab4444fe4a0 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1150,6 +1150,55 @@  static void kasan_bitops_tags(struct kunit *test)
 	kfree(bits);
 }
 
+static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
+{
+	int *i_safe = (int *)safe;
+	int *i_unsafe = (int *)unsafe;
+
+	KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
+	KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
+	KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
+}
+
+static void kasan_atomics(struct kunit *test)
+{
+	int *a1, *a2;
+
+	a1 = kzalloc(48, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
+	a2 = kzalloc(sizeof(*a1), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
+
+	kasan_atomics_helper(test, (void *)a1 + 48, (void *)a2);
+
+	kfree(a1);
+	kfree(a2);
+}
+
 static void kmalloc_double_kzfree(struct kunit *test)
 {
 	char *ptr;
@@ -1553,6 +1602,7 @@  static struct kunit_case kasan_kunit_test_cases[] = {
 	KUNIT_CASE(kasan_strings),
 	KUNIT_CASE(kasan_bitops_generic),
 	KUNIT_CASE(kasan_bitops_tags),
+	KUNIT_CASE(kasan_atomics),
 	KUNIT_CASE(kmalloc_double_kzfree),
 	KUNIT_CASE(rcu_uaf),
 	KUNIT_CASE(workqueue_uaf),