[v7,0/3] mm/slub: extend redzone check for kmalloc objects

Message ID 20221021032405.1825078-1-feng.tang@intel.com
Series mm/slub: extend redzone check for kmalloc objects

Message

Feng Tang Oct. 21, 2022, 3:24 a.m. UTC
kmalloc's API family is critical for mm, and one of its characteristics
is that it rounds up the requested size to a fixed one (mostly a power
of 2). When a user requests memory for '2^n + 1' bytes, 2^(n+1) bytes
may actually be allocated, so there is extra space beyond what was
originally requested.

This patchset extends the redzone sanity check to cover the extra
kmalloc'ed space beyond the requested size, to better detect
illegitimate access to it (depends on SLAB_STORE_USER & SLAB_RED_ZONE).
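As an illustration of the rounding (a sketch only, not part of the
series; it assumes kmalloc_size_roundup() from the 6.1 slab API and
the default kmalloc cache layout):

	#include <linux/printk.h>
	#include <linux/slab.h>

	static void show_kmalloc_roundup(void)
	{
		size_t want = 257;	/* 2^8 + 1 */

		/* prints "want 257, usable 512" with default kmalloc caches */
		pr_info("want %zu, usable %zu\n", want, kmalloc_size_roundup(want));
	}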

The redzone part has been tested with code below:

	for (shift = 3; shift <= 12; shift++) {
		size = 1 << shift;
		buf = kmalloc(size + 4, GFP_KERNEL);
		/* We also have 96 and 192 kmalloc sizes, which are not powers of 2 */
		if (size == 64 || size == 128)
			oob_size = 16;
		else
			oob_size = size - 4;
		memset(buf + size + 4, 0xee, oob_size);
		kfree(buf);
	}
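(For the plain power-of-2 sizes above, a 'size + 4' request lands in
the 2*size cache, so 'size - 4' spare bytes follow the requested area;
e.g. for shift = 5, the 36-byte request comes from kmalloc-64 and the
memset writes 0xee over all 28 bytes at offsets 36..63.)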

(This is against slab tree's 'for-6.2/slub-sysfs' branch, with
HEAD 54736f702526)

Please help to review, thanks!

- Feng
---
Changelogs:

  since v6:
    * the 1/4 kmalloc memory wastage debug patch was merged
      into 6.1-rc1, so drop it
    * refine the kasan patch by extending existing APIs and hiding
      kasan internal data structure info (Andrey Konovalov)
    * only reduce zeroing size when slub debug is enabled to
      avoid security risk (Kees Cook/Andrey Konovalov)
    * collect Acked-by tag from Hyeonggon Yoo

  since v5:
    * Refine code/comments and add more perf info in commit log for
      kzalloc change (Hyeonggon Yoo)
    * change the kasan param name and refine comments about
      kasan+redzone handling (Andrey Konovalov)
    * put free pointer in metadata to make the redzone check cover all
      kmalloc objects (Hyeonggon Yoo)

  since v4:
    * fix a race issue in v3, by moving kmalloc debug init into
      alloc_debug_processing (Hyeonggon Yoo)
    * add 'partial_context' for better parameter passing in the
      get_partial() call chain (Vlastimil Babka)
    * update 'slub.rst' for 'alloc_traces' part (Hyeonggon Yoo)
    * update code comments for 'orig_size'

  since v3:
    * rebase against latest post 6.0-rc1 slab tree's 'for-next' branch
    * fix a bug reported by 0Day, where kmalloc-redzoned data and kasan's
      free metadata overlap in the same kmalloc object data area

  since v2:
    * rebase against slab tree's 'for-next' branch
    * fix pointer handling (Kefeng Wang)
    * move kzalloc zeroing handling change to a separate patch (Vlastimil Babka)
    * make 'orig_size' only depend on KMALLOC & STORE_USER flag
      bits (Vlastimil Babka)

  since v1:
    * limit the 'orig_size' to kmalloc objects only, and save
      it after the tracking data in metadata (Vlastimil Babka)
    * fix an offset calculation problem in print_trailer

  since RFC:
    * fix problems in kmem_cache_alloc_bulk() and records sorting,
      improve the print format (Hyeonggon Yoo)
    * fix a compiling issue found by 0Day bot
    * update the commit log based on info from iova developers


Feng Tang (3):
  mm/slub: only zero requested size of buffer for kzalloc when debug
    enabled
  mm: kasan: Extend kasan_metadata_size() to also cover in-object size
  mm/slub: extend redzone check to extra allocated kmalloc space than
    requested

 include/linux/kasan.h |  5 ++--
 mm/kasan/generic.c    | 19 +++++++++----
 mm/slab.c             |  7 +++--
 mm/slab.h             | 22 +++++++++++++--
 mm/slab_common.c      |  4 +++
 mm/slub.c             | 65 +++++++++++++++++++++++++++++++++++++------
 6 files changed, 100 insertions(+), 22 deletions(-)
  

Comments

Vlastimil Babka Nov. 11, 2022, 8:16 a.m. UTC | #1
On 10/21/22 05:24, Feng Tang wrote:
> kmalloc's API family is critical for mm, and one of its characteristics
> is that it rounds up the requested size to a fixed one (mostly a power
> of 2). When a user requests memory for '2^n + 1' bytes, 2^(n+1) bytes
> may actually be allocated, so there is extra space beyond what was
> originally requested.
> 
> This patchset extends the redzone sanity check to cover the extra
> kmalloc'ed space beyond the requested size, to better detect
> illegitimate access to it (depends on SLAB_STORE_USER & SLAB_RED_ZONE).
> 
> The redzone part has been tested with code below:
> 
> 	for (shift = 3; shift <= 12; shift++) {
> 		size = 1 << shift;
> 		buf = kmalloc(size + 4, GFP_KERNEL);
> 		/* We also have 96 and 192 kmalloc sizes, which are not powers of 2 */
> 		if (size == 64 || size == 128)
> 			oob_size = 16;
> 		else
> 			oob_size = size - 4;
> 		memset(buf + size + 4, 0xee, oob_size);
> 		kfree(buf);
> 	}

Sounds like a new slub_kunit test would be useful? :) doesn't need to be
that exhaustive wrt all sizes, we could just pick one and check that a write
beyond requested kmalloc size is detected?

Thanks!
  
Feng Tang Nov. 11, 2022, 8:29 a.m. UTC | #2
On Fri, Nov 11, 2022 at 04:16:32PM +0800, Vlastimil Babka wrote:
> On 10/21/22 05:24, Feng Tang wrote:
> > kmalloc's API family is critical for mm, and one of its characteristics
> > is that it rounds up the requested size to a fixed one (mostly a power
> > of 2). When a user requests memory for '2^n + 1' bytes, 2^(n+1) bytes
> > may actually be allocated, so there is extra space beyond what was
> > originally requested.
> > 
> > This patchset extends the redzone sanity check to cover the extra
> > kmalloc'ed space beyond the requested size, to better detect
> > illegitimate access to it (depends on SLAB_STORE_USER & SLAB_RED_ZONE).
> > 
> > The redzone part has been tested with code below:
> > 
> > 	for (shift = 3; shift <= 12; shift++) {
> > 		size = 1 << shift;
> > 		buf = kmalloc(size + 4, GFP_KERNEL);
> > 		/* We also have 96 and 192 kmalloc sizes, which are not powers of 2 */
> > 		if (size == 64 || size == 128)
> > 			oob_size = 16;
> > 		else
> > 			oob_size = size - 4;
> > 		memset(buf + size + 4, 0xee, oob_size);
> > 		kfree(buf);
> > 	}
> 
> Sounds like a new slub_kunit test would be useful? :) doesn't need to be
> that exhaustive wrt all sizes, we could just pick one and check that a write
> beyond requested kmalloc size is detected?

Just git-grepped out slub_kunit.c :), will try to add a case to it.
I'll also check if the case will also be caught by other sanitizer
tools like kasan/kfence etc.

Thanks,
Feng


> Thanks!
>
  
Feng Tang Nov. 21, 2022, 6:38 a.m. UTC | #3
On Fri, Nov 11, 2022 at 04:29:43PM +0800, Tang, Feng wrote:
> On Fri, Nov 11, 2022 at 04:16:32PM +0800, Vlastimil Babka wrote:
> > > 	for (shift = 3; shift <= 12; shift++) {
> > > 		size = 1 << shift;
> > > 		buf = kmalloc(size + 4, GFP_KERNEL);
> > > 		/* We also have 96 and 192 kmalloc sizes, which are not powers of 2 */
> > > 		if (size == 64 || size == 128)
> > > 			oob_size = 16;
> > > 		else
> > > 			oob_size = size - 4;
> > > 		memset(buf + size + 4, 0xee, oob_size);
> > > 		kfree(buf);
> > > 	}
> > 
> > Sounds like a new slub_kunit test would be useful? :) doesn't need to be
> > that exhaustive wrt all sizes, we could just pick one and check that a write
> > beyond requested kmalloc size is detected?
> 
> Just git-grepped out slub_kunit.c :), will try to add a case to it.
> I'll also check if the case will also be caught by other sanitizer
> tools like kasan/kfence etc.

Just checked, kasan already has an API to disable its checks
temporarily, and I did see that sometimes kfence can chime in (4 out of
178 runs), so we need to skip kfenced addresses.

Here is the draft patch, thanks!

From 45bf8d0072e532f43063dbda44c6bb3adcc388b6 Mon Sep 17 00:00:00 2001
From: Feng Tang <feng.tang@intel.com>
Date: Mon, 21 Nov 2022 13:17:11 +0800
Subject: [PATCH] mm/slub, kunit: Add a case for kmalloc redzone functionality

The kmalloc redzone check for slub has been merged, and it's better to
add a kunit test case for it, inspired by a real-world issue described
in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):

"
  octeon-hcd will crash the kernel when SLOB is used. This usually happens
  after the 18-byte control transfer when a device descriptor is read.
  The DMA engine is always transfering full 32-bit words and if the
  transfer is shorter, some random garbage appears after the buffer.
  The problem is not visible with SLUB since it rounds up the allocations
  to word boundary, and the extra bytes will go undetected.
"
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 lib/slub_kunit.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 mm/slab.h        | 15 +++++++++++++++
 mm/slub.c        |  4 ++--
 3 files changed, 59 insertions(+), 2 deletions(-)

diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 7a0564d7cb7a..0653eed19bff 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -120,6 +120,47 @@ static void test_clobber_redzone_free(struct kunit *test)
 	kmem_cache_destroy(s);
 }
 
+
+/*
+ * This case simulates a real-world scenario: a device driver
+ * requests an 18-byte buffer, but the device HW must operate at
+ * 32-bit granularity, so it may actually read or write 20 bytes
+ * to the buffer, possibly polluting 2 extra bytes after the
+ * requested space.
+ */
+static void test_kmalloc_redzone_access(struct kunit *test)
+{
+	u8 *p;
+
+	if (!is_slub_debug_flags_enabled(SLAB_STORE_USER | SLAB_RED_ZONE))
+		kunit_skip(test, "Test required SLAB_STORE_USER & SLAB_RED_ZONE flags on");
+
+	p = kmalloc(18, GFP_KERNEL);
+
+#ifdef CONFIG_KFENCE
+	{
+		int max_retry = 10;
+
+		while (is_kfence_address(p) && max_retry--) {
+			kfree(p);
+			p = kmalloc(18, GFP_KERNEL);
+		}
+
+		if (is_kfence_address(p))
+			kunit_skip(test, "Failed to get non-kfenced memory");
+	}
+#endif
+
+	kasan_disable_current();
+
+	p[18] = 0xab;
+	p[19] = 0xab;
+	kfree(p);
+
+	KUNIT_EXPECT_EQ(test, 3, slab_errors);
+	kasan_enable_current();
+}
+
 static int test_init(struct kunit *test)
 {
 	slab_errors = 0;
@@ -139,6 +180,7 @@ static struct kunit_case test_cases[] = {
 #endif
 
 	KUNIT_CASE(test_clobber_redzone_free),
+	KUNIT_CASE(test_kmalloc_redzone_access),
 	{}
 };
 
diff --git a/mm/slab.h b/mm/slab.h
index e3b3231af742..72f7a85e01ab 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -413,6 +413,17 @@ static inline bool __slub_debug_enabled(void)
 {
 	return static_branch_unlikely(&slub_debug_enabled);
 }
+
+extern slab_flags_t slub_debug;
+
+/*
+ * This should only be used post-boot, after 'slub_debug' has
+ * been initialized.
+ */
+static inline bool is_slub_debug_flags_enabled(slab_flags_t flags)
+{
+	return (slub_debug & flags) == flags;
+}
 #else
 static inline void print_tracking(struct kmem_cache *s, void *object)
 {
@@ -421,6 +432,10 @@ static inline bool __slub_debug_enabled(void)
 {
 	return false;
 }
+static inline bool is_slub_debug_flags_enabled(slab_flags_t flags)
+{
+	return false;
+}
 #endif
 
 /*
diff --git a/mm/slub.c b/mm/slub.c
index a24b71041b26..6ef72b8f6291 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -638,9 +638,9 @@ static inline void *restore_red_left(struct kmem_cache *s, void *p)
  * Debug settings:
  */
 #if defined(CONFIG_SLUB_DEBUG_ON)
-static slab_flags_t slub_debug = DEBUG_DEFAULT_FLAGS;
+slab_flags_t slub_debug = DEBUG_DEFAULT_FLAGS;
 #else
-static slab_flags_t slub_debug;
+slab_flags_t slub_debug;
 #endif
 
 static char *slub_debug_string;
  
Vlastimil Babka Nov. 23, 2022, 9:48 a.m. UTC | #4
On 11/21/22 07:38, Feng Tang wrote:
> On Fri, Nov 11, 2022 at 04:29:43PM +0800, Tang, Feng wrote:
>> On Fri, Nov 11, 2022 at 04:16:32PM +0800, Vlastimil Babka wrote:
>> > > 	for (shift = 3; shift <= 12; shift++) {
>> > > 		size = 1 << shift;
>> > > 		buf = kmalloc(size + 4, GFP_KERNEL);
>> > > 		/* We also have 96 and 192 kmalloc sizes, which are not powers of 2 */
>> > > 		if (size == 64 || size == 128)
>> > > 			oob_size = 16;
>> > > 		else
>> > > 			oob_size = size - 4;
>> > > 		memset(buf + size + 4, 0xee, oob_size);
>> > > 		kfree(buf);
>> > > 	}
>> > 
>> > Sounds like a new slub_kunit test would be useful? :) doesn't need to be
>> > that exhaustive wrt all sizes, we could just pick one and check that a write
>> > beyond requested kmalloc size is detected?
>> 
>> Just git-grepped out slub_kunit.c :), will try to add a case to it.
>> I'll also check if the case will also be caught by other sanitizer
>> tools like kasan/kfence etc.
> 
> Just checked, kasan already has an API to disable its checks
> temporarily, and I did see that sometimes kfence can chime in (4 out of
> 178 runs), so we need to skip kfenced addresses.
> 
> Here is the draft patch, thanks!
> 
> From 45bf8d0072e532f43063dbda44c6bb3adcc388b6 Mon Sep 17 00:00:00 2001
> From: Feng Tang <feng.tang@intel.com>
> Date: Mon, 21 Nov 2022 13:17:11 +0800
> Subject: [PATCH] mm/slub, kunit: Add a case for kmalloc redzone functionality
> 
> The kmalloc redzone check for slub has been merged, and it's better to
> add a kunit test case for it, inspired by a real-world issue described
> in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
> 
> "
>   octeon-hcd will crash the kernel when SLOB is used. This usually happens
>   after the 18-byte control transfer when a device descriptor is read.
>   The DMA engine is always transferring full 32-bit words and if the
>   transfer is shorter, some random garbage appears after the buffer.
>   The problem is not visible with SLUB since it rounds up the allocations
>   to word boundary, and the extra bytes will go undetected.
> "
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
>  lib/slub_kunit.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  mm/slab.h        | 15 +++++++++++++++
>  mm/slub.c        |  4 ++--
>  3 files changed, 59 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 7a0564d7cb7a..0653eed19bff 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -120,6 +120,47 @@ static void test_clobber_redzone_free(struct kunit *test)
>  	kmem_cache_destroy(s);
>  }
>  
> +
> +/*
> + * This case simulates a real-world scenario: a device driver
> + * requests an 18-byte buffer, but the device HW must operate at
> + * 32-bit granularity, so it may actually read or write 20 bytes
> + * to the buffer, possibly polluting 2 extra bytes after the
> + * requested space.
> + */
> +static void test_kmalloc_redzone_access(struct kunit *test)
> +{
> +	u8 *p;
> +
> +	if (!is_slub_debug_flags_enabled(SLAB_STORE_USER | SLAB_RED_ZONE))
> +		kunit_skip(test, "Test required SLAB_STORE_USER & SLAB_RED_ZONE flags on");

Hrmm, this is not great. I didn't realize that we're testing kmalloc()
specific code, so we can't simply create test-specific caches as in the
other kunit tests.
What if we did create a fake kmalloc cache with the necessary flags and used
it with kmalloc_trace() instead of kmalloc()? We would be bypassing the
kmalloc() inline layer so theoretically orig_size handling bugs could be
introduced there that the test wouldn't catch, but I think that's rather
unlikely. Importantly we would still be stressing the orig_size saving and
the adjusted redzone check using this info.
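
A rough sketch of that direction (not from the thread; it assumes
kmalloc_trace() and the SLAB_KMALLOC flag from the 6.1 slab code, and
reuses the slab_errors/validate_slab_cache() pattern of the existing
slub_kunit tests; the expected error count is an assumption):

	static void test_kmalloc_redzone_access(struct kunit *test)
	{
		/* fake kmalloc cache: SLAB_KMALLOC plus the two debug flags */
		struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_kmalloc",
					32, 0,
					SLAB_KMALLOC | SLAB_STORE_USER | SLAB_RED_ZONE,
					NULL);
		/* bypasses the kmalloc() inline layer but still records orig_size */
		u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);

		kasan_disable_current();

		/* hide the allocation size so -Warray-bounds stays quiet */
		OPTIMIZER_HIDE_VAR(p);
		/* write past the requested 18 bytes, inside the rounded-up object */
		p[18] = 0xab;
		p[19] = 0xab;

		kmem_cache_free(s, p);
		validate_slab_cache(s);
		/* assumed: one report from the free path, one from validation */
		KUNIT_EXPECT_EQ(test, 2, slab_errors);

		kasan_enable_current();
		kmem_cache_destroy(s);
	}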

> +	p = kmalloc(18, GFP_KERNEL);
> +
> +#ifdef CONFIG_KFENCE
> +	{
> +		int max_retry = 10;
> +
> +		while (is_kfence_address(p) && max_retry--) {
> +			kfree(p);
> +			p = kmalloc(18, GFP_KERNEL);
> +		}
> +
> +		if (is_kfence_address(p))
> +			kunit_skip(test, "Failed to get non-kfenced memory");
> +	}
> +#endif

With the test-specific cache we could also pass SLAB_SKIP_KFENCE there to
handle this. BTW, don't all slub kunit tests need to do that, in fact?

Thanks,
Vlastimil

> +
> +	kasan_disable_current();
> +
> +	p[18] = 0xab;
> +	p[19] = 0xab;
> +	kfree(p);
> +
> +	KUNIT_EXPECT_EQ(test, 3, slab_errors);
> +	kasan_enable_current();
> +}
> +
>  static int test_init(struct kunit *test)
>  {
>  	slab_errors = 0;
> @@ -139,6 +180,7 @@ static struct kunit_case test_cases[] = {
>  #endif
>  
>  	KUNIT_CASE(test_clobber_redzone_free),
> +	KUNIT_CASE(test_kmalloc_redzone_access),
>  	{}
>  };
>  
> diff --git a/mm/slab.h b/mm/slab.h
> index e3b3231af742..72f7a85e01ab 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -413,6 +413,17 @@ static inline bool __slub_debug_enabled(void)
>  {
>  	return static_branch_unlikely(&slub_debug_enabled);
>  }
> +
> +extern slab_flags_t slub_debug;
> +
> +/*
> + * This should only be used post-boot, after 'slub_debug' has
> + * been initialized.
> + */
> +static inline bool is_slub_debug_flags_enabled(slab_flags_t flags)
> +{
> +	return (slub_debug & flags) == flags;
> +}
>  #else
>  static inline void print_tracking(struct kmem_cache *s, void *object)
>  {
> @@ -421,6 +432,10 @@ static inline bool __slub_debug_enabled(void)
>  {
>  	return false;
>  }
> +static inline bool is_slub_debug_flags_enabled(slab_flags_t flags)
> +{
> +	return false;
> +}
>  #endif
>  
>  /*
> diff --git a/mm/slub.c b/mm/slub.c
> index a24b71041b26..6ef72b8f6291 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -638,9 +638,9 @@ static inline void *restore_red_left(struct kmem_cache *s, void *p)
>   * Debug settings:
>   */
>  #if defined(CONFIG_SLUB_DEBUG_ON)
> -static slab_flags_t slub_debug = DEBUG_DEFAULT_FLAGS;
> +slab_flags_t slub_debug = DEBUG_DEFAULT_FLAGS;
>  #else
> -static slab_flags_t slub_debug;
> +slab_flags_t slub_debug;
>  #endif
>  
>  static char *slub_debug_string;
  
Feng Tang Nov. 28, 2022, 5:43 a.m. UTC | #5
On Wed, Nov 23, 2022 at 10:48:50AM +0100, Vlastimil Babka wrote:
> On 11/21/22 07:38, Feng Tang wrote:
> > On Fri, Nov 11, 2022 at 04:29:43PM +0800, Tang, Feng wrote:
> >> On Fri, Nov 11, 2022 at 04:16:32PM +0800, Vlastimil Babka wrote:
> >> > > 	for (shift = 3; shift <= 12; shift++) {
> >> > > 		size = 1 << shift;
> >> > > 		buf = kmalloc(size + 4, GFP_KERNEL);
> >> > > 		/* We also have 96 and 192 kmalloc sizes, which are not powers of 2 */
> >> > > 		if (size == 64 || size == 128)
> >> > > 			oob_size = 16;
> >> > > 		else
> >> > > 			oob_size = size - 4;
> >> > > 		memset(buf + size + 4, 0xee, oob_size);
> >> > > 		kfree(buf);
> >> > > 	}
> >> > 
> >> > Sounds like a new slub_kunit test would be useful? :) doesn't need to be
> >> > that exhaustive wrt all sizes, we could just pick one and check that a write
> >> > beyond requested kmalloc size is detected?
> >> 
> >> Just git-grepped out slub_kunit.c :), will try to add a case to it.
> >> I'll also check if the case will also be caught by other sanitizer
> >> tools like kasan/kfence etc.
> > 
> > Just checked, kasan already has an API to disable its checks
> > temporarily, and I did see that sometimes kfence can chime in (4 out of
> > 178 runs), so we need to skip kfenced addresses.
> > 
> > Here is the draft patch, thanks!
> > 
> > From 45bf8d0072e532f43063dbda44c6bb3adcc388b6 Mon Sep 17 00:00:00 2001
> > From: Feng Tang <feng.tang@intel.com>
> > Date: Mon, 21 Nov 2022 13:17:11 +0800
> > Subject: [PATCH] mm/slub, kunit: Add a case for kmalloc redzone functionality
> > 
> > The kmalloc redzone check for slub has been merged, and it's better to
> > add a kunit test case for it, inspired by a real-world issue described
> > in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
> > 
> > "
> >   octeon-hcd will crash the kernel when SLOB is used. This usually happens
> >   after the 18-byte control transfer when a device descriptor is read.
> >   The DMA engine is always transferring full 32-bit words and if the
> >   transfer is shorter, some random garbage appears after the buffer.
> >   The problem is not visible with SLUB since it rounds up the allocations
> >   to word boundary, and the extra bytes will go undetected.
> > "
> > Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> > Signed-off-by: Feng Tang <feng.tang@intel.com>
> > ---
> >  lib/slub_kunit.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> >  mm/slab.h        | 15 +++++++++++++++
> >  mm/slub.c        |  4 ++--
> >  3 files changed, 59 insertions(+), 2 deletions(-)
> > 
> > diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> > index 7a0564d7cb7a..0653eed19bff 100644
> > --- a/lib/slub_kunit.c
> > +++ b/lib/slub_kunit.c
> > @@ -120,6 +120,47 @@ static void test_clobber_redzone_free(struct kunit *test)
> >  	kmem_cache_destroy(s);
> >  }
> >  
> > +
> > +/*
> > + * This case simulates a real-world scenario: a device driver
> > + * requests an 18-byte buffer, but the device HW must operate at
> > + * 32-bit granularity, so it may actually read or write 20 bytes
> > + * to the buffer, possibly polluting 2 extra bytes after the
> > + * requested space.
> > + */
> > +static void test_kmalloc_redzone_access(struct kunit *test)
> > +{
> > +	u8 *p;
> > +
> > +	if (!is_slub_debug_flags_enabled(SLAB_STORE_USER | SLAB_RED_ZONE))
> > +		kunit_skip(test, "Test required SLAB_STORE_USER & SLAB_RED_ZONE flags on");
> 
> Hrmm, this is not great. I didn't realize that we're testing kmalloc()
> specific code, so we can't simply create test-specific caches as in the
> other kunit tests.
> What if we did create a fake kmalloc cache with the necessary flags and used
> it with kmalloc_trace() instead of kmalloc()? We would be bypassing the
> kmalloc() inline layer so theoretically orig_size handling bugs could be
> introduced there that the test wouldn't catch, but I think that's rather
> unlikely. Importantly we would still be stressing the orig_size saving and
> the adjusted redzone check using this info.

Nice trick! Will go this way. 

> > +	p = kmalloc(18, GFP_KERNEL);
> > +
> > +#ifdef CONFIG_KFENCE
> > +	{
> > +		int max_retry = 10;
> > +
> > +		while (is_kfence_address(p) && max_retry--) {
> > +			kfree(p);
> > +			p = kmalloc(18, GFP_KERNEL);
> > +		}
> > +
> > +		if (is_kfence_address(p))
> > +			kunit_skip(test, "Failed to get non-kfenced memory");
> > +	}
> > +#endif
> 
> With the test-specific cache we could also pass SLAB_SKIP_KFENCE there to
> handle this. 

Yep, the handling will be much simpler, thanks

>
> BTW, don't all slub kunit tests need to do that, in fact?

Yes, I think they all need it.

With the default kfence settings, a kfence address wasn't hit in
250 boot tests. After changing CONFIG_KFENCE_NUM_OBJECTS from 255
to 16383 and CONFIG_KFENCE_SAMPLE_INTERVAL from 100 to 5, a kfence
allocation was hit once in about 300 boot tests.

Will add the flag bit to all kmem_cache creations, e.g. with a helper
like the sketch below.
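
A minimal sketch of such a wrapper (hypothetical, for lib/slub_kunit.c,
using the SLAB_SKIP_KFENCE flag mentioned above):

	static struct kmem_cache *test_kmem_cache_create(const char *name,
				unsigned int size, slab_flags_t flags)
	{
		/*
		 * SLAB_SKIP_KFENCE keeps kfence from backing test objects,
		 * so the slub redzone layout is always in effect.
		 */
		return kmem_cache_create(name, size, 0,
					 flags | SLAB_SKIP_KFENCE, NULL);
	}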

Thanks,
Feng

> Thanks,
> Vlastimil