Message ID | 20221129063358.3012362-2-feng.tang@intel.com |
---|---|
State | New |
Headers |
From: Feng Tang <feng.tang@intel.com>
To: Vlastimil Babka <vbabka@suse.cz>, Andrew Morton <akpm@linux-foundation.org>, Oliver Glitta <glittao@gmail.com>, Christoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>, David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>, Roman Gushchin <roman.gushchin@linux.dev>, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Marco Elver <elver@google.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang <feng.tang@intel.com>
Subject: [PATCH v2 2/2] mm/slub, kunit: Add a test case for kmalloc redzone check
Date: Tue, 29 Nov 2022 14:33:58 +0800
Message-Id: <20221129063358.3012362-2-feng.tang@intel.com>
In-Reply-To: <20221129063358.3012362-1-feng.tang@intel.com>
References: <20221129063358.3012362-1-feng.tang@intel.com>
Series |
[v2,1/2] mm/slub, kunit: add SLAB_SKIP_KFENCE flag for cache creation
|
|
Commit Message
Feng Tang
Nov. 29, 2022, 6:33 a.m. UTC
The kmalloc redzone check for slub has been merged, so add a kunit test
case for it. The test is inspired by a real-world bug described in
commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
"
octeon-hcd will crash the kernel when SLOB is used. This usually happens
after the 18-byte control transfer when a device descriptor is read.
The DMA engine is always transferring full 32-bit words and if the
transfer is shorter, some random garbage appears after the buffer.
The problem is not visible with SLUB since it rounds up the allocations
to word boundary, and the extra bytes will go undetected.
"
To avoid interfering with the normal functioning of the kmalloc caches, a
kmem_cache mimicking a kmalloc cache is created with all the flags needed
to enable the kmalloc redzone, and kmalloc_trace() is used to actually
exercise the orig_size and redzone setup.
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
Changelog:
since v1:
* create a new cache mimicking a kmalloc cache, to reduce the dependency
on the global slub_debug setting (Vlastimil Babka)
lib/slub_kunit.c | 23 +++++++++++++++++++++++
mm/slab.h | 3 ++-
2 files changed, 25 insertions(+), 1 deletion(-)
Comments
On Tue, 29 Nov 2022 at 07:37, Feng Tang <feng.tang@intel.com> wrote:
>
> kmalloc redzone check for slub has been merged, and it's better to add
> a kunit case for it, which is inspired by a real-world case as described
> in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
[...]
> diff --git a/mm/slab.h b/mm/slab.h
> index c71590f3a22b..b6cd98b16ba7 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -327,7 +327,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  /* Legal flag mask for kmem_cache_create(), for various configurations */
>  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
>  			SLAB_CACHE_DMA32 | SLAB_PANIC | \
> -			SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
> +			SLAB_KMALLOC | SLAB_SKIP_KFENCE | \
> +			SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS)

Shouldn't this hunk be in the previous patch, otherwise that patch alone
will fail?

This will also make SLAB_SKIP_KFENCE generally available to be used for
cache creation. This is a significant change, and before it wasn't
possible. Perhaps add a brief note to the commit message (or have a
separate patch). We were trying to avoid making this possible, as it
might be abused - however, given it's required for tests like these, I
suppose there's no way around it.

Thanks,
-- Marco
On 11/29/22 10:31, Marco Elver wrote:
> On Tue, 29 Nov 2022 at 07:37, Feng Tang <feng.tang@intel.com> wrote:
[...]
> Shouldn't this hunk be in the previous patch, otherwise that patch
> alone will fail?

Good point.

> This will also make SLAB_SKIP_KFENCE generally available to be used
> for cache creation. This is a significant change, and before it wasn't
> possible. Perhaps add a brief note to the commit message (or have a
> separate patch). We were trying to avoid making this possible, as it
> might be abused - however, given it's required for tests like these, I
> suppose there's no way around it.

For SLAB_SKIP_KFENCE, we could also add the flag after creation to avoid
this trouble? After all there is a sysfs file to control it at runtime
anyway (via skip_kfence_store()).
In that case patch 1 would have to wrap kmem_cache_create() and the flag
addition with a new function to avoid repeating. That function could also
be adding SLAB_NO_USER_FLAGS to kmem_cache_create(), instead of the
#define DEFAULT_FLAGS.

For SLAB_KMALLOC there's probably no such way unless we abuse the
internal APIs even more and call e.g. create_boot_cache() instead of
kmem_cache_create(). But that one is __init, so probably not. If we do
instead allow the flag, I wouldn't add it to SLAB_CORE_FLAGS but rather
SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED.
On Tue, 29 Nov 2022 at 12:01, Vlastimil Babka <vbabka@suse.cz> wrote:
[...]
> For SLAB_SKIP_KFENCE, we could also add the flag after creation to avoid
> this trouble? After all there is a sysfs file to control it at runtime
> anyway (via skip_kfence_store()).
> In that case patch 1 would have to wrap kmem_cache_create() and the flag
> addition with a new function to avoid repeating. That function could also
> be adding SLAB_NO_USER_FLAGS to kmem_cache_create(), instead of the
> #define DEFAULT_FLAGS.

I wouldn't overcomplicate it, all we need is a way to say "this flag
should not be used directly" - and only have it available via an
indirect step. Availability via sysfs is one such step.

And for tests, there are 2 options:

1. we could provide a function "kmem_cache_set_test_flags(cache,
gfp_flags)" and define SLAB_TEST_FLAGS (which would include
SLAB_SKIP_KFENCE). This still allows to set it generally, but should
make abuse less likely due to the "test" in the name of that function.

2. just set it directly, s->flags |= SLAB_SKIP_KFENCE.

If you're fine with #2, that seems simplest and would be my preference.

> For SLAB_KMALLOC there's probably no such way unless we abuse the
> internal APIs even more and call e.g. create_boot_cache() instead of
> kmem_cache_create(). But that one is __init, so probably not. If we do
> instead allow the flag, I wouldn't add it to SLAB_CORE_FLAGS but rather
> SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED.

I'd probably go with the simplest solution here.
On 11/29/22 12:48, Marco Elver wrote:
> On Tue, 29 Nov 2022 at 12:01, Vlastimil Babka <vbabka@suse.cz> wrote:
[...]
> I wouldn't overcomplicate it, all we need is a way to say "this flag
> should not be used directly" - and only have it available via an
> indirect step. Availability via sysfs is one such step.
>
> And for tests, there are 2 options:
>
> 1. we could provide a function "kmem_cache_set_test_flags(cache,
> gfp_flags)" and define SLAB_TEST_FLAGS (which would include
> SLAB_SKIP_KFENCE). This still allows to set it generally, but should
> make abuse less likely due to the "test" in the name of that function.
>
> 2. just set it directly, s->flags |= SLAB_SKIP_KFENCE.
>
> If you're fine with #2, that seems simplest and would be my preference.

Yeah, that's what I meant. But slub_kunit.c could still have own internal
cache creation function so the "|SLAB_NO_USER_FLAGS" and "s->flags |=
SLAB_SKIP_KFENCE" is not repeated X times.

>> For SLAB_KMALLOC there's probably no such way unless we abuse the
>> internal APIs even more and call e.g. create_boot_cache() instead of
>> kmem_cache_create(). But that one is __init, so probably not. If we do
>> instead allow the flag, I wouldn't add it to SLAB_CORE_FLAGS but rather
>> SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED.
>
> I'd probably go with the simplest solution here.

Agreed.
On Tue, Nov 29, 2022 at 08:02:51PM +0800, Vlastimil Babka wrote:
[...]
> Yeah, that's what I meant. But slub_kunit.c could still have own internal
> cache creation function so the "|SLAB_NO_USER_FLAGS" and "s->flags |=
> SLAB_SKIP_KFENCE" is not repeated X times.

I just quickly tried adding a new wrapper, like

struct kmem_cache *debug_kmem_cache_create(const char *name, unsigned int size,
			unsigned int align, slab_flags_t flags,
			void (*ctor)(void *), slab_flags_t debug_flags);

and found that, IIUC, both SLAB_KMALLOC and SLAB_NO_USER are creation
time flag, while SLAB_SKIP_KFENCE is an allocation runtime flag which
could be set after creation.

So how about use the initial suggestion from Vlastimil to set the
SKIP_KFENCE flag through an internal wrapper in slub_kunit.c?

/* Only for debug and test use, to skip kfence allocation */
static inline void kmem_cache_skip_kfence(struct kmem_cache *s)
{
	s->flags |= SLAB_SKIP_KFENCE;
}

Thanks,
Feng
On Tue, 29 Nov 2022 at 13:53, Feng Tang <feng.tang@intel.com> wrote:
[...]
> So how about use the initial suggestion from Vlastimil to set the
> SKIP_KFENCE flag through an internal wrapper in slub_kunit.c?
>
> /* Only for debug and test use, to skip kfence allocation */
> static inline void kmem_cache_skip_kfence(struct kmem_cache *s)
> {
> 	s->flags |= SLAB_SKIP_KFENCE;
> }

Yes, that's fine - as long as it's local to slub_kunit.c, this seems
very reasonable.

Thanks,
-- Marco
On 11/29/22 13:56, Marco Elver wrote:
> On Tue, 29 Nov 2022 at 13:53, Feng Tang <feng.tang@intel.com> wrote:
[...]
>> So how about use the initial suggestion from Vlastimil to set the
>> SKIP_KFENCE flag through an internal wrapper in slub_kunit.c?
>>
>> /* Only for debug and test use, to skip kfence allocation */
>> static inline void kmem_cache_skip_kfence(struct kmem_cache *s)
>> {
>> 	s->flags |= SLAB_SKIP_KFENCE;
>> }
>
> Yes, that's fine - as long as it's local to slub_kunit.c, this seems
> very reasonable.

Wrapping just |= SLAB_SKIP_KFENCE won't help that much as you'd need to
add a call to kmem_cache_skip_kfence() after each kmem_cache_create() in
slub_kunit.c. That's why I propose a wrapper, *also internally defined
in slub_kunit.c*, that calls kmem_cache_create() with flags
|SLAB_NO_USER_FLAGS, then does s->flags |= SLAB_SKIP_KFENCE; then
returns s. At this point said wrapper wouldn't even need align and ctor
parameters and could pass 0 and NULL to kmem_cache_create() by itself,
as no test uses different values.
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index a303adf8f11c..dbdd656624d0 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -122,6 +122,28 @@ static void test_clobber_redzone_free(struct kunit *test)
 	kmem_cache_destroy(s);
 }
 
+static void test_kmalloc_redzone_access(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_kmalloc", 32, 0,
+				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE|DEFAULT_FLAGS,
+				NULL);
+	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
+
+	kasan_disable_current();
+
+	/* Suppress the -Warray-bounds warning */
+	OPTIMIZER_HIDE_VAR(p);
+	p[18] = 0xab;
+	p[19] = 0xab;
+
+	kmem_cache_free(s, p);
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kasan_enable_current();
+	kmem_cache_destroy(s);
+}
+
 static int test_init(struct kunit *test)
 {
 	slab_errors = 0;
@@ -141,6 +163,7 @@ static struct kunit_case test_cases[] = {
 #endif
 
 	KUNIT_CASE(test_clobber_redzone_free),
+	KUNIT_CASE(test_kmalloc_redzone_access),
 	{}
 };
 
diff --git a/mm/slab.h b/mm/slab.h
index c71590f3a22b..b6cd98b16ba7 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -327,7 +327,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 /* Legal flag mask for kmem_cache_create(), for various configurations */
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
 			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
-			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
+			 SLAB_KMALLOC | SLAB_SKIP_KFENCE | \
+			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS)
 
 #if defined(CONFIG_DEBUG_SLAB)
 #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)