From patchwork Wed Nov 30 08:54:50 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 27694
From: Feng Tang
To: Vlastimil Babka, Marco Elver, Andrew Morton, Oliver Glitta,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang
Subject: [PATCH v3 1/2] mm/slub, kunit: add SLAB_SKIP_KFENCE flag for cache creation
Date: Wed, 30 Nov 2022 16:54:50 +0800
Message-Id: <20221130085451.3390992-1-feng.tang@intel.com>

When kfence is enabled, a buffer allocated in a test case may come from
the kfence pool. The invalid access is then caught and reported by
kfence first, which makes the test case fail. With the default kfence
settings this is hard to trigger, but after changing
CONFIG_KFENCE_NUM_OBJECTS from 255 to 16383 and
CONFIG_KFENCE_SAMPLE_INTERVAL from 100 to 5, kfence-backed allocations
were hit 7 times in different slub_kunit cases over 900 boot tests.

To avoid this, the first attempt was to check the returned pointer with
is_kfence_address() and repeat the allocation until a non-kfence
address was found. Vlastimil Babka suggested that the SLAB_SKIP_KFENCE
flag can achieve the same, and that adding a wrapper function also
simplifies cache creation.
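
For reference, that first (dropped) approach looked roughly like the
sketch below; it is illustrative only, not code from this patch, and
the helper name is made up:

	/* needs <linux/kfence.h> and <linux/slab.h> */
	static void *test_alloc_non_kfence(struct kmem_cache *s, gfp_t gfp)
	{
		void *p;

		/*
		 * Keep allocating until the object is not backed by the
		 * kfence pool; free any kfence-backed object so it is
		 * not leaked.
		 */
		do {
			p = kmem_cache_alloc(s, gfp);
			if (!p || !is_kfence_address(p))
				return p;
			kmem_cache_free(s, p);
		} while (1);
	}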

Signed-off-by: Feng Tang
Reviewed-by: Marco Elver
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
Changelog:

  since v2:
  * Don't make SKIP_KFENCE an allowed flag for cache creation, and fix
    the resulting cache-creation failure (Marco Elver)
  * Add a wrapper cache-creation function to simplify the code,
    including the SKIP_KFENCE handling (Vlastimil Babka)

 lib/slub_kunit.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 7a0564d7cb7a..5b0c8e7eb6dc 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -9,10 +9,25 @@
 static struct kunit_resource resource;
 static int slab_errors;
 
+/*
+ * Wrapper function for kmem_cache_create(), which reduces 2 parameters:
+ * 'align' and 'ctor', and sets SLAB_SKIP_KFENCE flag to avoid getting an
+ * object from kfence pool, where the operation could be caught by both
+ * our test and kfence sanity check.
+ */
+static struct kmem_cache *test_kmem_cache_create(const char *name,
+				unsigned int size, slab_flags_t flags)
+{
+	struct kmem_cache *s = kmem_cache_create(name, size, 0,
+					(flags | SLAB_NO_USER_FLAGS), NULL);
+	s->flags |= SLAB_SKIP_KFENCE;
+	return s;
+}
+
 static void test_clobber_zone(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
-				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_alloc", 64,
+							SLAB_RED_ZONE);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kasan_disable_current();
@@ -29,8 +44,8 @@ static void test_clobber_zone(struct kunit *test)
 #ifndef CONFIG_KASAN
 static void test_next_pointer(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
-				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_next_ptr_free",
+							64, SLAB_POISON);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 	unsigned long tmp;
 	unsigned long *ptr_addr;
@@ -74,8 +89,8 @@ static void test_next_pointer(struct kunit *test)
 
 static void test_first_word(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
-				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_1th_word_free",
+							64, SLAB_POISON);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kmem_cache_free(s, p);
@@ -89,8 +104,8 @@ static void test_first_word(struct kunit *test)
 
 static void test_clobber_50th_byte(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
-				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_50th_word_free",
+							64, SLAB_POISON);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kmem_cache_free(s, p);
@@ -105,8 +120,8 @@ static void test_clobber_50th_byte(struct kunit *test)
 
 static void test_clobber_redzone_free(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
-				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_free", 64,
+							SLAB_RED_ZONE);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kasan_disable_current();

From patchwork Wed Nov 30 08:54:51 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 27695
From: Feng Tang
To: Vlastimil Babka, Marco Elver, Andrew Morton, Oliver Glitta,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang
Subject: [PATCH v3 2/2] mm/slub, kunit: Add a test case for kmalloc redzone check
Date: Wed, 30 Nov 2022 16:54:51 +0800
Message-Id: <20221130085451.3390992-2-feng.tang@intel.com>
In-Reply-To: <20221130085451.3390992-1-feng.tang@intel.com>
References: <20221130085451.3390992-1-feng.tang@intel.com>

The kmalloc redzone check for slub has been merged, so add a kunit
test case for it. The case is inspired by a real-world bug described
in commit 120ee599b5bf ("staging: octeon-usb: prevent memory
corruption"):

"
octeon-hcd will crash the kernel when SLOB is used. This usually
happens after the 18-byte control transfer when a device descriptor
is read. The DMA engine is always transferring full 32-bit words and
if the transfer is shorter, some random garbage appears after the
buffer. The problem is not visible with SLUB since it rounds up the
allocations to word boundary, and the extra bytes will go undetected.
"

To avoid interfering with the normal functioning of the kmalloc
caches, the test creates a kmem_cache that mimics a kmalloc cache
(with similar flags), and uses kmalloc_trace() to exercise the
orig_size recording and redzone setup directly.
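
For illustration, the failure mode from that commit looks roughly like
the snippet below (not part of this patch); with redzone debugging
enabled for the kmalloc caches, the two spilled bytes land in the
redzone and are reported when the buffer is freed:

	u8 *buf = kmalloc(18, GFP_KERNEL);	/* backed by kmalloc-32 */

	if (buf) {
		/* a DMA engine writing whole 32-bit words stores 20 bytes */
		memset(buf, 0xab, 20);	/* bytes 18 and 19 hit the redzone */
		kfree(buf);		/* redzone corruption reported here */
	}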

Suggested-by: Vlastimil Babka
Signed-off-by: Feng Tang
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
Changelog:

  since v2:
  * only add SLAB_KMALLOC to SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED,
    and use the new cache-creation wrapper (Vlastimil Babka)

  since v1:
  * create a new cache that mimics a kmalloc cache, to reduce the
    dependency on the global slub_debug setting (Vlastimil Babka)

 lib/slub_kunit.c | 22 ++++++++++++++++++++++
 mm/slab.h        |  4 +++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 5b0c8e7eb6dc..ff24879e3afe 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -135,6 +135,27 @@ static void test_clobber_redzone_free(struct kunit *test)
 	kmem_cache_destroy(s);
 }
 
+static void test_kmalloc_redzone_access(struct kunit *test)
+{
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_kmalloc", 32,
+				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE);
+	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
+
+	kasan_disable_current();
+
+	/* Suppress the -Warray-bounds warning */
+	OPTIMIZER_HIDE_VAR(p);
+	p[18] = 0xab;
+	p[19] = 0xab;
+
+	kmem_cache_free(s, p);
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kasan_enable_current();
+	kmem_cache_destroy(s);
+}
+
 static int test_init(struct kunit *test)
 {
 	slab_errors = 0;
@@ -154,6 +175,7 @@ static struct kunit_case test_cases[] = {
 #endif
 	KUNIT_CASE(test_clobber_redzone_free),
+	KUNIT_CASE(test_kmalloc_redzone_access),
 	{}
 };
 
diff --git a/mm/slab.h b/mm/slab.h
index c71590f3a22b..7cc432969945 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -344,7 +344,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			  SLAB_ACCOUNT)
 #elif defined(CONFIG_SLUB)
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
-			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_USER_FLAGS)
+			  SLAB_TEMPORARY | SLAB_ACCOUNT | \
+			  SLAB_NO_USER_FLAGS | SLAB_KMALLOC)
 #else
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
 #endif
@@ -364,6 +365,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			      SLAB_RECLAIM_ACCOUNT | \
 			      SLAB_TEMPORARY | \
 			      SLAB_ACCOUNT | \
+			      SLAB_KMALLOC | \
 			      SLAB_NO_USER_FLAGS)
 
 bool __kmem_cache_empty(struct kmem_cache *);
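
As context for the mm/slab.h change (not part of the diff, paraphrased
from mm/slab_common.c): kmem_cache_create() rejects flags outside
SLAB_FLAGS_PERMITTED and masks off flags outside the cache-creation
mask, so without adding SLAB_KMALLOC to both lists the test cache could
not carry the flag and would not get the kmalloc-style orig_size and
redzone handling. Roughly:

	/* paraphrased sketch of the checks in kmem_cache_create_usercopy() */
	if (flags & ~SLAB_FLAGS_PERMITTED)
		return NULL;			/* unknown flags are rejected */

	flags &= CACHE_CREATE_MASK;		/* unlisted flags are dropped */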