Message ID: 20221121171202.22080-7-vbabka@suse.cz
State: New
Headers:
From: Vlastimil Babka <vbabka@suse.cz>
To: Christoph Lameter <cl@linux.com>, David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>, Pekka Enberg <penberg@kernel.org>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin <roman.gushchin@linux.dev>, Andrew Morton <akpm@linux-foundation.org>, Linus Torvalds <torvalds@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, patches@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH 06/12] mm, slub: don't create kmalloc-rcl caches with CONFIG_SLUB_TINY
Date: Mon, 21 Nov 2022 18:11:56 +0100
Message-Id: <20221121171202.22080-7-vbabka@suse.cz>
In-Reply-To: <20221121171202.22080-1-vbabka@suse.cz>
References: <20221121171202.22080-1-vbabka@suse.cz>
Series: Introduce CONFIG_SLUB_TINY and deprecate SLOB
Commit Message
Vlastimil Babka
Nov. 21, 2022, 5:11 p.m. UTC
Distinguishing kmalloc(__GFP_RECLAIMABLE) can help against fragmentation
by grouping pages by mobility, but on tiny systems the extra memory
overhead of a separate set of kmalloc-rcl caches will probably be worse,
and mobility grouping is likely disabled anyway.
Thus with CONFIG_SLUB_TINY, don't create kmalloc-rcl caches and use the
regular ones.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
include/linux/slab.h |  4 ++++
mm/slab_common.c     | 10 ++++++++--
2 files changed, 12 insertions(+), 2 deletions(-)
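
To make the effect concrete, here is a standalone sketch (illustrative
user-space code, not the kernel's definitions; the cache names and the
caches[] array are hypothetical) of how aliasing KMALLOC_RECLAIM to
KMALLOC_NORMAL makes __GFP_RECLAIMABLE allocations land in the regular
caches:

#include <stdio.h>

#define CONFIG_SLUB_TINY 1      /* pretend the Kconfig option is set */

/* Mirrors the patch's idea: under SLUB_TINY the reclaim type is an
 * alias for the normal type, so no separate slot (or cache) exists. */
enum kmalloc_cache_type {
        KMALLOC_NORMAL = 0,
#ifdef CONFIG_SLUB_TINY
        KMALLOC_RECLAIM = KMALLOC_NORMAL,
#else
        KMALLOC_RECLAIM,
#endif
};

static const char *caches[] = {
        [KMALLOC_NORMAL] = "kmalloc-64",
#ifndef CONFIG_SLUB_TINY
        [KMALLOC_RECLAIM] = "kmalloc-rcl-64",
#endif
};

int main(void)
{
        /* Both lookups hit the same slot, printing "kmalloc-64" twice. */
        printf("%s\n", caches[KMALLOC_NORMAL]);
        printf("%s\n", caches[KMALLOC_RECLAIM]);
        return 0;
}

With CONFIG_SLUB_TINY undefined, the same program prints two distinct
cache names instead.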
Comments
On 11/21/22 18:11, Vlastimil Babka wrote:
> Distinguishing kmalloc(__GFP_RECLAIMABLE) can help against fragmentation
> by grouping pages by mobility, but on tiny systems the extra memory
> overhead of separate set of kmalloc-rcl caches will probably be worse,
> and mobility grouping likely disabled anyway.
>
> Thus with CONFIG_SLUB_TINY, don't create kmalloc-rcl caches and use the
> regular ones.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Fixed up in response to lkp report for a MEMCG_KMEM+SLUB_TINY combo:
---8<---
From c1ec0b924850a2863d061f316615d596176f15bb Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@suse.cz>
Date: Tue, 15 Nov 2022 18:19:28 +0100
Subject: [PATCH 06/12] mm, slub: don't create kmalloc-rcl caches with
 CONFIG_SLUB_TINY

Distinguishing kmalloc(__GFP_RECLAIMABLE) can help against fragmentation
by grouping pages by mobility, but on tiny systems the extra memory
overhead of separate set of kmalloc-rcl caches will probably be worse,
and mobility grouping likely disabled anyway.

Thus with CONFIG_SLUB_TINY, don't create kmalloc-rcl caches and use the
regular ones.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/slab.h |  9 +++++++--
 mm/slab_common.c     | 10 ++++++++--
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 45efc6c553b8..ae2d19ec8467 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -336,12 +336,17 @@ enum kmalloc_cache_type {
 #endif
 #ifndef CONFIG_MEMCG_KMEM
         KMALLOC_CGROUP = KMALLOC_NORMAL,
-#else
-        KMALLOC_CGROUP,
 #endif
+#ifdef CONFIG_SLUB_TINY
+        KMALLOC_RECLAIM = KMALLOC_NORMAL,
+#else
         KMALLOC_RECLAIM,
+#endif
 #ifdef CONFIG_ZONE_DMA
         KMALLOC_DMA,
+#endif
+#ifdef CONFIG_MEMCG_KMEM
+        KMALLOC_CGROUP,
 #endif
         NR_KMALLOC_TYPES
 };
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a8cb5de255fc..907d52963806 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -770,10 +770,16 @@ EXPORT_SYMBOL(kmalloc_size_roundup);
 #define KMALLOC_CGROUP_NAME(sz)
 #endif
 
+#ifndef CONFIG_SLUB_TINY
+#define KMALLOC_RCL_NAME(sz)    .name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #sz,
+#else
+#define KMALLOC_RCL_NAME(sz)
+#endif
+
 #define INIT_KMALLOC_INFO(__size, __short_size)                 \
 {                                                               \
         .name[KMALLOC_NORMAL]  = "kmalloc-" #__short_size,      \
-        .name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #__short_size,  \
+        KMALLOC_RCL_NAME(__short_size)                           \
         KMALLOC_CGROUP_NAME(__short_size)                        \
         KMALLOC_DMA_NAME(__short_size)                           \
         .size = __size,                                          \
@@ -859,7 +865,7 @@ void __init setup_kmalloc_cache_index_table(void)
 static void __init
 new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 {
-        if (type == KMALLOC_RECLAIM) {
+        if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) {
                 flags |= SLAB_RECLAIM_ACCOUNT;
         } else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP)) {
                 if (mem_cgroup_kmem_disabled()) {
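
To see why this reordering satisfies the lkp report (explained in the
follow-up below), here is a compile-checkable sketch of what the
fixed-up enum preprocesses to with CONFIG_SLUB_TINY and
CONFIG_MEMCG_KMEM set and CONFIG_ZONE_DMA unset; the _Static_asserts
are illustrative, not kernel code:

/* All aliases point at KMALLOC_NORMAL and sit before the remaining
 * real entries, so the enum's implicit counter resumes at 1 and the
 * trailing NR_KMALLOC_TYPES stays correct. */
enum kmalloc_cache_type {
        KMALLOC_NORMAL = 0,
        KMALLOC_RECLAIM = KMALLOC_NORMAL,       /* CONFIG_SLUB_TINY branch */
        KMALLOC_CGROUP,                         /* CONFIG_MEMCG_KMEM branch */
        NR_KMALLOC_TYPES
};

_Static_assert(KMALLOC_CGROUP == 1, "counting resumes after the alias");
_Static_assert(NR_KMALLOC_TYPES == 2, "two distinct types, counted as two");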
On Wed, Nov 23, 2022 at 02:53:43PM +0100, Vlastimil Babka wrote:
> On 11/21/22 18:11, Vlastimil Babka wrote:
> > Distinguishing kmalloc(__GFP_RECLAIMABLE) can help against fragmentation
> > by grouping pages by mobility, but on tiny systems the extra memory
> > overhead of separate set of kmalloc-rcl caches will probably be worse,
> > and mobility grouping likely disabled anyway.
> >
> > Thus with CONFIG_SLUB_TINY, don't create kmalloc-rcl caches and use the
> > regular ones.
> >
> > Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>
> Fixed up in response to lkp report for a MEMCG_KMEM+SLUB_TINY combo:
> ---8<---
[...]
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 45efc6c553b8..ae2d19ec8467 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -336,12 +336,17 @@ enum kmalloc_cache_type {
>  #endif
>  #ifndef CONFIG_MEMCG_KMEM
>          KMALLOC_CGROUP = KMALLOC_NORMAL,
> -#else
> -        KMALLOC_CGROUP,
>  #endif
> +#ifdef CONFIG_SLUB_TINY
> +        KMALLOC_RECLAIM = KMALLOC_NORMAL,
> +#else
>          KMALLOC_RECLAIM,
> +#endif
>  #ifdef CONFIG_ZONE_DMA
>          KMALLOC_DMA,
> +#endif
> +#ifdef CONFIG_MEMCG_KMEM
> +        KMALLOC_CGROUP,
>  #endif
>          NR_KMALLOC_TYPES
>  };

Can you please elaborate what the lkp report was about
and how you fixed it? I'm not getting what the problem of previous
version is.

[...]

Otherwise looks fine to me.
On 11/24/22 13:06, Hyeonggon Yoo wrote:
> On Wed, Nov 23, 2022 at 02:53:43PM +0100, Vlastimil Babka wrote:
>> Fixed up in response to lkp report for a MEMCG_KMEM+SLUB_TINY combo:
[...]
> Can you please elaborate what the lkp report was about
> and how you fixed it? I'm not getting what the problem of previous
> version is.

Report here:
https://lore.kernel.org/all/202211231949.nIyAWKam-lkp@intel.com/

Problem is that if the preprocessing results in e.g.

        KMALLOC_NORMAL = 0,
        KMALLOC_DMA = KMALLOC_NORMAL,
        KMALLOC_CGROUP,
        KMALLOC_RECLAIM = KMALLOC_NORMAL,
        NR_KMALLOC_TYPES

then NR_KMALLOC_TYPES is not 2, but 1, because the enum's internal counter
got reset to 0 by KMALLOC_RECLAIM = KMALLOC_NORMAL. A common gotcha :/
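
The gotcha is easy to reproduce outside the kernel; a minimal demo with
hypothetical B_* names standing in for the kmalloc types, laid out in
the broken pre-fix order:

enum broken_order {
        B_NORMAL = 0,
        B_DMA = B_NORMAL,       /* alias: implicit counter resets to 0 */
        B_CGROUP,               /* 1 */
        B_RECLAIM = B_NORMAL,   /* alias again: counter resets to 0 */
        B_NR_TYPES              /* 1, although two distinct types exist */
};

_Static_assert(B_NR_TYPES == 1, "trailing count is wrong: 1 instead of 2");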
On Thu, Nov 24, 2022 at 01:12:13PM +0100, Vlastimil Babka wrote:
> On 11/24/22 13:06, Hyeonggon Yoo wrote:
> > Can you please elaborate what the lkp report was about
> > and how you fixed it? I'm not getting what the problem of previous
> > version is.
>
> Report here:
> https://lore.kernel.org/all/202211231949.nIyAWKam-lkp@intel.com/
>
> Problem is that if the preprocessing results in e.g.
>
>         KMALLOC_NORMAL = 0,
>         KMALLOC_DMA = KMALLOC_NORMAL,
>         KMALLOC_CGROUP,
>         KMALLOC_RECLAIM = KMALLOC_NORMAL,
>         NR_KMALLOC_TYPES
>
> then NR_KMALLOC_TYPES is not 2, but 1, because the enum's internal counter
> got reset to 0 by KMALLOC_RECLAIM = KMALLOC_NORMAL. A common gotcha :/

Thanks for quick and kind explanation :)
That was easy to be missed.
On Wed, Nov 23, 2022 at 02:53:43PM +0100, Vlastimil Babka wrote:
> On 11/21/22 18:11, Vlastimil Babka wrote:
> > Thus with CONFIG_SLUB_TINY, don't create kmalloc-rcl caches and use the
> > regular ones.
>
> Fixed up in response to lkp report for a MEMCG_KMEM+SLUB_TINY combo:
[...]
> @@ -859,7 +865,7 @@ void __init setup_kmalloc_cache_index_table(void)
>  static void __init
>  new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
>  {
> -        if (type == KMALLOC_RECLAIM) {
> +        if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) {

for consistency this can be:
        if (IS_ENABLED(CONFIG_SLUB_TINY) && (type == KMALLOC_RECLAIM)) {

But yeah, it's not a big deal.

>                  flags |= SLAB_RECLAIM_ACCOUNT;
>          } else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP)) {
>                  if (mem_cgroup_kmem_disabled()) {
> --
> 2.38.1

For either case:
Looks good to me.

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
On Thu, Nov 24, 2022 at 10:23:51PM +0900, Hyeonggon Yoo wrote:
> > @@ -859,7 +865,7 @@ void __init setup_kmalloc_cache_index_table(void)
> >  static void __init
> >  new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
> >  {
> > -        if (type == KMALLOC_RECLAIM) {
> > +        if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) {
>
> for consistency this can be:
>         if (IS_ENABLED(CONFIG_SLUB_TINY) && (type == KMALLOC_RECLAIM)) {

My finger slipped :)
I mean:

        if (!IS_ENABLED(CONFIG_SLUB_TINY) && (type == KMALLOC_RECLAIM)) {
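
Both guards really are interchangeable here; a small self-contained
sketch (user-space stand-ins for IS_ENABLED() and SLAB_RECLAIM_ACCOUNT,
so those names are hypothetical) showing they evaluate identically
under either Kconfig setting:

#include <stdio.h>

#define CONFIG_SLUB_TINY 1      /* comment out to compare the other build */

enum kmalloc_cache_type {
        KMALLOC_NORMAL = 0,
#ifdef CONFIG_SLUB_TINY
        KMALLOC_RECLAIM = KMALLOC_NORMAL,
#else
        KMALLOC_RECLAIM,
#endif
};

#ifdef CONFIG_SLUB_TINY
#define SLUB_TINY_ENABLED 1     /* stand-in for IS_ENABLED(CONFIG_SLUB_TINY) */
#else
#define SLUB_TINY_ENABLED 0
#endif

int main(void)
{
        enum kmalloc_cache_type type = KMALLOC_RECLAIM;

        /* Under SLUB_TINY both left operands are compile-time 0, so both
         * guards are dead code; otherwise both reduce to the type check. */
        printf("%d\n", (KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM));
        printf("%d\n", !SLUB_TINY_ENABLED && (type == KMALLOC_RECLAIM));
        return 0;
}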
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 45efc6c553b8..3ce9474c90ab 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -339,7 +339,11 @@ enum kmalloc_cache_type {
 #else
         KMALLOC_CGROUP,
 #endif
+#ifndef CONFIG_SLUB_TINY
         KMALLOC_RECLAIM,
+#else
+        KMALLOC_RECLAIM = KMALLOC_NORMAL,
+#endif
 #ifdef CONFIG_ZONE_DMA
         KMALLOC_DMA,
 #endif
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a8cb5de255fc..907d52963806 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -770,10 +770,16 @@ EXPORT_SYMBOL(kmalloc_size_roundup);
 #define KMALLOC_CGROUP_NAME(sz)
 #endif
 
+#ifndef CONFIG_SLUB_TINY
+#define KMALLOC_RCL_NAME(sz)    .name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #sz,
+#else
+#define KMALLOC_RCL_NAME(sz)
+#endif
+
 #define INIT_KMALLOC_INFO(__size, __short_size)                 \
 {                                                               \
         .name[KMALLOC_NORMAL]  = "kmalloc-" #__short_size,      \
-        .name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #__short_size,  \
+        KMALLOC_RCL_NAME(__short_size)                           \
         KMALLOC_CGROUP_NAME(__short_size)                        \
         KMALLOC_DMA_NAME(__short_size)                           \
         .size = __size,                                          \
@@ -859,7 +865,7 @@ void __init setup_kmalloc_cache_index_table(void)
 static void __init
 new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 {
-        if (type == KMALLOC_RECLAIM) {
+        if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) {
                 flags |= SLAB_RECLAIM_ACCOUNT;
         } else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP)) {
                 if (mem_cgroup_kmem_disabled()) {
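
A side note on why KMALLOC_RCL_NAME() must expand to nothing under
SLUB_TINY rather than keep the old designated initializer: with
KMALLOC_RECLAIM aliased to KMALLOC_NORMAL the two designators would
collide, and in C the later initializer for the same element silently
wins (GCC only diagnoses it with -Woverride-init). A hypothetical demo:

#include <stdio.h>

enum { NORMAL = 0, RECLAIM = NORMAL };  /* the SLUB_TINY aliasing */

struct info { const char *name[2]; };

static const struct info collided = {
        .name[NORMAL]  = "kmalloc-64",
        .name[RECLAIM] = "kmalloc-rcl-64",  /* same slot: overrides the line above */
};

int main(void)
{
        /* Prints "kmalloc-rcl-64" -- the normal cache's name was lost. */
        printf("%s\n", collided.name[NORMAL]);
        return 0;
}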