From patchwork Mon Nov 21 17:11:51 2022
X-Patchwork-Id: 23945
From: Vlastimil Babka
To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Andrew Morton,
    Linus Torvalds, Matthew Wilcox, patches@lists.linux.dev,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka,
    Kees Cook
Subject: [PATCH 01/12] mm, slab: ignore hardened usercopy parameters when disabled
Date: Mon, 21 Nov 2022 18:11:51 +0100
Message-Id: <20221121171202.22080-2-vbabka@suse.cz>

With CONFIG_HARDENED_USERCOPY not enabled, there are no __check_heap_object()
checks happening that would use the kmem_cache useroffset and usersize fields.
Yet the fields are still initialized, preventing merging of otherwise
compatible caches. Thus ignore the values passed to cache creation and leave
them zero when CONFIG_HARDENED_USERCOPY is disabled.

In a quick virtme boot test, this has reduced the number of caches in
/proc/slabinfo from 131 to 111.

Cc: Kees Cook
Signed-off-by: Vlastimil Babka
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab_common.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0042fb2730d1..a8cb5de255fc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -317,7 +317,8 @@ kmem_cache_create_usercopy(const char *name,
 	flags &= CACHE_CREATE_MASK;
 
 	/* Fail closed on bad usersize of useroffset values. */
-	if (WARN_ON(!usersize && useroffset) ||
+	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
+	    WARN_ON(!usersize && useroffset) ||
 	    WARN_ON(size < usersize || size - usersize < useroffset))
 		usersize = useroffset = 0;
 
@@ -640,6 +641,9 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name,
 	align = max(align, size);
 	s->align = calculate_alignment(flags, align, size);
 
+	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY))
+		useroffset = usersize = 0;
+
 	s->useroffset = useroffset;
 	s->usersize = usersize;
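
For illustration only (editor's sketch, not part of the patch): how a caller
passes the usercopy window that is now ignored when CONFIG_HARDENED_USERCOPY=n.
The struct, cache name and init function below are made up.

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/stddef.h>

/* Hypothetical object with a region that may be copied to/from userspace. */
struct foo {
	unsigned long private_state;
	char user_window[64];
};

static struct kmem_cache *foo_cache;

static int __init foo_cache_init(void)
{
	/*
	 * With CONFIG_HARDENED_USERCOPY=y the useroffset/usersize arguments
	 * bound what __check_heap_object() permits; with it disabled they
	 * are now zeroed at creation time, so this cache remains mergeable
	 * with other caches of compatible size and flags.
	 */
	foo_cache = kmem_cache_create_usercopy("foo_cache", sizeof(struct foo),
			0, SLAB_HWCACHE_ALIGN,
			offsetof(struct foo, user_window),
			sizeof_field(struct foo, user_window),
			NULL);
	return foo_cache ? 0 : -ENOMEM;
}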
From patchwork Mon Nov 21 17:11:52 2022
X-Patchwork-Id: 23955
From: Vlastimil Babka
To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Andrew Morton,
    Linus Torvalds, Matthew Wilcox, patches@lists.linux.dev,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 02/12] mm, slub: add CONFIG_SLUB_TINY
Date: Mon, 21 Nov 2022 18:11:52 +0100
Message-Id: <20221121171202.22080-3-vbabka@suse.cz>

For tiny systems that have used SLOB until now, SLUB might be impractical due
to its higher memory usage. To help with that, introduce an option
CONFIG_SLUB_TINY that modifies SLUB to use less memory. This is done by
sacrificing scalability, security and debugging features, therefore not
recommended for any system with more than 16MB RAM.

This commit introduces the option and uses it to set other related options in
a way that reduces memory usage.

Signed-off-by: Vlastimil Babka
Acked-by: Roman Gushchin
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/Kconfig       | 21 +++++++++++++++++----
 mm/Kconfig.debug |  2 +-
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 57e1d8c5b505..5941cb34e30d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -230,6 +230,19 @@ config SLOB
 
 endchoice
 
+config SLUB_TINY
+	bool "Configure SLUB for minimal memory footprint"
+	depends on SLUB && EXPERT
+	select SLAB_MERGE_DEFAULT
+	help
+	  Configures the SLUB allocator in a way to achieve minimal memory
+	  footprint, sacrificing scalability, debugging and other features.
+	  This is intended only for the smallest system that had used the
+	  SLOB allocator and is not recommended for systems with more than
+	  16MB RAM.
+
+	  If unsure, say N.
+
 config SLAB_MERGE_DEFAULT
 	bool "Allow slab caches to be merged"
 	default y
@@ -247,7 +260,7 @@ config SLAB_MERGE_DEFAULT
 
 config SLAB_FREELIST_RANDOM
 	bool "Randomize slab freelist"
-	depends on SLAB || SLUB
+	depends on SLAB || SLUB && !SLUB_TINY
 	help
 	  Randomizes the freelist order used on creating new pages. This
 	  security feature reduces the predictability of the kernel slab
@@ -255,7 +268,7 @@ config SLAB_FREELIST_RANDOM
 
 config SLAB_FREELIST_HARDENED
 	bool "Harden slab freelist metadata"
-	depends on SLAB || SLUB
+	depends on SLAB || SLUB && !SLUB_TINY
 	help
 	  Many kernel heap attacks try to target slab cache metadata and
 	  other infrastructure. This options makes minor performance
@@ -267,7 +280,7 @@ config SLAB_FREELIST_HARDENED
 config SLUB_STATS
 	default n
 	bool "Enable SLUB performance statistics"
-	depends on SLUB && SYSFS
+	depends on SLUB && SYSFS && !SLUB_TINY
 	help
 	  SLUB statistics are useful to debug SLUBs allocation behavior in
 	  order find ways to optimize the allocator. This should never be
@@ -279,7 +292,7 @@ config SLUB_STATS
 
 config SLUB_CPU_PARTIAL
 	default y
-	depends on SLUB && SMP
+	depends on SLUB && SMP && !SLUB_TINY
 	bool "SLUB per cpu partial cache"
 	help
 	  Per cpu partial caches accelerate objects allocation and freeing
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index ce8dded36de9..fca699ad1fb0 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -56,7 +56,7 @@ config DEBUG_SLAB
 config SLUB_DEBUG
 	default y
 	bool "Enable SLUB debugging support" if EXPERT
-	depends on SLUB && SYSFS
+	depends on SLUB && SYSFS && !SLUB_TINY
 	select STACKDEPOT if STACKTRACE_SUPPORT
 	help
 	  SLUB has extensive debug support features. Disabling these can
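
Illustration only (editor's sketch, not from the patch): once the bool symbol
exists, later patches in the series compile code out either with the
preprocessor or with IS_ENABLED(), as sketched below with made-up function
names.

#include <linux/kconfig.h>

/* Compile-time constant branch: the compiler discards the dead arm. */
static inline int pick_default_limit(void)
{
	return IS_ENABLED(CONFIG_SLUB_TINY) ? 0 : 10;
}

/* Preprocessor branch: this code is not built at all when SLUB_TINY is set. */
#ifndef CONFIG_SLUB_TINY
static void only_built_without_slub_tiny(void)
{
}
#endif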
From patchwork Mon Nov 21 17:11:53 2022
X-Patchwork-Id: 23950
From: Vlastimil Babka
To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Andrew Morton,
    Linus Torvalds, Matthew Wilcox, patches@lists.linux.dev,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 03/12] mm, slub: disable SYSFS support with CONFIG_SLUB_TINY
Date: Mon, 21 Nov 2022 18:11:53 +0100
Message-Id: <20221121171202.22080-4-vbabka@suse.cz>

Currently SLUB enables its sysfs support depending unconditionally on the
general CONFIG_SYSFS setting. To reduce the configuration combination space,
make CONFIG_SLUB_TINY disable SLUB's sysfs support by reusing the existing
SLAB_SUPPORTS_SYSFS define. It is unlikely that real tiny systems would
combine CONFIG_SLUB_TINY with CONFIG_SYSFS, but a randconfig might.

Signed-off-by: Vlastimil Babka
---
 include/linux/slub_def.h |  2 +-
 mm/slub.c                | 12 ++++++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index f9c68a9dac04..c186f25c8148 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -142,7 +142,7 @@ struct kmem_cache {
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
 
-#ifdef CONFIG_SYSFS
+#if defined(CONFIG_SYSFS) && !defined(CONFIG_SLUB_TINY)
 #define SLAB_SUPPORTS_SYSFS
 void sysfs_slab_unlink(struct kmem_cache *);
 void sysfs_slab_release(struct kmem_cache *);
diff --git a/mm/slub.c b/mm/slub.c
index 157527d7101b..ab085aa2f1f0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -298,7 +298,7 @@ struct track {
 
 enum track_item { TRACK_ALLOC, TRACK_FREE };
 
-#ifdef CONFIG_SYSFS
+#ifdef SLAB_SUPPORTS_SYSFS
 static int sysfs_slab_add(struct kmem_cache *);
 static int sysfs_slab_alias(struct kmem_cache *, const char *);
 #else
@@ -2935,7 +2935,7 @@ static noinline void free_debug_processing(
 }
 #endif /* CONFIG_SLUB_DEBUG */
 
-#if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
+#if defined(CONFIG_SLUB_DEBUG) || defined(SLAB_SUPPORTS_SYSFS)
 static unsigned long count_partial(struct kmem_cache_node *n,
 					int (*get_count)(struct slab *))
 {
@@ -2949,7 +2949,7 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
 }
-#endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */
+#endif /* CONFIG_SLUB_DEBUG || SLAB_SUPPORTS_SYSFS */
 
 static noinline void slab_out_of_memory(struct kmem_cache *s,
 					gfp_t gfpflags, int nid)
@@ -4924,7 +4924,7 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return 0;
 }
 
-#ifdef CONFIG_SYSFS
+#ifdef SLAB_SUPPORTS_SYSFS
 static int count_inuse(struct slab *slab)
 {
 	return slab->inuse;
@@ -5182,7 +5182,7 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 #endif /* CONFIG_DEBUG_FS */
 #endif /* CONFIG_SLUB_DEBUG */
 
-#ifdef CONFIG_SYSFS
+#ifdef SLAB_SUPPORTS_SYSFS
 enum slab_stat_type {
 	SL_ALL,			/* All slabs */
 	SL_PARTIAL,		/* Only partially allocated slabs */
@@ -6056,7 +6056,7 @@ static int __init slab_sysfs_init(void)
 }
 __initcall(slab_sysfs_init);
 
-#endif /* CONFIG_SYSFS */
+#endif /* SLAB_SUPPORTS_SYSFS */
 
 #if defined(CONFIG_SLUB_DEBUG) && defined(CONFIG_DEBUG_FS)
 static int slab_debugfs_show(struct seq_file *seq, void *v)
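
Illustration only (editor's sketch, not part of the patch): the pattern reused
here is to compute one capability macro from the Kconfig conditions and have
every other #ifdef test only that macro, so the "tiny" restriction lives in a
single place. The FOO names below are made up.

struct kmem_cache;

/* Decide the capability once, from the real configuration... */
#if defined(CONFIG_SYSFS) && !defined(CONFIG_SLUB_TINY)
#define FOO_SUPPORTS_SYSFS
#endif

/* ...then every user tests only the capability macro, with a stub fallback. */
#ifdef FOO_SUPPORTS_SYSFS
int foo_sysfs_add(struct kmem_cache *s);
#else
static inline int foo_sysfs_add(struct kmem_cache *s) { return 0; }
#endif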
From patchwork Mon Nov 21 17:11:54 2022
X-Patchwork-Id: 23946
From: Vlastimil Babka
To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Andrew Morton,
    Linus Torvalds, Matthew Wilcox, patches@lists.linux.dev,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 04/12] mm, slub: retain no free slabs on partial list with CONFIG_SLUB_TINY
Date: Mon, 21 Nov 2022 18:11:54 +0100
Message-Id: <20221121171202.22080-5-vbabka@suse.cz>

SLUB will leave a number of slabs on the partial list even if they are empty,
to avoid some slab freeing and reallocation. The goal of CONFIG_SLUB_TINY is
to minimize memory overhead, so set the limits to 0 for immediate slab page
freeing.

Signed-off-by: Vlastimil Babka
Acked-by: Roman Gushchin
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slub.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index ab085aa2f1f0..917b79278bad 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -241,6 +241,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 /* Enable to log cmpxchg failures */
 #undef SLUB_DEBUG_CMPXCHG
 
+#ifndef CONFIG_SLUB_TINY
 /*
  * Minimum number of partial slabs. These will be left on the partial
  * lists even if they are empty. kmem_cache_shrink may reclaim them.
@@ -253,6 +254,10 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
  * sort the partial list by the number of objects in use.
  */
 #define MAX_PARTIAL 10
+#else
+#define MIN_PARTIAL 0
+#define MAX_PARTIAL 0
+#endif
 
 #define DEBUG_DEFAULT_FLAGS (SLAB_CONSISTENCY_CHECKS | SLAB_RED_ZONE | \
 				SLAB_POISON | SLAB_STORE_USER)
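
Illustration only (editor's sketch, simplified, not the actual mm/slub.c
logic): the effect of the MIN_PARTIAL limit when a free leaves a slab empty.
The helper name is made up.

#include <linux/types.h>

/*
 * SLUB keeps an empty slab on the node's partial list only while the list
 * holds fewer than min_partial slabs; otherwise the slab page goes back to
 * the page allocator. With CONFIG_SLUB_TINY min_partial is 0, so an empty
 * slab is always discarded immediately.
 */
static bool discard_empty_slab(unsigned long nr_partial, unsigned long min_partial)
{
	return nr_partial >= min_partial;	/* always true when min_partial == 0 */
}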
From patchwork Mon Nov 21 17:11:55 2022
X-Patchwork-Id: 23948
From: Vlastimil Babka
To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Andrew Morton,
    Linus Torvalds, Matthew Wilcox, patches@lists.linux.dev,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 05/12] mm, slub: lower the default slub_max_order with CONFIG_SLUB_TINY
Date: Mon, 21 Nov 2022 18:11:55 +0100
Message-Id: <20221121171202.22080-6-vbabka@suse.cz>

With CONFIG_SLUB_TINY we want to minimize memory overhead. By lowering the
default slub_max_order we can make slab allocations use smaller pages. However
depending on object sizes, order-0 might not be the best due to increased
fragmentation. When testing on a 8MB RAM k210 system by Damien Le Moal [1],
slub_max_order=1 had the best results, so use that as the default for
CONFIG_SLUB_TINY.

[1] https://lore.kernel.org/all/6a1883c4-4c3f-545a-90e8-2cd805bcf4ae@opensource.wdc.com/

Signed-off-by: Vlastimil Babka
Acked-by: Roman Gushchin
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slub.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 917b79278bad..bf726dd00f7d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3888,7 +3888,8 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
  * take the list_lock.
  */
 static unsigned int slub_min_order;
-static unsigned int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
+static unsigned int slub_max_order =
+	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
 static unsigned int slub_min_objects;
 
 /*
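
Illustration only (editor's arithmetic, not part of the patch): what an order
means in slab size. A slab of order N spans PAGE_SIZE << N bytes, so with
4 KiB pages order 0 is 4 KiB, order 1 is 8 KiB, and the previous default cap,
PAGE_ALLOC_COSTLY_ORDER (3), is 32 KiB. Order 1 was the measured sweet spot
between fragmentation and per-slab object count on the 8 MB system referenced
above.

#include <linux/mm.h>	/* PAGE_SIZE */

static inline unsigned long slab_bytes_for_order(unsigned int order)
{
	return PAGE_SIZE << order;	/* e.g. 4096UL << 1 == 8192 */
}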
From patchwork Mon Nov 21 17:11:56 2022
X-Patchwork-Id: 23949
From: Vlastimil Babka
To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Andrew Morton,
    Linus Torvalds, Matthew Wilcox, patches@lists.linux.dev,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 06/12] mm, slub: don't create kmalloc-rcl caches with CONFIG_SLUB_TINY
Date: Mon, 21 Nov 2022 18:11:56 +0100
Message-Id: <20221121171202.22080-7-vbabka@suse.cz>

Distinguishing kmalloc(__GFP_RECLAIMABLE) can help against fragmentation by
grouping pages by mobility, but on tiny systems the extra memory overhead of
separate set of kmalloc-rcl caches will probably be worse, and mobility
grouping likely disabled anyway. Thus with CONFIG_SLUB_TINY, don't create
kmalloc-rcl caches and use the regular ones.

Signed-off-by: Vlastimil Babka
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  4 ++++
 mm/slab_common.c     | 10 ++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 45efc6c553b8..3ce9474c90ab 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -339,7 +339,11 @@ enum kmalloc_cache_type {
 #else
 	KMALLOC_CGROUP,
 #endif
+#ifndef CONFIG_SLUB_TINY
 	KMALLOC_RECLAIM,
+#else
+	KMALLOC_RECLAIM = KMALLOC_NORMAL,
+#endif
 #ifdef CONFIG_ZONE_DMA
 	KMALLOC_DMA,
 #endif
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a8cb5de255fc..907d52963806 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -770,10 +770,16 @@ EXPORT_SYMBOL(kmalloc_size_roundup);
 #define KMALLOC_CGROUP_NAME(sz)
 #endif
 
+#ifndef CONFIG_SLUB_TINY
+#define KMALLOC_RCL_NAME(sz)	.name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #sz,
+#else
+#define KMALLOC_RCL_NAME(sz)
+#endif
+
 #define INIT_KMALLOC_INFO(__size, __short_size)			\
 {								\
 	.name[KMALLOC_NORMAL]  = "kmalloc-" #__short_size,	\
-	.name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #__short_size,	\
+	KMALLOC_RCL_NAME(__short_size)				\
 	KMALLOC_CGROUP_NAME(__short_size)			\
 	KMALLOC_DMA_NAME(__short_size)				\
 	.size = __size,						\
@@ -859,7 +865,7 @@ void __init setup_kmalloc_cache_index_table(void)
 static void __init
 new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 {
-	if (type == KMALLOC_RECLAIM) {
+	if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) {
 		flags |= SLAB_RECLAIM_ACCOUNT;
 	} else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP)) {
 		if (mem_cgroup_kmem_disabled()) {
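
Illustration only (editor's sketch, not part of the patch): what the enum
aliasing means for callers. Caller code does not change; only the backing
cache does. The function below is made up.

#include <linux/slab.h>
#include <linux/gfp.h>

static void *reclaimable_demo(void)
{
	/* Always served from the regular kmalloc-128 cache. */
	void *a = kmalloc(128, GFP_KERNEL);

	/*
	 * Without CONFIG_SLUB_TINY this request is served from kmalloc-rcl-128;
	 * with it, KMALLOC_RECLAIM == KMALLOC_NORMAL, so it comes from the same
	 * kmalloc-128 cache as above and no kmalloc-rcl caches are created.
	 */
	void *b = kmalloc(128, GFP_KERNEL | __GFP_RECLAIMABLE);

	kfree(b);
	return a;
}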
From patchwork Mon Nov 21 17:11:57 2022
X-Patchwork-Id: 23947
From: Vlastimil Babka
To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Andrew Morton,
    Linus Torvalds, Matthew Wilcox, patches@lists.linux.dev,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 07/12] mm, slab: ignore SLAB_RECLAIM_ACCOUNT with CONFIG_SLUB_TINY
Date: Mon, 21 Nov 2022 18:11:57 +0100
Message-Id: <20221121171202.22080-8-vbabka@suse.cz>

SLAB_RECLAIM_ACCOUNT caches allocate their slab pages with __GFP_RECLAIMABLE
and can help against fragmentation by grouping pages by mobility, but on tiny
systems mobility grouping is likely disabled anyway and ignoring
SLAB_RECLAIM_ACCOUNT might instead lead to merging of caches that are made
incompatible just by the flag.

Thus with CONFIG_SLUB_TINY, make SLAB_RECLAIM_ACCOUNT ineffective.

Signed-off-by: Vlastimil Babka
---
 include/linux/slab.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 3ce9474c90ab..1cbbda03ad06 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -129,7 +129,11 @@
 
 /* The following flags affect the page allocator grouping pages by mobility */
 /* Objects are reclaimable */
+#ifndef CONFIG_SLUB_TINY
 #define SLAB_RECLAIM_ACCOUNT	((slab_flags_t __force)0x00020000U)
+#else
+#define SLAB_RECLAIM_ACCOUNT	0
+#endif
 #define SLAB_TEMPORARY		SLAB_RECLAIM_ACCOUNT	/* Objects are short-lived */
 
 /*
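
Illustration only (editor's sketch, not part of the patch): with the flag
defined to 0 under CONFIG_SLUB_TINY, passing it at cache creation becomes a
no-op, so two caches that differ only in this flag may be merged by the slab
merging logic. The cache names below are made up.

#include <linux/slab.h>

static struct kmem_cache *demo_a, *demo_b;

static void demo_create_caches(void)
{
	/* Identical size/align; the flags differ only by SLAB_RECLAIM_ACCOUNT. */
	demo_a = kmem_cache_create("demo_a", 192, 0, 0, NULL);
	/* With SLAB_RECLAIM_ACCOUNT == 0 this is the same flag value as above. */
	demo_b = kmem_cache_create("demo_b", 192, 0, SLAB_RECLAIM_ACCOUNT, NULL);
}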
[2620:137:e000::1:20]) by mx.google.com with ESMTP id n4-20020a170902d2c400b00186b287bd57si12518823plc.190.2022.11.21.09.13.16; Mon, 21 Nov 2022 09:13:34 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=ZVaRYtFQ; dkim=neutral (no key) header.i=@suse.cz; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231241AbiKURMt (ORCPT + 99 others); Mon, 21 Nov 2022 12:12:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33182 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230433AbiKURMR (ORCPT ); Mon, 21 Nov 2022 12:12:17 -0500 Received: from smtp-out2.suse.de (smtp-out2.suse.de [IPv6:2001:67c:2178:6::1d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8A759CFA42 for ; Mon, 21 Nov 2022 09:12:13 -0800 (PST) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 0DB5F1F8BA; Mon, 21 Nov 2022 17:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1669050732; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=kxjbonatpGC5AWy6loCUXxl8UzxiaKRWAVgSkFX/DLQ=; b=ZVaRYtFQ3lKV+m9X+CsTTzPG/N5N7Z0vywBD2y1TmsVrNdQPFLRIor9huHDB1eXfWL3Yjz PGDz94tNLYW7urSgcXMm4Xzty/f1KvPdWBFCKe3MnjnFir6/lRN3vi3n+6e5Or/pnZgsbx JrxvMrwzKGUcsctvUjT+TsAXBVzXvuM= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1669050732; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=kxjbonatpGC5AWy6loCUXxl8UzxiaKRWAVgSkFX/DLQ=; b=5m4px0JGX/iZy4A0F6Fxi6SnlYTAwR6CG4YIG8vUhfoMBAHc3m7Othokg+1RhsI4LQsenh caEBai4kUa2pPiBg== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D69CC1377F; Mon, 21 Nov 2022 17:12:11 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id 2E3EM2uxe2MQeQAAMHmgww (envelope-from ); Mon, 21 Nov 2022 17:12:11 +0000 From: Vlastimil Babka To: Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin , Andrew Morton , Linus Torvalds , Matthew Wilcox , patches@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka Subject: [PATCH 08/12] mm, slub: refactor free debug processing Date: Mon, 21 Nov 2022 18:11:58 +0100 Message-Id: <20221121171202.22080-9-vbabka@suse.cz> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221121171202.22080-1-vbabka@suse.cz> References: 
<20221121171202.22080-1-vbabka@suse.cz> MIME-Version: 1.0 X-Spam-Status: No, score=-3.7 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_SOFTFAIL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750126626434951128?= X-GMAIL-MSGID: =?utf-8?q?1750126626434951128?= Since commit c7323a5ad078 ("mm/slub: restrict sysfs validation to debug caches and make it safe"), caches with debugging enabled use the free_debug_processing() function to do both freeing checks and actual freeing to partial list under list_lock, bypassing the fast paths. We will want to use the same path for CONFIG_SLUB_TINY, but without the debugging checks, so refactor the code so that free_debug_processing() does only the checks, while the freeing is handled by a new function free_to_partial_list(). For consistency, change return parameter alloc_debug_processing() from int to bool and correct the !SLUB_DEBUG variant to return true and not false. This didn't matter until now, but will in the following changes. Signed-off-by: Vlastimil Babka Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- mm/slub.c | 154 +++++++++++++++++++++++++++++------------------------- 1 file changed, 83 insertions(+), 71 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index bf726dd00f7d..fd56d7cca9c2 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1368,7 +1368,7 @@ static inline int alloc_consistency_checks(struct kmem_cache *s, return 1; } -static noinline int alloc_debug_processing(struct kmem_cache *s, +static noinline bool alloc_debug_processing(struct kmem_cache *s, struct slab *slab, void *object, int orig_size) { if (s->flags & SLAB_CONSISTENCY_CHECKS) { @@ -1380,7 +1380,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s, trace(s, slab, object, 1); set_orig_size(s, object, orig_size); init_object(s, object, SLUB_RED_ACTIVE); - return 1; + return true; bad: if (folio_test_slab(slab_folio(slab))) { @@ -1393,7 +1393,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s, slab->inuse = slab->objects; slab->freelist = NULL; } - return 0; + return false; } static inline int free_consistency_checks(struct kmem_cache *s, @@ -1646,17 +1646,17 @@ static inline void setup_object_debug(struct kmem_cache *s, void *object) {} static inline void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr) {} -static inline int alloc_debug_processing(struct kmem_cache *s, - struct slab *slab, void *object, int orig_size) { return 0; } +static inline bool alloc_debug_processing(struct kmem_cache *s, + struct slab *slab, void *object, int orig_size) { return true; } -static inline void free_debug_processing( - struct kmem_cache *s, struct slab *slab, - void *head, void *tail, int bulk_cnt, - unsigned long addr) {} +static inline bool free_debug_processing(struct kmem_cache *s, + struct slab *slab, void *head, void *tail, int *bulk_cnt, + unsigned long addr, depot_stack_handle_t handle) { return true; } static inline void slab_pad_check(struct kmem_cache *s, struct slab *slab) {} static inline int check_object(struct kmem_cache *s, struct slab *slab, void *object, u8 val) { return 1; } +static inline depot_stack_handle_t set_track_prepare(void) { return 0; } static inline void set_track(struct kmem_cache *s, void *object, enum 
track_item alloc, unsigned long addr) {} static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n, @@ -2833,38 +2833,28 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n) } /* Supports checking bulk free of a constructed freelist */ -static noinline void free_debug_processing( - struct kmem_cache *s, struct slab *slab, - void *head, void *tail, int bulk_cnt, - unsigned long addr) +static inline bool free_debug_processing(struct kmem_cache *s, + struct slab *slab, void *head, void *tail, int *bulk_cnt, + unsigned long addr, depot_stack_handle_t handle) { - struct kmem_cache_node *n = get_node(s, slab_nid(slab)); - struct slab *slab_free = NULL; + bool checks_ok = false; void *object = head; int cnt = 0; - unsigned long flags; - bool checks_ok = false; - depot_stack_handle_t handle = 0; - - if (s->flags & SLAB_STORE_USER) - handle = set_track_prepare(); - - spin_lock_irqsave(&n->list_lock, flags); if (s->flags & SLAB_CONSISTENCY_CHECKS) { if (!check_slab(s, slab)) goto out; } - if (slab->inuse < bulk_cnt) { + if (slab->inuse < *bulk_cnt) { slab_err(s, slab, "Slab has %d allocated objects but %d are to be freed\n", - slab->inuse, bulk_cnt); + slab->inuse, *bulk_cnt); goto out; } next_object: - if (++cnt > bulk_cnt) + if (++cnt > *bulk_cnt) goto out_cnt; if (s->flags & SLAB_CONSISTENCY_CHECKS) { @@ -2886,57 +2876,18 @@ static noinline void free_debug_processing( checks_ok = true; out_cnt: - if (cnt != bulk_cnt) + if (cnt != *bulk_cnt) { slab_err(s, slab, "Bulk free expected %d objects but found %d\n", - bulk_cnt, cnt); - -out: - if (checks_ok) { - void *prior = slab->freelist; - - /* Perform the actual freeing while we still hold the locks */ - slab->inuse -= cnt; - set_freepointer(s, tail, prior); - slab->freelist = head; - - /* - * If the slab is empty, and node's partial list is full, - * it should be discarded anyway no matter it's on full or - * partial list. 
- */ - if (slab->inuse == 0 && n->nr_partial >= s->min_partial) - slab_free = slab; - - if (!prior) { - /* was on full list */ - remove_full(s, n, slab); - if (!slab_free) { - add_partial(n, slab, DEACTIVATE_TO_TAIL); - stat(s, FREE_ADD_PARTIAL); - } - } else if (slab_free) { - remove_partial(n, slab); - stat(s, FREE_REMOVE_PARTIAL); - } + *bulk_cnt, cnt); + *bulk_cnt = cnt; } - if (slab_free) { - /* - * Update the counters while still holding n->list_lock to - * prevent spurious validation warnings - */ - dec_slabs_node(s, slab_nid(slab_free), slab_free->objects); - } - - spin_unlock_irqrestore(&n->list_lock, flags); +out: if (!checks_ok) slab_fix(s, "Object at 0x%p not freed", object); - if (slab_free) { - stat(s, FREE_SLAB); - free_slab(s, slab_free); - } + return checks_ok; } #endif /* CONFIG_SLUB_DEBUG */ @@ -3453,6 +3404,67 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node) } EXPORT_SYMBOL(kmem_cache_alloc_node); +static noinline void free_to_partial_list( + struct kmem_cache *s, struct slab *slab, + void *head, void *tail, int bulk_cnt, + unsigned long addr) +{ + struct kmem_cache_node *n = get_node(s, slab_nid(slab)); + struct slab *slab_free = NULL; + int cnt = bulk_cnt; + unsigned long flags; + depot_stack_handle_t handle = 0; + + if (s->flags & SLAB_STORE_USER) + handle = set_track_prepare(); + + spin_lock_irqsave(&n->list_lock, flags); + + if (free_debug_processing(s, slab, head, tail, &cnt, addr, handle)) { + void *prior = slab->freelist; + + /* Perform the actual freeing while we still hold the locks */ + slab->inuse -= cnt; + set_freepointer(s, tail, prior); + slab->freelist = head; + + /* + * If the slab is empty, and node's partial list is full, + * it should be discarded anyway no matter it's on full or + * partial list. + */ + if (slab->inuse == 0 && n->nr_partial >= s->min_partial) + slab_free = slab; + + if (!prior) { + /* was on full list */ + remove_full(s, n, slab); + if (!slab_free) { + add_partial(n, slab, DEACTIVATE_TO_TAIL); + stat(s, FREE_ADD_PARTIAL); + } + } else if (slab_free) { + remove_partial(n, slab); + stat(s, FREE_REMOVE_PARTIAL); + } + } + + if (slab_free) { + /* + * Update the counters while still holding n->list_lock to + * prevent spurious validation warnings + */ + dec_slabs_node(s, slab_nid(slab_free), slab_free->objects); + } + + spin_unlock_irqrestore(&n->list_lock, flags); + + if (slab_free) { + stat(s, FREE_SLAB); + free_slab(s, slab_free); + } +} + /* * Slow path handling. This may still be called frequently since objects * have a longer lifetime than the cpu slabs in most processing loads. 
@@ -3479,7 +3491,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, return; if (kmem_cache_debug(s)) { - free_debug_processing(s, slab, head, tail, cnt, addr); + free_to_partial_list(s, slab, head, tail, cnt, addr); return; } From patchwork Mon Nov 21 17:11:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 23952 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1717582wrr; Mon, 21 Nov 2022 09:14:00 -0800 (PST) X-Google-Smtp-Source: AA0mqf4L3JRaFYswhglI/JCVRKEb+HWTB6Z6chFPyKoGWqRaK7FRFKtvM1eNSJlrvQHhEMz3zo4y X-Received: by 2002:a17:902:ab89:b0:17d:8a77:518f with SMTP id f9-20020a170902ab8900b0017d8a77518fmr607678plr.22.1669050840285; Mon, 21 Nov 2022 09:14:00 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669050840; cv=none; d=google.com; s=arc-20160816; b=ixGsdA0voC1IFBFUEEk3h5eaZ2WXvbca9rXPMBm7MZaXUL/ZCzks61xvVWtQTh6E6v Gn/CYbasO18/lCSIWZ5vRfLTHBReiT5SnFr6TIPBD+32muJ6XPCDFDfm5beVPZW1kZ0O Lep+peLwMYK6Iyhk+Qz8XHTNIw0CLniqT2fxEmkpf0UqCt8izKQdIfivQuAh1MjGmlmO u/jFzWOunhXG31Mg+yHFn3F5hXfdKzZgVM3P7jf5XNCikPG50taA57uPm3vrdxf41IXk 1MPuUXNz0i7LAvlhvX3vFcR91TJ0KiYU06+FpiMYxyDt0F0QroPI2TPePxL7QA2bve3J /hJQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-signature; bh=HMqYhmOsEKQ7/VSttswoCKZLBAGMRcn+b2MFFvLoSuw=; b=h4+gjSzyu+YoJUMunSWxg2DCeyVkZ+sj/RT/W5ED8RjgzIq1duWvCYp3FdJB8C5IFT q+KiOyPaFYP89IemM2tIEs9ml6Vu2zCt0EOkMFXIeEnDS801kqxNON4c/z8xLcEwPg4y HcVcgWaxgJM0CUurB8P996VbicjT8lem1fqdpppQmoG0R/XRJ93ecvU2GzHRQ7JaFY84 Li5Lw7IWDwRF/q8Mlor0XhjWnzwo4meQG1mD0TldYE7T0NuvNWszgZ9a6AbZGzwM8TIe DRZM3GL3mKLyieWJtUjIeBh1MzYJ4d+TIoyhu2n7sFegpimghbyk3SO5sSrnUUUxR42N wpsw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=Ojl3e44R; dkim=neutral (no key) header.i=@suse.cz header.s=susede2_ed25519 header.b="YOjMxAi/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h184-20020a636cc1000000b00476d0c7b644si10716059pgc.290.2022.11.21.09.13.45; Mon, 21 Nov 2022 09:14:00 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=Ojl3e44R; dkim=neutral (no key) header.i=@suse.cz header.s=susede2_ed25519 header.b="YOjMxAi/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231264AbiKURMy (ORCPT + 99 others); Mon, 21 Nov 2022 12:12:54 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33186 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230444AbiKURMR (ORCPT ); Mon, 21 Nov 2022 12:12:17 -0500 Received: from smtp-out1.suse.de (smtp-out1.suse.de [IPv6:2001:67c:2178:6::1c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 88D70CFA41 for ; Mon, 21 Nov 2022 09:12:13 -0800 (PST) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 47632220D4; Mon, 21 Nov 2022 17:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1669050732; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=HMqYhmOsEKQ7/VSttswoCKZLBAGMRcn+b2MFFvLoSuw=; b=Ojl3e44RiDKVqXpC9HV756II2gtNxdR9Uatm8nFW93vuLQRymEttq75jrUTkbvJAa11QQ2 Vplon66ODrOjFPKmrshp5Ww7UH7GdyiL91VE2NMIXexKgm+Y9C93ymTHgFPf6dEUO1sjNy H+fp5ErLwwU44qguaKE3oUXmhmewgPk= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1669050732; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=HMqYhmOsEKQ7/VSttswoCKZLBAGMRcn+b2MFFvLoSuw=; b=YOjMxAi/H9hdYzqkkCS1LB5sqrRT42/kj/OR9JrMx1foyoNrQPbUdup3ul7JTSY6C0BcqG 8StRZNrD3lITgBAg== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 118A113B03; Mon, 21 Nov 2022 17:12:12 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id WFqfA2yxe2MQeQAAMHmgww (envelope-from ); Mon, 21 Nov 2022 17:12:12 +0000 From: Vlastimil Babka To: Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin , Andrew Morton , Linus Torvalds , Matthew Wilcox , patches@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka Subject: [PATCH 09/12] mm, slub: split out allocations from pre/post hooks Date: Mon, 21 Nov 2022 18:11:59 +0100 Message-Id: <20221121171202.22080-10-vbabka@suse.cz> X-Mailer: git-send-email 2.38.1 
In-Reply-To: <20221121171202.22080-1-vbabka@suse.cz> References: <20221121171202.22080-1-vbabka@suse.cz> MIME-Version: 1.0 X-Spam-Status: No, score=-3.7 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_SOFTFAIL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750126654139277583?= X-GMAIL-MSGID: =?utf-8?q?1750126654139277583?= In the following patch we want to introduce CONFIG_SLUB_TINY allocation paths that don't use the percpu slab. To prepare, refactor the allocation functions: Split out __slab_alloc_node() from slab_alloc_node() where the former does the actual allocation and the latter calls the pre/post hooks. Analogically, split out __kmem_cache_alloc_bulk() from kmem_cache_alloc_bulk(). Signed-off-by: Vlastimil Babka Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- mm/slub.c | 127 +++++++++++++++++++++++++++++++++--------------------- 1 file changed, 77 insertions(+), 50 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index fd56d7cca9c2..5677db3f6d15 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2907,10 +2907,10 @@ static unsigned long count_partial(struct kmem_cache_node *n, } #endif /* CONFIG_SLUB_DEBUG || SLAB_SUPPORTS_SYSFS */ +#ifdef CONFIG_SLUB_DEBUG static noinline void slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid) { -#ifdef CONFIG_SLUB_DEBUG static DEFINE_RATELIMIT_STATE(slub_oom_rs, DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); int node; @@ -2941,8 +2941,11 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid) pr_warn(" node %d: slabs: %ld, objs: %ld, free: %ld\n", node, nr_slabs, nr_objs, nr_free); } -#endif } +#else /* CONFIG_SLUB_DEBUG */ +static inline void +slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid) { } +#endif static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags) { @@ -3239,45 +3242,13 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, return p; } -/* - * If the object has been wiped upon free, make sure it's fully initialized by - * zeroing out freelist pointer. - */ -static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s, - void *obj) -{ - if (unlikely(slab_want_init_on_free(s)) && obj) - memset((void *)((char *)kasan_reset_tag(obj) + s->offset), - 0, sizeof(void *)); -} - -/* - * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc) - * have the fastpath folded into their functions. So no function call - * overhead for requests that can be satisfied on the fastpath. - * - * The fastpath works by first checking if the lockless freelist can be used. - * If not then __slab_alloc is called for slow processing. - * - * Otherwise we can simply pick the next object from the lockless free list. 
- */ -static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru, +static __always_inline void *__slab_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node, unsigned long addr, size_t orig_size) { - void *object; struct kmem_cache_cpu *c; struct slab *slab; unsigned long tid; - struct obj_cgroup *objcg = NULL; - bool init = false; - - s = slab_pre_alloc_hook(s, lru, &objcg, 1, gfpflags); - if (!s) - return NULL; - - object = kfence_alloc(s, orig_size, gfpflags); - if (unlikely(object)) - goto out; + void *object; redo: /* @@ -3347,6 +3318,48 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l stat(s, ALLOC_FASTPATH); } + return object; +} + +/* + * If the object has been wiped upon free, make sure it's fully initialized by + * zeroing out freelist pointer. + */ +static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s, + void *obj) +{ + if (unlikely(slab_want_init_on_free(s)) && obj) + memset((void *)((char *)kasan_reset_tag(obj) + s->offset), + 0, sizeof(void *)); +} + +/* + * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc) + * have the fastpath folded into their functions. So no function call + * overhead for requests that can be satisfied on the fastpath. + * + * The fastpath works by first checking if the lockless freelist can be used. + * If not then __slab_alloc is called for slow processing. + * + * Otherwise we can simply pick the next object from the lockless free list. + */ +static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru, + gfp_t gfpflags, int node, unsigned long addr, size_t orig_size) +{ + void *object; + struct obj_cgroup *objcg = NULL; + bool init = false; + + s = slab_pre_alloc_hook(s, lru, &objcg, 1, gfpflags); + if (!s) + return NULL; + + object = kfence_alloc(s, orig_size, gfpflags); + if (unlikely(object)) + goto out; + + object = __slab_alloc_node(s, gfpflags, node, addr, orig_size); + maybe_wipe_obj_freeptr(s, object); init = slab_want_init_on_alloc(gfpflags, s); @@ -3799,18 +3812,12 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p) } EXPORT_SYMBOL(kmem_cache_free_bulk); -/* Note that interrupts must be enabled when calling this function. */ -int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, - void **p) +static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, + size_t size, void **p, struct obj_cgroup *objcg) { struct kmem_cache_cpu *c; int i; - struct obj_cgroup *objcg = NULL; - /* memcg and kmem_cache debug support */ - s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags); - if (unlikely(!s)) - return false; /* * Drain objects in the per cpu slab, while disabling local * IRQs, which protects against PREEMPT and interrupts @@ -3864,18 +3871,38 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, local_unlock_irq(&s->cpu_slab->lock); slub_put_cpu_ptr(s->cpu_slab); - /* - * memcg and kmem_cache debug support and memory initialization. - * Done outside of the IRQ disabled fastpath loop. - */ - slab_post_alloc_hook(s, objcg, flags, size, p, - slab_want_init_on_alloc(flags, s)); return i; + error: slub_put_cpu_ptr(s->cpu_slab); slab_post_alloc_hook(s, objcg, flags, i, p, false); kmem_cache_free_bulk(s, i, p); return 0; + +} + +/* Note that interrupts must be enabled when calling this function. 
*/ +int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, + void **p) +{ + int i; + struct obj_cgroup *objcg = NULL; + + /* memcg and kmem_cache debug support */ + s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags); + if (unlikely(!s)) + return false; + + i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg); + + /* + * memcg and kmem_cache debug support and memory initialization. + * Done outside of the IRQ disabled fastpath loop. + */ + if (i != 0) + slab_post_alloc_hook(s, objcg, flags, size, p, + slab_want_init_on_alloc(flags, s)); + return i; } EXPORT_SYMBOL(kmem_cache_alloc_bulk); From patchwork Mon Nov 21 17:12:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 23954 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1718115wrr; Mon, 21 Nov 2022 09:14:42 -0800 (PST) X-Google-Smtp-Source: AA0mqf73Xr++1rHHsxOY9P5Y5wf0T0/ovPjNxV/Jk5quyOW4rgm26MJRrpXHp6CTKsneWeDn0T7M X-Received: by 2002:a17:903:1342:b0:186:90a7:6b75 with SMTP id jl2-20020a170903134200b0018690a76b75mr2513426plb.23.1669050881925; Mon, 21 Nov 2022 09:14:41 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669050881; cv=none; d=google.com; s=arc-20160816; b=ghCseIDws3YHJNgEpow+voNNr/+Jf5hGRAC/HicIw9npn6LgztunSMkA1N/AgmkefA r/qlCXiOAQbQy6/RG+F101BC/GJhaU4SLjUz3ctQ1tTaynyD+xKbjQ9fd5t1QUywWx63 qQkqZ1JOEECo5hc47R67vY/UcwKP4K12WkAC3KG0ApmwEal6PB2tuh+CdqMP8G6O5NQb AGxLw3pXVr6qh0OLrFKaQNc9jYsFBLXdrYXABaFSTRZ4ysQGJR85W2xUtVeoUyaH1ihF W2d0drJagfyYjZamGv+iLn3cMurQqQm9TcNFi+boObrgiUddMsGo61McD0bOof0NbuYb Cgcg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-signature; bh=SMmNopRYRfA37kq/EN9aG18MOnK/i5uLhKJ1sge8xAA=; b=T7hN080nmFNo+qaPMfgn0p5ZEhTwmM5WKsAJEVzzTXeKSjBj8vB8t23TMpfuSf9vmo Vtc/B8yQHgS9ZrPU1WCZdErRwMJF8V7yZaOXf/TGPNSqejdFnWtoQZSrwkZ28oiLO6FD X5lBdIZ2pAUpQ+Z2nIHl86j5pfI15wuoeWvzGWx83pQVdlqdzwcze6Eg/nBG9BPZrBui A13bTXvjLGhdY1z1CL+CF9ZVh4v9ecEKRKI15HAyU4ZHTYKWm6yt+w2eMYu7Ve6hyEir vSVf325Dp3IuKaSq1HvFNkjnWL1vuj72s+2ASM2ueh8uQ/gRoQyVyjFSULqN3viSDoKA V7Zw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=p6VpLMdM; dkim=neutral (no key) header.i=@suse.cz header.s=susede2_ed25519; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id b8-20020a1709027e0800b001865eac3459si10343010plm.460.2022.11.21.09.14.27; Mon, 21 Nov 2022 09:14:41 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=p6VpLMdM; dkim=neutral (no key) header.i=@suse.cz header.s=susede2_ed25519; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231301AbiKURNJ (ORCPT + 99 others); Mon, 21 Nov 2022 12:13:09 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33306 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230450AbiKURMX (ORCPT ); Mon, 21 Nov 2022 12:12:23 -0500 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8850BCB96C for ; Mon, 21 Nov 2022 09:12:17 -0800 (PST) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 68E5C220D5; Mon, 21 Nov 2022 17:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1669050732; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SMmNopRYRfA37kq/EN9aG18MOnK/i5uLhKJ1sge8xAA=; b=p6VpLMdMxN1kJ4n9jv3xcJa8KnPdeS9ZGmhAlu1UHcCH6DEBqPKsdNwZkAaWRSKwPN4y08 xDURlE0TZATHKwd51J4Q3qmb82ACWWzgCQykDg9AQEbIzdj6A3VKWm5nw2Gjq5Uj/LWlcx jaID3ty2j28Z+LUc+mLFSclopdkoY8k= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1669050732; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SMmNopRYRfA37kq/EN9aG18MOnK/i5uLhKJ1sge8xAA=; b=SujXFbAWs2lGrzR/O4yA0XV7pl+hnwj3LAsswVaOhswJrHrH7T9uxC6qZPBVs/RqNBplWo 6TymHqOKZvWaJcAQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3F9F11377F; Mon, 21 Nov 2022 17:12:12 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id iBzjDmyxe2MQeQAAMHmgww (envelope-from ); Mon, 21 Nov 2022 17:12:12 +0000 From: Vlastimil Babka To: Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin , Andrew Morton , Linus Torvalds , Matthew Wilcox , patches@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka Subject: [PATCH 10/12] mm, slub: remove percpu slabs with CONFIG_SLUB_TINY Date: Mon, 21 Nov 2022 18:12:00 +0100 Message-Id: <20221121171202.22080-11-vbabka@suse.cz> X-Mailer: git-send-email 2.38.1 In-Reply-To: 
<20221121171202.22080-1-vbabka@suse.cz> References: <20221121171202.22080-1-vbabka@suse.cz> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750126697268989489?= X-GMAIL-MSGID: =?utf-8?q?1750126697268989489?= SLUB gets most of its scalability by percpu slabs. However for CONFIG_SLUB_TINY the goal is minimal memory overhead, not scalability. Thus, #ifdef out the whole kmem_cache_cpu percpu structure and associated code. Additionally to the slab page savings, this reduces percpu allocator usage, and code size. This change builds on recent commit c7323a5ad078 ("mm/slub: restrict sysfs validation to debug caches and make it safe"), as caches with enabled debugging also avoid percpu slabs and all allocations and freeing ends up working with the partial list. With a bit more refactoring by the preceding patches, use the same code paths with CONFIG_SLUB_TINY. Signed-off-by: Vlastimil Babka Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- include/linux/slub_def.h | 4 ++ mm/slub.c | 102 +++++++++++++++++++++++++++++++++++++-- 2 files changed, 103 insertions(+), 3 deletions(-) diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index c186f25c8148..79df64eb054e 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -41,6 +41,7 @@ enum stat_item { CPU_PARTIAL_DRAIN, /* Drain cpu partial to node partial */ NR_SLUB_STAT_ITEMS }; +#ifndef CONFIG_SLUB_TINY /* * When changing the layout, make sure freelist and tid are still compatible * with this_cpu_cmpxchg_double() alignment requirements. @@ -57,6 +58,7 @@ struct kmem_cache_cpu { unsigned stat[NR_SLUB_STAT_ITEMS]; #endif }; +#endif /* CONFIG_SLUB_TINY */ #ifdef CONFIG_SLUB_CPU_PARTIAL #define slub_percpu_partial(c) ((c)->partial) @@ -88,7 +90,9 @@ struct kmem_cache_order_objects { * Slab cache management. */ struct kmem_cache { +#ifndef CONFIG_SLUB_TINY struct kmem_cache_cpu __percpu *cpu_slab; +#endif /* Used for retrieving partial slabs, etc. */ slab_flags_t flags; unsigned long min_partial; diff --git a/mm/slub.c b/mm/slub.c index 5677db3f6d15..7f1cd702c3b4 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -337,10 +337,12 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si) */ static nodemask_t slab_nodes; +#ifndef CONFIG_SLUB_TINY /* * Workqueue used for flush_cpu_slab(). 
*/ static struct workqueue_struct *flushwq; +#endif /******************************************************************** * Core slab cache functions @@ -386,10 +388,12 @@ static inline void *get_freepointer(struct kmem_cache *s, void *object) return freelist_dereference(s, object + s->offset); } +#ifndef CONFIG_SLUB_TINY static void prefetch_freepointer(const struct kmem_cache *s, void *object) { prefetchw(object + s->offset); } +#endif /* * When running under KMSAN, get_freepointer_safe() may return an uninitialized @@ -1681,11 +1685,13 @@ static inline void inc_slabs_node(struct kmem_cache *s, int node, static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects) {} +#ifndef CONFIG_SLUB_TINY static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { return false; } +#endif #endif /* CONFIG_SLUB_DEBUG */ /* @@ -2219,7 +2225,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, if (!pfmemalloc_match(slab, pc->flags)) continue; - if (kmem_cache_debug(s)) { + if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) { object = alloc_single_from_partial(s, n, slab, pc->orig_size); if (object) @@ -2334,6 +2340,8 @@ static void *get_partial(struct kmem_cache *s, int node, struct partial_context return get_any_partial(s, pc); } +#ifndef CONFIG_SLUB_TINY + #ifdef CONFIG_PREEMPTION /* * Calculate the next globally unique transaction for disambiguation @@ -2347,7 +2355,7 @@ static void *get_partial(struct kmem_cache *s, int node, struct partial_context * different cpus. */ #define TID_STEP 1 -#endif +#endif /* CONFIG_PREEMPTION */ static inline unsigned long next_tid(unsigned long tid) { @@ -2808,6 +2816,13 @@ static int slub_cpu_dead(unsigned int cpu) return 0; } +#else /* CONFIG_SLUB_TINY */ +static inline void flush_all_cpus_locked(struct kmem_cache *s) { } +static inline void flush_all(struct kmem_cache *s) { } +static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { } +static inline int slub_cpu_dead(unsigned int cpu) { return 0; } +#endif /* CONFIG_SLUB_TINY */ + /* * Check if the objects in a per cpu structure fit numa * locality expectations. @@ -2955,6 +2970,7 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags) return true; } +#ifndef CONFIG_SLUB_TINY /* * Check the slab->freelist and either transfer the freelist to the * per cpu freelist or deactivate the slab. 
@@ -3320,6 +3336,33 @@ static __always_inline void *__slab_alloc_node(struct kmem_cache *s, return object; } +#else /* CONFIG_SLUB_TINY */ +static void *__slab_alloc_node(struct kmem_cache *s, + gfp_t gfpflags, int node, unsigned long addr, size_t orig_size) +{ + struct partial_context pc; + struct slab *slab; + void *object; + + pc.flags = gfpflags; + pc.slab = &slab; + pc.orig_size = orig_size; + object = get_partial(s, node, &pc); + + if (object) + return object; + + slab = new_slab(s, gfpflags, node); + if (unlikely(!slab)) { + slab_out_of_memory(s, gfpflags, node); + return NULL; + } + + object = alloc_single_from_new_slab(s, slab, orig_size); + + return object; +} +#endif /* CONFIG_SLUB_TINY */ /* * If the object has been wiped upon free, make sure it's fully initialized by @@ -3503,7 +3546,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, if (kfence_free(head)) return; - if (kmem_cache_debug(s)) { + if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) { free_to_partial_list(s, slab, head, tail, cnt, addr); return; } @@ -3604,6 +3647,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, discard_slab(s, slab); } +#ifndef CONFIG_SLUB_TINY /* * Fastpath with forced inlining to produce a kfree and kmem_cache_free that * can perform fastpath freeing without additional function calls. @@ -3678,6 +3722,16 @@ static __always_inline void do_slab_free(struct kmem_cache *s, } stat(s, FREE_FASTPATH); } +#else /* CONFIG_SLUB_TINY */ +static void do_slab_free(struct kmem_cache *s, + struct slab *slab, void *head, void *tail, + int cnt, unsigned long addr) +{ + void *tail_obj = tail ? : head; + + __slab_free(s, slab, head, tail_obj, cnt, addr); +} +#endif /* CONFIG_SLUB_TINY */ static __always_inline void slab_free(struct kmem_cache *s, struct slab *slab, void *head, void *tail, void **p, int cnt, @@ -3812,6 +3866,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p) } EXPORT_SYMBOL(kmem_cache_free_bulk); +#ifndef CONFIG_SLUB_TINY static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p, struct obj_cgroup *objcg) { @@ -3880,6 +3935,36 @@ static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, return 0; } +#else /* CONFIG_SLUB_TINY */ +static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, + size_t size, void **p, struct obj_cgroup *objcg) +{ + int i; + + for (i = 0; i < size; i++) { + void *object = kfence_alloc(s, s->object_size, flags); + + if (unlikely(object)) { + p[i] = object; + continue; + } + + p[i] = __slab_alloc_node(s, flags, NUMA_NO_NODE, + _RET_IP_, s->object_size); + if (unlikely(!p[i])) + goto error; + + maybe_wipe_obj_freeptr(s, p[i]); + } + + return i; + +error: + slab_post_alloc_hook(s, objcg, flags, i, p, false); + kmem_cache_free_bulk(s, i, p); + return 0; +} +#endif /* CONFIG_SLUB_TINY */ /* Note that interrupts must be enabled when calling this function. 
*/ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, @@ -4059,6 +4144,7 @@ init_kmem_cache_node(struct kmem_cache_node *n) #endif } +#ifndef CONFIG_SLUB_TINY static inline int alloc_kmem_cache_cpus(struct kmem_cache *s) { BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE < @@ -4078,6 +4164,12 @@ static inline int alloc_kmem_cache_cpus(struct kmem_cache *s) return 1; } +#else +static inline int alloc_kmem_cache_cpus(struct kmem_cache *s) +{ + return 1; +} +#endif /* CONFIG_SLUB_TINY */ static struct kmem_cache *kmem_cache_node; @@ -4140,7 +4232,9 @@ static void free_kmem_cache_nodes(struct kmem_cache *s) void __kmem_cache_release(struct kmem_cache *s) { cache_random_seq_destroy(s); +#ifndef CONFIG_SLUB_TINY free_percpu(s->cpu_slab); +#endif free_kmem_cache_nodes(s); } @@ -4917,8 +5011,10 @@ void __init kmem_cache_init(void) void __init kmem_cache_init_late(void) { +#ifndef CONFIG_SLUB_TINY flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0); WARN_ON(!flushwq); +#endif } struct kmem_cache * From patchwork Mon Nov 21 17:12:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 23953 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1717927wrr; Mon, 21 Nov 2022 09:14:28 -0800 (PST) X-Google-Smtp-Source: AA0mqf6EogsAVxXVriWvgNGRIYGP7jO8hyb2uSm9ZIWTWgbwPaewdbyGf5PM6SaDLXDO5NAN/FfC X-Received: by 2002:a05:6a00:1409:b0:56b:e1d8:e7a1 with SMTP id l9-20020a056a00140900b0056be1d8e7a1mr3906940pfu.28.1669050868132; Mon, 21 Nov 2022 09:14:28 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669050868; cv=none; d=google.com; s=arc-20160816; b=V9YpJPwVilr12U24r+IHCa1bfwr4fPc9sT+ZK3ci7s/OLjpBpHqNublO5fseV/zBsn XYRczZUn4caywLTo1zXw3/wgL8JCsPOPOy2qiXqDWqQFwz3HZ8nL2Pq3lnP9CoGWzmcu OakEKPJ3WNvagltldnipvGD5kiGWi3RwIbp//kLNI2en8IjUdtsG2D+EH68Jfz8jLEG+ 5/Wx8QrWQSTvr/6ZFKZjEfmHUu9GD8D5wKDgVdiR5C4nGhv+t4NWxCZJ9iuT0S5y3axH /OlXRSebOYrGPUtV/B1aMEf7X79dTc+Gdv+G0BlEta37bsCz0EhDjlzm5TbnIq+j0zXn KtaQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-signature; bh=X3WZS4NhVY+Kjo4XdjOkZd1dtlHURo50T/wAh28ytNE=; b=GAXQRhbL40jcER5vCUvCfwp+uroa7s+TuVrHm6AZ8G3IICJq8qSbFSnh8j/ZqbbTZb YI3yZgneNPNowB3L1Xar480e4pPqrykEmsbqSYC+C4lfaiS9GKSb8PFYw9D4fGNf33uH zpSFJisn4GIv1Mm/xHRD+hyibci8SmYROSWiig2l/pLflrrf13mkZpic7A6A9vo11vK/ jL2oAGqGzsL1RWRId3nH99aGGg0GD3+Q3hE7w3B+NjCuCdXLcWpmi2GRa6K+bEXsaV/v XoJw/3fuo6gjC/0nL2RgY1wYt1DvRhMUM/aJLxSk3qndl2oF2NXxqqLAnOsszV4gXxDc QfTA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=rdmY920q; dkim=neutral (no key) header.i=@suse.cz header.s=susede2_ed25519; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id 78-20020a630251000000b004532834e821si11558772pgc.598.2022.11.21.09.14.14; Mon, 21 Nov 2022 09:14:28 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=rdmY920q; dkim=neutral (no key) header.i=@suse.cz header.s=susede2_ed25519; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231295AbiKURNE (ORCPT + 99 others); Mon, 21 Nov 2022 12:13:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33144 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229721AbiKURMX (ORCPT ); Mon, 21 Nov 2022 12:12:23 -0500 Received: from smtp-out2.suse.de (smtp-out2.suse.de [IPv6:2001:67c:2178:6::1d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 65E5FCEB91 for ; Mon, 21 Nov 2022 09:12:17 -0800 (PST) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 9AC7B1F8BB; Mon, 21 Nov 2022 17:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1669050732; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=X3WZS4NhVY+Kjo4XdjOkZd1dtlHURo50T/wAh28ytNE=; b=rdmY920qgTRyWMDU9Lvdi9t9OF9/QhFDxMqQw+V1STkr7+GbgdShYi31aii2Wq88xE/kCw +zaHdMmfWfiK81ohGzq3zHdcAbX6vepPHK82mRoA1f479p0Ly63ipmonfkzxoY2VjhfVL/ /6EQw1GjfEznRtmefD26Q5u9SXxfX8Y= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1669050732; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=X3WZS4NhVY+Kjo4XdjOkZd1dtlHURo50T/wAh28ytNE=; b=TkTS9+5mYVWVSQ0Z3ri95SsgD2DGEbJkaROTw88M754wmWvi7YsmGDX8ymuzHS3hl3KZNO D+slbCc0iTNOp8AA== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6C22C13B03; Mon, 21 Nov 2022 17:12:12 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id AJzHGWyxe2MQeQAAMHmgww (envelope-from ); Mon, 21 Nov 2022 17:12:12 +0000 From: Vlastimil Babka To: Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin , Andrew Morton , Linus Torvalds , Matthew Wilcox , patches@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka Subject: [PATCH 11/12] mm, slub: don't aggressively inline with CONFIG_SLUB_TINY Date: Mon, 21 Nov 2022 18:12:01 +0100 Message-Id: <20221121171202.22080-12-vbabka@suse.cz> X-Mailer: git-send-email 2.38.1 In-Reply-To: 
<20221121171202.22080-1-vbabka@suse.cz> References: <20221121171202.22080-1-vbabka@suse.cz> MIME-Version: 1.0 X-Spam-Status: No, score=-3.7 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_SOFTFAIL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750126682875431498?= X-GMAIL-MSGID: =?utf-8?q?1750126682875431498?= SLUB fastpaths use __always_inline to avoid function calls. With CONFIG_SLUB_TINY we would rather save the memory. Add a __fastpath_inline macro that's __always_inline normally but empty with CONFIG_SLUB_TINY. bloat-o-meter results on x86_64 mm/slub.o: add/remove: 3/1 grow/shrink: 1/8 up/down: 865/-1784 (-919) Function old new delta kmem_cache_free 20 281 +261 slab_alloc_node.isra - 245 +245 slab_free.constprop.isra - 231 +231 __kmem_cache_alloc_lru.isra - 128 +128 __kmem_cache_release 88 83 -5 __kmem_cache_create 1446 1436 -10 __kmem_cache_free 271 142 -129 kmem_cache_alloc_node 330 127 -203 kmem_cache_free_bulk.part 826 613 -213 __kmem_cache_alloc_node 230 10 -220 kmem_cache_alloc_lru 325 12 -313 kmem_cache_alloc 325 10 -315 kmem_cache_free.part 376 - -376 Total: Before=26103, After=25184, chg -3.52% Signed-off-by: Vlastimil Babka Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- mm/slub.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 7f1cd702c3b4..d54466e76503 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -187,6 +187,12 @@ do { \ #define USE_LOCKLESS_FAST_PATH() (false) #endif +#ifndef CONFIG_SLUB_TINY +#define __fastpath_inline __always_inline +#else +#define __fastpath_inline +#endif + #ifdef CONFIG_SLUB_DEBUG #ifdef CONFIG_SLUB_DEBUG_ON DEFINE_STATIC_KEY_TRUE(slub_debug_enabled); @@ -3386,7 +3392,7 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s, * * Otherwise we can simply pick the next object from the lockless free list. 
*/ -static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru, +static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags, int node, unsigned long addr, size_t orig_size) { void *object; @@ -3412,13 +3418,13 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l return object; } -static __always_inline void *slab_alloc(struct kmem_cache *s, struct list_lru *lru, +static __fastpath_inline void *slab_alloc(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags, unsigned long addr, size_t orig_size) { return slab_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, addr, orig_size); } -static __always_inline +static __fastpath_inline void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags) { @@ -3733,7 +3739,7 @@ static void do_slab_free(struct kmem_cache *s, } #endif /* CONFIG_SLUB_TINY */ -static __always_inline void slab_free(struct kmem_cache *s, struct slab *slab, +static __fastpath_inline void slab_free(struct kmem_cache *s, struct slab *slab, void *head, void *tail, void **p, int cnt, unsigned long addr) { From patchwork Mon Nov 21 17:12:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 23956 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1718307wrr; Mon, 21 Nov 2022 09:15:01 -0800 (PST) X-Google-Smtp-Source: AA0mqf5+g48HqPkbEOxKbxbRBNFcU/OogVZW+/haK3RDOpedkewleNfpSZ1JWswtmGqHZJBtwn2C X-Received: by 2002:a17:906:d1c7:b0:7ad:fd3e:7762 with SMTP id bs7-20020a170906d1c700b007adfd3e7762mr1690702ejb.717.1669050901339; Mon, 21 Nov 2022 09:15:01 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669050901; cv=none; d=google.com; s=arc-20160816; b=vLspSJKfbIjpaZSU4V0gJbmJNZ6vO8RXa9YBPakfaOuCIeFO48z9Gr9YVGSZ5yIGQU Wf1rbo5+GX3qW0ignltW3Yx+NXwOYep3oghiTN6sT3Orlk2WBo900QF+VQ12Ww8UrpSm +0ObJwSzRcfJwPt2pKQTRH4V1aJ8DDawBg3np1K6bAOyMnhKMpQ5pzT4LQQwj/o5V/WL 2Q2IfiA/IsyvTIJjHPf6OB2JKlqpog91th+Of01/zCgxAHZl7Rrkkof4Y6tp9ONm1Jca l0hMfUZSLrfETdbhQQo8LBlrDFAbqswRc0f84U3VaaQsjgop2z2jc5FTFBwKNAa2Coif ROug== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-signature; bh=dnIAeIAMlXNqqoq+MRWq7Ss/UQhtQKGy702t3mo/oSI=; b=Swxz1Aydx4QjGs/zutFRORFj0ZQWiFSj7ROHeyH6QBmZm7RFC1WnxJ92EWDpjP3BpZ fedc6LWivLWBqIMNwiuQH1FGnhYRR+ctjIRQAT2dS6WZnv/Mof0wPnA+vhs6/gOdGc43 R5Y4wADz2O7ngxyc9Dmd3wAttSiPAtxjAPbIayz0Nowq+lLmP6q/se4IQ68Dbp73czJM zNy4JwRLrrhiUxdzrSwpvUfEOrxi0mqMurr4Jlh1AmW3MQGU0nE7yXS4WSqmHHMyiWbe LoCf6QNnRf75lVIhvrQPPzUkXjsFF638sa05Ti5K2fsiXbsIPWYYU4UGHkSQsVxjEqMl V0sg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=vtxbq3qy; dkim=neutral (no key) header.i=@suse.cz header.s=susede2_ed25519 header.b=NwLapaWO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h8-20020a0564020e0800b004613ae68ca0si8822135edh.442.2022.11.21.09.14.36; Mon, 21 Nov 2022 09:15:01 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@suse.cz header.s=susede2_rsa header.b=vtxbq3qy; dkim=neutral (no key) header.i=@suse.cz header.s=susede2_ed25519 header.b=NwLapaWO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231163AbiKURNQ (ORCPT + 99 others); Mon, 21 Nov 2022 12:13:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33312 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230491AbiKURMY (ORCPT ); Mon, 21 Nov 2022 12:12:24 -0500 Received: from smtp-out2.suse.de (smtp-out2.suse.de [IPv6:2001:67c:2178:6::1d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9ACA2CB9FF; Mon, 21 Nov 2022 09:12:17 -0800 (PST) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 03BA61F8C2; Mon, 21 Nov 2022 17:12:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1669050733; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=dnIAeIAMlXNqqoq+MRWq7Ss/UQhtQKGy702t3mo/oSI=; b=vtxbq3qyPUuQaVPk7kHB/WHSzZIrm9Y2mRjQyHWtIiBDU7Afxc2YO/cJUrhKM07bi5kT1t 40aACKtw/WJdDmj2ZXllFn6f1VotHkRdEqURMW0H34Ss1wvgG8iiIr/mSFTUVEEh5WOYvg piYYTFWU9jX9Dh9MihoQcgOZUjHxKRM= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1669050733; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=dnIAeIAMlXNqqoq+MRWq7Ss/UQhtQKGy702t3mo/oSI=; b=NwLapaWOl3WOxfwvprXJYVdZjoiksjS26EaUETGccE0tN5nvi9CECdpuFNHdQnKo+IdQkF n0FmW/ncPlubHHAw== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9D2F51377F; Mon, 21 Nov 2022 17:12:12 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id +MO+JWyxe2MQeQAAMHmgww (envelope-from ); Mon, 21 Nov 2022 17:12:12 +0000 From: Vlastimil Babka To: Christoph Lameter , David Rientjes , Joonsoo Kim , Pekka Enberg Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin , Andrew Morton , Linus Torvalds , Matthew Wilcox , patches@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vlastimil Babka , Russell King , Aaro Koskinen , Janusz Krzysztofik , Tony Lindgren , Jonas Bonn , Stefan Kristiansson , Stafford Horne , Yoshinori Sato , Rich Felker , Arnd Bergmann , Josh Triplett , Conor Dooley , Damien Le Moal , 
Christophe Leroy , Geert Uytterhoeven , linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org, openrisc@lists.librecores.org, linux-riscv@lists.infradead.org, linux-sh@vger.kernel.org Subject: [PATCH 12/12] mm, slob: rename CONFIG_SLOB to CONFIG_SLOB_DEPRECATED Date: Mon, 21 Nov 2022 18:12:02 +0100 Message-Id: <20221121171202.22080-13-vbabka@suse.cz> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221121171202.22080-1-vbabka@suse.cz> References: <20221121171202.22080-1-vbabka@suse.cz> MIME-Version: 1.0 X-Spam-Status: No, score=-3.7 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_SOFTFAIL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750126717585323702?= X-GMAIL-MSGID: =?utf-8?q?1750126717585323702?= As explained in [1], we would like to remove SLOB if possible. - There are no known users that need its somewhat lower memory footprint so much that they cannot handle SLUB (after some modifications by the previous patches) instead. - It is an extra maintenance burden, and a number of features are incompatible with it. - It blocks the API improvement of allowing kfree() on objects allocated via kmem_cache_alloc(). As the first step, rename the CONFIG_SLOB option in the slab allocator configuration choice to CONFIG_SLOB_DEPRECATED. Add CONFIG_SLOB depending on CONFIG_SLOB_DEPRECATED as an internal option to avoid code churn. This will cause existing .config files and defconfigs with CONFIG_SLOB=y to silently switch to the default (and recommended replacement) SLUB, while still allowing SLOB to be configured by anyone that notices and needs it. But those should contact the slab maintainers and linux-mm@kvack.org as explained in the updated help. With no valid objections, the plan is to update the existing defconfigs to SLUB and remove SLOB in a few cycles. To make SLUB more suitable replacement for SLOB, a CONFIG_SLUB_TINY option was introduced to limit SLUB's memory overhead. There is a number of defconfigs specifying CONFIG_SLOB=y. As part of this patch, update them to select CONFIG_SLUB and CONFIG_SLUB_TINY. 
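
The reason the extra internal option avoids code churn can be seen with a small stand-in: C sources keep testing the old CONFIG_SLOB symbol, which remains defined whenever the renamed, user-visible option is selected. This is only a user-space sketch with hand-written macros; in the real tree both symbols are generated from the mm/Kconfig change at the end of this patch.

	#include <stdio.h>

	#define CONFIG_SLOB_DEPRECATED 1	/* the renamed, user-visible option */

	#ifdef CONFIG_SLOB_DEPRECATED		/* mimics: config SLOB, default y, depends on SLOB_DEPRECATED */
	#define CONFIG_SLOB 1			/* internal symbol keeps its old name */
	#endif

	int main(void)
	{
	#ifdef CONFIG_SLOB
		puts("existing #ifdef CONFIG_SLOB sites still compile unchanged");
	#else
		puts("SLOB not selected");
	#endif
		return 0;
	}
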
[1] https://lore.kernel.org/all/b35c3f82-f67b-2103-7d82-7a7ba7521439@suse.cz/ Cc: Russell King Cc: Aaro Koskinen Cc: Janusz Krzysztofik Cc: Tony Lindgren Cc: Jonas Bonn Cc: Stefan Kristiansson Cc: Stafford Horne Cc: Yoshinori Sato Cc: Rich Felker Cc: Arnd Bergmann Cc: Josh Triplett Cc: Conor Dooley Cc: Damien Le Moal Cc: Christophe Leroy Cc: Geert Uytterhoeven Cc: Cc: Cc: Cc: Cc: Signed-off-by: Vlastimil Babka Acked-by: Aaro Koskinen # OMAP1 Reviewed-by: Damien Le Moal Tested-by: Damien Le Moal Acked-by: Arnd Bergmann Acked-by: Roman Gushchin Acked-by: Palmer Dabbelt Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- arch/arm/configs/clps711x_defconfig | 3 ++- arch/arm/configs/collie_defconfig | 3 ++- arch/arm/configs/multi_v4t_defconfig | 3 ++- arch/arm/configs/omap1_defconfig | 3 ++- arch/arm/configs/pxa_defconfig | 3 ++- arch/arm/configs/tct_hammer_defconfig | 3 ++- arch/arm/configs/xcep_defconfig | 3 ++- arch/openrisc/configs/or1ksim_defconfig | 3 ++- arch/openrisc/configs/simple_smp_defconfig | 3 ++- arch/riscv/configs/nommu_k210_defconfig | 3 ++- arch/riscv/configs/nommu_k210_sdcard_defconfig | 3 ++- arch/riscv/configs/nommu_virt_defconfig | 3 ++- arch/sh/configs/rsk7201_defconfig | 3 ++- arch/sh/configs/rsk7203_defconfig | 3 ++- arch/sh/configs/se7206_defconfig | 3 ++- arch/sh/configs/shmin_defconfig | 3 ++- arch/sh/configs/shx3_defconfig | 3 ++- kernel/configs/tiny.config | 5 +++-- mm/Kconfig | 17 +++++++++++++++-- 19 files changed, 52 insertions(+), 21 deletions(-) diff --git a/arch/arm/configs/clps711x_defconfig b/arch/arm/configs/clps711x_defconfig index 92481b2a88fa..adcee238822a 100644 --- a/arch/arm/configs/clps711x_defconfig +++ b/arch/arm/configs/clps711x_defconfig @@ -14,7 +14,8 @@ CONFIG_ARCH_EDB7211=y CONFIG_ARCH_P720T=y CONFIG_AEABI=y # CONFIG_COREDUMP is not set -CONFIG_SLOB=y +CONFIG_SLUB=y +CONFIG_SLUB_TINY=y CONFIG_NET=y CONFIG_PACKET=y CONFIG_UNIX=y diff --git a/arch/arm/configs/collie_defconfig b/arch/arm/configs/collie_defconfig index 2a2d2cb3ce2e..69341c33e0cc 100644 --- a/arch/arm/configs/collie_defconfig +++ b/arch/arm/configs/collie_defconfig @@ -13,7 +13,8 @@ CONFIG_CMDLINE="noinitrd root=/dev/mtdblock2 rootfstype=jffs2 fbcon=rotate:1" CONFIG_FPE_NWFPE=y CONFIG_PM=y # CONFIG_SWAP is not set -CONFIG_SLOB=y +CONFIG_SLUB=y +CONFIG_SLUB_TINY=y CONFIG_NET=y CONFIG_PACKET=y CONFIG_UNIX=y diff --git a/arch/arm/configs/multi_v4t_defconfig b/arch/arm/configs/multi_v4t_defconfig index e2fd822f741a..b60000a89aff 100644 --- a/arch/arm/configs/multi_v4t_defconfig +++ b/arch/arm/configs/multi_v4t_defconfig @@ -25,7 +25,8 @@ CONFIG_ARM_CLPS711X_CPUIDLE=y CONFIG_JUMP_LABEL=y CONFIG_PARTITION_ADVANCED=y # CONFIG_COREDUMP is not set -CONFIG_SLOB=y +CONFIG_SLUB=y +CONFIG_SLUB_TINY=y CONFIG_MTD=y CONFIG_MTD_CMDLINE_PARTS=y CONFIG_MTD_BLOCK=y diff --git a/arch/arm/configs/omap1_defconfig b/arch/arm/configs/omap1_defconfig index 70511fe4b3ec..246f1bba7df5 100644 --- a/arch/arm/configs/omap1_defconfig +++ b/arch/arm/configs/omap1_defconfig @@ -42,7 +42,8 @@ CONFIG_MODULE_FORCE_UNLOAD=y CONFIG_PARTITION_ADVANCED=y CONFIG_BINFMT_MISC=y # CONFIG_SWAP is not set -CONFIG_SLOB=y +CONFIG_SLUB=y +CONFIG_SLUB_TINY=y # CONFIG_VM_EVENT_COUNTERS is not set CONFIG_NET=y CONFIG_PACKET=y diff --git a/arch/arm/configs/pxa_defconfig b/arch/arm/configs/pxa_defconfig index d60cc9cc4c21..0a0f12df40b5 100644 --- a/arch/arm/configs/pxa_defconfig +++ b/arch/arm/configs/pxa_defconfig @@ -49,7 +49,8 @@ CONFIG_PARTITION_ADVANCED=y CONFIG_LDM_PARTITION=y CONFIG_CMDLINE_PARTITION=y CONFIG_BINFMT_MISC=y 
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 # CONFIG_COMPACTION is not set
 CONFIG_NET=y
 CONFIG_PACKET=y
diff --git a/arch/arm/configs/tct_hammer_defconfig b/arch/arm/configs/tct_hammer_defconfig
index 3b29ae1fb750..6bd38b6f22c4 100644
--- a/arch/arm/configs/tct_hammer_defconfig
+++ b/arch/arm/configs/tct_hammer_defconfig
@@ -19,7 +19,8 @@ CONFIG_FPE_NWFPE=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 # CONFIG_SWAP is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
diff --git a/arch/arm/configs/xcep_defconfig b/arch/arm/configs/xcep_defconfig
index ea59e4b6bfc5..6bd9f71b71fc 100644
--- a/arch/arm/configs/xcep_defconfig
+++ b/arch/arm/configs/xcep_defconfig
@@ -26,7 +26,8 @@ CONFIG_MODULE_UNLOAD=y
 CONFIG_MODVERSIONS=y
 CONFIG_MODULE_SRCVERSION_ALL=y
 # CONFIG_BLOCK is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 # CONFIG_COMPAT_BRK is not set
 # CONFIG_VM_EVENT_COUNTERS is not set
 CONFIG_NET=y
diff --git a/arch/openrisc/configs/or1ksim_defconfig b/arch/openrisc/configs/or1ksim_defconfig
index 6e1e004047c7..0116e465238f 100644
--- a/arch/openrisc/configs/or1ksim_defconfig
+++ b/arch/openrisc/configs/or1ksim_defconfig
@@ -10,7 +10,8 @@ CONFIG_EXPERT=y
 # CONFIG_AIO is not set
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_COMPAT_BRK is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 CONFIG_MODULES=y
 # CONFIG_BLOCK is not set
 CONFIG_OPENRISC_BUILTIN_DTB="or1ksim"
diff --git a/arch/openrisc/configs/simple_smp_defconfig b/arch/openrisc/configs/simple_smp_defconfig
index ff49d868e040..b990cb6c9309 100644
--- a/arch/openrisc/configs/simple_smp_defconfig
+++ b/arch/openrisc/configs/simple_smp_defconfig
@@ -16,7 +16,8 @@ CONFIG_EXPERT=y
 # CONFIG_AIO is not set
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_COMPAT_BRK is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 CONFIG_MODULES=y
 # CONFIG_BLOCK is not set
 CONFIG_OPENRISC_BUILTIN_DTB="simple_smp"
diff --git a/arch/riscv/configs/nommu_k210_defconfig b/arch/riscv/configs/nommu_k210_defconfig
index 96fe8def644c..79b3ccd58ff0 100644
--- a/arch/riscv/configs/nommu_k210_defconfig
+++ b/arch/riscv/configs/nommu_k210_defconfig
@@ -25,7 +25,8 @@ CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_EMBEDDED=y
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_COMPAT_BRK is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 # CONFIG_MMU is not set
 CONFIG_SOC_CANAAN=y
 CONFIG_NONPORTABLE=y
diff --git a/arch/riscv/configs/nommu_k210_sdcard_defconfig b/arch/riscv/configs/nommu_k210_sdcard_defconfig
index 379740654373..6b80bb13b8ed 100644
--- a/arch/riscv/configs/nommu_k210_sdcard_defconfig
+++ b/arch/riscv/configs/nommu_k210_sdcard_defconfig
@@ -17,7 +17,8 @@ CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_EMBEDDED=y
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_COMPAT_BRK is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 # CONFIG_MMU is not set
 CONFIG_SOC_CANAAN=y
 CONFIG_NONPORTABLE=y
diff --git a/arch/riscv/configs/nommu_virt_defconfig b/arch/riscv/configs/nommu_virt_defconfig
index 1a56eda5ce46..4cf0f297091e 100644
--- a/arch/riscv/configs/nommu_virt_defconfig
+++ b/arch/riscv/configs/nommu_virt_defconfig
@@ -22,7 +22,8 @@ CONFIG_EXPERT=y
 # CONFIG_KALLSYMS is not set
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_COMPAT_BRK is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 # CONFIG_MMU is not set
 CONFIG_SOC_VIRT=y
 CONFIG_NONPORTABLE=y
diff --git a/arch/sh/configs/rsk7201_defconfig b/arch/sh/configs/rsk7201_defconfig
index 619c18699459..376e95fa77bc 100644
--- a/arch/sh/configs/rsk7201_defconfig
+++ b/arch/sh/configs/rsk7201_defconfig
@@ -10,7 +10,8 @@ CONFIG_USER_NS=y
 CONFIG_PID_NS=y
 CONFIG_BLK_DEV_INITRD=y
 # CONFIG_AIO is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 CONFIG_PROFILING=y
 CONFIG_MODULES=y
 # CONFIG_BLK_DEV_BSG is not set
diff --git a/arch/sh/configs/rsk7203_defconfig b/arch/sh/configs/rsk7203_defconfig
index d00fafc021e1..1d5fd67a3949 100644
--- a/arch/sh/configs/rsk7203_defconfig
+++ b/arch/sh/configs/rsk7203_defconfig
@@ -11,7 +11,8 @@ CONFIG_USER_NS=y
 CONFIG_PID_NS=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_KALLSYMS_ALL=y
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 CONFIG_PROFILING=y
 CONFIG_MODULES=y
 # CONFIG_BLK_DEV_BSG is not set
diff --git a/arch/sh/configs/se7206_defconfig b/arch/sh/configs/se7206_defconfig
index 122216123e63..78e0e7be57ee 100644
--- a/arch/sh/configs/se7206_defconfig
+++ b/arch/sh/configs/se7206_defconfig
@@ -21,7 +21,8 @@ CONFIG_BLK_DEV_INITRD=y
 CONFIG_KALLSYMS_ALL=y
 # CONFIG_ELF_CORE is not set
 # CONFIG_COMPAT_BRK is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 CONFIG_PROFILING=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
diff --git a/arch/sh/configs/shmin_defconfig b/arch/sh/configs/shmin_defconfig
index c0b6f40d01cc..e078b193a78a 100644
--- a/arch/sh/configs/shmin_defconfig
+++ b/arch/sh/configs/shmin_defconfig
@@ -9,7 +9,8 @@ CONFIG_LOG_BUF_SHIFT=14
 # CONFIG_FUTEX is not set
 # CONFIG_EPOLL is not set
 # CONFIG_SHMEM is not set
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_CPU_SUBTYPE_SH7706=y
 CONFIG_MEMORY_START=0x0c000000
diff --git a/arch/sh/configs/shx3_defconfig b/arch/sh/configs/shx3_defconfig
index 32ec6eb1eabc..aa353dff7f19 100644
--- a/arch/sh/configs/shx3_defconfig
+++ b/arch/sh/configs/shx3_defconfig
@@ -20,7 +20,8 @@ CONFIG_USER_NS=y
 CONFIG_PID_NS=y
 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_KALLSYMS_ALL=y
-CONFIG_SLOB=y
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
 CONFIG_PROFILING=y
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
diff --git a/kernel/configs/tiny.config b/kernel/configs/tiny.config
index 8a44b93da0f3..c2f9c912df1c 100644
--- a/kernel/configs/tiny.config
+++ b/kernel/configs/tiny.config
@@ -7,5 +7,6 @@ CONFIG_KERNEL_XZ=y
 # CONFIG_KERNEL_LZO is not set
 # CONFIG_KERNEL_LZ4 is not set
 # CONFIG_SLAB is not set
-# CONFIG_SLUB is not set
-CONFIG_SLOB=y
+# CONFIG_SLOB_DEPRECATED is not set
+CONFIG_SLUB=y
+CONFIG_SLUB_TINY=y
diff --git a/mm/Kconfig b/mm/Kconfig
index 5941cb34e30d..dcc49c69552f 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -219,17 +219,30 @@ config SLUB
 	   and has enhanced diagnostics. SLUB is the default choice for
 	   a slab allocator.
 
-config SLOB
+config SLOB_DEPRECATED
 	depends on EXPERT
-	bool "SLOB (Simple Allocator)"
+	bool "SLOB (Simple Allocator - DEPRECATED)"
 	depends on !PREEMPT_RT
 	help
+	   Deprecated and scheduled for removal in a few cycles. SLUB
+	   recommended as replacement. CONFIG_SLUB_TINY can be considered
+	   on systems with 16MB or less RAM.
+
+	   If you need SLOB to stay, please contact linux-mm@kvack.org and
+	   people listed in the SLAB ALLOCATOR section of MAINTAINERS file,
+	   with your use case.
+
 	   SLOB replaces the stock allocator with a drastically simpler
 	   allocator. SLOB is generally more space efficient but does not
 	   perform as well on large systems.
 
 endchoice
 
+config SLOB
+	bool
+	default y
+	depends on SLOB_DEPRECATED
+
 config SLUB_TINY
 	bool "Configure SLUB for minimal memory footprint"
 	depends on SLUB && EXPERT