From patchwork Mon Nov 13 19:13:47 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 164605
From: Vlastimil Babka
To: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim
Cc: Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev, Andrey Ryabinin,
    Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
    Marco Elver, Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
    Kees Cook, kasan-dev@googlegroups.com, cgroups@vger.kernel.org,
    Vlastimil Babka
Subject: [PATCH 06/20] mm/slab: remove CONFIG_SLAB code from slab common code
Date: Mon, 13 Nov 2023 20:13:47 +0100
Message-ID: <20231113191340.17482-28-vbabka@suse.cz>
X-Mailer: git-send-email 2.42.1
In-Reply-To: <20231113191340.17482-22-vbabka@suse.cz>
References: <20231113191340.17482-22-vbabka@suse.cz>
MIME-Version: 1.0

In slab_common.c and slab.h headers, we can now remove all code behind
CONFIG_SLAB and CONFIG_DEBUG_SLAB ifdefs, and remove all CONFIG_SLUB
ifdefs.

Signed-off-by: Vlastimil Babka
Reviewed-by: Kees Cook
---
 include/linux/slab.h | 13 +--------
 mm/slab.h            | 69 ++++----------------------------------------
 mm/slab_common.c     | 22 ++------------
 3 files changed, 8 insertions(+), 96 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 34e43cddc520..90fb1f0d843a 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -24,7 +24,6 @@
 
 /*
  * Flags to pass to kmem_cache_create().
- * The ones marked DEBUG are only valid if CONFIG_DEBUG_SLAB is set.
  */
 /* DEBUG: Perform (expensive) checks on alloc/free */
 #define SLAB_CONSISTENCY_CHECKS	((slab_flags_t __force)0x00000100U)
@@ -302,25 +301,15 @@ static inline unsigned int arch_slab_minalign(void)
  * Kmalloc array related definitions
  */
 
-#ifdef CONFIG_SLAB
 /*
- * SLAB and SLUB directly allocates requests fitting in to an order-1 page
+ * SLUB directly allocates requests fitting in to an order-1 page
  * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
  */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
 #define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT)
 #ifndef KMALLOC_SHIFT_LOW
-#define KMALLOC_SHIFT_LOW	5
-#endif
-#endif
-
-#ifdef CONFIG_SLUB
-#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
-#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT)
-#ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	3
 #endif
-#endif
 
 /* Maximum allocatable size */
 #define KMALLOC_MAX_SIZE	(1UL << KMALLOC_SHIFT_MAX)
diff --git a/mm/slab.h b/mm/slab.h
index 3d07fb428393..014c36ea51fa 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -42,21 +42,6 @@ typedef union {
 struct slab {
 	unsigned long __page_flags;
 
-#if defined(CONFIG_SLAB)
-
-	struct kmem_cache *slab_cache;
-	union {
-		struct {
-			struct list_head slab_list;
-			void *freelist;	/* array of free object indexes */
-			void *s_mem;	/* first object */
-		};
-		struct rcu_head rcu_head;
-	};
-	unsigned int active;
-
-#elif defined(CONFIG_SLUB)
-
 	struct kmem_cache *slab_cache;
 	union {
 		struct {
@@ -91,10 +76,6 @@ struct slab {
 	};
 	unsigned int __unused;
 
-#else
-#error "Unexpected slab allocator configured"
-#endif
-
 	atomic_t __page_refcount;
 #ifdef CONFIG_MEMCG
 	unsigned long memcg_data;
@@ -111,7 +92,7 @@ SLAB_MATCH(memcg_data, memcg_data);
 #endif
 #undef SLAB_MATCH
 static_assert(sizeof(struct slab) <= sizeof(struct page));
-#if defined(system_has_freelist_aba) && defined(CONFIG_SLUB)
+#if defined(system_has_freelist_aba)
 static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)));
 #endif
 
@@ -228,13 +209,7 @@ static inline size_t slab_size(const struct slab *slab)
 	return PAGE_SIZE << slab_order(slab);
 }
 
-#ifdef CONFIG_SLAB
-#include <linux/slab_def.h>
-#endif
-
-#ifdef CONFIG_SLUB
 #include <linux/slub_def.h>
-#endif
 
 #include <linux/memcontrol.h>
 #include <linux/fault-inject.h>
@@ -320,26 +295,16 @@ static inline bool is_kmalloc_cache(struct kmem_cache *s)
 			      SLAB_CACHE_DMA32 | SLAB_PANIC | \
 			      SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
 
-#if defined(CONFIG_DEBUG_SLAB)
-#define SLAB_DEBUG_FLAGS	(SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
-#elif defined(CONFIG_SLUB_DEBUG)
+#ifdef CONFIG_SLUB_DEBUG
 #define SLAB_DEBUG_FLAGS	(SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
 			  SLAB_TRACE | SLAB_CONSISTENCY_CHECKS)
 #else
 #define SLAB_DEBUG_FLAGS	(0)
 #endif
 
-#if defined(CONFIG_SLAB)
-#define SLAB_CACHE_FLAGS (SLAB_MEM_SPREAD | SLAB_NOLEAKTRACE | \
-			  SLAB_RECLAIM_ACCOUNT | SLAB_TEMPORARY | \
-			  SLAB_ACCOUNT | SLAB_NO_MERGE)
-#elif defined(CONFIG_SLUB)
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
 			  SLAB_TEMPORARY | SLAB_ACCOUNT | \
 			  SLAB_NO_USER_FLAGS | SLAB_KMALLOC | SLAB_NO_MERGE)
-#else
-#define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
-#endif
 
 /* Common flags available with current configuration */
 #define CACHE_CREATE_MASK (SLAB_CORE_FLAGS | SLAB_DEBUG_FLAGS | SLAB_CACHE_FLAGS)
@@ -672,18 +637,14 @@ size_t __ksize(const void *objp);
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
-#ifndef CONFIG_SLUB
-	return s->object_size;
-
-#else /* CONFIG_SLUB */
-# ifdef CONFIG_SLUB_DEBUG
+#ifdef CONFIG_SLUB_DEBUG
 	/*
 	 * Debugging requires use of the padding between object
 	 * and whatever may come after it.
 	 */
 	if (s->flags & (SLAB_RED_ZONE | SLAB_POISON))
 		return s->object_size;
-# endif
+#endif
 	if (s->flags & SLAB_KASAN)
 		return s->object_size;
 	/*
@@ -697,7 +658,6 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
 	 * Else we can use all the padding etc for the allocation
 	 */
 	return s->size;
-#endif
 }
 
 static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
@@ -775,23 +735,6 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
  * The slab lists for all objects.
  */
 struct kmem_cache_node {
-#ifdef CONFIG_SLAB
-	raw_spinlock_t list_lock;
-	struct list_head slabs_partial;	/* partial list first, better asm code */
-	struct list_head slabs_full;
-	struct list_head slabs_free;
-	unsigned long total_slabs;	/* length of all slab lists */
-	unsigned long free_slabs;	/* length of free slab list only */
-	unsigned long free_objects;
-	unsigned int free_limit;
-	unsigned int colour_next;	/* Per-node cache coloring */
-	struct array_cache *shared;	/* shared per node */
-	struct alien_cache **alien;	/* on other nodes */
-	unsigned long next_reap;	/* updated without locking */
-	int free_touched;		/* updated without locking */
-#endif
-
-#ifdef CONFIG_SLUB
 	spinlock_t list_lock;
 	unsigned long nr_partial;
 	struct list_head partial;
@@ -800,8 +743,6 @@ struct kmem_cache_node {
 	atomic_long_t total_objects;
 	struct list_head full;
 #endif
-#endif
-
 };
 
 static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
@@ -818,7 +759,7 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
 		 if ((__n = get_node(__s, __node)))
 
-#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
+#ifdef CONFIG_SLUB_DEBUG
 void dump_unreclaimable_slab(void);
 #else
 static inline void dump_unreclaimable_slab(void)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8d431193c273..63b8411db7ce 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -71,10 +71,8 @@ static int __init setup_slab_merge(char *str)
 	return 1;
 }
 
-#ifdef CONFIG_SLUB
 __setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
 __setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
-#endif
 
 __setup("slab_nomerge", setup_slab_nomerge);
 __setup("slab_merge", setup_slab_merge);
@@ -197,10 +195,6 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 		if (s->size - size >= sizeof(void *))
 			continue;
 
-		if (IS_ENABLED(CONFIG_SLAB) && align &&
-		    (align > s->align || s->align % align))
-			continue;
-
 		return s;
 	}
 	return NULL;
@@ -1222,12 +1216,8 @@ void cache_random_seq_destroy(struct kmem_cache *cachep)
 }
 #endif /* CONFIG_SLAB_FREELIST_RANDOM */
 
-#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
-#ifdef CONFIG_SLAB
-#define SLABINFO_RIGHTS (0600)
-#else
+#ifdef CONFIG_SLUB_DEBUG
 #define SLABINFO_RIGHTS (0400)
-#endif
 
 static void print_slabinfo_header(struct seq_file *m)
 {
@@ -1235,18 +1225,10 @@ static void print_slabinfo_header(struct seq_file *m)
 	 * Output format version, so at least we can change it
 	 * without _too_ many complaints.
 	 */
-#ifdef CONFIG_DEBUG_SLAB
-	seq_puts(m, "slabinfo - version: 2.1 (statistics)\n");
-#else
 	seq_puts(m, "slabinfo - version: 2.1\n");
-#endif
 	seq_puts(m, "# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>");
 	seq_puts(m, " : tunables <limit> <batchcount> <sharedfactor>");
 	seq_puts(m, " : slabdata <active_slabs> <num_slabs> <sharedavail>");
-#ifdef CONFIG_DEBUG_SLAB
-	seq_puts(m, " : globalstat <listallocs> <maxobjs> <grown> <reaped> <error> <maxfreeable> <nodeallocs> <remotefrees> <alienoverflow>");
-	seq_puts(m, " : cpustat <allochit> <allocmiss> <freehit> <freemiss>");
-#endif
 	seq_putc(m, '\n');
 }
 
@@ -1370,7 +1352,7 @@ static int __init slab_proc_init(void)
 }
 module_init(slab_proc_init);
 
-#endif /* CONFIG_SLAB || CONFIG_SLUB_DEBUG */
+#endif /* CONFIG_SLUB_DEBUG */
 
 static __always_inline __realloc_size(2) void *
 __do_krealloc(const void *p, size_t new_size, gfp_t flags)
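
For readers skimming the archived patch, the include/linux/slab.h hunk above keeps
only SLUB's kmalloc size-class bounds. Below is a minimal user-space sketch (not
part of the submission) of the arithmetic those surviving macros encode; it assumes
4 KiB pages (PAGE_SHIFT = 12) and redefines the constants locally rather than
pulling in the kernel headers.

/* Sketch only: mirrors KMALLOC_SHIFT_LOW/HIGH from the hunk above, assuming
 * 4 KiB pages; the kernel derives these values from its own headers. */
#include <stdio.h>

#define PAGE_SHIFT         12                  /* assumed: 4 KiB pages */
#define KMALLOC_SHIFT_LOW  3                   /* smallest kmalloc cache: 1 << 3 = 8 bytes */
#define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)    /* largest kmalloc cache: an order-1 page */

int main(void)
{
	unsigned long min_cache = 1UL << KMALLOC_SHIFT_LOW;
	unsigned long max_cache = 1UL << KMALLOC_SHIFT_HIGH;

	/* Requests up to max_cache are served from kmalloc caches;
	 * larger requests go straight to the page allocator. */
	printf("smallest kmalloc cache: %lu bytes\n", min_cache);
	printf("largest kmalloc cache:  %lu bytes (2 * 4096)\n", max_cache);
	return 0;
}

Compiled and run (e.g. gcc sketch.c && ./a.out, with the hypothetical file name
sketch.c), it prints 8 and 8192 bytes, i.e. the PAGE_SIZE*2 cutoff mentioned in
the hunk's comment.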