From patchwork Tue Oct 31 14:07:40 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 160133
From: chengming.zhou@linux.dev
To: vbabka@suse.cz, cl@linux.com, penberg@kernel.org, willy@infradead.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v4 8/9] slub: Rename all *unfreeze_partials* functions
 to *put_partials*
Date: Tue, 31 Oct 2023 14:07:40 +0000
Message-Id: <20231031140741.79387-9-chengming.zhou@linux.dev>
In-Reply-To: <20231031140741.79387-1-chengming.zhou@linux.dev>
References: <20231031140741.79387-1-chengming.zhou@linux.dev>
MIME-Version: 1.0

From: Chengming Zhou <chengming.zhou@linux.dev>

Since slabs on the CPU partial list are no longer frozen, we don't
"unfreeze" anything when
moving cpu partial slabs to the node partial list, so rename these
functions to match.

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index c429f8baba5f..bb7368047103 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2549,7 +2549,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 }
 
 #ifdef CONFIG_SLUB_CPU_PARTIAL
-static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
+static void put_partials_node(struct kmem_cache *s, struct slab *partial_slab)
 {
 	struct kmem_cache_node *n = NULL, *n2 = NULL;
 	struct slab *slab, *slab_to_discard = NULL;
@@ -2591,9 +2591,9 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 }
 
 /*
- * Unfreeze all the cpu partial slabs.
+ * Put all the cpu partial slabs to the node partial list.
  */
-static void unfreeze_partials(struct kmem_cache *s)
+static void put_partials(struct kmem_cache *s)
 {
 	struct slab *partial_slab;
 	unsigned long flags;
@@ -2604,11 +2604,11 @@ static void unfreeze_partials(struct kmem_cache *s)
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (partial_slab)
-		__unfreeze_partials(s, partial_slab);
+		put_partials_node(s, partial_slab);
 }
 
-static void unfreeze_partials_cpu(struct kmem_cache *s,
-				  struct kmem_cache_cpu *c)
+static void put_partials_cpu(struct kmem_cache *s,
+			     struct kmem_cache_cpu *c)
 {
 	struct slab *partial_slab;
 
@@ -2616,7 +2616,7 @@ static void unfreeze_partials_cpu(struct kmem_cache *s,
 	c->partial = NULL;
 
 	if (partial_slab)
-		__unfreeze_partials(s, partial_slab);
+		put_partials_node(s, partial_slab);
 }
 
 /*
@@ -2629,7 +2629,7 @@ static void unfreeze_partials_cpu(struct kmem_cache *s,
 static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
 {
 	struct slab *oldslab;
-	struct slab *slab_to_unfreeze = NULL;
+	struct slab *slab_to_put = NULL;
 	unsigned long flags;
 	int slabs = 0;
 
@@ -2644,7 +2644,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
 			 * per node partial list. Postpone the actual unfreezing
 			 * outside of the critical section.
			 */
-			slab_to_unfreeze = oldslab;
+			slab_to_put = oldslab;
 			oldslab = NULL;
 		} else {
 			slabs = oldslab->slabs;
@@ -2660,17 +2660,17 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
 
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
-	if (slab_to_unfreeze) {
-		__unfreeze_partials(s, slab_to_unfreeze);
+	if (slab_to_put) {
+		put_partials_node(s, slab_to_put);
 		stat(s, CPU_PARTIAL_DRAIN);
 	}
 }
 
 #else	/* CONFIG_SLUB_CPU_PARTIAL */
 
-static inline void unfreeze_partials(struct kmem_cache *s) { }
-static inline void unfreeze_partials_cpu(struct kmem_cache *s,
-					 struct kmem_cache_cpu *c) { }
+static inline void put_partials(struct kmem_cache *s) { }
+static inline void put_partials_cpu(struct kmem_cache *s,
+				    struct kmem_cache_cpu *c) { }
 
 #endif	/* CONFIG_SLUB_CPU_PARTIAL */
 
@@ -2712,7 +2712,7 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 		stat(s, CPUSLAB_FLUSH);
 	}
 
-	unfreeze_partials_cpu(s, c);
+	put_partials_cpu(s, c);
 }
 
 struct slub_flush_work {
@@ -2740,7 +2740,7 @@ static void flush_cpu_slab(struct work_struct *w)
 	if (c->slab)
 		flush_slab(s, c);
 
-	unfreeze_partials(s);
+	put_partials(s);
 }
 
 static bool has_cpu_slab(int cpu, struct kmem_cache *s)
@@ -3171,7 +3171,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		if (unlikely(!node_match(slab, node) ||
 			     !pfmemalloc_match(slab, gfpflags))) {
 			slab->next = NULL;
-			__unfreeze_partials(s, slab);
+			put_partials_node(s, slab);
 			continue;
 		}
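
[Editor's note: a minimal, self-contained sketch of the call structure this
rename leaves behind. Everything here is a toy stand-in, not the kernel code:
the bare-pointer lists, the stubbed bodies, and main() are hypothetical, and
the real functions in mm/slub.c take kmem_cache pointers and hold locks. The
sketch only illustrates that put_partials() detaches the cpu partial list and
put_partials_node() moves it to the node list with no unfreeze step anywhere.]

/* Toy model of the renamed helpers; build with any C compiler. */
#include <stdio.h>

struct slab {
	struct slab *next;
};

/* simplified stand-ins for the per-cpu and per-node partial lists */
static struct slab *cpu_partial;
static struct slab *node_partial;

/* was __unfreeze_partials(): move a detached chain onto the node list */
static void put_partials_node(struct slab *partial_slab)
{
	while (partial_slab) {
		struct slab *next = partial_slab->next;

		/* no unfreeze step anymore: just splice the slab over */
		partial_slab->next = node_partial;
		node_partial = partial_slab;
		partial_slab = next;
	}
}

/* was unfreeze_partials(): detach this cpu's list, then put it */
static void put_partials(void)
{
	struct slab *partial_slab = cpu_partial;

	cpu_partial = NULL;
	if (partial_slab)
		put_partials_node(partial_slab);
}

int main(void)
{
	struct slab a = { .next = NULL };
	struct slab b = { .next = &a };

	cpu_partial = &b;
	put_partials();

	/* both slabs now sit on the node list, never having been frozen */
	printf("node head: %p, next: %p\n",
	       (void *)node_partial, (void *)node_partial->next);
	return 0;
}

The sketch makes the point of the rename visible: slabs flow from the cpu
list to the node list unchanged, so "put" describes the operation where
"unfreeze" no longer does.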