[RFC,v3,2/7] slub: Prepare __slab_free() for unfrozen partial slab out of node partial list
Message ID | 20231024093345.3676493-3-chengming.zhou@linux.dev |
---|---|
State | New |
Headers |
Series | slub: Delay freezing of CPU partial slabs |
Commit Message
Chengming Zhou
Oct. 24, 2023, 9:33 a.m. UTC
From: Chengming Zhou <zhouchengming@bytedance.com>

Now the partial slub will be frozen when taken out of node partial list,
so the __slab_free() will know from "was_frozen" that the partial slab
is not on node partial list and is used by one kmem_cache_cpu.

But we will change this, make partial slabs leave the node partial list
with unfrozen state, so we need to change __slab_free() to use the new
slab_test_node_partial() we just introduced.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 mm/slub.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
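For readers jumping into the middle of the series: slab_test_node_partial() is added by patch 1/7 and simply reports whether a slab is currently linked on its node's partial list. A rough sketch of those helpers is given below, assuming the PG_workingset-based tracking that patch 1/7 uses; see that patch for the authoritative definitions. The flag is set and cleared while holding n->list_lock, which is why the check in this patch happens right after spin_lock_irqsave().

/*
 * Sketch of the helpers from patch 1/7 (assumption: the per-slab
 * "on node partial list" state is tracked with the folio's
 * PG_workingset bit, as in that patch). Set/cleared under
 * n->list_lock when the slab is added to or removed from the
 * node partial list.
 */
static inline bool slab_test_node_partial(const struct slab *slab)
{
	return folio_test_workingset((struct folio *)slab_folio(slab));
}

static inline void slab_set_node_partial(struct slab *slab)
{
	set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
}

static inline void slab_clear_node_partial(struct slab *slab)
{
	clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
}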
Comments
On 10/24/23 11:33, chengming.zhou@linux.dev wrote:
> From: Chengming Zhou <zhouchengming@bytedance.com>
>
> Now the partial slub will be frozen when taken out of node partial list,

partially empty slab

> so the __slab_free() will know from "was_frozen" that the partial slab
> is not on node partial list and is used by one kmem_cache_cpu.

... is a cpu or cpu partial slab of some cpu.

> But we will change this, make partial slabs leave the node partial list
> with unfrozen state, so we need to change __slab_free() to use the new
> slab_test_node_partial() we just introduced.
>
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> ---
>  mm/slub.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3fad4edca34b..f568a32d7332 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3610,6 +3610,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  	unsigned long counters;
>  	struct kmem_cache_node *n = NULL;
>  	unsigned long flags;
> +	bool on_node_partial;
>
>  	stat(s, FREE_SLOWPATH);
>
> @@ -3657,6 +3658,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  				 */
>  				spin_lock_irqsave(&n->list_lock, flags);
>
> +				on_node_partial = slab_test_node_partial(slab);
>  			}
>  		}
>
> @@ -3685,6 +3687,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  		return;
>  	}
>
> +	/*
> +	 * This slab was partial but not on the per-node partial list,

This slab was partially empty ...

Otherwise LGTM!

> +	 * in which case we shouldn't manipulate its list, just return.
> +	 */
> +	if (prior && !on_node_partial) {
> +		spin_unlock_irqrestore(&n->list_lock, flags);
> +		return;
> +	}
> +
>  	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
>  		goto slab_empty;
>
On 2023/10/27 23:18, Vlastimil Babka wrote:
> On 10/24/23 11:33, chengming.zhou@linux.dev wrote:
>> From: Chengming Zhou <zhouchengming@bytedance.com>
>>
>> Now the partial slub will be frozen when taken out of node partial list,
>
> partially empty slab
>
>> so the __slab_free() will know from "was_frozen" that the partial slab
>> is not on node partial list and is used by one kmem_cache_cpu.
>
> ... is a cpu or cpu partial slab of some cpu.
>
>> But we will change this, make partial slabs leave the node partial list
>> with unfrozen state, so we need to change __slab_free() to use the new
>> slab_test_node_partial() we just introduced.
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>> ---
>>  mm/slub.c | 11 +++++++++++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 3fad4edca34b..f568a32d7332 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3610,6 +3610,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>  	unsigned long counters;
>>  	struct kmem_cache_node *n = NULL;
>>  	unsigned long flags;
>> +	bool on_node_partial;
>>
>>  	stat(s, FREE_SLOWPATH);
>>
>> @@ -3657,6 +3658,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>  				 */
>>  				spin_lock_irqsave(&n->list_lock, flags);
>>
>> +				on_node_partial = slab_test_node_partial(slab);
>>  			}
>>  		}
>>
>> @@ -3685,6 +3687,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>  		return;
>>  	}
>>
>> +	/*
>> +	 * This slab was partial but not on the per-node partial list,
>
> This slab was partially empty ...
>
> Otherwise LGTM!

Ok, will fix. Thanks!

>
>> +	 * in which case we shouldn't manipulate its list, just return.
>> +	 */
>> +	if (prior && !on_node_partial) {
>> +		spin_unlock_irqrestore(&n->list_lock, flags);
>> +		return;
>> +	}
>> +
>>  	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
>>  		goto slab_empty;
>>
>
diff --git a/mm/slub.c b/mm/slub.c
index 3fad4edca34b..f568a32d7332 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3610,6 +3610,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	unsigned long counters;
 	struct kmem_cache_node *n = NULL;
 	unsigned long flags;
+	bool on_node_partial;
 
 	stat(s, FREE_SLOWPATH);
 
@@ -3657,6 +3658,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				 */
 				spin_lock_irqsave(&n->list_lock, flags);
 
+				on_node_partial = slab_test_node_partial(slab);
 			}
 		}
 
@@ -3685,6 +3687,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		return;
 	}
 
+	/*
+	 * This slab was partial but not on the per-node partial list,
+	 * in which case we shouldn't manipulate its list, just return.
+	 */
+	if (prior && !on_node_partial) {
+		spin_unlock_irqrestore(&n->list_lock, flags);
+		return;
+	}
+
 	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
 		goto slab_empty;
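To see where the new check sits, here is a condensed, hand-written outline of the tail of __slab_free() once this patch is applied. It is a reading aid, not the verbatim kernel source: statistics updates and the frozen/cpu-partial and slab_empty paths are abbreviated, and surrounding details may differ slightly.

	/* After slab_update_freelist() has succeeded: */

	if (likely(!n)) {
		/*
		 * n was never set, so the list_lock was never taken:
		 * the slab either stayed frozen (owned by some CPU) or
		 * was just frozen and handed to a cpu partial list.
		 */
		return;
	}

	/*
	 * New in this patch: the slab still holds live objects (prior
	 * freelist was non-NULL) but is not on the node partial list,
	 * e.g. because a later patch in the series moved it to a cpu
	 * partial list in unfrozen state. Its list linkage must not be
	 * touched here.
	 */
	if (prior && !on_node_partial) {
		spin_unlock_irqrestore(&n->list_lock, flags);
		return;
	}

	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
		goto slab_empty;	/* slab is now empty and can be discarded */

	/* Otherwise keep (or add) the slab on the node partial list. */
	...
	spin_unlock_irqrestore(&n->list_lock, flags);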