From patchwork Tue Oct 17 15:44:39 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 154373
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH 5/5] slub: Introduce get_cpu_partial()
Date: Tue, 17 Oct 2023 15:44:39 +0000
Message-Id: <20231017154439.3036608-6-chengming.zhou@linux.dev>
In-Reply-To: <20231017154439.3036608-1-chengming.zhou@linux.dev>
References: <20231017154439.3036608-1-chengming.zhou@linux.dev>

From: Chengming Zhou

Since the slabs on the cpu partial list are not frozen anymore, introduce
get_cpu_partial() to get a frozen slab together with its freelist from the cpu
partial list. This now works much like getting a frozen slab and its freelist
from the node partial list.

The other change is to get_partial(): it can now return without a frozen slab
when acquire_slab() fails on every node partial slab, while still having put
some unfrozen slabs on the cpu partial list. Check for this rare case to avoid
allocating a new slab unnecessarily.

Signed-off-by: Chengming Zhou
---
 mm/slub.c | 87 +++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 68 insertions(+), 19 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 044235bd8a45..d58eaf8447fd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3064,6 +3064,68 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
 	return freelist;
 }
 
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+
+static void *get_cpu_partial(struct kmem_cache *s, struct kmem_cache_cpu *c,
+			     struct slab **slabptr, int node, gfp_t gfpflags)
+{
+	unsigned long flags;
+	struct slab *slab;
+	struct slab new;
+	unsigned long counters;
+	void *freelist;
+
+	while (slub_percpu_partial(c)) {
+		local_lock_irqsave(&s->cpu_slab->lock, flags);
+		if (unlikely(!slub_percpu_partial(c))) {
+			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
+			/* we were preempted and partial list got empty */
+			return NULL;
+		}
+
+		slab = slub_percpu_partial(c);
+		slub_set_percpu_partial(c, slab);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
+		stat(s, CPU_PARTIAL_ALLOC);
+
+		if (unlikely(!node_match(slab, node) ||
+			     !pfmemalloc_match(slab, gfpflags))) {
+			slab->next = NULL;
+			__unfreeze_partials(s, slab);
+			continue;
+		}
+
+		do {
+			freelist = slab->freelist;
+			counters = slab->counters;
+
+			new.counters = counters;
+			VM_BUG_ON(new.frozen);
+
+			new.inuse = slab->objects;
+			new.frozen = 1;
+		} while (!__slab_update_freelist(s, slab,
+						 freelist, counters,
+						 NULL, new.counters,
+						 "get_cpu_partial"));
+
+		*slabptr = slab;
+		return freelist;
+	}
+
+	return NULL;
+}
+
+#else	/* CONFIG_SLUB_CPU_PARTIAL */
+
+static void *get_cpu_partial(struct kmem_cache *s, struct kmem_cache_cpu *c,
+			     struct slab **slabptr, int node, gfp_t gfpflags)
+{
+	return NULL;
+}
+
+#endif
+
 /*
  * Slow path. The lockless freelist is empty or we need to perform
  * debugging duties.
@@ -3106,7 +3168,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		node = NUMA_NO_NODE;
 		goto new_slab;
 	}
-redo:
 
 	if (unlikely(!node_match(slab, node))) {
 		/*
@@ -3182,24 +3243,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 new_slab:
 
-	if (slub_percpu_partial(c)) {
-		local_lock_irqsave(&s->cpu_slab->lock, flags);
-		if (unlikely(c->slab)) {
-			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-			goto reread_slab;
-		}
-		if (unlikely(!slub_percpu_partial(c))) {
-			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-			/* we were preempted and partial list got empty */
-			goto new_objects;
-		}
-
-		slab = c->slab = slub_percpu_partial(c);
-		slub_set_percpu_partial(c, slab);
-		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-		stat(s, CPU_PARTIAL_ALLOC);
-		goto redo;
-	}
+	freelist = get_cpu_partial(s, c, &slab, node, gfpflags);
+	if (freelist)
+		goto retry_load_slab;
 
 new_objects:
 
@@ -3210,6 +3256,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	if (freelist)
 		goto check_new_slab;
 
+	if (slub_percpu_partial(c))
+		goto new_slab;
+
 	slub_put_cpu_ptr(s->cpu_slab);
 	slab = new_slab(s, gfpflags, node);
 	c = slub_get_cpu_ptr(s->cpu_slab);
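
[Editor's note, not part of the patch: a condensed sketch of the slowpath
ordering that results from the hunks above, for review purposes only. The
labels and the get_cpu_partial() call follow the patch; the get_partial()
call is simplified (its real signature differs) and locking, stats and
debug hooks are omitted, so this outline is illustrative rather than
compilable.]

	new_slab:
		/* 1. Try to take a slab off the cpu partial list;
		 * get_cpu_partial() freezes it via the cmpxchg loop. */
		freelist = get_cpu_partial(s, c, &slab, node, gfpflags);
		if (freelist)
			goto retry_load_slab;

	new_objects:
		/* 2. Try the node partial list (signature simplified). */
		freelist = get_partial(s, node, &slab, gfpflags);
		if (freelist)
			goto check_new_slab;

		/* 3. Rare case: get_partial() froze nothing, but may have
		 * refilled the cpu partial list with unfrozen slabs, so
		 * retry from there before allocating a fresh slab. */
		if (slub_percpu_partial(c))
			goto new_slab;

		/* 4. Last resort: a new slab from the page allocator. */
		slab = new_slab(s, gfpflags, node);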