From patchwork Tue Oct 17 15:44:38 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 154374
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev,
 Chengming Zhou
Subject: [RFC PATCH 4/5] slub: Don't freeze slabs for cpu partial
Date: Tue, 17 Oct 2023 15:44:38 +0000
Message-Id: <20231017154439.3036608-5-chengming.zhou@linux.dev>
In-Reply-To: <20231017154439.3036608-1-chengming.zhou@linux.dev>
References: <20231017154439.3036608-1-chengming.zhou@linux.dev>
MIME-Version: 1.0

From: Chengming Zhou

Now we freeze slabs when moving them out of the node partial list to the
cpu partial list; this method needs two cmpxchg_double operations:
1. freeze the slab (acquire_slab()) under the node list_lock
2. get_freelist() when the slab is picked up for use in ___slab_alloc()

Actually we don't need to freeze when moving slabs out of the node
partial list; we can delay freezing until we get the slab freelist in
___slab_alloc(), so we can save one cmpxchg_double(). And there are
other good points:

1. The moving of slabs between the node partial list and the cpu
   partial list becomes simpler, since we don't need to freeze or
   unfreeze at all.

2. The node list_lock contention would be less, since we only need to
   freeze one slab under the node list_lock. (In fact, we can first
   move slabs out of the node partial list without freezing any slab
   at all, so the contention on the slab won't transfer to node
   list_lock contention.)

We can achieve this because no concurrent path manipulates the partial
slab list except the __slab_free() path, which is serialized using the
newly introduced slab->flags.

Note this patch only changes the part that moves the partial slabs, to
keep the review easy; the other parts will be fixed in the following
patches.

Signed-off-by: Chengming Zhou
---
 mm/slub.c | 61 +++++++++++++++++--------------------------------------------
 1 file changed, 17 insertions(+), 44 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 5a9711b35c74..044235bd8a45 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2329,19 +2329,21 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 			continue;
 		}
 
-		t = acquire_slab(s, n, slab, object == NULL);
-		if (!t)
-			break;
-
 		if (!object) {
-			*pc->slab = slab;
-			stat(s, ALLOC_FROM_PARTIAL);
-			object = t;
-		} else {
-			put_cpu_partial(s, slab, 0);
-			stat(s, CPU_PARTIAL_NODE);
-			partial_slabs++;
+			t = acquire_slab(s, n, slab, object == NULL);
+			if (t) {
+				*pc->slab = slab;
+				stat(s, ALLOC_FROM_PARTIAL);
+				object = t;
+				continue;
+			}
 		}
+
+		remove_partial(n, slab);
+		put_cpu_partial(s, slab, 0);
+		stat(s, CPU_PARTIAL_NODE);
+		partial_slabs++;
+
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 		if (!kmem_cache_has_cpu_partial(s)
 			|| partial_slabs > s->cpu_partial_slabs / 2)
@@ -2612,9 +2614,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 	unsigned long flags = 0;
 
 	while (partial_slab) {
-		struct slab new;
-		struct slab old;
-
 		slab = partial_slab;
 		partial_slab = slab->next;
 
@@ -2627,23 +2626,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 			spin_lock_irqsave(&n->list_lock, flags);
 		}
 
-		do {
-
-			old.freelist = slab->freelist;
-			old.counters = slab->counters;
-			VM_BUG_ON(!old.frozen);
-
-			new.counters = old.counters;
-			new.freelist = old.freelist;
-
-			new.frozen = 0;
-
-		} while (!__slab_update_freelist(s, slab,
-				old.freelist, old.counters,
-				new.freelist, new.counters,
-				"unfreezing slab"));
-
-		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
+		if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
 			slab->next = slab_to_discard;
 			slab_to_discard = slab;
 		} else {
@@ -3640,18 +3623,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		was_frozen = new.frozen;
 		new.inuse -= cnt;
 		if ((!new.inuse || !prior) && !was_frozen) {
-
-			if (kmem_cache_has_cpu_partial(s) && !prior) {
-
-				/*
-				 * Slab was on no list before and will be
-				 * partially empty
-				 * We can defer the list move and instead
-				 * freeze it.
-				 */
-				new.frozen = 1;
-
-			} else { /* Needs to be taken off a list */
+			/* Needs to be taken off a list */
+			if (!kmem_cache_has_cpu_partial(s) || prior) {
 
 				n = get_node(s, slab_nid(slab));
 				/*
@@ -3681,7 +3654,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 			 * activity can be necessary.
 			 */
 			stat(s, FREE_FROZEN);
-		} else if (new.frozen) {
+		} else if (kmem_cache_has_cpu_partial(s) && !prior) {
 			/*
 			 * If we just froze the slab then put it onto the
 			 * per cpu partial list.
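
Editor's note, not part of the patch: to make the changelog's accounting of
cmpxchg_double operations concrete, here is a hypothetical, heavily
simplified userspace C sketch. A single atomic 64-bit word stands in for
slab->counters, atomic_compare_exchange_strong() stands in for
cmpxchg_double(), and every name in it (slab_model, cas_counters, old_flow,
new_flow, FROZEN_BIT, FREELIST_MASK) is invented for illustration only; it
is not the kernel's code or API.

/* slab_freeze_model.c - hypothetical model, NOT kernel code.
 * Counts compare-and-swap operations for the old flow (freeze when moving
 * to the cpu partial list, take the freelist later) versus the new flow
 * (plain list move, one combined update in ___slab_alloc()).
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Packed "counters" word: low 31 bits inuse, bit 31 frozen,
 * upper 32 bits a fake freelist cookie. */
struct slab_model {
	_Atomic uint64_t counters;
};

static int cas_ops;	/* number of compare-and-swap operations issued */

static bool cas_counters(struct slab_model *s, uint64_t old, uint64_t new)
{
	cas_ops++;
	return atomic_compare_exchange_strong(&s->counters, &old, new);
}

#define FROZEN_BIT	(1ull << 31)
#define FREELIST_MASK	(~0ull << 32)

/* Old flow: one CAS to freeze when moving the slab to the cpu partial
 * list, a second CAS later to take the freelist for allocation. */
static void old_flow(struct slab_model *s)
{
	uint64_t c;

	c = atomic_load(&s->counters);		/* "acquire_slab": set frozen */
	while (!cas_counters(s, c, c | FROZEN_BIT))
		c = atomic_load(&s->counters);

	c = atomic_load(&s->counters);		/* "get_freelist": take freelist */
	while (!cas_counters(s, c, c & ~FREELIST_MASK))
		c = atomic_load(&s->counters);
}

/* New flow: moving to the cpu partial list touches only list pointers,
 * so no CAS; freezing and taking the freelist happen in one update when
 * the slab is actually picked for allocation. */
static void new_flow(struct slab_model *s)
{
	uint64_t c;

	/* list move: nothing to do on the counters word here */

	c = atomic_load(&s->counters);		/* freeze + take freelist at once */
	while (!cas_counters(s, c, (c | FROZEN_BIT) & ~FREELIST_MASK))
		c = atomic_load(&s->counters);
}

int main(void)
{
	struct slab_model a = { .counters = (123ull << 32) | 5 };
	struct slab_model b = { .counters = (123ull << 32) | 5 };

	cas_ops = 0;
	old_flow(&a);
	printf("old flow: %d CAS operations\n", cas_ops);	/* prints 2 */

	cas_ops = 0;
	new_flow(&b);
	printf("new flow: %d CAS operations\n", cas_ops);	/* prints 1 */
	return 0;
}

Built with any C11 compiler (e.g. cc -std=c11 slab_freeze_model.c), it
reports 2 CAS operations for the old flow and 1 for the new flow, which is
exactly the saving the changelog describes.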