From patchwork Tue Oct 24 09:33:39 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 157350
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v3 1/7] slub: Keep track of whether slub is on the per-node partial list
Date: Tue, 24 Oct 2023 09:33:39 +0000
Message-Id: <20231024093345.3676493-2-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou

Now we rely on the "frozen" bit to see if we should manipulate the
slab->slab_list, which will be changed in the following patch.
Instead, we introduce another way to keep track of whether the slab is on
the per-node partial list: we reuse the PG_workingset bit.

We use __set_bit() and __clear_bit() directly instead of the atomic
versions for better performance, which is safe since the bit is always
manipulated under the slub node's list_lock.

Signed-off-by: Chengming Zhou
---
 mm/slab.h | 19 +++++++++++++++++++
 mm/slub.c |  3 +++
 2 files changed, 22 insertions(+)

diff --git a/mm/slab.h b/mm/slab.h
index 8cd3294fedf5..50522b688cfb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -193,6 +193,25 @@ static inline void __slab_clear_pfmemalloc(struct slab *slab)
 	__folio_clear_active(slab_folio(slab));
 }
 
+/*
+ * Slub reuses the PG_workingset bit to keep track of whether it's on
+ * the per-node partial list.
+ */
+static inline bool slab_test_node_partial(const struct slab *slab)
+{
+	return folio_test_workingset((struct folio *)slab_folio(slab));
+}
+
+static inline void slab_set_node_partial(struct slab *slab)
+{
+	__set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+}
+
+static inline void slab_clear_node_partial(struct slab *slab)
+{
+	__clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+}
+
 static inline void *slab_address(const struct slab *slab)
 {
 	return folio_address(slab_folio(slab));
diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..3fad4edca34b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2127,6 +2127,7 @@ __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
 		list_add_tail(&slab->slab_list, &n->partial);
 	else
 		list_add(&slab->slab_list, &n->partial);
+	slab_set_node_partial(slab);
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
@@ -2141,6 +2142,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 {
 	lockdep_assert_held(&n->list_lock);
 	list_del(&slab->slab_list);
+	slab_clear_node_partial(slab);
 	n->nr_partial--;
 }
 
@@ -4831,6 +4833,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 
 		if (free == slab->objects) {
 			list_move(&slab->slab_list, &discard);
+			slab_clear_node_partial(slab);
 			n->nr_partial--;
 			dec_slabs_node(s, node, slab->objects);
 		} else if (free <= SHRINK_PROMOTE_MAX)
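To illustrate the pattern this patch relies on, here is a minimal userspace C sketch of "one flag bit tracks list membership, manipulated with plain non-atomic bit ops because every access holds the node's list_lock". All names (`fake_slab`, `fake_node`, the `_sketch` helpers) are hypothetical stand-ins, not the kernel's folio-flag helpers:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace analogue of the patch's flag tracking. */

struct fake_lock { bool held; };

static void lock_node(struct fake_lock *l)   { l->held = true; }
static void unlock_node(struct fake_lock *l) { l->held = false; }

#define NODE_PARTIAL_BIT 0

struct fake_slab { unsigned long flags; };

struct fake_node {
	struct fake_lock list_lock;
	int nr_partial;
};

static bool slab_test_node_partial_sketch(const struct fake_slab *slab)
{
	return slab->flags & (1UL << NODE_PARTIAL_BIT);
}

static void slab_set_node_partial_sketch(struct fake_node *n,
					 struct fake_slab *slab)
{
	assert(n->list_lock.held);              /* like lockdep_assert_held() */
	slab->flags |= 1UL << NODE_PARTIAL_BIT; /* plain RMW: safe under lock */
}

static void slab_clear_node_partial_sketch(struct fake_node *n,
					   struct fake_slab *slab)
{
	assert(n->list_lock.held);
	slab->flags &= ~(1UL << NODE_PARTIAL_BIT);
}

/* The list add/remove paths set and clear the bit while holding the lock. */
static void add_partial_sketch(struct fake_node *n, struct fake_slab *slab)
{
	lock_node(&n->list_lock);
	n->nr_partial++;
	slab_set_node_partial_sketch(n, slab);
	unlock_node(&n->list_lock);
}

static void remove_partial_sketch(struct fake_node *n, struct fake_slab *slab)
{
	lock_node(&n->list_lock);
	n->nr_partial--;
	slab_clear_node_partial_sketch(n, slab);
	unlock_node(&n->list_lock);
}
```

The design point is the same as the commit message's: because both writers serialize on the lock, the cheaper non-atomic read-modify-write is sufficient; the atomic variants would only be needed if the bit could be touched without the lock.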
From patchwork Tue Oct 24 09:33:40 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 157358
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v3 2/7] slub: Prepare __slab_free() for unfrozen partial slab out of node partial list
Date: Tue, 24 Oct 2023 09:33:40 +0000
Message-Id: <20231024093345.3676493-3-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou

Currently a partial slab is frozen when taken off the node partial list,
so __slab_free() knows from "was_frozen" that the partial slab is not on the node
partial list and is in use by one kmem_cache_cpu. But we are about to
change this: partial slabs will leave the node partial list in an
unfrozen state, so __slab_free() needs to use the new
slab_test_node_partial() we just introduced.

Signed-off-by: Chengming Zhou
---
 mm/slub.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 3fad4edca34b..f568a32d7332 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3610,6 +3610,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	unsigned long counters;
 	struct kmem_cache_node *n = NULL;
 	unsigned long flags;
+	bool on_node_partial;
 
 	stat(s, FREE_SLOWPATH);
 
@@ -3657,6 +3658,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				 */
 				spin_lock_irqsave(&n->list_lock, flags);
 
+				on_node_partial = slab_test_node_partial(slab);
 			}
 		}
 
@@ -3685,6 +3687,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		return;
 	}
 
+	/*
+	 * This slab was partial but not on the per-node partial list,
+	 * in which case we shouldn't manipulate its list, just return.
+	 */
+	if (prior && !on_node_partial) {
+		spin_unlock_irqrestore(&n->list_lock, flags);
+		return;
+	}
+
 	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
 		goto slab_empty;
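The free-path decision added above can be condensed into a small decision function. The following is an illustrative userspace sketch (names and the flattened parameter list are hypothetical, not the kernel's types); it models sampling the membership bit under the lock and skipping all list manipulation when a partial slab is privately held:

```c
#include <stdbool.h>

/* Possible outcomes of the sketched __slab_free() slow path. */
enum free_action {
	FREE_SKIP_LIST,    /* partial but privately held: leave lists alone */
	FREE_DISCARD,      /* slab became empty and node has enough partials */
	FREE_KEEP_ON_LIST  /* normal case: slab stays subject to list logic */
};

static enum free_action slab_free_action(bool has_prior_objects,
					 bool on_node_partial,
					 int inuse, int nr_partial,
					 int min_partial)
{
	/* Mirrors the patch's "if (prior && !on_node_partial) return;". */
	if (has_prior_objects && !on_node_partial)
		return FREE_SKIP_LIST;

	/* Mirrors "if (!new.inuse && nr_partial >= min_partial) goto slab_empty;". */
	if (inuse == 0 && nr_partial >= min_partial)
		return FREE_DISCARD;

	return FREE_KEEP_ON_LIST;
}
```

The key property the patch depends on is that `on_node_partial` is sampled while `list_lock` is already held, so the answer cannot go stale between the check and the list manipulation.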
From patchwork Tue Oct 24 09:33:41 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 157356
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v3 3/7] slub: Reflow ___slab_alloc()
Date: Tue, 24 Oct 2023 09:33:41 +0000
Message-Id: <20231024093345.3676493-4-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou

The get_partial() interface used in ___slab_alloc() may return a single
object in the "kmem_cache_debug(s)" case, in which case we just return
the "freelist" object.
Move this handling up front to prepare for later changes. Also, the
"pfmemalloc_match()" check is not needed for a node partial slab, since
we already check it in get_partial_node().

Signed-off-by: Chengming Zhou
Reviewed-by: Vlastimil Babka
---
 mm/slub.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f568a32d7332..cd8aa68c156e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3218,8 +3218,21 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	pc.slab = &slab;
 	pc.orig_size = orig_size;
 	freelist = get_partial(s, node, &pc);
-	if (freelist)
-		goto check_new_slab;
+	if (freelist) {
+		if (kmem_cache_debug(s)) {
+			/*
+			 * For debug caches here we had to go through
+			 * alloc_single_from_partial() so just store the
+			 * tracking info and return the object.
+			 */
+			if (s->flags & SLAB_STORE_USER)
+				set_track(s, freelist, TRACK_ALLOC, addr);
+
+			return freelist;
+		}
+
+		goto retry_load_slab;
+	}
 
 	slub_put_cpu_ptr(s->cpu_slab);
 	slab = new_slab(s, gfpflags, node);
@@ -3255,20 +3268,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 	inc_slabs_node(s, slab_nid(slab), slab->objects);
 
-check_new_slab:
-
-	if (kmem_cache_debug(s)) {
-		/*
-		 * For debug caches here we had to go through
-		 * alloc_single_from_partial() so just store the tracking info
-		 * and return the object
-		 */
-		if (s->flags & SLAB_STORE_USER)
-			set_track(s, freelist, TRACK_ALLOC, addr);
-
-		return freelist;
-	}
-
 	if (unlikely(!pfmemalloc_match(slab, gfpflags))) {
 		/*
 		 * For !pfmemalloc_match() case we don't load freelist so that
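The reflow above is essentially a goto-to-early-return restructuring: instead of jumping forward to a `check_new_slab` label shared with the new-slab path, the debug case is decided right where get_partial() succeeds. A minimal userspace sketch of the same control-flow shape (every name here is a hypothetical stand-in for the kernel logic, and the arithmetic is a placeholder for set_track() etc.):

```c
/* Placeholder for the debug-cache bookkeeping + return of the object. */
static int handle_debug_case(int freelist)
{
	return freelist + 1000; /* stand-in for set_track() side effects */
}

/*
 * After the reflow: the partial-slab success path decides everything at
 * the call site, so no label shared with the new-slab path is needed.
 */
static int alloc_reflowed(int from_partial, int is_debug_cache)
{
	if (from_partial) {
		if (is_debug_cache)
			return handle_debug_case(from_partial); /* early return */
		return from_partial; /* "goto retry_load_slab" in the kernel */
	}
	/* ... otherwise fall through to allocating a new slab ... */
	return -1;
}
```

The benefit mirrored here is the one the commit message claims: once the debug case returns early, the code after the allocation path no longer has to handle objects that never came from a freshly allocated slab.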
From patchwork Tue Oct 24 09:33:42 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 157355
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v3 4/7] slub: Change get_partial() interfaces to return slab
Date: Tue, 24 Oct 2023 09:33:42 +0000
Message-Id: <20231024093345.3676493-5-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou

We need all the get_partial() related interfaces to return a slab
instead of the freelist (or object). For now, use partial_context.object
to pass the freelist or object back.
This patch shouldn't have any functional changes.

Signed-off-by: Chengming Zhou
Reviewed-by: Vlastimil Babka
---
 mm/slub.c | 63 +++++++++++++++++++++++++++++--------------------------
 1 file changed, 33 insertions(+), 30 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index cd8aa68c156e..7d0234bffad3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -204,9 +204,9 @@ DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
 
 /* Structure holding parameters for get_partial() call chain */
 struct partial_context {
-	struct slab **slab;
 	gfp_t flags;
 	unsigned int orig_size;
+	void *object;
 };
 
 static inline bool kmem_cache_debug(struct kmem_cache *s)
@@ -2271,10 +2271,11 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags);
 /*
  * Try to allocate a partial slab from a specific node.
  */
-static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
-			      struct partial_context *pc)
+static struct slab *get_partial_node(struct kmem_cache *s,
+				     struct kmem_cache_node *n,
+				     struct partial_context *pc)
 {
-	struct slab *slab, *slab2;
+	struct slab *slab, *slab2, *partial = NULL;
 	void *object = NULL;
 	unsigned long flags;
 	unsigned int partial_slabs = 0;
@@ -2290,27 +2291,28 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 
 	spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) {
-		void *t;
-
 		if (!pfmemalloc_match(slab, pc->flags))
 			continue;
 
 		if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
 			object = alloc_single_from_partial(s, n, slab,
 							pc->orig_size);
-			if (object)
+			if (object) {
+				partial = slab;
+				pc->object = object;
 				break;
+			}
 			continue;
 		}
 
-		t = acquire_slab(s, n, slab, object == NULL);
-		if (!t)
+		object = acquire_slab(s, n, slab, object == NULL);
+		if (!object)
 			break;
 
-		if (!object) {
-			*pc->slab = slab;
+		if (!partial) {
+			partial = slab;
+			pc->object = object;
 			stat(s, ALLOC_FROM_PARTIAL);
-			object = t;
 		} else {
 			put_cpu_partial(s, slab, 0);
 			stat(s, CPU_PARTIAL_NODE);
@@ -2326,20 +2328,21 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
-	return object;
+	return partial;
 }
 
 /*
  * Get a slab from somewhere. Search in increasing NUMA distances.
  */
-static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
+static struct slab *get_any_partial(struct kmem_cache *s,
+				    struct partial_context *pc)
 {
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
 	struct zoneref *z;
 	struct zone *zone;
 	enum zone_type highest_zoneidx = gfp_zone(pc->flags);
-	void *object;
+	struct slab *slab;
 	unsigned int cpuset_mems_cookie;
 
 	/*
@@ -2374,8 +2377,8 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
 
 			if (n && cpuset_zone_allowed(zone, pc->flags) &&
 					n->nr_partial > s->min_partial) {
-				object = get_partial_node(s, n, pc);
-				if (object) {
+				slab = get_partial_node(s, n, pc);
+				if (slab) {
 					/*
 					 * Don't check read_mems_allowed_retry()
 					 * here - if mems_allowed was updated in
@@ -2383,7 +2386,7 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
 					 * between allocation and the cpuset
 					 * update
 					 */
-					return object;
+					return slab;
 				}
 			}
 		}
@@ -2395,17 +2398,18 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
 /*
  * Get a partial slab, lock it and return it.
  */
-static void *get_partial(struct kmem_cache *s, int node, struct partial_context *pc)
+static struct slab *get_partial(struct kmem_cache *s, int node,
+				struct partial_context *pc)
 {
-	void *object;
+	struct slab *slab;
 	int searchnode = node;
 
 	if (node == NUMA_NO_NODE)
 		searchnode = numa_mem_id();
 
-	object = get_partial_node(s, get_node(s, searchnode), pc);
-	if (object || node != NUMA_NO_NODE)
-		return object;
+	slab = get_partial_node(s, get_node(s, searchnode), pc);
+	if (slab || node != NUMA_NO_NODE)
+		return slab;
 
 	return get_any_partial(s, pc);
 }
@@ -3215,10 +3219,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 new_objects:
 
 	pc.flags = gfpflags;
-	pc.slab = &slab;
 	pc.orig_size = orig_size;
-	freelist = get_partial(s, node, &pc);
-	if (freelist) {
+	slab = get_partial(s, node, &pc);
+	if (slab) {
+		freelist = pc.object;
 		if (kmem_cache_debug(s)) {
 			/*
 			 * For debug caches here we had to go through
@@ -3410,12 +3414,11 @@ static void *__slab_alloc_node(struct kmem_cache *s,
 	void *object;
 
 	pc.flags = gfpflags;
-	pc.slab = &slab;
 	pc.orig_size = orig_size;
-	object = get_partial(s, node, &pc);
+	slab = get_partial(s, node, &pc);
 
-	if (object)
-		return object;
+	if (slab)
+		return pc.object;
 
 	slab = new_slab(s, gfpflags, node);
 	if (unlikely(!slab)) {
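The interface change above (return the slab, hand the object back through the context struct) can be sketched in a few lines of userspace C. Everything here is a hypothetical analogue, with `sketch_partial_context.object` playing the role of the patch's `partial_context.object`:

```c
#include <stddef.h>

#define SKETCH_OBJECTS 4

struct sketch_slab {
	int objects[SKETCH_OBJECTS];
	int next_free;
};

/* Analogue of partial_context: object is now an out-parameter. */
struct sketch_partial_context {
	unsigned int orig_size;
	void *object;
};

/*
 * Returns the slab on success and stores the taken object in pc->object;
 * returns NULL when nothing can be allocated, mirroring the new
 * get_partial() contract where the caller tests the slab pointer.
 */
static struct sketch_slab *get_partial_sketch(struct sketch_slab *slab,
					      struct sketch_partial_context *pc)
{
	if (!slab || slab->next_free >= SKETCH_OBJECTS)
		return NULL;

	pc->object = &slab->objects[slab->next_free++];
	return slab;
}
```

The design motivation is visible even in this toy: later patches need the caller to keep working with the slab itself (for freezing and list bookkeeping), so the slab, not the object, becomes the primary return value.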
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 5/7] slub: Introduce freeze_slab()
Date: Tue, 24 Oct 2023 09:33:43 +0000
Message-Id: <20231024093345.3676493-6-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou

Later patches will take slabs off the node partial list without freezing
them, so we need a freeze_slab() function to freeze such a partial slab
and return its freelist.
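The freeze operation described above is a single atomic transition of the slab's freelist/counters pair, retried until it succeeds. As a rough userspace illustration of that retry pattern (a toy model, not the kernel code: the names, the bit packing, and the single 64-bit word standing in for the freelist/counters cmpxchg are all invented here), a freeze loop might look like:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/*
 * Toy slab state: frozen bit, inuse count and freelist head packed
 * into one 64-bit word so a single compare-exchange can update them
 * together, standing in for the kernel's __slab_update_freelist().
 */
#define FROZEN_BIT    (1ULL << 63)
#define INUSE_SHIFT   32
#define INUSE_MASK    (0xffffULL << INUSE_SHIFT)
#define FREELIST_MASK 0xffffffffULL   /* freelist head as an object index */
#define FREELIST_END  0xffffffffULL   /* sentinel: empty freelist */

struct toy_slab {
	_Atomic uint64_t state;
	unsigned int objects;         /* capacity of the slab */
};

/* Freeze the slab: take the whole freelist, mark all objects in use. */
static uint64_t toy_freeze_slab(struct toy_slab *slab)
{
	uint64_t old, new;

	do {
		old = atomic_load(&slab->state);
		assert(!(old & FROZEN_BIT));   /* like VM_BUG_ON(new.frozen) */

		/* freeze, mark every object in use, empty the freelist */
		new = FROZEN_BIT
		    | ((uint64_t)slab->objects << INUSE_SHIFT)
		    | FREELIST_END;
	} while (!atomic_compare_exchange_weak(&slab->state, &old, new));

	return old & FREELIST_MASK;   /* previous freelist head for the caller */
}
```

The shape mirrors the patch: read the current state, build the frozen state, and retry on a lost race, with the old freelist handed back to the caller exactly as freeze_slab() hands it to ___slab_alloc().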
Signed-off-by: Chengming Zhou
Reviewed-by: Vlastimil Babka
---
 mm/slub.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 7d0234bffad3..5b428648021f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3079,6 +3079,33 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
 	return freelist;
 }
 
+/*
+ * Freeze the partial slab and return the pointer to the freelist.
+ */
+static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
+{
+	struct slab new;
+	unsigned long counters;
+	void *freelist;
+
+	do {
+		freelist = slab->freelist;
+		counters = slab->counters;
+
+		new.counters = counters;
+		VM_BUG_ON(new.frozen);
+
+		new.inuse = slab->objects;
+		new.frozen = 1;
+
+	} while (!__slab_update_freelist(s, slab,
+				freelist, counters,
+				NULL, new.counters,
+				"freeze_slab"));
+
+	return freelist;
+}
+
 /*
  * Slow path. The lockless freelist is empty or we need to perform
  * debugging duties.

From patchwork Tue Oct 24 09:33:44 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 157351
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 6/7] slub: Delay freezing of partial slabs
Date: Tue, 24 Oct 2023 09:33:44 +0000
Message-Id: <20231024093345.3676493-7-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou

Currently we freeze slabs when moving them out of the node partial list
to the cpu partial list; this needs two cmpxchg_double operations:

1. freeze the slab (acquire_slab()) under the node list_lock
2. get_freelist() when the slab is picked in ___slab_alloc()

Actually we don't need to freeze when moving slabs out of the node
partial list; we can delay freezing until the slab's freelist is used
in ___slab_alloc(), saving one cmpxchg_double(). There are other good
points as well:

 - The moving of slabs between the node partial list and the cpu
   partial list becomes simpler, since we don't need to freeze or
   unfreeze at all.

 - Contention on the node list_lock is reduced, since we don't need to
   freeze any slab under the node list_lock.

We can achieve this because no concurrent path manipulates the partial
slab list except __slab_free(), which is serialized now.

Since the slab returned by the get_partial() interfaces is not frozen
anymore and no freelist is stored in the partial_context, we need to
use the newly introduced freeze_slab() to freeze it and get its
freelist. Similarly, slabs on the cpu partial list are not frozen
anymore; we need to freeze_slab() them before use.

Signed-off-by: Chengming Zhou
Reviewed-by: Vlastimil Babka
---
 mm/slub.c | 111 +++++++++++-------------------------------------------
 1 file changed, 21 insertions(+), 90 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 5b428648021f..486d44421432 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2215,51 +2215,6 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s,
 	return object;
 }
 
-/*
- * Remove slab from the partial list, freeze it and
- * return the pointer to the freelist.
- *
- * Returns a list of objects or NULL if it fails.
- */
-static inline void *acquire_slab(struct kmem_cache *s,
-		struct kmem_cache_node *n, struct slab *slab,
-		int mode)
-{
-	void *freelist;
-	unsigned long counters;
-	struct slab new;
-
-	lockdep_assert_held(&n->list_lock);
-
-	/*
-	 * Zap the freelist and set the frozen bit.
-	 * The old freelist is the list of objects for the
-	 * per cpu allocation list.
-	 */
-	freelist = slab->freelist;
-	counters = slab->counters;
-	new.counters = counters;
-	if (mode) {
-		new.inuse = slab->objects;
-		new.freelist = NULL;
-	} else {
-		new.freelist = freelist;
-	}
-
-	VM_BUG_ON(new.frozen);
-	new.frozen = 1;
-
-	if (!__slab_update_freelist(s, slab,
-			freelist, counters,
-			new.freelist, new.counters,
-			"acquire_slab"))
-		return NULL;
-
-	remove_partial(n, slab);
-	WARN_ON(!freelist);
-	return freelist;
-}
-
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain);
 #else
@@ -2276,7 +2231,6 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 			      struct partial_context *pc)
 {
 	struct slab *slab, *slab2, *partial = NULL;
-	void *object = NULL;
 	unsigned long flags;
 	unsigned int partial_slabs = 0;
 
@@ -2295,7 +2249,7 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 			continue;
 
 		if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
-			object = alloc_single_from_partial(s, n, slab,
+			void *object = alloc_single_from_partial(s, n, slab,
 							pc->orig_size);
 			if (object) {
 				partial = slab;
@@ -2305,13 +2259,10 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 			continue;
 		}
 
-		object = acquire_slab(s, n, slab, object == NULL);
-		if (!object)
-			break;
+		remove_partial(n, slab);
 
 		if (!partial) {
 			partial = slab;
-			pc->object = object;
 			stat(s, ALLOC_FROM_PARTIAL);
 		} else {
 			put_cpu_partial(s, slab, 0);
@@ -2610,9 +2561,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 	unsigned long flags = 0;
 
 	while (partial_slab) {
-		struct slab new;
-		struct slab old;
-
 		slab = partial_slab;
 		partial_slab = slab->next;
 
@@ -2625,23 +2573,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 			spin_lock_irqsave(&n->list_lock, flags);
 		}
 
-		do {
-
-			old.freelist = slab->freelist;
-			old.counters = slab->counters;
-			VM_BUG_ON(!old.frozen);
-
-			new.counters = old.counters;
-			new.freelist = old.freelist;
-
-			new.frozen = 0;
-
-		} while (!__slab_update_freelist(s, slab,
-				old.freelist, old.counters,
-				new.freelist, new.counters,
-				"unfreezing slab"));
-
-		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
+		if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
 			slab->next = slab_to_discard;
 			slab_to_discard = slab;
 		} else {
@@ -3148,7 +3080,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		node = NUMA_NO_NODE;
 		goto new_slab;
 	}
-redo:
 
 	if (unlikely(!node_match(slab, node))) {
 		/*
@@ -3224,7 +3155,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 new_slab:
 
-	if (slub_percpu_partial(c)) {
+	while (slub_percpu_partial(c)) {
 		local_lock_irqsave(&s->cpu_slab->lock, flags);
 		if (unlikely(c->slab)) {
 			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
@@ -3236,11 +3167,20 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			goto new_objects;
 		}
 
-		slab = c->slab = slub_percpu_partial(c);
+		slab = slub_percpu_partial(c);
 		slub_set_percpu_partial(c, slab);
 		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, CPU_PARTIAL_ALLOC);
-		goto redo;
+
+		if (unlikely(!node_match(slab, node) ||
+			     !pfmemalloc_match(slab, gfpflags))) {
+			slab->next = NULL;
+			__unfreeze_partials(s, slab);
+			continue;
+		}
+
+		freelist = freeze_slab(s, slab);
+		goto retry_load_slab;
 	}
 
 new_objects:
@@ -3249,8 +3189,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	pc.orig_size = orig_size;
 	slab = get_partial(s, node, &pc);
 	if (slab) {
-		freelist = pc.object;
 		if (kmem_cache_debug(s)) {
+			freelist = pc.object;
 			/*
 			 * For debug caches here we had to go through
 			 * alloc_single_from_partial() so just store the
@@ -3262,6 +3202,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			return freelist;
 		}
 
+		freelist = freeze_slab(s, slab);
 		goto retry_load_slab;
 	}
 
@@ -3663,18 +3604,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		was_frozen = new.frozen;
 		new.inuse -= cnt;
 		if ((!new.inuse || !prior) && !was_frozen) {
-
-			if (kmem_cache_has_cpu_partial(s) && !prior) {
-
-				/*
-				 * Slab was on no list before and will be
-				 * partially empty
-				 * We can defer the list move and instead
-				 * freeze it.
-				 */
-				new.frozen = 1;
-
-			} else { /* Needs to be taken off a list */
+			/* Needs to be taken off a list */
+			if (!kmem_cache_has_cpu_partial(s) || prior) {
 
 				n = get_node(s, slab_nid(slab));
 				/*
@@ -3704,9 +3635,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 * activity can be necessary.
 		 */
 		stat(s, FREE_FROZEN);
-	} else if (new.frozen) {
+	} else if (kmem_cache_has_cpu_partial(s) && !prior) {
 		/*
-		 * If we just froze the slab then put it onto the
+		 * If we started with a full slab then put it onto the
 		 * per cpu partial list.
 		 */
 		put_cpu_partial(s, slab, 1);

From patchwork Tue Oct 24 09:33:45 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 157352
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 7/7] slub: Optimize deactivate_slab()
Date: Tue, 24 Oct 2023 09:33:45 +0000
Message-Id: <20231024093345.3676493-8-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou

Since the introduction of unfrozen slabs on the cpu partial list, we
don't need to synchronize the slab frozen state under the node
list_lock. The caller of deactivate_slab() and the caller of
__slab_free() won't manipulate the slab list concurrently, so we can
take the node list_lock in the last stage only if we really need to
manipulate the slab list in this path.

Signed-off-by: Chengming Zhou
---
 mm/slub.c | 70 ++++++++++++++++++++-----------------------------
 1 file changed, 25 insertions(+), 45 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 486d44421432..64d550e415eb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2449,10 +2449,8 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
 static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 			    void *freelist)
 {
-	enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
 	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
 	int free_delta = 0;
-	enum slab_modes mode = M_NONE;
 	void *nextfree, *freelist_iter, *freelist_tail;
 	int tail = DEACTIVATE_TO_HEAD;
 	unsigned long flags = 0;
@@ -2499,58 +2497,40 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 	 * unfrozen and number of objects in the slab may have changed.
 	 * Then release lock and retry cmpxchg again.
 	 */
-redo:
-
-	old.freelist = READ_ONCE(slab->freelist);
-	old.counters = READ_ONCE(slab->counters);
-	VM_BUG_ON(!old.frozen);
-
-	/* Determine target state of the slab */
-	new.counters = old.counters;
-	if (freelist_tail) {
-		new.inuse -= free_delta;
-		set_freepointer(s, freelist_tail, old.freelist);
-		new.freelist = freelist;
-	} else
-		new.freelist = old.freelist;
+	do {
+		old.freelist = READ_ONCE(slab->freelist);
+		old.counters = READ_ONCE(slab->counters);
+		VM_BUG_ON(!old.frozen);
+
+		/* Determine target state of the slab */
+		new.counters = old.counters;
+		new.frozen = 0;
+		if (freelist_tail) {
+			new.inuse -= free_delta;
+			set_freepointer(s, freelist_tail, old.freelist);
+			new.freelist = freelist;
+		} else
+			new.freelist = old.freelist;
 
-	new.frozen = 0;
+	} while (!slab_update_freelist(s, slab,
+				old.freelist, old.counters,
+				new.freelist, new.counters,
+				"unfreezing slab"));
 
+	/*
+	 * Stage three: Manipulate the slab list based on the updated state.
+	 */
 	if (!new.inuse && n->nr_partial >= s->min_partial) {
-		mode = M_FREE;
+		stat(s, DEACTIVATE_EMPTY);
+		discard_slab(s, slab);
+		stat(s, FREE_SLAB);
 	} else if (new.freelist) {
-		mode = M_PARTIAL;
-		/*
-		 * Taking the spinlock removes the possibility that
-		 * acquire_slab() will see a slab that is frozen
-		 */
 		spin_lock_irqsave(&n->list_lock, flags);
-	} else {
-		mode = M_FULL_NOLIST;
-	}
-
-
-	if (!slab_update_freelist(s, slab,
-				old.freelist, old.counters,
-				new.freelist, new.counters,
-				"unfreezing slab")) {
-		if (mode == M_PARTIAL)
-			spin_unlock_irqrestore(&n->list_lock, flags);
-		goto redo;
-	}
-
-
-	if (mode == M_PARTIAL) {
 		add_partial(n, slab, tail);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 		stat(s, tail);
-	} else if (mode == M_FREE) {
-		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, slab);
-		stat(s, FREE_SLAB);
-	} else if (mode == M_FULL_NOLIST) {
+	} else
 		stat(s, DEACTIVATE_FULL);
-	}
 }
 
 #ifdef CONFIG_SLUB_CPU_PARTIAL
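The structure this last patch converts to — a lock-free retry loop first, list manipulation decided afterwards from the updated state — can be sketched in userspace roughly as follows (a toy model with invented names and packing; the real code updates the freelist/counters pair via slab_update_freelist() and takes the node list_lock only for the add_partial() case):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define FROZEN_BIT (1ULL << 63)
#define INUSE_MASK 0xffffffffULL   /* inuse count in the low bits */

enum list_action { DISCARD_SLAB, ADD_PARTIAL, KEEP_FULL };

/*
 * Stage one: unfreeze the slab and return freed objects in a single
 * compare-exchange retry loop, without holding any list lock.
 * Stage two: pick the list action from the *updated* state only.
 */
static enum list_action toy_deactivate(_Atomic uint64_t *state,
				       unsigned int freed,
				       unsigned int objects,
				       unsigned long nr_partial,
				       unsigned long min_partial)
{
	uint64_t old, new;

	do {
		old = atomic_load(state);
		assert(old & FROZEN_BIT);           /* like VM_BUG_ON(!old.frozen) */
		new = (old & ~FROZEN_BIT) - freed;  /* clear frozen, drop inuse */
	} while (!atomic_compare_exchange_weak(state, &old, new));

	if ((new & INUSE_MASK) == 0 && nr_partial >= min_partial)
		return DISCARD_SLAB;   /* empty and the partial list is full enough */
	if ((new & INUSE_MASK) < objects)
		return ADD_PARTIAL;    /* the list lock would be taken only here */
	return KEEP_FULL;
}
```

Compared to the removed goto-redo version, a failed compare-exchange never has to drop and retake a spinlock, because no lock is held across the retry loop at all.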