From patchwork Tue Oct 17 15:44:35 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 154369
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev,
    Chengming Zhou
Subject: [RFC PATCH 1/5] slub: Introduce on_partial()
Date: Tue, 17 Oct 2023 15:44:35 +0000
Message-Id: <20231017154439.3036608-2-chengming.zhou@linux.dev>
In-Reply-To: <20231017154439.3036608-1-chengming.zhou@linux.dev>
References: <20231017154439.3036608-1-chengming.zhou@linux.dev>
MIME-Version: 1.0

From: Chengming Zhou

We change slab->__unused to slab->flags to use it for SLUB_FLAGS, which
for now only includes the SF_NODE_PARTIAL flag. It indicates whether or
not the slab is on the node partial list.
The following patches will stop freezing the slab when moving it from the
node partial list to the cpu partial list, so we can no longer rely on the
frozen bit to tell whether we may manipulate slab->slab_list. Instead we
will rely on this SF_NODE_PARTIAL flag, which is protected by the node
list_lock.

Signed-off-by: Chengming Zhou
---
 mm/slab.h |  2 +-
 mm/slub.c | 28 ++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/mm/slab.h b/mm/slab.h
index 8cd3294fedf5..11e9c9a0f648 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -89,7 +89,7 @@ struct slab {
 		};
 		struct rcu_head rcu_head;
 	};
-	unsigned int __unused;
+	unsigned int flags;
 
 #else
 #error "Unexpected slab allocator configured"
diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..e5356ad14951 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1993,6 +1993,12 @@ static inline bool shuffle_freelist(struct kmem_cache *s, struct slab *slab)
 }
 #endif /* CONFIG_SLAB_FREELIST_RANDOM */
 
+enum SLUB_FLAGS {
+	SF_INIT_VALUE = 0,
+	SF_EXIT_VALUE = -1,
+	SF_NODE_PARTIAL = 1 << 0,
+};
+
 static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
 	struct slab *slab;
@@ -2031,6 +2037,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	slab->objects = oo_objects(oo);
 	slab->inuse = 0;
 	slab->frozen = 0;
+	slab->flags = SF_INIT_VALUE;
 
 	account_slab(slab, oo_order(oo), s, flags);
 
@@ -2077,6 +2084,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	int order = folio_order(folio);
 	int pages = 1 << order;
 
+	slab->flags = SF_EXIT_VALUE;
 	__slab_clear_pfmemalloc(slab);
 	folio->mapping = NULL;
 	/* Make the mapping reset visible before clearing the flag */
@@ -2119,9 +2127,28 @@ static void discard_slab(struct kmem_cache *s, struct slab *slab)
 /*
  * Management of partially allocated slabs.
  */
+static void ___add_partial(struct kmem_cache_node *n, struct slab *slab)
+{
+	lockdep_assert_held(&n->list_lock);
+	slab->flags |= SF_NODE_PARTIAL;
+}
+
+static void ___remove_partial(struct kmem_cache_node *n, struct slab *slab)
+{
+	lockdep_assert_held(&n->list_lock);
+	slab->flags &= ~SF_NODE_PARTIAL;
+}
+
+static inline bool on_partial(struct kmem_cache_node *n, struct slab *slab)
+{
+	lockdep_assert_held(&n->list_lock);
+	return slab->flags & SF_NODE_PARTIAL;
+}
+
 static inline void
 __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
 {
+	___add_partial(n, slab);
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
 		list_add_tail(&slab->slab_list, &n->partial);
@@ -2142,6 +2169,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 	lockdep_assert_held(&n->list_lock);
 	list_del(&slab->slab_list);
 	n->nr_partial--;
+	___remove_partial(n, slab);
 }
 
 /*
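As a quick illustration of the intended usage (an editorial sketch, not part of
the patch): a caller that needs to know whether a slab is still on the node
partial list would take n->list_lock and test on_partial() instead of the
frozen bit. The function name example_try_isolate() below is hypothetical;
on_partial() and remove_partial() are the helpers touched by this patch, and
both assert that list_lock is held.

	/* Sketch only: detach a slab from the node partial list if it is still there. */
	static bool example_try_isolate(struct kmem_cache_node *n, struct slab *slab)
	{
		unsigned long flags;
		bool isolated = false;

		spin_lock_irqsave(&n->list_lock, flags);
		if (on_partial(n, slab)) {
			/* SF_NODE_PARTIAL only changes under list_lock, so this test is stable. */
			remove_partial(n, slab);
			isolated = true;
		}
		spin_unlock_irqrestore(&n->list_lock, flags);

		return isolated;
	}

The design point is that, unlike the frozen bit, the flag is set and cleared
only in ___add_partial()/___remove_partial() under the same list_lock, so a
check like the one above stays correct once later patches stop freezing slabs
that move to the cpu partial list.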