Message ID | 20221014114322.97512-1-42.hyeyoo@gmail.com |
---|---|
State | New |
Headers |
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>, David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>, Andrew Morton <akpm@linux-foundation.org>, Vlastimil Babka <vbabka@suse.cz>, Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm/slub: remove dead code for debug caches on deactivate_slab()
Date: Fri, 14 Oct 2022 20:43:22 +0900
Message-Id: <20221014114322.97512-1-42.hyeyoo@gmail.com> |
Series | mm/slub: remove dead code for debug caches on deactivate_slab() |
Commit Message
Hyeonggon Yoo
Oct. 14, 2022, 11:43 a.m. UTC
After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
caches and make it safe"), SLUB does not take a slab from partial list for
debug caches. As deactivation isn't needed anymore, remove dead code.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/slub.c | 16 ++--------------
1 file changed, 2 insertions(+), 14 deletions(-)
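
For illustration, here is a standalone user-space sketch of the mode selection that remains in deactivate_slab() after this patch. struct fake_slab and pick_mode() are simplified stand-ins, not the kernel's types or API; only the branch structure mirrors mm/slub.c.

/*
 * Minimal model of deactivate_slab()'s remaining modes: with M_FULL
 * gone, an unfrozen percpu slab is discarded, put on the node partial
 * list, or merely counted as full -- never added to the full list.
 */
#include <stdio.h>

enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };

struct fake_slab {
	int inuse;		/* objects still allocated from this slab */
	void *freelist;		/* free objects after merging the percpu freelist */
};

static enum slab_modes pick_mode(const struct fake_slab *slab,
				 int nr_partial, int min_partial)
{
	if (!slab->inuse && nr_partial >= min_partial)
		return M_FREE;		/* empty and enough partial slabs: discard */
	if (slab->freelist)
		return M_PARTIAL;	/* free objects left: node partial list */
	return M_FULL_NOLIST;		/* fully allocated: count the stat, no list */
}

int main(void)
{
	struct fake_slab full = { .inuse = 8, .freelist = NULL };
	struct fake_slab partial = { .inuse = 3, .freelist = &partial };
	struct fake_slab empty = { .inuse = 0, .freelist = &empty };

	printf("full=%d partial=%d empty=%d\n",
	       pick_mode(&full, 2, 5), pick_mode(&partial, 2, 5),
	       pick_mode(&empty, 5, 5));
	return 0;
}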
Comments
On 10/14/22 13:43, Hyeonggon Yoo wrote:
> After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
> caches and make it safe"), SLUB does not take a slab from partial list for

I'm confused by "SLUB does not take a slab from partial list" here. Did you
mean something like "SLUB never installs (even temporarily) a percpu slab
for debug caches"? So that means we never deactivate percpu slabs for debug
caches. And since debug caches are also the only ones that use the full
list, we no longer need to care about the full list in deactivate_slab(),
right?

> debug caches. As deactivation isn't needed anymore, remove dead code.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Otherwise it looks correct to me, just wanted to clarify I'm not missing
something.

> ---
>  mm/slub.c | 16 ++--------------
>  1 file changed, 2 insertions(+), 14 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 96dd392d7f99..e2215240954d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2411,7 +2411,7 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
>  static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>  			    void *freelist)
>  {
> -	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE, M_FULL_NOLIST };
> +	enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
>  	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
>  	int free_delta = 0;
>  	enum slab_modes mode = M_NONE;
> @@ -2487,14 +2487,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>  		 * acquire_slab() will see a slab that is frozen
>  		 */
>  		spin_lock_irqsave(&n->list_lock, flags);
> -	} else if (kmem_cache_debug_flags(s, SLAB_STORE_USER)) {
> -		mode = M_FULL;
> -		/*
> -		 * This also ensures that the scanning of full
> -		 * slabs from diagnostic functions will not see
> -		 * any frozen slabs.
> -		 */
> -		spin_lock_irqsave(&n->list_lock, flags);
>  	} else {
>  		mode = M_FULL_NOLIST;
>  	}
> @@ -2504,7 +2496,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>  			    old.freelist, old.counters,
>  			    new.freelist, new.counters,
>  			    "unfreezing slab")) {
> -		if (mode == M_PARTIAL || mode == M_FULL)
> +		if (mode == M_PARTIAL)
>  			spin_unlock_irqrestore(&n->list_lock, flags);
>  		goto redo;
>  	}
> @@ -2518,10 +2510,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>  		stat(s, DEACTIVATE_EMPTY);
>  		discard_slab(s, slab);
>  		stat(s, FREE_SLAB);
> -	} else if (mode == M_FULL) {
> -		add_full(s, n, slab);
> -		spin_unlock_irqrestore(&n->list_lock, flags);
> -		stat(s, DEACTIVATE_FULL);
>  	} else if (mode == M_FULL_NOLIST) {
>  		stat(s, DEACTIVATE_FULL);
>  	}
On Fri, Oct 21, 2022 at 12:43:42PM +0200, Vlastimil Babka wrote:
> On 10/14/22 13:43, Hyeonggon Yoo wrote:
> > After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
> > caches and make it safe"), SLUB does not take a slab from partial list for
>
> I'm confused by "SLUB does not take a slab from partial list" here. Did you
> mean something like "SLUB never installs (even temporarily) a percpu slab
> for debug caches"?

Yes.

> So that means we never deactivate percpu slabs for debug
> caches.

Yes.

> And since debug caches are also the only ones that use the full
> list, we no longer need to care about the full list in deactivate_slab(), right?

Yes, You got it right, exactly!

Let me rephrase:

"After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
caches and make it safe"), SLUB never installs percpu slab for debug caches
and thus never deactivates percpu slab for them.

Since only some of debug caches care about the full list, SLUB no longer
deactivates to full list. Remove dead code in deactivate_slab()."

Feel free to change this ;-)

> > debug caches. As deactivation isn't needed anymore, remove dead code.
> >
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>
> Otherwise it looks correct to me, just wanted to clarify I'm not missing
> something.

You are not missing anything. Thank you for clarification.

Hyeonggon

> > ---
> >  mm/slub.c | 16 ++--------------
> >  1 file changed, 2 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 96dd392d7f99..e2215240954d 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2411,7 +2411,7 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
> >  static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >  			    void *freelist)
> >  {
> > -	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE, M_FULL_NOLIST };
> > +	enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
> >  	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> >  	int free_delta = 0;
> >  	enum slab_modes mode = M_NONE;
> > @@ -2487,14 +2487,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >  		 * acquire_slab() will see a slab that is frozen
> >  		 */
> >  		spin_lock_irqsave(&n->list_lock, flags);
> > -	} else if (kmem_cache_debug_flags(s, SLAB_STORE_USER)) {
> > -		mode = M_FULL;
> > -		/*
> > -		 * This also ensures that the scanning of full
> > -		 * slabs from diagnostic functions will not see
> > -		 * any frozen slabs.
> > -		 */
> > -		spin_lock_irqsave(&n->list_lock, flags);
> >  	} else {
> >  		mode = M_FULL_NOLIST;
> >  	}
> > @@ -2504,7 +2496,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >  			    old.freelist, old.counters,
> >  			    new.freelist, new.counters,
> >  			    "unfreezing slab")) {
> > -		if (mode == M_PARTIAL || mode == M_FULL)
> > +		if (mode == M_PARTIAL)
> >  			spin_unlock_irqrestore(&n->list_lock, flags);
> >  		goto redo;
> >  	}
> > @@ -2518,10 +2510,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >  		stat(s, DEACTIVATE_EMPTY);
> >  		discard_slab(s, slab);
> >  		stat(s, FREE_SLAB);
> > -	} else if (mode == M_FULL) {
> > -		add_full(s, n, slab);
> > -		spin_unlock_irqrestore(&n->list_lock, flags);
> > -		stat(s, DEACTIVATE_FULL);
> >  	} else if (mode == M_FULL_NOLIST) {
> >  		stat(s, DEACTIVATE_FULL);
> >  	}
>
On 10/22/22 06:14, Hyeonggon Yoo wrote:
> On Fri, Oct 21, 2022 at 12:43:42PM +0200, Vlastimil Babka wrote:
>> On 10/14/22 13:43, Hyeonggon Yoo wrote:
>> > After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
>> > caches and make it safe"), SLUB does not take a slab from partial list for
>>
>> I'm confused by "SLUB does not take a slab from partial list" here. Did you
>> mean something like "SLUB never installs (even temporarily) a percpu slab
>> for debug caches"?
>
> Yes.
>
>> So that means we never deactivate percpu slabs for debug
>> caches.
>
> Yes.
>
>> And since debug caches are also the only ones that use the full
>> list, we no longer need to care about the full list in deactivate_slab(), right?
>
> Yes, You got it right, exactly!
>
> Let me rephrase:
>
> "After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
> caches and make it safe"), SLUB never installs percpu slab for debug caches
> and thus never deactivates percpu slab for them.
>
> Since only some of debug caches care about the full list, SLUB no longer
> deactivates to full list. Remove dead code in deactivate_slab()."
>
> Feel free to change this ;-)

Great, thanks! Pushed to slab/for-6.2/cleanups
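
To make the invariant agreed in this exchange concrete, here is a toy model (hypothetical types, not kernel code) of why the removed M_FULL path was unreachable: since commit c7323a5ad0786, debug caches never freeze a slab as the percpu slab, and deactivate_slab() only ever runs on frozen percpu slabs.

#include <assert.h>
#include <stdbool.h>

struct toy_cache { bool debug; };
struct toy_slab { bool frozen; };

/* Allocation: only non-debug caches install (freeze) a percpu slab. */
static void toy_alloc(struct toy_cache *s, struct toy_slab *slab)
{
	if (s->debug)
		return;		/* debug: object taken under list_lock, slab stays on node lists */
	slab->frozen = true;	/* non-debug: slab becomes the frozen percpu slab */
}

static void toy_deactivate(struct toy_cache *s, struct toy_slab *slab)
{
	assert(slab->frozen);	/* only called for frozen percpu slabs... */
	assert(!s->debug);	/* ...so a debug cache can never get here */
	slab->frozen = false;
}

int main(void)
{
	struct toy_cache dbg = { .debug = true }, plain = { .debug = false };
	struct toy_slab a = { .frozen = false }, b = { .frozen = false };

	toy_alloc(&dbg, &a);		/* a is never frozen: never deactivated */
	toy_alloc(&plain, &b);
	toy_deactivate(&plain, &b);	/* the only reachable caller pattern */
	return 0;
}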
diff --git a/mm/slub.c b/mm/slub.c
index 96dd392d7f99..e2215240954d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2411,7 +2411,7 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
 static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 			    void *freelist)
 {
-	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE, M_FULL_NOLIST };
+	enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
 	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
 	int free_delta = 0;
 	enum slab_modes mode = M_NONE;
@@ -2487,14 +2487,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 		 * acquire_slab() will see a slab that is frozen
 		 */
 		spin_lock_irqsave(&n->list_lock, flags);
-	} else if (kmem_cache_debug_flags(s, SLAB_STORE_USER)) {
-		mode = M_FULL;
-		/*
-		 * This also ensures that the scanning of full
-		 * slabs from diagnostic functions will not see
-		 * any frozen slabs.
-		 */
-		spin_lock_irqsave(&n->list_lock, flags);
 	} else {
 		mode = M_FULL_NOLIST;
 	}
@@ -2504,7 +2496,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 			    old.freelist, old.counters,
 			    new.freelist, new.counters,
 			    "unfreezing slab")) {
-		if (mode == M_PARTIAL || mode == M_FULL)
+		if (mode == M_PARTIAL)
 			spin_unlock_irqrestore(&n->list_lock, flags);
 		goto redo;
 	}
@@ -2518,10 +2510,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 		stat(s, DEACTIVATE_EMPTY);
 		discard_slab(s, slab);
 		stat(s, FREE_SLAB);
-	} else if (mode == M_FULL) {
-		add_full(s, n, slab);
-		spin_unlock_irqrestore(&n->list_lock, flags);
-		stat(s, DEACTIVATE_FULL);
 	} else if (mode == M_FULL_NOLIST) {
 		stat(s, DEACTIVATE_FULL);
 	}
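
The second and third hunks also show why the locking shrinks with M_FULL gone: the list_lock is now taken before the atomic commit only for M_PARTIAL, and dropped again if the commit loses a race. Below is a generic, standalone sketch of that take-lock/try-commit/redo pattern; the mutex, atomic counter, and mode derivation are illustrative stand-ins, not the kernel's cmpxchg_double_slab() machinery.

#include <stdatomic.h>
#include <pthread.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic int counters;

enum mode { MODE_PARTIAL, MODE_FULL_NOLIST };

static void commit(int new_counters)
{
	int old;
	enum mode mode;

redo:
	old = atomic_load(&counters);
	/* re-derive the mode from current state on every retry */
	mode = (old > 0) ? MODE_PARTIAL : MODE_FULL_NOLIST;

	if (mode == MODE_PARTIAL)
		pthread_mutex_lock(&list_lock);	/* list manipulation needs the lock */

	if (!atomic_compare_exchange_strong(&counters, &old, new_counters)) {
		if (mode == MODE_PARTIAL)
			pthread_mutex_unlock(&list_lock);
		goto redo;			/* lost the race: re-decide from scratch */
	}

	if (mode == MODE_PARTIAL) {
		/* ... move the item onto the shared list ... */
		pthread_mutex_unlock(&list_lock);
	}
}

int main(void)
{
	atomic_store(&counters, 3);
	commit(0);	/* locked path: commits and releases */
	commit(5);	/* lock-free path */
	return 0;
}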