Message ID | 20230405185427.1246289-2-yosryahmed@google.com |
---|---|
State | New |
Headers | Subject: [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
From: Yosry Ahmed <yosryahmed@google.com>
Date: Wed, 5 Apr 2023 18:54:26 +0000
Message-ID: <20230405185427.1246289-2-yosryahmed@google.com> |
Series | Ignore non-LRU-based reclaim in memcg reclaim |
Commit Message
Yosry Ahmed
April 5, 2023, 6:54 p.m. UTC
We keep track of different types of reclaimed pages through
reclaim_state->reclaimed_slab, and we add them to the reported number
of reclaimed pages. For non-memcg reclaim, this makes sense. For memcg
reclaim, we have no clue if those pages are charged to the memcg under
reclaim.
Slab pages are shared by different memcgs, so a freed slab page may have
only been partially charged to the memcg under reclaim. The same goes for
clean file pages from pruned inodes (on highmem systems) and xfs buffer
pages; there is currently no simple way to link them to the memcg under
reclaim.
Stop reporting those freed pages as reclaimed pages during memcg reclaim.
This should make the return value of writing to memory.reclaim more
accurate, and may help reduce unnecessary reclaim retries during memcg
charging. Writing to memory.reclaim on the root memcg is considered
cgroup_reclaim(), but for this case we want to include any freed pages,
so use the global_reclaim() check instead of !cgroup_reclaim().
Generally, this should make the return value of
try_to_free_mem_cgroup_pages() more accurate. In some limited cases
(e.g. when a freed slab page was mostly charged to the memcg under
reclaim), the return value of try_to_free_mem_cgroup_pages() can be
underestimated, but this should be fine. The freed pages will be
uncharged anyway, and we can charge the memcg the next time around, as
we usually do memcg reclaim in a retry loop.
The next patch performs some cleanups around reclaim_state and adds an
elaborate comment to the code explaining this. This patch is kept
minimal for easy backporting.
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Cc: stable@vger.kernel.org
---
global_reclaim(sc) does not exist in kernels before 6.3. It can be
replaced with:
!cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup)
---
mm/vmscan.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
Comments
On 05.04.23 20:54, Yosry Ahmed wrote:
> [...]
> Generally, this should make the return value of
> try_to_free_mem_cgroup_pages() more accurate. In some limited cases (e.g.
> freed a slab page that was mostly charged to the memcg under reclaim),
> the return value of try_to_free_mem_cgroup_pages() can be underestimated,
> but this should be fine. The freed pages will be uncharged anyway, and we

Can't we end up in extreme situations where
try_to_free_mem_cgroup_pages() returns close to 0, although a huge
amount of memory for that cgroup was freed up?

Can you extend on why "this should be fine"?

I suspect that overestimation might be worse than underestimation (see
my comment proposal below).

> can charge the memcg the next time around as we usually do memcg reclaim
> in a retry loop.

> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> Cc: stable@vger.kernel.org

Fixes: ?

Otherwise it's hard to judge how far to backport this.

> @@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
> 	vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
> 		   sc->nr_reclaimed - reclaimed);
>
> -	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> -	current->reclaim_state->reclaimed_slab = 0;

Worth adding a comment like

	/*
	 * Slab pages cannot universally be linked to a single memcg. So only
	 * account them as reclaimed during global reclaim. Note that we might
	 * underestimate the amount of memory reclaimed (but won't overestimate
	 * it).
	 */

but ...

> +	if (global_reclaim(sc)) {
> +		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> +		current->reclaim_state->reclaimed_slab = 0;
> +	}
>
> 	return success ? MEMCG_LRU_YOUNG : 0;
> }
> @@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>
> 	shrink_node_memcgs(pgdat, sc);

... do we want to factor the add+clear into a simple helper such that we
can have above comment there?

	static void cond_account_reclaimed_slab(reclaim_state, sc)
	{
		/*
		 * Slab pages cannot universally be linked to a single memcg. So
		 * only account them as reclaimed during global reclaim. Note
		 * that we might underestimate the amount of memory reclaimed
		 * (but won't overestimate it).
		 */
		if (global_reclaim(sc)) {
			sc->nr_reclaimed += reclaim_state->reclaimed_slab;
			reclaim_state->reclaimed_slab = 0;
		}
	}

Yes, effectively a couple LOC more, but still straight-forward for a
stable backport.

> -	if (reclaim_state) {
> +	if (reclaim_state && global_reclaim(sc)) {
> 		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> 		reclaim_state->reclaimed_slab = 0;
> 	}
Thanks for taking a look, David!

On Thu, Apr 6, 2023 at 3:31 AM David Hildenbrand <david@redhat.com> wrote:
> Can't we end up in extreme situations where
> try_to_free_mem_cgroup_pages() returns close to 0, although a huge
> amount of memory for that cgroup was freed up?
>
> Can you extend on why "this should be fine"?
>
> I suspect that overestimation might be worse than underestimation (see
> my comment proposal below).

In such extreme scenarios, even though try_to_free_mem_cgroup_pages()
would return an underestimated value, the freed memory for the cgroup
will be uncharged. try_charge() (and most callers of
try_to_free_mem_cgroup_pages()) do so in a retry loop, so even if
try_to_free_mem_cgroup_pages() returns an underestimated value,
charging will succeed the next time around.

The only case where this might be a problem is if it happens in the
final retry, but I guess we need to be *really* unlucky for this
extreme scenario to happen. One could argue that if we reach such a
situation the cgroup will probably OOM soon anyway.

> Fixes: ?
>
> Otherwise it's hard to judge how far to backport this.

It's hard to judge. The issue has been there for a while, but
memory.reclaim just made it more user-visible. I think we can
attribute it to per-object slab accounting, because before that any
freed slab pages in cgroup reclaim would be entirely charged to that
cgroup.

Although in all fairness, other types of freed pages that use
reclaim_state->reclaimed_slab but cannot be attributed to the cgroup
under reclaim have been there before that. I guess slab is the most
significant among them, though, so for the purposes of backporting I
guess:

Fixes: f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
instead of pages")

> ... do we want to factor the add+clear into a simple helper such that we
> can have above comment there?
>
> Yes, effectively a couple LOC more, but still straight-forward for a
> stable backport.

The next patch in the series performs some refactoring and cleanups,
among which we add a helper called flush_reclaim_state() that does
exactly that and contains a sizable comment. I left this outside of
this patch in v5 to make the effective change as small as possible for
backporting. Looks like it can be confusing without the comment, though.

How about I pull this part into this patch as well for v6?
On 06.04.23 16:07, Yosry Ahmed wrote:
> The next patch in the series performs some refactoring and cleanups,
> among which we add a helper called flush_reclaim_state() that does
> exactly that and contains a sizable comment. I left this outside of
> this patch in v5 to make the effective change as small as possible for
> backporting. Looks like it can be confusing without the comment, though.
>
> How about I pull this part into this patch as well for v6?

As long as it's a helper similar to what I proposed, I think that makes
a lot of sense (and doesn't particularly bloat this patch).
On Thu, Apr 6, 2023 at 10:50 AM David Hildenbrand <david@redhat.com> wrote:
> As long as it's a helper similar to what I proposed, I think that makes
> a lot of sense (and doesn't particularly bloat this patch).

Sounds good to me, I will do that and respin. Thanks David!
On Thu, 6 Apr 2023 12:30:56 +0200 David Hildenbrand <david@redhat.com> wrote:
> Otherwise it's hard to judge how far to backport this.
The case for backporting sounded rather unconvincing to me, which is
why I'm still sitting on the v4 series.
What are your thoughts on the desirability of a backport?
It makes sense to design the forthcoming v6 series for backportability,
so that even if we decide "no", others can still take it easily if they
wish to.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c1c5e8b24b8f..c82bd89f90364 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 	vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
 		   sc->nr_reclaimed - reclaimed);
 
-	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
-	current->reclaim_state->reclaimed_slab = 0;
+	if (global_reclaim(sc)) {
+		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
+		current->reclaim_state->reclaimed_slab = 0;
+	}
 
 	return success ? MEMCG_LRU_YOUNG : 0;
 }
@@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 	shrink_node_memcgs(pgdat, sc);
 
-	if (reclaim_state) {
+	if (reclaim_state && global_reclaim(sc)) {
 		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
 		reclaim_state->reclaimed_slab = 0;
 	}