Message ID | 20230413104034.1086717-2-yosryahmed@google.com
---|---
State | New
Headers |
Date: Thu, 13 Apr 2023 10:40:32 +0000
Subject: [PATCH v6 1/3] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton <akpm@linux-foundation.org>, Alexander Viro <viro@zeniv.linux.org.uk>, "Darrick J. Wong" <djwong@kernel.org>, Christoph Lameter <cl@linux.com>, David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>, Vlastimil Babka <vbabka@suse.cz>, Roman Gushchin <roman.gushchin@linux.dev>, Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Matthew Wilcox (Oracle)" <willy@infradead.org>, Miaohe Lin <linmiaohe@huawei.com>, David Hildenbrand <david@redhat.com>, Johannes Weiner <hannes@cmpxchg.org>, Peter Xu <peterx@redhat.com>, NeilBrown <neilb@suse.de>, Shakeel Butt <shakeelb@google.com>, Michal Hocko <mhocko@kernel.org>, Yu Zhao <yuzhao@google.com>, Dave Chinner <david@fromorbit.com>, Tim Chen <tim.c.chen@linux.intel.com>
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed <yosryahmed@google.com>
In-Reply-To: <20230413104034.1086717-1-yosryahmed@google.com>
References: <20230413104034.1086717-1-yosryahmed@google.com>
Message-ID: <20230413104034.1086717-2-yosryahmed@google.com>
Series | Ignore non-LRU-based reclaim in memcg reclaim
Commit Message
Yosry Ahmed
April 13, 2023, 10:40 a.m. UTC
We keep track of different types of reclaimed pages through
reclaim_state->reclaimed_slab, and we add them to the reported number
of reclaimed pages. For non-memcg reclaim, this makes sense. For memcg
reclaim, we have no clue if those pages are charged to the memcg under
reclaim.
Slab pages are shared by different memcgs, so a freed slab page may have
only been partially charged to the memcg under reclaim. The same goes for
clean file pages from pruned inodes (on highmem systems) and xfs buffer
pages: there is currently no simple way to link them to the memcg under
reclaim.
Stop reporting those freed pages as reclaimed pages during memcg reclaim.
This should make the return value of writing to memory.reclaim more
accurate, and may help reduce unnecessary reclaim retries during memcg
charging. Writing to memory.reclaim on the root memcg is considered
cgroup_reclaim(), but for this case we want to include any freed pages,
so use the global_reclaim() check instead of !cgroup_reclaim().
Generally, this should make the return value of
try_to_free_mem_cgroup_pages() more accurate. In some limited cases (e.g.
freed a slab page that was mostly charged to the memcg under reclaim),
the return value of try_to_free_mem_cgroup_pages() can be underestimated,
but this should be fine. The freed pages will be uncharged anyway, and we
can charge the memcg the next time around as we usually do memcg reclaim
in a retry loop.
Fixes: f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
mm/vmscan.c | 49 ++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 42 insertions(+), 7 deletions(-)
Comments
On Thu, Apr 13, 2023 at 3:40 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> [...]
> Fixes: f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
> instead of pages")

Andrew, I removed the CC: stable as you were sceptical about the need for
a backport, but left the Fixes tag so that it's easy to identify where to
backport it if you and/or the stable maintainers decide otherwise.
On 13.04.23 12:40, Yosry Ahmed wrote:
> [...]

LGTM, hopefully the underestimation won't result in a real issue.

Acked-by: David Hildenbrand <david@redhat.com>
On Thu, Apr 13, 2023 at 4:16 AM David Hildenbrand <david@redhat.com> wrote:
> [...]
> LGTM, hopefully the underestimation won't result in a real issue.
>
> Acked-by: David Hildenbrand <david@redhat.com>

Thanks!
On Thu 13-04-23 10:40:32, Yosry Ahmed wrote:
> [...]

Acked-by: Michal Hocko <mhocko@suse.com>
On Thu, Apr 13, 2023 at 3:40 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> [...]
> +	if (current->reclaim_state && global_reclaim(sc)) {
> +		sc->nr_reclaimed += current->reclaim_state->reclaimed;
> +		current->reclaim_state->reclaimed = 0;

Ugh.. this breaks the build. This should have been
current->reclaim_state->reclaimed_slab. It doesn't get renamed from
"reclaimed_slab" to "reclaimed" until the next patch. When I moved
flush_reclaim_state() from patch 2 to patch 1 I forgot to adjust it. My
bad.

The breakage is fixed by the very next patch, and the patches have
already landed in Linus's tree, so there isn't much that can be done at
this point. Sorry about that.

Just wondering, why wasn't this breakage caught by any of the build bots?

> +	}
> +}
> [...]
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c1c5e8b24b8..be657832be48 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -511,6 +511,46 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 }
 #endif
 
+/*
+ * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
+ * scan_control->nr_reclaimed.
+ */
+static void flush_reclaim_state(struct scan_control *sc)
+{
+	/*
+	 * Currently, reclaim_state->reclaimed includes three types of pages
+	 * freed outside of vmscan:
+	 * (1) Slab pages.
+	 * (2) Clean file pages from pruned inodes (on highmem systems).
+	 * (3) XFS freed buffer pages.
+	 *
+	 * For all of these cases, we cannot universally link the pages to a
+	 * single memcg. For example, a memcg-aware shrinker can free one object
+	 * charged to the target memcg, causing an entire page to be freed.
+	 * If we count the entire page as reclaimed from the memcg, we end up
+	 * overestimating the reclaimed amount (potentially under-reclaiming).
+	 *
+	 * Only count such pages for global reclaim to prevent under-reclaiming
+	 * from the target memcg; preventing unnecessary retries during memcg
+	 * charging and false positives from proactive reclaim.
+	 *
+	 * For uncommon cases where the freed pages were actually mostly
+	 * charged to the target memcg, we end up underestimating the reclaimed
+	 * amount. This should be fine. The freed pages will be uncharged
+	 * anyway, even if they are not counted here properly, and we will be
+	 * able to make forward progress in charging (which is usually in a
+	 * retry loop).
+	 *
+	 * We can go one step further, and report the uncharged objcg pages in
+	 * memcg reclaim, to make reporting more accurate and reduce
+	 * underestimation, but it's probably not worth the complexity for now.
+	 */
+	if (current->reclaim_state && global_reclaim(sc)) {
+		sc->nr_reclaimed += current->reclaim_state->reclaimed;
+		current->reclaim_state->reclaimed = 0;
+	}
+}
+
 static long xchg_nr_deferred(struct shrinker *shrinker,
 			     struct shrink_control *sc)
 {
@@ -5346,8 +5386,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 	vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
 		   sc->nr_reclaimed - reclaimed);
 
-	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
-	current->reclaim_state->reclaimed_slab = 0;
+	flush_reclaim_state(sc);
 
 	return success ? MEMCG_LRU_YOUNG : 0;
 }
@@ -6450,7 +6489,6 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 {
-	struct reclaim_state *reclaim_state = current->reclaim_state;
 	unsigned long nr_reclaimed, nr_scanned;
 	struct lruvec *target_lruvec;
 	bool reclaimable = false;
@@ -6472,10 +6510,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 	shrink_node_memcgs(pgdat, sc);
 
-	if (reclaim_state) {
-		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
-		reclaim_state->reclaimed_slab = 0;
-	}
+	flush_reclaim_state(sc);
 
 	/* Record the subtree's reclaim efficiency */
 	if (!sc->proactive)