Message ID | 20221201233317.1394958-1-almasrymina@google.com |
---|---|
State | New |
Series | [v1] mm: disable top-tier fallback to reclaim on proactive reclaim |
Commit Message
Mina Almasry
Dec. 1, 2022, 11:33 p.m. UTC
Reclaiming directly from top tier nodes breaks the aging pipeline of
memory tiers. If we have a RAM -> CXL -> storage hierarchy, we
should demote from RAM to CXL and from CXL to storage. If we reclaim
a page from RAM, it means we 'demote' it directly from RAM to storage,
potentially bypassing a huge number of pages in CXL that are colder than it.
However, disabling reclaim from top tier nodes entirely would cause OOMs
in edge scenarios where lower tier memory is unreclaimable for whatever
reason, e.g. memory being mlocked or too hot to reclaim. In these
cases we would rather the job run with a performance regression than
OOM altogether.
We can, however, disable reclaim from top tier nodes for proactive
reclaim. That reclaim is not driven by real memory pressure, so we have
no cause to break the aging pipeline.
Signed-off-by: Mina Almasry <almasrymina@google.com>
---
mm/vmscan.c | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)
--
2.39.0.rc0.267.gcb52ba06e7-goog
Comments
Mina Almasry <almasrymina@google.com> writes:

> Reclaiming directly from top tier nodes breaks the aging pipeline of
> memory tiers. If we have a RAM -> CXL -> storage hierarchy, we
> should demote from RAM to CXL and from CXL to storage. If we reclaim
> a page from RAM, it means we 'demote' it directly from RAM to storage,
> bypassing potentially a huge amount of pages colder than it in CXL.
>
> [...]
>
> -		/* Folios which weren't demoted go back on @folio_list for retry: */
> +		/*
> +		 * Folios which weren't demoted go back on @folio_list.
> +		 */

I don't think we should change the comment style here. Why not just

+		/* Folios which weren't demoted go back on @folio_list. */

Other than this, the patch LGTM, thanks!

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
On Thu, 1 Dec 2022 15:33:17 -0800 Mina Almasry <almasrymina@google.com> wrote:

> Reclaiming directly from top tier nodes breaks the aging pipeline of
> memory tiers. If we have a RAM -> CXL -> storage hierarchy, we
> should demote from RAM to CXL and from CXL to storage. If we reclaim
> a page from RAM, it means we 'demote' it directly from RAM to storage,
> bypassing potentially a huge amount of pages colder than it in CXL.
>
> [...]

Is this purely from code inspection, or are there quantitative
observations to be shared?
On Fri, Dec 2, 2022 at 1:38 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> Is this purely from code inspection, or are there quantitative
> observations to be shared?
>

This is from code inspection, but also it is by definition. Proactive
reclaim is when the userspace does:

  echo "1m" > /path/to/cgroup/memory.reclaim

At that point the kernel tries to proactively reclaim 1 MB from that
cgroup at the userspace's behest, regardless of the actual memory
pressure in the cgroup, so proactive reclaim is not real memory
pressure as I state in the commit message.

Proactive reclaim is triggered in the code by memory_reclaim():
https://elixir.bootlin.com/linux/v6.1-rc7/source/mm/memcontrol.c#L6572

Which sets MEMCG_RECLAIM_PROACTIVE:
https://elixir.bootlin.com/linux/v6.1-rc7/source/mm/memcontrol.c#L6586

Which in turn sets sc->proactive:
https://elixir.bootlin.com/linux/v6.1-rc7/source/mm/vmscan.c#L6743

In my patch I only allow falling back to reclaim from top tier nodes
if !sc->proactive.
I was in the process of sending a v2 with the comment fix btw, but I'll hold back on that since it seems you already merged the patch to unstable. Thanks! If I end up sending another version of the patch it should come with the comment fix.
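[Editor's note] The userspace side of the proactive reclaim interface discussed above can be sketched as a small script. The cgroup path below is a hypothetical placeholder, and the script requires a cgroup v2 hierarchy plus write permission on memory.reclaim; it degrades gracefully when those are absent:

```shell
#!/bin/sh
# Sketch of triggering proactive reclaim from userspace.
# CG is a hypothetical cgroup v2 path -- substitute a real one.
CG=${CG:-/sys/fs/cgroup/example}

if [ -w "$CG/memory.reclaim" ]; then
    # Ask the kernel to proactively reclaim 1 MiB from this cgroup,
    # independent of any actual memory pressure. This write path sets
    # sc->proactive, so with the patch applied the reclaim pass will
    # demote from top-tier nodes but not reclaim from them directly.
    echo "1m" > "$CG/memory.reclaim"
    echo "requested 1m of proactive reclaim from $CG"
else
    echo "memory.reclaim not writable at $CG (need cgroup v2 + privileges)"
fi
```

The write succeeds only if the kernel manages to reclaim the full requested amount; otherwise it returns EAGAIN.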
On Thu, Dec 1, 2022 at 3:33 PM Mina Almasry <almasrymina@google.com> wrote:
>
> Reclaiming directly from top tier nodes breaks the aging pipeline of
> memory tiers. If we have a RAM -> CXL -> storage hierarchy, we
> should demote from RAM to CXL and from CXL to storage. If we reclaim
> a page from RAM, it means we 'demote' it directly from RAM to storage,
> bypassing potentially a huge amount of pages colder than it in CXL.
>
> [...]

Makes sense to me.

Reviewed-by: Yang Shi <shy828301@gmail.com>
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 23fc5b523764..6eb130e57920 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2088,10 +2088,31 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		nr_reclaimed += demote_folio_list(&demote_folios, pgdat);
 		/* Folios that could not be demoted are still in @demote_folios */
 		if (!list_empty(&demote_folios)) {
-			/* Folios which weren't demoted go back on @folio_list for retry: */
+			/*
+			 * Folios which weren't demoted go back on @folio_list.
+			 */
 			list_splice_init(&demote_folios, folio_list);
-			do_demote_pass = false;
-			goto retry;
+
+			/*
+			 * goto retry to reclaim the undemoted folios in folio_list if
+			 * desired.
+			 *
+			 * Reclaiming directly from top tier nodes is not often desired
+			 * due to it breaking the LRU ordering: in general memory
+			 * should be reclaimed from lower tier nodes and demoted from
+			 * top tier nodes.
+			 *
+			 * However, disabling reclaim from top tier nodes entirely
+			 * would cause ooms in edge scenarios where lower tier memory
+			 * is unreclaimable for whatever reason, eg memory being
+			 * mlocked or too hot to reclaim. We can disable reclaim
+			 * from top tier nodes in proactive reclaim though as that is
+			 * not real memory pressure.
+			 */
+			if (!sc->proactive) {
+				do_demote_pass = false;
+				goto retry;
+			}
 		}

 		pgactivate = stat->nr_activate[0] + stat->nr_activate[1];