Message ID | 20231003231422.4046187-1-nphamcs@gmail.com |
---|---|
State | New |
Series | memcontrol: only transfer the memcg data for migration |
Commit Message
Nhat Pham
Oct. 3, 2023, 11:14 p.m. UTC
For most migration use cases, only transfer the memcg data from the old
folio to the new folio, and clear the old folio's memcg data. No
charging and uncharging will be done. These use cases include the new
hugetlb memcg accounting behavior (which was not previously handled).
This shaves off some work on the migration path, and avoids the
temporary double charging of a folio during its migration.
The only exception is replace_page_cache_folio(), which will use the old
mem_cgroup_migrate() (now renamed to mem_cgroup_replace_folio). In that
context, the isolation of the old page isn't quite as thorough as with
migration, so we cannot use our new implementation directly.
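To make the distinction concrete, here is a minimal sketch of the two paths after this change. It paraphrases the mm/memcontrol.c changes in the diff below and is not the exact kernel code: the replace path charges @new and leaves @old to be uncharged when it is freed (hence the transient double charge), while the migrate path only moves the memcg pointer and css reference, so the memory counters are never touched.

/* Sketch only -- see the mm/memcontrol.c hunks in the diff below. */

/* Page-cache replacement: charge @new; @old stays charged until freed. */
void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
{
	struct mem_cgroup *memcg = folio_memcg(old);

	if (mem_cgroup_disabled() || folio_memcg(new) || !memcg)
		return;

	/* Counters briefly cover both folios: the "double charge". */
	page_counter_charge(&memcg->memory, folio_nr_pages(new));
	css_get(&memcg->css);
	commit_charge(new, memcg);
}

/* Migration: transfer the existing charge; no counter updates at all. */
void mem_cgroup_migrate(struct folio *old, struct folio *new)
{
	struct mem_cgroup *memcg = folio_memcg(old);

	if (mem_cgroup_disabled() || !memcg)
		return;

	commit_charge(new, memcg);	/* @new now owns the charge and css ref */
	old->memcg_data = 0;		/* @old is no longer charged            */
}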
This patch is the result of the following discussion on the new hugetlb
memcg accounting behavior:
https://lore.kernel.org/lkml/20231003171329.GB314430@monkey/
Reported-by: Mike Kravetz <mike.kravetz@oracle.com>
Closes: https://lore.kernel.org/lkml/20231003171329.GB314430@monkey/
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
include/linux/memcontrol.h |  7 ++++++
mm/filemap.c               |  2 +-
mm/memcontrol.c            | 45 +++++++++++++++++++++++++++++++++++---
mm/migrate.c               |  3 +--
4 files changed, 51 insertions(+), 6 deletions(-)
Comments
On Tue, Oct 3, 2023 at 4:14 PM Nhat Pham <nphamcs@gmail.com> wrote:
> For most migration use cases, only transfer the memcg data from the old
> folio to the new folio, and clear the old folio's memcg data. No
> charging and uncharging will be done. These use cases include the new
> hugetlb memcg accounting behavior (which was not previously handled).
[...]

Does this patch fit before or after your series? In both cases I think
there might be a problem for bisectability.
On Tue, Oct 3, 2023 at 4:22 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> Does this patch fit before or after your series? In both cases I think
> there might be a problem for bisectability.

Hmm, my intention for this patch is as a fixlet (i.e. it should eventually
be squashed into the second patch of that series). I just included the
extra context on the fixlet for review purposes.

My apologies - should have been much clearer.
(Perhaps I should just send out v4 at this point?)
On Tue, Oct 3, 2023 at 4:31 PM Nhat Pham <nphamcs@gmail.com> wrote:
> Hmm, my intention for this patch is as a fixlet (i.e. it should eventually
> be squashed into the second patch of that series). I just included the
> extra context on the fixlet for review purposes.
>
> My apologies - should have been much clearer.
> (Perhaps I should just send out v4 at this point?)

It's really up to Andrew, just make it clear what the intention is. Thanks!
On Tue, Oct 3, 2023 at 4:14 PM Nhat Pham <nphamcs@gmail.com> wrote:
> This patch is the result of the following discussion on the new hugetlb
> memcg accounting behavior:
>
> https://lore.kernel.org/lkml/20231003171329.GB314430@monkey/

For Andrew:

This fixes the following patch (which has not been merged to stable):
https://lore.kernel.org/lkml/20231003001828.2554080-3-nphamcs@gmail.com/
and should ideally be squashed into it.

My apologies for not making it clear in the changelog. Let me know if
you'd like to see a new version instead of this fixlet.
On Tue, Oct 3, 2023 at 4:54 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> It's really up to Andrew, just make it clear what the intention is.

Thanks for reminding me! That was my oversight.
On Tue, Oct 03, 2023 at 04:14:22PM -0700, Nhat Pham wrote:
> For most migration use cases, only transfer the memcg data from the old
> folio to the new folio, and clear the old folio's memcg data. No
> charging and uncharging will be done. These use cases include the new
> hugetlb memcg accounting behavior (which was not previously handled).
[...]

For squashing, the patch title should be:

  hugetlb: memcg: account hugetlb-backed memory in memory controller fix

However, I think this should actually be split out. It changes how all
pages are cgroup-migrated, which is a bit too large of a side effect for
the hugetlb accounting patch itself. Especially because the reasoning
outlined above will get lost once this fixup is folded.

IOW, send one prep patch, to go before the series, which splits out
mem_cgroup_replace_folio() and does the mem_cgroup_migrate() optimization
with the above explanation. Then send a fixlet for the hugetlb accounting
patch that removes the !hugetlb conditional for the mem_cgroup_migrate()
call.

If you're clear in the queueing instructions for both patches, Andrew can
probably do it in-place without having to resend everything :)

> +	memcg = folio_memcg(old);
> +	/*
> +	 * Note that it is normal to see !memcg for a hugetlb folio.
> +	 * It could have been allocated when memory_hugetlb_accounting was not
> +	 * selected, for e.g.

Is that sentence truncated?

> +	 */
> +	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
> +	if (!memcg)
> +		return;

If this is expected to happen, it shouldn't warn:

	VM_WARN_ON_ONCE(!folio_test_hugetlb(old) && !memcg, old);
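One possible way to fold that suggestion into the new helper, shown here as a sketch and not part of the posted patch, assuming the warning should fire only for non-hugetlb folios (hugetlb folios may legitimately be uncharged when memory_hugetlb_accounting is not selected):

	memcg = folio_memcg(old);
	/*
	 * A hugetlb folio may legitimately have no memcg: it could have
	 * been allocated while memory_hugetlb_accounting was not selected.
	 * Only warn when a non-hugetlb folio shows up uncharged.
	 */
	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(old) && !memcg, old);
	if (!memcg)
		return;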
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a827e2129790..e3eaa123256b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -711,6 +711,8 @@ static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
 
 void mem_cgroup_cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages);
 
+void mem_cgroup_replace_folio(struct folio *old, struct folio *new);
+
 void mem_cgroup_migrate(struct folio *old, struct folio *new);
 
 /**
@@ -1294,6 +1296,11 @@ static inline void mem_cgroup_cancel_charge(struct mem_cgroup *memcg,
 {
 }
 
+static inline void mem_cgroup_replace_folio(struct folio *old,
+		struct folio *new)
+{
+}
+
 static inline void mem_cgroup_migrate(struct folio *old, struct folio *new)
 {
 }
diff --git a/mm/filemap.c b/mm/filemap.c
index 9481ffaf24e6..673745219c82 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -819,7 +819,7 @@ void replace_page_cache_folio(struct folio *old, struct folio *new)
 	new->mapping = mapping;
 	new->index = offset;
 
-	mem_cgroup_migrate(old, new);
+	mem_cgroup_replace_folio(old, new);
 
 	xas_lock_irq(&xas);
 	xas_store(&xas, new);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6660684f6f97..cbaa26605b3d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7316,16 +7316,17 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list)
 }
 
 /**
- * mem_cgroup_migrate - Charge a folio's replacement.
+ * mem_cgroup_replace_folio - Charge a folio's replacement.
  * @old: Currently circulating folio.
  * @new: Replacement folio.
  *
  * Charge @new as a replacement folio for @old. @old will
- * be uncharged upon free.
+ * be uncharged upon free. This is only used by the page cache
+ * (in replace_page_cache_folio()).
  *
  * Both folios must be locked, @new->mapping must be set up.
  */
-void mem_cgroup_migrate(struct folio *old, struct folio *new)
+void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
 {
 	struct mem_cgroup *memcg;
 	long nr_pages = folio_nr_pages(new);
@@ -7364,6 +7365,44 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	local_irq_restore(flags);
 }
 
+/**
+ * mem_cgroup_migrate - Transfer the memcg data from the old to the new folio.
+ * @old: Currently circulating folio.
+ * @new: Replacement folio.
+ *
+ * Transfer the memcg data from the old folio to the new folio for migration.
+ * The old folio's data info will be cleared. Note that the memory counters
+ * will remain unchanged throughout the process.
+ *
+ * Both folios must be locked, @new->mapping must be set up.
+ */
+void mem_cgroup_migrate(struct folio *old, struct folio *new)
+{
+	struct mem_cgroup *memcg;
+
+	VM_BUG_ON_FOLIO(!folio_test_locked(old), old);
+	VM_BUG_ON_FOLIO(!folio_test_locked(new), new);
+	VM_BUG_ON_FOLIO(folio_test_anon(old) != folio_test_anon(new), new);
+	VM_BUG_ON_FOLIO(folio_nr_pages(old) != folio_nr_pages(new), new);
+
+	if (mem_cgroup_disabled())
+		return;
+
+	memcg = folio_memcg(old);
+	/*
+	 * Note that it is normal to see !memcg for a hugetlb folio.
+	 * It could have been allocated when memory_hugetlb_accounting was not
+	 * selected, for e.g.
+	 */
+	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
+	if (!memcg)
+		return;
+
+	/* Transfer the charge and the css ref */
+	commit_charge(new, memcg);
+	old->memcg_data = 0;
+}
+
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
 EXPORT_SYMBOL(memcg_sockets_enabled_key);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 7d1804c4a5d9..6034c7ed1d65 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -633,8 +633,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 
 	folio_copy_owner(newfolio, folio);
 
-	if (!folio_test_hugetlb(folio))
-		mem_cgroup_migrate(folio, newfolio);
+	mem_cgroup_migrate(folio, newfolio);
 }
 EXPORT_SYMBOL(folio_migrate_flags);
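As a usage-level illustration of the kernel-doc promise that the memory counters stay unchanged, here is a hypothetical debug check around the new call; the helper name and placement are illustrative only and are not part of the patch:

#include <linux/memcontrol.h>

/*
 * Hypothetical illustration, not kernel code: after mem_cgroup_migrate(),
 * the charge has moved wholesale from @old to @new, and no page counter
 * was charged or uncharged along the way.
 */
static void check_migrate_transfer(struct folio *old, struct folio *new)
{
	struct mem_cgroup *memcg = folio_memcg(old);

	mem_cgroup_migrate(old, new);

	WARN_ON(folio_memcg(new) != memcg);	/* @new now owns the charge */
	WARN_ON(folio_memcg(old) != NULL);	/* @old has been cleared    */
}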