Message ID | 20231129032154.3710765-6-yosryahmed@google.com |
---|---|
State | New |
Headers |
Date: Wed, 29 Nov 2023 03:21:53 +0000
Subject: [mm-unstable v4 5/5] mm: memcg: restore subtree stats flushing
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>, Roman Gushchin <roman.gushchin@linux.dev>, Shakeel Butt <shakeelb@google.com>, Muchun Song <muchun.song@linux.dev>, Ivan Babrou <ivan@cloudflare.com>, Tejun Heo <tj@kernel.org>, Michal Koutný <mkoutny@suse.com>, Waiman Long <longman@redhat.com>, kernel-team@cloudflare.com, Wei Xu <weixugc@google.com>, Greg Thelen <gthelen@google.com>, Domenico Cerasuolo <cerasuolodomenico@gmail.com>, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed <yosryahmed@google.com>
Message-ID: <20231129032154.3710765-6-yosryahmed@google.com>
In-Reply-To: <20231129032154.3710765-1-yosryahmed@google.com>
References: <20231129032154.3710765-1-yosryahmed@google.com> |
Series |
mm: memcg: subtree stats flushing and thresholds
|
|
Commit Message
Yosry Ahmed
Nov. 29, 2023, 3:21 a.m. UTC
Stats flushing for memcg currently follows these rules:
- Always flush the entire memcg hierarchy (i.e. flush the root).
- Only one flusher is allowed at a time. If someone else tries to flush
  concurrently, they skip and return immediately.
- A periodic flusher flushes all the stats every 2 seconds.

This approach is used because all flushes are serialized by a global rstat
spinlock. On the memcg side, flushing is invoked from userspace reads as well
as in-kernel flushers (e.g. reclaim, refault, etc). This approach aims to
avoid serializing all flushers on the global lock, which can cause a
significant performance hit under high concurrency.

This approach has the following problems:
- Occasionally a userspace read of the stats of a non-root cgroup will be
  too expensive as it has to flush the entire hierarchy [1].
- Sometimes stats accuracy is compromised if there is an ongoing flush, and
  we skip and return before the subtree of interest is actually flushed,
  yielding stale stats (by up to 2s due to periodic flushing). This is more
  visible when reading stats from userspace, but can also affect in-kernel
  flushers.

The latter problem is particularly a concern when userspace reads stats after
an event occurs, but gets stats from before the event. Examples:
- When memory usage / pressure spikes, a userspace OOM handler may look at
  the stats of different memcgs to select a victim based on various
  heuristics (e.g. how much private memory will be freed by killing this).
  Reading stale stats from before the usage spike in this case may cause a
  wrongful OOM kill.
- A proactive reclaimer may read the stats after writing to memory.reclaim
  to measure the success of the reclaim operation. Stale stats from before
  reclaim may give a false negative.
- Reading the stats of a parent and a child memcg may be inconsistent (child
  larger than parent) if the flush doesn't happen when the parent is read,
  but happens when the child is read.

As for in-kernel flushers, they will occasionally get stale stats. No
regressions are currently known from this, but if there are regressions,
they would be very difficult to debug and link to the source of the problem.

This patch aims to fix these problems by restoring subtree flushing, and
removing the unified/coalesced flushing logic that skips flushing if there is
an ongoing flush. With global stats flushing thresholds this change would
introduce a significant regression, but with per-memcg thresholds it performs
really well: the thresholds protect the underlying lock from unnecessary
contention.

Add a mutex to protect the underlying rstat lock from excessive memcg
flushing. The thresholds are re-checked after the mutex is grabbed to make
sure that a concurrent flush did not already get the subtree we are trying to
flush. A call to cgroup_rstat_flush() is not cheap, even if there are no
pending updates.

This patch was tested in two ways to ensure flushing latency is acceptable,
on a machine with 384 cpus:
- A synthetic test with 5000 concurrent workers in 500 cgroups doing
  allocations and reclaim, as well as 1000 readers for memory.stat
  (variation of [2]). No regressions were noticed in the total runtime.
  Note that significant regressions in this test are observed with global
  stats thresholds, but not with per-memcg thresholds.
- A synthetic stress test for concurrently reading memcg stats while memory
  allocation/freeing workers are running in the background, provided by
  Wei Xu [3]. With 250k threads reading the stats every 100ms in 50k
  cgroups, 99.9% of reads take <= 50us. Less than 0.01% of reads take more
  than 1ms, and no reads take more than 100ms.

[1] https://lore.kernel.org/lkml/CABWYdi0c6__rh-K7dcM_pkf9BJdTRtAU08M43KO9ME4-dsgfoQ@mail.gmail.com/
[2] https://lore.kernel.org/lkml/CAJD7tka13M-zVZTyQJYL1iUAYvuQ1fcHbCjcOBZcz6POYTV-4g@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CAAPL-u9D2b=iF5Lf_cRnKxUfkiEe0AMDTu6yhrUAzX0b6a6rDg@mail.gmail.com/

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Tested-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
---
 include/linux/memcontrol.h |  8 ++--
 mm/memcontrol.c            | 75 +++++++++++++++++++++++---------------
 mm/vmscan.c                |  2 +-
 mm/workingset.c            | 10 +++--
 4 files changed, 58 insertions(+), 37 deletions(-)
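The core of the restored subtree flushing boils down to the following simplified sketch of the code in the diff at the bottom of this page; memcg_should_flush_stats() is the per-memcg update-threshold check from earlier patches in this series, and do_flush_stats() ends up calling cgroup_rstat_flush(memcg->css.cgroup):

/* Simplified sketch of the new flushing path (see the full diff below). */
void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
{
	static DEFINE_MUTEX(memcg_stats_flush_mutex);

	if (mem_cgroup_disabled())
		return;

	if (!memcg)
		memcg = root_mem_cgroup;	/* NULL means "flush the whole hierarchy" */

	/* Cheap check first: skip if the pending updates delta is small. */
	if (memcg_should_flush_stats(memcg)) {
		mutex_lock(&memcg_stats_flush_mutex);
		/* Re-check: a concurrent flush may have already covered this subtree. */
		if (memcg_should_flush_stats(memcg))
			do_flush_stats(memcg);
		mutex_unlock(&memcg_stats_flush_mutex);
	}
}

Note that the mutex ends up being dropped in a follow-up fixlet; see the discussion below.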
Comments
On Wed, Nov 29, 2023 at 03:21:53AM +0000, Yosry Ahmed wrote: > Stats flushing for memcg currently follows the following rules: > - Always flush the entire memcg hierarchy (i.e. flush the root). > - Only one flusher is allowed at a time. If someone else tries to flush > concurrently, they skip and return immediately. > - A periodic flusher flushes all the stats every 2 seconds. > > The reason this approach is followed is because all flushes are > serialized by a global rstat spinlock. On the memcg side, flushing is > invoked from userspace reads as well as in-kernel flushers (e.g. > reclaim, refault, etc). This approach aims to avoid serializing all > flushers on the global lock, which can cause a significant performance > hit under high concurrency. > > This approach has the following problems: > - Occasionally a userspace read of the stats of a non-root cgroup will > be too expensive as it has to flush the entire hierarchy [1]. > - Sometimes the stats accuracy are compromised if there is an ongoing > flush, and we skip and return before the subtree of interest is > actually flushed, yielding stale stats (by up to 2s due to periodic > flushing). This is more visible when reading stats from userspace, > but can also affect in-kernel flushers. > > The latter problem is particulary a concern when userspace reads stats > after an event occurs, but gets stats from before the event. Examples: > - When memory usage / pressure spikes, a userspace OOM handler may look > at the stats of different memcgs to select a victim based on various > heuristics (e.g. how much private memory will be freed by killing > this). Reading stale stats from before the usage spike in this case > may cause a wrongful OOM kill. > - A proactive reclaimer may read the stats after writing to > memory.reclaim to measure the success of the reclaim operation. Stale > stats from before reclaim may give a false negative. > - Reading the stats of a parent and a child memcg may be inconsistent > (child larger than parent), if the flush doesn't happen when the > parent is read, but happens when the child is read. > > As for in-kernel flushers, they will occasionally get stale stats. No > regressions are currently known from this, but if there are regressions, > they would be very difficult to debug and link to the source of the > problem. > > This patch aims to fix these problems by restoring subtree flushing, > and removing the unified/coalesced flushing logic that skips flushing if > there is an ongoing flush. This change would introduce a significant > regression with global stats flushing thresholds. With per-memcg stats > flushing thresholds, this seems to perform really well. The thresholds > protect the underlying lock from unnecessary contention. > > Add a mutex to protect the underlying rstat lock from excessive memcg > flushing. The thresholds are re-checked after the mutex is grabbed to > make sure that a concurrent flush did not already get the subtree we are > trying to flush. A call to cgroup_rstat_flush() is not cheap, even if > there are no pending updates. > > This patch was tested in two ways to ensure the latency of flushing is > up to bar, on a machine with 384 cpus: > - A synthetic test with 5000 concurrent workers in 500 cgroups doing > allocations and reclaim, as well as 1000 readers for memory.stat > (variation of [2]). No regressions were noticed in the total runtime. > Note that significant regressions in this test are observed with > global stats thresholds, but not with per-memcg thresholds. 
> > - A synthetic stress test for concurrently reading memcg stats while > memory allocation/freeing workers are running in the background, > provided by Wei Xu [3]. With 250k threads reading the stats every > 100ms in 50k cgroups, 99.9% of reads take <= 50us. Less than 0.01% > of reads take more than 1ms, and no reads take more than 100ms. > > [1] https://lore.kernel.org/lkml/CABWYdi0c6__rh-K7dcM_pkf9BJdTRtAU08M43KO9ME4-dsgfoQ@mail.gmail.com/ > [2] https://lore.kernel.org/lkml/CAJD7tka13M-zVZTyQJYL1iUAYvuQ1fcHbCjcOBZcz6POYTV-4g@mail.gmail.com/ > [3] https://lore.kernel.org/lkml/CAAPL-u9D2b=iF5Lf_cRnKxUfkiEe0AMDTu6yhrUAzX0b6a6rDg@mail.gmail.com/ > > Signed-off-by: Yosry Ahmed <yosryahmed@google.com> > Tested-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com> > --- > include/linux/memcontrol.h | 8 ++-- > mm/memcontrol.c | 75 +++++++++++++++++++++++--------------- > mm/vmscan.c | 2 +- > mm/workingset.c | 10 +++-- > 4 files changed, 58 insertions(+), 37 deletions(-) > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h > index a568f70a26774..8673140683e6e 100644 > --- a/include/linux/memcontrol.h > +++ b/include/linux/memcontrol.h > @@ -1050,8 +1050,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec, > return x; > } > > -void mem_cgroup_flush_stats(void); > -void mem_cgroup_flush_stats_ratelimited(void); > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg); > +void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg); > > void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx, > int val); > @@ -1566,11 +1566,11 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec, > return node_page_state(lruvec_pgdat(lruvec), idx); > } > > -static inline void mem_cgroup_flush_stats(void) > +static inline void mem_cgroup_flush_stats(struct mem_cgroup *memcg) > { > } > > -static inline void mem_cgroup_flush_stats_ratelimited(void) > +static inline void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg) > { > } > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c > index 93b483b379aa1..5d300318bf18a 100644 > --- a/mm/memcontrol.c > +++ b/mm/memcontrol.c > @@ -670,7 +670,6 @@ struct memcg_vmstats { > */ > static void flush_memcg_stats_dwork(struct work_struct *w); > static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork); > -static atomic_t stats_flush_ongoing = ATOMIC_INIT(0); > static u64 flush_last_time; > > #define FLUSH_TIME (2UL*HZ) > @@ -731,35 +730,47 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val) > } > } > > -static void do_flush_stats(void) > +static void do_flush_stats(struct mem_cgroup *memcg) > { > - /* > - * We always flush the entire tree, so concurrent flushers can just > - * skip. This avoids a thundering herd problem on the rstat global lock > - * from memcg flushers (e.g. reclaim, refault, etc). > - */ > - if (atomic_read(&stats_flush_ongoing) || > - atomic_xchg(&stats_flush_ongoing, 1)) > - return; > - > - WRITE_ONCE(flush_last_time, jiffies_64); > - > - cgroup_rstat_flush(root_mem_cgroup->css.cgroup); > + if (mem_cgroup_is_root(memcg)) > + WRITE_ONCE(flush_last_time, jiffies_64); > > - atomic_set(&stats_flush_ongoing, 0); > + cgroup_rstat_flush(memcg->css.cgroup); > } > > -void mem_cgroup_flush_stats(void) > +/* > + * mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree > + * @memcg: root of the subtree to flush > + * > + * Flushing is serialized by the underlying global rstat lock. 
There is also a > + * minimum amount of work to be done even if there are no stat updates to flush. > + * Hence, we only flush the stats if the updates delta exceeds a threshold. This > + * avoids unnecessary work and contention on the underlying lock. > + */ What is global rstat lock? > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg) > { > - if (memcg_should_flush_stats(root_mem_cgroup)) > - do_flush_stats(); > + static DEFINE_MUTEX(memcg_stats_flush_mutex); > + > + if (mem_cgroup_disabled()) > + return; > + > + if (!memcg) > + memcg = root_mem_cgroup; > + > + if (memcg_should_flush_stats(memcg)) { > + mutex_lock(&memcg_stats_flush_mutex); > + /* Check again after locking, another flush may have occurred */ > + if (memcg_should_flush_stats(memcg)) > + do_flush_stats(memcg); > + mutex_unlock(&memcg_stats_flush_mutex); > + } > } > > -void mem_cgroup_flush_stats_ratelimited(void) > +void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg) > { > /* Only flush if the periodic flusher is one full cycle late */ > if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME)) > - mem_cgroup_flush_stats(); > + mem_cgroup_flush_stats(memcg); > } > > static void flush_memcg_stats_dwork(struct work_struct *w) > @@ -768,7 +779,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w) > * Deliberately ignore memcg_should_flush_stats() here so that flushing > * in latency-sensitive paths is as cheap as possible. > */ > - do_flush_stats(); > + do_flush_stats(root_mem_cgroup); > queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME); > } > > @@ -1664,7 +1675,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s) > * > * Current memory state: > */ > - mem_cgroup_flush_stats(); > + mem_cgroup_flush_stats(memcg); > > for (i = 0; i < ARRAY_SIZE(memory_stats); i++) { > u64 size; > @@ -4214,7 +4225,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v) > int nid; > struct mem_cgroup *memcg = mem_cgroup_from_seq(m); > > - mem_cgroup_flush_stats(); > + mem_cgroup_flush_stats(memcg); > > for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) { > seq_printf(m, "%s=%lu", stat->name, > @@ -4295,7 +4306,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s) > > BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats)); > > - mem_cgroup_flush_stats(); > + mem_cgroup_flush_stats(memcg); > > for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) { > unsigned long nr; > @@ -4791,7 +4802,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, > struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css); > struct mem_cgroup *parent; > > - mem_cgroup_flush_stats(); > + mem_cgroup_flush_stats(memcg); > > *pdirty = memcg_page_state(memcg, NR_FILE_DIRTY); > *pwriteback = memcg_page_state(memcg, NR_WRITEBACK); > @@ -6886,7 +6897,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v) > int i; > struct mem_cgroup *memcg = mem_cgroup_from_seq(m); > > - mem_cgroup_flush_stats(); > + mem_cgroup_flush_stats(memcg); > > for (i = 0; i < ARRAY_SIZE(memory_stats); i++) { > int nid; > @@ -8125,7 +8136,11 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg) > break; > } > > - cgroup_rstat_flush(memcg->css.cgroup); > + /* > + * mem_cgroup_flush_stats() ignores small changes. Use > + * do_flush_stats() directly to get accurate stats for charging. 
> + */ > + do_flush_stats(memcg); > pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE; > if (pages < max) > continue; > @@ -8190,8 +8205,10 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size) > static u64 zswap_current_read(struct cgroup_subsys_state *css, > struct cftype *cft) > { > - cgroup_rstat_flush(css->cgroup); > - return memcg_page_state(mem_cgroup_from_css(css), MEMCG_ZSWAP_B); > + struct mem_cgroup *memcg = mem_cgroup_from_css(css); > + > + mem_cgroup_flush_stats(memcg); > + return memcg_page_state(memcg, MEMCG_ZSWAP_B); > } > > static int zswap_max_show(struct seq_file *m, void *v) > diff --git a/mm/vmscan.c b/mm/vmscan.c > index d8c3338fee0fb..0b8a0107d58d8 100644 > --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -2250,7 +2250,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc) > * Flush the memory cgroup stats, so that we read accurate per-memcg > * lruvec stats for heuristics. > */ > - mem_cgroup_flush_stats(); > + mem_cgroup_flush_stats(sc->target_mem_cgroup); > > /* > * Determine the scan balance between anon and file LRUs. > diff --git a/mm/workingset.c b/mm/workingset.c > index dce41577a49d2..7d3dacab8451a 100644 > --- a/mm/workingset.c > +++ b/mm/workingset.c > @@ -464,8 +464,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset) > > rcu_read_unlock(); > > - /* Flush stats (and potentially sleep) outside the RCU read section */ > - mem_cgroup_flush_stats_ratelimited(); > + /* > + * Flush stats (and potentially sleep) outside the RCU read section. > + * XXX: With per-memcg flushing and thresholding, is ratelimiting > + * still needed here? > + */ > + mem_cgroup_flush_stats_ratelimited(eviction_memcg); What if flushing is not rate-limited (e.g. above line is commented)? > > eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat); > refault = atomic_long_read(&eviction_lruvec->nonresident_age); > @@ -676,7 +680,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker, > struct lruvec *lruvec; > int i; > > - mem_cgroup_flush_stats(); > + mem_cgroup_flush_stats(sc->memcg); > lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid)); > for (pages = 0, i = 0; i < NR_LRU_LISTS; i++) > pages += lruvec_page_state_local(lruvec, Confused...
On 12/1/23 20:57, Bagas Sanjaya wrote: >> -void mem_cgroup_flush_stats(void) >> +/* >> + * mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree >> + * @memcg: root of the subtree to flush >> + * >> + * Flushing is serialized by the underlying global rstat lock. There is also a >> + * minimum amount of work to be done even if there are no stat updates to flush. >> + * Hence, we only flush the stats if the updates delta exceeds a threshold. This >> + * avoids unnecessary work and contention on the underlying lock. >> + */ > What is global rstat lock? It is the cgroup_rstat_lock in kernel/cgroup/rstat.c. Cheers, Longman
On 12/2/23 09:56, Waiman Long wrote: > > On 12/1/23 20:57, Bagas Sanjaya wrote: >>> -void mem_cgroup_flush_stats(void) >>> +/* >>> + * mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree >>> + * @memcg: root of the subtree to flush >>> + * >>> + * Flushing is serialized by the underlying global rstat lock. There is also a >>> + * minimum amount of work to be done even if there are no stat updates to flush. >>> + * Hence, we only flush the stats if the updates delta exceeds a threshold. This >>> + * avoids unnecessary work and contention on the underlying lock. >>> + */ >> What is global rstat lock? > > It is the cgroup_rstat_lock in kernel/cgroup/rstat.c. > OK, I see that. Thanks!
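For readers unfamiliar with rstat, the lock in question is declared in kernel/cgroup/rstat.c; around the time of this series it looks roughly like the following (abridged, and the exact form may differ between kernel versions):

/* kernel/cgroup/rstat.c (abridged): the global rstat lock that serializes
 * flushes, plus the per-CPU locks protecting the per-CPU update trees. */
static DEFINE_SPINLOCK(cgroup_rstat_lock);
static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);

cgroup_rstat_flush() takes cgroup_rstat_lock and may drop it (and cond_resched()) between per-CPU iterations, which is what the later discussion about lock dropping refers to.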
On Wed, Nov 29, 2023 at 03:21:53AM +0000, Yosry Ahmed wrote: [...] > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg) > { > - if (memcg_should_flush_stats(root_mem_cgroup)) > - do_flush_stats(); > + static DEFINE_MUTEX(memcg_stats_flush_mutex); > + > + if (mem_cgroup_disabled()) > + return; > + > + if (!memcg) > + memcg = root_mem_cgroup; > + > + if (memcg_should_flush_stats(memcg)) { > + mutex_lock(&memcg_stats_flush_mutex); What's the point of this mutex now? What is it providing? I understand we can not try_lock here due to targeted flushing. Why not just let the global rstat serialize the flushes? Actually this mutex can cause latency hiccups as the mutex owner can get resched during flush and then no one can flush for a potentially long time. > + /* Check again after locking, another flush may have occurred */ > + if (memcg_should_flush_stats(memcg)) > + do_flush_stats(memcg); > + mutex_unlock(&memcg_stats_flush_mutex); > + } > }
[..] > > diff --git a/mm/workingset.c b/mm/workingset.c > > index dce41577a49d2..7d3dacab8451a 100644 > > --- a/mm/workingset.c > > +++ b/mm/workingset.c > > @@ -464,8 +464,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset) > > > > rcu_read_unlock(); > > > > - /* Flush stats (and potentially sleep) outside the RCU read section */ > > - mem_cgroup_flush_stats_ratelimited(); > > + /* > > + * Flush stats (and potentially sleep) outside the RCU read section. > > + * XXX: With per-memcg flushing and thresholding, is ratelimiting > > + * still needed here? > > + */ > > + mem_cgroup_flush_stats_ratelimited(eviction_memcg); > > What if flushing is not rate-limited (e.g. above line is commented)? > Hmm I think I might be misunderstanding the question. The call to mem_cgroup_flush_stats_ratelimited() does not ratelimit other flushers, it is rather a flush call that is itself ratelimited. IOW, it may or may not flush based on when was the last time someone else flushed. This was introduced because flushing in the fault path was expensive in some cases, so we wanted to avoid flushing if someone else recently did a flush, as we don't expect a lot of pending changes in this case. However, that was when flushing was always on the root level. Now that we are flushing on the memcg level, it may no longer be needed as: - The flush is more scoped, there should be less work to do. - There is a per-memcg threshold now such that we only flush when there are pending updates in this memcg. This is why I added a comment that the ratelimited flush here may no longer be needed. I didn't want to investigate this as part of this series, especially that I do not have a reproducer for the fault latency introduced by the flush before ratelimiting. Hence, I am leaving the comment such that people know that this ratelimiting may no longer be needed with this patch. > > > > eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat); > > refault = atomic_long_read(&eviction_lruvec->nonresident_age); > > @@ -676,7 +680,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker, > > struct lruvec *lruvec; > > int i; > > > > - mem_cgroup_flush_stats(); > > + mem_cgroup_flush_stats(sc->memcg); > > lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid)); > > for (pages = 0, i = 0; i < NR_LRU_LISTS; i++) > > pages += lruvec_page_state_local(lruvec, > > Confused... Which part is confusing? The call to mem_cgroup_flush_stats() now receives a memcg argument as flushing is scoped to that memcg only to avoid doing unnecessary work to flush other memcgs with global flushing.
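The ratelimiting being discussed is the one in the patch itself: the ratelimited variant only flushes if the periodic flusher (which runs every FLUSH_TIME, i.e. 2 seconds) is a full cycle late:

void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
{
	/* Only flush if the periodic flusher is one full cycle late */
	if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
		mem_cgroup_flush_stats(memcg);
}

Since flush_last_time is only updated on root-level flushes (see do_flush_stats() in the diff), the call is skipped only when the whole tree was flushed recently.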
On Sat, Dec 2, 2023 at 12:31 AM Shakeel Butt <shakeelb@google.com> wrote: > > On Wed, Nov 29, 2023 at 03:21:53AM +0000, Yosry Ahmed wrote: > [...] > > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg) > > { > > - if (memcg_should_flush_stats(root_mem_cgroup)) > > - do_flush_stats(); > > + static DEFINE_MUTEX(memcg_stats_flush_mutex); > > + > > + if (mem_cgroup_disabled()) > > + return; > > + > > + if (!memcg) > > + memcg = root_mem_cgroup; > > + > > + if (memcg_should_flush_stats(memcg)) { > > + mutex_lock(&memcg_stats_flush_mutex); > > What's the point of this mutex now? What is it providing? I understand > we can not try_lock here due to targeted flushing. Why not just let the > global rstat serialize the flushes? Actually this mutex can cause > latency hiccups as the mutex owner can get resched during flush and then > no one can flush for a potentially long time. I was hoping this was clear from the commit message and code comments, but apparently I was wrong, sorry. Let me give more context. In previous versions and/or series, the mutex was only used with flushes from userspace to guard in-kernel flushers against high contention from userspace. Later on, I kept the mutex for all memcg flushers for the following reasons: (a) Allow waiters to sleep: Unlike other flushers, the memcg flushing path can see a lot of concurrency. The mutex avoids having a lot of CPUs spinning (e.g. concurrent reclaimers) by allowing waiters to sleep. (b) Check the threshold under lock but before calling cgroup_rstat_flush(): The calls to cgroup_rstat_flush() are not very cheap even if there's nothing to flush, as we still need to iterate all CPUs. If flushers contend directly on the rstat lock, overlapping flushes will unnecessarily do the percpu iteration once they hold the lock. With the mutex, they will check the threshold again once they hold the mutex. (c) Protect non-memcg flushers from contention from memcg flushers. This is not as strong of an argument as protecting in-kernel flushers from userspace flushers. There has been discussions before about changing the rstat lock itself to be a mutex, which would resolve (a), but there are concerns about priority inversions if a low priority task holds the mutex and gets preempted, as well as the amount of time the rstat lock holder keeps the lock for: https://lore.kernel.org/lkml/ZO48h7c9qwQxEPPA@slm.duckdns.org/ I agree about possible hiccups due to the inner lock being dropped while the mutex is held. Running a synthetic test with high concurrency between reclaimers (in-kernel flushers) and stats readers show no material performance difference with or without the mutex. Maybe things cancel out, or don't really matter in practice. I would prefer to keep the current code as I think (a) and (b) could cause problems in the future, and the current form of the code (with the mutex) has already seen mileage with production workloads.
On Mon, Dec 4, 2023 at 12:12 PM Yosry Ahmed <yosryahmed@google.com> wrote: > > On Sat, Dec 2, 2023 at 12:31 AM Shakeel Butt <shakeelb@google.com> wrote: > > > > On Wed, Nov 29, 2023 at 03:21:53AM +0000, Yosry Ahmed wrote: > > [...] > > > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg) > > > { > > > - if (memcg_should_flush_stats(root_mem_cgroup)) > > > - do_flush_stats(); > > > + static DEFINE_MUTEX(memcg_stats_flush_mutex); > > > + > > > + if (mem_cgroup_disabled()) > > > + return; > > > + > > > + if (!memcg) > > > + memcg = root_mem_cgroup; > > > + > > > + if (memcg_should_flush_stats(memcg)) { > > > + mutex_lock(&memcg_stats_flush_mutex); > > > > What's the point of this mutex now? What is it providing? I understand > > we can not try_lock here due to targeted flushing. Why not just let the > > global rstat serialize the flushes? Actually this mutex can cause > > latency hiccups as the mutex owner can get resched during flush and then > > no one can flush for a potentially long time. > > I was hoping this was clear from the commit message and code comments, > but apparently I was wrong, sorry. Let me give more context. > > In previous versions and/or series, the mutex was only used with > flushes from userspace to guard in-kernel flushers against high > contention from userspace. Later on, I kept the mutex for all memcg > flushers for the following reasons: > > (a) Allow waiters to sleep: > Unlike other flushers, the memcg flushing path can see a lot of > concurrency. The mutex avoids having a lot of CPUs spinning (e.g. > concurrent reclaimers) by allowing waiters to sleep. > > (b) Check the threshold under lock but before calling cgroup_rstat_flush(): > The calls to cgroup_rstat_flush() are not very cheap even if there's > nothing to flush, as we still need to iterate all CPUs. If flushers > contend directly on the rstat lock, overlapping flushes will > unnecessarily do the percpu iteration once they hold the lock. With > the mutex, they will check the threshold again once they hold the > mutex. > > (c) Protect non-memcg flushers from contention from memcg flushers. > This is not as strong of an argument as protecting in-kernel flushers > from userspace flushers. > > There has been discussions before about changing the rstat lock itself > to be a mutex, which would resolve (a), but there are concerns about > priority inversions if a low priority task holds the mutex and gets > preempted, as well as the amount of time the rstat lock holder keeps > the lock for: > https://lore.kernel.org/lkml/ZO48h7c9qwQxEPPA@slm.duckdns.org/ > > I agree about possible hiccups due to the inner lock being dropped > while the mutex is held. Running a synthetic test with high > concurrency between reclaimers (in-kernel flushers) and stats readers > show no material performance difference with or without the mutex. > Maybe things cancel out, or don't really matter in practice. > > I would prefer to keep the current code as I think (a) and (b) could > cause problems in the future, and the current form of the code (with > the mutex) has already seen mileage with production workloads. Correction: The priority inversion is possible on the memcg side due to the mutex in this patch. Also, for point (a), the spinners will eventually sleep once they hold the lock and hit the first CPU boundary -- because of the lock dropping and cond_resched(). So eventually, all spinners should be able to sleep, although it will be a while until they do. With the mutex, they all sleep from the beginning. 
Point (b) still holds though. I am slightly inclined to keep the mutex but I can send a small fixlet to remove it if others think otherwise. Shakeel, Wei, any preferences?
On Mon, Dec 4, 2023 at 1:38 PM Yosry Ahmed <yosryahmed@google.com> wrote: > > On Mon, Dec 4, 2023 at 12:12 PM Yosry Ahmed <yosryahmed@google.com> wrote: > > > > On Sat, Dec 2, 2023 at 12:31 AM Shakeel Butt <shakeelb@google.com> wrote: > > > > > > On Wed, Nov 29, 2023 at 03:21:53AM +0000, Yosry Ahmed wrote: > > > [...] > > > > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg) > > > > { > > > > - if (memcg_should_flush_stats(root_mem_cgroup)) > > > > - do_flush_stats(); > > > > + static DEFINE_MUTEX(memcg_stats_flush_mutex); > > > > + > > > > + if (mem_cgroup_disabled()) > > > > + return; > > > > + > > > > + if (!memcg) > > > > + memcg = root_mem_cgroup; > > > > + > > > > + if (memcg_should_flush_stats(memcg)) { > > > > + mutex_lock(&memcg_stats_flush_mutex); > > > > > > What's the point of this mutex now? What is it providing? I understand > > > we can not try_lock here due to targeted flushing. Why not just let the > > > global rstat serialize the flushes? Actually this mutex can cause > > > latency hiccups as the mutex owner can get resched during flush and then > > > no one can flush for a potentially long time. > > > > I was hoping this was clear from the commit message and code comments, > > but apparently I was wrong, sorry. Let me give more context. > > > > In previous versions and/or series, the mutex was only used with > > flushes from userspace to guard in-kernel flushers against high > > contention from userspace. Later on, I kept the mutex for all memcg > > flushers for the following reasons: > > > > (a) Allow waiters to sleep: > > Unlike other flushers, the memcg flushing path can see a lot of > > concurrency. The mutex avoids having a lot of CPUs spinning (e.g. > > concurrent reclaimers) by allowing waiters to sleep. > > > > (b) Check the threshold under lock but before calling cgroup_rstat_flush(): > > The calls to cgroup_rstat_flush() are not very cheap even if there's > > nothing to flush, as we still need to iterate all CPUs. If flushers > > contend directly on the rstat lock, overlapping flushes will > > unnecessarily do the percpu iteration once they hold the lock. With > > the mutex, they will check the threshold again once they hold the > > mutex. > > > > (c) Protect non-memcg flushers from contention from memcg flushers. > > This is not as strong of an argument as protecting in-kernel flushers > > from userspace flushers. > > > > There has been discussions before about changing the rstat lock itself > > to be a mutex, which would resolve (a), but there are concerns about > > priority inversions if a low priority task holds the mutex and gets > > preempted, as well as the amount of time the rstat lock holder keeps > > the lock for: > > https://lore.kernel.org/lkml/ZO48h7c9qwQxEPPA@slm.duckdns.org/ > > > > I agree about possible hiccups due to the inner lock being dropped > > while the mutex is held. Running a synthetic test with high > > concurrency between reclaimers (in-kernel flushers) and stats readers > > show no material performance difference with or without the mutex. > > Maybe things cancel out, or don't really matter in practice. > > > > I would prefer to keep the current code as I think (a) and (b) could > > cause problems in the future, and the current form of the code (with > > the mutex) has already seen mileage with production workloads. > > Correction: The priority inversion is possible on the memcg side due > to the mutex in this patch. 
Also, for point (a), the spinners will > eventually sleep once they hold the lock and hit the first CPU > boundary -- because of the lock dropping and cond_resched(). So > eventually, all spinners should be able to sleep, although it will be > a while until they do. With the mutex, they all sleep from the > beginning. Point (b) still holds though. > > I am slightly inclined to keep the mutex but I can send a small fixlet > to remove it if others think otherwise. > > Shakeel, Wei, any preferences? My preference is to avoid the issue we know we see in production alot i.e. priority inversion. In future if you see issues with spinning then you can come up with the lockless flush mechanism at that time.
On Mon, Dec 4, 2023 at 3:31 PM Shakeel Butt <shakeelb@google.com> wrote: > > On Mon, Dec 4, 2023 at 1:38 PM Yosry Ahmed <yosryahmed@google.com> wrote: > > > > On Mon, Dec 4, 2023 at 12:12 PM Yosry Ahmed <yosryahmed@google.com> wrote: > > > > > > On Sat, Dec 2, 2023 at 12:31 AM Shakeel Butt <shakeelb@google.com> wrote: > > > > > > > > On Wed, Nov 29, 2023 at 03:21:53AM +0000, Yosry Ahmed wrote: > > > > [...] > > > > > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg) > > > > > { > > > > > - if (memcg_should_flush_stats(root_mem_cgroup)) > > > > > - do_flush_stats(); > > > > > + static DEFINE_MUTEX(memcg_stats_flush_mutex); > > > > > + > > > > > + if (mem_cgroup_disabled()) > > > > > + return; > > > > > + > > > > > + if (!memcg) > > > > > + memcg = root_mem_cgroup; > > > > > + > > > > > + if (memcg_should_flush_stats(memcg)) { > > > > > + mutex_lock(&memcg_stats_flush_mutex); > > > > > > > > What's the point of this mutex now? What is it providing? I understand > > > > we can not try_lock here due to targeted flushing. Why not just let the > > > > global rstat serialize the flushes? Actually this mutex can cause > > > > latency hiccups as the mutex owner can get resched during flush and then > > > > no one can flush for a potentially long time. > > > > > > I was hoping this was clear from the commit message and code comments, > > > but apparently I was wrong, sorry. Let me give more context. > > > > > > In previous versions and/or series, the mutex was only used with > > > flushes from userspace to guard in-kernel flushers against high > > > contention from userspace. Later on, I kept the mutex for all memcg > > > flushers for the following reasons: > > > > > > (a) Allow waiters to sleep: > > > Unlike other flushers, the memcg flushing path can see a lot of > > > concurrency. The mutex avoids having a lot of CPUs spinning (e.g. > > > concurrent reclaimers) by allowing waiters to sleep. > > > > > > (b) Check the threshold under lock but before calling cgroup_rstat_flush(): > > > The calls to cgroup_rstat_flush() are not very cheap even if there's > > > nothing to flush, as we still need to iterate all CPUs. If flushers > > > contend directly on the rstat lock, overlapping flushes will > > > unnecessarily do the percpu iteration once they hold the lock. With > > > the mutex, they will check the threshold again once they hold the > > > mutex. > > > > > > (c) Protect non-memcg flushers from contention from memcg flushers. > > > This is not as strong of an argument as protecting in-kernel flushers > > > from userspace flushers. > > > > > > There has been discussions before about changing the rstat lock itself > > > to be a mutex, which would resolve (a), but there are concerns about > > > priority inversions if a low priority task holds the mutex and gets > > > preempted, as well as the amount of time the rstat lock holder keeps > > > the lock for: > > > https://lore.kernel.org/lkml/ZO48h7c9qwQxEPPA@slm.duckdns.org/ > > > > > > I agree about possible hiccups due to the inner lock being dropped > > > while the mutex is held. Running a synthetic test with high > > > concurrency between reclaimers (in-kernel flushers) and stats readers > > > show no material performance difference with or without the mutex. > > > Maybe things cancel out, or don't really matter in practice. > > > > > > I would prefer to keep the current code as I think (a) and (b) could > > > cause problems in the future, and the current form of the code (with > > > the mutex) has already seen mileage with production workloads. 
> > > > Correction: The priority inversion is possible on the memcg side due > > to the mutex in this patch. Also, for point (a), the spinners will > > eventually sleep once they hold the lock and hit the first CPU > > boundary -- because of the lock dropping and cond_resched(). So > > eventually, all spinners should be able to sleep, although it will be > > a while until they do. With the mutex, they all sleep from the > > beginning. Point (b) still holds though. > > > > I am slightly inclined to keep the mutex but I can send a small fixlet > > to remove it if others think otherwise. > > > > Shakeel, Wei, any preferences? > > My preference is to avoid the issue we know we see in production alot > i.e. priority inversion. > > In future if you see issues with spinning then you can come up with > the lockless flush mechanism at that time. Given that the synthetic high concurrency test doesn't show material performance difference between the mutex and non-mutex versions, I agree that the mutex can be taken out from this patch set (one less global mutex to worry about).
On Mon, Dec 4, 2023 at 3:46 PM Wei Xu <weixugc@google.com> wrote: > > On Mon, Dec 4, 2023 at 3:31 PM Shakeel Butt <shakeelb@google.com> wrote: > > > > On Mon, Dec 4, 2023 at 1:38 PM Yosry Ahmed <yosryahmed@google.com> wrote: > > > > > > On Mon, Dec 4, 2023 at 12:12 PM Yosry Ahmed <yosryahmed@google.com> wrote: > > > > > > > > On Sat, Dec 2, 2023 at 12:31 AM Shakeel Butt <shakeelb@google.com> wrote: > > > > > > > > > > On Wed, Nov 29, 2023 at 03:21:53AM +0000, Yosry Ahmed wrote: > > > > > [...] > > > > > > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg) > > > > > > { > > > > > > - if (memcg_should_flush_stats(root_mem_cgroup)) > > > > > > - do_flush_stats(); > > > > > > + static DEFINE_MUTEX(memcg_stats_flush_mutex); > > > > > > + > > > > > > + if (mem_cgroup_disabled()) > > > > > > + return; > > > > > > + > > > > > > + if (!memcg) > > > > > > + memcg = root_mem_cgroup; > > > > > > + > > > > > > + if (memcg_should_flush_stats(memcg)) { > > > > > > + mutex_lock(&memcg_stats_flush_mutex); > > > > > > > > > > What's the point of this mutex now? What is it providing? I understand > > > > > we can not try_lock here due to targeted flushing. Why not just let the > > > > > global rstat serialize the flushes? Actually this mutex can cause > > > > > latency hiccups as the mutex owner can get resched during flush and then > > > > > no one can flush for a potentially long time. > > > > > > > > I was hoping this was clear from the commit message and code comments, > > > > but apparently I was wrong, sorry. Let me give more context. > > > > > > > > In previous versions and/or series, the mutex was only used with > > > > flushes from userspace to guard in-kernel flushers against high > > > > contention from userspace. Later on, I kept the mutex for all memcg > > > > flushers for the following reasons: > > > > > > > > (a) Allow waiters to sleep: > > > > Unlike other flushers, the memcg flushing path can see a lot of > > > > concurrency. The mutex avoids having a lot of CPUs spinning (e.g. > > > > concurrent reclaimers) by allowing waiters to sleep. > > > > > > > > (b) Check the threshold under lock but before calling cgroup_rstat_flush(): > > > > The calls to cgroup_rstat_flush() are not very cheap even if there's > > > > nothing to flush, as we still need to iterate all CPUs. If flushers > > > > contend directly on the rstat lock, overlapping flushes will > > > > unnecessarily do the percpu iteration once they hold the lock. With > > > > the mutex, they will check the threshold again once they hold the > > > > mutex. > > > > > > > > (c) Protect non-memcg flushers from contention from memcg flushers. > > > > This is not as strong of an argument as protecting in-kernel flushers > > > > from userspace flushers. > > > > > > > > There has been discussions before about changing the rstat lock itself > > > > to be a mutex, which would resolve (a), but there are concerns about > > > > priority inversions if a low priority task holds the mutex and gets > > > > preempted, as well as the amount of time the rstat lock holder keeps > > > > the lock for: > > > > https://lore.kernel.org/lkml/ZO48h7c9qwQxEPPA@slm.duckdns.org/ > > > > > > > > I agree about possible hiccups due to the inner lock being dropped > > > > while the mutex is held. Running a synthetic test with high > > > > concurrency between reclaimers (in-kernel flushers) and stats readers > > > > show no material performance difference with or without the mutex. > > > > Maybe things cancel out, or don't really matter in practice. 
> > > > > > > > I would prefer to keep the current code as I think (a) and (b) could > > > > cause problems in the future, and the current form of the code (with > > > > the mutex) has already seen mileage with production workloads. > > > > > > Correction: The priority inversion is possible on the memcg side due > > > to the mutex in this patch. Also, for point (a), the spinners will > > > eventually sleep once they hold the lock and hit the first CPU > > > boundary -- because of the lock dropping and cond_resched(). So > > > eventually, all spinners should be able to sleep, although it will be > > > a while until they do. With the mutex, they all sleep from the > > > beginning. Point (b) still holds though. > > > > > > I am slightly inclined to keep the mutex but I can send a small fixlet > > > to remove it if others think otherwise. > > > > > > Shakeel, Wei, any preferences? > > > > My preference is to avoid the issue we know we see in production alot > > i.e. priority inversion. > > > > In future if you see issues with spinning then you can come up with > > the lockless flush mechanism at that time. > > Given that the synthetic high concurrency test doesn't show material > performance difference between the mutex and non-mutex versions, I > agree that the mutex can be taken out from this patch set (one less > global mutex to worry about). Thanks Wei and Shakeel for your input. Andrew, could you please squash in the fixlet below and remove the paragraph starting with "Add a mutex to.." from the commit message? From 19af26e01f93cbf0806d75a234b78e48c1ce9d80 Mon Sep 17 00:00:00 2001 From: Yosry Ahmed <yosryahmed@google.com> Date: Mon, 4 Dec 2023 23:43:29 +0000 Subject: [PATCH] mm: memcg: remove stats flushing mutex The mutex was intended to make the waiters sleep instead of spin, and such that we can check the update thresholds again after acquiring the mutex. However, the mutex has a risk of priority inversion, especially since the underlying rstat lock can de dropped while the mutex is held. Synthetic testing with high concurrency of flushers shows no regressions without the mutex, so remove it. Suggested-by: Shakeel Butt <shakeelb@google.com> Signed-off-by: Yosry Ahmed <yosryahmed@google.com> --- mm/memcontrol.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 5d300318bf18a..0563625767349 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -749,21 +749,14 @@ static void do_flush_stats(struct mem_cgroup *memcg) */ void mem_cgroup_flush_stats(struct mem_cgroup *memcg) { - static DEFINE_MUTEX(memcg_stats_flush_mutex); - if (mem_cgroup_disabled()) return; if (!memcg) memcg = root_mem_cgroup; - if (memcg_should_flush_stats(memcg)) { - mutex_lock(&memcg_stats_flush_mutex); - /* Check again after locking, another flush may have occurred */ - if (memcg_should_flush_stats(memcg)) - do_flush_stats(memcg); - mutex_unlock(&memcg_stats_flush_mutex); - } + if (memcg_should_flush_stats(memcg)) + do_flush_stats(memcg); } void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg) -- 2.43.0.rc2.451.g8631bc7472-goog
On Mon, Dec 04, 2023 at 03:49:01PM -0800, Yosry Ahmed wrote: [...] > > From 19af26e01f93cbf0806d75a234b78e48c1ce9d80 Mon Sep 17 00:00:00 2001 > From: Yosry Ahmed <yosryahmed@google.com> > Date: Mon, 4 Dec 2023 23:43:29 +0000 > Subject: [PATCH] mm: memcg: remove stats flushing mutex > > The mutex was intended to make the waiters sleep instead of spin, and > such that we can check the update thresholds again after acquiring the > mutex. However, the mutex has a risk of priority inversion, especially > since the underlying rstat lock can de dropped while the mutex is held. > > Synthetic testing with high concurrency of flushers shows no > regressions without the mutex, so remove it. > > Suggested-by: Shakeel Butt <shakeelb@google.com> > Signed-off-by: Yosry Ahmed <yosryahmed@google.com> Acked-by: Shakeel Butt <shakeelb@google.com>
On Mon, 4 Dec 2023 23:58:56 +0000 Shakeel Butt <shakeelb@google.com> wrote: > On Mon, Dec 04, 2023 at 03:49:01PM -0800, Yosry Ahmed wrote: > [...] > > > > From 19af26e01f93cbf0806d75a234b78e48c1ce9d80 Mon Sep 17 00:00:00 2001 > > From: Yosry Ahmed <yosryahmed@google.com> > > Date: Mon, 4 Dec 2023 23:43:29 +0000 > > Subject: [PATCH] mm: memcg: remove stats flushing mutex > > > > The mutex was intended to make the waiters sleep instead of spin, and > > such that we can check the update thresholds again after acquiring the > > mutex. However, the mutex has a risk of priority inversion, especially > > since the underlying rstat lock can de dropped while the mutex is held. > > > > Synthetic testing with high concurrency of flushers shows no > > regressions without the mutex, so remove it. > > > > Suggested-by: Shakeel Butt <shakeelb@google.com> > > Signed-off-by: Yosry Ahmed <yosryahmed@google.com> > > Acked-by: Shakeel Butt <shakeelb@google.com> > I'd like to move this series into mm-stable soon. Are we all OK with that?
On Tue, Dec 12, 2023 at 10:43 AM Andrew Morton <akpm@linux-foundation.org> wrote: > > On Mon, 4 Dec 2023 23:58:56 +0000 Shakeel Butt <shakeelb@google.com> wrote: > > > On Mon, Dec 04, 2023 at 03:49:01PM -0800, Yosry Ahmed wrote: > > [...] > > > > > > From 19af26e01f93cbf0806d75a234b78e48c1ce9d80 Mon Sep 17 00:00:00 2001 > > > From: Yosry Ahmed <yosryahmed@google.com> > > > Date: Mon, 4 Dec 2023 23:43:29 +0000 > > > Subject: [PATCH] mm: memcg: remove stats flushing mutex > > > > > > The mutex was intended to make the waiters sleep instead of spin, and > > > such that we can check the update thresholds again after acquiring the > > > mutex. However, the mutex has a risk of priority inversion, especially > > > since the underlying rstat lock can de dropped while the mutex is held. > > > > > > Synthetic testing with high concurrency of flushers shows no > > > regressions without the mutex, so remove it. > > > > > > Suggested-by: Shakeel Butt <shakeelb@google.com> > > > Signed-off-by: Yosry Ahmed <yosryahmed@google.com> > > > > Acked-by: Shakeel Butt <shakeelb@google.com> > > > > I'd like to move this series into mm-stable soon. Are we all OK with that? OK from me.
On Tue, Dec 12, 2023 at 10:43 AM Andrew Morton <akpm@linux-foundation.org> wrote: > > On Mon, 4 Dec 2023 23:58:56 +0000 Shakeel Butt <shakeelb@google.com> wrote: > > > On Mon, Dec 04, 2023 at 03:49:01PM -0800, Yosry Ahmed wrote: > > [...] > > > > > > From 19af26e01f93cbf0806d75a234b78e48c1ce9d80 Mon Sep 17 00:00:00 2001 > > > From: Yosry Ahmed <yosryahmed@google.com> > > > Date: Mon, 4 Dec 2023 23:43:29 +0000 > > > Subject: [PATCH] mm: memcg: remove stats flushing mutex > > > > > > The mutex was intended to make the waiters sleep instead of spin, and > > > such that we can check the update thresholds again after acquiring the > > > mutex. However, the mutex has a risk of priority inversion, especially > > > since the underlying rstat lock can de dropped while the mutex is held. > > > > > > Synthetic testing with high concurrency of flushers shows no > > > regressions without the mutex, so remove it. > > > > > > Suggested-by: Shakeel Butt <shakeelb@google.com> > > > Signed-off-by: Yosry Ahmed <yosryahmed@google.com> > > > > Acked-by: Shakeel Butt <shakeelb@google.com> > > > > I'd like to move this series into mm-stable soon. Are we all OK with that? Looking forward to that :)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index a568f70a26774..8673140683e6e 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1050,8 +1050,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec, return x; } -void mem_cgroup_flush_stats(void); -void mem_cgroup_flush_stats_ratelimited(void); +void mem_cgroup_flush_stats(struct mem_cgroup *memcg); +void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg); void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx, int val); @@ -1566,11 +1566,11 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec, return node_page_state(lruvec_pgdat(lruvec), idx); } -static inline void mem_cgroup_flush_stats(void) +static inline void mem_cgroup_flush_stats(struct mem_cgroup *memcg) { } -static inline void mem_cgroup_flush_stats_ratelimited(void) +static inline void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg) { } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 93b483b379aa1..5d300318bf18a 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -670,7 +670,6 @@ struct memcg_vmstats { */ static void flush_memcg_stats_dwork(struct work_struct *w); static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork); -static atomic_t stats_flush_ongoing = ATOMIC_INIT(0); static u64 flush_last_time; #define FLUSH_TIME (2UL*HZ) @@ -731,35 +730,47 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val) } } -static void do_flush_stats(void) +static void do_flush_stats(struct mem_cgroup *memcg) { - /* - * We always flush the entire tree, so concurrent flushers can just - * skip. This avoids a thundering herd problem on the rstat global lock - * from memcg flushers (e.g. reclaim, refault, etc). - */ - if (atomic_read(&stats_flush_ongoing) || - atomic_xchg(&stats_flush_ongoing, 1)) - return; - - WRITE_ONCE(flush_last_time, jiffies_64); - - cgroup_rstat_flush(root_mem_cgroup->css.cgroup); + if (mem_cgroup_is_root(memcg)) + WRITE_ONCE(flush_last_time, jiffies_64); - atomic_set(&stats_flush_ongoing, 0); + cgroup_rstat_flush(memcg->css.cgroup); } -void mem_cgroup_flush_stats(void) +/* + * mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree + * @memcg: root of the subtree to flush + * + * Flushing is serialized by the underlying global rstat lock. There is also a + * minimum amount of work to be done even if there are no stat updates to flush. + * Hence, we only flush the stats if the updates delta exceeds a threshold. This + * avoids unnecessary work and contention on the underlying lock. 
+ */ +void mem_cgroup_flush_stats(struct mem_cgroup *memcg) { - if (memcg_should_flush_stats(root_mem_cgroup)) - do_flush_stats(); + static DEFINE_MUTEX(memcg_stats_flush_mutex); + + if (mem_cgroup_disabled()) + return; + + if (!memcg) + memcg = root_mem_cgroup; + + if (memcg_should_flush_stats(memcg)) { + mutex_lock(&memcg_stats_flush_mutex); + /* Check again after locking, another flush may have occurred */ + if (memcg_should_flush_stats(memcg)) + do_flush_stats(memcg); + mutex_unlock(&memcg_stats_flush_mutex); + } } -void mem_cgroup_flush_stats_ratelimited(void) +void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg) { /* Only flush if the periodic flusher is one full cycle late */ if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME)) - mem_cgroup_flush_stats(); + mem_cgroup_flush_stats(memcg); } static void flush_memcg_stats_dwork(struct work_struct *w) @@ -768,7 +779,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w) * Deliberately ignore memcg_should_flush_stats() here so that flushing * in latency-sensitive paths is as cheap as possible. */ - do_flush_stats(); + do_flush_stats(root_mem_cgroup); queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME); } @@ -1664,7 +1675,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s) * * Current memory state: */ - mem_cgroup_flush_stats(); + mem_cgroup_flush_stats(memcg); for (i = 0; i < ARRAY_SIZE(memory_stats); i++) { u64 size; @@ -4214,7 +4225,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v) int nid; struct mem_cgroup *memcg = mem_cgroup_from_seq(m); - mem_cgroup_flush_stats(); + mem_cgroup_flush_stats(memcg); for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) { seq_printf(m, "%s=%lu", stat->name, @@ -4295,7 +4306,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s) BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats)); - mem_cgroup_flush_stats(); + mem_cgroup_flush_stats(memcg); for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) { unsigned long nr; @@ -4791,7 +4802,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css); struct mem_cgroup *parent; - mem_cgroup_flush_stats(); + mem_cgroup_flush_stats(memcg); *pdirty = memcg_page_state(memcg, NR_FILE_DIRTY); *pwriteback = memcg_page_state(memcg, NR_WRITEBACK); @@ -6886,7 +6897,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v) int i; struct mem_cgroup *memcg = mem_cgroup_from_seq(m); - mem_cgroup_flush_stats(); + mem_cgroup_flush_stats(memcg); for (i = 0; i < ARRAY_SIZE(memory_stats); i++) { int nid; @@ -8125,7 +8136,11 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg) break; } - cgroup_rstat_flush(memcg->css.cgroup); + /* + * mem_cgroup_flush_stats() ignores small changes. Use + * do_flush_stats() directly to get accurate stats for charging. 
+ */ + do_flush_stats(memcg); pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE; if (pages < max) continue; @@ -8190,8 +8205,10 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size) static u64 zswap_current_read(struct cgroup_subsys_state *css, struct cftype *cft) { - cgroup_rstat_flush(css->cgroup); - return memcg_page_state(mem_cgroup_from_css(css), MEMCG_ZSWAP_B); + struct mem_cgroup *memcg = mem_cgroup_from_css(css); + + mem_cgroup_flush_stats(memcg); + return memcg_page_state(memcg, MEMCG_ZSWAP_B); } static int zswap_max_show(struct seq_file *m, void *v) diff --git a/mm/vmscan.c b/mm/vmscan.c index d8c3338fee0fb..0b8a0107d58d8 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2250,7 +2250,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc) * Flush the memory cgroup stats, so that we read accurate per-memcg * lruvec stats for heuristics. */ - mem_cgroup_flush_stats(); + mem_cgroup_flush_stats(sc->target_mem_cgroup); /* * Determine the scan balance between anon and file LRUs. diff --git a/mm/workingset.c b/mm/workingset.c index dce41577a49d2..7d3dacab8451a 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -464,8 +464,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset) rcu_read_unlock(); - /* Flush stats (and potentially sleep) outside the RCU read section */ - mem_cgroup_flush_stats_ratelimited(); + /* + * Flush stats (and potentially sleep) outside the RCU read section. + * XXX: With per-memcg flushing and thresholding, is ratelimiting + * still needed here? + */ + mem_cgroup_flush_stats_ratelimited(eviction_memcg); eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat); refault = atomic_long_read(&eviction_lruvec->nonresident_age); @@ -676,7 +680,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker, struct lruvec *lruvec; int i; - mem_cgroup_flush_stats(); + mem_cgroup_flush_stats(sc->memcg); lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid)); for (pages = 0, i = 0; i < NR_LRU_LISTS; i++) pages += lruvec_page_state_local(lruvec,