Message ID: 20230809045810.1659356-1-yosryahmed@google.com
State: New
Series: mm: memcg: provide accurate stats for userspace reads
Commit Message
Yosry Ahmed
Aug. 9, 2023, 4:58 a.m. UTC
Over time, the memcg code added multiple optimizations to the stats
flushing path that introduce a tradeoff between accuracy and
performance. In some contexts (e.g. dirty throttling, refaults, etc), a
full rstat flush of the stats in the tree can be too expensive. Such
optimizations include [1]:
(a) Introducing a periodic background flusher to keep the size of the
update tree from growing unbounded.
(b) Allowing only one thread to flush at a time, and other concurrent
flushers just skip the flush. This avoids a thundering herd problem
when multiple reclaim/refault threads attempt to flush the stats at
once.
(c) Only executing a flush if the magnitude of the stats updates exceeds
a certain threshold.
These optimizations were necessary to make flushing feasible in
performance-critical paths, and they come at the cost of some accuracy
that we choose to live without. On the other hand, for flushes invoked
when userspace is reading the stats, the tradeoff is less appealing.
This code path is not performance-critical, and the inaccuracies can
affect userspace behavior. For example, skipping flushing when there is
another ongoing flush is essentially a coin flip. We don't know if the
ongoing flush is done with the subtree of interest or not.
If userspace asks for stats, let's give it accurate stats. Without this
patch, we see regressions in userspace workloads due to stats inaccuracy
in some cases.
Rework the do_flush_stats() helper to accept a "full" boolean argument.
For a "full" flush, if there is an ongoing flush, do not skip. Instead,
wait for the flush to complete. Introduce a new
mem_cgroup_flush_stats_full() interface that uses this full flush and
also does not check whether the magnitude of the updates exceeds the
threshold. Use mem_cgroup_flush_stats_full() in code paths where stats
are flushed due to a userspace read. This essentially undoes
optimizations (b) and (c) above for flushes triggered by userspace
reads.
[1] https://lore.kernel.org/lkml/20210716212137.1391164-2-shakeelb@google.com/
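For reference, a simplified sketch of how optimizations (b) and (c)
look before this patch, reconstructed from the hunks quoted in the
discussion below; the threshold bookkeeping is abridged and may differ
in detail from mm/memcontrol.c:

static void do_flush_stats(void)
{
	/*
	 * (b) Only one flusher at a time; every other concurrent flusher
	 * skips entirely, whether or not the subtree it cares about has
	 * actually been flushed yet.
	 */
	if (atomic_read(&stats_flush_ongoing) ||
	    atomic_xchg(&stats_flush_ongoing, 1))
		return;

	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
	atomic_set(&stats_flush_threshold, 0);
	atomic_set(&stats_flush_ongoing, 0);
}

static void mem_cgroup_flush_stats(void)
{
	/* (c) Skip the flush entirely while pending updates are small. */
	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
		do_flush_stats();
}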
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
I want to argue that this is what we should be doing for all flushing
contexts, not just userspace reads (i.e. all flushes should be "full").
Skipping if a flush is ongoing is too brittle. There is a significant
chance that the stats of the cgroup we care about are not fully flushed.
Waiting for an ongoing flush to finish ensures correctness while still
avoiding the thundering herd problem on the rstat flush lock.
Having said that, there is a higher chance of regression if we add the
wait in more critical paths (e.g. reclaim, refaults), so I opted to do
this for userspace reads for now. We have complaints about inaccuracy in
userspace reads, but no complaints about inaccuracy in other paths so
far (although it would be really difficult to tie a reclaim/refault
problem to a partial stats flush anyway).
---
mm/memcontrol.c | 42 +++++++++++++++++++++++++++---------------
1 file changed, 27 insertions(+), 15 deletions(-)
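The diff body is not reproduced by the archive, but the core of the
change can be pieced together from the hunks quoted in the replies
below. A sketch of the reworked helper (the two wrappers at the end are
inferred from the commit message rather than quoted, so their exact
shape is an assumption):

static void do_flush_stats(bool full)
{
	if (!atomic_read(&stats_flush_ongoing) &&
	    !atomic_xchg(&stats_flush_ongoing, 1))
		goto flush;

	/*
	 * We always flush the entire tree, so concurrent flushers can
	 * choose to skip if accuracy is not critical. Otherwise, wait for
	 * the ongoing flush to complete. This avoids a thundering herd
	 * problem on the rstat global lock from memcg flushers (e.g.
	 * reclaim, refault, etc).
	 */
	while (full && atomic_read(&stats_flush_ongoing) == 1) {
		if (!cond_resched())
			cpu_relax();
	}
	return;
flush:
	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
	atomic_set(&stats_flush_threshold, 0);
	atomic_set(&stats_flush_ongoing, 0);
}

/* Inferred wrappers -- not quoted verbatim anywhere in this thread: */
void mem_cgroup_flush_stats(void)
{
	/* Non-critical paths keep the threshold check and may skip. */
	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
		do_flush_stats(false);
}

void mem_cgroup_flush_stats_full(void)
{
	/* Userspace reads: no threshold check, wait instead of skip. */
	do_flush_stats(true);
}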
Comments
On Wed 09-08-23 04:58:10, Yosry Ahmed wrote:
[...]
> This code path is not performance-critical, and the inaccuracies can
> affect userspace behavior. For example, skipping flushing when there is
> another ongoing flush is essentially a coin flip. We don't know if the
> ongoing flush is done with the subtree of interest or not.

I am not convinced by this much TBH. What kind of precision do you
really need, and how far off is what we provide?

More expensive reads of stats from userspace are quite easy to notice
and usually get reported as regressions. So you should have a convincing
argument that the extra time spent is really worth it. AFAIK there are
many monitoring (top-like) tools which simply read those files regularly
just to show numbers, and they certainly do not need a high level of
precision.

[...]
> @@ -639,17 +639,24 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
>  	}
>  }
>
> -static void do_flush_stats(void)
> +static void do_flush_stats(bool full)
>  {
> +	if (!atomic_read(&stats_flush_ongoing) &&
> +	    !atomic_xchg(&stats_flush_ongoing, 1))
> +		goto flush;
> +
>  	/*
> -	 * We always flush the entire tree, so concurrent flushers can just
> -	 * skip. This avoids a thundering herd problem on the rstat global lock
> -	 * from memcg flushers (e.g. reclaim, refault, etc).
> +	 * We always flush the entire tree, so concurrent flushers can choose to
> +	 * skip if accuracy is not critical. Otherwise, wait for the ongoing
> +	 * flush to complete. This avoids a thundering herd problem on the rstat
> +	 * global lock from memcg flushers (e.g. reclaim, refault, etc).
>  	 */
> -	if (atomic_read(&stats_flush_ongoing) ||
> -	    atomic_xchg(&stats_flush_ongoing, 1))
> -		return;
> -
> +	while (full && atomic_read(&stats_flush_ongoing) == 1) {
> +		if (!cond_resched())
> +			cpu_relax();

You are reinventing a mutex with a spinning waiter. Why don't you simply
make stats_flush_ongoing a real mutex and use try_lock for the !full
flush and a normal lock otherwise?

> +	}
> +	return;
> +flush:
>  	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
>
>  	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
[...]
On Wed, Aug 9, 2023 at 1:51 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 09-08-23 04:58:10, Yosry Ahmed wrote:
> [...]
>
> I am not convinced by this much TBH. What kind of precision do you
> really need, and how far off is what we provide?
>
> More expensive reads of stats from userspace are quite easy to notice
> and usually get reported as regressions. So you should have a convincing
> argument that the extra time spent is really worth it. AFAIK there are
> many monitoring (top-like) tools which simply read those files regularly
> just to show numbers, and they certainly do not need a high level of
> precision.

We used to spend this time before commit fd25a9e0e23b ("memcg: unify
memcg stat flushing"), which generalized the "skip if ongoing flush"
behavior for all stat flushing. As far as I know, the problem was
contention on the flushing lock, which also affected critical paths like
refault.

The problem is that the current behavior is nondeterministic: if cpu A
tries to flush stats and cpu B is already doing that, cpu A will just
skip. At that point, the cgroup(s) that cpu A cares about may have been
fully flushed, partially flushed (in terms of cpus), or not flushed at
all. We have no idea. We just know that someone else is flushing
something. IOW, in some cases the flush request will be completely
ignored and userspace will read stale stats (up to 2s + the periodic
flusher runtime).

Some workloads need to read up-to-date stats as feedback to actions
(e.g. after proactive reclaim, or for userspace OOM killing purposes),
and reading such stale stats causes regressions or misbehavior by
userspace.

> [...]
> > -static void do_flush_stats(void)
> > +static void do_flush_stats(bool full)
> [...]
> > +	while (full && atomic_read(&stats_flush_ongoing) == 1) {
> > +		if (!cond_resched())
> > +			cpu_relax();
>
> You are reinventing a mutex with a spinning waiter. Why don't you simply
> make stats_flush_ongoing a real mutex and use try_lock for the !full
> flush and a normal lock otherwise?

So that was actually a spinlock at one point, when we used to skip if
try_lock failed. We opted for an atomic because the lock was only used
in a try_lock fashion. The problem here is that the atomic is used to
ensure that only one thread actually attempts to flush at a time (and
others skip/wait), to avoid a thundering herd problem on
cgroup_rstat_lock.

Here, what I am trying to do is essentially equivalent to "wait until
the lock is available but don't grab it". If we make stats_flush_ongoing
a mutex, I am afraid the thundering herd problem will be reintroduced
for stats_flush_ongoing this time.

I am not sure if there's a cleaner way of doing this, but I am certainly
open to suggestions. I also don't like how the spinning loop looks as of
now.
On Wed 09-08-23 05:31:04, Yosry Ahmed wrote:
> On Wed, Aug 9, 2023 at 1:51 AM Michal Hocko <mhocko@suse.com> wrote:
> [...]
> The problem is that the current behavior is nondeterministic: if cpu A
> tries to flush stats and cpu B is already doing that, cpu A will just
> skip. At that point, the cgroup(s) that cpu A cares about may have been
> fully flushed, partially flushed (in terms of cpus), or not flushed at
> all. We have no idea. We just know that someone else is flushing
> something. IOW, in some cases the flush request will be completely
> ignored and userspace will read stale stats (up to 2s + the periodic
> flusher runtime).

Yes, that is certainly true, but why does that matter? Stats are always
a snapshot of the past. Do we get an inconsistent image that would be
actively harmful?

> Some workloads need to read up-to-date stats as feedback to actions
> (e.g. after proactive reclaim, or for userspace OOM killing purposes),
> and reading such stale stats causes regressions or misbehavior by
> userspace.

Please tell us more about those, and why everyone else who does not
require such precision should pay that price as well.

> [...]
> So that was actually a spinlock at one point, when we used to skip if
> try_lock failed.

AFAICS cgroup_rstat_flush is allowed to sleep, so spinlocks are not
really possible.

> We opted for an atomic because the lock was only used
> in a try_lock fashion. The problem here is that the atomic is used to
> ensure that only one thread actually attempts to flush at a time (and
> others skip/wait), to avoid a thundering herd problem on
> cgroup_rstat_lock.
>
> Here, what I am trying to do is essentially equivalent to "wait until
> the lock is available but don't grab it". If we make stats_flush_ongoing
> a mutex, I am afraid the thundering herd problem will be reintroduced
> for stats_flush_ongoing this time.

You will have potentially many spinners for something that might take
quite a lot of time (sleep) if there is nothing else to schedule. I do
not think that is proper behavior. Really, you shouldn't be busy waiting
for a sleeper.

> I am not sure if there's a cleaner way of doing this, but I am certainly
> open to suggestions. I also don't like how the spinning loop looks as of
> now.

mutex_trylock for non-critical flushers and mutex_lock for syncing ones.
We can talk about a custom locking scheme if that proves insufficient or
problematic.
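A minimal sketch of this suggestion, assuming stats_flush_ongoing is
replaced by a mutex (hypothetical code, not taken from the thread):

static DEFINE_MUTEX(stats_flush_mutex);

static void do_flush_stats(bool full)
{
	/*
	 * Non-critical flushers (reclaim, refault) just skip if someone
	 * else holds the lock; "full" flushers (userspace reads) sleep on
	 * the mutex instead of busy waiting.
	 */
	if (full)
		mutex_lock(&stats_flush_mutex);
	else if (!mutex_trylock(&stats_flush_mutex))
		return;

	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
	atomic_set(&stats_flush_threshold, 0);

	mutex_unlock(&stats_flush_mutex);
}

Note that a "full" caller that had to wait then flushes again itself
once it acquires the mutex, which is the "approach B" behavior discussed
later in the thread.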
On Wed, Aug 9, 2023 at 5:58 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 09-08-23 05:31:04, Yosry Ahmed wrote:
> [...]
> Yes, that is certainly true, but why does that matter? Stats are always
> a snapshot of the past. Do we get an inconsistent image that would be
> actively harmful?

That can very well be the case, because we may be in a state where some
cpus are flushed and some aren't. Also, sometimes a few seconds is too
old. We have some workloads that read the stats every 1-2 seconds to
keep a fresh state, and they certainly do not expect stats to be 2+
seconds old when they read them.

> > Some workloads need to read up-to-date stats as feedback to actions
> > (e.g. after proactive reclaim, or for userspace OOM killing purposes),
> > and reading such stale stats causes regressions or misbehavior by
> > userspace.
>
> Please tell us more about those, and why everyone else who does not
> require such precision should pay that price as well.

Everyone used to pay this price though, and no one used to complain.
Even before rstat, we used to iterate the entire hierarchy when
userspace reads the stats. rstat came in and made this much more
efficient by only iterating the subtrees that actually have updates.
The "skip if someone else is flushing" behavior was introduced for
flushers in critical paths (e.g. refault), and hurting the accuracy for
userspace readers was a side effect of it. This patch is trying to
remedy that side effect by restoring the old behavior for userspace
reads.

One other side effect is testing. Some tests became flaky because a test
performs an action and expects the state of the system to change in a
certain way deterministically. In some cases the flushing race leads to
false negatives.

> [...]
> mutex_trylock for non-critical flushers and mutex_lock for syncing ones.
> We can talk about a custom locking scheme if that proves insufficient or
> problematic.
<snip>

> [...]
> > So that was actually a spinlock at one point, when we used to skip if
> > try_lock failed.
>
> AFAICS cgroup_rstat_flush is allowed to sleep, so spinlocks are not
> really possible.

Sorry, I hit the send button too early and didn't get to this part. We
were able to use a spinlock because we used to disable sleeping when
flushing the stats then, which opened another can of worms :)

> [...]
> You will have potentially many spinners for something that might take
> quite a lot of time (sleep) if there is nothing else to schedule. I do
> not think that is proper behavior. Really, you shouldn't be busy waiting
> for a sleeper.
>
> > I am not sure if there's a cleaner way of doing this, but I am certainly
> > open to suggestions. I also don't like how the spinning loop looks as of
> > now.
>
> mutex_trylock for non-critical flushers and mutex_lock for syncing ones.
> We can talk about a custom locking scheme if that proves insufficient or
> problematic.

I have no problem with this. I can send a v2 following this scheme, once
we agree on the importance of this patch :)
On Wed 09-08-23 06:13:05, Yosry Ahmed wrote:
> On Wed, Aug 9, 2023 at 5:58 AM Michal Hocko <mhocko@suse.com> wrote:
> [...]
> That can very well be the case, because we may be in a state where some
> cpus are flushed and some aren't. Also, sometimes a few seconds is too
> old. We have some workloads that read the stats every 1-2 seconds to
> keep a fresh state, and they certainly do not expect stats to be 2+
> seconds old when they read them.

I hate to repeat myself, but please be more specific. This all sounds
just too hand-wavy to me.

> > > Some workloads need to read up-to-date stats as feedback to actions
> > > (e.g. after proactive reclaim, or for userspace OOM killing purposes),
> > > and reading such stale stats causes regressions or misbehavior by
> > > userspace.
> >
> > Please tell us more about those, and why everyone else who does not
> > require such precision should pay that price as well.
>
> Everyone used to pay this price though, and no one used to complain.

Right, and then the overhead was reduced, and now you want to bring it
back, which will be seen as a regression. It doesn't really matter what
the overhead used to be. People always care when something gets slower.
On Wed, Aug 9, 2023 at 6:32 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 09-08-23 06:13:05, Yosry Ahmed wrote:
> [...]
> > That can very well be the case, because we may be in a state where some
> > cpus are flushed and some aren't. Also, sometimes a few seconds is too
> > old. We have some workloads that read the stats every 1-2 seconds to
> > keep a fresh state, and they certainly do not expect stats to be 2+
> > seconds old when they read them.
>
> I hate to repeat myself, but please be more specific. This all sounds
> just too hand-wavy to me.

Sorry, I didn't have the full story in mind; I had to do my homework.
One example is userspace OOM killing. Our userspace OOM killer makes
decisions based on some stats from memory.stat, and stale stats (a few
seconds in this case) can result in a wrongful OOM kill, which can
easily cascade.

A simplified example of that is when a hierarchy has a parent cgroup
with multiple related children. In this case, there are usually
file-backed resources that are shared between those children, and OOM
killing one of them will not free those resources. Hence, the OOM killer
only considers their anonymous usage to be reapable when a memcg is
nuked. For that we use the "anon" stat (or "rss" in cgroup v1) in
memory.stat.

> [...]
> > Everyone used to pay this price though, and no one used to complain.
>
> Right, and then the overhead was reduced, and now you want to bring it
> back, which will be seen as a regression. It doesn't really matter what
> the overhead used to be. People always care when something gets slower.

People also care when something gets less accurate :)
On Wed 09-08-23 11:33:20, Yosry Ahmed wrote:
> On Wed, Aug 9, 2023 at 6:32 AM Michal Hocko <mhocko@suse.com> wrote:
> [...]
> Sorry, I didn't have the full story in mind; I had to do my homework.
> One example is userspace OOM killing. Our userspace OOM killer makes
> decisions based on some stats from memory.stat, and stale stats (a few
> seconds in this case) can result in a wrongful OOM kill, which can
> easily cascade.

OK, but how is this any different from having outdated data because you
have to wait for memory.stat to be read (being blocked inside the rstat
code)? Either your oom killer is reading the stats directly, and then
you depend on that flushing, which is something that could be really
harmful itself, or you rely on another thread doing the blocking and you
do not have up-to-date numbers anyway. So how does blocking actually
help?

> A simplified example of that is when a hierarchy has a parent cgroup
> with multiple related children. [...]
>
> [...]
> People also care when something gets less accurate :)

Accuracy will never be 100%. We have to carefully balance between
accuracy and overhead. So far we haven't heard about how much inaccuracy
you are getting. Numbers help!

In any case, I do get the argument about consistency within a subtree
(children's data largely not matching the parent's). Examples like that
would be really helpful as well. If that is indeed the case, then I
would consider it much more serious than accuracy, which is always
problematic (100ms of an actively allocating context can ruin your
just-read numbers and there is no way around that without stopping the
world).

Last note: for /proc/vmstat we have /proc/sys/vm/stat_refresh to trigger
an explicit refresh. For those users who really need more accurate
numbers, we might consider an interface like that. Or allow a write to
the stat file and do the refresh in the write handler.
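For illustration only, a hypothetical shape of that last suggestion: a
write handler on memory.stat that forces a synchronous flush, along the
lines of /proc/sys/vm/stat_refresh. The handler name and wiring are
assumptions; nothing like this is part of the posted patch:

static ssize_t memory_stat_refresh_write(struct kernfs_open_file *of,
					 char *buf, size_t nbytes,
					 loff_t off)
{
	do_flush_stats(true);	/* full flush: wait, don't skip */
	return nbytes;
}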
On Fri, Aug 11, 2023 at 5:21 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 09-08-23 11:33:20, Yosry Ahmed wrote:
> > On Wed, Aug 9, 2023 at 6:32 AM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Wed 09-08-23 06:13:05, Yosry Ahmed wrote:
> > > > On Wed, Aug 9, 2023 at 5:58 AM Michal Hocko <mhocko@suse.com> wrote:
> > > > >
> > > > > On Wed 09-08-23 05:31:04, Yosry Ahmed wrote:
> > > > > > On Wed, Aug 9, 2023 at 1:51 AM Michal Hocko <mhocko@suse.com> wrote:
> > > > > > >
> > > > > > > On Wed 09-08-23 04:58:10, Yosry Ahmed wrote:
> > > > > > > > Over time, the memcg code added multiple optimizations to the stats
> > > > > > > > flushing path that introduce a tradeoff between accuracy and
> > > > > > > > performance. In some contexts (e.g. dirty throttling, refaults, etc.), a
> > > > > > > > full rstat flush of the stats in the tree can be too expensive. Such
> > > > > > > > optimizations include [1]:
> > > > > > > > (a) Introducing a periodic background flusher to keep the size of the
> > > > > > > >     update tree from growing unbounded.
> > > > > > > > (b) Allowing only one thread to flush at a time, and other concurrent
> > > > > > > >     flushers just skip the flush. This avoids a thundering herd problem
> > > > > > > >     when multiple reclaim/refault threads attempt to flush the stats at
> > > > > > > >     once.
> > > > > > > > (c) Only executing a flush if the magnitude of the stats updates exceeds
> > > > > > > >     a certain threshold.
> > > > > > > >
> > > > > > > > These optimizations were necessary to make flushing feasible in
> > > > > > > > performance-critical paths, and they come at the cost of some accuracy
> > > > > > > > that we choose to live without. On the other hand, for flushes invoked
> > > > > > > > when userspace is reading the stats, the tradeoff is less appealing.
> > > > > > > > This code path is not performance-critical, and the inaccuracies can
> > > > > > > > affect userspace behavior. For example, skipping flushing when there is
> > > > > > > > another ongoing flush is essentially a coin flip. We don't know if the
> > > > > > > > ongoing flush is done with the subtree of interest or not.
> > > > > > >
> > > > > > > I am not convinced by this much TBH. What kind of precision do you
> > > > > > > really need, and how far off is what we provide?
> > > > > > >
> > > > > > > A more expensive read of stats from userspace is quite easy to notice
> > > > > > > and is usually reported as a regression. So you should have a convincing
> > > > > > > argument that the extra time spent is really worth it. AFAIK there are
> > > > > > > many monitoring (top-like) tools which simply read those files regularly
> > > > > > > just to show numbers, and they certainly do not need a high level of
> > > > > > > precision.
> > > > > >
> > > > > > We used to spend this time before commit fd25a9e0e23b ("memcg: unify
> > > > > > memcg stat flushing"), which generalized the "skip if ongoing flush"
> > > > > > behavior for all stat flushing. As far as I know, the problem was
> > > > > > contention on the flushing lock, which also affected critical paths
> > > > > > like refault.
> > > > > >
> > > > > > The problem is that the current behavior is nondeterministic: if cpu A
> > > > > > tries to flush stats and cpu B is already doing that, cpu A will just
> > > > > > skip. At that point, the cgroup(s) that cpu A cares about may have
> > > > > > been fully flushed, partially flushed (in terms of cpus), or not
> > > > > > flushed at all. We have no idea. We just know that someone else is
> > > > > > flushing something. IOW, in some cases the flush request will be
> > > > > > completely ignored and userspace will read stale stats (up to 2s + the
> > > > > > periodic flusher runtime).
> > > > >
> > > > > Yes, that is certainly true, but why does that matter? Stats are always
> > > > > a snapshot of the past. Do we get an inconsistent image that would be
> > > > > actively harmful?
> > > >
> > > > That can very well be the case, because we may be in a state where some
> > > > cpus are flushed and some aren't. Also, sometimes a few seconds is too
> > > > old. We have some workloads that read the stats every 1-2 seconds to
> > > > keep a fresh state, and they certainly do not expect stats to be 2+
> > > > seconds old when they read them.
> > >
> > > I hate to repeat myself, but please be more specific. This all sounds
> > > just too wavy to me.
> >
> > Sorry, I didn't have the full story in mind; I had to do my homework.
> > One example is userspace OOM killing. Our userspace OOM killer makes
> > decisions based on some stats from memory.stat, and stale stats (a few
> > seconds in this case) can result in an unjustified OOM kill, which can
> > easily cascade.
>
> OK, but how is this any different from having outdated data because you
> have to wait for memory.stat to read (being blocked inside the rstat
> code)? Either your oom killer is reading the stats directly, and then you
> depend on that flushing, which is something that could be really harmful
> itself, or you rely on another thread doing the blocking and you do not
> have up-to-date numbers anyway. So how does blocking actually help?

I am not sure I understand.

The problem is that when you skip because someone else is flushing, there
is a chance that the stats we care about haven't been flushed since the
last time the periodic flusher ran, which is supposed to be ~2 seconds
ago but may be longer depending on how busy the workqueue is.

When you block until the flusher finishes, the stats are being refreshed
as you wait. So the stats are not getting more outdated as you wait in
the general case (unless your cgroup was flushed first and you're waiting
for others to be flushed).
[Let's call this approach A]

Furthermore, with the implementation you suggested using a mutex, we will
wait until the ongoing flush is completed, then we will grab the mutex
and do a flush ourselves. That second flush should mostly be very fast,
but it will guarantee even fresher stats.
[Let's call this approach B]

See below for test results with either A or B.

We can add a new API that checks if the specific cgroup we care about is
flushed and waits on that instead of waiting for the entire flush to
finish, which would add stronger guarantees. However, as you said when
you suggested the mutex approach, let's start simple and add more
complexity when needed.

> > A simplified example of that is when a hierarchy has a parent cgroup
> > with multiple related children. In this case, there are usually
> > file-backed resources that are shared between those children, and OOM
> > killing one of them will not free those resources. Hence, the OOM
> > killer only considers their anonymous usage to be reap-able when a
> > memcg is nuked. For that we use the "anon" stat (or "rss" in cgroup
> > v1) in memory.stat.
> >
> > > > > > Some workloads need to read up-to-date stats as feedback to actions
> > > > > > (e.g. after proactive reclaim, or for userspace OOM killing purposes),
> > > > > > and reading such stale stats causes regressions or misbehavior by
> > > > > > userspace.
> > > > >
> > > > > Please tell us more about those, and why all others that do not
> > > > > require such precision should pay that price as well.
> > > >
> > > > Everyone used to pay this price though, and no one used to complain.
> > >
> > > Right, and then the overhead was reduced, and now you want to bring
> > > it back, and that will be seen as a regression. It doesn't really matter
> > > what the overhead used to be. People always care when something gets
> > > slower.
> >
> > People also care when something gets less accurate :)
>
> Accuracy will never be 100%. We have to carefully balance between
> accuracy and overhead. So far we haven't heard about how much inaccuracy
> you are getting. Numbers help!

Very good question; I should have added numbers from the beginning to
clarify the significance of the problem. To easily produce numbers I will
use another use case of ours that relies on having fresh stats: proactive
reclaim. Proactive reclaim usually operates in a feedback loop where it
requests some reclaim, queries the stats, and decides how to operate
based on that (e.g. fall back for a while).

When running a test that proactively reclaims some memory and expects to
see the memory swapped, without this patch we see significant inaccuracy.
In some failure instances we expect ~2000 pages to be swapped but we only
find ~1200. This is observed on machines with hundreds of cpus, where the
problem is most noticeable. This is a huge difference. Keep in mind that
the inaccuracy would probably be even worse in a production environment
if the system is under enough pressure (e.g. the periodic flusher is
late).

For both approach A (wait until the flusher finishes, then exit, i.e.
this patch) and approach B (wait until the flusher finishes, then flush,
i.e. the mutex approach), I stop seeing this failure in the proactive
reclaim test and the stats are accurate.

I have v2 ready, implementing approach B with the mutex, ready to fire;
just say the word :)

> In any case I do get the argument about consistency within a subtree
> (children data largely not matching parents'). Examples like that would
> be really helpful as well. If that is indeed the case then I would
> consider it much more serious than accuracy, which is always problematic
> (100ms of an actively allocating context can ruin your just-read numbers
> and there is no way around that without stopping the world).

100% agreed. It's more difficult to get testing results for this, but
that can easily be the case when we have no idea how much is flushed when
we return from mem_cgroup_flush_stats().

> Last note, for /proc/vmstat we have /proc/sys/vm/stat_refresh to trigger
> an explicit refresh. For those users who really need more accurate
> numbers we might consider an interface like that. Or allow writes to the
> stat file and do the refresh in the write handler.

This wouldn't be my first option, but if that's the only way to get
accurate stats I'll take it.

Keep in mind that the normal stats read path will always try to refresh;
it's just that it will often skip refreshing due to an
implementation-specific race. So having an interface for an explicit
flush might be too implementation-specific, especially if the race
disappears later and the interface is no longer needed.

Having said that, I am not opposed to this if it's the only way forward
for accurate stats, but I would rather have the stat reads be always
accurate unless a regression is noticed.

> --
> Michal Hocko
> SUSE Labs
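For illustration, the userspace side of such a proactive reclaim feedback
loop might look roughly like the sketch below. This is an assumption of
what such a tool does, not code from this thread; read_anon_bytes() and
the cgroup directory layout are illustrative only.

#include <stdio.h>
#include <string.h>

/* Parse the "anon" counter out of a cgroup's memory.stat file. */
static long read_anon_bytes(const char *cgroup_dir)
{
	char path[256], key[64];
	long val;
	FILE *f;

	snprintf(path, sizeof(path), "%s/memory.stat", cgroup_dir);
	f = fopen(path, "r");
	if (!f)
		return -1;
	/* memory.stat is a sequence of "key value" lines. */
	while (fscanf(f, "%63s %ld", key, &val) == 2) {
		if (!strcmp(key, "anon")) {
			fclose(f);
			return val;
		}
	}
	fclose(f);
	return -1;
}

If the read that follows a reclaim request hits the "skip if someone else
is flushing" path, the loop bases its next decision on numbers that can
be seconds old, which is exactly the failure mode described above.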
On Fri 11-08-23 12:02:48, Yosry Ahmed wrote:
[...]
> I am not sure I understand.
>
> The problem is that when you skip because someone else is flushing, there
> is a chance that the stats we care about haven't been flushed since the
> last time the periodic flusher ran, which is supposed to be ~2 seconds
> ago but may be longer depending on how busy the workqueue is.

Yes, this is clear. You simply get _some_ snapshot of the past.

> When you block until the flusher finishes, the stats are being refreshed
> as you wait. So the stats are not getting more outdated as you wait in
> the general case (unless your cgroup was flushed first and you're waiting
> for others to be flushed).
> [Let's call this approach A]

Yes, but the amount of waiting is also nondeterministic, and even after
you have waited, your stats might already be outdated, depending on how
quickly somebody allocates. That was my point.

> Furthermore, with the implementation you suggested using a mutex, we will
> wait until the ongoing flush is completed, then we will grab the mutex
> and do a flush ourselves.

Flushing would be mostly unnecessary, as somebody has just flushed
everything. The only point of the mutex is to remove the super ugly
busy-wait-for-sleepable-context construct.

[...]

> When running a test that proactively reclaims some memory and expects to
> see the memory swapped, without this patch we see significant inaccuracy.
> In some failure instances we expect ~2000 pages to be swapped but we only
> find ~1200.

That difference is 3MB of memory. What is the precision you are
operating on?

> This is observed on machines with hundreds of cpus, where the problem is
> most noticeable. This is a huge difference. Keep in mind that the
> inaccuracy would probably be even worse in a production environment if
> the system is under enough pressure (e.g. the periodic flusher is late).
>
> For both approach A (wait until the flusher finishes, then exit, i.e.
> this patch) and approach B (wait until the flusher finishes, then flush,
> i.e. the mutex approach), I stop seeing this failure in the proactive
> reclaim test and the stats are accurate.
>
> I have v2 ready, implementing approach B with the mutex, ready to fire;
> just say the word :)
>
> > In any case I do get the argument about consistency within a subtree
> > (children data largely not matching parents'). Examples like that would
> > be really helpful as well. If that is indeed the case then I would
> > consider it much more serious than accuracy, which is always problematic
> > (100ms of an actively allocating context can ruin your just-read numbers
> > and there is no way around that without stopping the world).
>
> 100% agreed. It's more difficult to get testing results for this, but
> that can easily be the case when we have no idea how much is flushed when
> we return from mem_cgroup_flush_stats().
>
> > Last note, for /proc/vmstat we have /proc/sys/vm/stat_refresh to trigger
> > an explicit refresh. For those users who really need more accurate
> > numbers we might consider an interface like that. Or allow writes to the
> > stat file and do the refresh in the write handler.
>
> This wouldn't be my first option, but if that's the only way to get
> accurate stats I'll take it.

To be honest, this would be my preferred option, for two reasons: a) we
do not want to guarantee too much on the precision front, because that
would just make maintainability much harder, with different people having
different opinions on how much precision is enough, and b) it makes the
rarer (precision-needed) case the special case rather than the default.

> Keep in mind that the normal stats read path will always try to refresh;
> it's just that it will often skip refreshing due to an
> implementation-specific race. So having an interface for an explicit
> flush might be too implementation-specific, especially if the race
> disappears later and the interface is no longer needed.

That doesn't really matter, because from the userspace POV it is really
not important how the whole thing is implemented or whether the interface
actually blocks. Userspace simply has to account for blocking either way.
It just needs coherent, up-to-date-at-the-flush semantics.
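To make the write-then-read semantics concrete, the write handler Michal
alludes to could be as simple as the following sketch. The handler name
and its wiring into the memory.stat cftype are assumptions;
do_flush_stats(bool) refers to the variant from the patch under
discussion.

static ssize_t memory_stat_write(struct kernfs_open_file *of, char *buf,
				 size_t nbytes, loff_t off)
{
	/* Writing any value forces a full, waiting flush. */
	do_flush_stats(true);
	return nbytes;
}

Precision-sensitive readers would then write to memory.stat before
reading it, while everyone else keeps today's cheap (possibly stale)
read.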
<snip>
> > I am not sure I understand.
> >
> > The problem is that when you skip because someone else is flushing,
> > there is a chance that the stats we care about haven't been flushed
> > since the last time the periodic flusher ran, which is supposed to be
> > ~2 seconds ago but may be longer depending on how busy the workqueue is.
>
> Yes, this is clear. You simply get _some_ snapshot of the past.
>
> > When you block until the flusher finishes, the stats are being
> > refreshed as you wait. So the stats are not getting more outdated as
> > you wait in the general case (unless your cgroup was flushed first and
> > you're waiting for others to be flushed).
> > [Let's call this approach A]
>
> Yes, but the amount of waiting is also nondeterministic, and even after
> you have waited, your stats might already be outdated, depending on how
> quickly somebody allocates. That was my point.

Right, we are just trying to minimize the staleness window.

> > Furthermore, with the implementation you suggested using a mutex, we
> > will wait until the ongoing flush is completed, then we will grab the
> > mutex and do a flush ourselves.
>
> Flushing would be mostly unnecessary, as somebody has just flushed
> everything. The only point of the mutex is to remove the super ugly
> busy-wait-for-sleepable-context construct.

Right, but it also has the (arguably) nice double-flush effect, which
also minimizes the staleness window.

> [...]
> > When running a test that proactively reclaims some memory and expects
> > to see the memory swapped, without this patch we see significant
> > inaccuracy. In some failure instances we expect ~2000 pages to be
> > swapped but we only find ~1200.
>
> That difference is 3MB of memory. What is the precision you are
> operating on?

I am not concerned with MBs, I am concerned with the ratio. On a large
system with hundreds of cpus there are larger chances of missing updates
on a bunch of cpus, which might add up to a lot.

[...]

> To be honest, this would be my preferred option, for two reasons: a) we
> do not want to guarantee too much on the precision front, because that
> would just make maintainability much harder, with different people
> having different opinions on how much precision is enough, and b) it
> makes the rarer (precision-needed) case the special case rather than
> the default.

How about we go with the proposed approach in this patch (or the mutex
approach, as it's much cleaner), and if someone complains about slow
reads, we revert the change and introduce the refresh API? We might just
get away with making all reads accurate and avoid the hassle of updating
some userspace readers to do write-then-read. We don't know for sure that
something will regress.

What do you think?
Hi all,

(Sorry for the late response; I was away.)

On Fri, Aug 11, 2023 at 1:40 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
[...]
> How about we go with the proposed approach in this patch (or the mutex
> approach, as it's much cleaner), and if someone complains about slow
> reads, we revert the change and introduce the refresh API? We might just
> get away with making all reads accurate and avoid the hassle of updating
> some userspace readers to do write-then-read. We don't know for sure that
> something will regress.
>
> What do you think?

Actually, I am with Michal on this one. Since I see multiple regression
reports for reading the stats, I am inclined towards rate-limiting the
sync stats flushing from user-readable interfaces (through
mem_cgroup_flush_stats_ratelimited()) and providing a separate interface,
as suggested by Michal, to explicitly flush the stats for users OK with
the cost. Since we flush the stats every 2 seconds, most users should be
fine, and the users who care about accuracy can pay for it.
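For reference, a simplified sketch of the rate-limited helper Shakeel
refers to, modeled on mem_cgroup_flush_stats_ratelimited() as it existed
at the time (slightly abridged, so treat the exact body as an
approximation):

void mem_cgroup_flush_stats_ratelimited(void)
{
	/* Skip the flush entirely unless the periodic (2s) flusher is overdue. */
	if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
		mem_cgroup_flush_stats();
}

Routing the userspace-facing read paths through this helper would bound
how often readers can trigger a flush at all, at the cost of readings
being up to one flush period stale.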
On Fri, Aug 11, 2023 at 7:08 PM Shakeel Butt <shakeelb@google.com> wrote:
>
[...]
> Actually, I am with Michal on this one. Since I see multiple regression
> reports for reading the stats, I am inclined towards rate-limiting the
> sync stats flushing from user-readable interfaces (through
> mem_cgroup_flush_stats_ratelimited()) and providing a separate interface,
> as suggested by Michal, to explicitly flush the stats for users OK with
> the cost. Since we flush the stats every 2 seconds, most users should be
> fine, and the users who care about accuracy can pay for it.

I am worried that writing to a stat file to flush and then reading will
increase the staleness window we are trying to reduce here. Would it be
acceptable to add a separate interface to explicitly read flushed stats
without having to write first? If the distinction disappears in the
future, we can just short-circuit both interfaces.
On Fri, Aug 11, 2023 at 7:12 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
[...]
> I am worried that writing to a stat file to flush and then reading will
> increase the staleness window we are trying to reduce here. Would it be
> acceptable to add a separate interface to explicitly read flushed stats
> without having to write first? If the distinction disappears in the
> future, we can just short-circuit both interfaces.

What is the acceptable staleness time window for your case? It is hard to
imagine that a write+read will always be worse than just a read. Even the
proposed patch can have an unintended, larger-than-expected staleness
window due to some processing on return-to-userspace or some scheduling
delay.
On Fri, Aug 11, 2023 at 7:29 PM Shakeel Butt <shakeelb@google.com> wrote:
>
[...]
> What is the acceptable staleness time window for your case? It is hard to
> imagine that a write+read will always be worse than just a read. Even the
> proposed patch can have an unintended, larger-than-expected staleness
> window due to some processing on return-to-userspace or some scheduling
> delay.

Maybe I am worrying too much; we can just go with writing to memory.stat
for an explicit stats refresh.

Do we still want to go with the mutex approach Michal suggested for
do_flush_stats(), to support either waiting for ongoing flushes
(mutex_lock) or skipping (mutex_trylock)?
On Fri, Aug 11, 2023 at 7:36 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
[...]
> Maybe I am worrying too much; we can just go with writing to memory.stat
> for an explicit stats refresh.
>
> Do we still want to go with the mutex approach Michal suggested for
> do_flush_stats(), to support either waiting for ongoing flushes
> (mutex_lock) or skipping (mutex_trylock)?

I would say keep that as a separate patch.
On Fri 11-08-23 19:48:14, Shakeel Butt wrote:
>
[...]
> > Do we still want to go with the mutex approach Michal suggested for
> > do_flush_stats(), to support either waiting for ongoing flushes
> > (mutex_lock) or skipping (mutex_trylock)?
>
> I would say keep that as a separate patch.

Separate patches would be better, but please make the mutex conversion
first. We really do not want any busy waiting that depends on a sleep to
be exported to userspace. That is just a no-go.

Thanks!
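A sketch of the mutex conversion being requested (an assumed shape, not a
posted patch): accuracy-sensitive callers block and then flush
themselves, while latency-sensitive callers keep the trylock-and-skip
behavior. All symbols other than stats_flush_mutex are from the existing
code.

static DEFINE_MUTEX(stats_flush_mutex);

static void do_flush_stats(bool wait)
{
	if (wait)
		mutex_lock(&stats_flush_mutex);
	else if (!mutex_trylock(&stats_flush_mutex))
		return;	/* an ongoing flush will do the work; skip */

	WRITE_ONCE(flush_next_time, jiffies_64 + 2 * FLUSH_TIME);
	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
	mutex_unlock(&stats_flush_mutex);
}

Sleeping waiters are then handled by the mutex itself rather than by a
cond_resched()/cpu_relax() loop, which is the busy wait Michal objects
to.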
On Sat, Aug 12, 2023 at 1:35 AM Michal Hocko <mhocko@suse.com> wrote:
>
[...]
> Separate patches would be better, but please make the mutex conversion
> first. We really do not want any busy waiting that depends on a sleep to
> be exported to userspace. That is just a no-go.

+tj@kernel.org

That makes sense. Taking a step back, though, and considering there have
been other complaints about unified flushing causing expensive reads from
memory.stat [1], I am wondering if we should tackle the fundamental
problem.

We have a single global rstat lock for flushing, which protects the
global per-cgroup counters as far as I understand. A single lock means a
lot of contention, which is why we implemented unified flushing on the
memcg side in the first place, where we only let one flusher operate and
everyone else skips, but that flusher needs to flush the entire tree.
This can be unnecessarily expensive (see [1]), and to avoid how expensive
it is, we sacrifice accuracy (which is what this patch is about).

I am exploring breaking down that lock into per-cgroup locks, where a
flusher acquires locks in a top-down fashion. This allows for some
concurrency in flushing and makes unified flushing unnecessary. If we
retire unified flushing, we fix both accuracy and expensive reads at the
same time, while not sacrificing performance for concurrent in-kernel
flushers.

What do you think? I am prototyping something now and running some tests;
it seems promising and simple-ish (unless I am missing a big correctness
issue).

[1] https://lore.kernel.org/lkml/CABWYdi3YNwtPDwwJWmCO-ER50iP7CfbXkCep5TKb-9QzY-a40A@mail.gmail.com/

> Thanks!
> --
> Michal Hocko
> SUSE Labs
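A rough illustration of that direction (purely exploratory; the per-memcg
flush_lock member is hypothetical, and today cgroup_rstat_flush() still
serializes on the single global rstat lock internally):

static void mem_cgroup_flush_stats_subtree(struct mem_cgroup *memcg)
{
	/* Serialize only flushers of this subtree, not of the whole tree. */
	mutex_lock(&memcg->flush_lock);		/* hypothetical member */
	cgroup_rstat_flush(memcg->css.cgroup);	/* flushes just this subtree */
	mutex_unlock(&memcg->flush_lock);
}

A stat read would then pay only for the subtree it is actually reading,
and unified flushing, along with the skipping that motivated this patch,
would no longer be needed.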
On Mon, Aug 14, 2023 at 5:48 PM Tejun Heo <tj@kernel.org> wrote:
>
> Hello,
>
> On Mon, Aug 14, 2023 at 05:39:15PM -0700, Yosry Ahmed wrote:
> > I believe dropping unified flushing, if possible of course, may fix
> > both problems.
>
> Yeah, flushing the whole tree for every stat read will push up the big-O
> complexity of the operation. It shouldn't be too bad, because only what's
> updated since the last read will need flushing, but if you have a really
> big machine with a lot of constantly active cgroups, you're still gonna
> feel it. So, yeah, drop that and switch the global lock to a mutex and we
> should all be good?

I hope so, but I am not sure. Unified flushing was added initially to
mitigate a thundering herd problem from concurrent in-kernel flushers
(e.g. concurrent reclaims), but back then flushing was atomic, so we had
to keep the spinlock held for a long time. I think it should be better
now, but I am hoping Shakeel will chime in, since he added unified
flushing originally.

We also need to agree on what to do about stats_flush_threshold and
flush_next_time, since they're both global now (because all flushing is
global).

> Thanks.
>
> --
> tejun
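For context, the globals in question, simplified from mm/memcontrol.c
around this time (recalled from the code rather than quoted from this
thread, so treat the exact set as an approximation): one aggregate update
counter, one "ongoing" flag, and one deadline shared by every cgroup,
which is exactly what stops making sense once flushing becomes
per-subtree.

static DEFINE_PER_CPU(unsigned int, stats_updates);
static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
static u64 flush_next_time;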
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e041ba827e59..38e227f7127d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -630,7 +630,7 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 		/*
 		 * If stats_flush_threshold exceeds the threshold
 		 * (>num_online_cpus()), cgroup stats update will be triggered
-		 * in __mem_cgroup_flush_stats(). Increasing this var further
+		 * in mem_cgroup_flush_stats(). Increasing this var further
 		 * is redundant and simply adds overhead in atomic update.
 		 */
 		if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
@@ -639,17 +639,24 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	}
 }
 
-static void do_flush_stats(void)
+static void do_flush_stats(bool full)
 {
+	if (!atomic_read(&stats_flush_ongoing) &&
+	    !atomic_xchg(&stats_flush_ongoing, 1))
+		goto flush;
+
 	/*
-	 * We always flush the entire tree, so concurrent flushers can just
-	 * skip. This avoids a thundering herd problem on the rstat global lock
-	 * from memcg flushers (e.g. reclaim, refault, etc).
+	 * We always flush the entire tree, so concurrent flushers can choose to
+	 * skip if accuracy is not critical. Otherwise, wait for the ongoing
+	 * flush to complete. This avoids a thundering herd problem on the rstat
+	 * global lock from memcg flushers (e.g. reclaim, refault, etc).
 	 */
-	if (atomic_read(&stats_flush_ongoing) ||
-	    atomic_xchg(&stats_flush_ongoing, 1))
-		return;
-
+	while (full && atomic_read(&stats_flush_ongoing) == 1) {
+		if (!cond_resched())
+			cpu_relax();
+	}
+	return;
+flush:
 	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
 
 	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
@@ -661,7 +668,12 @@ static void do_flush_stats(void)
 void mem_cgroup_flush_stats(void)
 {
 	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
-		do_flush_stats();
+		do_flush_stats(false);
+}
+
+static void mem_cgroup_flush_stats_full(void)
+{
+	do_flush_stats(true);
 }
 
 void mem_cgroup_flush_stats_ratelimited(void)
@@ -676,7 +688,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
 	 * Always flush here so that flushing in latency-sensitive paths is
 	 * as cheap as possible.
 	 */
-	do_flush_stats();
+	do_flush_stats(false);
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
@@ -1576,7 +1588,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 	 *
 	 * Current memory state:
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats_full();
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		u64 size;
@@ -4018,7 +4030,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
 	int nid;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats_full();
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
 		seq_printf(m, "%s=%lu", stat->name,
@@ -4093,7 +4105,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 
 	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats_full();
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
@@ -6610,7 +6622,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
 	int i;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats_full();
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		int nid;