[v2,1/2] mm: memcg: refactor page state unit helpers

Message ID 20230922175741.635002-2-yosryahmed@google.com
State New
Series: mm: memcg: fix tracking of pending stats updates values

Commit Message

Yosry Ahmed Sept. 22, 2023, 5:57 p.m. UTC
  memcg_page_state_unit() is currently used to identify the unit of a
memcg state item so that all stats in memory.stat are in bytes. However,
it lies about the units of WORKINGSET_* stats. These stats actually
represent pages, but we present them to userspace as a scalar number of
events. In retrospect, maybe those stats should have been memcg "events"
rather than memcg "state".

In preparation for using memcg_page_state_unit() for other purposes that
need to know the truthful units of different stat items, break it down
into two helpers:
- memcg_page_state_unit() returns the actual unit of the item.
- memcg_page_state_output_unit() returns the unit used for output.

Use the latter instead of the former in memcg_page_state_output() and
lruvec_page_state_output(). While we are at it, let's show cgroup v1
some love and add memcg_page_state_local_output() for consistency.

No functional change intended.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/memcontrol.c | 44 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 11 deletions(-)
  

Comments

Johannes Weiner Oct. 3, 2023, 1:03 p.m. UTC | #1
On Fri, Sep 22, 2023 at 05:57:39PM +0000, Yosry Ahmed wrote:
> memcg_page_state_unit() is currently used to identify the unit of a
> memcg state item so that all stats in memory.stat are in bytes. However,
> it lies about the units of WORKINGSET_* stats. These stats actually
> represent pages, but we present them to userspace as a scalar number of
> events. In retrospect, maybe those stats should have been memcg "events"
> rather than memcg "state".
> 
> In preparation for using memcg_page_state_unit() for other purposes that
> need to know the truthful units of different stat items, break it down
> into two helpers:
> - memcg_page_state_unit() returns the actual unit of the item.
> - memcg_page_state_output_unit() returns the unit used for output.
> 
> Use the latter instead of the former in memcg_page_state_output() and
> lruvec_page_state_output(). While we are at it, let's show cgroup v1
> some love and add memcg_page_state_local_output() for consistency.
> 
> No functional change intended.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>

That's a nice cleanup in itself.

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
  
Michal Koutný Oct. 3, 2023, 6:11 p.m. UTC | #2
On Fri, Sep 22, 2023 at 05:57:39PM +0000, Yosry Ahmed <yosryahmed@google.com> wrote:
> memcg_page_state_unit() is currently used to identify the unit of a
> memcg state item so that all stats in memory.stat are in bytes. However,
> it lies about the units of WORKINGSET_* stats. These stats actually
> represent pages, but we present them to userspace as a scalar number of
> events. In retrospect, maybe those stats should have been memcg "events"
> rather than memcg "state".

Why isn't it possible to move WORKINGSET_* stats under the events now?
(Instead of using internal and external units.)

Thanks,
Michal
  
Yosry Ahmed Oct. 3, 2023, 7:47 p.m. UTC | #3
On Tue, Oct 3, 2023 at 11:11 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> On Fri, Sep 22, 2023 at 05:57:39PM +0000, Yosry Ahmed <yosryahmed@google.com> wrote:
> > memcg_page_state_unit() is currently used to identify the unit of a
> > memcg state item so that all stats in memory.stat are in bytes. However,
> > it lies about the units of WORKINGSET_* stats. These stats actually
> > represent pages, but we present them to userspace as a scalar number of
> > events. In retrospect, maybe those stats should have been memcg "events"
> > rather than memcg "state".
>
> Why isn't it possible to move WORKINGSET_* stats under the events now?
> (Instead of using internal and external units.)

Those constants are shared with code outside of memcg, namely enum
node_stat_item and enum vm_event_item, and IIUC they are used
differently outside of memcg. Did I miss something?

>
> Thanks,
> Michal
  
Michal Koutný Oct. 4, 2023, 9:02 a.m. UTC | #4
On Tue, Oct 03, 2023 at 12:47:25PM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> Those constants are shared with code outside of memcg, namely enum
> node_stat_item and enum vm_event_item, and IIUC they are used
> differently outside of memcg. Did I miss something?

The difference is not big, e.g.
  mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
could be
  __count_memcg_events(
    container_of(lruvec, struct mem_cgroup_per_node, lruvec)->memcg,
    WORKINGSET_ACTIVATE_BASE + type, delta
  );

Yes, it would mean transferring WORKINGSET_* items from enum
node_stat_item to enum vm_event_item.
IOW, I don't know what is the effective difference between
mod_memcg_lruvec_state() and count_memcg_events().
Is it per-memcg vs per-memcg-per-node resolution?
(Is _that_ read by workingset mechanism?)

Thanks,
Michal
  
Yosry Ahmed Oct. 4, 2023, 4:58 p.m. UTC | #5
On Wed, Oct 4, 2023 at 2:02 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> On Tue, Oct 03, 2023 at 12:47:25PM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> > Those constants are shared with code outside of memcg, namely enum
> > node_stat_item and enum vm_event_item, and IIUC they are used
> > differently outside of memcg. Did I miss something?
>
> The difference is not big, e.g.
>   mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
> could be
>   __count_memcg_events(
>     container_of(lruvec, struct mem_cgroup_per_node, lruvec)->memcg,
>     WORKINGSET_ACTIVATE_BASE + type, delta
>   );
>
> Yes, it would mean transferring WORKINGSET_* items from enum
> node_stat_item to enum vm_event_item.
> IOW, I don't know what is the effective difference between
> mod_memcg_lruvec_state() and count_memcg_events().
> Is it per-memcg vs per-memcg-per-node resolution?
> (Is _that_ read by workingset mechanism?)

Even if it is not read, I think it is exposed in memory.numa_stat, right?

Outside of memcg code, if you look at vmstat_start(), you will see
that the items in enum vm_event_item are handled differently (see
all_vm_events()) when reading vmstat. I don't think we can just move
it, unfortunately.

>
> Thanks,
> Michal
  
Johannes Weiner Oct. 4, 2023, 6:36 p.m. UTC | #6
On Wed, Oct 04, 2023 at 11:02:11AM +0200, Michal Koutný wrote:
> On Tue, Oct 03, 2023 at 12:47:25PM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> > Those constants are shared with code outside of memcg, namely enum
> > node_stat_item and enum vm_event_item, and IIUC they are used
> > differently outside of memcg. Did I miss something?
> 
> The difference is not big, e.g.
>   mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
> could be
>   __count_memcg_events(
>     container_of(lruvec, struct mem_cgroup_per_node, lruvec)->memcg,
>     WORKINGSET_ACTIVATE_BASE + type, delta
>   );
> 
> Yes, it would mean transferring WORKINGSET_* items from enum
> node_stat_item to enum vm_event_item.
> IOW, I don't know what is the effective difference between
> mod_memcg_lruvec_state() and count_memcg_events().
> Is it per-memcg vs per-memcg-per-node resolution?

Yes, it's because of node resolution which event counters generally
don't have. Some of the refault events influence node-local reclaim
decisions, see mm/vmscan.c::snapshot_refaults().

There are a few other event counters in the stat array that people
thought would be useful to have split out in
/sys/devices/system/node/nodeN/vmstat to understand numa behavior
better.

It's a bit messy.

Some events would be useful to move to 'stats' for the numa awareness,
such as the allocator stats and reclaim activity.

Some events would be useful to move to 'stats' for the numa awareness,
but don't have the zone resolution required by them, such as
kswapd/kcompactd wakeups.

Some events aren't numa specific, such as oom kills, drop_pagecache.
  
Michal Koutný Oct. 5, 2023, 9:06 a.m. UTC | #7
On Wed, Oct 04, 2023 at 02:36:19PM -0400, Johannes Weiner <hannes@cmpxchg.org> wrote:
> Yes, it's because of node resolution which event counters generally
> don't have. Some of the refault events influence node-local reclaim
> decisions, see mm/vmscan.c::snapshot_refaults().
> 
> There are a few other event counters in the stat array that people
> thought would be useful to have split out in
> /sys/devices/system/node/nodeN/vmstat to understand numa behavior
> better.
> 
> It's a bit messy.
> 
> Some events would be useful to move to 'stats' for the numa awareness,
> such as the allocator stats and reclaim activity.
> 
> Some events would be useful to move to 'stats' for the numa awareness,
> but don't have the zone resolution required by them, such as
> kswapd/kcompactd wakeups.

Thanks for the enlightenment.

> Some events aren't numa specific, such as oom kills, drop_pagecache.

These are oddballs indeed. As with the normalization patchset these are
counted as PAGE_SIZE^W 1 error but they should rather be an infinite
error (to warrant a flush).

So my feedback to this series is:
- patch 1/2 -- creating two classes of units is a consequence of the unclear
  distinction between state and events (as in event=Δstate/Δt) and resolution
  (global vs per-node), so the better approach would be to tidy this up,
- patch 2/2 -- it could use the single unit class that exists; it'll bound
  the error of the printed numbers after all (and can be changed later
  depending on how it affects internal consumers).

My 0.02€,
Michal
  
Yosry Ahmed Oct. 5, 2023, 9:31 a.m. UTC | #8
On Thu, Oct 5, 2023 at 2:06 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> On Wed, Oct 04, 2023 at 02:36:19PM -0400, Johannes Weiner <hannes@cmpxchg.org> wrote:
> > Yes, it's because of node resolution which event counters generally
> > don't have. Some of the refault events influence node-local reclaim
> > decisions, see mm/vmscan.c::snapshot_refaults().
> >
> > There are a few other event counters in the stat array that people
> > thought would be useful to have split out in
> > /sys/devices/system/node/nodeN/vmstat to understand numa behavior
> > better.
> >
> > It's a bit messy.
> >
> > Some events would be useful to move to 'stats' for the numa awareness,
> > such as the allocator stats and reclaim activity.
> >
> > Some events would be useful to move to 'stats' for the numa awareness,
> > but don't have the zone resolution required by them, such as
> > kswapd/kcompactd wakeups.
>
> Thanks for the enlightenment.
>
> > Some events aren't numa specific, such as oom kills, drop_pagecache.
>
> These are oddballs indeed. As with the normalization patchset these are
> counted as PAGE_SIZE^W 1 error but they should rather be an infinite
> error (to warrant a flush).
>
> So my feedback to this series is:
> - patch 1/2 -- creating two classes of units is a consequence of the unclear
>   distinction between state and events (as in event=Δstate/Δt) and resolution
>   (global vs per-node), so the better approach would be to tidy this up,

I am not really sure what you mean here. I understand that this series
fixes the unit normalization for state but leaves events out of it.
Looking at the event items tracked by memcg in memcg_vm_event_stat, it
looks to me like most of them correspond roughly to a page's worth of
updates (all but the THP_* events). We don't track things like
OOM_KILL and DROP_PAGECACHE per memcg as far as I can tell.

Do you mean that we should add something similar to
memcg_page_state_unit() for events as well to get all of them right?
If yes, I think that should be easy to add, it would only special case
THP_* events.

Alternatively, I can add a comment above the call to
memcg_rstat_updated() in __count_memcg_events() explaining why we
don't normalize the event count for now.

> - patch 2/2 -- it could use the single unit class that exists; it'll bound
>   the error of the printed numbers after all (and can be changed later
>   depending on how it affects internal consumers).

This will mean that WORKINGSET_* state will become more stale. We will
need 4096 times as many updates as today to get a flush. These are used by
internal flushers (reclaim), and are exposed to userspace. I am not
sure we want to do that.

>
> My 0.02€,
> Michal
  
Michal Koutný Oct. 5, 2023, 4:30 p.m. UTC | #9
On Thu, Oct 05, 2023 at 02:31:03AM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> I am not really sure what you mean here.

My "vision" is to treat WORKINGSET_ entries as events.
That would mean implementing per-node tracking for vm_event_item
(costlier?).
That would mean node_stat_item and vm_event_item being effectively
equal, so they could be merged in one.
That would be the point to come up with a new classification based on use
cases (e.g. precision/timeliness requirements, state vs change
semantics).

(Do not take this as blocker of the patch 1/2, I rather used the
opportunity to discuss a greater possible cleanup.)

> We don't track things like OOM_KILL and DROP_PAGECACHE per memcg as
> far as I can tell.

Ah, good. (I forgot only subset of entries is relevant for memcgs.)

> This will mean that WORKINGSET_* state will become more stale. We will
> need 4096 times as many updates as today to get a flush. These are used by
> internal flushers (reclaim), and are exposed to userspace. I am not
> sure we want to do that.

snapshot_refaults() doesn't seem to follow a flush,
and
workingset_refault()'s flush doesn't seem to precede readers.

Is the flush misplaced or have I overlooked something?
(If the former, it seems to work good enough even with the current
flushing heuristics :-))


Michal
  
Yosry Ahmed Oct. 5, 2023, 5:30 p.m. UTC | #10
On Thu, Oct 5, 2023 at 9:30 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> On Thu, Oct 05, 2023 at 02:31:03AM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> > I am not really sure what you mean here.
>
> My "vision" is to treat WORKINGSET_ entries as events.
> That would mean implementing per-node tracking for vm_event_item
> (costlier?).
> That would mean node_stat_item and vm_event_item being effectively
> equal, so they could be merged in one.
> That would be situation to come up with new classification based on use
> cases (e.g. precision/timeliness requirements, state vs change
> semantics).
>
> (Do not take this as blocker of the patch 1/2, I rather used the
> opportunity to discuss a greater possible cleanup.)

Yeah ideally we can clean this up separately. I would be careful about
userspace exposure though. It seems like CONFIG_VM_EVENT_COUNTERS is
used to control tracking events and displaying them in vmstat, so
moving items between node_stat_item and vm_event_item (or merging
them) won't be easy.

>
> > We don't track things like OOM_KILL and DROP_PAGECACHE per memcg as
> > far as I can tell.
>
> Ah, good. (I forgot only subset of entries is relevant for memcgs.)
>
> > This will mean that WORKINGSET_* state will become more stale. We will
> > need 4096 times as many updates as today to get a flush. These are used by
> > internal flushers (reclaim), and are exposed to userspace. I am not
> > sure we want to do that.
>
> snapshot_refaults() doesn't seem to follow a flush,
> and
> workingset_refault()'s flush doesn't seem to precede readers.
>
> Is the flush misplaced or have I overlooked something?
> (If the former, it seems to work good enough even with the current
> flushing heuristics :-))

We flush in prepare_scan_count() before reading WORKINGSET_ACTIVATE_*
state. That flush also implicitly precedes every call to
snapshot_refaults(), which is unclear and not so robust, but we are
effectively flushing before snapshot_refaults() too.


>
>
> Michal
  
Andrew Morton Oct. 18, 2023, 7:27 p.m. UTC | #11
On Thu, 5 Oct 2023 10:30:36 -0700 Yosry Ahmed <yosryahmed@google.com> wrote:

> On Thu, Oct 5, 2023 at 9:30 AM Michal Koutný <mkoutny@suse.com> wrote:
> >
> > On Thu, Oct 05, 2023 at 02:31:03AM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> > > I am not really sure what you mean here.
> >
> > My "vision" is to treat WORKINGSET_ entries as events.
> > That would mean implementing per-node tracking for vm_event_item
> > (costlier?).
> > That would mean node_stat_item and vm_event_item being effectively
> > equal, so they could be merged in one.
> > That would be the point to come up with a new classification based on use
> > cases (e.g. precision/timeliness requirements, state vs change
> > semantics).
> >
> > (Do not take this as blocker of the patch 1/2, I rather used the
> > opportunity to discuss a greater possible cleanup.)
> 
> Yeah ideally we can clean this up separately. I would be careful about
> userspace exposure though. It seems like CONFIG_VM_EVENT_COUNTERS is
> used to control tracking events and displaying them in vmstat, so
> moving items between node_stat_item and vm_event_item (or merging
> them) won't be easy.

I like the word "separately".  This series has been in mm-unstable for
nearly a month, so I'll move it into mm-stable as-is.
  

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 927c64d3cbcb..308cc7353ef0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1535,7 +1535,7 @@  static const struct memory_stat memory_stats[] = {
 	{ "workingset_nodereclaim",	WORKINGSET_NODERECLAIM		},
 };
 
-/* Translate stat items to the correct unit for memory.stat output */
+/* The actual unit of the state item, not the same as the output unit */
 static int memcg_page_state_unit(int item)
 {
 	switch (item) {
@@ -1543,6 +1543,22 @@  static int memcg_page_state_unit(int item)
 	case MEMCG_ZSWAP_B:
 	case NR_SLAB_RECLAIMABLE_B:
 	case NR_SLAB_UNRECLAIMABLE_B:
+		return 1;
+	case NR_KERNEL_STACK_KB:
+		return SZ_1K;
+	default:
+		return PAGE_SIZE;
+	}
+}
+
+/* Translate stat items to the correct unit for memory.stat output */
+static int memcg_page_state_output_unit(int item)
+{
+	/*
+	 * Workingset state is actually in pages, but we export it to userspace
+	 * as a scalar count of events, so special case it here.
+	 */
+	switch (item) {
 	case WORKINGSET_REFAULT_ANON:
 	case WORKINGSET_REFAULT_FILE:
 	case WORKINGSET_ACTIVATE_ANON:
@@ -1551,17 +1567,23 @@  static int memcg_page_state_unit(int item)
 	case WORKINGSET_RESTORE_FILE:
 	case WORKINGSET_NODERECLAIM:
 		return 1;
-	case NR_KERNEL_STACK_KB:
-		return SZ_1K;
 	default:
-		return PAGE_SIZE;
+		return memcg_page_state_unit(item);
 	}
 }
 
 static inline unsigned long memcg_page_state_output(struct mem_cgroup *memcg,
 						    int item)
 {
-	return memcg_page_state(memcg, item) * memcg_page_state_unit(item);
+	return memcg_page_state(memcg, item) *
+		memcg_page_state_output_unit(item);
+}
+
+static inline unsigned long memcg_page_state_local_output(
+		struct mem_cgroup *memcg, int item)
+{
+	return memcg_page_state_local(memcg, item) *
+		memcg_page_state_output_unit(item);
 }
 
 static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
@@ -4106,9 +4128,8 @@  static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
 
-		nr = memcg_page_state_local(memcg, memcg1_stats[i]);
-		seq_buf_printf(s, "%s %lu\n", memcg1_stat_names[i],
-			   nr * memcg_page_state_unit(memcg1_stats[i]));
+		nr = memcg_page_state_local_output(memcg, memcg1_stats[i]);
+		seq_buf_printf(s, "%s %lu\n", memcg1_stat_names[i], nr);
 	}
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_events); i++)
@@ -4134,9 +4155,9 @@  static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
 
-		nr = memcg_page_state(memcg, memcg1_stats[i]);
+		nr = memcg_page_state_output(memcg, memcg1_stats[i]);
 		seq_buf_printf(s, "total_%s %llu\n", memcg1_stat_names[i],
-			   (u64)nr * memcg_page_state_unit(memcg1_stats[i]));
+			       (u64)nr);
 	}
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_events); i++)
@@ -6614,7 +6635,8 @@  static int memory_stat_show(struct seq_file *m, void *v)
 static inline unsigned long lruvec_page_state_output(struct lruvec *lruvec,
 						     int item)
 {
-	return lruvec_page_state(lruvec, item) * memcg_page_state_unit(item);
+	return lruvec_page_state(lruvec, item) *
+		memcg_page_state_output_unit(item);
 }
 
 static int memory_numa_stat_show(struct seq_file *m, void *v)