[v3,1/5] mm: memcg: change flush_next_time to flush_last_time

Message ID 20231116022411.2250072-2-yosryahmed@google.com
State New
Series: mm: memcg: subtree stats flushing and thresholds

Commit Message

Yosry Ahmed Nov. 16, 2023, 2:24 a.m. UTC
  flush_next_time is an inaccurate name. It's not the next time that
periodic flushing will happen; rather, it's the next time that
ratelimited flushing can happen if the periodic flusher is late.

Simplify its semantics by just storing the timestamp of the last flush
instead, flush_last_time. Move the 2*FLUSH_TIME addition to
mem_cgroup_flush_stats_ratelimited(), and add a comment explaining it.
This way, all the ratelimiting semantics live in one place.

No functional change intended.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Tested-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
---
 mm/memcontrol.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
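
A quick way to sanity-check the "no functional change" claim is to compare the old and new ratelimit conditions side by side. The following is a minimal userspace sketch, not kernel code and not part of the patch; HZ is an assumed tick rate, and jiffies_64 and time_after64() are stubbed locally to mirror the kernel's u64 jiffies arithmetic:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define HZ		250UL			/* assumed tick rate for the demo */
#define FLUSH_TIME	(2UL * HZ)

/* local stand-in for the kernel's time_after64(a, b) */
static int time_after64(uint64_t a, uint64_t b)
{
	return (int64_t)(b - a) < 0;
}

int main(void)
{
	/* pretend the last flush ran at time 0 */
	uint64_t flush_last_time = 0;
	uint64_t flush_next_time = flush_last_time + 2 * FLUSH_TIME;
	uint64_t jiffies_64;

	for (jiffies_64 = 0; jiffies_64 < 10 * FLUSH_TIME; jiffies_64++) {
		/* old check (flush_next_time) vs. new check (flush_last_time) */
		int old_check = time_after64(jiffies_64, flush_next_time);
		int new_check = time_after64(jiffies_64,
					     flush_last_time + 2 * FLUSH_TIME);

		assert(old_check == new_check);
	}
	printf("old and new ratelimit checks agree\n");
	return 0;
}

Both conditions fire only once jiffies_64 has advanced more than 2*FLUSH_TIME past the moment the stats were last flushed, so only the bookkeeping changes, not the behavior.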
  

Comments

Shakeel Butt Nov. 17, 2023, 6:22 p.m. UTC | #1
On Thu, Nov 16, 2023 at 02:24:06AM +0000, Yosry Ahmed wrote:
> flush_next_time is an inaccurate name. It's not the next time that
> periodic flushing will happen; rather, it's the next time that
> ratelimited flushing can happen if the periodic flusher is late.
> 
> Simplify its semantics by just storing the timestamp of the last flush
> instead, flush_last_time. Move the 2*FLUSH_TIME addition to
> mem_cgroup_flush_stats_ratelimited(), and add a comment explaining it.
> This way, all the ratelimiting semantics live in one place.
> 
> No functional change intended.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> Tested-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>

Acked-by: Shakeel Butt <shakeelb@google.com>
  
Chris Li Nov. 17, 2023, 6:27 p.m. UTC | #2
Hi Yosry,

Acked-by: Chris Li <chrisl@kernel.org> (Google)

Chris

On Wed, Nov 15, 2023 at 6:24 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> flush_next_time is an inaccurate name. It's not the next time that
> periodic flushing will happen; rather, it's the next time that
> ratelimited flushing can happen if the periodic flusher is late.
>
> Simplify its semantics by just storing the timestamp of the last flush
> instead, flush_last_time. Move the 2*FLUSH_TIME addition to
> mem_cgroup_flush_stats_ratelimited(), and add a comment explaining it.
> This way, all the ratelimiting semantics live in one place.
>
> No functional change intended.
>
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> Tested-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> ---
>  mm/memcontrol.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 774bd6e21e278..18931d82f108f 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -593,7 +593,7 @@ static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
>  static DEFINE_PER_CPU(unsigned int, stats_updates);
>  static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
>  static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
> -static u64 flush_next_time;
> +static u64 flush_last_time;
>
>  #define FLUSH_TIME (2UL*HZ)
>
> @@ -653,7 +653,7 @@ static void do_flush_stats(void)
>             atomic_xchg(&stats_flush_ongoing, 1))
>                 return;
>
> -       WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
> +       WRITE_ONCE(flush_last_time, jiffies_64);
>
>         cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
>
> @@ -669,7 +669,8 @@ void mem_cgroup_flush_stats(void)
>
>  void mem_cgroup_flush_stats_ratelimited(void)
>  {
> -       if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
> +       /* Only flush if the periodic flusher is one full cycle late */
> +       if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
>                 mem_cgroup_flush_stats();
>  }
>
> --
> 2.43.0.rc0.421.g78406f8d94-goog
>
>
  

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 774bd6e21e278..18931d82f108f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -593,7 +593,7 @@  static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
 static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
-static u64 flush_next_time;
+static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
 
@@ -653,7 +653,7 @@  static void do_flush_stats(void)
 	    atomic_xchg(&stats_flush_ongoing, 1))
 		return;
 
-	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
+	WRITE_ONCE(flush_last_time, jiffies_64);
 
 	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
 
@@ -669,7 +669,8 @@  void mem_cgroup_flush_stats(void)
 
 void mem_cgroup_flush_stats_ratelimited(void)
 {
-	if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
+	/* Only flush if the periodic flusher is one full cycle late */
+	if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
 		mem_cgroup_flush_stats();
 }
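
As a worked reading of the new comment (not part of the patch): FLUSH_TIME is 2*HZ, i.e. two seconds' worth of jiffies, so the periodic flusher is due again two seconds after the last flush. The ratelimited path only flushes once jiffies_64 passes flush_last_time + 2*FLUSH_TIME, i.e. four seconds after the last flush, which means the periodic flusher has missed its slot by a full FLUSH_TIME cycle.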