From patchwork Tue Mar 28 06:16:35 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 75853
Date: Tue, 28 Mar 2023 06:16:35 +0000
Subject: [PATCH v1 6/9] memcg: sleep during flushing stats in safe contexts
Message-ID: <20230328061638.203420-7-yosryahmed@google.com>
In-Reply-To: <20230328061638.203420-1-yosryahmed@google.com>
References: <20230328061638.203420-1-yosryahmed@google.com>
From: Yosry Ahmed
To: Tejun Heo, Josef Bacik, Jens Axboe, Zefan Li, Johannes Weiner,
    Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton,
    Michal Koutný
Cc: Vasily Averin, cgroups@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org,
    Yosry Ahmed

Currently, all contexts that flush memcg stats do so with sleeping not
allowed. Some of these contexts are perfectly safe to sleep in, such as
reading cgroup files from userspace or the background periodic flusher.

Refactor the code to make mem_cgroup_flush_stats() non-atomic (aka
sleepable), and provide a separate atomic version. The atomic version is
used in reclaim, refault, writeback, and in mem_cgroup_usage(). All other
code paths are left to use the non-atomic version. This includes callbacks
for userspace reads and the periodic flusher.

Since refault is the only caller of mem_cgroup_flush_stats_ratelimited(),
this function is changed to call the atomic version of
mem_cgroup_flush_stats(). Reclaim and refault code paths are modified to
do non-atomic flushing in separate later patches -- so
mem_cgroup_flush_stats_ratelimited() will eventually become non-atomic.

Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
---
 include/linux/memcontrol.h |  5 ++++
 mm/memcontrol.c            | 58 ++++++++++++++++++++++++++++++++------
 mm/vmscan.c                |  2 +-
 3 files changed, 55 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index ac3f3b3a45e2..a4bc3910a2eb 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1037,6 +1037,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 void mem_cgroup_flush_stats(void);
+void mem_cgroup_flush_stats_atomic(void);
 void mem_cgroup_flush_stats_ratelimited(void);
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
@@ -1535,6 +1536,10 @@ static inline void mem_cgroup_flush_stats(void)
 {
 }
 
+static inline void mem_cgroup_flush_stats_atomic(void)
+{
+}
+
 static inline void mem_cgroup_flush_stats_ratelimited(void)
 {
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 64ff33e02c96..57e8cbf701f3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -634,7 +634,7 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	}
 }
 
-static void __mem_cgroup_flush_stats(void)
+static bool mem_cgroup_pre_stats_flush(void)
 {
 	/*
 	 * We always flush the entire tree, so concurrent flushers can just
@@ -642,24 +642,57 @@ static void __mem_cgroup_flush_stats(void)
 	 * from memcg flushers (e.g. reclaim, refault, etc).
 	 */
 	if (atomic_xchg(&stats_flush_ongoing, 1))
-		return;
+		return false;
 
 	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
-	cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
+	return true;
+}
+
+static void mem_cgroup_post_stats_flush(void)
+{
 	atomic_set(&stats_flush_threshold, 0);
 	atomic_set(&stats_flush_ongoing, 0);
 }
 
-void mem_cgroup_flush_stats(void)
+static bool mem_cgroup_should_flush_stats(void)
 {
-	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
-		__mem_cgroup_flush_stats();
+	return atomic_read(&stats_flush_threshold) > num_online_cpus();
+}
+
+/* atomic functions, safe to call from any context */
+static void __mem_cgroup_flush_stats_atomic(void)
+{
+	if (mem_cgroup_pre_stats_flush()) {
+		cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
+		mem_cgroup_post_stats_flush();
+	}
+}
+
+void mem_cgroup_flush_stats_atomic(void)
+{
+	if (mem_cgroup_should_flush_stats())
+		__mem_cgroup_flush_stats_atomic();
 }
 
 void mem_cgroup_flush_stats_ratelimited(void)
 {
 	if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
-		mem_cgroup_flush_stats();
+		mem_cgroup_flush_stats_atomic();
+}
+
+/* non-atomic functions, only safe from sleepable contexts */
+static void __mem_cgroup_flush_stats(void)
+{
+	if (mem_cgroup_pre_stats_flush()) {
+		cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+		mem_cgroup_post_stats_flush();
+	}
+}
+
+void mem_cgroup_flush_stats(void)
+{
+	if (mem_cgroup_should_flush_stats())
+		__mem_cgroup_flush_stats();
 }
 
 static void flush_memcg_stats_dwork(struct work_struct *w)
@@ -3684,9 +3717,12 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
 		 * done from irq context; use stale stats in this case.
 		 * Arguably, usage threshold events are not reliable on the root
 		 * memcg anyway since its usage is ill-defined.
+		 *
+		 * Additionally, other call paths through memcg_check_events()
+		 * disable irqs, so make sure we are flushing stats atomically.
 		 */
 		if (in_task())
-			mem_cgroup_flush_stats();
+			mem_cgroup_flush_stats_atomic();
 		val = memcg_page_state(memcg, NR_FILE_PAGES) +
 			memcg_page_state(memcg, NR_ANON_MAPPED);
 		if (swap)
@@ -4609,7 +4645,11 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
 	struct mem_cgroup *parent;
 
-	mem_cgroup_flush_stats();
+	/*
+	 * wb_writeback() takes a spinlock and calls
+	 * wb_over_bg_thresh()->mem_cgroup_wb_stats(). Do not sleep.
+	 */
+	mem_cgroup_flush_stats_atomic();
 
 	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
 	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c1c5e8b24b8..a9511ccb936f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2845,7 +2845,7 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
 	 * Flush the memory cgroup stats, so that we read accurate per-memcg
 	 * lruvec stats for heuristics.
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats_atomic();
 
 	/*
 	 * Determine the scan balance between anon and file LRUs.
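
For readers skimming the diff, here is a minimal illustrative sketch (not part
of the patch) of the calling convention the split establishes: the plain flush
may sleep, the _atomic variant is for contexts where sleeping is not allowed.
Only the mem_cgroup_flush_stats*() names come from the patch; the two example
callers below are hypothetical.

/* Sleepable context, e.g. a cgroup file read callback: plain flush may sleep. */
static int example_memcg_stat_show(struct seq_file *m, void *v)
{
	mem_cgroup_flush_stats();
	/* ... read memcg_page_state() values and seq_printf() them ... */
	return 0;
}

/* Context that cannot sleep, e.g. stats read under a spinlock. */
static void example_memcg_stat_read_locked(spinlock_t *lock)
{
	spin_lock(lock);
	mem_cgroup_flush_stats_atomic();	/* must not sleep here */
	/* ... read memcg_page_state() values ... */
	spin_unlock(lock);
}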