From patchwork Tue Oct 17 23:21:48 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 154591
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
    sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
    shuah@kernel.org
Subject: [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking
Date: Tue, 17 Oct 2023 16:21:48 -0700
Message-Id: <20231017232152.2605440-2-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>

The interface of list_lru is based on the assumption that objects are
allocated on the correct node/memcg. This change introduces the
possibility to explicitly specify the NUMA node and memcg when adding
and removing objects, so that users of list_lru can track the
node/memcg of items outside of the list_lru itself - as in zswap,
where the allocations can be made by kswapd for data that is charged
to a different cgroup.

Signed-off-by: Nhat Pham
---
 include/linux/list_lru.h | 38 +++++++++++++++++++++++++++++++++++
 mm/list_lru.c            | 43 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 76 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index b35968ee9fb5..0f5f39cacbbb 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
  */
 bool list_lru_add(struct list_lru *lru, struct list_head *item);
 
+/**
+ * __list_lru_add: add an element to a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be added.
+ * @memcg: the cgroup of the sublist to add the item to.
+ * @nid: the node id of the sublist to add the item to.
+ *
+ * This function is similar to list_lru_add(), but it allows the caller to
+ * specify the sublist to which the item should be added. This can be useful
+ * when the list_head node is not necessarily in the same cgroup and NUMA node
+ * as the data it represents, such as zswap, where the list_head node could be
+ * from kswapd and the data from a different cgroup altogether.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+		    struct mem_cgroup *memcg);
+
 /**
  * list_lru_del: delete an element to the lru list
  * @list_lru: the lru pointer
@@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
  */
 bool list_lru_del(struct list_lru *lru, struct list_head *item);
 
+/**
+ * __list_lru_del: delete an element from a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be deleted.
+ * @memcg: the cgroup of the sublist to delete the item from.
+ * @nid: the node id of the sublist to delete the item from.
+ *
+ * Return value: true if the list was updated, false otherwise.
+ */
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+		    struct mem_cgroup *memcg);
+
 /**
  * list_lru_count_one: return the number of objects currently held by @lru
  * @lru: the lru pointer.
@@ -136,6 +166,14 @@ static inline unsigned long list_lru_count(struct list_lru *lru)
 void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
 void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
 			   struct list_head *head);
+/*
+ * list_lru_putback: undo list_lru_isolate.
+ *
+ * Since we might have dropped the LRU lock in between, recompute list_lru_one
+ * from the node's id and memcg.
+ */
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+		      struct mem_cgroup *memcg);
 
 typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
 		struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index a05e5bef3b40..63b75163c6ad 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -119,13 +119,22 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
 bool list_lru_add(struct list_lru *lru, struct list_head *item)
 {
 	int nid = page_to_nid(virt_to_page(item));
+	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+		mem_cgroup_from_slab_obj(item) : NULL;
+
+	return __list_lru_add(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_add);
+
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+		    struct mem_cgroup *memcg)
+{
 	struct list_lru_node *nlru = &lru->node[nid];
-	struct mem_cgroup *memcg;
 	struct list_lru_one *l;
 
 	spin_lock(&nlru->lock);
 	if (list_empty(item)) {
-		l = list_lru_from_kmem(lru, nid, item, &memcg);
+		l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
 		list_add_tail(item, &l->list);
 		/* Set shrinker bit if the first element was added */
 		if (!l->nr_items++)
@@ -138,17 +147,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
 	spin_unlock(&nlru->lock);
 	return false;
 }
-EXPORT_SYMBOL_GPL(list_lru_add);
+EXPORT_SYMBOL_GPL(__list_lru_add);
 
 bool list_lru_del(struct list_lru *lru, struct list_head *item)
 {
 	int nid = page_to_nid(virt_to_page(item));
+	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+		mem_cgroup_from_slab_obj(item) : NULL;
+
+	return __list_lru_del(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_del);
+
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+		    struct mem_cgroup *memcg)
+{
 	struct list_lru_node *nlru = &lru->node[nid];
 	struct list_lru_one *l;
 
 	spin_lock(&nlru->lock);
 	if (!list_empty(item)) {
-		l = list_lru_from_kmem(lru, nid, item, NULL);
+		l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
 		list_del_init(item);
 		l->nr_items--;
 		nlru->nr_items--;
@@ -158,7 +177,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
 	spin_unlock(&nlru->lock);
 	return false;
 }
-EXPORT_SYMBOL_GPL(list_lru_del);
+EXPORT_SYMBOL_GPL(__list_lru_del);
 
 void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
 {
@@ -175,6 +194,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
 }
 EXPORT_SYMBOL_GPL(list_lru_isolate_move);
 
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+		      struct mem_cgroup *memcg)
+{
+	struct list_lru_one *list =
+		list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+
+	if (list_empty(item)) {
+		list_add_tail(item, &list->list);
+		if (!list->nr_items++)
+			set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
+	}
+}
+EXPORT_SYMBOL_GPL(list_lru_putback);
+
 unsigned long list_lru_count_one(struct list_lru *lru,
 				 int nid, struct mem_cgroup *memcg)
 {
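For context outside the patch itself: the point of the new
__list_lru_add()/__list_lru_del() entry points is that the caller, not
the allocator of the list_head, decides which node/memcg sublist an
item is keyed under. A minimal sketch of that usage pattern follows;
struct tracked_obj and its helpers are hypothetical stand-ins for a
user like zswap that records where the data (rather than the
list_head) lives:

/*
 * Sketch only: a hypothetical list_lru user that tracks node/memcg
 * externally. The list_head may have been allocated by kswapd, while
 * the data it represents is charged to @memcg on node @nid.
 */
struct tracked_obj {
	struct list_head lru;		/* lives wherever it was allocated */
	int nid;			/* node of the data, tracked externally */
	struct mem_cgroup *memcg;	/* cgroup the data is charged to */
};

static bool tracked_obj_lru_add(struct list_lru *lru, struct tracked_obj *obj)
{
	/*
	 * Key by the externally tracked node/memcg instead of deriving
	 * them from the item's own allocation, as list_lru_add() does
	 * via page_to_nid(virt_to_page(item)).
	 */
	return __list_lru_add(lru, &obj->lru, obj->nid, obj->memcg);
}

static bool tracked_obj_lru_del(struct list_lru *lru, struct tracked_obj *obj)
{
	return __list_lru_del(lru, &obj->lru, obj->nid, obj->memcg);
}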
From patchwork Tue Oct 17 23:21:49 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 154595
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
    sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
    shuah@kernel.org
Subject: [PATCH v3 2/5] zswap: make shrinking memcg-aware
Date: Tue, 17 Oct 2023 16:21:49 -0700
Message-Id: <20231017232152.2605440-3-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

Currently, we only have a single global LRU for zswap. This makes it
impossible to perform workload-specific shrinking - a memcg cannot
determine which pages in the pool it owns, and often ends up writing
back pages from other memcgs. This issue has been previously observed
in practice and mitigated by simply disabling memcg-initiated
shrinking:

https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u

This patch fully resolves the issue by replacing the global zswap LRU
with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:

a) When a store attempt hits a memcg limit, it now triggers a
   synchronous reclaim attempt that, if successful, allows the new
   hotter page to be accepted by zswap.
b) If the store attempt instead hits the global zswap limit, it will
   trigger an asynchronous reclaim attempt, in which a memcg is
   selected for reclaim in a round-robin-like fashion.
Signed-off-by: Domenico Cerasuolo
Co-developed-by: Nhat Pham
Signed-off-by: Nhat Pham
---
 include/linux/memcontrol.h |   5 ++
 mm/swap.h                  |   3 +-
 mm/swap_state.c            |  17 +++-
 mm/zswap.c                 | 179 ++++++++++++++++++++++++++-----------
 4 files changed, 147 insertions(+), 57 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 031102ac9311..3de10fabea0f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	return NULL;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
+{
+	return NULL;
+}
+
 static inline bool folio_memcg_kmem(struct folio *folio)
 {
 	return false;
diff --git a/mm/swap.h b/mm/swap.h
index 8a3c7a0ace4f..bbd6ce661a20 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -50,7 +50,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				     struct vm_area_struct *vma,
 				     unsigned long addr,
-				     bool *new_page_allocated);
+				     bool *new_page_allocated,
+				     bool fail_if_exists);
 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 				    struct vm_fault *vmf);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b3b14bd0dd64..0356df52b06a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -411,7 +411,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct vm_area_struct *vma, unsigned long addr,
-			bool *new_page_allocated)
+			bool *new_page_allocated, bool fail_if_exists)
 {
 	struct swap_info_struct *si;
 	struct folio *folio;
@@ -468,6 +468,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		if (err != -EEXIST)
 			goto fail_put_swap;
 
+		/*
+		 * This check guards against a state that happens if a call
+		 * to __read_swap_cache_async triggers a reclaim, if the
+		 * reclaimer (zswap's writeback as of now) then decides to
+		 * reclaim that same entry, then the subsequent call to
+		 * __read_swap_cache_async would get stuck in this loop.
+		 */
+		if (fail_if_exists && err == -EEXIST)
+			goto fail_put_swap;
 		/*
 		 * We might race against __delete_from_swap_cache(), and
 		 * stumble across a swap_map entry whose SWAP_HAS_CACHE
@@ -530,7 +539,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
-			vma, addr, &page_was_allocated);
+			vma, addr, &page_was_allocated, false);
 
 	if (page_was_allocated)
 		swap_readpage(retpage, false, plug);
@@ -649,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		/* Ok, do the async read-ahead now */
 		page = __read_swap_cache_async(
 			swp_entry(swp_type(entry), offset),
-			gfp_mask, vma, addr, &page_allocated);
+			gfp_mask, vma, addr, &page_allocated, false);
 		if (!page)
 			continue;
 		if (page_allocated) {
@@ -815,7 +824,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 		pte_unmap(pte);
 		pte = NULL;
 		page = __read_swap_cache_async(entry, gfp_mask, vma,
-					       addr, &page_allocated);
+					       addr, &page_allocated, false);
 		if (!page)
 			continue;
 		if (page_allocated) {
diff --git a/mm/zswap.c b/mm/zswap.c
index 083c693602b8..d2989ad11814 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -34,6 +34,7 @@
 #include <linux/writeback.h>
 #include <linux/pagemap.h>
 #include <linux/workqueue.h>
+#include <linux/list_lru.h>
 
 #include "swap.h"
 #include "internal.h"
@@ -171,8 +172,8 @@ struct zswap_pool {
 	struct work_struct shrink_work;
 	struct hlist_node node;
 	char tfm_name[CRYPTO_MAX_ALG_NAME];
-	struct list_head lru;
-	spinlock_t lru_lock;
+	struct list_lru list_lru;
+	struct mem_cgroup *next_shrink;
 };
 
 /*
@@ -288,15 +289,25 @@ static void zswap_update_total_size(void)
 	zswap_pool_total_size = total;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)
+{
+	return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+}
+
+static inline int entry_to_nid(struct zswap_entry *entry)
+{
+	return page_to_nid(virt_to_page(entry));
+}
+
 /*********************************
 * zswap entry functions
 **********************************/
 static struct kmem_cache *zswap_entry_cache;
 
-static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
+static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
 {
 	struct zswap_entry *entry;
-	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
+	entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
 	if (!entry)
 		return NULL;
 	entry->refcount = 1;
@@ -309,6 +320,27 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
 	kmem_cache_free(zswap_entry_cache, entry);
 }
 
+/*********************************
+* lru functions
+**********************************/
+static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
+	bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
+
+	mem_cgroup_put(memcg);
+	return added;
+}
+
+static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
+	bool removed = __list_lru_del(list_lru, &entry->lru, entry_to_nid(entry), memcg);
+
+	mem_cgroup_put(memcg);
+	return removed;
+}
+
 /*********************************
 * rbtree functions
 **********************************/
@@ -393,9 +425,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
 	if (!entry->length)
 		atomic_dec(&zswap_same_filled_pages);
 	else {
-		spin_lock(&entry->pool->lru_lock);
-		list_del(&entry->lru);
-		spin_unlock(&entry->pool->lru_lock);
+		zswap_lru_del(&entry->pool->list_lru, entry);
 		zpool_free(zswap_find_zpool(entry), entry->handle);
 		zswap_pool_put(entry->pool);
 	}
@@ -629,21 +659,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
 	zswap_entry_put(tree, entry);
 }
 
-static int zswap_reclaim_entry(struct zswap_pool *pool)
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+				       spinlock_t *lock, void *arg)
 {
-	struct zswap_entry *entry;
+	struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+	struct mem_cgroup *memcg;
 	struct zswap_tree *tree;
 	pgoff_t swpoffset;
-	int ret;
+	enum lru_status ret = LRU_REMOVED_RETRY;
+	int writeback_result;
 
-	/* Get an entry off the LRU */
-	spin_lock(&pool->lru_lock);
-	if (list_empty(&pool->lru)) {
-		spin_unlock(&pool->lru_lock);
-		return -EINVAL;
-	}
-	entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
-	list_del_init(&entry->lru);
 	/*
 	 * Once the lru lock is dropped, the entry might get freed. The
 	 * swpoffset is copied to the stack, and entry isn't deref'd again
@@ -651,28 +676,33 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
 	 */
 	swpoffset = swp_offset(entry->swpentry);
 	tree = zswap_trees[swp_type(entry->swpentry)];
-	spin_unlock(&pool->lru_lock);
+	list_lru_isolate(l, item);
+	spin_unlock(lock);
 
 	/* Check for invalidate() race */
 	spin_lock(&tree->lock);
 	if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
-		ret = -EAGAIN;
 		goto unlock;
 	}
 	/* Hold a reference to prevent a free during writeback */
 	zswap_entry_get(entry);
 	spin_unlock(&tree->lock);
 
-	ret = zswap_writeback_entry(entry, tree);
+	writeback_result = zswap_writeback_entry(entry, tree);
 
 	spin_lock(&tree->lock);
-	if (ret) {
-		/* Writeback failed, put entry back on LRU */
-		spin_lock(&pool->lru_lock);
-		list_move(&entry->lru, &pool->lru);
-		spin_unlock(&pool->lru_lock);
+	if (writeback_result) {
+		zswap_reject_reclaim_fail++;
+		memcg = get_mem_cgroup_from_entry(entry);
+		spin_lock(lock);
+		/* we cannot use zswap_lru_add here, because it increments node's lru count */
+		list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
+		spin_unlock(lock);
+		mem_cgroup_put(memcg);
+		ret = LRU_RETRY;
 		goto put_unlock;
 	}
+	zswap_written_back_pages++;
 
 	/*
 	 * Writeback started successfully, the page now belongs to the
@@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
 	zswap_entry_put(tree, entry);
 unlock:
 	spin_unlock(&tree->lock);
-	return ret ? -EAGAIN : 0;
+	spin_lock(lock);
+	return ret;
+}
+
+static int shrink_memcg(struct mem_cgroup *memcg)
+{
+	struct zswap_pool *pool;
+	int nid, shrunk = 0;
+
+	pool = zswap_pool_current_get();
+	if (!pool)
+		return -EINVAL;
+
+	/*
+	 * Skip zombies because their LRUs are reparented and we would be
+	 * reclaiming from the parent instead of the dead memcgroup.
+	 */
+	if (memcg && !mem_cgroup_online(memcg))
+		goto out;
+
+	for_each_node_state(nid, N_NORMAL_MEMORY) {
+		unsigned long nr_to_walk = 1;
+
+		if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
+				      NULL, &nr_to_walk))
+			shrunk++;
+	}
+out:
+	zswap_pool_put(pool);
+	return shrunk ? 0 : -EAGAIN;
 }
 
 static void shrink_worker(struct work_struct *w)
@@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
 						shrink_work);
 	int ret, failures = 0;
 
+	/* global reclaim will select cgroup in a round-robin fashion. */
 	do {
-		ret = zswap_reclaim_entry(pool);
+		pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
+
+		ret = shrink_memcg(pool->next_shrink);
+
 		if (ret) {
-			zswap_reject_reclaim_fail++;
 			if (ret != -EAGAIN)
 				break;
 			if (++failures == MAX_RECLAIM_RETRIES)
@@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 	 */
 	kref_init(&pool->kref);
 	INIT_LIST_HEAD(&pool->list);
-	INIT_LIST_HEAD(&pool->lru);
-	spin_lock_init(&pool->lru_lock);
+	list_lru_init_memcg(&pool->list_lru, NULL);
 	INIT_WORK(&pool->shrink_work, shrink_worker);
 
 	zswap_pool_debug("created", pool);
@@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
 
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
 	free_percpu(pool->acomp_ctx);
+	list_lru_destroy(&pool->list_lru);
+	if (pool->next_shrink)
+		mem_cgroup_put(pool->next_shrink);
 	for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
 		zpool_destroy_pool(pool->zpools[i]);
 	kfree(pool);
@@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 
 	/* try to allocate swap cache page */
 	page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
-				       &page_was_allocated);
+				       &page_was_allocated, true);
 	if (!page) {
 		ret = -ENOMEM;
 		goto fail;
@@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	/* start writeback */
 	__swap_writepage(page, &wbc);
 	put_page(page);
-	zswap_written_back_pages++;
 
 	return ret;
 
@@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
 	struct obj_cgroup *objcg = NULL;
+	struct mem_cgroup *memcg = NULL;
 	struct zswap_pool *pool;
 	struct zpool *zpool;
+	int lru_alloc_ret;
 	unsigned int dlen = PAGE_SIZE;
 	unsigned long handle, value;
 	char *buf;
@@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
 		zswap_invalidate_entry(tree, dupentry);
 	}
 	spin_unlock(&tree->lock);
-
-	/*
-	 * XXX: zswap reclaim does not work with cgroups yet. Without a
-	 * cgroup-aware entry LRU, we will push out entries system-wide based on
-	 * local cgroup limits.
-	 */
 	objcg = get_obj_cgroup_from_folio(folio);
-	if (objcg && !obj_cgroup_may_zswap(objcg))
-		goto reject;
+	if (objcg && !obj_cgroup_may_zswap(objcg)) {
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		if (shrink_memcg(memcg)) {
+			mem_cgroup_put(memcg);
+			goto reject;
+		}
+		mem_cgroup_put(memcg);
+	}
 
 	/* reclaim space if needed */
 	if (zswap_is_full()) {
@@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
 		zswap_pool_reached_full = false;
 	}
 
+	pool = zswap_pool_current_get();
+	if (!pool)
+		goto reject;
+
 	/* allocate entry */
-	entry = zswap_entry_cache_alloc(GFP_KERNEL);
+	entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
 	if (!entry) {
 		zswap_reject_kmemcache_fail++;
+		zswap_pool_put(pool);
 		goto reject;
 	}
 
@@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
 			entry->length = 0;
 			entry->value = value;
 			atomic_inc(&zswap_same_filled_pages);
+			zswap_pool_put(pool);
 			goto insert_entry;
 		}
 		kunmap_atomic(src);
@@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
 		goto freepage;
 
 	/* if entry is successfully added, it keeps the reference */
-	entry->pool = zswap_pool_current_get();
-	if (!entry->pool)
-		goto freepage;
+	entry->pool = pool;
+	if (objcg) {
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
+		mem_cgroup_put(memcg);
+
+		if (lru_alloc_ret)
+			goto freepage;
+	}
 
 	/* compress */
 	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
@@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
 		zswap_invalidate_entry(tree, dupentry);
 	}
 	if (entry->length) {
-		spin_lock(&entry->pool->lru_lock);
-		list_add(&entry->lru, &entry->pool->lru);
-		spin_unlock(&entry->pool->lru_lock);
+		INIT_LIST_HEAD(&entry->lru);
+		zswap_lru_add(&pool->list_lru, entry);
 	}
 	spin_unlock(&tree->lock);
 
@@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
 
 put_dstmem:
 	mutex_unlock(acomp_ctx->mutex);
-	zswap_pool_put(entry->pool);
 freepage:
+	zswap_pool_put(entry->pool);
 	zswap_entry_cache_free(entry);
 reject:
 	if (objcg)
@@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
 		zswap_invalidate_entry(tree, entry);
 		folio_mark_dirty(folio);
 	} else if (entry->length) {
-		spin_lock(&entry->pool->lru_lock);
-		list_move(&entry->lru, &entry->pool->lru);
-		spin_unlock(&entry->pool->lru_lock);
+		zswap_lru_del(&entry->pool->list_lru, entry);
+		zswap_lru_add(&entry->pool->list_lru, entry);
 	}
 	zswap_entry_put(tree, entry);
 	spin_unlock(&tree->lock);
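A note on the list_lru walk API that shrink_memcg_cb() above is written
against: list_lru_walk_one() invokes the callback with the sublist lock
held, and the enum lru_status return value tells the walker what
happened to the item. A minimal sketch of that contract follows;
my_obj, my_evict() (returning 0 on success) and the externally tracked
nid/memcg fields are hypothetical stand-ins for the zswap specifics:

/*
 * Sketch of a list_lru walk callback, mirroring the locking dance in
 * shrink_memcg_cb(). The walker calls us with @lock held; we may drop
 * it to sleep or do IO, but must re-take it before returning.
 */
static enum lru_status evict_one(struct list_head *item,
				 struct list_lru_one *l,
				 spinlock_t *lock, void *cb_arg)
{
	struct my_obj *obj = container_of(item, struct my_obj, lru);

	/* Take the item off its sublist while the lock is still held. */
	list_lru_isolate(l, item);
	spin_unlock(lock);

	/* Lock dropped: safe to sleep, do IO, take other locks. */
	if (!my_evict(obj)) {
		/*
		 * Success: the item is gone. LRU_REMOVED_RETRY tells the
		 * walker that the lock was dropped along the way.
		 */
		spin_lock(lock);
		return LRU_REMOVED_RETRY;
	}

	/*
	 * Failure: put the item back and let the walker retry later.
	 * list_lru_putback() recomputes the sublist from nid/memcg,
	 * since the lock was dropped in between.
	 */
	spin_lock(lock);
	list_lru_putback(obj->lru_root, item, obj->nid, obj->memcg);
	return LRU_RETRY;
}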
From patchwork Tue Oct 17 23:21:50 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 154593
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
    sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
    shuah@kernel.org
Subject: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat
Date: Tue, 17 Oct 2023 16:21:50 -0700
Message-Id: <20231017232152.2605440-4-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

Since zswap now writes back pages from memcg-specific LRUs, we need a
new stat to show the writeback count for each memcg.

Suggested-by: Nhat Pham
Signed-off-by: Domenico Cerasuolo
Signed-off-by: Nhat Pham
Acked-by: Nhat Pham
---
 include/linux/memcontrol.h |  2 ++
 mm/memcontrol.c            | 15 +++++++++++++++
 mm/zswap.c                 |  3 +++
 3 files changed, 20 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3de10fabea0f..7868b1e00bf5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -38,6 +38,7 @@ enum memcg_stat_item {
 	MEMCG_KMEM,
 	MEMCG_ZSWAP_B,
 	MEMCG_ZSWAPPED,
+	MEMCG_ZSWAP_WB,
 	MEMCG_NR_STAT,
 };
 
@@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
 bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
 void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
 void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
+void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
 #else
 static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1bde67b29287..a9118871e5a6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
 	{ "zswap",			MEMCG_ZSWAP_B },
 	{ "zswapped",			MEMCG_ZSWAPPED },
+	{ "zswap_wb",			MEMCG_ZSWAP_WB },
 #endif
 	{ "file_mapped",		NR_FILE_MAPPED },
 	{ "file_dirty",			NR_FILE_DIRTY },
@@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
 	switch (item) {
 	case MEMCG_PERCPU_B:
 	case MEMCG_ZSWAP_B:
+	case MEMCG_ZSWAP_WB:
 	case NR_SLAB_RECLAIMABLE_B:
 	case NR_SLAB_UNRECLAIMABLE_B:
 	case WORKINGSET_REFAULT_ANON:
@@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
 	rcu_read_unlock();
 }
 
+void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
+{
+	struct mem_cgroup *memcg;
+
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		return;
+
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+	mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
+	rcu_read_unlock();
+}
+
 static u64 zswap_current_read(struct cgroup_subsys_state *css,
 			      struct cftype *cft)
 {
diff --git a/mm/zswap.c b/mm/zswap.c
index d2989ad11814..15485427e3fa 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 	}
 	zswap_written_back_pages++;
 
+	if (entry->objcg)
+		obj_cgroup_report_zswap_wb(entry->objcg);
+
 	/*
 	 * Writeback started successfully, the page now belongs to the
 	 * swapcache. Drop the entry from zswap - unless invalidate already
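The new zswap_wb entry in memory.stat can be consumed from userspace
like any other memcg stat. A minimal standalone sketch (the cgroup v2
mount point /sys/fs/cgroup and the group name "test" are assumptions
made here for illustration only):

#include <stdio.h>
#include <string.h>

/* Read one key from a cgroup's memory.stat; returns -1 if not found. */
static long read_memcg_stat(const char *cgroup, const char *key)
{
	char path[512], name[128], line[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/memory.stat", cgroup);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		long v;

		/* Each line is "<key> <value>" */
		if (sscanf(line, "%127s %ld", name, &v) == 2 &&
		    !strcmp(name, key)) {
			val = v;
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	printf("zswap_wb = %ld\n", read_memcg_stat("test", "zswap_wb"));
	return 0;
}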
From patchwork Tue Oct 17 23:21:51 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 154592
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
    sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
    shuah@kernel.org
Subject: [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest
Date: Tue, 17 Oct 2023 16:21:51 -0700
Message-Id: <20231017232152.2605440-5-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

The memcg-zswap self test is updated to adjust to the behavior change
implemented by commit 87730b165089 ("zswap: make shrinking
memcg-aware"), where zswap now performs writeback for a specific memcg.

Signed-off-by: Domenico Cerasuolo
Signed-off-by: Nhat Pham
Acked-by: Nhat Pham
---
 tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
 1 file changed, 50 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index 49def87a909b..11271fabeffc 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
 	return read_int("/sys/kernel/debug/zswap/stored_pages", value);
 }
 
-static int get_zswap_written_back_pages(size_t *value)
+static int get_cg_wb_count(const char *cg)
 {
-	return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
+	return cg_read_key_long(cg, "memory.stat", "zswap_wb");
 }
 
 static int allocate_bytes(const char *cgroup, void *arg)
@@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
 	return 0;
 }
 
+static char *setup_test_group_1M(const char *root, const char *name)
+{
+	char *group_name = cg_name(root, name);
+
+	if (!group_name)
+		return NULL;
+	if (cg_create(group_name))
+		goto fail;
+	if (cg_write(group_name, "memory.max", "1M")) {
+		cg_destroy(group_name);
+		goto fail;
+	}
+	return group_name;
+fail:
+	free(group_name);
+	return NULL;
+}
+
 /*
  * When trying to store a memcg page in zswap, if the memcg hits its memory
- * limit in zswap, writeback should not be triggered.
- *
- * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
- * not zswap"). Needs to be revised when a per memcg writeback mechanism is
- * implemented.
+ * limit in zswap, writeback should affect only the zswapped pages of that
+ * memcg.
  */
 static int test_no_invasive_cgroup_shrink(const char *root)
 {
-	size_t written_back_before, written_back_after;
 	int ret = KSFT_FAIL;
-	char *test_group;
+	size_t control_allocation_size = MB(10);
+	char *control_allocation, *wb_group = NULL, *control_group = NULL;
 
 	/* Set up */
-	test_group = cg_name(root, "no_shrink_test");
-	if (!test_group)
-		goto out;
-	if (cg_create(test_group))
+	wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
+	if (!wb_group)
+		return KSFT_FAIL;
+	if (cg_write(wb_group, "memory.zswap.max", "10K"))
 		goto out;
-	if (cg_write(test_group, "memory.max", "1M"))
+	control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
+	if (!control_group)
 		goto out;
-	if (cg_write(test_group, "memory.zswap.max", "10K"))
+
+	/* Push some test_group2 memory into zswap */
+	if (cg_enter_current(control_group))
 		goto out;
-	if (get_zswap_written_back_pages(&written_back_before))
+	control_allocation = malloc(control_allocation_size);
+	for (int i = 0; i < control_allocation_size; i += 4095)
+		control_allocation[i] = 'a';
+	if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
 		goto out;
 
-	/* Allocate 10x memory.max to push memory into zswap */
-	if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
+	/* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
+	if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
 		goto out;
 
-	/* Verify that no writeback happened because of the memcg allocation */
-	if (get_zswap_written_back_pages(&written_back_after))
-		goto out;
-	if (written_back_after == written_back_before)
+	/* Verify that only zswapped memory from gwb_group has been written back */
+	if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
 		ret = KSFT_PASS;
 out:
-	cg_destroy(test_group);
-	free(test_group);
+	cg_enter_current(root);
+	if (control_group) {
+		cg_destroy(control_group);
+		free(control_group);
+	}
+	cg_destroy(wb_group);
+	free(wb_group);
+	if (control_allocation)
+		free(control_allocation);
 	return ret;
 }
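One subtlety in the updated test worth noting is the dirtying loop over
the control allocation, which strides in 4095-byte steps. Because that
stride is just under the common 4096-byte page size, the loop lands in
every page of the buffer at least once, so each page is actually
faulted in and dirtied and can later be pushed into zswap (untouched
pages would never get there). The same pattern, isolated as a sketch:

#include <stddef.h>

/*
 * Sketch of the selftest's dirtying pattern: a 4095-byte stride is
 * slightly less than a 4096-byte page, so this writes to every page
 * of the buffer at least once, forcing each page to be allocated and
 * dirtied so memory pressure can push it into zswap.
 */
static void touch_every_page(char *buf, size_t size)
{
	for (size_t i = 0; i < size; i += 4095)
		buf[i] = 'a';
}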
From patchwork Tue Oct 17 23:21:52 2023
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure
Date: Tue, 17 Oct 2023 16:21:52 -0700
Message-Id: <20231017232152.2605440-6-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>

Currently, we only shrink the zswap pool when the user-defined limit is
hit. This means that if we set the limit too high, cold data that is
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed ahead
of time, as this depends on the workload (specifically, on factors such
as memory access patterns and compressibility of the memory pages).

This patch implements a memcg- and NUMA-aware shrinker for zswap that is
invoked under memory pressure. The shrinker does not have any parameter
that must be tuned by the user, and can be opted in or out on a per-memcg
basis.

Furthermore, to make it more robust for many workloads and to prevent
overshrinking (i.e., evicting warm pages that might soon be refaulted
into memory), we build in the following heuristics:

* Estimate the number of warm pages residing in zswap, and attempt to
  protect this region of the zswap LRU.

* Scale the number of freeable objects by an estimate of the memory
  saving factor: the better zswap compresses the data, the fewer pages
  we will evict to swap, as we would otherwise incur IO for relatively
  little memory saving (a worked example follows this list).

* During reclaim, if the shrinker encounters a page that is also being
  brought into memory, the shrinker cautiously terminates its shrinking
  action, as this is a sign that it is touching the warmer region of the
  zswap LRU.
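To make the second heuristic concrete: suppose a memcg has 1024 pages
stored in zswap (nr_stored) that compress down to 256 backing pages
(nr_backing), a 4:1 saving factor, and 512 of its LRU objects fall
outside the protected region (nr_freeable). The count callback in the
mm/zswap.c diff below then reports mult_frac(512, 256, 1024) = 128
freeable objects, so reclaim pressure scales down with how well the data
compresses. A plain-C rendering of the same arithmetic (illustrative
values; mult_frac() is the kernel macro for x * n / d with overflow care):

    unsigned long nr_stored = 1024;   /* pages stored in zswap */
    unsigned long nr_backing = 256;   /* compressed backing pages */
    unsigned long nr_freeable = 512;  /* unprotected LRU objects */
    unsigned long reported = nr_freeable * nr_backing / nr_stored; /* = 128 */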
As a proof of concept, we ran the following synthetic benchmark: build
the Linux kernel in a memory-limited cgroup, and allocate some cold data
in tmpfs to see if the shrinker could write it out and improve the
overall performance. Depending on the amount of cold data generated, we
observe a 14% to 35% reduction in kernel CPU time used by the kernel
builds.

Signed-off-by: Nhat Pham
---
 Documentation/admin-guide/mm/zswap.rst |   7 ++
 include/linux/mmzone.h                 |  14 +++
 mm/mmzone.c                            |   3 +
 mm/swap_state.c                        |  21 +++-
 mm/zswap.c                             | 161 +++++++++++++++++++++++--
 5 files changed, 196 insertions(+), 10 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 45b98390e938..522ae22ccb84 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -153,6 +153,13 @@ attribute, e. g.::
 
 Setting this parameter to 100 will disable the hysteresis.
 
+When there is a sizable amount of cold memory residing in the zswap pool, it
+can be advantageous to proactively write these cold pages to swap and reclaim
+the memory for other use cases. By default, the zswap shrinker is disabled.
+The user can enable it as follows:
+
+  echo Y > /sys/module/zswap/parameters/shrinker_enabled
+
 A debugfs interface is provided for various statistic about pool size, number
 of pages stored, same-value filled pages and various counters for the reasons
 pages are rejected.
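Since shrinker_enabled is a regular module parameter (mode 0644, per the
mm/zswap.c hunk below), it can also be read back and toggled at runtime,
e.g. with cat /sys/module/zswap/parameters/shrinker_enabled, which makes
it straightforward to A/B test the shrinker on a live system.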
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 486587fcd27f..8947a1bfbe9c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -637,6 +637,20 @@ struct lruvec {
 #ifdef CONFIG_MEMCG
 	struct pglist_data *pgdat;
 #endif
+#ifdef CONFIG_ZSWAP
+	/*
+	 * Number of pages in zswap that should be protected from the shrinker.
+	 * This number is an estimate of the following counts:
+	 *
+	 * a) Recent page faults.
+	 * b) Recent insertion to the zswap LRU. This includes new zswap stores,
+	 *    as well as recent zswap LRU rotations.
+	 *
+	 * These pages are likely to be warm, and might incur IO if they are
+	 * written to swap.
+	 */
+	atomic_long_t nr_zswap_protected;
+#endif
 };
 
 /* Isolate for asynchronous migration */
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..4137f3ac42cd 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -78,6 +78,9 @@ void lruvec_init(struct lruvec *lruvec)
 
 	memset(lruvec, 0, sizeof(struct lruvec));
 	spin_lock_init(&lruvec->lru_lock);
+#ifdef CONFIG_ZSWAP
+	atomic_long_set(&lruvec->nr_zswap_protected, 0);
+#endif
 	for_each_lru(lru)
 		INIT_LIST_HEAD(&lruvec->lists[lru]);
 
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0356df52b06a..a60197b55a28 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -676,7 +676,15 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+	page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+#ifdef CONFIG_ZSWAP
+	if (page) {
+		struct lruvec *lruvec = folio_lruvec(page_folio(page));
+
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+	}
+#endif
+	return page;
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -843,8 +851,15 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 	lru_add_drain();
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     NULL);
+	page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
+#ifdef CONFIG_ZSWAP
+	if (page) {
+		struct lruvec *lruvec = folio_lruvec(page_folio(page));
+
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+	}
+#endif
+	return page;
 }
 
 /**
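The accounting block added to both readahead paths above is identical.
For readability, the two call sites could share a small helper along
these lines (a sketch only; the name zswap_count_swapin is hypothetical
and not part of this patch):

    #ifdef CONFIG_ZSWAP
    /* Hypothetical helper: count a swapped-in page toward the protected region */
    static void zswap_count_swapin(struct page *page)
    {
            struct lruvec *lruvec = folio_lruvec(page_folio(page));

            atomic_long_inc(&lruvec->nr_zswap_protected);
    }
    #else
    static inline void zswap_count_swapin(struct page *page) {}
    #endif

Each skip: path would then reduce to a NULL check plus one call.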
diff --git a/mm/zswap.c b/mm/zswap.c
index 15485427e3fa..1d1fe75a5237 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -145,6 +145,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
 /* Number of zpools in zswap_pool (empirically determined for scalability) */
 #define ZSWAP_NR_ZPOOLS 32
 
+/* Enable/disable memory pressure-based shrinker. */
+static bool zswap_shrinker_enabled;
+module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+
 /*********************************
 * data structures
 **********************************/
@@ -174,6 +178,8 @@ struct zswap_pool {
 	char tfm_name[CRYPTO_MAX_ALG_NAME];
 	struct list_lru list_lru;
 	struct mem_cgroup *next_shrink;
+	struct shrinker *shrinker;
+	atomic_t nr_stored;
 };
 
 /*
@@ -272,17 +278,26 @@ static bool zswap_can_accept(void)
 		DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
 }
 
+static u64 get_zswap_pool_size(struct zswap_pool *pool)
+{
+	u64 pool_size = 0;
+	int i;
+
+	for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
+		pool_size += zpool_get_total_size(pool->zpools[i]);
+
+	return pool_size;
+}
+
 static void zswap_update_total_size(void)
 {
 	struct zswap_pool *pool;
 	u64 total = 0;
-	int i;
 
 	rcu_read_lock();
 
 	list_for_each_entry_rcu(pool, &zswap_pools, list)
-		for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
-			total += zpool_get_total_size(pool->zpools[i]);
+		total += get_zswap_pool_size(pool);
 
 	rcu_read_unlock();
 
@@ -326,8 +341,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
 static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
 {
 	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
-	bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
-
+	int nid = entry_to_nid(entry);
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+	bool added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
+	unsigned long lru_size, old, new;
+
+	if (added) {
+		lru_size = list_lru_count_one(list_lru, entry_to_nid(entry), memcg);
+		old = atomic_long_inc_return(&lruvec->nr_zswap_protected);
+
+		/*
+		 * Decay to avoid overflow and adapt to changing workloads.
+		 * This is based on LRU reclaim cost decaying heuristics.
+		 */
+		do {
+			new = old > lru_size / 4 ? old / 2 : old;
+		} while (!atomic_long_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
+	}
 	mem_cgroup_put(memcg);
 	return added;
 }
@@ -427,6 +458,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
 	else {
 		zswap_lru_del(&entry->pool->list_lru, entry);
 		zpool_free(zswap_find_zpool(entry), entry->handle);
+		atomic_dec(&entry->pool->nr_stored);
 		zswap_pool_put(entry->pool);
 	}
 	zswap_entry_cache_free(entry);
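To see the decay in zswap_lru_add() above in action: with lru_size =
1000, the protected counter is left alone until it crosses lru_size / 4 =
250; an insertion that brings it to, say, 260 halves it back to 130. In
plain C (illustrative values, shown outside the atomic cmpxchg loop):

    unsigned long lru_size = 1000, old = 260, new;

    new = old > lru_size / 4 ? old / 2 : old;  /* 260 > 250, so new = 130 */

This keeps the protected estimate from growing much past a quarter of the
LRU while still letting it react quickly to bursts of insertions.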
@@ -468,6 +500,93 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
 	return entry;
 }
 
+/*********************************
+* shrinker functions
+**********************************/
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+				       spinlock_t *lock, void *arg);
+
+static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
+		struct shrink_control *sc)
+{
+	struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+	unsigned long shrink_ret, nr_protected, lru_size;
+	struct zswap_pool *pool = shrinker->private_data;
+	bool encountered_page_in_swapcache = false;
+
+	nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
+	lru_size = list_lru_shrink_count(&pool->list_lru, sc);
+
+	/*
+	 * Abort if the shrinker is disabled or if we are shrinking into the
+	 * protected region.
+	 */
+	if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
+		sc->nr_scanned = 0;
+		return SHRINK_STOP;
+	}
+
+	shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
+		&encountered_page_in_swapcache);
+
+	if (encountered_page_in_swapcache)
+		return SHRINK_STOP;
+
+	return shrink_ret ? shrink_ret : SHRINK_STOP;
+}
+
+static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
+		struct shrink_control *sc)
+{
+	struct zswap_pool *pool = shrinker->private_data;
+	struct mem_cgroup *memcg = sc->memcg;
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
+	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
+
+#ifdef CONFIG_MEMCG_KMEM
+	cgroup_rstat_flush(memcg->css.cgroup);
+	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
+	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
+#else
+	/* use pool stats instead of memcg stats */
+	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
+	nr_stored = atomic_read(&pool->nr_stored);
+#endif
+
+	if (!zswap_shrinker_enabled || !nr_stored)
+		return 0;
+
+	nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
+	nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
+	/*
+	 * Subtract the lru size by an estimate of the number of pages
+	 * that should be protected.
+	 */
+	nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
+
+	/*
+	 * Scale the number of freeable pages by the memory saving factor.
+	 * This ensures that the better zswap compresses memory, the fewer
+	 * pages we will evict to swap (as it will otherwise incur IO for
+	 * relatively small memory saving).
+	 */
+	return mult_frac(nr_freeable, nr_backing, nr_stored);
+}
+
+static void zswap_alloc_shrinker(struct zswap_pool *pool)
+{
+	pool->shrinker =
+		shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
+	if (!pool->shrinker)
+		return;
+
+	pool->shrinker->private_data = pool;
+	pool->shrinker->scan_objects = zswap_shrinker_scan;
+	pool->shrinker->count_objects = zswap_shrinker_count;
+	pool->shrinker->batch = 0;
+	pool->shrinker->seeks = DEFAULT_SEEKS;
+}
+
 /*********************************
 * per-cpu code
 **********************************/
@@ -663,8 +782,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 		spinlock_t *lock, void *arg)
 {
 	struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+	bool *encountered_page_in_swapcache = (bool *)arg;
 	struct mem_cgroup *memcg;
 	struct zswap_tree *tree;
+	struct lruvec *lruvec;
 	pgoff_t swpoffset;
 	enum lru_status ret = LRU_REMOVED_RETRY;
 	int writeback_result;
@@ -698,8 +819,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 		/* we cannot use zswap_lru_add here, because it increments node's lru count */
 		list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
 		spin_unlock(lock);
-		mem_cgroup_put(memcg);
 		ret = LRU_RETRY;
+
+		/*
+		 * Encountering a page already in swap cache is a sign that we are
+		 * shrinking into the warmer region. We should terminate shrinking
+		 * (if we're in the dynamic shrinker context).
+		 */
+		if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+			ret = LRU_SKIP;
+			*encountered_page_in_swapcache = true;
+		}
+		lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
+		/* Increment the protection area to account for the LRU rotation. */
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+
+		mem_cgroup_put(memcg);
 		goto put_unlock;
 	}
 	zswap_written_back_pages++;
@@ -822,6 +957,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 				       &pool->node);
 	if (ret)
 		goto error;
+
+	zswap_alloc_shrinker(pool);
+	if (!pool->shrinker)
+		goto error;
+
 	pr_debug("using %s compressor\n", pool->tfm_name);
 
 	/* being the current pool takes 1 ref; this func expects the
@@ -829,13 +969,18 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 	 */
 	kref_init(&pool->kref);
 	INIT_LIST_HEAD(&pool->list);
-	list_lru_init_memcg(&pool->list_lru, NULL);
+	if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
+		goto lru_fail;
+	shrinker_register(pool->shrinker);
 	INIT_WORK(&pool->shrink_work, shrink_worker);
 
 	zswap_pool_debug("created", pool);
 
 	return pool;
 
+lru_fail:
+	list_lru_destroy(&pool->list_lru);
+	shrinker_free(pool->shrinker);
 error:
 	if (pool->acomp_ctx)
 		free_percpu(pool->acomp_ctx);
@@ -893,6 +1038,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
 
 	zswap_pool_debug("destroying", pool);
 
+	shrinker_free(pool->shrinker);
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
 	free_percpu(pool->acomp_ctx);
 	list_lru_destroy(&pool->list_lru);
@@ -1440,6 +1586,7 @@ bool zswap_store(struct folio *folio)
 	if (entry->length) {
 		INIT_LIST_HEAD(&entry->lru);
 		zswap_lru_add(&pool->list_lru, entry);
+		atomic_inc(&pool->nr_stored);
 	}
 	spin_unlock(&tree->lock);