From patchwork Mon Nov 6 18:31:54 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 162116
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v5 1/6] list_lru: allows explicit memcg and NUMA node selection
Date: Mon, 6 Nov 2023 10:31:54 -0800
Message-Id: <20231106183159.3562879-2-nphamcs@gmail.com>
In-Reply-To: <20231106183159.3562879-1-nphamcs@gmail.com>
References: <20231106183159.3562879-1-nphamcs@gmail.com>

The list_lru interface is built on the assumption that the list node and the data it represents belong to the same allocation, which is placed on the correct NUMA node and charged to the correct memcg. While this assumption holds for existing slab-object LRUs such as dentries and inodes, it is undocumented, and rather inflexible for certain potential list_lru users (such as the upcoming zswap shrinker and the THP shrinker). It has caused us a lot of issues during our development.

This patch changes the list_lru interface so that the caller must explicitly specify the NUMA node and memcg when adding and removing objects. The old list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and list_lru_del_obj(), respectively.

It also extends the list_lru API with a new function, list_lru_putback, which undoes a previous list_lru_isolate call. Unlike list_lru_add, it does not increment the LRU node count (as list_lru_isolate does not decrement the node count). list_lru_putback also allows explicit memcg and NUMA node selection.
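As an illustration of the interface change (not part of the patch itself), below is a minimal sketch of how a hypothetical caller whose objects are not slab-backed would use the new explicit-argument API, and how existing slab-object users keep the old behavior through the *_obj wrappers. The struct foo and the foo_* helpers are made up for the example; the list_lru signatures are the ones introduced by this patch.

#include <linux/list_lru.h>
#include <linux/memcontrol.h>

/* Hypothetical list_lru user whose objects are not slab-backed. */
struct foo {
	struct list_head lru;		/* linkage into a struct list_lru */
	/* ... payload ... */
};

/* New interface: the caller names the node/memcg sublist explicitly. */
static void foo_lru_add(struct list_lru *lru, struct foo *obj,
			int nid, struct mem_cgroup *memcg)
{
	list_lru_add(lru, &obj->lru, nid, memcg);
}

static void foo_lru_del(struct list_lru *lru, struct foo *obj,
			int nid, struct mem_cgroup *memcg)
{
	list_lru_del(lru, &obj->lru, nid, memcg);
}

/*
 * Slab-backed users (dentries, inodes, ...) keep the old behavior:
 * list_lru_add_obj()/list_lru_del_obj() derive the node and memcg from
 * the item's own allocation, exactly as the old list_lru_add() and
 * list_lru_del() did.
 */
static void foo_lru_add_obj(struct list_lru *lru, struct foo *obj)
{
	list_lru_add_obj(lru, &obj->lru);
}

/*
 * After list_lru_isolate(), which removes the item without decrementing
 * the LRU node count, list_lru_putback() re-inserts it into the given
 * node/memcg sublist without incrementing that node count again.
 */
static void foo_lru_putback(struct list_lru *lru, struct foo *obj,
			    int nid, struct mem_cgroup *memcg)
{
	list_lru_putback(lru, &obj->lru, nid, memcg);
}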
Suggested-by: Johannes Weiner Signed-off-by: Nhat Pham --- drivers/android/binder_alloc.c | 5 ++-- fs/dcache.c | 8 +++--- fs/gfs2/quota.c | 6 ++--- fs/inode.c | 4 +-- fs/nfs/nfs42xattr.c | 8 +++--- fs/nfsd/filecache.c | 4 +-- fs/xfs/xfs_buf.c | 6 ++--- fs/xfs/xfs_dquot.c | 2 +- fs/xfs/xfs_qm.c | 2 +- include/linux/list_lru.h | 46 +++++++++++++++++++++++++++++--- mm/list_lru.c | 48 ++++++++++++++++++++++++++++------ mm/workingset.c | 4 +-- 12 files changed, 108 insertions(+), 35 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index 138f6d43d13b..e80669d4e037 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -285,7 +285,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, trace_binder_free_lru_start(alloc, index); - ret = list_lru_add(&binder_alloc_lru, &page->lru); + ret = list_lru_add_obj(&binder_alloc_lru, &page->lru); WARN_ON(!ret); trace_binder_free_lru_end(alloc, index); @@ -848,7 +848,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc) if (!alloc->pages[i].page_ptr) continue; - on_lru = list_lru_del(&binder_alloc_lru, + on_lru = list_lru_del_obj(&binder_alloc_lru, &alloc->pages[i].lru); page_addr = alloc->buffer + i * PAGE_SIZE; binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, @@ -1287,4 +1287,3 @@ int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset, dest, bytes); } - diff --git a/fs/dcache.c b/fs/dcache.c index 25ac74d30bff..482d1b34d88d 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -428,7 +428,8 @@ static void d_lru_add(struct dentry *dentry) this_cpu_inc(nr_dentry_unused); if (d_is_negative(dentry)) this_cpu_inc(nr_dentry_negative); - WARN_ON_ONCE(!list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru)); + WARN_ON_ONCE(!list_lru_add_obj( + &dentry->d_sb->s_dentry_lru, &dentry->d_lru)); } static void d_lru_del(struct dentry *dentry) @@ -438,7 +439,8 @@ static void d_lru_del(struct dentry *dentry) this_cpu_dec(nr_dentry_unused); if (d_is_negative(dentry)) this_cpu_dec(nr_dentry_negative); - WARN_ON_ONCE(!list_lru_del(&dentry->d_sb->s_dentry_lru, &dentry->d_lru)); + WARN_ON_ONCE(!list_lru_del_obj( + &dentry->d_sb->s_dentry_lru, &dentry->d_lru)); } static void d_shrink_del(struct dentry *dentry) @@ -1240,7 +1242,7 @@ static enum lru_status dentry_lru_isolate(struct list_head *item, * * This is guaranteed by the fact that all LRU management * functions are intermediated by the LRU API calls like - * list_lru_add and list_lru_del. List movement in this file + * list_lru_add_obj and list_lru_del_obj. List movement in this file * only ever occur through this functions or through callbacks * like this one, that are called from the LRU API. 
* diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c index 2f1328af34f4..72015594bc83 100644 --- a/fs/gfs2/quota.c +++ b/fs/gfs2/quota.c @@ -271,7 +271,7 @@ static struct gfs2_quota_data *gfs2_qd_search_bucket(unsigned int hash, if (qd->qd_sbd != sdp) continue; if (lockref_get_not_dead(&qd->qd_lockref)) { - list_lru_del(&gfs2_qd_lru, &qd->qd_lru); + list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru); return qd; } } @@ -344,7 +344,7 @@ static void qd_put(struct gfs2_quota_data *qd) } qd->qd_lockref.count = 0; - list_lru_add(&gfs2_qd_lru, &qd->qd_lru); + list_lru_add_obj(&gfs2_qd_lru, &qd->qd_lru); spin_unlock(&qd->qd_lockref.lock); } @@ -1508,7 +1508,7 @@ void gfs2_quota_cleanup(struct gfs2_sbd *sdp) lockref_mark_dead(&qd->qd_lockref); spin_unlock(&qd->qd_lockref.lock); - list_lru_del(&gfs2_qd_lru, &qd->qd_lru); + list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru); list_add(&qd->qd_lru, &dispose); } spin_unlock(&qd_lock); diff --git a/fs/inode.c b/fs/inode.c index 84bc3c76e5cc..f889ba8dccd9 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -462,7 +462,7 @@ static void __inode_add_lru(struct inode *inode, bool rotate) if (!mapping_shrinkable(&inode->i_data)) return; - if (list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru)) + if (list_lru_add_obj(&inode->i_sb->s_inode_lru, &inode->i_lru)) this_cpu_inc(nr_unused); else if (rotate) inode->i_state |= I_REFERENCED; @@ -480,7 +480,7 @@ void inode_add_lru(struct inode *inode) static void inode_lru_list_del(struct inode *inode) { - if (list_lru_del(&inode->i_sb->s_inode_lru, &inode->i_lru)) + if (list_lru_del_obj(&inode->i_sb->s_inode_lru, &inode->i_lru)) this_cpu_dec(nr_unused); } diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c index 2ad66a8922f4..49aaf28a6950 100644 --- a/fs/nfs/nfs42xattr.c +++ b/fs/nfs/nfs42xattr.c @@ -132,7 +132,7 @@ nfs4_xattr_entry_lru_add(struct nfs4_xattr_entry *entry) lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ? &nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru; - return list_lru_add(lru, &entry->lru); + return list_lru_add_obj(lru, &entry->lru); } static bool @@ -143,7 +143,7 @@ nfs4_xattr_entry_lru_del(struct nfs4_xattr_entry *entry) lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ? 
&nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru; - return list_lru_del(lru, &entry->lru); + return list_lru_del_obj(lru, &entry->lru); } /* @@ -349,7 +349,7 @@ nfs4_xattr_cache_unlink(struct inode *inode) oldcache = nfsi->xattr_cache; if (oldcache != NULL) { - list_lru_del(&nfs4_xattr_cache_lru, &oldcache->lru); + list_lru_del_obj(&nfs4_xattr_cache_lru, &oldcache->lru); oldcache->inode = NULL; } nfsi->xattr_cache = NULL; @@ -474,7 +474,7 @@ nfs4_xattr_get_cache(struct inode *inode, int add) kref_get(&cache->ref); nfsi->xattr_cache = cache; cache->inode = inode; - list_lru_add(&nfs4_xattr_cache_lru, &cache->lru); + list_lru_add_obj(&nfs4_xattr_cache_lru, &cache->lru); } spin_unlock(&inode->i_lock); diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c index 9c62b4502539..82352c100b49 100644 --- a/fs/nfsd/filecache.c +++ b/fs/nfsd/filecache.c @@ -322,7 +322,7 @@ nfsd_file_check_writeback(struct nfsd_file *nf) static bool nfsd_file_lru_add(struct nfsd_file *nf) { set_bit(NFSD_FILE_REFERENCED, &nf->nf_flags); - if (list_lru_add(&nfsd_file_lru, &nf->nf_lru)) { + if (list_lru_add_obj(&nfsd_file_lru, &nf->nf_lru)) { trace_nfsd_file_lru_add(nf); return true; } @@ -331,7 +331,7 @@ static bool nfsd_file_lru_add(struct nfsd_file *nf) static bool nfsd_file_lru_remove(struct nfsd_file *nf) { - if (list_lru_del(&nfsd_file_lru, &nf->nf_lru)) { + if (list_lru_del_obj(&nfsd_file_lru, &nf->nf_lru)) { trace_nfsd_file_lru_del(nf); return true; } diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 9e7ba04572db..9c2654a8d24b 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -169,7 +169,7 @@ xfs_buf_stale( atomic_set(&bp->b_lru_ref, 0); if (!(bp->b_state & XFS_BSTATE_DISPOSE) && - (list_lru_del(&bp->b_target->bt_lru, &bp->b_lru))) + (list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru))) atomic_dec(&bp->b_hold); ASSERT(atomic_read(&bp->b_hold) >= 1); @@ -1047,7 +1047,7 @@ xfs_buf_rele( * buffer for the LRU and clear the (now stale) dispose list * state flag */ - if (list_lru_add(&bp->b_target->bt_lru, &bp->b_lru)) { + if (list_lru_add_obj(&bp->b_target->bt_lru, &bp->b_lru)) { bp->b_state &= ~XFS_BSTATE_DISPOSE; atomic_inc(&bp->b_hold); } @@ -1060,7 +1060,7 @@ xfs_buf_rele( * was on was the disposal list */ if (!(bp->b_state & XFS_BSTATE_DISPOSE)) { - list_lru_del(&bp->b_target->bt_lru, &bp->b_lru); + list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru); } else { ASSERT(list_empty(&bp->b_lru)); } diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index ac6ba646624d..49f619f5aa96 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -1064,7 +1064,7 @@ xfs_qm_dqput( struct xfs_quotainfo *qi = dqp->q_mount->m_quotainfo; trace_xfs_dqput_free(dqp); - if (list_lru_add(&qi->qi_lru, &dqp->q_lru)) + if (list_lru_add_obj(&qi->qi_lru, &dqp->q_lru)) XFS_STATS_INC(dqp->q_mount, xs_qm_dquot_unused); } xfs_dqunlock(dqp); diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 94a7932ac570..67d0a8564ff3 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -171,7 +171,7 @@ xfs_qm_dqpurge( * hits zero, so it really should be on the freelist here. 
*/ ASSERT(!list_empty(&dqp->q_lru)); - list_lru_del(&qi->qi_lru, &dqp->q_lru); + list_lru_del_obj(&qi->qi_lru, &dqp->q_lru); XFS_STATS_DEC(dqp->q_mount, xs_qm_dquot_unused); xfs_qm_dqdestroy(dqp); diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index b35968ee9fb5..5ef217443299 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -75,6 +75,8 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren * list_lru_add: add an element to the lru list's tail * @list_lru: the lru pointer * @item: the item to be added. + * @memcg: the cgroup of the sublist to add the item to. + * @nid: the node id of the sublist to add the item to. * * If the element is already part of a list, this function returns doing * nothing. Therefore the caller does not need to keep state about whether or @@ -87,12 +89,28 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren * * Return value: true if the list was updated, false otherwise */ -bool list_lru_add(struct list_lru *lru, struct list_head *item); +bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); /** - * list_lru_del: delete an element to the lru list + * list_lru_add_obj: add an element to the lru list's tail + * @list_lru: the lru pointer + * @item: the item to be added. + * + * This function is similar to list_lru_add(), but the NUMA node and the + * memcg of the sublist is determined by @item list_head. This assumption is + * valid for slab objects LRU such as dentries, inodes, etc. + * + * Return value: true if the list was updated, false otherwise + */ +bool list_lru_add_obj(struct list_lru *lru, struct list_head *item); + +/** + * list_lru_del: delete an element from the lru list * @list_lru: the lru pointer * @item: the item to be deleted. + * @memcg: the cgroup of the sublist to delete the item from. + * @nid: the node id of the sublist to delete the item from. * * This function works analogously as list_lru_add in terms of list * manipulation. The comments about an element already pertaining to @@ -100,7 +118,21 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item); * * Return value: true if the list was updated, false otherwise */ -bool list_lru_del(struct list_lru *lru, struct list_head *item); +bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); + +/** + * list_lru_del_obj: delete an element from the lru list + * @list_lru: the lru pointer + * @item: the item to be deleted. + * + * This function is similar to list_lru_del(), but the NUMA node and the + * memcg of the sublist is determined by @item list_head. This assumption is + * valid for slab objects LRU such as dentries, inodes, etc. + * + * Return value: true if the list was updated, false otherwise. + */ +bool list_lru_del_obj(struct list_lru *lru, struct list_head *item); /** * list_lru_count_one: return the number of objects currently held by @lru @@ -136,6 +168,14 @@ static inline unsigned long list_lru_count(struct list_lru *lru) void list_lru_isolate(struct list_lru_one *list, struct list_head *item); void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item, struct list_head *head); +/* + * list_lru_putback: undo list_lru_isolate. + * + * Since we might have dropped the LRU lock in between, recompute list_lru_one + * from the node's id and memcg. 
+ */ +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item, struct list_lru_one *list, spinlock_t *lock, void *cb_arg); diff --git a/mm/list_lru.c b/mm/list_lru.c index a05e5bef3b40..fcca67ac26ec 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -116,21 +116,19 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr, } #endif /* CONFIG_MEMCG_KMEM */ -bool list_lru_add(struct list_lru *lru, struct list_head *item) +bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) { - int nid = page_to_nid(virt_to_page(item)); struct list_lru_node *nlru = &lru->node[nid]; - struct mem_cgroup *memcg; struct list_lru_one *l; spin_lock(&nlru->lock); if (list_empty(item)) { - l = list_lru_from_kmem(lru, nid, item, &memcg); + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); list_add_tail(item, &l->list); /* Set shrinker bit if the first element was added */ if (!l->nr_items++) - set_shrinker_bit(memcg, nid, - lru_shrinker_id(lru)); + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru)); nlru->nr_items++; spin_unlock(&nlru->lock); return true; @@ -140,15 +138,25 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item) } EXPORT_SYMBOL_GPL(list_lru_add); -bool list_lru_del(struct list_lru *lru, struct list_head *item) +bool list_lru_add_obj(struct list_lru *lru, struct list_head *item) { int nid = page_to_nid(virt_to_page(item)); + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ? + mem_cgroup_from_slab_obj(item) : NULL; + + return list_lru_add(lru, item, nid, memcg); +} +EXPORT_SYMBOL_GPL(list_lru_add_obj); + +bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) +{ struct list_lru_node *nlru = &lru->node[nid]; struct list_lru_one *l; spin_lock(&nlru->lock); if (!list_empty(item)) { - l = list_lru_from_kmem(lru, nid, item, NULL); + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); list_del_init(item); l->nr_items--; nlru->nr_items--; @@ -160,6 +168,16 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item) } EXPORT_SYMBOL_GPL(list_lru_del); +bool list_lru_del_obj(struct list_lru *lru, struct list_head *item) +{ + int nid = page_to_nid(virt_to_page(item)); + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ? 
+ mem_cgroup_from_slab_obj(item) : NULL; + + return list_lru_del(lru, item, nid, memcg); +} +EXPORT_SYMBOL_GPL(list_lru_del_obj); + void list_lru_isolate(struct list_lru_one *list, struct list_head *item) { list_del_init(item); @@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item, } EXPORT_SYMBOL_GPL(list_lru_isolate_move); +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) +{ + struct list_lru_one *list = + list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); + + if (list_empty(item)) { + list_add_tail(item, &list->list); + if (!list->nr_items++) + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru)); + } +} +EXPORT_SYMBOL_GPL(list_lru_putback); + unsigned long list_lru_count_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg) { diff --git a/mm/workingset.c b/mm/workingset.c index 11045febc383..7d3dacab8451 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -631,12 +631,12 @@ void workingset_update_node(struct xa_node *node) if (node->count && node->count == node->nr_values) { if (list_empty(&node->private_list)) { - list_lru_add(&shadow_nodes, &node->private_list); + list_lru_add_obj(&shadow_nodes, &node->private_list); __inc_lruvec_kmem_state(node, WORKINGSET_NODES); } } else { if (!list_empty(&node->private_list)) { - list_lru_del(&shadow_nodes, &node->private_list); + list_lru_del_obj(&shadow_nodes, &node->private_list); __dec_lruvec_kmem_state(node, WORKINGSET_NODES); } } From patchwork Mon Nov 6 18:31:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nhat Pham X-Patchwork-Id: 162118 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:8f47:0:b0:403:3b70:6f57 with SMTP id j7csp2850217vqu; Mon, 6 Nov 2023 10:32:30 -0800 (PST) X-Google-Smtp-Source: AGHT+IGKrkY/DqBiqk/j/BDA4JvW+sYyykrVBANYkhujSI1/ejgiN8hbSmuXs4Pm79py1im/regS X-Received: by 2002:a17:902:d1c3:b0:1c9:c91d:3fd6 with SMTP id g3-20020a170902d1c300b001c9c91d3fd6mr19954851plb.5.1699295549754; Mon, 06 Nov 2023 10:32:29 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1699295549; cv=none; d=google.com; s=arc-20160816; b=DLrCfVWQpYPWJAghWNHBj3g+D0m205o9RZRMo4zLEyN0fWgo0E0d/Onx4AzQTXxjVB /iUKAaZct0QS3DDFAbXyPnLgR7rB5UjleKOstDWAhmlKYa0Mt3m8Gmnot29idArHeyfH seKD7h9MUpJXNet7vBFVo1K2akho1N98Cf7yCtOgZTbEFN5PlnprQn8zJ4aCU7pKPzAw tEiQgG6E7+7iO8M5O4CgL+7oBHtmBwC6TSv+jc+34UioADob7bAD7cgGTvFl9EQHhKwg dKQ6qJSmVF5PYt+SUm19PDJ9nm1itjx1oy7/RxAFbG8Y5tZn5PwNDW1/d13CCTYMnAhM xrgw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=nLPnyP0RaynBJicEAOxae+atCQWtlTDNIjOFc7rQd7s=; fh=5ynFD2G6LA0fdRuAOSZxbBxHoIIG4U3xrwbnkupZs28=; b=mIAgqGw0dp9SYfcXK3m91XMgb9dYkyMzauRycj5T2d0KJYytsOFkrakEtZ6gMpCstt 6i+8lmnI08EJgqLYtcJm5hay9yI75dyEr7tdDRk2o4BBOMq+jpgzpq9tcLmS5h/Adl4Q /iCFREvd2rV4YmsQ2EJ4QlOcY/BInU8PDfp5ftBqIK4kxFnMNjJ/opfL/4qjKhQe5UCr /O/+O8U5PJ+Px2NQNxc41Gxzr/ujUw82WnUTv9NWxugDlWj/T8exYxrARdD6TBMI6fqT p+qoTKgvaAGCIhg0RcohIX7VCBYEMSU+Vj56G7bbBx15j7oojX2+FObTmPRm+70q/QuF LElA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@gmail.com header.s=20230601 header.b=a6w8iggS; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=QUARANTINE 
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v5 3/6] zswap: make shrinking memcg-aware
Date: Mon, 6 Nov 2023 10:31:56 -0800
Message-Id: <20231106183159.3562879-4-nphamcs@gmail.com>
In-Reply-To: <20231106183159.3562879-1-nphamcs@gmail.com>
References: <20231106183159.3562879-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

Currently, we only have a single global LRU for zswap. This makes it impossible to perform workload-specific shrinking - a memcg cannot determine which pages in the pool it owns, and often ends up writing back pages from other memcgs. This issue has been previously observed in practice and mitigated by simply disabling memcg-initiated shrinking:

https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u

This patch fully resolves the issue by replacing the global zswap LRU with memcg- and NUMA-specific LRUs, and modifies the reclaim logic:

a) When a store attempt hits a memcg limit, it now triggers a synchronous reclaim attempt that, if successful, allows the new hotter page to be accepted by zswap.
b) If the store attempt instead hits the global zswap limit, it will trigger an asynchronous reclaim attempt, in which a memcg is selected for reclaim in a round-robin-like fashion.
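For readers who do not want to parse the whole diff, the following is a condensed, illustrative sketch of the limit handling described in (a) and (b) above. It only makes sense in the context of mm/zswap.c (shrink_memcg() is introduced by this patch; shrink_wq, zswap_is_full() and the cgroup helpers already exist there); locking, statistics, the objcg reference the real code keeps for charging, and most error paths are omitted.

/* Condensed sketch of the limit handling on the zswap store path. */
static bool zswap_store_limit_sketch(struct folio *folio, struct zswap_pool *pool)
{
	struct obj_cgroup *objcg = get_obj_cgroup_from_folio(folio);

	/* (a) cgroup zswap limit hit: reclaim synchronously from that memcg's LRU */
	if (objcg && !obj_cgroup_may_zswap(objcg)) {
		struct mem_cgroup *memcg = get_mem_cgroup_from_objcg(objcg);
		int err = shrink_memcg(memcg);

		mem_cgroup_put(memcg);
		if (err)
			return false;		/* reject the incoming page */
	}

	/*
	 * (b) global zswap limit hit: kick the asynchronous worker, which
	 * now picks a memcg round-robin via mem_cgroup_iter() and shrinks
	 * that memcg's LRU instead of a single global one.
	 */
	if (zswap_is_full())
		queue_work(shrink_wq, &pool->shrink_work);

	return true;
}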
Signed-off-by: Domenico Cerasuolo Co-developed-by: Nhat Pham Signed-off-by: Nhat Pham --- include/linux/memcontrol.h | 5 + include/linux/zswap.h | 2 + mm/memcontrol.c | 2 + mm/swap.h | 3 +- mm/swap_state.c | 24 +++- mm/zswap.c | 252 +++++++++++++++++++++++++++++-------- 6 files changed, 227 insertions(+), 61 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 55c85f952afd..95f6c9e60ed1 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1187,6 +1187,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) return NULL; } +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) +{ + return NULL; +} + static inline bool folio_memcg_kmem(struct folio *folio) { return false; diff --git a/include/linux/zswap.h b/include/linux/zswap.h index 2a60ce39cfde..e571e393669b 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio); void zswap_invalidate(int type, pgoff_t offset); void zswap_swapon(int type); void zswap_swapoff(int type); +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg); #else @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio) static inline void zswap_invalidate(int type, pgoff_t offset) {} static inline void zswap_swapon(int type) {} static inline void zswap_swapoff(int type) {} +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {} #endif diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 6f7fc0101252..2ef49b471a16 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5640,6 +5640,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) page_counter_set_min(&memcg->memory, 0); page_counter_set_low(&memcg->memory, 0); + zswap_memcg_offline_cleanup(memcg); + memcg_offline_kmem(memcg); reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg); diff --git a/mm/swap.h b/mm/swap.h index 73c332ee4d91..c0dc73e10e91 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct swap_iocb **plug); struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx, - bool *new_page_allocated); + bool *new_page_allocated, + bool skip_if_exists); struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag, struct mempolicy *mpol, pgoff_t ilx); struct page *swapin_readahead(swp_entry_t entry, gfp_t flag, diff --git a/mm/swap_state.c b/mm/swap_state.c index 85d9e5806a6a..6c84236382f3 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping, struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx, - bool *new_page_allocated) + bool *new_page_allocated, + bool skip_if_exists) { struct swap_info_struct *si; struct folio *folio; @@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, if (err != -EEXIST) goto fail_put_swap; + /* + * Protect against a recursive call to __read_swap_cache_async() + * on the same entry waiting forever here because SWAP_HAS_CACHE + * is set but the folio is not the swap cache yet. This can + * happen today if mem_cgroup_swapin_charge_folio() below + * triggers reclaim through zswap, which may call + * __read_swap_cache_async() in the writeback path. 
+ */ + if (skip_if_exists) + goto fail_put_swap; + /* * We might race against __delete_from_swap_cache(), and * stumble across a swap_map entry whose SWAP_HAS_CACHE @@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, mpol = get_vma_policy(vma, addr, 0, &ilx); page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); mpol_cond_put(mpol); if (page_allocated) @@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, /* Ok, do the async read-ahead now */ page = __read_swap_cache_async( swp_entry(swp_type(entry), offset), - gfp_mask, mpol, ilx, &page_allocated); + gfp_mask, mpol, ilx, &page_allocated, false); if (!page) continue; if (page_allocated) { @@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, skip: /* The page was likely read above, so no need for plugging here */ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); return page; @@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, pte_unmap(pte); pte = NULL; page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); if (!page) continue; if (page_allocated) { @@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, skip: /* The page was likely read above, so no need for plugging here */ page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx, - &page_allocated); + &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); return page; diff --git a/mm/zswap.c b/mm/zswap.c index 2e691cd1a466..2654b0d214cc 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -35,6 +35,7 @@ #include #include #include +#include #include "swap.h" #include "internal.h" @@ -172,8 +173,9 @@ struct zswap_pool { struct work_struct shrink_work; struct hlist_node node; char tfm_name[CRYPTO_MAX_ALG_NAME]; - struct list_head lru; - spinlock_t lru_lock; + struct list_lru list_lru; + spinlock_t next_shrink_lock; + struct mem_cgroup *next_shrink; }; /* @@ -289,15 +291,42 @@ static void zswap_update_total_size(void) zswap_pool_total_size = total; } +/* should be called under RCU */ +static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry) +{ + return entry->objcg ? 
obj_cgroup_memcg(entry->objcg) : NULL; +} + +static inline int entry_to_nid(struct zswap_entry *entry) +{ + return page_to_nid(virt_to_page(entry)); +} + +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) +{ + struct zswap_pool *pool; + + /* lock out zswap pools list modification */ + spin_lock(&zswap_pools_lock); + list_for_each_entry(pool, &zswap_pools, list) { + spin_lock(&pool->next_shrink_lock); + if (pool->next_shrink == memcg) + pool->next_shrink = + mem_cgroup_iter(NULL, pool->next_shrink, NULL, true); + spin_unlock(&pool->next_shrink_lock); + } + spin_unlock(&zswap_pools_lock); +} + /********************************* * zswap entry functions **********************************/ static struct kmem_cache *zswap_entry_cache; -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp) +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid) { struct zswap_entry *entry; - entry = kmem_cache_alloc(zswap_entry_cache, gfp); + entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid); if (!entry) return NULL; entry->refcount = 1; @@ -310,6 +339,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry) kmem_cache_free(zswap_entry_cache, entry); } +/********************************* +* lru functions +**********************************/ +static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + struct mem_cgroup *memcg; + + /* + * Note that it is safe to use rcu_read_lock() here, even in the face of + * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection + * used in list_lru lookup, only two scenarios are possible: + * + * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The + * new entry will be reparented to memcg's parent's list_lru. + * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The + * new entry will be added directly to memcg's parent's list_lru. + * + * Similar reasoning holds for list_lru_del() and list_lru_putback(). 
+ */ + rcu_read_lock(); + memcg = get_mem_cgroup_from_entry(entry); + /* will always succeed */ + list_lru_add(list_lru, &entry->lru, nid, memcg); + rcu_read_unlock(); +} + +static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + struct mem_cgroup *memcg; + + rcu_read_lock(); + memcg = get_mem_cgroup_from_entry(entry); + /* will always succeed */ + list_lru_del(list_lru, &entry->lru, nid, memcg); + rcu_read_unlock(); +} + +static void zswap_lru_putback(struct list_lru *list_lru, + struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + spinlock_t *lock = &list_lru->node[nid].lock; + struct mem_cgroup *memcg; + + rcu_read_lock(); + memcg = get_mem_cgroup_from_entry(entry); + spin_lock(lock); + /* we cannot use list_lru_add here, because it increments node's lru count */ + list_lru_putback(list_lru, &entry->lru, nid, memcg); + spin_unlock(lock); + rcu_read_unlock(); +} + /********************************* * rbtree functions **********************************/ @@ -394,9 +478,7 @@ static void zswap_free_entry(struct zswap_entry *entry) if (!entry->length) atomic_dec(&zswap_same_filled_pages); else { - spin_lock(&entry->pool->lru_lock); - list_del(&entry->lru); - spin_unlock(&entry->pool->lru_lock); + zswap_lru_del(&entry->pool->list_lru, entry); zpool_free(zswap_find_zpool(entry), entry->handle); zswap_pool_put(entry->pool); } @@ -630,21 +712,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree, zswap_entry_put(tree, entry); } -static int zswap_reclaim_entry(struct zswap_pool *pool) +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l, + spinlock_t *lock, void *arg) { - struct zswap_entry *entry; + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru); struct zswap_tree *tree; pgoff_t swpoffset; - int ret; + enum lru_status ret = LRU_REMOVED_RETRY; + int writeback_result; - /* Get an entry off the LRU */ - spin_lock(&pool->lru_lock); - if (list_empty(&pool->lru)) { - spin_unlock(&pool->lru_lock); - return -EINVAL; - } - entry = list_last_entry(&pool->lru, struct zswap_entry, lru); - list_del_init(&entry->lru); /* * Once the lru lock is dropped, the entry might get freed. The * swpoffset is copied to the stack, and entry isn't deref'd again @@ -652,28 +728,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool) */ swpoffset = swp_offset(entry->swpentry); tree = zswap_trees[swp_type(entry->swpentry)]; - spin_unlock(&pool->lru_lock); + list_lru_isolate(l, item); + /* + * It's safe to drop the lock here because we return either + * LRU_REMOVED_RETRY or LRU_RETRY. 
+ */ + spin_unlock(lock); /* Check for invalidate() race */ spin_lock(&tree->lock); - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) { - ret = -EAGAIN; + if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) goto unlock; - } + /* Hold a reference to prevent a free during writeback */ zswap_entry_get(entry); spin_unlock(&tree->lock); - ret = zswap_writeback_entry(entry, tree); + writeback_result = zswap_writeback_entry(entry, tree); spin_lock(&tree->lock); - if (ret) { - /* Writeback failed, put entry back on LRU */ - spin_lock(&pool->lru_lock); - list_move(&entry->lru, &pool->lru); - spin_unlock(&pool->lru_lock); + if (writeback_result) { + zswap_reject_reclaim_fail++; + zswap_lru_putback(&entry->pool->list_lru, entry); + ret = LRU_RETRY; goto put_unlock; } + zswap_written_back_pages++; /* * Writeback started successfully, the page now belongs to the @@ -687,27 +767,76 @@ static int zswap_reclaim_entry(struct zswap_pool *pool) zswap_entry_put(tree, entry); unlock: spin_unlock(&tree->lock); - return ret ? -EAGAIN : 0; + spin_lock(lock); + return ret; +} + +static int shrink_memcg(struct mem_cgroup *memcg) +{ + struct zswap_pool *pool; + int nid, shrunk = 0; + + /* + * Skip zombies because their LRUs are reparented and we would be + * reclaiming from the parent instead of the dead memcg. + */ + if (memcg && !mem_cgroup_online(memcg)) + return -ENOENT; + + pool = zswap_pool_current_get(); + if (!pool) + return -EINVAL; + + for_each_node_state(nid, N_NORMAL_MEMORY) { + unsigned long nr_to_walk = 1; + + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg, + &shrink_memcg_cb, NULL, &nr_to_walk); + } + zswap_pool_put(pool); + return shrunk ? 0 : -EAGAIN; } static void shrink_worker(struct work_struct *w) { struct zswap_pool *pool = container_of(w, typeof(*pool), shrink_work); + struct mem_cgroup *memcg; int ret, failures = 0; + /* global reclaim will select cgroup in a round-robin fashion. */ do { - ret = zswap_reclaim_entry(pool); - if (ret) { - zswap_reject_reclaim_fail++; - if (ret != -EAGAIN) - break; + spin_lock(&pool->next_shrink_lock); + memcg = pool->next_shrink = + mem_cgroup_iter(NULL, pool->next_shrink, NULL, true); + + /* full round trip */ + if (!memcg) { + spin_unlock(&pool->next_shrink_lock); if (++failures == MAX_RECLAIM_RETRIES) break; + + goto resched; } + + /* + * Acquire an extra reference to the iterated memcg in case the + * original reference is dropped by the zswap offlining callback. 
+ */ + css_get(&memcg->css); + spin_unlock(&pool->next_shrink_lock); + + ret = shrink_memcg(memcg); + mem_cgroup_put(memcg); + + if (ret == -EINVAL) + break; + if (ret && ++failures == MAX_RECLAIM_RETRIES) + break; + +resched: cond_resched(); } while (!zswap_can_accept()); - zswap_pool_put(pool); } static struct zswap_pool *zswap_pool_create(char *type, char *compressor) @@ -765,11 +894,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) */ kref_init(&pool->kref); INIT_LIST_HEAD(&pool->list); - INIT_LIST_HEAD(&pool->lru); - spin_lock_init(&pool->lru_lock); + list_lru_init_memcg(&pool->list_lru, NULL); INIT_WORK(&pool->shrink_work, shrink_worker); zswap_pool_debug("created", pool); + spin_lock_init(&pool->next_shrink_lock); return pool; @@ -832,6 +961,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool) cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); free_percpu(pool->acomp_ctx); + list_lru_destroy(&pool->list_lru); + + spin_lock(&pool->next_shrink_lock); + mem_cgroup_put(pool->next_shrink); + pool->next_shrink = NULL; + spin_unlock(&pool->next_shrink_lock); + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++) zpool_destroy_pool(pool->zpools[i]); kfree(pool); @@ -1079,7 +1215,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry, /* try to allocate swap cache page */ mpol = get_task_policy(current); page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol, - NO_INTERLEAVE_INDEX, &page_was_allocated); + NO_INTERLEAVE_INDEX, &page_was_allocated, true); if (!page) { ret = -ENOMEM; goto fail; @@ -1145,7 +1281,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry, /* start writeback */ __swap_writepage(page, &wbc); put_page(page); - zswap_written_back_pages++; return ret; @@ -1202,6 +1337,7 @@ bool zswap_store(struct folio *folio) struct scatterlist input, output; struct crypto_acomp_ctx *acomp_ctx; struct obj_cgroup *objcg = NULL; + struct mem_cgroup *memcg = NULL; struct zswap_pool *pool; struct zpool *zpool; unsigned int dlen = PAGE_SIZE; @@ -1233,15 +1369,15 @@ bool zswap_store(struct folio *folio) zswap_invalidate_entry(tree, dupentry); } spin_unlock(&tree->lock); - - /* - * XXX: zswap reclaim does not work with cgroups yet. Without a - * cgroup-aware entry LRU, we will push out entries system-wide based on - * local cgroup limits. 
- */ objcg = get_obj_cgroup_from_folio(folio); - if (objcg && !obj_cgroup_may_zswap(objcg)) - goto reject; + if (objcg && !obj_cgroup_may_zswap(objcg)) { + memcg = get_mem_cgroup_from_objcg(objcg); + if (shrink_memcg(memcg)) { + mem_cgroup_put(memcg); + goto reject; + } + mem_cgroup_put(memcg); + } /* reclaim space if needed */ if (zswap_is_full()) { @@ -1258,7 +1394,7 @@ bool zswap_store(struct folio *folio) } /* allocate entry */ - entry = zswap_entry_cache_alloc(GFP_KERNEL); + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page)); if (!entry) { zswap_reject_kmemcache_fail++; goto reject; @@ -1285,6 +1421,15 @@ bool zswap_store(struct folio *folio) if (!entry->pool) goto freepage; + if (objcg) { + memcg = get_mem_cgroup_from_objcg(objcg); + if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) { + mem_cgroup_put(memcg); + goto put_pool; + } + mem_cgroup_put(memcg); + } + /* compress */ acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx); @@ -1361,9 +1506,8 @@ bool zswap_store(struct folio *folio) zswap_invalidate_entry(tree, dupentry); } if (entry->length) { - spin_lock(&entry->pool->lru_lock); - list_add(&entry->lru, &entry->pool->lru); - spin_unlock(&entry->pool->lru_lock); + INIT_LIST_HEAD(&entry->lru); + zswap_lru_add(&entry->pool->list_lru, entry); } spin_unlock(&tree->lock); @@ -1376,6 +1520,7 @@ bool zswap_store(struct folio *folio) put_dstmem: mutex_unlock(acomp_ctx->mutex); +put_pool: zswap_pool_put(entry->pool); freepage: zswap_entry_cache_free(entry); @@ -1470,9 +1615,8 @@ bool zswap_load(struct folio *folio) zswap_invalidate_entry(tree, entry); folio_mark_dirty(folio); } else if (entry->length) { - spin_lock(&entry->pool->lru_lock); - list_move(&entry->lru, &entry->pool->lru); - spin_unlock(&entry->pool->lru_lock); + zswap_lru_del(&entry->pool->list_lru, entry); + zswap_lru_add(&entry->pool->list_lru, entry); } zswap_entry_put(tree, entry); spin_unlock(&tree->lock); From patchwork Mon Nov 6 18:31:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nhat Pham X-Patchwork-Id: 162117 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:8f47:0:b0:403:3b70:6f57 with SMTP id j7csp2850149vqu; Mon, 6 Nov 2023 10:32:22 -0800 (PST) X-Google-Smtp-Source: AGHT+IG+OkJ/BxeLIB+q/bEfpudeVa0lJyaLk3PYQYEFWfE+wr76eyp7T+zIM9CLnQM3gJTWA1Mb X-Received: by 2002:a17:903:1c6:b0:1cc:8b4c:9ba1 with SMTP id e6-20020a17090301c600b001cc8b4c9ba1mr10447267plh.50.1699295542624; Mon, 06 Nov 2023 10:32:22 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1699295542; cv=none; d=google.com; s=arc-20160816; b=ZQ4mkjNkwPRdxlcPhbK56I6nNLoAI3ppN/z+F9nQtyT19535EG0P3KPXad9V4tSAdI sYfCo2CUQvq80VxO4/yxbqmhFmp1XtsPdzp0TPnWX1UB9oq5V3KE+YaJaNfc75P/6vxd PX781cXBaG3oaIedsEQEaTTJyA8UBuo5LyK4hDSWrVsAHukmcOozqVhUX2oR+Cnr03hs bVfgvU/gPE4gkot26Yc7I5pHETBVlACslxnz0uqMHpw1Nzs5n1phtsRZrqx0l5WAnYa8 jQE3FK0hnOan7ceqABFVP3+AA690eFk018sZ7dcE3s4jjJxD5cb4Tr1NT3qE6XJUcI74 KGTg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=CKTpKLeSBdUqUozne8dOLG7XHxiGHFR4EgqMwDuCKlY=; fh=5ynFD2G6LA0fdRuAOSZxbBxHoIIG4U3xrwbnkupZs28=; b=BIxesLaD5JcLNpLaHMqvJKeHMAciC3TLaVy4bsySwgfNG59vHIdpt2siw0rAJ2nrnf THc4oqvSvwv/W68qFqSMe/gMw1ohU7WDle5jpJuXlezTjS2iNsDWS6VU5DKbkvqWiS9Q rO7kymYhI8TqAK1EZYjmwLjeolsgv8MK9m7OUU6vM/ZYASVTmVum4u8bMXy3PkpDqlPN 
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v5 4/6] mm: memcg: add per-memcg zswap writeback stat
Date: Mon, 6 Nov 2023 10:31:57 -0800
Message-Id: <20231106183159.3562879-5-nphamcs@gmail.com>
In-Reply-To: <20231106183159.3562879-1-nphamcs@gmail.com>
References: <20231106183159.3562879-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

Since zswap now writes back pages from memcg-specific LRUs, we need a new stat to show the writeback count for each memcg.
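The new counter is exported through memory.stat as "zswp_wb" (see the memcg_vm_event_stat and vmstat_text hunks below), so it can be read per cgroup from userspace. A minimal sketch of such a reader follows; the cgroup path is only an example, and the selftest updated in the next patch reads the same key through cg_read_key_long().

/* Minimal userspace reader for the per-memcg zswap writeback counter. */
#include <stdio.h>
#include <string.h>

static long read_zswp_wb(const char *memory_stat_path)
{
	char key[64];
	long val;
	FILE *f = fopen(memory_stat_path, "r");

	if (!f)
		return -1;
	while (fscanf(f, "%63s %ld", key, &val) == 2) {
		if (!strcmp(key, "zswp_wb")) {
			fclose(f);
			return val;
		}
	}
	fclose(f);
	return -1;
}

int main(void)
{
	/* Example path; substitute the cgroup under test. */
	long wb = read_zswp_wb("/sys/fs/cgroup/test/memory.stat");

	printf("zswp_wb: %ld\n", wb);
	return 0;
}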
Suggested-by: Nhat Pham Signed-off-by: Domenico Cerasuolo Signed-off-by: Nhat Pham --- include/linux/vm_event_item.h | 1 + mm/memcontrol.c | 1 + mm/vmstat.c | 1 + mm/zswap.c | 3 +++ 4 files changed, 6 insertions(+) diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index 8abfa1240040..3153359c3841 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -145,6 +145,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, #ifdef CONFIG_ZSWAP ZSWPIN, ZSWPOUT, + ZSWP_WB, #endif #ifdef CONFIG_X86 DIRECT_MAP_LEVEL2_SPLIT, diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 2ef49b471a16..e43b5aba8efc 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -593,6 +593,7 @@ static const unsigned int memcg_vm_event_stat[] = { #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP) ZSWPIN, ZSWPOUT, + ZSWP_WB, #endif #ifdef CONFIG_TRANSPARENT_HUGEPAGE THP_FAULT_ALLOC, diff --git a/mm/vmstat.c b/mm/vmstat.c index 359460deb377..5e5572f3b456 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1401,6 +1401,7 @@ const char * const vmstat_text[] = { #ifdef CONFIG_ZSWAP "zswpin", "zswpout", + "zswp_wb", #endif #ifdef CONFIG_X86 "direct_map_level2_splits", diff --git a/mm/zswap.c b/mm/zswap.c index 2654b0d214cc..03ee41a8b884 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -755,6 +755,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o } zswap_written_back_pages++; + if (entry->objcg) + count_objcg_event(entry->objcg, ZSWP_WB); + /* * Writeback started successfully, the page now belongs to the * swapcache. Drop the entry from zswap - unless invalidate already From patchwork Mon Nov 6 18:31:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nhat Pham X-Patchwork-Id: 162119 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:8f47:0:b0:403:3b70:6f57 with SMTP id j7csp2850253vqu; Mon, 6 Nov 2023 10:32:33 -0800 (PST) X-Google-Smtp-Source: AGHT+IH1rxH/0sOlMRAIc7HgrIcxTZDw7diLRkToS7+GtuF804jvVHboZdhGBapfN8sasnl7h8tO X-Received: by 2002:a17:90b:3a8c:b0:281:3a4a:2e61 with SMTP id om12-20020a17090b3a8c00b002813a4a2e61mr209190pjb.14.1699295553189; Mon, 06 Nov 2023 10:32:33 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1699295553; cv=none; d=google.com; s=arc-20160816; b=VNFwPKXrTQxDDIc8QrSTMIETk/6nPXvP3SyOKuVUfO5OuXCmHGV6dFpqu88rh5NcFt 0nCRazzlJTyO723RQg+0TNR1edYQKZuiyVb1BoZyhS1D/jPuhD2iSLU9S2wF523RCqBu gVx6C04rwunyZ5Sg1T0EBT+IGj8etbPYpKu8tL5OLMePVqT6gjNIYNP5hs7GnjQWFF8Z o+umqQtATmb7Y8zpBGi84V140aAiIFZDuZnxmomfY1zukHY2UaDPTrrLUUnUojPBN2+E hyXphsw2D8oeauBnbUyVfh7UlYTtCwxk3osfyokBbQlAzu5nGzCrE89PexSQKRrGHSWb I3LQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=fQKZ6uKAgGpieDKxwUlilWEMw/a+RaeqsurH8iiKhxA=; fh=5ynFD2G6LA0fdRuAOSZxbBxHoIIG4U3xrwbnkupZs28=; b=HVdyybASg4e+sxi4U2S+kEkV4jeAw5X1ZsXw8TyjzvCu5jXYqKAJeMVEqcYLR/G/FT zVxzBh8K2psju9Xrnx/25cCPBEVfNkLz1zW7qtWiFvzThHujdo37GZR7pxpV/rpoW4jR R/by4WI5LA1027GgHR5qR87Z3D8IxXvu0a3B+XvZSPpm4M7HHyYre5OKA3PPlNYdWClV foaq3Ux8NABODIwtHd+DDBOt9EixUlKuLNMSk+l6uzseBa7YQ2lVz0lVkBEbd7tiOpQB R6/IkWSiLolH80lMJ7FkBY9nbdJUOd4BbQTQBN/f8ckxJhGPhYOgxO0j8l5tumbDnsB1 wBMg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@gmail.com header.s=20230601 header.b=MQ3iJp7X; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org 
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v5 5/6] selftests: cgroup: update per-memcg zswap writeback selftest
Date: Mon, 6 Nov 2023 10:31:58 -0800
Message-Id: <20231106183159.3562879-6-nphamcs@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231106183159.3562879-1-nphamcs@gmail.com>
References: <20231106183159.3562879-1-nphamcs@gmail.com>
MIME-Version: 1.0

From: Domenico Cerasuolo

The memcg-zswap selftest is updated to match the behavior change
introduced by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
where zswap performs writeback for a specific memcg.

Signed-off-by: Domenico Cerasuolo
Signed-off-by: Nhat Pham
---
 tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
 1 file changed, 50 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index 49def87a909b..753a3b9de1ad 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
 	return read_int("/sys/kernel/debug/zswap/stored_pages", value);
 }
 
-static int get_zswap_written_back_pages(size_t *value)
+static int get_cg_wb_count(const char *cg)
 {
-	return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
+	return cg_read_key_long(cg, "memory.stat", "zswp_wb");
 }
 
 static int allocate_bytes(const char *cgroup, void *arg)
@@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
 	return 0;
 }
 
+static char *setup_test_group_1M(const char *root, const char *name)
+{
+	char *group_name = cg_name(root, name);
+
+	if (!group_name)
+		return NULL;
+	if (cg_create(group_name))
+		goto fail;
+	if (cg_write(group_name, "memory.max", "1M")) {
+		cg_destroy(group_name);
+		goto fail;
+	}
+	return group_name;
+fail:
+	free(group_name);
+	return NULL;
+}
+
 /*
  * When trying to store a memcg page in zswap, if the memcg hits its memory
- * limit in zswap, writeback should not be triggered.
- *
- * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
- * not zswap"). Needs to be revised when a per memcg writeback mechanism is
- * implemented.
+ * limit in zswap, writeback should affect only the zswapped pages of that
+ * memcg.
 */
 static int test_no_invasive_cgroup_shrink(const char *root)
 {
-	size_t written_back_before, written_back_after;
 	int ret = KSFT_FAIL;
-	char *test_group;
+	size_t control_allocation_size = MB(10);
+	char *control_allocation, *wb_group = NULL, *control_group = NULL;
 
 	/* Set up */
-	test_group = cg_name(root, "no_shrink_test");
-	if (!test_group)
-		goto out;
-	if (cg_create(test_group))
+	wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
+	if (!wb_group)
+		return KSFT_FAIL;
+	if (cg_write(wb_group, "memory.zswap.max", "10K"))
 		goto out;
-	if (cg_write(test_group, "memory.max", "1M"))
+	control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
+	if (!control_group)
 		goto out;
-	if (cg_write(test_group, "memory.zswap.max", "10K"))
+
+	/* Push some memory of the control group into zswap */
+	if (cg_enter_current(control_group))
 		goto out;
-	if (get_zswap_written_back_pages(&written_back_before))
+	control_allocation = malloc(control_allocation_size);
+	for (int i = 0; i < control_allocation_size; i += 4095)
+		control_allocation[i] = 'a';
+	if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
 		goto out;
 
-	/* Allocate 10x memory.max to push memory into zswap */
-	if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
+	/* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
+	if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
 		goto out;
 
-	/* Verify that no writeback happened because of the memcg allocation */
-	if (get_zswap_written_back_pages(&written_back_after))
-		goto out;
-	if (written_back_after == written_back_before)
+	/* Verify that only zswapped memory from wb_group has been written back */
+	if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
 		ret = KSFT_PASS;
 out:
-	cg_destroy(test_group);
-	free(test_group);
+	cg_enter_current(root);
+	if (control_group) {
+		cg_destroy(control_group);
+		free(control_group);
+	}
+	cg_destroy(wb_group);
+	free(wb_group);
+	if (control_allocation)
+		free(control_allocation);
 	return ret;
 }
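For anyone who wants to poke at the same behavior by hand, outside the
kselftest harness, a rough sketch of the test's pass condition follows. It
is illustrative only (not part of the patch), reuses the same memory.stat
scanning pattern as the earlier sketch, assumes cgroup v2 mounted at
/sys/fs/cgroup, and the two group names are hypothetical stand-ins for the
per_memcg_wb_test1/per_memcg_wb_test2 groups the test creates.

#include <stdio.h>

/* Read the "zswp_wb" key of a named cgroup under /sys/fs/cgroup. */
static long zswp_wb_of(const char *group)
{
        char path[512], line[256];
        long val = -1;
        FILE *f;

        snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/memory.stat", group);
        f = fopen(path, "r");
        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f)) {
                if (sscanf(line, "zswp_wb %ld", &val) == 1)
                        break;
        }
        fclose(f);
        return val;
}

int main(void)
{
        long wb = zswp_wb_of("wb_group");               /* hypothetical name */
        long control = zswp_wb_of("control_group");     /* hypothetical name */

        /* Same expectation as the selftest: the group that overflowed its
         * memory.max shows writeback, the idle sibling shows none. */
        if (wb > 0 && control == 0)
                printf("per-memcg zswap writeback looks correct\n");
        else
                printf("unexpected counts: wb=%ld control=%ld\n", wb, control);
        return 0;
}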
From patchwork Mon Nov 6 18:31:59 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 162120
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v5 6/6] zswap: shrink zswap pool based on memory pressure
Date: Mon, 6 Nov 2023 10:31:59 -0800
Message-Id: <20231106183159.3562879-7-nphamcs@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231106183159.3562879-1-nphamcs@gmail.com>
References: <20231106183159.3562879-1-nphamcs@gmail.com>
MIME-Version: 1.0

Currently, we only shrink the zswap pool when the user-defined limit is
hit. This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed ahead
of time, as this depends on the workload (specifically, on factors such
as memory access patterns and compressibility of the memory pages).

This patch implements a memcg- and NUMA-aware shrinker for zswap that is
initiated when there is memory pressure. The shrinker does not have any
parameter that must be tuned by the user, and can be opted in or out on
a per-memcg basis.

Furthermore, to make it more robust for many workloads and prevent
overshrinking (i.e. evicting warm pages that might be refaulted into
memory), we build in the following heuristics:

* Estimate the number of warm pages residing in zswap, and attempt to
  protect this region of the zswap LRU.

* Scale the number of freeable objects by an estimate of the memory
  saving factor. The better zswap compresses the data, the fewer pages
  we will evict to swap (as we will otherwise incur IO for relatively
  small memory saving).
* During reclaim, if the shrinker encounters a page that is also being brought into memory, the shrinker will cautiously terminate its shrinking action, as this is a sign that it is touching the warmer region of the zswap LRU. As a proof of concept, we ran the following synthetic benchmark: build the linux kernel in a memory-limited cgroup, and allocate some cold data in tmpfs to see if the shrinker could write them out and improved the overall performance. Depending on the amount of cold data generated, we observe from 14% to 35% reduction in kernel CPU time used in the kernel builds. Signed-off-by: Nhat Pham --- Documentation/admin-guide/mm/zswap.rst | 7 + include/linux/mmzone.h | 2 + include/linux/zswap.h | 25 +++- mm/mmzone.c | 1 + mm/swap_state.c | 2 + mm/zswap.c | 177 ++++++++++++++++++++++++- 6 files changed, 208 insertions(+), 6 deletions(-) diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst index 45b98390e938..522ae22ccb84 100644 --- a/Documentation/admin-guide/mm/zswap.rst +++ b/Documentation/admin-guide/mm/zswap.rst @@ -153,6 +153,13 @@ attribute, e. g.:: Setting this parameter to 100 will disable the hysteresis. +When there is a sizable amount of cold memory residing in the zswap pool, it +can be advantageous to proactively write these cold pages to swap and reclaim +the memory for other use cases. By default, the zswap shrinker is disabled. +User can enable it as follows: + + echo Y > /sys/module/zswap/parameters/shrinker_enabled + A debugfs interface is provided for various statistic about pool size, number of pages stored, same-value filled pages and various counters for the reasons pages are rejected. diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 12f31633be05..633afdb96c40 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -22,6 +22,7 @@ #include #include #include +#include #include /* Free memory management - zoned buddy allocator. */ @@ -637,6 +638,7 @@ struct lruvec { #ifdef CONFIG_MEMCG struct pglist_data *pgdat; #endif + struct zswap_lruvec_state zswap_lruvec_state; }; /* Isolate for asynchronous migration */ diff --git a/include/linux/zswap.h b/include/linux/zswap.h index e571e393669b..cbd373ba88d2 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -5,20 +5,40 @@ #include #include +struct lruvec; + extern u64 zswap_pool_total_size; extern atomic_t zswap_stored_pages; #ifdef CONFIG_ZSWAP +struct zswap_lruvec_state { + /* + * Number of pages in zswap that should be protected from the shrinker. + * This number is an estimate of the following counts: + * + * a) Recent page faults. + * b) Recent insertion to the zswap LRU. This includes new zswap stores, + * as well as recent zswap LRU rotations. + * + * These pages are likely to be warm, and might incur IO if the are written + * to swap. 
+ */ + atomic_long_t nr_zswap_protected; +}; + bool zswap_store(struct folio *folio); bool zswap_load(struct folio *folio); void zswap_invalidate(int type, pgoff_t offset); void zswap_swapon(int type); void zswap_swapoff(int type); void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg); - +void zswap_lruvec_state_init(struct lruvec *lruvec); +void zswap_lruvec_swapin(struct page *page); #else +struct zswap_lruvec_state {}; + static inline bool zswap_store(struct folio *folio) { return false; @@ -33,7 +53,8 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {} static inline void zswap_swapon(int type) {} static inline void zswap_swapoff(int type) {} static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {} - +static inline void zswap_lruvec_init(struct lruvec *lruvec) {} +static inline void zswap_lruvec_swapin(struct page *page) {} #endif #endif /* _LINUX_ZSWAP_H */ diff --git a/mm/mmzone.c b/mm/mmzone.c index b594d3f268fe..c01896eca736 100644 --- a/mm/mmzone.c +++ b/mm/mmzone.c @@ -78,6 +78,7 @@ void lruvec_init(struct lruvec *lruvec) memset(lruvec, 0, sizeof(struct lruvec)); spin_lock_init(&lruvec->lru_lock); + zswap_lruvec_state_init(lruvec); for_each_lru(lru) INIT_LIST_HEAD(&lruvec->lists[lru]); diff --git a/mm/swap_state.c b/mm/swap_state.c index 6c84236382f3..94ed2d508db0 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -687,6 +687,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); + zswap_lruvec_swapin(page); return page; } @@ -862,6 +863,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); + zswap_lruvec_swapin(page); return page; } diff --git a/mm/zswap.c b/mm/zswap.c index 03ee41a8b884..260e01180ee0 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -146,6 +146,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644); /* Number of zpools in zswap_pool (empirically determined for scalability) */ #define ZSWAP_NR_ZPOOLS 32 +/* Enable/disable memory pressure-based shrinker. 
*/ +static bool zswap_shrinker_enabled; +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644); + /********************************* * data structures **********************************/ @@ -176,6 +180,8 @@ struct zswap_pool { struct list_lru list_lru; spinlock_t next_shrink_lock; struct mem_cgroup *next_shrink; + struct shrinker *shrinker; + atomic_t nr_stored; }; /* @@ -274,17 +280,26 @@ static bool zswap_can_accept(void) DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE); } +static u64 get_zswap_pool_size(struct zswap_pool *pool) +{ + u64 pool_size = 0; + int i; + + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++) + pool_size += zpool_get_total_size(pool->zpools[i]); + + return pool_size; +} + static void zswap_update_total_size(void) { struct zswap_pool *pool; u64 total = 0; - int i; rcu_read_lock(); list_for_each_entry_rcu(pool, &zswap_pools, list) - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++) - total += zpool_get_total_size(pool->zpools[i]); + total += get_zswap_pool_size(pool); rcu_read_unlock(); @@ -339,13 +354,34 @@ static void zswap_entry_cache_free(struct zswap_entry *entry) kmem_cache_free(zswap_entry_cache, entry); } +/********************************* +* zswap lruvec functions +**********************************/ +void zswap_lruvec_state_init(struct lruvec *lruvec) +{ + atomic_long_set(&lruvec->zswap_lruvec_state.nr_zswap_protected, 0); +} + +void zswap_lruvec_swapin(struct page *page) +{ + struct lruvec *lruvec; + + if (page) { + lruvec = folio_lruvec(page_folio(page)); + atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected); + } +} + /********************************* * lru functions **********************************/ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry) { + atomic_long_t *nr_zswap_protected; + unsigned long lru_size, old, new; int nid = entry_to_nid(entry); struct mem_cgroup *memcg; + struct lruvec *lruvec; /* * Note that it is safe to use rcu_read_lock() here, even in the face of @@ -363,6 +399,19 @@ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry) memcg = get_mem_cgroup_from_entry(entry); /* will always succeed */ list_lru_add(list_lru, &entry->lru, nid, memcg); + + /* Update the protection area */ + lru_size = list_lru_count_one(list_lru, nid, memcg); + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid)); + nr_zswap_protected = &lruvec->zswap_lruvec_state.nr_zswap_protected; + old = atomic_long_inc_return(nr_zswap_protected); + /* + * Decay to avoid overflow and adapt to changing workloads. + * This is based on LRU reclaim cost decaying heuristics. + */ + do { + new = old > lru_size / 4 ? old / 2 : old; + } while (!atomic_long_try_cmpxchg(nr_zswap_protected, &old, new)); rcu_read_unlock(); } @@ -384,6 +433,7 @@ static void zswap_lru_putback(struct list_lru *list_lru, int nid = entry_to_nid(entry); spinlock_t *lock = &list_lru->node[nid].lock; struct mem_cgroup *memcg; + struct lruvec *lruvec; rcu_read_lock(); memcg = get_mem_cgroup_from_entry(entry); @@ -391,6 +441,10 @@ static void zswap_lru_putback(struct list_lru *list_lru, /* we cannot use list_lru_add here, because it increments node's lru count */ list_lru_putback(list_lru, &entry->lru, nid, memcg); spin_unlock(lock); + + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry))); + /* increment the protection area to account for the LRU rotation. 
*/ + atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected); rcu_read_unlock(); } @@ -480,6 +534,7 @@ static void zswap_free_entry(struct zswap_entry *entry) else { zswap_lru_del(&entry->pool->list_lru, entry); zpool_free(zswap_find_zpool(entry), entry->handle); + atomic_dec(&entry->pool->nr_stored); zswap_pool_put(entry->pool); } zswap_entry_cache_free(entry); @@ -521,6 +576,95 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root, return entry; } +/********************************* +* shrinker functions +**********************************/ +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l, + spinlock_t *lock, void *arg); + +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid)); + unsigned long shrink_ret, nr_protected, lru_size; + struct zswap_pool *pool = shrinker->private_data; + bool encountered_page_in_swapcache = false; + + nr_protected = + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected); + lru_size = list_lru_shrink_count(&pool->list_lru, sc); + + /* + * Abort if the shrinker is disabled or if we are shrinking into the + * protected region. + */ + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) { + sc->nr_scanned = 0; + return SHRINK_STOP; + } + + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb, + &encountered_page_in_swapcache); + + if (encountered_page_in_swapcache) + return SHRINK_STOP; + + return shrink_ret ? shrink_ret : SHRINK_STOP; +} + +static unsigned long zswap_shrinker_count(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct zswap_pool *pool = shrinker->private_data; + struct mem_cgroup *memcg = sc->memcg; + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid)); + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected; + +#ifdef CONFIG_MEMCG_KMEM + cgroup_rstat_flush(memcg->css.cgroup); + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT; + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED); +#else + /* use pool stats instead of memcg stats */ + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT; + nr_stored = atomic_read(&pool->nr_stored); +#endif + + if (!zswap_shrinker_enabled || !nr_stored) + return 0; + + nr_protected = + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected); + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc); + /* + * Subtract the lru size by an estimate of the number of pages + * that should be protected. + */ + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0; + + /* + * Scale the number of freeable pages by the memory saving factor. + * This ensures that the better zswap compresses memory, the fewer + * pages we will evict to swap (as it will otherwise incur IO for + * relatively small memory saving). 
+ */ + return mult_frac(nr_freeable, nr_backing, nr_stored); +} + +static void zswap_alloc_shrinker(struct zswap_pool *pool) +{ + pool->shrinker = + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap"); + if (!pool->shrinker) + return; + + pool->shrinker->private_data = pool; + pool->shrinker->scan_objects = zswap_shrinker_scan; + pool->shrinker->count_objects = zswap_shrinker_count; + pool->shrinker->batch = 0; + pool->shrinker->seeks = DEFAULT_SEEKS; +} + /********************************* * per-cpu code **********************************/ @@ -716,6 +860,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o spinlock_t *lock, void *arg) { struct zswap_entry *entry = container_of(item, struct zswap_entry, lru); + bool *encountered_page_in_swapcache = (bool *)arg; struct zswap_tree *tree; pgoff_t swpoffset; enum lru_status ret = LRU_REMOVED_RETRY; @@ -751,6 +896,17 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o zswap_reject_reclaim_fail++; zswap_lru_putback(&entry->pool->list_lru, entry); ret = LRU_RETRY; + + /* + * Encountering a page already in swap cache is a sign that we are shrinking + * into the warmer region. We should terminate shrinking (if we're in the dynamic + * shrinker context). + */ + if (writeback_result == -EEXIST && encountered_page_in_swapcache) { + ret = LRU_SKIP; + *encountered_page_in_swapcache = true; + } + goto put_unlock; } zswap_written_back_pages++; @@ -890,6 +1046,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) &pool->node); if (ret) goto error; + + zswap_alloc_shrinker(pool); + if (!pool->shrinker) + goto error; + pr_debug("using %s compressor\n", pool->tfm_name); /* being the current pool takes 1 ref; this func expects the @@ -897,14 +1058,20 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) */ kref_init(&pool->kref); INIT_LIST_HEAD(&pool->list); - list_lru_init_memcg(&pool->list_lru, NULL); + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker)) + goto lru_fail; + shrinker_register(pool->shrinker); INIT_WORK(&pool->shrink_work, shrink_worker); + atomic_set(&pool->nr_stored, 0); zswap_pool_debug("created", pool); spin_lock_init(&pool->next_shrink_lock); return pool; +lru_fail: + list_lru_destroy(&pool->list_lru); + shrinker_free(pool->shrinker); error: if (pool->acomp_ctx) free_percpu(pool->acomp_ctx); @@ -962,6 +1129,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool) zswap_pool_debug("destroying", pool); + shrinker_free(pool->shrinker); cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); free_percpu(pool->acomp_ctx); list_lru_destroy(&pool->list_lru); @@ -1511,6 +1679,7 @@ bool zswap_store(struct folio *folio) if (entry->length) { INIT_LIST_HEAD(&entry->lru); zswap_lru_add(&entry->pool->list_lru, entry); + atomic_inc(&entry->pool->nr_stored); } spin_unlock(&tree->lock);