From patchwork Mon Nov 27 23:45:55 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 170504
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v7 1/6] list_lru: allows explicit memcg and NUMA node selection
Date: Mon, 27 Nov 2023 15:45:55 -0800
Message-Id: <20231127234600.2971029-2-nphamcs@gmail.com>
In-Reply-To: <20231127234600.2971029-1-nphamcs@gmail.com>
References: <20231127234600.2971029-1-nphamcs@gmail.com>

The list_lru interface is based on the assumption that the list node and the data it represents belong to the same object, allocated on the correct NUMA node and charged to the correct memcg. While this assumption is valid for existing slab object LRUs such as the dentry and inode LRUs, it is undocumented, and rather inflexible for certain potential list_lru users (such as the upcoming zswap shrinker and the THP shrinker). It has caused us a lot of issues during our development.

This patch changes the list_lru interface so that the caller must explicitly specify the NUMA node and memcg when adding and removing objects. The old list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and list_lru_del_obj(), respectively.

It also extends the list_lru API with a new function, list_lru_putback, which undoes a previous list_lru_isolate call. Unlike list_lru_add, it does not increment the LRU node count (as list_lru_isolate does not decrement the node count). list_lru_putback also allows for explicit memcg and NUMA node selection.
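To illustrate the two calling conventions side by side, here is a minimal sketch (not part of the patch; my_lru and obj are hypothetical, and the explicit variant simply makes visible the lookup that the _obj wrappers perform internally):

        /* Old-style caller: NUMA node and memcg are derived from the
         * object that embeds the list_head.
         */
        list_lru_add_obj(&my_lru, &obj->lru);
        list_lru_del_obj(&my_lru, &obj->lru);

        /* New-style caller: NUMA node and memcg are passed explicitly,
         * which users like the zswap shrinker need when the list_head
         * does not live on the node/memcg that should own the object.
         */
        int nid = page_to_nid(virt_to_page(obj));
        struct mem_cgroup *memcg = mem_cgroup_from_slab_obj(obj);

        list_lru_add(&my_lru, &obj->lru, nid, memcg);
        list_lru_del(&my_lru, &obj->lru, nid, memcg);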
Suggested-by: Johannes Weiner Signed-off-by: Nhat Pham Acked-by: Johannes Weiner --- drivers/android/binder_alloc.c | 7 ++--- fs/dcache.c | 8 +++-- fs/gfs2/quota.c | 6 ++-- fs/inode.c | 4 +-- fs/nfs/nfs42xattr.c | 8 ++--- fs/nfsd/filecache.c | 4 +-- fs/xfs/xfs_buf.c | 6 ++-- fs/xfs/xfs_dquot.c | 2 +- fs/xfs/xfs_qm.c | 2 +- include/linux/list_lru.h | 54 ++++++++++++++++++++++++++++++++-- mm/list_lru.c | 48 +++++++++++++++++++++++++----- mm/workingset.c | 4 +-- 12 files changed, 117 insertions(+), 36 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index 138f6d43d13b..f69d30c9f50f 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -234,7 +234,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, if (page->page_ptr) { trace_binder_alloc_lru_start(alloc, index); - on_lru = list_lru_del(&binder_alloc_lru, &page->lru); + on_lru = list_lru_del_obj(&binder_alloc_lru, &page->lru); WARN_ON(!on_lru); trace_binder_alloc_lru_end(alloc, index); @@ -285,7 +285,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, trace_binder_free_lru_start(alloc, index); - ret = list_lru_add(&binder_alloc_lru, &page->lru); + ret = list_lru_add_obj(&binder_alloc_lru, &page->lru); WARN_ON(!ret); trace_binder_free_lru_end(alloc, index); @@ -848,7 +848,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc) if (!alloc->pages[i].page_ptr) continue; - on_lru = list_lru_del(&binder_alloc_lru, + on_lru = list_lru_del_obj(&binder_alloc_lru, &alloc->pages[i].lru); page_addr = alloc->buffer + i * PAGE_SIZE; binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, @@ -1287,4 +1287,3 @@ int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset, dest, bytes); } - diff --git a/fs/dcache.c b/fs/dcache.c index c82ae731df9a..2ba37643b9c5 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -428,7 +428,8 @@ static void d_lru_add(struct dentry *dentry) this_cpu_inc(nr_dentry_unused); if (d_is_negative(dentry)) this_cpu_inc(nr_dentry_negative); - WARN_ON_ONCE(!list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru)); + WARN_ON_ONCE(!list_lru_add_obj( + &dentry->d_sb->s_dentry_lru, &dentry->d_lru)); } static void d_lru_del(struct dentry *dentry) @@ -438,7 +439,8 @@ static void d_lru_del(struct dentry *dentry) this_cpu_dec(nr_dentry_unused); if (d_is_negative(dentry)) this_cpu_dec(nr_dentry_negative); - WARN_ON_ONCE(!list_lru_del(&dentry->d_sb->s_dentry_lru, &dentry->d_lru)); + WARN_ON_ONCE(!list_lru_del_obj( + &dentry->d_sb->s_dentry_lru, &dentry->d_lru)); } static void d_shrink_del(struct dentry *dentry) @@ -1240,7 +1242,7 @@ static enum lru_status dentry_lru_isolate(struct list_head *item, * * This is guaranteed by the fact that all LRU management * functions are intermediated by the LRU API calls like - * list_lru_add and list_lru_del. List movement in this file + * list_lru_add_obj and list_lru_del_obj. List movement in this file * only ever occur through this functions or through callbacks * like this one, that are called from the LRU API. 
* diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c index 95dae7838b4e..b57f8c7b35be 100644 --- a/fs/gfs2/quota.c +++ b/fs/gfs2/quota.c @@ -271,7 +271,7 @@ static struct gfs2_quota_data *gfs2_qd_search_bucket(unsigned int hash, if (qd->qd_sbd != sdp) continue; if (lockref_get_not_dead(&qd->qd_lockref)) { - list_lru_del(&gfs2_qd_lru, &qd->qd_lru); + list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru); return qd; } } @@ -344,7 +344,7 @@ static void qd_put(struct gfs2_quota_data *qd) } qd->qd_lockref.count = 0; - list_lru_add(&gfs2_qd_lru, &qd->qd_lru); + list_lru_add_obj(&gfs2_qd_lru, &qd->qd_lru); spin_unlock(&qd->qd_lockref.lock); } @@ -1517,7 +1517,7 @@ void gfs2_quota_cleanup(struct gfs2_sbd *sdp) lockref_mark_dead(&qd->qd_lockref); spin_unlock(&qd->qd_lockref.lock); - list_lru_del(&gfs2_qd_lru, &qd->qd_lru); + list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru); list_add(&qd->qd_lru, &dispose); } spin_unlock(&qd_lock); diff --git a/fs/inode.c b/fs/inode.c index f238d987dec9..ef2034a985e0 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -464,7 +464,7 @@ static void __inode_add_lru(struct inode *inode, bool rotate) if (!mapping_shrinkable(&inode->i_data)) return; - if (list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru)) + if (list_lru_add_obj(&inode->i_sb->s_inode_lru, &inode->i_lru)) this_cpu_inc(nr_unused); else if (rotate) inode->i_state |= I_REFERENCED; @@ -482,7 +482,7 @@ void inode_add_lru(struct inode *inode) static void inode_lru_list_del(struct inode *inode) { - if (list_lru_del(&inode->i_sb->s_inode_lru, &inode->i_lru)) + if (list_lru_del_obj(&inode->i_sb->s_inode_lru, &inode->i_lru)) this_cpu_dec(nr_unused); } diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c index 2ad66a8922f4..49aaf28a6950 100644 --- a/fs/nfs/nfs42xattr.c +++ b/fs/nfs/nfs42xattr.c @@ -132,7 +132,7 @@ nfs4_xattr_entry_lru_add(struct nfs4_xattr_entry *entry) lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ? &nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru; - return list_lru_add(lru, &entry->lru); + return list_lru_add_obj(lru, &entry->lru); } static bool @@ -143,7 +143,7 @@ nfs4_xattr_entry_lru_del(struct nfs4_xattr_entry *entry) lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ? 
&nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru; - return list_lru_del(lru, &entry->lru); + return list_lru_del_obj(lru, &entry->lru); } /* @@ -349,7 +349,7 @@ nfs4_xattr_cache_unlink(struct inode *inode) oldcache = nfsi->xattr_cache; if (oldcache != NULL) { - list_lru_del(&nfs4_xattr_cache_lru, &oldcache->lru); + list_lru_del_obj(&nfs4_xattr_cache_lru, &oldcache->lru); oldcache->inode = NULL; } nfsi->xattr_cache = NULL; @@ -474,7 +474,7 @@ nfs4_xattr_get_cache(struct inode *inode, int add) kref_get(&cache->ref); nfsi->xattr_cache = cache; cache->inode = inode; - list_lru_add(&nfs4_xattr_cache_lru, &cache->lru); + list_lru_add_obj(&nfs4_xattr_cache_lru, &cache->lru); } spin_unlock(&inode->i_lock); diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c index ef063f93fde9..6c2decfdeb4b 100644 --- a/fs/nfsd/filecache.c +++ b/fs/nfsd/filecache.c @@ -322,7 +322,7 @@ nfsd_file_check_writeback(struct nfsd_file *nf) static bool nfsd_file_lru_add(struct nfsd_file *nf) { set_bit(NFSD_FILE_REFERENCED, &nf->nf_flags); - if (list_lru_add(&nfsd_file_lru, &nf->nf_lru)) { + if (list_lru_add_obj(&nfsd_file_lru, &nf->nf_lru)) { trace_nfsd_file_lru_add(nf); return true; } @@ -331,7 +331,7 @@ static bool nfsd_file_lru_add(struct nfsd_file *nf) static bool nfsd_file_lru_remove(struct nfsd_file *nf) { - if (list_lru_del(&nfsd_file_lru, &nf->nf_lru)) { + if (list_lru_del_obj(&nfsd_file_lru, &nf->nf_lru)) { trace_nfsd_file_lru_del(nf); return true; } diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 545c7991b9b5..669332849680 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -169,7 +169,7 @@ xfs_buf_stale( atomic_set(&bp->b_lru_ref, 0); if (!(bp->b_state & XFS_BSTATE_DISPOSE) && - (list_lru_del(&bp->b_target->bt_lru, &bp->b_lru))) + (list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru))) atomic_dec(&bp->b_hold); ASSERT(atomic_read(&bp->b_hold) >= 1); @@ -1047,7 +1047,7 @@ xfs_buf_rele( * buffer for the LRU and clear the (now stale) dispose list * state flag */ - if (list_lru_add(&bp->b_target->bt_lru, &bp->b_lru)) { + if (list_lru_add_obj(&bp->b_target->bt_lru, &bp->b_lru)) { bp->b_state &= ~XFS_BSTATE_DISPOSE; atomic_inc(&bp->b_hold); } @@ -1060,7 +1060,7 @@ xfs_buf_rele( * was on was the disposal list */ if (!(bp->b_state & XFS_BSTATE_DISPOSE)) { - list_lru_del(&bp->b_target->bt_lru, &bp->b_lru); + list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru); } else { ASSERT(list_empty(&bp->b_lru)); } diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index ac6ba646624d..49f619f5aa96 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -1064,7 +1064,7 @@ xfs_qm_dqput( struct xfs_quotainfo *qi = dqp->q_mount->m_quotainfo; trace_xfs_dqput_free(dqp); - if (list_lru_add(&qi->qi_lru, &dqp->q_lru)) + if (list_lru_add_obj(&qi->qi_lru, &dqp->q_lru)) XFS_STATS_INC(dqp->q_mount, xs_qm_dquot_unused); } xfs_dqunlock(dqp); diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 94a7932ac570..67d0a8564ff3 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -171,7 +171,7 @@ xfs_qm_dqpurge( * hits zero, so it really should be on the freelist here. 
*/ ASSERT(!list_empty(&dqp->q_lru)); - list_lru_del(&qi->qi_lru, &dqp->q_lru); + list_lru_del_obj(&qi->qi_lru, &dqp->q_lru); XFS_STATS_DEC(dqp->q_mount, xs_qm_dquot_unused); xfs_qm_dqdestroy(dqp); diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index db86ad78d428..7675a48a0701 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -75,6 +75,8 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren * list_lru_add: add an element to the lru list's tail * @lru: the lru pointer * @item: the item to be added. + * @nid: the node id of the sublist to add the item to. + * @memcg: the cgroup of the sublist to add the item to. * * If the element is already part of a list, this function returns doing * nothing. Therefore the caller does not need to keep state about whether or @@ -87,12 +89,28 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren * * Return: true if the list was updated, false otherwise */ -bool list_lru_add(struct list_lru *lru, struct list_head *item); +bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); /** - * list_lru_del: delete an element to the lru list + * list_lru_add_obj: add an element to the lru list's tail + * @lru: the lru pointer + * @item: the item to be added. + * + * This function is similar to list_lru_add(), but the NUMA node and the + * memcg of the sublist is determined by @item list_head. This assumption is + * valid for slab objects LRU such as dentries, inodes, etc. + * + * Return value: true if the list was updated, false otherwise + */ +bool list_lru_add_obj(struct list_lru *lru, struct list_head *item); + +/** + * list_lru_del: delete an element from the lru list * @lru: the lru pointer * @item: the item to be deleted. + * @nid: the node id of the sublist to delete the item from. + * @memcg: the cgroup of the sublist to delete the item from. * * This function works analogously as list_lru_add() in terms of list * manipulation. The comments about an element already pertaining to @@ -100,7 +118,21 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item); * * Return: true if the list was updated, false otherwise */ -bool list_lru_del(struct list_lru *lru, struct list_head *item); +bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); + +/** + * list_lru_del_obj: delete an element from the lru list + * @lru: the lru pointer + * @item: the item to be deleted. + * + * This function is similar to list_lru_del(), but the NUMA node and the + * memcg of the sublist is determined by @item list_head. This assumption is + * valid for slab objects LRU such as dentries, inodes, etc. + * + * Return value: true if the list was updated, false otherwise. + */ +bool list_lru_del_obj(struct list_lru *lru, struct list_head *item); /** * list_lru_count_one: return the number of objects currently held by @lru @@ -138,6 +170,22 @@ static inline unsigned long list_lru_count(struct list_lru *lru) void list_lru_isolate(struct list_lru_one *list, struct list_head *item); void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item, struct list_head *head); +/** + * list_lru_putback: undo list_lru_isolate + * @lru: the lru pointer. + * @item: the item to put back. + * @nid: the node id of the sublist to put the item back to. + * @memcg: the cgroup of the sublist to put the item back to. + * + * Put back an isolated item into its original LRU. 
Note that unlike + * list_lru_add, this does not increment the node LRU count (as + * list_lru_isolate does not originally decrement this count). + * + * Since we might have dropped the LRU lock in between, recompute list_lru_one + * from the node's id and memcg. + */ +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item, struct list_lru_one *list, spinlock_t *lock, void *cb_arg); diff --git a/mm/list_lru.c b/mm/list_lru.c index a05e5bef3b40..fcca67ac26ec 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -116,21 +116,19 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr, } #endif /* CONFIG_MEMCG_KMEM */ -bool list_lru_add(struct list_lru *lru, struct list_head *item) +bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) { - int nid = page_to_nid(virt_to_page(item)); struct list_lru_node *nlru = &lru->node[nid]; - struct mem_cgroup *memcg; struct list_lru_one *l; spin_lock(&nlru->lock); if (list_empty(item)) { - l = list_lru_from_kmem(lru, nid, item, &memcg); + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); list_add_tail(item, &l->list); /* Set shrinker bit if the first element was added */ if (!l->nr_items++) - set_shrinker_bit(memcg, nid, - lru_shrinker_id(lru)); + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru)); nlru->nr_items++; spin_unlock(&nlru->lock); return true; @@ -140,15 +138,25 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item) } EXPORT_SYMBOL_GPL(list_lru_add); -bool list_lru_del(struct list_lru *lru, struct list_head *item) +bool list_lru_add_obj(struct list_lru *lru, struct list_head *item) { int nid = page_to_nid(virt_to_page(item)); + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ? + mem_cgroup_from_slab_obj(item) : NULL; + + return list_lru_add(lru, item, nid, memcg); +} +EXPORT_SYMBOL_GPL(list_lru_add_obj); + +bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) +{ struct list_lru_node *nlru = &lru->node[nid]; struct list_lru_one *l; spin_lock(&nlru->lock); if (!list_empty(item)) { - l = list_lru_from_kmem(lru, nid, item, NULL); + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); list_del_init(item); l->nr_items--; nlru->nr_items--; @@ -160,6 +168,16 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item) } EXPORT_SYMBOL_GPL(list_lru_del); +bool list_lru_del_obj(struct list_lru *lru, struct list_head *item) +{ + int nid = page_to_nid(virt_to_page(item)); + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ? 
+ mem_cgroup_from_slab_obj(item) : NULL; + + return list_lru_del(lru, item, nid, memcg); +} +EXPORT_SYMBOL_GPL(list_lru_del_obj); + void list_lru_isolate(struct list_lru_one *list, struct list_head *item) { list_del_init(item); @@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item, } EXPORT_SYMBOL_GPL(list_lru_isolate_move); +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) +{ + struct list_lru_one *list = + list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); + + if (list_empty(item)) { + list_add_tail(item, &list->list); + if (!list->nr_items++) + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru)); + } +} +EXPORT_SYMBOL_GPL(list_lru_putback); + unsigned long list_lru_count_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg) { diff --git a/mm/workingset.c b/mm/workingset.c index b192e44a0e7c..c17d45c6f29b 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -615,12 +615,12 @@ void workingset_update_node(struct xa_node *node) if (node->count && node->count == node->nr_values) { if (list_empty(&node->private_list)) { - list_lru_add(&shadow_nodes, &node->private_list); + list_lru_add_obj(&shadow_nodes, &node->private_list); __inc_lruvec_kmem_state(node, WORKINGSET_NODES); } } else { if (!list_empty(&node->private_list)) { - list_lru_del(&shadow_nodes, &node->private_list); + list_lru_del_obj(&shadow_nodes, &node->private_list); __dec_lruvec_kmem_state(node, WORKINGSET_NODES); } }
From patchwork Mon Nov 27 23:45:56 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 170503
dis=NONE) header.from=gmail.com Received: from morse.vger.email (morse.vger.email. [23.128.96.31]) by mx.google.com with ESMTPS id o37-20020a634e65000000b005be029a66d1si10649383pgl.806.2023.11.27.15.46.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 15:46:19 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.31 as permitted sender) client-ip=23.128.96.31; Authentication-Results: mx.google.com; dkim=pass header.i=@gmail.com header.s=20230601 header.b=NdYYdNoU; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.31 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by morse.vger.email (Postfix) with ESMTP id 53E27811216E; Mon, 27 Nov 2023 15:46:13 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at morse.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233721AbjK0Xp7 (ORCPT + 99 others); Mon, 27 Nov 2023 18:45:59 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35516 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231459AbjK0Xp5 (ORCPT ); Mon, 27 Nov 2023 18:45:57 -0500 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6986B1B6; Mon, 27 Nov 2023 15:46:03 -0800 (PST) Received: by mail-pl1-x631.google.com with SMTP id d9443c01a7336-1cfc3f50504so15148395ad.3; Mon, 27 Nov 2023 15:46:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1701128763; x=1701733563; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=jvxvLBlp9fV8n96zZT2Ez0ns0pnZxld9OCN8MtepcpQ=; b=NdYYdNoUqcFgDD1XreOzzSy4sZVc2MqDQsK2+y1Du34+I07vGndVbfuopiP5OWsML0 kg/1gNPwVZSJAcoC5PzG+BB7PU6VZM0e4s49kXhm4n+gedLSYw8d28SetjzO5ZU2SAmA +PMkZZkGR5TAj55g/u1iW5TCSHweZni2b1sCdGXTzHyvcMsIElFVtoZcD52xWmhkDxrp CEY+IgonLsE8Rr+LLyyBsSBAX8shgKgx1YYTmgt6ctvpSzbVT8WF1f46SnV58kQTxOTh X/ynxOsz++uGYkLZK+dxRgykOSoezIR6ITlDi1ggX20TuX4konCN/1vrA7jMrYvWkyHg dVYw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701128763; x=1701733563; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=jvxvLBlp9fV8n96zZT2Ez0ns0pnZxld9OCN8MtepcpQ=; b=Y/WYSlOtLl6gXj/u2XV4Xfvi3wKr0Dw5wVnCMnWzSX6iihwxFsBIAlPEKWwq/QsLTU 2pDmZqwKjfaTJXLAsequLgeOKiWAed8C6AQP5jiG9Dm2SpXyvSC1bQVQukNUUWp+sXrH RBu97KJHT/l0suoW8FKDJo3GCVJ8myu6qDavor/gaUyU3CND18Fc5QzpWuVOicB7SVYg JZoKskLz0gt8Y+8cKXC5A5uHunQYahxHSDZyT1kYKHPCoIqpvYNGzmG4WEntJMvganZc xoUsVFNYHM90i80WmBJvHs7SNH+d4ovlPfQaP9PWoOJOvRgY5OMTxt7Ozlpz8wb2oI2h EfTg== X-Gm-Message-State: AOJu0Yx9Qjku4wd9BnZ8Vyv1XB8Ab6/+ZgGjpXbw1LPzofGiUOsDS17p 9lE5FQNYNGFcP4KhCPDxsX4= X-Received: by 2002:a17:902:b28c:b0:1cc:3fc9:1802 with SMTP id u12-20020a170902b28c00b001cc3fc91802mr11577380plr.61.1701128762852; Mon, 27 Nov 2023 15:46:02 -0800 (PST) Received: from localhost (fwdproxy-prn-011.fbsv.net. 
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v7 2/6] memcontrol: add a new function to traverse online-only memcg hierarchy
Date: Mon, 27 Nov 2023 15:45:56 -0800
Message-Id: <20231127234600.2971029-3-nphamcs@gmail.com>
In-Reply-To: <20231127234600.2971029-1-nphamcs@gmail.com>
References: <20231127234600.2971029-1-nphamcs@gmail.com>

The new zswap writeback scheme requires an online-only traversal of the memcg hierarchy. Add this functionality via the new mem_cgroup_iter_online() function; the old mem_cgroup_iter() becomes a special case of this new function.
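For illustration only (not part of the patch), a full online-only round trip follows the usual mem_cgroup_iter() calling pattern; the iterator manages the css references of previously returned memcgs internally:

        struct mem_cgroup *memcg = NULL;

        /* A NULL root starts the walk at root_mem_cgroup; NULL is
         * returned once a full round trip completes. Passing true
         * skips memcgs that have already been taken offline.
         */
        while ((memcg = mem_cgroup_iter_online(NULL, memcg, NULL, true))) {
                /* ... reclaim from memcg ... */
        }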
Suggested-by: Andrew Morton Signed-off-by: Nhat Pham Acked-by: Johannes Weiner --- include/linux/memcontrol.h | 13 +++++++++++++ mm/memcontrol.c | 29 +++++++++++++++++++++++++---- 2 files changed, 38 insertions(+), 4 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 7bdcf3020d7a..5bfe41840e80 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -833,6 +833,10 @@ static inline void mem_cgroup_put(struct mem_cgroup *memcg) struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *, struct mem_cgroup *, struct mem_cgroup_reclaim_cookie *); +struct mem_cgroup *mem_cgroup_iter_online(struct mem_cgroup *root, + struct mem_cgroup *prev, + struct mem_cgroup_reclaim_cookie *reclaim, + bool online); void mem_cgroup_iter_break(struct mem_cgroup *, struct mem_cgroup *); void mem_cgroup_scan_tasks(struct mem_cgroup *memcg, int (*)(struct task_struct *, void *), void *arg); @@ -1386,6 +1390,15 @@ mem_cgroup_iter(struct mem_cgroup *root, return NULL; } +static inline struct mem_cgroup * +mem_cgroup_iter_online(struct mem_cgroup *root, + struct mem_cgroup *prev, + struct mem_cgroup_reclaim_cookie *reclaim, + bool online) +{ + return NULL; +} + static inline void mem_cgroup_iter_break(struct mem_cgroup *root, struct mem_cgroup *prev) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 470821d1ba1a..b1fb96233fa1 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1111,10 +1111,11 @@ struct mem_cgroup *get_mem_cgroup_from_current(void) } /** - * mem_cgroup_iter - iterate over memory cgroup hierarchy + * mem_cgroup_iter_online - iterate over memory cgroup hierarchy * @root: hierarchy root * @prev: previously returned memcg, NULL on first invocation * @reclaim: cookie for shared reclaim walks, NULL for full walks + * @online: whether to skip offline memcgs * * Returns references to children of the hierarchy below @root, or * @root itself, or %NULL after a full round-trip. @@ -1123,13 +1124,16 @@ struct mem_cgroup *get_mem_cgroup_from_current(void) * invocations for reference counting, or use mem_cgroup_iter_break() * to cancel a hierarchy walk before the round-trip is complete. * + * Caller can skip offline memcgs by passing true for @online. + * * Reclaimers can specify a node in @reclaim to divide up the memcgs * in the hierarchy among all concurrent reclaimers operating on the * same node. */ -struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root, +struct mem_cgroup *mem_cgroup_iter_online(struct mem_cgroup *root, struct mem_cgroup *prev, - struct mem_cgroup_reclaim_cookie *reclaim) + struct mem_cgroup_reclaim_cookie *reclaim, + bool online) { struct mem_cgroup_reclaim_iter *iter; struct cgroup_subsys_state *css = NULL; @@ -1199,7 +1203,8 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root, * is provided by the caller, so we know it's alive * and kicking, and don't take an extra reference. */ - if (css == &root->css || css_tryget(css)) { + if (css == &root->css || (!online && css_tryget(css)) || + css_tryget_online(css)) { memcg = mem_cgroup_from_css(css); break; } @@ -1228,6 +1233,22 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root, return memcg; } +/** + * mem_cgroup_iter - iterate over memory cgroup hierarchy + * @root: hierarchy root + * @prev: previously returned memcg, NULL on first invocation + * @reclaim: cookie for shared reclaim walks, NULL for full walks + * + * Perform an iteration on the memory cgroup hierarchy without skipping + * offline memcgs. 
+ */ +struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root, + struct mem_cgroup *prev, + struct mem_cgroup_reclaim_cookie *reclaim) +{ + return mem_cgroup_iter_online(root, prev, reclaim, false); +} + /** * mem_cgroup_iter_break - abort a hierarchy walk prematurely * @root: hierarchy root
From patchwork Mon Nov 27 23:45:57 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 170506
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v7 3/6] zswap: make shrinking memcg-aware
Date: Mon, 27 Nov 2023 15:45:57 -0800
Message-Id: <20231127234600.2971029-4-nphamcs@gmail.com>
In-Reply-To: <20231127234600.2971029-1-nphamcs@gmail.com>
References: <20231127234600.2971029-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

Currently, we only have a single global LRU for zswap. This makes it impossible to perform workload-specific shrinking: a memcg cannot determine which pages in the pool it owns, and often ends up writing back pages owned by other memcgs. This issue has been previously observed in practice and mitigated by simply disabling memcg-initiated shrinking:

https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u

This patch fully resolves the issue by replacing the global zswap LRU with memcg- and NUMA-specific LRUs, and by modifying the reclaim logic:

a) When a store attempt hits a memcg limit, it now triggers a synchronous reclaim attempt that, if successful, allows the new hotter page to be accepted by zswap.

b) If the store attempt instead hits the global zswap limit, it will trigger an asynchronous reclaim attempt, in which a memcg is selected for reclaim in a round-robin-like fashion.
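In code terms, the synchronous path in (a) condenses to the following excerpt of the zswap_store() change in the diff below:

        objcg = get_obj_cgroup_from_folio(folio);
        if (objcg && !obj_cgroup_may_zswap(objcg)) {
                memcg = get_mem_cgroup_from_objcg(objcg);
                /* Write back this memcg's own zswap pages to make room. */
                if (shrink_memcg(memcg)) {
                        mem_cgroup_put(memcg);
                        goto reject;
                }
                mem_cgroup_put(memcg);
        }

The asynchronous path in (b) is implemented by shrink_worker(), which selects the next online memcg via mem_cgroup_iter_online() and calls shrink_memcg() on it until the pool can accept new pages again.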
Signed-off-by: Domenico Cerasuolo Co-developed-by: Nhat Pham Signed-off-by: Nhat Pham Acked-by: Johannes Weiner --- include/linux/memcontrol.h | 5 + include/linux/zswap.h | 2 + mm/memcontrol.c | 2 + mm/swap.h | 3 +- mm/swap_state.c | 24 +++- mm/zswap.c | 248 +++++++++++++++++++++++++++++-------- 6 files changed, 223 insertions(+), 61 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 5bfe41840e80..a568f70a2677 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1191,6 +1191,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) return NULL; } +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) +{ + return NULL; +} + static inline bool folio_memcg_kmem(struct folio *folio) { return false; diff --git a/include/linux/zswap.h b/include/linux/zswap.h index 2a60ce39cfde..e571e393669b 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio); void zswap_invalidate(int type, pgoff_t offset); void zswap_swapon(int type); void zswap_swapoff(int type); +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg); #else @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio) static inline void zswap_invalidate(int type, pgoff_t offset) {} static inline void zswap_swapon(int type) {} static inline void zswap_swapoff(int type) {} +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {} #endif diff --git a/mm/memcontrol.c b/mm/memcontrol.c index b1fb96233fa1..8c0f3f971179 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5635,6 +5635,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) page_counter_set_min(&memcg->memory, 0); page_counter_set_low(&memcg->memory, 0); + zswap_memcg_offline_cleanup(memcg); + memcg_offline_kmem(memcg); reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg); diff --git a/mm/swap.h b/mm/swap.h index 73c332ee4d91..c0dc73e10e91 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct swap_iocb **plug); struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx, - bool *new_page_allocated); + bool *new_page_allocated, + bool skip_if_exists); struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag, struct mempolicy *mpol, pgoff_t ilx); struct page *swapin_readahead(swp_entry_t entry, gfp_t flag, diff --git a/mm/swap_state.c b/mm/swap_state.c index 85d9e5806a6a..6c84236382f3 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping, struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx, - bool *new_page_allocated) + bool *new_page_allocated, + bool skip_if_exists) { struct swap_info_struct *si; struct folio *folio; @@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, if (err != -EEXIST) goto fail_put_swap; + /* + * Protect against a recursive call to __read_swap_cache_async() + * on the same entry waiting forever here because SWAP_HAS_CACHE + * is set but the folio is not the swap cache yet. This can + * happen today if mem_cgroup_swapin_charge_folio() below + * triggers reclaim through zswap, which may call + * __read_swap_cache_async() in the writeback path. 
+ */ + if (skip_if_exists) + goto fail_put_swap; + /* * We might race against __delete_from_swap_cache(), and * stumble across a swap_map entry whose SWAP_HAS_CACHE @@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, mpol = get_vma_policy(vma, addr, 0, &ilx); page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); mpol_cond_put(mpol); if (page_allocated) @@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, /* Ok, do the async read-ahead now */ page = __read_swap_cache_async( swp_entry(swp_type(entry), offset), - gfp_mask, mpol, ilx, &page_allocated); + gfp_mask, mpol, ilx, &page_allocated, false); if (!page) continue; if (page_allocated) { @@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, skip: /* The page was likely read above, so no need for plugging here */ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); return page; @@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, pte_unmap(pte); pte = NULL; page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); if (!page) continue; if (page_allocated) { @@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, skip: /* The page was likely read above, so no need for plugging here */ page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx, - &page_allocated); + &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); return page; diff --git a/mm/zswap.c b/mm/zswap.c index 4bdb2d83bb0d..5e397fc1f375 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -35,6 +35,7 @@ #include #include #include +#include #include "swap.h" #include "internal.h" @@ -174,8 +175,8 @@ struct zswap_pool { struct work_struct shrink_work; struct hlist_node node; char tfm_name[CRYPTO_MAX_ALG_NAME]; - struct list_head lru; - spinlock_t lru_lock; + struct list_lru list_lru; + struct mem_cgroup *next_shrink; }; /* @@ -291,15 +292,40 @@ static void zswap_update_total_size(void) zswap_pool_total_size = total; } +/* should be called under RCU */ +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry) +{ + return entry->objcg ? 
obj_cgroup_memcg(entry->objcg) : NULL; +} + +static inline int entry_to_nid(struct zswap_entry *entry) +{ + return page_to_nid(virt_to_page(entry)); +} + +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) +{ + struct zswap_pool *pool; + + /* lock out zswap pools list modification */ + spin_lock(&zswap_pools_lock); + list_for_each_entry(pool, &zswap_pools, list) { + if (pool->next_shrink == memcg) + pool->next_shrink = + mem_cgroup_iter_online(NULL, pool->next_shrink, NULL, true); + } + spin_unlock(&zswap_pools_lock); +} + /********************************* * zswap entry functions **********************************/ static struct kmem_cache *zswap_entry_cache; -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp) +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid) { struct zswap_entry *entry; - entry = kmem_cache_alloc(zswap_entry_cache, gfp); + entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid); if (!entry) return NULL; entry->refcount = 1; @@ -312,6 +338,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry) kmem_cache_free(zswap_entry_cache, entry); } +/********************************* +* lru functions +**********************************/ +static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + struct mem_cgroup *memcg; + + /* + * Note that it is safe to use rcu_read_lock() here, even in the face of + * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection + * used in list_lru lookup, only two scenarios are possible: + * + * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The + * new entry will be reparented to memcg's parent's list_lru. + * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The + * new entry will be added directly to memcg's parent's list_lru. + * + * Similar reasoning holds for list_lru_del() and list_lru_putback(). 
+ */ + rcu_read_lock(); + memcg = mem_cgroup_from_entry(entry); + /* will always succeed */ + list_lru_add(list_lru, &entry->lru, nid, memcg); + rcu_read_unlock(); +} + +static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + struct mem_cgroup *memcg; + + rcu_read_lock(); + memcg = mem_cgroup_from_entry(entry); + /* will always succeed */ + list_lru_del(list_lru, &entry->lru, nid, memcg); + rcu_read_unlock(); +} + +static void zswap_lru_putback(struct list_lru *list_lru, + struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + spinlock_t *lock = &list_lru->node[nid].lock; + struct mem_cgroup *memcg; + + rcu_read_lock(); + memcg = mem_cgroup_from_entry(entry); + spin_lock(lock); + /* we cannot use list_lru_add here, because it increments node's lru count */ + list_lru_putback(list_lru, &entry->lru, nid, memcg); + spin_unlock(lock); + rcu_read_unlock(); +} + /********************************* * rbtree functions **********************************/ @@ -396,9 +477,7 @@ static void zswap_free_entry(struct zswap_entry *entry) if (!entry->length) atomic_dec(&zswap_same_filled_pages); else { - spin_lock(&entry->pool->lru_lock); - list_del(&entry->lru); - spin_unlock(&entry->pool->lru_lock); + zswap_lru_del(&entry->pool->list_lru, entry); zpool_free(zswap_find_zpool(entry), entry->handle); zswap_pool_put(entry->pool); } @@ -632,21 +711,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree, zswap_entry_put(tree, entry); } -static int zswap_reclaim_entry(struct zswap_pool *pool) +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l, + spinlock_t *lock, void *arg) { - struct zswap_entry *entry; + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru); struct zswap_tree *tree; pgoff_t swpoffset; - int ret; + enum lru_status ret = LRU_REMOVED_RETRY; + int writeback_result; - /* Get an entry off the LRU */ - spin_lock(&pool->lru_lock); - if (list_empty(&pool->lru)) { - spin_unlock(&pool->lru_lock); - return -EINVAL; - } - entry = list_last_entry(&pool->lru, struct zswap_entry, lru); - list_del_init(&entry->lru); /* * Once the lru lock is dropped, the entry might get freed. The * swpoffset is copied to the stack, and entry isn't deref'd again @@ -654,28 +727,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool) */ swpoffset = swp_offset(entry->swpentry); tree = zswap_trees[swp_type(entry->swpentry)]; - spin_unlock(&pool->lru_lock); + list_lru_isolate(l, item); + /* + * It's safe to drop the lock here because we return either + * LRU_REMOVED_RETRY or LRU_RETRY. 
+ */ + spin_unlock(lock); /* Check for invalidate() race */ spin_lock(&tree->lock); - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) { - ret = -EAGAIN; + if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) goto unlock; - } + /* Hold a reference to prevent a free during writeback */ zswap_entry_get(entry); spin_unlock(&tree->lock); - ret = zswap_writeback_entry(entry, tree); + writeback_result = zswap_writeback_entry(entry, tree); spin_lock(&tree->lock); - if (ret) { - /* Writeback failed, put entry back on LRU */ - spin_lock(&pool->lru_lock); - list_move(&entry->lru, &pool->lru); - spin_unlock(&pool->lru_lock); + if (writeback_result) { + zswap_reject_reclaim_fail++; + zswap_lru_putback(&entry->pool->list_lru, entry); + ret = LRU_RETRY; goto put_unlock; } + zswap_written_back_pages++; /* * Writeback started successfully, the page now belongs to the @@ -689,27 +766,76 @@ static int zswap_reclaim_entry(struct zswap_pool *pool) zswap_entry_put(tree, entry); unlock: spin_unlock(&tree->lock); - return ret ? -EAGAIN : 0; + spin_lock(lock); + return ret; +} + +static int shrink_memcg(struct mem_cgroup *memcg) +{ + struct zswap_pool *pool; + int nid, shrunk = 0; + + /* + * Skip zombies because their LRUs are reparented and we would be + * reclaiming from the parent instead of the dead memcg. + */ + if (memcg && !mem_cgroup_online(memcg)) + return -ENOENT; + + pool = zswap_pool_current_get(); + if (!pool) + return -EINVAL; + + for_each_node_state(nid, N_NORMAL_MEMORY) { + unsigned long nr_to_walk = 1; + + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg, + &shrink_memcg_cb, NULL, &nr_to_walk); + } + zswap_pool_put(pool); + return shrunk ? 0 : -EAGAIN; } static void shrink_worker(struct work_struct *w) { struct zswap_pool *pool = container_of(w, typeof(*pool), shrink_work); + struct mem_cgroup *memcg; int ret, failures = 0; + /* global reclaim will select cgroup in a round-robin fashion. */ do { - ret = zswap_reclaim_entry(pool); - if (ret) { - zswap_reject_reclaim_fail++; - if (ret != -EAGAIN) - break; + spin_lock(&zswap_pools_lock); + memcg = pool->next_shrink = + mem_cgroup_iter_online(NULL, pool->next_shrink, NULL, true); + + /* full round trip */ + if (!memcg) { + spin_unlock(&zswap_pools_lock); if (++failures == MAX_RECLAIM_RETRIES) break; + + goto resched; } + + /* + * Acquire an extra reference to the iterated memcg in case the + * original reference is dropped by the zswap offlining callback. 
+ */ + css_get(&memcg->css); + spin_unlock(&zswap_pools_lock); + + ret = shrink_memcg(memcg); + mem_cgroup_put(memcg); + + if (ret == -EINVAL) + break; + if (ret && ++failures == MAX_RECLAIM_RETRIES) + break; + +resched: cond_resched(); } while (!zswap_can_accept()); - zswap_pool_put(pool); } static struct zswap_pool *zswap_pool_create(char *type, char *compressor) @@ -767,8 +893,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) */ kref_init(&pool->kref); INIT_LIST_HEAD(&pool->list); - INIT_LIST_HEAD(&pool->lru); - spin_lock_init(&pool->lru_lock); + list_lru_init_memcg(&pool->list_lru, NULL); INIT_WORK(&pool->shrink_work, shrink_worker); zswap_pool_debug("created", pool); @@ -834,6 +959,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool) cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); free_percpu(pool->acomp_ctx); + list_lru_destroy(&pool->list_lru); + + spin_lock(&zswap_pools_lock); + mem_cgroup_put(pool->next_shrink); + pool->next_shrink = NULL; + spin_unlock(&zswap_pools_lock); + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++) zpool_destroy_pool(pool->zpools[i]); kfree(pool); @@ -1081,7 +1213,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry, /* try to allocate swap cache page */ mpol = get_task_policy(current); page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol, - NO_INTERLEAVE_INDEX, &page_was_allocated); + NO_INTERLEAVE_INDEX, &page_was_allocated, true); if (!page) { ret = -ENOMEM; goto fail; @@ -1152,7 +1284,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry, /* start writeback */ __swap_writepage(page, &wbc); put_page(page); - zswap_written_back_pages++; return ret; @@ -1209,6 +1340,7 @@ bool zswap_store(struct folio *folio) struct scatterlist input, output; struct crypto_acomp_ctx *acomp_ctx; struct obj_cgroup *objcg = NULL; + struct mem_cgroup *memcg = NULL; struct zswap_pool *pool; struct zpool *zpool; unsigned int dlen = PAGE_SIZE; @@ -1240,15 +1372,15 @@ bool zswap_store(struct folio *folio) zswap_invalidate_entry(tree, dupentry); } spin_unlock(&tree->lock); - - /* - * XXX: zswap reclaim does not work with cgroups yet. Without a - * cgroup-aware entry LRU, we will push out entries system-wide based on - * local cgroup limits. 
-	 */
 	objcg = get_obj_cgroup_from_folio(folio);
-	if (objcg && !obj_cgroup_may_zswap(objcg))
-		goto reject;
+	if (objcg && !obj_cgroup_may_zswap(objcg)) {
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		if (shrink_memcg(memcg)) {
+			mem_cgroup_put(memcg);
+			goto reject;
+		}
+		mem_cgroup_put(memcg);
+	}
 
 	/* reclaim space if needed */
 	if (zswap_is_full()) {
@@ -1265,7 +1397,7 @@ bool zswap_store(struct folio *folio)
 	}
 
 	/* allocate entry */
-	entry = zswap_entry_cache_alloc(GFP_KERNEL);
+	entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
 	if (!entry) {
 		zswap_reject_kmemcache_fail++;
 		goto reject;
@@ -1292,6 +1424,15 @@ bool zswap_store(struct folio *folio)
 	if (!entry->pool)
 		goto freepage;
 
+	if (objcg) {
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
+			mem_cgroup_put(memcg);
+			goto put_pool;
+		}
+		mem_cgroup_put(memcg);
+	}
+
 	/* compress */
 	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
 
@@ -1370,9 +1511,8 @@ bool zswap_store(struct folio *folio)
 		zswap_invalidate_entry(tree, dupentry);
 	}
 	if (entry->length) {
-		spin_lock(&entry->pool->lru_lock);
-		list_add(&entry->lru, &entry->pool->lru);
-		spin_unlock(&entry->pool->lru_lock);
+		INIT_LIST_HEAD(&entry->lru);
+		zswap_lru_add(&entry->pool->list_lru, entry);
 	}
 	spin_unlock(&tree->lock);
 
@@ -1385,6 +1525,7 @@ bool zswap_store(struct folio *folio)
 
 put_dstmem:
 	mutex_unlock(acomp_ctx->mutex);
+put_pool:
 	zswap_pool_put(entry->pool);
 freepage:
 	zswap_entry_cache_free(entry);
@@ -1479,9 +1620,8 @@ bool zswap_load(struct folio *folio)
 		zswap_invalidate_entry(tree, entry);
 		folio_mark_dirty(folio);
 	} else if (entry->length) {
-		spin_lock(&entry->pool->lru_lock);
-		list_move(&entry->lru, &entry->pool->lru);
-		spin_unlock(&entry->pool->lru_lock);
+		zswap_lru_del(&entry->pool->list_lru, entry);
+		zswap_lru_add(&entry->pool->list_lru, entry);
 	}
 	zswap_entry_put(tree, entry);
 	spin_unlock(&tree->lock);
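
A note on the mechanism in the patch above: shrink_memcg() leans on the kernel's list_lru walk contract. list_lru_walk_one() scans one node's LRU for one memcg from the cold end and hands each entry to a callback (shrink_memcg_cb() here), which answers with an lru_status code: removed, retry, or rotate. The following is a minimal userspace model of that contract, useful only for following the control flow; every name in it is an illustrative stand-in, not kernel API, and the real list_lru's locking and isolation details are omitted.

#include <stdio.h>
#include <stdlib.h>

/* Userspace stand-ins for the kernel's lru_status return codes. */
enum model_lru_status { MODEL_LRU_REMOVED, MODEL_LRU_RETRY, MODEL_LRU_ROTATE };

struct model_entry {
	int id;
	struct model_entry *prev, *next;
};

struct model_lru {
	struct model_entry head;	/* sentinel: head.next is the coldest entry */
};

static void model_lru_init(struct model_lru *lru)
{
	lru->head.prev = lru->head.next = &lru->head;
}

static void model_lru_del(struct model_entry *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static void model_lru_add(struct model_lru *lru, struct model_entry *e)
{
	/* new entries go to the tail; reclaim scans from the head */
	e->prev = lru->head.prev;
	e->next = &lru->head;
	lru->head.prev->next = e;
	lru->head.prev = e;
}

typedef enum model_lru_status (*model_walk_cb)(struct model_entry *e);

/*
 * Visit at most *nr_to_walk entries from the cold end and apply cb to each,
 * loosely mimicking list_lru_walk_one(). Returns how many entries the
 * callback reclaimed.
 */
static int model_lru_walk(struct model_lru *lru, model_walk_cb cb,
			  unsigned long *nr_to_walk)
{
	struct model_entry *e = lru->head.next;
	int reclaimed = 0;

	while (e != &lru->head && *nr_to_walk) {
		struct model_entry *next = e->next;

		(*nr_to_walk)--;
		switch (cb(e)) {
		case MODEL_LRU_REMOVED:
			reclaimed++;
			break;
		case MODEL_LRU_RETRY:	/* entry kept; caller may retry later */
			return reclaimed;
		case MODEL_LRU_ROTATE:	/* real list_lru would move it to the tail; elided */
			break;
		}
		e = next;
	}
	return reclaimed;
}

/* Stand-in for shrink_memcg_cb(): pretend writeback always succeeds. */
static enum model_lru_status writeback_cb(struct model_entry *e)
{
	printf("wrote back entry %d\n", e->id);
	model_lru_del(e);
	free(e);
	return MODEL_LRU_REMOVED;
}

int main(void)
{
	struct model_lru lru;
	unsigned long nr_to_walk = 1;	/* shrink_memcg() also walks one entry at a time */
	int i;

	model_lru_init(&lru);
	for (i = 0; i < 3; i++) {
		struct model_entry *e = malloc(sizeof(*e));

		if (!e)
			return 1;
		e->id = i;
		model_lru_add(&lru, e);
	}
	printf("reclaimed %d\n", model_lru_walk(&lru, writeback_cb, &nr_to_walk));

	while (lru.head.next != &lru.head) {	/* drop the rest */
		struct model_entry *e = lru.head.next;

		model_lru_del(e);
		free(e);
	}
	return 0;
}

With nr_to_walk fixed at 1, each walk reclaims at most one entry per node, which is why shrink_memcg() reports -EAGAIN when a full pass frees nothing.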
From patchwork Mon Nov 27 23:45:58 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 170505
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v7 4/6] mm: memcg: add per-memcg zswap writeback stat
Date: Mon, 27 Nov 2023 15:45:58 -0800
Message-Id: <20231127234600.2971029-5-nphamcs@gmail.com>
In-Reply-To: <20231127234600.2971029-1-nphamcs@gmail.com>
References: <20231127234600.2971029-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

Since zswap now writes back pages from memcg-specific LRUs, a new stat
is needed to show the writeback count for each memcg.
Suggested-by: Nhat Pham
Signed-off-by: Domenico Cerasuolo
Signed-off-by: Nhat Pham
---
 include/linux/vm_event_item.h | 1 +
 mm/memcontrol.c               | 1 +
 mm/vmstat.c                   | 1 +
 mm/zswap.c                    | 3 +++
 4 files changed, 6 insertions(+)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index d1b847502f09..f4569ad98edf 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -142,6 +142,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_ZSWAP
 	ZSWPIN,
 	ZSWPOUT,
+	ZSWP_WB,
 #endif
 #ifdef CONFIG_X86
 	DIRECT_MAP_LEVEL2_SPLIT,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c0f3f971179..f88c8fd03689 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -703,6 +703,7 @@ static const unsigned int memcg_vm_event_stat[] = {
 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
 	ZSWPIN,
 	ZSWPOUT,
+	ZSWP_WB,
 #endif
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	THP_FAULT_ALLOC,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index afa5a38fcc9c..2249f85e4a87 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1401,6 +1401,7 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_ZSWAP
 	"zswpin",
 	"zswpout",
+	"zswp_wb",
 #endif
 #ifdef CONFIG_X86
 	"direct_map_level2_splits",
diff --git a/mm/zswap.c b/mm/zswap.c
index 5e397fc1f375..6a761753f979 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -754,6 +754,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 	}
 	zswap_written_back_pages++;
 
+	if (entry->objcg)
+		count_objcg_event(entry->objcg, ZSWP_WB);
+
 	/*
 	 * Writeback started successfully, the page now belongs to the
 	 * swapcache. Drop the entry from zswap - unless invalidate already
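
With the stat in place, the writeback count is visible globally in /proc/vmstat (key "zswp_wb") and per cgroup as the "zswp_wb" key in memory.stat. Below is a minimal standalone reader of the per-cgroup key, a sketch that reimplements what the selftest helper cg_read_key_long() does rather than calling it; the cgroup path is a placeholder and assumes cgroup2 is mounted at /sys/fs/cgroup.

#include <stdio.h>

/* Return the zswp_wb count for the given cgroup directory, or -1 on error. */
static long read_zswp_wb(const char *cgroup_dir)
{
	char path[512], line[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "%s/memory.stat", cgroup_dir);
	f = fopen(path, "r");
	if (!f)
		return -1;
	/* memory.stat is a flat "key value" list, one entry per line */
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "zswp_wb %ld", &val) == 1)
			break;
	}
	fclose(f);
	return val;
}

int main(void)
{
	/* placeholder group name: point this at an existing cgroup2 group */
	long wb = read_zswp_wb("/sys/fs/cgroup/example-group");

	if (wb < 0)
		fprintf(stderr, "zswp_wb not found (kernel too old, or bad path?)\n");
	else
		printf("zswp_wb: %ld\n", wb);
	return 0;
}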
From patchwork Mon Nov 27 23:45:59 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 170507
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v7 5/6] selftests: cgroup: update per-memcg zswap writeback selftest
Date: Mon, 27 Nov 2023 15:45:59 -0800
Message-Id: <20231127234600.2971029-6-nphamcs@gmail.com>
In-Reply-To: <20231127234600.2971029-1-nphamcs@gmail.com>
References: <20231127234600.2971029-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

The memcg-zswap self test is updated to adjust to the behavior change
implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
where zswap performs writeback for a specific memcg.

Signed-off-by: Domenico Cerasuolo
Signed-off-by: Nhat Pham
---
 tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
 1 file changed, 50 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index c99d2adaca3f..47fdaa146443 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
 	return read_int("/sys/kernel/debug/zswap/stored_pages", value);
 }
 
-static int get_zswap_written_back_pages(size_t *value)
+static int get_cg_wb_count(const char *cg)
 {
-	return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
+	return cg_read_key_long(cg, "memory.stat", "zswp_wb");
 }
 
 static long get_zswpout(const char *cgroup)
@@ -73,6 +73,24 @@ static int allocate_bytes(const char *cgroup, void *arg)
 	return 0;
 }
 
+static char *setup_test_group_1M(const char *root, const char *name)
+{
+	char *group_name = cg_name(root, name);
+
+	if (!group_name)
+		return NULL;
+	if (cg_create(group_name))
+		goto fail;
+	if (cg_write(group_name, "memory.max", "1M")) {
+		cg_destroy(group_name);
+		goto fail;
+	}
+	return group_name;
+fail:
+	free(group_name);
+	return NULL;
+}
+
 /*
  * Sanity test to check that pages are written into zswap.
  */
@@ -117,43 +135,51 @@ static int test_zswap_usage(const char *root)
 
 /*
  * When trying to store a memcg page in zswap, if the memcg hits its memory
- * limit in zswap, writeback should not be triggered.
- *
- * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
- * not zswap"). Needs to be revised when a per memcg writeback mechanism is
- * implemented.
+ * limit in zswap, writeback should affect only the zswapped pages of that
+ * memcg.
  */
 static int test_no_invasive_cgroup_shrink(const char *root)
 {
-	size_t written_back_before, written_back_after;
 	int ret = KSFT_FAIL;
-	char *test_group;
+	size_t control_allocation_size = MB(10);
+	char *control_allocation, *wb_group = NULL, *control_group = NULL;
 
 	/* Set up */
-	test_group = cg_name(root, "no_shrink_test");
-	if (!test_group)
-		goto out;
-	if (cg_create(test_group))
+	wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
+	if (!wb_group)
+		return KSFT_FAIL;
+	if (cg_write(wb_group, "memory.zswap.max", "10K"))
 		goto out;
-	if (cg_write(test_group, "memory.max", "1M"))
+	control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
+	if (!control_group)
 		goto out;
-	if (cg_write(test_group, "memory.zswap.max", "10K"))
+
+	/* Push some control_group memory into zswap */
+	if (cg_enter_current(control_group))
 		goto out;
-	if (get_zswap_written_back_pages(&written_back_before))
+	control_allocation = malloc(control_allocation_size);
+	for (int i = 0; i < control_allocation_size; i += 4095)
+		control_allocation[i] = 'a';
+	if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
 		goto out;
 
-	/* Allocate 10x memory.max to push memory into zswap */
-	if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
+	/* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
+	if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
 		goto out;
 
-	/* Verify that no writeback happened because of the memcg allocation */
-	if (get_zswap_written_back_pages(&written_back_after))
-		goto out;
-	if (written_back_after == written_back_before)
+	/* Verify that only zswapped memory from wb_group has been written back */
+	if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
 		ret = KSFT_PASS;
 out:
-	cg_destroy(test_group);
-	free(test_group);
+	cg_enter_current(root);
+	if (control_group) {
+		cg_destroy(control_group);
+		free(control_group);
+	}
+	cg_destroy(wb_group);
+	free(wb_group);
+	if (control_allocation)
+		free(control_allocation);
 	return ret;
 }
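
A detail of the control allocation above that is easy to miss: the buffer is touched with a stride of 4095 bytes, one byte short of the usual 4 KiB page size, so at least one store lands in every page and the whole buffer is faulted in, which is what lets the kernel later push it to zswap. The standalone sketch below (not part of the patch, and assuming 4 KiB pages) demonstrates that every page gets touched.

#include <stdio.h>
#include <stdlib.h>

#define BUF_SIZE  (10 * 1024 * 1024)	/* matches the test's 10M control buffer */
#define PAGE_SIZE 4096			/* assumed page size */

int main(void)
{
	char *buf = calloc(1, BUF_SIZE);	/* zeroed, so only our stores read 'a' */
	size_t touched = 0, p, off, i;

	if (!buf)
		return 1;
	/* same access pattern as the test's control allocation */
	for (i = 0; i < BUF_SIZE; i += 4095)
		buf[i] = 'a';
	/* count pages that received at least one store: expect all of them */
	for (p = 0; p < BUF_SIZE; p += PAGE_SIZE) {
		for (off = 0; off < PAGE_SIZE; off++) {
			if (buf[p + off] == 'a') {
				touched++;
				break;
			}
		}
	}
	printf("pages touched: %zu of %d\n", touched, BUF_SIZE / PAGE_SIZE);
	free(buf);
	return 0;
}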