From patchwork Sat Apr 15 09:27:16 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 83658
From: Kefeng Wang
To: Andrew Morton, "Matthew Wilcox (Oracle)"
Cc: SeongJae Park, Hugh Dickins, Kefeng Wang
Subject: [PATCH] mm: rename reclaim_pages() to reclaim_folios()
Date: Sat, 15 Apr 2023 17:27:16 +0800
Message-ID: <20230415092716.61970-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3

Commit a83f0551f496 ("mm/vmscan: convert reclaim_pages() to use a folio")
changed the argument name from page_list to folio_list, but not the
declaration in swap.h. Correct the declaration and rename the function to
reclaim_folios() to match.
Signed-off-by: Kefeng Wang
---
 include/linux/swap.h | 2 +-
 mm/damon/paddr.c     | 2 +-
 mm/madvise.c         | 4 ++--
 mm/shmem.c           | 4 ++--
 mm/vmscan.c          | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 7f7d5b9ddf7e..8c8c6ceaa462 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -442,7 +442,7 @@ extern unsigned long shrink_all_memory(unsigned long nr_pages);
 extern int vm_swappiness;
 long remove_mapping(struct address_space *mapping, struct folio *folio);
 
-extern unsigned long reclaim_pages(struct list_head *page_list);
+unsigned long reclaim_folios(struct list_head *folio_list);
 #ifdef CONFIG_NUMA
 extern int node_reclaim_mode;
 extern int sysctl_min_unmapped_ratio;
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index dd9c33fbe805..840d25ad9e59 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -255,7 +255,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 		list_add(&folio->lru, &folio_list);
 		folio_put(folio);
 	}
-	applied = reclaim_pages(&folio_list);
+	applied = reclaim_folios(&folio_list);
 	cond_resched();
 	return applied * PAGE_SIZE;
 }
diff --git a/mm/madvise.c b/mm/madvise.c
index b5ffbaf616f5..bfc683de85ef 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -417,7 +417,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 huge_unlock:
 		spin_unlock(ptl);
 		if (pageout)
-			reclaim_pages(&folio_list);
+			reclaim_folios(&folio_list);
 		return 0;
 	}
 
@@ -513,7 +513,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
 	if (pageout)
-		reclaim_pages(&folio_list);
+		reclaim_folios(&folio_list);
 	cond_resched();
 
 	return 0;
diff --git a/mm/shmem.c b/mm/shmem.c
index 16378b281a5d..bdb2948a149f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2381,7 +2381,7 @@ static void shmem_isolate_pages_range(struct address_space *mapping, loff_t star
 		folio_put(folio);
 
 		/*
-		 * Prepare the folios to be passed to reclaim_pages().
+		 * Prepare the folios to be passed to reclaim_folios().
 		 * VM can't reclaim a folio unless young bit is
 		 * cleared in its flags.
 		 */
@@ -2406,7 +2406,7 @@ static int shmem_fadvise_dontneed(struct address_space *mapping, loff_t start,
 
 	lru_add_drain();
 	shmem_isolate_pages_range(mapping, start, end, &folio_list);
-	reclaim_pages(&folio_list);
+	reclaim_folios(&folio_list);
 
 	return 0;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2cd21e1d5849..b218c8a6244f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2786,7 +2786,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 	return nr_reclaimed;
 }
 
-unsigned long reclaim_pages(struct list_head *folio_list)
+unsigned long reclaim_folios(struct list_head *folio_list)
 {
 	int nid;
 	unsigned int nr_reclaimed = 0;
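
For reviewers skimming the callers above, here is a minimal sketch of the calling convention the rename preserves. It is illustrative only and not part of the patch; the helper name example_reclaim_one() is made up. The caller isolates a folio from the LRU, queues it on its ->lru list head, and hands the whole list to reclaim_folios(), which drains the list and returns the number of base pages reclaimed.

	#include <linux/list.h>
	#include <linux/mm.h>
	#include <linux/swap.h>

	/*
	 * Hypothetical example, not part of this patch: reclaim one folio
	 * that the caller has already isolated from the LRU (e.g. via
	 * folio_isolate_lru()), mirroring the damon_pa_pageout() pattern
	 * in the hunk above.
	 */
	static unsigned long example_reclaim_one(struct folio *folio)
	{
		LIST_HEAD(folio_list);

		/* Queue the isolated folio on its ->lru list head. */
		list_add(&folio->lru, &folio_list);

		/*
		 * reclaim_folios() empties folio_list and returns the
		 * number of base pages it managed to reclaim.
		 */
		return reclaim_folios(&folio_list);
	}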