[v4,4/4] mm/swap: Convert deactivate_page() to folio_deactivate()
Commit Message
deactivate_page() has already been converted to use folios; this change
converts it to take in a folio argument instead of calling page_folio().
It also renames the function to folio_deactivate() to be more consistent
with other folio functions.
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: SeongJae Park <sj@kernel.org>
---
include/linux/swap.h |  2 +-
mm/damon/paddr.c     |  2 +-
mm/madvise.c         |  4 ++--
mm/swap.c            | 14 ++++++--------
4 files changed, 10 insertions(+), 12 deletions(-)
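To illustrate the call-site change described in the commit message, here is a minimal sketch of a caller that already holds a struct folio. The function name is hypothetical and not part of this patch; only folio_deactivate() and the old deactivate_page() calling convention come from the series itself.

#include <linux/mm.h>
#include <linux/swap.h>

/* Hypothetical caller, for illustration only; not taken from this patch. */
static void example_drop_folio_from_active_list(struct folio *folio)
{
	/*
	 * Before this patch the same call site would have been:
	 *	deactivate_page(&folio->page);
	 */
	folio_deactivate(folio);
}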
Comments
On Wed, Dec 21, 2022 at 11:10 AM Vishal Moola (Oracle)
<vishal.moola@gmail.com> wrote:
>
> deactivate_page() has already been converted to use folios; this change
> converts it to take in a folio argument instead of calling page_folio().
> It also renames the function to folio_deactivate() to be more consistent
> with other folio functions.
There is one more caller of deactivate_page() in mm/vmscan.c.
Please git grep.
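The remaining mm/vmscan.c call site the reviewer points at is not quoted in this thread, so the following is only an assumption about what such a follow-up conversion could look like for a caller that still has just a struct page in hand: page_folio() is used to get the containing folio before calling the renamed function. The helper name is made up for illustration.

#include <linux/mm.h>
#include <linux/swap.h>

/* Hypothetical page-only caller; not taken from mm/vmscan.c. */
static void example_deactivate_from_page(struct page *page)
{
	/* page_folio() returns the folio that contains @page. */
	folio_deactivate(page_folio(page));
}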
@@ -401,7 +401,7 @@ extern void lru_add_drain(void);
extern void lru_add_drain_cpu(int cpu);
extern void lru_add_drain_cpu_zone(struct zone *zone);
extern void lru_add_drain_all(void);
-extern void deactivate_page(struct page *page);
+void folio_deactivate(struct folio *folio);
void folio_mark_lazyfree(struct folio *folio);
extern void swap_setup(void);
@@ -297,7 +297,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
if (mark_accessed)
folio_mark_accessed(folio);
else
- deactivate_page(&folio->page);
+ folio_deactivate(folio);
folio_put(folio);
applied += folio_nr_pages(folio);
}
@@ -416,7 +416,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
list_add(&folio->lru, &folio_list);
}
} else
- deactivate_page(&folio->page);
+ folio_deactivate(folio);
huge_unlock:
spin_unlock(ptl);
if (pageout)
@@ -510,7 +510,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
list_add(&folio->lru, &folio_list);
}
} else
- deactivate_page(&folio->page);
+ folio_deactivate(folio);
}
arch_leave_lazy_mmu_mode();
@@ -733,17 +733,15 @@ void deactivate_file_folio(struct folio *folio)
}
/*
- * deactivate_page - deactivate a page
- * @page: page to deactivate
+ * folio_deactivate - deactivate a folio
+ * @folio: folio to deactivate
*
- * deactivate_page() moves @page to the inactive list if @page was on the active
- * list and was not an unevictable page. This is done to accelerate the reclaim
- * of @page.
+ * folio_deactivate() moves @folio to the inactive list if @folio was on the
+ * active list and was not unevictable. This is done to accelerate the
+ * reclaim of @folio.
*/
-void deactivate_page(struct page *page)
+void folio_deactivate(struct folio *folio)
{
- struct folio *folio = page_folio(page);
-
if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
(folio_test_active(folio) || lru_gen_enabled())) {
struct folio_batch *fbatch;