mm: kill [add|del]_page_to_lru_list()

Message ID 20230609013901.79250-1-wangkefeng.wang@huawei.com
State New
Series mm: kill [add|del]_page_to_lru_list()

Commit Message

Kefeng Wang June 9, 2023, 1:39 a.m. UTC
  Directly call lruvec_del_folio(), and drop unused page interfaces.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm_inline.h | 12 ------------
 mm/compaction.c           |  2 +-
 2 files changed, 1 insertion(+), 13 deletions(-)
  

Comments

Yu Zhao June 9, 2023, 5:14 p.m. UTC | #1
On Thu, Jun 8, 2023 at 7:23 PM Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
> Directly call lruvec_del_folio(), and drop unused page interfaces.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Acked-by: Yu Zhao <yuzhao@google.com>
  
Matthew Wilcox June 9, 2023, 5:18 p.m. UTC | #2
On Fri, Jun 09, 2023 at 09:39:01AM +0800, Kefeng Wang wrote:
> Directly call lruvec_del_folio(), and drop unused page interfaces.

Convert isolate_migratepages_block() to actually use folios and
then we can kill the interfaces.

> +++ b/mm/compaction.c
> @@ -1145,7 +1145,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  			low_pfn += compound_nr(page) - 1;
>  
>  		/* Successfully isolated */
> -		del_page_from_lru_list(page, lruvec);
> +		lruvec_del_folio(lruvec, page_folio(page));

This kind of thing is not encouraged.  It's just churn and gets in
the way of actual conversions.
  
Kefeng Wang June 12, 2023, 1:08 a.m. UTC | #3
On 2023/6/10 1:18, Matthew Wilcox wrote:
> On Fri, Jun 09, 2023 at 09:39:01AM +0800, Kefeng Wang wrote:
>> Directly call lruvec_del_folio(), and drop unused page interfaces.
> 
> Convert isolate_migratepages_block() to actually use folios and
> then we can kill the interfaces.
> 
>> +++ b/mm/compaction.c
>> @@ -1145,7 +1145,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>   			low_pfn += compound_nr(page) - 1;
>>   
>>   		/* Successfully isolated */
>> -		del_page_from_lru_list(page, lruvec);
>> +		lruvec_del_folio(lruvec, page_folio(page));
> 
> This kind of thing is not encouraged.  It's just churn and gets in
> the way of actual conversions.

Sure, thanks for your suggestion; I will convert
isolate_migratepages_block() first.
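For readers outside mm: the interfaces this patch removes are pure forwarding wrappers, so every call site can substitute the direct folio call with identical behavior. The toy userspace model below illustrates that equivalence; it is NOT kernel code. The list helpers and the `struct page` / `struct folio` / `struct lruvec` definitions are deliberately minimal stand-ins for the real kernel types, kept only to make the wrapper relationship compilable.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel types; illustrative only. */
struct list_head { struct list_head *prev, *next; };

struct folio { struct list_head lru; };
/* Model "a folio is (at least) a head page" by embedding. */
struct page { struct folio folio; };

struct lruvec { struct list_head list; };

static void list_init(struct list_head *head)
{
	head->prev = head->next = head;
}

static void list_add(struct list_head *entry, struct list_head *head)
{
	entry->next = head->next;
	entry->prev = head;
	head->next->prev = entry;
	head->next = entry;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	entry->prev = entry->next = NULL;
}

/* Stand-in for page_folio(): resolve a page to its containing folio. */
static struct folio *page_folio(struct page *page)
{
	return &page->folio;
}

static void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
{
	list_add(&folio->lru, &lruvec->list);
}

static void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
{
	(void)lruvec;	/* the real helper also updates lruvec stats */
	list_del(&folio->lru);
}

/* The wrapper the patch removes: it only forwards to the folio API. */
static void del_page_from_lru_list(struct page *page, struct lruvec *lruvec)
{
	lruvec_del_folio(lruvec, page_folio(page));
}

/* Show that the wrapper and the direct call leave identical state. */
static int demo(void)
{
	struct lruvec lruvec;
	struct page page;

	list_init(&lruvec.list);

	/* Via the wrapper being removed... */
	lruvec_add_folio(&lruvec, page_folio(&page));
	assert(lruvec.list.next == &page.folio.lru);
	del_page_from_lru_list(&page, &lruvec);
	assert(lruvec.list.next == &lruvec.list);

	/* ...and via the direct call the patch substitutes. */
	lruvec_add_folio(&lruvec, page_folio(&page));
	lruvec_del_folio(&lruvec, page_folio(&page));
	assert(lruvec.list.next == &lruvec.list);

	return 0;
}
```

Compiling this with a `main()` that calls `demo()` runs both paths; since `del_page_from_lru_list()` does nothing beyond the `lruvec_del_folio(lruvec, page_folio(page))` call, the substitution in mm/compaction.c is behavior-preserving, which is the premise of the patch (and of Matthew's point that only a full folio conversion of the caller adds real value).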
  

Patch

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 0e1d239a882c..e9cdeb290841 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -323,12 +323,6 @@  void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
 		list_add(&folio->lru, &lruvec->lists[lru]);
 }
 
-static __always_inline void add_page_to_lru_list(struct page *page,
-				struct lruvec *lruvec)
-{
-	lruvec_add_folio(lruvec, page_folio(page));
-}
-
 static __always_inline
 void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 {
@@ -357,12 +351,6 @@  void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 			-folio_nr_pages(folio));
 }
 
-static __always_inline void del_page_from_lru_list(struct page *page,
-				struct lruvec *lruvec)
-{
-	lruvec_del_folio(lruvec, page_folio(page));
-}
-
 #ifdef CONFIG_ANON_VMA_NAME
 /*
  * mmap_lock should be read-locked when calling anon_vma_name(). Caller should
diff --git a/mm/compaction.c b/mm/compaction.c
index 3398ef3a55fe..66b442d20d01 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1145,7 +1145,7 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			low_pfn += compound_nr(page) - 1;
 
 		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec);
+		lruvec_del_folio(lruvec, page_folio(page));
 		mod_node_page_state(page_pgdat(page),
 				NR_ISOLATED_ANON + page_is_file_lru(page),
 				thp_nr_pages(page));