[2/4] mm: migrate: convert numamigrate_isolate_page() to numamigrate_isolate_folio()

Message ID 20230802095346.87449-3-wangkefeng.wang@huawei.com
State New
Series mm: migrate: more folio conversion

Commit Message

Kefeng Wang Aug. 2, 2023, 9:53 a.m. UTC
  Rename numamigrate_isolate_page() to numamigrate_isolate_folio(), then
make it take a folio and use the folio API to save compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)
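
For context, a hedged sketch of why the folio API saves compound_head()
calls (simplified, not verbatim kernel source; the _sketch names are
illustrative): page-based helpers must first resolve a possible tail
page to its head page, whereas a struct folio is never a tail page, so
the lookup happens once at the call boundary.

	/* Roughly what a page-based helper has to do: */
	static inline void put_page_sketch(struct page *page)
	{
		struct folio *folio = page_folio(page);	/* compound_head() lookup */

		folio_put(folio);
	}

	/*
	 * After this patch the caller converts once via page_folio(page),
	 * and folio_nr_pages(), folio_order(), folio_put() etc. inside
	 * numamigrate_isolate_folio() never repeat that lookup.
	 */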
  

Comments

Matthew Wilcox Aug. 2, 2023, 12:30 p.m. UTC | #1
On Wed, Aug 02, 2023 at 05:53:44PM +0800, Kefeng Wang wrote:
> -static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> +static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
>  {
> -	int nr_pages = thp_nr_pages(page);
> -	int order = compound_order(page);
> +	int nr_pages = folio_nr_pages(folio);
> +	int order = folio_order(folio);
>  
> -	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
> +	VM_BUG_ON_FOLIO(order && !folio_test_pmd_mappable(folio), folio);

I don't know why we have this assertion.  I would be inclined to delete
it as part of generalising the migration code to handle arbitrary sizes
of folio, rather than assert that we only support PMD size folios.

>  	/* Do not migrate THP mapped by multiple processes */
> -	if (PageTransHuge(page) && total_mapcount(page) > 1)
> +	if (folio_test_pmd_mappable(folio) && folio_estimated_sharers(folio) > 1)
>  		return 0;

I don't know if this is the right logic.  We're willing to move folios
mapped by multiple processes, as long as they're smaller than PMD size,
but once they get to PMD size they're magical and can't be moved?
  
Kefeng Wang Aug. 3, 2023, 7:08 a.m. UTC | #2
On 2023/8/2 20:30, Matthew Wilcox wrote:
> On Wed, Aug 02, 2023 at 05:53:44PM +0800, Kefeng Wang wrote:
>> -static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>> +static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
>>   {
>> -	int nr_pages = thp_nr_pages(page);
>> -	int order = compound_order(page);
>> +	int nr_pages = folio_nr_pages(folio);
>> +	int order = folio_order(folio);
>>   
>> -	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>> +	VM_BUG_ON_FOLIO(order && !folio_test_pmd_mappable(folio), folio);
> 
> I don't know why we have this assertion.  I would be inclined to delete
> it as part of generalising the migration code to handle arbitrary sizes
> of folio, rather than assert that we only support PMD size folios.

Ok, will drop it.
> 
>>   	/* Do not migrate THP mapped by multiple processes */
>> -	if (PageTransHuge(page) && total_mapcount(page) > 1)
>> +	if (folio_test_pmd_mappable(folio) && folio_estimated_sharers(folio) > 1)
>>   		return 0;
> 
> I don't know if this is the right logic.  We're willing to move folios
> mapped by multiple processes, as long as they're smaller than PMD size,
> but once they get to PMD size they're magical and can't be moved?

It seems that this logic was introduced by commit 04fa5d6a6547 ("mm:
migrate: check page_count of THP before migrating") and refactored by
340ef3902cf2 ("mm: numa: cleanup flow of transhuge page migration"),


   "Hugh Dickins pointed out that migrate_misplaced_transhuge_page() does
   not check page_count before migrating like base page migration and
   khugepage.  He could not see why this was safe and he is right."

Nowadays migrate_misplaced_transhuge_page() is gone and the base/THP
page migration paths are unified; there is a check (in both old and
new kernels) in migrate_misplaced_page():

  "Don't migrate file pages that are mapped in multiple processes
  with execute permissions as they are probably shared libraries."

We could drop the above check in numamigrate_isolate_page(), but
following 04fa5d6a6547, maybe we should instead disable migration of
pages shared by multiple processes during NUMA balancing for both base
and THP pages.
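
A minimal sketch of that alternative (illustrative only, not part of
this patch): apply the sharers check to folios of every order instead
of special-casing PMD-mappable ones.

	/*
	 * Illustrative: skip NUMA-balancing migration for any folio that
	 * appears to be mapped by more than one process, whatever its
	 * order, rather than only for PMD-mappable folios.
	 */
	if (folio_estimated_sharers(folio) > 1)
		return 0;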


  
Hugh Dickins Aug. 6, 2023, 5:04 a.m. UTC | #3
On Thu, 3 Aug 2023, Kefeng Wang wrote:
> On 2023/8/2 20:30, Matthew Wilcox wrote:
> > On Wed, Aug 02, 2023 at 05:53:44PM +0800, Kefeng Wang wrote:
> >>   	/* Do not migrate THP mapped by multiple processes */
> >> -	if (PageTransHuge(page) && total_mapcount(page) > 1)
> >> +	if (folio_test_pmd_mappable(folio) && folio_estimated_sharers(folio) > 1)
> >>   	return 0;
> > 
> > I don't know if this is the right logic.  We're willing to move folios
> > mapped by multiple processes, as long as they're smaller than PMD size,
> > but once they get to PMD size they're magical and can't be moved?
> 
> It seems that this logic was introduced by commit 04fa5d6a6547 ("mm:
> migrate: check page_count of THP before migrating") and refactored by
> 340ef3902cf2 ("mm: numa: cleanup flow of transhuge page migration"),
> 
> 
>   "Hugh Dickins pointed out that migrate_misplaced_transhuge_page() does
>   not check page_count before migrating like base page migration and
>   khugepage.  He could not see why this was safe and he is right."
> 
> Nowadays migrate_misplaced_transhuge_page() is gone and the base/THP
> page migration paths are unified; there is a check (in both old and
> new kernels) in migrate_misplaced_page():

Right, Mel's comment on safety above comes from a time when
migrate_misplaced_transhuge_page() went its own way, and so did not
reach the careful reference count checking found in (the now named)
folio_migrate_mapping().  If migrate_misplaced_page() is now using
the standard migrate_pages() for small pages and THPs, then there
should no longer be safety concerns.
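
A hedged sketch of the check Hugh refers to, simplified from
folio_migrate_mapping() in mm/migrate.c (locking and the pagecache
update elided):

	int expected_count = folio_expected_refs(mapping, folio) + extra_count;

	if (!mapping) {
		/* Anonymous folio without mapping */
		if (folio_ref_count(folio) != expected_count)
			return -EAGAIN;
	} else if (!folio_ref_freeze(folio, expected_count)) {
		/* An unexpected extra reference aborts the migration */
		return -EAGAIN;
	}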

> 
>  "Don't migrate file pages that are mapped in multiple processes
>  with execute permissions as they are probably shared libraries."
> 
> We could drop the above check in numamigrate_isolate_page(), but
> following 04fa5d6a6547, maybe we should instead disable migration of
> pages shared by multiple processes during NUMA balancing for both base
> and THP pages.

I'm out of touch with the NUMA balancing preferences at present,
but think that you're probably right that it should aim to treat
folios mapped into multiple processes the same way, whether small,
large, or THP (except, of course, that large and THP folios, with
multiple PTEs in the same process, can present more challenges in how
to go about that).

Hugh
  

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index b0c318bc531c..39e0d35bccb3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2476,15 +2476,15 @@  static struct folio *alloc_misplaced_dst_folio(struct folio *src,
 	return __folio_alloc_node(gfp, order, nid);
 }
 
-static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
+static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
 {
-	int nr_pages = thp_nr_pages(page);
-	int order = compound_order(page);
+	int nr_pages = folio_nr_pages(folio);
+	int order = folio_order(folio);
 
-	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
+	VM_BUG_ON_FOLIO(order && !folio_test_pmd_mappable(folio), folio);
 
 	/* Do not migrate THP mapped by multiple processes */
-	if (PageTransHuge(page) && total_mapcount(page) > 1)
+	if (folio_test_pmd_mappable(folio) && folio_estimated_sharers(folio) > 1)
 		return 0;
 
 	/* Avoid migrating to a node that is nearly full */
@@ -2501,18 +2501,18 @@  static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 		return 0;
 	}
 
-	if (!isolate_lru_page(page))
+	if (!folio_isolate_lru(folio))
 		return 0;
 
-	mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_is_file_lru(page),
+	node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
 			    nr_pages);
 
 	/*
-	 * Isolating the page has taken another reference, so the
-	 * caller's reference can be safely dropped without the page
+	 * Isolating the folio has taken another reference, so the
+	 * caller's reference can be safely dropped without the folio
 	 * disappearing underneath us during migration.
 	 */
-	put_page(page);
+	folio_put(folio);
 	return 1;
 }
 
@@ -2546,7 +2546,7 @@  int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	if (page_is_file_lru(page) && PageDirty(page))
 		goto out;
 
-	isolated = numamigrate_isolate_page(pgdat, page);
+	isolated = numamigrate_isolate_folio(pgdat, page_folio(page));
 	if (!isolated)
 		goto out;