[v2] mm: migrate: record the mlocked page status to remove unnecessary lru drain

Message ID c960df0db27ae55b9e9babdcfe842e75fb44ccbc.1697703785.git.baolin.wang@linux.alibaba.com
State New
Series [v2] mm: migrate: record the mlocked page status to remove unnecessary lru drain

Commit Message

Baolin Wang Oct. 20, 2023, 3:33 a.m. UTC
When doing compaction, I found that lru_add_drain() is an obvious hotspot
when migrating pages. The distribution of this hotspot is as follows:
   - 18.75% compact_zone
      - 17.39% migrate_pages
         - 13.79% migrate_pages_batch
            - 11.66% migrate_folio_move
               - 7.02% lru_add_drain
                  + 7.02% lru_add_drain_cpu
               + 3.00% move_to_new_folio
                 1.23% rmap_walk
            + 1.92% migrate_folio_unmap
         + 3.20% migrate_pages_sync
      + 0.90% isolate_migratepages

The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
__unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
immediately, to help build up the correct newpage->mlock_count in
remove_migration_ptes() for mlocked pages. However, if no mlocked pages are
being migrated, we can avoid this lru drain operation, especially in heavy
concurrent scenarios.

So we can record the source pages' mlocked status in migrate_folio_unmap(),
and only drain the lru list in migrate_folio_move() when that status is set.
In addition, the page is already isolated from the lru when migrating, so the
mlocked status checked by folio_test_mlocked() in migrate_folio_unmap() is stable.
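
In essence (excerpted from the diff below), the old state is packed into
dst->private when recording, and tested when moving:

	enum {
		PAGE_WAS_MAPPED = 1 << 0,
		PAGE_WAS_MLOCKED = 1 << 1,
	};

	/* __migrate_folio_record(): pack the source page's old state */
	if (page_was_mapped)
		page_flags |= PAGE_WAS_MAPPED;
	if (page_was_mlocked)
		page_flags |= PAGE_WAS_MLOCKED;
	dst->private = (void *)page_flags;

	/* migrate_folio_move(): drain only when the source was mlocked */
	folio_add_lru(dst);
	if (page_was_mlocked)
		lru_add_drain();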

After this patch, I can see that the hotspot of lru_add_drain() is gone:
   - 9.41% migrate_pages_batch
      - 6.15% migrate_folio_move
         - 3.64% move_to_new_folio
            + 1.80% migrate_folio_extra
            + 1.70% buffer_migrate_folio
         + 1.41% rmap_walk
         + 0.62% folio_add_lru
      + 3.07% migrate_folio_unmap

Meanwhile, the compaction latency shows some improvements when running
thpscale:
                            base                   patched
Amean     fault-both-1      1131.22 (   0.00%)     1112.55 *   1.65%*
Amean     fault-both-3      2489.75 (   0.00%)     2324.15 *   6.65%*
Amean     fault-both-5      3257.37 (   0.00%)     3183.18 *   2.28%*
Amean     fault-both-7      4257.99 (   0.00%)     4079.04 *   4.20%*
Amean     fault-both-12     6614.02 (   0.00%)     6075.60 *   8.14%*
Amean     fault-both-18    10607.78 (   0.00%)     8978.86 *  15.36%*
Amean     fault-both-24    14911.65 (   0.00%)    11619.55 *  22.08%*
Amean     fault-both-30    14954.67 (   0.00%)    14925.66 *   0.19%*
Amean     fault-both-32    16654.87 (   0.00%)    15580.31 *   6.45%*

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Changes from v1:
 - Use separate flags in __migrate_folio_record() to avoid packing flags
at each call site, per Ying.
---
 mm/migrate.c | 47 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 35 insertions(+), 12 deletions(-)
  

Comments

Huang, Ying Oct. 20, 2023, 3:42 a.m. UTC | #1
Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> When doing compaction, I found that lru_add_drain() is an obvious hotspot
> when migrating pages. The distribution of this hotspot is as follows:
>    - 18.75% compact_zone
>       - 17.39% migrate_pages
>          - 13.79% migrate_pages_batch
>             - 11.66% migrate_folio_move
>                - 7.02% lru_add_drain
>                   + 7.02% lru_add_drain_cpu
>                + 3.00% move_to_new_folio
>                  1.23% rmap_walk
>             + 1.92% migrate_folio_unmap
>          + 3.20% migrate_pages_sync
>       + 0.90% isolate_migratepages
>
> The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
> __unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
> immediately, to help build up the correct newpage->mlock_count in
> remove_migration_ptes() for mlocked pages. However, if no mlocked pages are
> being migrated, we can avoid this lru drain operation, especially in heavy
> concurrent scenarios.
>
> So we can record the source pages' mlocked status in migrate_folio_unmap(),
> and only drain the lru list in migrate_folio_move() when that status is set.
> In addition, the page is already isolated from the lru when migrating, so the
> mlocked status checked by folio_test_mlocked() in migrate_folio_unmap() is stable.
>
> After this patch, I can see that the hotspot of lru_add_drain() is gone:
>    - 9.41% migrate_pages_batch
>       - 6.15% migrate_folio_move
>          - 3.64% move_to_new_folio
>             + 1.80% migrate_folio_extra
>             + 1.70% buffer_migrate_folio
>          + 1.41% rmap_walk
>          + 0.62% folio_add_lru
>       + 3.07% migrate_folio_unmap
>
> Meanwhile, the compaction latency shows some improvements when running
> thpscale:
>                             base                   patched
> Amean     fault-both-1      1131.22 (   0.00%)     1112.55 *   1.65%*
> Amean     fault-both-3      2489.75 (   0.00%)     2324.15 *   6.65%*
> Amean     fault-both-5      3257.37 (   0.00%)     3183.18 *   2.28%*
> Amean     fault-both-7      4257.99 (   0.00%)     4079.04 *   4.20%*
> Amean     fault-both-12     6614.02 (   0.00%)     6075.60 *   8.14%*
> Amean     fault-both-18    10607.78 (   0.00%)     8978.86 *  15.36%*
> Amean     fault-both-24    14911.65 (   0.00%)    11619.55 *  22.08%*
> Amean     fault-both-30    14954.67 (   0.00%)    14925.66 *   0.19%*
> Amean     fault-both-32    16654.87 (   0.00%)    15580.31 *   6.45%*
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> Changes from v1:
>  - Use separate flags in __migrate_folio_record() to avoid packing flags
> at each call site, per Ying.
> ---
>  mm/migrate.c | 47 +++++++++++++++++++++++++++++++++++------------
>  1 file changed, 35 insertions(+), 12 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 125194f5af0f..fac96139dbba 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1027,22 +1027,39 @@ union migration_ptr {
>  	struct anon_vma *anon_vma;
>  	struct address_space *mapping;
>  };
> +
> +enum {
> +	PAGE_WAS_MAPPED = 1 << 0,

PAGE_WAS_MAPPED = BIT(0) ?
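
i.e., a minimal sketch using the BIT() macro (from include/linux/bits.h):

	enum {
		PAGE_WAS_MAPPED = BIT(0),
		PAGE_WAS_MLOCKED = BIT(1),
	};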

> +	PAGE_WAS_MLOCKED = 1 << 1,
> +};
> +
>  static void __migrate_folio_record(struct folio *dst,
> -				   unsigned long page_was_mapped,
> +				   unsigned int page_was_mapped,
> +				   unsigned int page_was_mlocked,
>  				   struct anon_vma *anon_vma)
>  {
>  	union migration_ptr ptr = { .anon_vma = anon_vma };
> +	unsigned long page_flags = 0;

page_flags wasn't a good name, it can be confused with page->flags.
Maybe something like "page_attrs"?

> +
> +	if (page_was_mapped)
> +		page_flags |= PAGE_WAS_MAPPED;
> +	if (page_was_mlocked)
> +		page_flags |= PAGE_WAS_MLOCKED;
>  	dst->mapping = ptr.mapping;
> -	dst->private = (void *)page_was_mapped;
> +	dst->private = (void *)page_flags;
>  }
>  
>  static void __migrate_folio_extract(struct folio *dst,
>  				   int *page_was_mappedp,
> +				   int *page_was_mlocked,

Better to use the same naming convention.  Either both have "p" suffix,
or both not.

Otherwise looks good to me.

>  				   struct anon_vma **anon_vmap)
>  {
>  	union migration_ptr ptr = { .mapping = dst->mapping };
> +	unsigned long page_flags = (unsigned long)dst->private;
> +
>  	*anon_vmap = ptr.anon_vma;
> -	*page_was_mappedp = (unsigned long)dst->private;
> +	*page_was_mappedp = page_flags & PAGE_WAS_MAPPED ? 1 : 0;
> +	*page_was_mlocked = page_flags & PAGE_WAS_MLOCKED ? 1 : 0;
>  	dst->mapping = NULL;
>  	dst->private = NULL;
>  }
> @@ -1103,7 +1120,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>  {
>  	struct folio *dst;
>  	int rc = -EAGAIN;
> -	int page_was_mapped = 0;
> +	int page_was_mapped = 0, page_was_mlocked = 0;
>  	struct anon_vma *anon_vma = NULL;
>  	bool is_lru = !__folio_test_movable(src);
>  	bool locked = false;
> @@ -1157,6 +1174,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>  		folio_lock(src);
>  	}
>  	locked = true;
> +	page_was_mlocked = folio_test_mlocked(src);
>  
>  	if (folio_test_writeback(src)) {
>  		/*
> @@ -1206,7 +1224,8 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>  	dst_locked = true;
>  
>  	if (unlikely(!is_lru)) {
> -		__migrate_folio_record(dst, page_was_mapped, anon_vma);
> +		__migrate_folio_record(dst, page_was_mapped,
> +				       page_was_mlocked, anon_vma);
>  		return MIGRATEPAGE_UNMAP;
>  	}
>  
> @@ -1236,7 +1255,8 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>  	}
>  
>  	if (!folio_mapped(src)) {
> -		__migrate_folio_record(dst, page_was_mapped, anon_vma);
> +		__migrate_folio_record(dst, page_was_mapped,
> +				       page_was_mlocked, anon_vma);
>  		return MIGRATEPAGE_UNMAP;
>  	}
>  
> @@ -1261,12 +1281,13 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
>  			      struct list_head *ret)
>  {
>  	int rc;
> -	int page_was_mapped = 0;
> +	int page_was_mapped = 0, page_was_mlocked = 0;
>  	struct anon_vma *anon_vma = NULL;
>  	bool is_lru = !__folio_test_movable(src);
>  	struct list_head *prev;
>  
> -	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
> +	__migrate_folio_extract(dst, &page_was_mapped,
> +				&page_was_mlocked, &anon_vma);
>  	prev = dst->lru.prev;
>  	list_del(&dst->lru);
>  
> @@ -1287,7 +1308,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
>  	 * isolated from the unevictable LRU: but this case is the easiest.
>  	 */
>  	folio_add_lru(dst);
> -	if (page_was_mapped)
> +	if (page_was_mlocked)
>  		lru_add_drain();
>  
>  	if (page_was_mapped)
> @@ -1322,7 +1343,8 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
>  	 */
>  	if (rc == -EAGAIN) {
>  		list_add(&dst->lru, prev);
> -		__migrate_folio_record(dst, page_was_mapped, anon_vma);
> +		__migrate_folio_record(dst, page_was_mapped,
> +				       page_was_mlocked, anon_vma);
>  		return rc;
>  	}
>  
> @@ -1799,10 +1821,11 @@ static int migrate_pages_batch(struct list_head *from,
>  	dst = list_first_entry(&dst_folios, struct folio, lru);
>  	dst2 = list_next_entry(dst, lru);
>  	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
> -		int page_was_mapped = 0;
> +		int page_was_mapped = 0, page_was_mlocked = 0;
>  		struct anon_vma *anon_vma = NULL;
>  
> -		__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
> +		__migrate_folio_extract(dst, &page_was_mapped,
> +					&page_was_mlocked, &anon_vma);
>  		migrate_folio_undo_src(folio, page_was_mapped, anon_vma,
>  				       true, ret_folios);
>  		list_del(&dst->lru);

--
Best Regards,
Huang, Ying
  
Hugh Dickins Oct. 20, 2023, 4:48 a.m. UTC | #2
On Fri, 20 Oct 2023, Baolin Wang wrote:

> When doing compaction, I found that lru_add_drain() is an obvious hotspot
> when migrating pages. The distribution of this hotspot is as follows:
>    - 18.75% compact_zone
>       - 17.39% migrate_pages
>          - 13.79% migrate_pages_batch
>             - 11.66% migrate_folio_move
>                - 7.02% lru_add_drain
>                   + 7.02% lru_add_drain_cpu
>                + 3.00% move_to_new_folio
>                  1.23% rmap_walk
>             + 1.92% migrate_folio_unmap
>          + 3.20% migrate_pages_sync
>       + 0.90% isolate_migratepages
> 
> The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
> __unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
> immediately, to help build up the correct newpage->mlock_count in
> remove_migration_ptes() for mlocked pages. However, if no mlocked pages are
> being migrated, we can avoid this lru drain operation, especially in heavy
> concurrent scenarios.
> 
> So we can record the source pages' mlocked status in migrate_folio_unmap(),
> and only drain the lru list in migrate_folio_move() when that status is set.
> In addition, the page is already isolated from the lru when migrating, so the
> mlocked status checked by folio_test_mlocked() in migrate_folio_unmap() is stable.
> 
> After this patch, I can see that the hotspot of lru_add_drain() is gone:
>    - 9.41% migrate_pages_batch
>       - 6.15% migrate_folio_move
>          - 3.64% move_to_new_folio
>             + 1.80% migrate_folio_extra
>             + 1.70% buffer_migrate_folio
>          + 1.41% rmap_walk
>          + 0.62% folio_add_lru
>       + 3.07% migrate_folio_unmap
> 
> Meanwhile, the compaction latency shows some improvements when running
> thpscale:
>                             base                   patched
> Amean     fault-both-1      1131.22 (   0.00%)     1112.55 *   1.65%*
> Amean     fault-both-3      2489.75 (   0.00%)     2324.15 *   6.65%*
> Amean     fault-both-5      3257.37 (   0.00%)     3183.18 *   2.28%*
> Amean     fault-both-7      4257.99 (   0.00%)     4079.04 *   4.20%*
> Amean     fault-both-12     6614.02 (   0.00%)     6075.60 *   8.14%*
> Amean     fault-both-18    10607.78 (   0.00%)     8978.86 *  15.36%*
> Amean     fault-both-24    14911.65 (   0.00%)    11619.55 *  22.08%*
> Amean     fault-both-30    14954.67 (   0.00%)    14925.66 *   0.19%*
> Amean     fault-both-32    16654.87 (   0.00%)    15580.31 *   6.45%*
> 

Seems a sensible change with good results (I'll conceal how little of
the stats I understand, I expect everyone else understands them: in my
naivety, I'm mainly curious why rmap_walk's 1.23% didn't get a + on it).

> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> Changes from v1:
>  - Use separate flags in __migrate_folio_record() to avoid packing flags
> at each call site, per Ying.
> ---
>  mm/migrate.c | 47 +++++++++++++++++++++++++++++++++++------------
>  1 file changed, 35 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 125194f5af0f..fac96139dbba 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1027,22 +1027,39 @@ union migration_ptr {
>  	struct anon_vma *anon_vma;
>  	struct address_space *mapping;
>  };
> +
> +enum {
> +	PAGE_WAS_MAPPED = 1 << 0,
> +	PAGE_WAS_MLOCKED = 1 << 1,
> +};
> +

I was whispering to myself "I bet someone will suggest BIT()";
and indeed that someone has turned out to be Huang, Ying.

>  static void __migrate_folio_record(struct folio *dst,
> -				   unsigned long page_was_mapped,
> +				   unsigned int page_was_mapped,
> +				   unsigned int page_was_mlocked,
>  				   struct anon_vma *anon_vma)
>  {
>  	union migration_ptr ptr = { .anon_vma = anon_vma };
> +	unsigned long page_flags = 0;

Huang, Ying preferred a different name, me too: old_page_state?

> +
> +	if (page_was_mapped)
> +		page_flags |= PAGE_WAS_MAPPED;
> +	if (page_was_mlocked)
> +		page_flags |= PAGE_WAS_MLOCKED;

What's annoying me about the patch is all this mix of page_was_mapped and
page_was_mlocked variables, then the old_page_state bits.  Can't it be
done with PAGE_WAS_ bits in old_page_state throughout, without any
page_was_mapped and page_was_mlocked variables?
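
For instance, the two helpers might then take a single packed value (only a
sketch, not tested):

	static void __migrate_folio_record(struct folio *dst,
					   int old_page_state,
					   struct anon_vma *anon_vma)
	{
		union migration_ptr ptr = { .anon_vma = anon_vma };

		dst->mapping = ptr.mapping;
		dst->private = (void *)(unsigned long)old_page_state;
	}

	static void __migrate_folio_extract(struct folio *dst,
					    int *old_page_state,
					    struct anon_vma **anon_vmap)
	{
		union migration_ptr ptr = { .mapping = dst->mapping };

		*anon_vmap = ptr.anon_vma;
		*old_page_state = (int)(unsigned long)dst->private;
		dst->mapping = NULL;
		dst->private = NULL;
	}

with callers setting "old_page_state |= PAGE_WAS_MLOCKED;" directly after
folio_test_mlocked(src).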

Hugh
  
Baolin Wang Oct. 21, 2023, 3:25 a.m. UTC | #3
On 10/20/2023 11:42 AM, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> 
>> When doing compaction, I found that lru_add_drain() is an obvious hotspot
>> when migrating pages. The distribution of this hotspot is as follows:
>>     - 18.75% compact_zone
>>        - 17.39% migrate_pages
>>           - 13.79% migrate_pages_batch
>>              - 11.66% migrate_folio_move
>>                 - 7.02% lru_add_drain
>>                    + 7.02% lru_add_drain_cpu
>>                 + 3.00% move_to_new_folio
>>                   1.23% rmap_walk
>>              + 1.92% migrate_folio_unmap
>>           + 3.20% migrate_pages_sync
>>        + 0.90% isolate_migratepages
>>
>> The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
>> __unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
>> immediately, to help build up the correct newpage->mlock_count in
>> remove_migration_ptes() for mlocked pages. However, if no mlocked pages are
>> being migrated, we can avoid this lru drain operation, especially in heavy
>> concurrent scenarios.
>>
>> So we can record the source pages' mlocked status in migrate_folio_unmap(),
>> and only drain the lru list in migrate_folio_move() when that status is set.
>> In addition, the page is already isolated from the lru when migrating, so the
>> mlocked status checked by folio_test_mlocked() in migrate_folio_unmap() is stable.
>>
>> After this patch, I can see that the hotspot of lru_add_drain() is gone:
>>     - 9.41% migrate_pages_batch
>>        - 6.15% migrate_folio_move
>>           - 3.64% move_to_new_folio
>>              + 1.80% migrate_folio_extra
>>              + 1.70% buffer_migrate_folio
>>           + 1.41% rmap_walk
>>           + 0.62% folio_add_lru
>>        + 3.07% migrate_folio_unmap
>>
>> Meanwhile, the compaction latency shows some improvements when running
>> thpscale:
>>                              base                   patched
>> Amean     fault-both-1      1131.22 (   0.00%)     1112.55 *   1.65%*
>> Amean     fault-both-3      2489.75 (   0.00%)     2324.15 *   6.65%*
>> Amean     fault-both-5      3257.37 (   0.00%)     3183.18 *   2.28%*
>> Amean     fault-both-7      4257.99 (   0.00%)     4079.04 *   4.20%*
>> Amean     fault-both-12     6614.02 (   0.00%)     6075.60 *   8.14%*
>> Amean     fault-both-18    10607.78 (   0.00%)     8978.86 *  15.36%*
>> Amean     fault-both-24    14911.65 (   0.00%)    11619.55 *  22.08%*
>> Amean     fault-both-30    14954.67 (   0.00%)    14925.66 *   0.19%*
>> Amean     fault-both-32    16654.87 (   0.00%)    15580.31 *   6.45%*
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>> Changes from v1:
>>   - Use separate flags in __migrate_folio_record() to avoid packing flags
>> at each call site, per Ying.
>> ---
>>   mm/migrate.c | 47 +++++++++++++++++++++++++++++++++++------------
>>   1 file changed, 35 insertions(+), 12 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 125194f5af0f..fac96139dbba 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1027,22 +1027,39 @@ union migration_ptr {
>>   	struct anon_vma *anon_vma;
>>   	struct address_space *mapping;
>>   };
>> +
>> +enum {
>> +	PAGE_WAS_MAPPED = 1 << 0,
> 
> PAGE_WAS_MAPPED = BIT(0) ?

Sure, will do.

> 
>> +	PAGE_WAS_MLOCKED = 1 << 1,
>> +};
>> +
>>   static void __migrate_folio_record(struct folio *dst,
>> -				   unsigned long page_was_mapped,
>> +				   unsigned int page_was_mapped,
>> +				   unsigned int page_was_mlocked,
>>   				   struct anon_vma *anon_vma)
>>   {
>>   	union migration_ptr ptr = { .anon_vma = anon_vma };
>> +	unsigned long page_flags = 0;
> 
> page_flags wasn't a good name, it can be confused with page->flags.

Agree.

> Maybe something like "page_attrs"?

OK, I prefer the 'old_page_state' suggested by Hugh :)

>> +
>> +	if (page_was_mapped)
>> +		page_flags |= PAGE_WAS_MAPPED;
>> +	if (page_was_mlocked)
>> +		page_flags |= PAGE_WAS_MLOCKED;
>>   	dst->mapping = ptr.mapping;
>> -	dst->private = (void *)page_was_mapped;
>> +	dst->private = (void *)page_flags;
>>   }
>>   
>>   static void __migrate_folio_extract(struct folio *dst,
>>   				   int *page_was_mappedp,
>> +				   int *page_was_mlocked,
> 
> Better to use the same naming convention.  Either both have "p" suffix,
> or both not.

OK.

> 
> Otherwise looks good to me.

Thanks for reviewing.
  
Baolin Wang Oct. 21, 2023, 3:31 a.m. UTC | #4
On 10/20/2023 12:48 PM, Hugh Dickins wrote:
> On Fri, 20 Oct 2023, Baolin Wang wrote:
> 
>> When doing compaction, I found that lru_add_drain() is an obvious hotspot
>> when migrating pages. The distribution of this hotspot is as follows:
>>     - 18.75% compact_zone
>>        - 17.39% migrate_pages
>>           - 13.79% migrate_pages_batch
>>              - 11.66% migrate_folio_move
>>                 - 7.02% lru_add_drain
>>                    + 7.02% lru_add_drain_cpu
>>                 + 3.00% move_to_new_folio
>>                   1.23% rmap_walk
>>              + 1.92% migrate_folio_unmap
>>           + 3.20% migrate_pages_sync
>>        + 0.90% isolate_migratepages
>>
>> The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
>> __unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
>> immediately, to help build up the correct newpage->mlock_count in
>> remove_migration_ptes() for mlocked pages. However, if no mlocked pages are
>> being migrated, we can avoid this lru drain operation, especially in heavy
>> concurrent scenarios.
>>
>> So we can record the source pages' mlocked status in migrate_folio_unmap(),
>> and only drain the lru list in migrate_folio_move() when that status is set.
>> In addition, the page is already isolated from the lru when migrating, so the
>> mlocked status checked by folio_test_mlocked() in migrate_folio_unmap() is stable.
>>
>> After this patch, I can see that the hotspot of lru_add_drain() is gone:
>>     - 9.41% migrate_pages_batch
>>        - 6.15% migrate_folio_move
>>           - 3.64% move_to_new_folio
>>              + 1.80% migrate_folio_extra
>>              + 1.70% buffer_migrate_folio
>>           + 1.41% rmap_walk
>>           + 0.62% folio_add_lru
>>        + 3.07% migrate_folio_unmap
>>
>> Meanwhile, the compaction latency shows some improvements when running
>> thpscale:
>>                              base                   patched
>> Amean     fault-both-1      1131.22 (   0.00%)     1112.55 *   1.65%*
>> Amean     fault-both-3      2489.75 (   0.00%)     2324.15 *   6.65%*
>> Amean     fault-both-5      3257.37 (   0.00%)     3183.18 *   2.28%*
>> Amean     fault-both-7      4257.99 (   0.00%)     4079.04 *   4.20%*
>> Amean     fault-both-12     6614.02 (   0.00%)     6075.60 *   8.14%*
>> Amean     fault-both-18    10607.78 (   0.00%)     8978.86 *  15.36%*
>> Amean     fault-both-24    14911.65 (   0.00%)    11619.55 *  22.08%*
>> Amean     fault-both-30    14954.67 (   0.00%)    14925.66 *   0.19%*
>> Amean     fault-both-32    16654.87 (   0.00%)    15580.31 *   6.45%*
>>
> 
> Seems a sensible change with good results (I'll conceal how little of
> the stats I understand, I expect everyone else understands them: in my
> naivety, I'm mainly curious why rmap_walk's 1.23% didn't get a + on it).

TBH, I also don't know why the rmap_walk didn't get a + on it, let me 
check it again.

>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>> Changes from v1:
>>   - Use separate flags in __migrate_folio_record() to avoid packing flags
>> at each call site, per Ying.
>> ---
>>   mm/migrate.c | 47 +++++++++++++++++++++++++++++++++++------------
>>   1 file changed, 35 insertions(+), 12 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 125194f5af0f..fac96139dbba 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1027,22 +1027,39 @@ union migration_ptr {
>>   	struct anon_vma *anon_vma;
>>   	struct address_space *mapping;
>>   };
>> +
>> +enum {
>> +	PAGE_WAS_MAPPED = 1 << 0,
>> +	PAGE_WAS_MLOCKED = 1 << 1,
>> +};
>> +
> 
> I was whispering to myself "I bet someone will suggest BIT()";
> and indeed that someone has turned out to be Huang, Ying.

Sure.

> 
>>   static void __migrate_folio_record(struct folio *dst,
>> -				   unsigned long page_was_mapped,
>> +				   unsigned int page_was_mapped,
>> +				   unsigned int page_was_mlocked,
>>   				   struct anon_vma *anon_vma)
>>   {
>>   	union migration_ptr ptr = { .anon_vma = anon_vma };
>> +	unsigned long page_flags = 0;
> 
> Huang, Ying preferred a different name, me too: old_page_state?

OK, sounds better to me.

> 
>> +
>> +	if (page_was_mapped)
>> +		page_flags |= PAGE_WAS_MAPPED;
>> +	if (page_was_mlocked)
>> +		page_flags |= PAGE_WAS_MLOCKED;
> 
> What's annoying me about the patch is all this mix of page_was_mapped and
> page_was_mlocked variables, then the old_page_state bits.  Can't it be
> done with PAGE_WAS_ bits in old_page_state throughout, without any
> page_was_mapped and page_was_mlocked variables?

Yes, good point. Let me try it; something like the sketch below. Thanks for your comments.
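
A rough sketch (keeping the PAGE_WAS_* bits in a single old_page_state
throughout):

	__migrate_folio_extract(dst, &old_page_state, &anon_vma);
	...
	/* drain only when the source page was mlocked */
	folio_add_lru(dst);
	if (old_page_state & PAGE_WAS_MLOCKED)
		lru_add_drain();

	if (old_page_state & PAGE_WAS_MAPPED)
		remove_migration_ptes(src, dst, false);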
  

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 125194f5af0f..fac96139dbba 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1027,22 +1027,39 @@  union migration_ptr {
 	struct anon_vma *anon_vma;
 	struct address_space *mapping;
 };
+
+enum {
+	PAGE_WAS_MAPPED = 1 << 0,
+	PAGE_WAS_MLOCKED = 1 << 1,
+};
+
 static void __migrate_folio_record(struct folio *dst,
-				   unsigned long page_was_mapped,
+				   unsigned int page_was_mapped,
+				   unsigned int page_was_mlocked,
 				   struct anon_vma *anon_vma)
 {
 	union migration_ptr ptr = { .anon_vma = anon_vma };
+	unsigned long page_flags = 0;
+
+	if (page_was_mapped)
+		page_flags |= PAGE_WAS_MAPPED;
+	if (page_was_mlocked)
+		page_flags |= PAGE_WAS_MLOCKED;
 	dst->mapping = ptr.mapping;
-	dst->private = (void *)page_was_mapped;
+	dst->private = (void *)page_flags;
 }
 
 static void __migrate_folio_extract(struct folio *dst,
 				   int *page_was_mappedp,
+				   int *page_was_mlocked,
 				   struct anon_vma **anon_vmap)
 {
 	union migration_ptr ptr = { .mapping = dst->mapping };
+	unsigned long page_flags = (unsigned long)dst->private;
+
 	*anon_vmap = ptr.anon_vma;
-	*page_was_mappedp = (unsigned long)dst->private;
+	*page_was_mappedp = page_flags & PAGE_WAS_MAPPED ? 1 : 0;
+	*page_was_mlocked = page_flags & PAGE_WAS_MLOCKED ? 1 : 0;
 	dst->mapping = NULL;
 	dst->private = NULL;
 }
@@ -1103,7 +1120,7 @@  static int migrate_folio_unmap(new_folio_t get_new_folio,
 {
 	struct folio *dst;
 	int rc = -EAGAIN;
-	int page_was_mapped = 0;
+	int page_was_mapped = 0, page_was_mlocked = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__folio_test_movable(src);
 	bool locked = false;
@@ -1157,6 +1174,7 @@  static int migrate_folio_unmap(new_folio_t get_new_folio,
 		folio_lock(src);
 	}
 	locked = true;
+	page_was_mlocked = folio_test_mlocked(src);
 
 	if (folio_test_writeback(src)) {
 		/*
@@ -1206,7 +1224,8 @@  static int migrate_folio_unmap(new_folio_t get_new_folio,
 	dst_locked = true;
 
 	if (unlikely(!is_lru)) {
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		__migrate_folio_record(dst, page_was_mapped,
+				       page_was_mlocked, anon_vma);
 		return MIGRATEPAGE_UNMAP;
 	}
 
@@ -1236,7 +1255,8 @@  static int migrate_folio_unmap(new_folio_t get_new_folio,
 	}
 
 	if (!folio_mapped(src)) {
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		__migrate_folio_record(dst, page_was_mapped,
+				       page_was_mlocked, anon_vma);
 		return MIGRATEPAGE_UNMAP;
 	}
 
@@ -1261,12 +1281,13 @@  static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 			      struct list_head *ret)
 {
 	int rc;
-	int page_was_mapped = 0;
+	int page_was_mapped = 0, page_was_mlocked = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__folio_test_movable(src);
 	struct list_head *prev;
 
-	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
+	__migrate_folio_extract(dst, &page_was_mapped,
+				&page_was_mlocked, &anon_vma);
 	prev = dst->lru.prev;
 	list_del(&dst->lru);
 
@@ -1287,7 +1308,7 @@  static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 * isolated from the unevictable LRU: but this case is the easiest.
 	 */
 	folio_add_lru(dst);
-	if (page_was_mapped)
+	if (page_was_mlocked)
 		lru_add_drain();
 
 	if (page_was_mapped)
@@ -1322,7 +1343,8 @@  static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 */
 	if (rc == -EAGAIN) {
 		list_add(&dst->lru, prev);
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		__migrate_folio_record(dst, page_was_mapped,
+				       page_was_mlocked, anon_vma);
 		return rc;
 	}
 
@@ -1799,10 +1821,11 @@  static int migrate_pages_batch(struct list_head *from,
 	dst = list_first_entry(&dst_folios, struct folio, lru);
 	dst2 = list_next_entry(dst, lru);
 	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int page_was_mapped = 0;
+		int page_was_mapped = 0, page_was_mlocked = 0;
 		struct anon_vma *anon_vma = NULL;
 
-		__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
+		__migrate_folio_extract(dst, &page_was_mapped,
+					&page_was_mlocked, &anon_vma);
 		migrate_folio_undo_src(folio, page_was_mapped, anon_vma,
 				       true, ret_folios);
 		list_del(&dst->lru);