[v1,1/4] mm/compaction: enable compacting >0 order folios.

Message ID 20231113170157.280181-2-zi.yan@sent.com
State New
Series Enable >0 order folio memory compaction

Commit Message

Zi Yan Nov. 13, 2023, 5:01 p.m. UTC
  From: Zi Yan <ziy@nvidia.com>

migrate_pages() supports >0 order folio migration, and during compaction,
even if compaction_alloc() cannot provide >0 order free pages,
migrate_pages() can split the source folio and try to migrate the
resulting base pages. This can serve as a baseline and starting point
for adding support for compacting >0 order folios.

Suggested-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/compaction.c | 57 ++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 15 deletions(-)
  

Comments

Matthew Wilcox Nov. 13, 2023, 6:30 p.m. UTC | #1
On Mon, Nov 13, 2023 at 12:01:54PM -0500, Zi Yan wrote:
> +	/* this makes migrate_pages() split the source page and retry */
> +	if (folio_order(src) > 0)
> +		return NULL;

Nit: folio_test_large() is more efficient than folio_order() > 0.
The former simply tests the bit, while the second tests the bit, then
loads folio->_order to check it's >0.  We know it will be, but there's
no way to tell gcc that if the bit is set, this value is definitely not 0.
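
Roughly, the two helpers simplified (illustrative definitions, not
verbatim from the headers; field name as above):

	static inline bool folio_test_large(struct folio *folio)
	{
		/* a single page-flag test */
		return test_bit(PG_head, &folio->flags);
	}

	static inline unsigned int folio_order(struct folio *folio)
	{
		if (!folio_test_large(folio))
			return 0;
		/* the extra load gcc cannot prove is non-zero */
		return folio->_order;
	}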
  
Zi Yan Nov. 13, 2023, 7:22 p.m. UTC | #2
On 13 Nov 2023, at 13:30, Matthew Wilcox wrote:

> On Mon, Nov 13, 2023 at 12:01:54PM -0500, Zi Yan wrote:
>> +	/* this makes migrate_pages() split the source page and retry */
>> +	if (folio_order(src) > 0)
>> +		return NULL;
>
> Nit: folio_test_large() is more efficient than folio_order() > 0.
> The former simply tests the bit, while the second tests the bit, then
> loads folio->_order to check it's >0.  We know it will be, but there's
> no way to tell gcc that if the bit is set, this value is definitely not 0.

Got it. Makes sense. Will change it in the next version. Thanks.
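
That is, something like:

	/* this makes migrate_pages() split the source folio and retry */
	if (folio_test_large(src))
		return NULL;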

--
Best Regards,
Yan, Zi
  
Baolin Wang Nov. 20, 2023, 9:18 a.m. UTC | #3
On 11/14/2023 1:01 AM, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
> 
> migrate_pages() supports >0 order folio migration and during compaction,
> even if compaction_alloc() cannot provide >0 order free pages,
> migrate_pages() can split the source page and try to migrate the base pages
> from the split. It can be a baseline and start point for adding support for
> compacting >0 order folios.
> 
> Suggested-by: Huang Ying <ying.huang@intel.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>   mm/compaction.c | 57 ++++++++++++++++++++++++++++++++++++-------------
>   1 file changed, 42 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 01ba298739dd..5217dd35b493 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
>   	return too_many;
>   }
>   
> +/*
> + * 1. if the page order is larger than or equal to target_order (i.e.,
> + * cc->order and when it is not -1 for global compaction), skip it since
> + * target_order already indicates no free page with larger than target_order
> + * exists and later migrating it will most likely fail;
> + *
> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,
> + * skip them;
> + */
> +static bool skip_isolation_on_order(int order, int target_order)
> +{
> +	return (target_order != -1 && order >= target_order) ||
> +		order >= pageblock_order;
> +}
> +
>   /**
>    * isolate_migratepages_block() - isolate all migrate-able pages within
>    *				  a single pageblock
> @@ -1009,7 +1024,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>   		/*
>   		 * Regardless of being on LRU, compound pages such as THP and
>   		 * hugetlbfs are not to be compacted unless we are attempting
> -		 * an allocation much larger than the huge page size (eg CMA).
> +		 * an allocation larger than the compound page size.
>   		 * We can potentially save a lot of iterations if we skip them
>   		 * at once. The check is racy, but we can consider only valid
>   		 * values and the only danger is skipping too much.
> @@ -1017,11 +1032,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>   		if (PageCompound(page) && !cc->alloc_contig) {
>   			const unsigned int order = compound_order(page);
>   
> -			if (likely(order <= MAX_ORDER)) {
> -				low_pfn += (1UL << order) - 1;
> -				nr_scanned += (1UL << order) - 1;
> +			/*
> +			 * Skip based on page order and compaction target order
> +			 * and skip hugetlbfs pages.
> +			 */
> +			if (skip_isolation_on_order(order, cc->order) ||
> +			    PageHuge(page)) {
> +				if (order <= MAX_ORDER) {
> +					low_pfn += (1UL << order) - 1;
> +					nr_scanned += (1UL << order) - 1;
> +				}
> +				goto isolate_fail;
>   			}
> -			goto isolate_fail;
>   		}
>   
>   		/*
> @@ -1144,17 +1166,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>   					goto isolate_abort;
>   				}
>   			}
> +		}
>   
> -			/*
> -			 * folio become large since the non-locked check,
> -			 * and it's on LRU.
> -			 */
> -			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
> -				low_pfn += folio_nr_pages(folio) - 1;
> -				nr_scanned += folio_nr_pages(folio) - 1;
> -				folio_set_lru(folio);
> -				goto isolate_fail_put;
> -			}
> +		/*
> +		 * Check LRU folio order under the lock
> +		 */
> +		if (unlikely(skip_isolation_on_order(folio_order(folio),
> +						     cc->order) &&
> +			     !cc->alloc_contig)) {
> +			low_pfn += folio_nr_pages(folio) - 1;
> +			nr_scanned += folio_nr_pages(folio) - 1;
> +			folio_set_lru(folio);
> +			goto isolate_fail_put;
>   		}

Why was this part moved out of the 'if (lruvec != locked)' block? If we 
hold the lru lock, then we do not need to check again, right?
  
Zi Yan Nov. 20, 2023, 2:05 p.m. UTC | #4
>> @@ -1144,17 +1166,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>   					goto isolate_abort;
>>   				}
>>   			}
>> +		}
>>
>> -			/*
>> -			 * folio become large since the non-locked check,
>> -			 * and it's on LRU.
>> -			 */
>> -			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
>> -				low_pfn += folio_nr_pages(folio) - 1;
>> -				nr_scanned += folio_nr_pages(folio) - 1;
>> -				folio_set_lru(folio);
>> -				goto isolate_fail_put;
>> -			}
>> +		/*
>> +		 * Check LRU folio order under the lock
>> +		 */
>> +		if (unlikely(skip_isolation_on_order(folio_order(folio),
>> +						     cc->order) &&
>> +			     !cc->alloc_contig)) {
>> +			low_pfn += folio_nr_pages(folio) - 1;
>> +			nr_scanned += folio_nr_pages(folio) - 1;
>> +			folio_set_lru(folio);
>> +			goto isolate_fail_put;
>>   		}
>
> Why was this part moved out of the 'if (lruvec != locked)' block? If we hold the lru lock, then we do not need to check again, right?

Probably I messed this up during rebase. Thank you for pointing this out.
Will fix it in the next version.
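
Roughly, the re-check moves back inside the branch that has just
(re)taken the lru lock, e.g. (a sketch, not the actual v2):

	if (lruvec != locked) {
		if (locked)
			unlock_page_lruvec_irqrestore(locked, flags);
		compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
		locked = lruvec;
		...
		/* re-check the order now that the lock is held */
		if (unlikely(skip_isolation_on_order(folio_order(folio),
						     cc->order) &&
			     !cc->alloc_contig)) {
			low_pfn += folio_nr_pages(folio) - 1;
			nr_scanned += folio_nr_pages(folio) - 1;
			folio_set_lru(folio);
			goto isolate_fail_put;
		}
	}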

--
Best Regards,
Yan, Zi
  

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index 01ba298739dd..5217dd35b493 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
 	return too_many;
 }
 
+/*
+ * 1. if the page order is larger than or equal to target_order (i.e.,
+ * cc->order and when it is not -1 for global compaction), skip it since
+ * target_order already indicates no free page with larger than target_order
+ * exists and later migrating it will most likely fail;
+ *
+ * 2. compacting > pageblock_order pages does not improve memory fragmentation,
+ * skip them;
+ */
+static bool skip_isolation_on_order(int order, int target_order)
+{
+	return (target_order != -1 && order >= target_order) ||
+		order >= pageblock_order;
+}
+
 /**
  * isolate_migratepages_block() - isolate all migrate-able pages within
  *				  a single pageblock
@@ -1009,7 +1024,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/*
 		 * Regardless of being on LRU, compound pages such as THP and
 		 * hugetlbfs are not to be compacted unless we are attempting
-		 * an allocation much larger than the huge page size (eg CMA).
+		 * an allocation larger than the compound page size.
 		 * We can potentially save a lot of iterations if we skip them
 		 * at once. The check is racy, but we can consider only valid
 		 * values and the only danger is skipping too much.
@@ -1017,11 +1032,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (PageCompound(page) && !cc->alloc_contig) {
 			const unsigned int order = compound_order(page);
 
-			if (likely(order <= MAX_ORDER)) {
-				low_pfn += (1UL << order) - 1;
-				nr_scanned += (1UL << order) - 1;
+			/*
+			 * Skip based on page order and compaction target order
+			 * and skip hugetlbfs pages.
+			 */
+			if (skip_isolation_on_order(order, cc->order) ||
+			    PageHuge(page)) {
+				if (order <= MAX_ORDER) {
+					low_pfn += (1UL << order) - 1;
+					nr_scanned += (1UL << order) - 1;
+				}
+				goto isolate_fail;
 			}
-			goto isolate_fail;
 		}
 
 		/*
@@ -1144,17 +1166,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 					goto isolate_abort;
 				}
 			}
+		}
 
-			/*
-			 * folio become large since the non-locked check,
-			 * and it's on LRU.
-			 */
-			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
-				low_pfn += folio_nr_pages(folio) - 1;
-				nr_scanned += folio_nr_pages(folio) - 1;
-				folio_set_lru(folio);
-				goto isolate_fail_put;
-			}
+		/*
+		 * Check LRU folio order under the lock
+		 */
+		if (unlikely(skip_isolation_on_order(folio_order(folio),
+						     cc->order) &&
+			     !cc->alloc_contig)) {
+			low_pfn += folio_nr_pages(folio) - 1;
+			nr_scanned += folio_nr_pages(folio) - 1;
+			folio_set_lru(folio);
+			goto isolate_fail_put;
 		}
 
 		/* The folio is taken off the LRU */
@@ -1764,6 +1787,10 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
 
+	/* this makes migrate_pages() split the source page and retry */
+	if (folio_order(src) > 0)
+		return NULL;
+
 	if (list_empty(&cc->freepages)) {
 		isolate_freepages(cc);