[v6,2/4] mm/compaction: enable compacting >0 order folios.

Message ID 20240216170432.1268753-3-zi.yan@sent.com
State New
Series [v6,1/4] mm/page_alloc: remove unused fpi_flags in free_pages_prepare()

Commit Message

Zi Yan Feb. 16, 2024, 5:04 p.m. UTC
  From: Zi Yan <ziy@nvidia.com>

migrate_pages() supports >0 order folio migration, and during compaction,
even if compaction_alloc() cannot provide >0 order free pages,
migrate_pages() can split the source folio and try to migrate the base
pages from the split.  This can serve as a baseline and starting point for
adding support for compacting >0 order folios.

Signed-off-by: Zi Yan <ziy@nvidia.com>
Suggested-by: Huang Ying <ying.huang@intel.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Adam Manzanares <a.manzanares@samsung.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/compaction.c | 66 ++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 52 insertions(+), 14 deletions(-)
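
For context, the split-and-retry fallback the commit message leans on is
roughly the following. This is an illustrative sketch only: the helper name
try_migrate_folio() is invented here, and the real logic lives in
migrate_pages_batch() in mm/migrate.c.

static int try_migrate_folio(struct folio *src, new_folio_t get_new_folio,
			     unsigned long private,
			     struct list_head *split_folios)
{
	struct folio *dst = get_new_folio(src, private);

	if (!dst) {
		/*
		 * No destination folio: split a large source folio so the
		 * caller can retry the resulting base pages individually.
		 */
		if (folio_test_large(src) &&
		    !split_folio_to_list(src, split_folios))
			return -EAGAIN;
		return -ENOMEM;
	}

	/* ... unmap src, copy its contents to dst, remap ... */
	return MIGRATEPAGE_SUCCESS;
}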
  

Comments

David Hildenbrand Feb. 20, 2024, 9:03 a.m. UTC | #1
On 16.02.24 18:04, Zi Yan wrote:
> [...]
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index cc801ce099b4..aa6aad805c4d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
>   	return too_many;
>   }
>   
> +/*


Can't you add these comments to the respective checks? Like

static bool skip_isolation_on_order(int order, int target_order)
{
	/*
	 * Unless we are performing global compaction (target_order <
	 * 0), skip any folios that are larger than the target order: we
	 * wouldn't be here if we'd have a free folio with the desired
	 * target_order, so migrating this folio would likely fail
	 * later.
	 */
	if (target_order != -1 && order >= target_order)
		return true;
	/*
	 * We limit memory compaction to pageblocks and won't try
	 * creating free blocks of memory that are larger than that.
	 */
	return order >= pageblock_order;
}

Then, add simple, expressive function documentation (if really
required) that doesn't contain all these details.

> + * 1. if the page order is larger than or equal to target_order (i.e.,
> + * cc->order and when it is not -1 for global compaction), skip it since
> + * target_order already indicates no free page with larger than target_order
> + * exists and later migrating it will most likely fail;
> + *
> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,

I'm pretty sure you meant "reduce"?

> + * skip them;
> + */
> +static bool skip_isolation_on_order(int order, int target_order)
> +{
> +	return (target_order != -1 && order >= target_order) ||
> +		order >= pageblock_order;
> +}
> [...]
>
> @@ -1788,6 +1822,10 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>   	struct compact_control *cc = (struct compact_control *)data;
>   	struct folio *dst;
>   
> +	/* this makes migrate_pages() split the source page and retry */
> +	if (folio_test_large(src) > 0)
> +		return NULL;

Why the "> 0" check? Either it's large or it isn't.

Apart from that LGTM, but I am no compaction expert.
  
David Hildenbrand Feb. 20, 2024, 9:11 a.m. UTC | #2
On 20.02.24 10:03, David Hildenbrand wrote:
> On 16.02.24 18:04, Zi Yan wrote:
>> [...]
>> +/*
> 
> 
> Can't you add these comments to the respective checks? Like
> 
> static bool skip_isolation_on_order(int order, int target_order)
> {
> 	/*
> 	 * Unless we are performing global compaction (target_order <
> 	 * 0), skip any folios that are larger than the target order: we
> 	 * wouldn't be here if we'd have a free folio with the desired
> 	 * target_order, so migrating this folio would likely fail
> 	 * later.
> 	 */
> 	if (target_order != -1 && order >= target_order)
> 		return true;

I just stumbled over "is_via_compact_memory", likely that should be used 
instead of the "!= -1" check.
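
Folding both suggestions in, the helper would presumably end up looking
something like the following — a sketch of a possible next revision, not code
actually posted in this thread; is_via_compact_memory(order) is the existing
mm/compaction.c helper that tests for order == -1:

static bool skip_isolation_on_order(int order, int target_order)
{
	/*
	 * Unless we are performing global compaction (i.e., the target
	 * order is -1), skip any folios that are larger than the target
	 * order: we wouldn't be here if we'd have a free folio with the
	 * desired target_order, so migrating this folio would likely
	 * fail later.
	 */
	if (!is_via_compact_memory(target_order) && order >= target_order)
		return true;
	/*
	 * We limit memory compaction to pageblocks and won't try
	 * creating free blocks of memory that are larger than that.
	 */
	return order >= pageblock_order;
}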
  
Zi Yan Feb. 20, 2024, 3:27 p.m. UTC | #3
On 20 Feb 2024, at 4:03, David Hildenbrand wrote:

> On 16.02.24 18:04, Zi Yan wrote:
>> [...]
>> +/*
>
>
> Can't you add these comments to the respective checks? Like
>
> static bool skip_isolation_on_order(int order, int target_order)
> {
> 	/*
> 	 * Unless we are performing global compaction (target_order <
> 	 * 0), skip any folios that are larger than the target order: we
> 	 * wouldn't be here if we'd have a free folio with the desired
> 	 * target_order, so migrating this folio would likely fail
> 	 * later.
> 	 */
> 	if (target_order != -1 && order >= target_order)
> 		return true;
> 	/*
> 	 * We limit memory compaction to pageblocks and won't try
> 	 * creating free blocks of memory that are larger than that.
> 	 */
> 	return order >= pageblock_order;
> }
>
> Then, add simple, expressive function documentation (if really required) that doesn't contain all these details.
>

OK. No problem.

>> + * 1. if the page order is larger than or equal to target_order (i.e.,
>> + * cc->order and when it is not -1 for global compaction), skip it since
>> + * target_order already indicates no free page with larger than target_order
>> + * exists and later migrating it will most likely fail;
>> + *
>> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,
>
> I'm pretty sure you meant "reduce"?

Yes.

>> [...]
>> @@ -1788,6 +1822,10 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>>   	struct compact_control *cc = (struct compact_control *)data;
>>   	struct folio *dst;
>>
>> +	/* this makes migrate_pages() split the source page and retry */
>> +	if (folio_test_large(src) > 0)
>> +		return NULL;
>
> Why the "> 0" check?
Will fix it.

> Apart from that LGTM, but I am no compaction expert.

Thanks.


--
Best Regards,
Yan, Zi
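
The fix agreed to above is presumably just dropping the redundant comparison,
since folio_test_large() already returns a boolean:

	/* this makes migrate_pages() split the source folio and retry */
	if (folio_test_large(src))
		return NULL;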
  
Zi Yan Feb. 20, 2024, 3:27 p.m. UTC | #4
On 20 Feb 2024, at 4:11, David Hildenbrand wrote:

> On 20.02.24 10:03, David Hildenbrand wrote:
>> On 16.02.24 18:04, Zi Yan wrote:
>>> [...]
>>> +/*
>>
>>
>> Can't you add these comments to the respective checks? Like
>>
>> static bool skip_isolation_on_order(int order, int target_order)
>> {
>> 	/*
>> 	 * Unless we are performing global compaction (target_order <
>> 	 * 0), skip any folios that are larger than the target order: we
>> 	 * wouldn't be here if we'd have a free folio with the desired
>> 	 * target_order, so migrating this folio would likely fail
>> 	 * later.
>> 	 */
>> 	if (target_order != -1 && order >= target_order)
>> 		return true;
>
> I just stumbled over "is_via_compact_memory", likely that should be used instead of the "!= -1" check.

Thanks. Let me use it.


--
Best Regards,
Yan, Zi
  

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index cc801ce099b4..aa6aad805c4d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
 	return too_many;
 }
 
+/*
+ * 1. if the page order is larger than or equal to target_order (i.e.,
+ * cc->order and when it is not -1 for global compaction), skip it since
+ * target_order already indicates no free page with larger than target_order
+ * exists and later migrating it will most likely fail;
+ *
+ * 2. compacting > pageblock_order pages does not improve memory fragmentation,
+ * skip them;
+ */
+static bool skip_isolation_on_order(int order, int target_order)
+{
+	return (target_order != -1 && order >= target_order) ||
+		order >= pageblock_order;
+}
+
 /**
  * isolate_migratepages_block() - isolate all migrate-able pages within
  *				  a single pageblock
@@ -947,7 +962,22 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			valid_page = page;
 		}
 
-		if (PageHuge(page) && cc->alloc_contig) {
+		if (PageHuge(page)) {
+			/*
+			 * skip hugetlbfs if we are not compacting for pages
+			 * bigger than its order. THPs and other compound pages
+			 * are handled below.
+			 */
+			if (!cc->alloc_contig) {
+				const unsigned int order = compound_order(page);
+
+				if (order <= MAX_PAGE_ORDER) {
+					low_pfn += (1UL << order) - 1;
+					nr_scanned += (1UL << order) - 1;
+				}
+				goto isolate_fail;
+			}
+			/* for alloc_contig case */
 			if (locked) {
 				unlock_page_lruvec_irqrestore(locked, flags);
 				locked = NULL;
@@ -1008,21 +1038,24 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		}
 
 		/*
-		 * Regardless of being on LRU, compound pages such as THP and
-		 * hugetlbfs are not to be compacted unless we are attempting
-		 * an allocation much larger than the huge page size (eg CMA).
-		 * We can potentially save a lot of iterations if we skip them
-		 * at once. The check is racy, but we can consider only valid
-		 * values and the only danger is skipping too much.
+		 * Regardless of being on LRU, compound pages such as THP
+		 * (hugetlbfs is handled above) are not to be compacted unless
+		 * we are attempting an allocation larger than the compound
+		 * page size. We can potentially save a lot of iterations if we
+		 * skip them at once. The check is racy, but we can consider
+		 * only valid values and the only danger is skipping too much.
 		 */
 		if (PageCompound(page) && !cc->alloc_contig) {
 			const unsigned int order = compound_order(page);
 
-			if (likely(order <= MAX_PAGE_ORDER)) {
-				low_pfn += (1UL << order) - 1;
-				nr_scanned += (1UL << order) - 1;
+			/* Skip based on page order and compaction target order. */
+			if (skip_isolation_on_order(order, cc->order)) {
+				if (order <= MAX_PAGE_ORDER) {
+					low_pfn += (1UL << order) - 1;
+					nr_scanned += (1UL << order) - 1;
+				}
+				goto isolate_fail;
 			}
-			goto isolate_fail;
 		}
 
 		/*
@@ -1165,10 +1198,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			}
 
 			/*
-			 * folio become large since the non-locked check,
-			 * and it's on LRU.
+			 * Check LRU folio order under the lock
 			 */
-			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
+			if (unlikely(skip_isolation_on_order(folio_order(folio),
+							     cc->order) &&
+				     !cc->alloc_contig)) {
 				low_pfn += folio_nr_pages(folio) - 1;
 				nr_scanned += folio_nr_pages(folio) - 1;
 				folio_set_lru(folio);
@@ -1788,6 +1822,10 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
 
+	/* this makes migrate_pages() split the source page and retry */
+	if (folio_test_large(src) > 0)
+		return NULL;
+
 	if (list_empty(&cc->freepages)) {
 		isolate_freepages(cc);
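
The final hunk is truncated in this capture. With the patch applied, and
assuming the "> 0" fix from the review above is folded in, compaction_alloc()
plausibly reads as follows; the lines beyond the posted context are
reconstructed from mm/compaction.c of that period and are an assumption, not
part of the posted diff:

static struct folio *compaction_alloc(struct folio *src, unsigned long data)
{
	struct compact_control *cc = (struct compact_control *)data;
	struct folio *dst;

	/* this makes migrate_pages() split the source folio and retry */
	if (folio_test_large(src))
		return NULL;

	if (list_empty(&cc->freepages)) {
		isolate_freepages(cc);

		if (list_empty(&cc->freepages))
			return NULL;
	}

	/* hand the next isolated free page to migrate_pages() as the target */
	dst = list_entry(cc->freepages.next, struct folio, lru);
	list_del(&dst->lru);
	cc->nr_freepages--;

	return dst;
}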