[RFC,3/4] mm/compaction: optimize >0 order folio compaction by sorting source pages.

Message ID 20230912162815.440749-4-zi.yan@sent.com
State New
Series Enable >0 order folio memory compaction

Commit Message

Zi Yan Sept. 12, 2023, 4:28 p.m. UTC
  From: Zi Yan <ziy@nvidia.com>

Sort the source folios on cc->migratepages from highest to lowest order before
migration. This should maximize high order free page use and minimize free page
splits. It might be useful before free page merging is implemented.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/compaction.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
  

Comments

Zi Yan Sept. 12, 2023, 8:31 p.m. UTC | #1
On 12 Sep 2023, at 13:56, Johannes Weiner wrote:

> On Tue, Sep 12, 2023 at 12:28:14PM -0400, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> It should maximize high order free page use and minimize free page splits.
>> It might be useful before free page merging is implemented.
>
> The premise sounds reasonable to me: start with the largest chunks in
> the hope of producing the desired block size before having to piece
> things together from the order-0s dribbles.
>
>> @@ -145,6 +145,38 @@ static void sort_free_pages(struct list_head *src, struct free_list *dst)
>>  	}
>>  }
>>
>> +static void sort_folios_by_order(struct list_head *pages)
>> +{
>> +	struct free_list page_list[MAX_ORDER + 1];
>> +	int order;
>> +	struct folio *folio, *next;
>> +
>> +	for (order = 0; order <= MAX_ORDER; order++) {
>> +		INIT_LIST_HEAD(&page_list[order].pages);
>> +		page_list[order].nr_free = 0;
>> +	}
>> +
>> +	list_for_each_entry_safe(folio, next, pages, lru) {
>> +		order = folio_order(folio);
>> +
>> +		if (order > MAX_ORDER)
>> +			continue;
>> +
>> +		list_move(&folio->lru, &page_list[order].pages);
>> +		page_list[order].nr_free++;
>> +	}
>> +
>> +	for (order = MAX_ORDER; order >= 0; order--) {
>> +		if (page_list[order].nr_free) {
>> +
>> +			list_for_each_entry_safe(folio, next,
>> +						 &page_list[order].pages, lru) {
>> +				list_move_tail(&folio->lru, pages);
>> +			}
>> +		}
>> +	}
>> +}
>> +
>>  #ifdef CONFIG_COMPACTION
>>  bool PageMovable(struct page *page)
>>  {
>> @@ -2636,6 +2668,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
>>  				pageblock_start_pfn(cc->migrate_pfn - 1));
>>  		}
>>
>> +		sort_folios_by_order(&cc->migratepages);
>
> Would it make sense to have isolate_migratepages_block() produce a
> sorted list already? By collecting into a struct free_list in there
> and finishing with that `for (order = MAX...) list_add_tail()' loop.
>
> That would save quite a bit of shuffling around. Compaction can be
> hot, and is expected to get hotter with growing larger order pressure.

Yes, that sounds reasonable. Will do that in the next version.
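Roughly, I picture something like the below (untested sketch, not the actual
change; it assumes the struct free_list from patch 2/4 with its .pages list
head and .nr_free counter, and the helper names are made up):

	static void init_order_buckets(struct free_list *buckets)
	{
		int order;

		for (order = 0; order <= MAX_ORDER; order++) {
			INIT_LIST_HEAD(&buckets[order].pages);
			buckets[order].nr_free = 0;
		}
	}

	/* Called where isolate_migratepages_block() queues an isolated folio. */
	static void bucket_isolated_folio(struct free_list *buckets,
					  struct folio *folio)
	{
		int order = folio_order(folio);

		list_add_tail(&folio->lru, &buckets[order].pages);
		buckets[order].nr_free++;
	}

	/* After isolation: emit folios onto cc->migratepages, largest first. */
	static void splice_buckets_desc(struct free_list *buckets,
					struct list_head *dst)
	{
		int order;

		for (order = MAX_ORDER; order >= 0; order--)
			list_splice_tail_init(&buckets[order].pages, dst);
	}

That keeps a single pass over the isolated folios and avoids the extra
shuffle in compact_zone().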

>
> The contig allocator doesn't care about ordering, but it should be
> possible to gate the sorting reasonably on !cc->alloc_contig.

Right. For !cc->alloc_contig, pages are put in struct free_list and
later sorted and moved to cc->migratepages. For cc->alloc_contig,
pages are directly put on cc->migratepages.
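Something like (sketch only, reusing the made-up bucket helper from above):

		/* alloc_contig does not care about ordering, skip the buckets */
		if (cc->alloc_contig)
			list_add(&folio->lru, &cc->migratepages);
		else
			bucket_isolated_folio(buckets, folio);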

>
> An optimization down the line could be to skip the sorted list
> assembly for the compaction case entirely, have compact_zone() work
> directly on struct free_list, starting with the highest order and
> checking compact_finished() in between orders.

Sounds reasonable. It actually makes me think more and realize that sorting
source pages might not be optimal all the time. In general, migrating higher
order folios first would generate larger free spaces, which might meet the
compaction goal faster. But in some cases, free pages of the target order,
e.g. order 4, can be generated by migrating one order-3 folio plus eight
order-0 pages around it, so migrating all the high order folios first might
waste effort. I guess there will be a lot of possible optimizations for
different situations. :)
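For the record, the per-order loop you describe is roughly how I read it
(very rough sketch; error handling and putback of folios left in lower order
buckets are omitted, and the buckets would have to live somewhere like
struct compact_control):

	for (order = MAX_ORDER; order >= 0; order--) {
		if (!buckets[order].nr_free)
			continue;

		err = migrate_pages(&buckets[order].pages, compaction_alloc,
				compaction_free, (unsigned long)cc, cc->mode,
				MR_COMPACTION, &nr_succeeded);

		/* stop early once the requested order can be satisfied */
		if (compact_finished(cc) != COMPACT_CONTINUE)
			break;
	}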


--
Best Regards,
Yan, Zi
  

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index 45747ab5f380..4300d877b824 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -145,6 +145,38 @@  static void sort_free_pages(struct list_head *src, struct free_list *dst)
 	}
 }
 
+static void sort_folios_by_order(struct list_head *pages)
+{
+	struct free_list page_list[MAX_ORDER + 1];
+	int order;
+	struct folio *folio, *next;
+
+	for (order = 0; order <= MAX_ORDER; order++) {
+		INIT_LIST_HEAD(&page_list[order].pages);
+		page_list[order].nr_free = 0;
+	}
+
+	list_for_each_entry_safe(folio, next, pages, lru) {
+		order = folio_order(folio);
+
+		if (order > MAX_ORDER)
+			continue;
+
+		list_move(&folio->lru, &page_list[order].pages);
+		page_list[order].nr_free++;
+	}
+
+	for (order = MAX_ORDER; order >= 0; order--) {
+		if (page_list[order].nr_free) {
+
+			list_for_each_entry_safe(folio, next,
+						 &page_list[order].pages, lru) {
+				list_move_tail(&folio->lru, pages);
+			}
+		}
+	}
+}
+
 #ifdef CONFIG_COMPACTION
 bool PageMovable(struct page *page)
 {
@@ -2636,6 +2668,8 @@  compact_zone(struct compact_control *cc, struct capture_control *capc)
 				pageblock_start_pfn(cc->migrate_pfn - 1));
 		}
 
+		sort_folios_by_order(&cc->migratepages);
+
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
 				compaction_free, (unsigned long)cc, cc->mode,
 				MR_COMPACTION, &nr_succeeded);