[1/4] mm, compaction: Rename compact_control->rescan to finish_pageblock

Message ID 20230125134434.18017-2-mgorman@techsingularity.net
State New
Series Fix excessive CPU usage during compaction

Commit Message

Mel Gorman Jan. 25, 2023, 1:44 p.m. UTC
The rescan field was not well named, albeit accurate at the time. Rename the
field to finish_pageblock to indicate that the remainder of the pageblock
should be scanned regardless of COMPACT_CLUSTER_MAX. The intent is that
pageblocks with transient failures get marked for skipping to avoid
revisiting the same pageblock.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/compaction.c | 24 ++++++++++++------------
 mm/internal.h   |  6 +++++-
 2 files changed, 17 insertions(+), 13 deletions(-)
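
For readers skimming the diff below, here is a minimal, standalone model of the check the renamed flag feeds into. It is not the kernel code: the struct is reduced to the three fields involved, COMPACT_CLUSTER_MAX is hard-coded to 32 (the kernel derives it from SWAP_CLUSTER_MAX), and stop_isolating()/main() are invented purely for illustration.

/*
 * Minimal model of the early-exit check in isolate_migratepages_block().
 * NOT kernel code: the struct is reduced to the fields used here, and
 * stop_isolating()/main() exist only for this sketch.
 */
#include <stdbool.h>
#include <stdio.h>

#define COMPACT_CLUSTER_MAX 32	/* stands in for the kernel constant (SWAP_CLUSTER_MAX) */

struct compact_control {
	unsigned int nr_migratepages;	/* pages isolated for migration so far */
	bool contended;			/* lock contention was signalled */
	bool finish_pageblock;		/* scan the remainder of the pageblock */
};

/*
 * Mirrors the condition changed by the patch: isolation normally stops once
 * COMPACT_CLUSTER_MAX pages are gathered, unless the whole pageblock must be
 * finished (a previous pass hit transient failures) or the lock is contended.
 */
static bool stop_isolating(const struct compact_control *cc)
{
	return cc->nr_migratepages >= COMPACT_CLUSTER_MAX &&
	       !cc->finish_pageblock && !cc->contended;
}

int main(void)
{
	struct compact_control cc = { .nr_migratepages = COMPACT_CLUSTER_MAX };

	printf("ordinary scan stops early:       %d\n", stop_isolating(&cc));

	cc.finish_pageblock = true;	/* set by compact_zone() when the same block is revisited */
	printf("finish_pageblock keeps scanning: %d\n", stop_isolating(&cc));
	return 0;
}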
  

Comments

Vlastimil Babka Feb. 7, 2023, 4:22 p.m. UTC | #1
On 1/25/23 14:44, Mel Gorman wrote:
> The rescan field was not well named, albeit accurate at the time. Rename the
> field to finish_pageblock to indicate that the remainder of the pageblock
> should be scanned regardless of COMPACT_CLUSTER_MAX. The intent is that
> pageblocks with transient failures get marked for skipping to avoid
> revisiting the same pageblock.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
  

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index ca1603524bbe..c018b0e65720 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1102,12 +1102,12 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/*
 		 * Avoid isolating too much unless this block is being
-		 * rescanned (e.g. dirty/writeback pages, parallel allocation)
+		 * fully scanned (e.g. dirty/writeback pages, parallel allocation)
 		 * or a lock is contended. For contention, isolate quickly to
 		 * potentially remove one source of contention.
 		 */
 		if (cc->nr_migratepages >= COMPACT_CLUSTER_MAX &&
-		    !cc->rescan && !cc->contended) {
+		    !cc->finish_pageblock && !cc->contended) {
 			++low_pfn;
 			break;
 		}
@@ -1172,14 +1172,14 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	}
 
 	/*
-	 * Updated the cached scanner pfn once the pageblock has been scanned
+	 * Update the cached scanner pfn once the pageblock has been scanned.
 	 * Pages will either be migrated in which case there is no point
 	 * scanning in the near future or migration failed in which case the
 	 * failure reason may persist. The block is marked for skipping if
 	 * there were no pages isolated in the block or if the block is
 	 * rescanned twice in a row.
 	 */
-	if (low_pfn == end_pfn && (!nr_isolated || cc->rescan)) {
+	if (low_pfn == end_pfn && (!nr_isolated || cc->finish_pageblock)) {
 		if (valid_page && !skip_updated)
 			set_pageblock_skip(valid_page);
 		update_cached_migrate(cc, low_pfn);
@@ -2374,17 +2374,17 @@  compact_zone(struct compact_control *cc, struct capture_control *capc)
 		unsigned long iteration_start_pfn = cc->migrate_pfn;
 
 		/*
-		 * Avoid multiple rescans which can happen if a page cannot be
-		 * isolated (dirty/writeback in async mode) or if the migrated
-		 * pages are being allocated before the pageblock is cleared.
-		 * The first rescan will capture the entire pageblock for
-		 * migration. If it fails, it'll be marked skip and scanning
-		 * will proceed as normal.
+		 * Avoid multiple rescans of the same pageblock which can
+		 * happen if a page cannot be isolated (dirty/writeback in
+		 * async mode) or if the migrated pages are being allocated
+		 * before the pageblock is cleared.  The first rescan will
+		 * capture the entire pageblock for migration. If it fails,
+		 * it'll be marked skip and scanning will proceed as normal.
 		 */
-		cc->rescan = false;
+		cc->finish_pageblock = false;
 		if (pageblock_start_pfn(last_migrated_pfn) ==
 		    pageblock_start_pfn(iteration_start_pfn)) {
-			cc->rescan = true;
+			cc->finish_pageblock = true;
 		}
 
 		switch (isolate_migratepages(cc)) {
diff --git a/mm/internal.h b/mm/internal.h
index bcf75a8b032d..21466d0ab22f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -422,7 +422,11 @@  struct compact_control {
 	bool proactive_compaction;	/* kcompactd proactive compaction */
 	bool whole_zone;		/* Whole zone should/has been scanned */
 	bool contended;			/* Signal lock contention */
-	bool rescan;			/* Rescanning the same pageblock */
+	bool finish_pageblock;		/* Scan the remainder of a pageblock. Used
+					 * when there are potentially transient
+					 * isolation or migration failures to
+					 * ensure forward progress.
+					 */
 	bool alloc_contig;		/* alloc_contig_range allocation */
 };
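
For completeness, a similarly hedged sketch of the other half of the change: how compact_zone() decides that the same pageblock is being revisited and therefore sets finish_pageblock. Again, this is not the kernel code: pageblock_nr_pages is hard-coded to 512 (a typical value for 4 KiB pages and 2 MiB pageblocks), should_finish_pageblock() is an invented helper, and the pfn values in main() are made up.

/*
 * Standalone sketch of the finish_pageblock decision in compact_zone(),
 * per the hunk above.  NOT kernel code: pageblock_nr_pages is fixed at 512
 * and the pfns below are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

#define pageblock_nr_pages	512UL
#define pageblock_start_pfn(pfn)	((pfn) & ~(pageblock_nr_pages - 1))

/*
 * If the last successfully migrated page and the pfn this iteration starts
 * from fall in the same pageblock, the block is being revisited, so the
 * remainder of it should be scanned in full this time.
 */
static bool should_finish_pageblock(unsigned long last_migrated_pfn,
				    unsigned long iteration_start_pfn)
{
	return pageblock_start_pfn(last_migrated_pfn) ==
	       pageblock_start_pfn(iteration_start_pfn);
}

int main(void)
{
	/* hypothetical pfns: 0x1000 and 0x10a0 share a 512-page pageblock */
	printf("%d\n", should_finish_pageblock(0x1000, 0x10a0));	/* prints 1 */
	printf("%d\n", should_finish_pageblock(0x1000, 0x1300));	/* prints 0 */
	return 0;
}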