[2/4] mm/migrate: Unify and retry an unstable pmd when hit

Message ID 20230602230552.350731-3-peterx@redhat.com
Series mm: Fix pmd_trans_unstable() call sites on retry

Commit Message

Peter Xu June 2, 2023, 11:05 p.m. UTC
  There's one pmd_bad() check, but it would be better to use
pmd_trans_unstable(), which clears a bad pmd via pmd_clear_bad() as part of
its check.

And that alone is not enough, because a THP can be inserted concurrently
before the pmd_bad() check is reached, so the pmd can be !bad yet be a THP,
in which case walking the pte level would be illegal.
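
For context, a rough sketch of what the unstable check covers (paraphrased,
not the exact pgtable.h code; helper name made up for illustration):

	/*
	 * Sketch: treat none, THP and bad pmds as unstable, clearing a bad
	 * pmd as a side effect.  When this returns true, the caller must
	 * not walk the pte level under this pmd.
	 */
	static inline int pmd_unstable_sketch(pmd_t *pmd)
	{
		pmd_t val = pmdp_get_lockless(pmd);	/* lockless snapshot */

		if (pmd_none(val) || pmd_trans_huge(val))
			return 1;	/* empty or THP: no pte table to walk */
		if (unlikely(pmd_bad(val))) {
			pmd_clear_bad(pmd);	/* what a bare pmd_bad() misses */
			return 1;
		}
		return 0;
	}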

There is one case, though, where the function already used
pmd_trans_unstable(), but only after a pmd split.  Merge the two checks into
one, and when the pmd turns out to be unstable, retry the whole pmd.
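
The resulting shape in a pagewalk pmd_entry callback looks roughly like the
following (simplified sketch, callback name made up; the pagewalk core
re-invokes the callback for the same range when ACTION_AGAIN is set):

	/* Sketch of the retry pattern for an unstable pmd. */
	static int example_pmd_entry(pmd_t *pmdp, unsigned long start,
				     unsigned long end, struct mm_walk *walk)
	{
		/* ... possible THP handling / split_huge_pmd() above ... */

		if (unlikely(pmd_trans_unstable(pmdp))) {
			walk->action = ACTION_AGAIN;	/* re-walk this pmd */
			return 0;
		}

		/* pmd is stable: safe to pte_offset_map_lock() the ptes */
		return 0;
	}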

Cc: Alistair Popple <apopple@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/migrate_device.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
  

Patch

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index d30c9de60b0d..6fc54c053c05 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -83,9 +83,6 @@  static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		if (is_huge_zero_page(page)) {
 			spin_unlock(ptl);
 			split_huge_pmd(vma, pmdp, addr);
-			if (pmd_trans_unstable(pmdp))
-				return migrate_vma_collect_skip(start, end,
-								walk);
 		} else {
 			int ret;
 
@@ -106,8 +103,10 @@  static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		}
 	}
 
-	if (unlikely(pmd_bad(*pmdp)))
-		return migrate_vma_collect_skip(start, end, walk);
+	if (unlikely(pmd_trans_unstable(pmdp))) {
+		walk->action = ACTION_AGAIN;
+		return 0;
+	}
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
 	arch_enter_lazy_mmu_mode();