[v4,2/9] mm/hugetlb: Don't wait for migration entry during follow page

Message ID: 20221216155100.2043537-3-peterx@redhat.com
State: New
Series: [v4,1/9] mm/hugetlb: Let vma_offset_start() to return start

Commit Message

Peter Xu Dec. 16, 2022, 3:50 p.m. UTC
That's what the code does with !hugetlb pages, so we should logically do
the same for hugetlb: a migration entry will also be treated as no page.

This is probably also the last piece in the follow_page code that may
sleep; the previous one was removed in cf994dd8af27 ("mm/gup: remove
FOLL_MIGRATION", 2022-11-16).

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 11 -----------
 1 file changed, 11 deletions(-)
  

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0dfe441f9f4d..8ccd55f9fbd3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6380,7 +6380,6 @@  struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;
 
-retry:
 	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
 	if (!pte)
 		return NULL;
@@ -6403,16 +6402,6 @@  struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 			page = NULL;
 			goto out;
 		}
-	} else {
-		if (is_hugetlb_entry_migration(entry)) {
-			spin_unlock(ptl);
-			__migration_entry_wait_huge(pte, ptl);
-			goto retry;
-		}
-		/*
-		 * hwpoisoned entry is treated as no_page_table in
-		 * follow_page_mask().
-		 */
 	}
 out:
 	spin_unlock(ptl);
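
For context, a minimal caller-side sketch of what this change means (not
taken from this patch or series; every helper except follow_page() is a
hypothetical stand-in): once hugetlb_follow_page_mask() stops waiting on
migration entries, a migration entry simply looks like "no page" to the
caller, the same as for !hugetlb mappings. A GUP-style caller is then
expected to fault the address in, and it is the fault path that sleeps
until migration completes, after which the lookup can be retried.

	/*
	 * Hypothetical sketch, not kernel code from this series: with the
	 * retry loop gone from hugetlb_follow_page_mask(), a hugetlb
	 * migration entry is reported as NULL ("no page"), and the caller
	 * resolves it through the fault path, which is where any waiting
	 * on the migration entry happens.
	 */
	static struct page *lookup_page_or_fault(struct vm_area_struct *vma,
						 unsigned long addr,
						 unsigned int foll_flags)
	{
		struct page *page;

		for (;;) {
			page = follow_page(vma, addr, foll_flags);
			if (page)
				return page;

			/*
			 * No page mapped right now (possibly a migration
			 * entry).  fault_in_address() is a made-up stand-in
			 * for the real fault path (e.g. handle_mm_fault()),
			 * which sleeps until migration completes; retry the
			 * lookup afterwards.
			 */
			if (fault_in_address(vma, addr, foll_flags))
				return NULL;	/* fault failed, give up */
		}
	}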