mm/rmap: convert __page_check_anon_rmap() to folio

Message ID 20230915101731.1725986-1-yajun.deng@linux.dev

Commit Message

Yajun Deng Sept. 15, 2023, 10:17 a.m. UTC
The parameter page in __page_check_anon_rmap() is redundant. Remove it,
and convert __page_check_anon_rmap() to __folio_check_anon_rmap().

Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
---
 mm/rmap.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
  

Comments

Matthew Wilcox Sept. 15, 2023, 1:49 p.m. UTC | #1
On Fri, Sep 15, 2023 at 06:17:31PM +0800, Yajun Deng wrote:
> @@ -1176,8 +1175,8 @@ static void __page_check_anon_rmap(struct folio *folio, struct page *page,
>  	 */
>  	VM_BUG_ON_FOLIO(folio_anon_vma(folio)->root != vma->anon_vma->root,
>  			folio);
> -	VM_BUG_ON_PAGE(page_to_pgoff(page) != linear_page_index(vma, address),
> -		       page);
> +	VM_BUG_ON_FOLIO(folio_pgoff(folio) != linear_page_index(vma, address),
> +		       folio);

No, this is not equivalent.  You haven't hit any problems testing it
because you don't have large anonymous folios.
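To see why the two assertions diverge, consider a large anonymous folio
(a sketch, not from the thread; folio_page(), page_to_pgoff(), folio_pgoff()
and linear_page_index() are real mm helpers, while the address A is a
placeholder):

	/*
	 * Sketch: an order-2 anon folio whose head page is mapped at user
	 * virtual address A, and we add an rmap for its second page.
	 */
	struct page *page = folio_page(folio, 1);	/* a tail page */
	unsigned long address = A + PAGE_SIZE;		/* where it maps */

	/* Old check: per-page index, passes. */
	VM_BUG_ON_PAGE(page_to_pgoff(page) !=		  /* folio->index + 1 */
		       linear_page_index(vma, address),	  /* folio->index + 1 */
		       page);

	/* Converted check: head-page index, fires spuriously. */
	VM_BUG_ON_FOLIO(folio_pgoff(folio) !=		  /* folio->index     */
			linear_page_index(vma, address),  /* folio->index + 1 */
			folio);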
  
David Hildenbrand Sept. 15, 2023, 3:26 p.m. UTC | #2
On 15.09.23 15:49, Matthew Wilcox wrote:
> On Fri, Sep 15, 2023 at 06:17:31PM +0800, Yajun Deng wrote:
>> @@ -1176,8 +1175,8 @@ static void __page_check_anon_rmap(struct folio *folio, struct page *page,
>>   	 */
>>   	VM_BUG_ON_FOLIO(folio_anon_vma(folio)->root != vma->anon_vma->root,
>>   			folio);
>> -	VM_BUG_ON_PAGE(page_to_pgoff(page) != linear_page_index(vma, address),
>> -		       page);
>> +	VM_BUG_ON_FOLIO(folio_pgoff(folio) != linear_page_index(vma, address),
>> +		       folio);
> 
> No, this is not equivalent.  You haven't hit any problems testing it
> because you don't have large anonymous folios.

Right, the address would have to be adjusted as well by the caller.
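A minimal sketch of that caller-side adjustment (an assumption about what
such a fix could look like, not a patch from this thread; folio_page_idx()
is a real mm helper):

	/*
	 * Sketch only: rewind @address to the folio's head page before
	 * calling the folio-based check, so that
	 * folio_pgoff(folio) == linear_page_index(vma, head_addr)
	 * also holds for tail pages of large folios.
	 */
	unsigned long head_addr = address -
				  folio_page_idx(folio, page) * PAGE_SIZE;
	__folio_check_anon_rmap(folio, vma, head_addr);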
  

Patch

diff --git a/mm/rmap.c b/mm/rmap.c
index 789a2beb8b3a..520607f4d91c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1154,13 +1154,12 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
 }
 
 /**
- * __page_check_anon_rmap - sanity check anonymous rmap addition
+ * __folio_check_anon_rmap - sanity check anonymous rmap addition
  * @folio:	The folio containing @page.
- * @page:	the page to check the mapping of
  * @vma:	the vm area in which the mapping is added
  * @address:	the user virtual address mapped
  */
-static void __page_check_anon_rmap(struct folio *folio, struct page *page,
+static void __folio_check_anon_rmap(struct folio *folio,
 	struct vm_area_struct *vma, unsigned long address)
 {
 	/*
@@ -1176,8 +1175,8 @@ static void __page_check_anon_rmap(struct folio *folio, struct page *page,
 	 */
 	VM_BUG_ON_FOLIO(folio_anon_vma(folio)->root != vma->anon_vma->root,
 			folio);
-	VM_BUG_ON_PAGE(page_to_pgoff(page) != linear_page_index(vma, address),
-		       page);
+	VM_BUG_ON_FOLIO(folio_pgoff(folio) != linear_page_index(vma, address),
+		       folio);
 }
 
 /**
@@ -1245,7 +1244,7 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 		__folio_set_anon(folio, vma, address,
 				 !!(flags & RMAP_EXCLUSIVE));
 	} else if (likely(!folio_test_ksm(folio))) {
-		__page_check_anon_rmap(folio, page, vma, address);
+		__folio_check_anon_rmap(folio, vma, address);
 	}
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);