[v3,2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI

Message ID 20221125213714.4115729-2-jannh@google.com
Series [v3,1/3] mm/khugepaged: Take the right locks for page table retraction

Commit Message

Jann Horn Nov. 25, 2022, 9:37 p.m. UTC
  Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
ensure that the page table was not removed by khugepaged in between.

However, lockless_pages_from_mm() still requires that the page table is not
concurrently freed. Fix it by sending IPIs (if the architecture uses
semi-RCU-style page table freeing) before freeing/reusing page tables.

Cc: stable@kernel.org
Fixes: ba76149f47d8 ("thp: khugepaged")
Signed-off-by: Jann Horn <jannh@google.com>
---
replaced the mmu_gather-based scheme with an RCU call as suggested by
Peter Xu

 include/asm-generic/tlb.h | 4 ++++
 mm/khugepaged.c           | 2 ++
 mm/mmu_gather.c           | 4 +---
 3 files changed, 7 insertions(+), 3 deletions(-)
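
For context, a condensed, illustrative sketch of the GUP-fast logic in
mm/gup.c that this patch protects (not the literal kernel code; error
handling and most details omitted). lockless_pages_from_mm() walks the
page tables with only interrupts disabled:

	local_irq_save(flags);
	gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
	local_irq_restore(flags);

Inside the PTE-level walk, after a page reference has been grabbed, the
recheck added by commit 70cbc3cc78a99 is roughly:

	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
		/*
		 * PMD changed under us (e.g. khugepaged collapse):
		 * drop the reference and fall back to the slow path.
		 */
		gup_put_folio(folio, 1, flags);
		goto pte_unmap;
	}

This recheck is only meaningful while the old page table still holds
intact PTEs, i.e. as long as it has not been freed or reused, which is
exactly what the IPI added by this patch guarantees.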
  

Comments

David Hildenbrand Nov. 28, 2022, 1:46 p.m. UTC | #1
On 25.11.22 22:37, Jann Horn wrote:
> Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> ensure that the page table was not removed by khugepaged in between.
> 
> However, lockless_pages_from_mm() still requires that the page table is not
> concurrently freed.

That's an interesting point. For anon THPs, the page table won't get 
immediately freed, but instead will be deposited in the "pgtable list" 
stored alongside the THP.

From there, it might get withdrawn (pgtable_trans_huge_withdraw()) and

a) Reused as a page table when splitting the THP. That should be fine, 
no garbage in it, simply a page table again.

b) Freed when zapping the THP (zap_deposited_table()). That would be bad.

... but I just realized that e.g., radix__pgtable_trans_huge_deposit 
uses actual page content to link the deposited page tables, which means 
we'd already be storing garbage in there when depositing the page, not 
only when freeing+reusing the page ...

Maybe worth adding to the description.
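
For reference, the generic deposit helper in mm/pgtable-generic.c (used
when an architecture does not define __HAVE_ARCH_PGTABLE_DEPOSIT) chains
deposited page tables through their struct page, not through the page
contents; lightly condensed excerpt:

void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
				pgtable_t pgtable)
{
	assert_spin_locked(pmd_lockptr(mm, pmdp));

	/* FIFO */
	if (!pmd_huge_pte(mm, pmdp))
		INIT_LIST_HEAD(&pgtable->lru);
	else
		list_add(&pgtable->lru, &pmd_huge_pte(mm, pmdp)->lru);
	pmd_huge_pte(mm, pmdp) = pgtable;
}

The powerpc radix variant radix__pgtable_trans_huge_deposit() mentioned
above instead writes the linkage into the deposited page itself, which is
why garbage appears in the old page table as early as deposit time there.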

> Fix it by sending IPIs (if the architecture uses
> semi-RCU-style page table freeing) before freeing/reusing page tables.
> 
> Cc: stable@kernel.org
> Fixes: ba76149f47d8 ("thp: khugepaged")
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
> replaced the mmu_gather-based scheme with an RCU call as suggested by
> Peter Xu
> 
>   include/asm-generic/tlb.h | 4 ++++
>   mm/khugepaged.c           | 2 ++
>   mm/mmu_gather.c           | 4 +---
>   3 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 492dce43236ea..cab7cfebf40bd 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -222,12 +222,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
>   #define tlb_needs_table_invalidate() (true)
>   #endif
>   
> +void tlb_remove_table_sync_one(void);
> +
>   #else
>   
>   #ifdef tlb_needs_table_invalidate
>   #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
>   #endif
>   
> +static inline void tlb_remove_table_sync_one(void) { }
> +
>   #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>   
>   
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 674b111a24fa7..c3d3ce596bff7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1057,6 +1057,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>   	_pmd = pmdp_collapse_flush(vma, address, pmd);
>   	spin_unlock(pmd_ptl);
>   	mmu_notifier_invalidate_range_end(&range);
> +	tlb_remove_table_sync_one();
>   
>   	spin_lock(pte_ptl);
>   	result =  __collapse_huge_page_isolate(vma, address, pte, cc,
> @@ -1415,6 +1416,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
>   		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
>   
>   	pmd = pmdp_collapse_flush(vma, addr, pmdp);
> +	tlb_remove_table_sync_one();
>   	mm_dec_nr_ptes(mm);
>   	page_table_check_pte_clear_range(mm, addr, pmd);
>   	pte_free(mm, pmd_pgtable(pmd));
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index add4244e5790d..3a2c3f8cad2fe 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -153,7 +153,7 @@ static void tlb_remove_table_smp_sync(void *arg)
>   	/* Simply deliver the interrupt */
>   }
>   
> -static void tlb_remove_table_sync_one(void)
> +void tlb_remove_table_sync_one(void)
>   {
>   	/*
>   	 * This isn't an RCU grace period and hence the page-tables cannot be
> @@ -177,8 +177,6 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
>   
>   #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>   
> -static void tlb_remove_table_sync_one(void) { }
> -
>   static void tlb_remove_table_free(struct mmu_table_batch *batch)
>   {
>   	__tlb_remove_table_free(batch);

With CONFIG_MMU_GATHER_RCU_TABLE_FREE this will most certainly do the 
right thing. I assume with !CONFIG_MMU_GATHER_RCU_TABLE_FREE, the 
assumption is that there will be an implicit IPI.

That implicit IPI has to happen before we deposit. I assume that is 
expected to happen during pmdp_collapse_flush()?
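
For reference, the generic pmdp_collapse_flush() in mm/pgtable-generic.c
(architectures can override it via __HAVE_ARCH_PMDP_COLLAPSE_FLUSH)
clears the PMD and then flushes the TLB; on !MMU_GATHER_RCU_TABLE_FREE
architectures that flush_tlb_range() is what is assumed to broadcast the
implicit IPI. Lightly condensed excerpt:

pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
			  pmd_t *pmdp)
{
	/*
	 * pmd and hugepage pte format are same. So we could
	 * use the same function.
	 */
	pmd_t pmd;

	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
	VM_BUG_ON(pmd_trans_huge(*pmdp));
	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);

	/* collapse entails shooting down ptes not pmd */
	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return pmd;
}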
  
Jann Horn Nov. 28, 2022, 4:58 p.m. UTC | #2
On Mon, Nov 28, 2022 at 2:46 PM David Hildenbrand <david@redhat.com> wrote:
> On 25.11.22 22:37, Jann Horn wrote:
> > Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> > collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> > ensure that the page table was not removed by khugepaged in between.
> >
> > However, lockless_pages_from_mm() still requires that the page table is not
> > concurrently freed.
>
> That's an interesting point. For anon THPs, the page table won't get
> immediately freed, but instead will be deposited in the "pgtable list"
> stored alongside the THP.
>
> From there, it might get withdrawn (pgtable_trans_huge_withdraw()) and
>
> a) Reused as a page table when splitting the THP. That should be fine,
> no garbage in it, simply a page table again.

Depends on the definition of "fine" - it will be a page table again,
but deposited page tables are not associated with a specific address,
so it might be reused at a different address. If GUP-fast on address A
races with a page table from address A being deposited and reused at
address B, and then GUP-fast returns something from address B, that's
not exactly great either.

> b) Freed when zapping the THP (zap_deposited_table()). That would be bad.
>
> ... but I just realized that e.g., radix__pgtable_trans_huge_deposit
> uses actual page content to link the deposited page tables, which means
> we'd already be storing garbage in there when depositing the page, not
> only when freeing+reusing the page ...
>
> Maybe worth adding to the description.

Yeah, okay, I'll change the commit message and resend...

[...]
> With CONFIG_MMU_GATHER_RCU_TABLE_FREE this will most certainly do the
> right thing. I assume with !CONFIG_MMU_GATHER_RCU_TABLE_FREE, the
> assumption is that there will be an implicit IPI.
>
> That implicit IPI has to happen before we deposit. I assume that is
> expected to happen during pmdp_collapse_flush()?

Yeah, pmdp_collapse_flush() does a TLB flush, as the name says. And as
documented in a comment in mm/gup.c:

 * Before activating this code, please be aware that the following assumptions
 * are currently made:
 *
 *  *) Either MMU_GATHER_RCU_TABLE_FREE is enabled, and tlb_remove_table() is used
 *  to free pages containing page tables or TLB flushing requires IPI broadcast.

I'll go sprinkle that in a comment somewhere, either in the file or in
the commit message...
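
For reference, the function this patch exports reads as follows in
mm/mmu_gather.c (under CONFIG_MMU_GATHER_RCU_TABLE_FREE; lightly
condensed):

static void tlb_remove_table_smp_sync(void *arg)
{
	/* Simply deliver the interrupt */
}

void tlb_remove_table_sync_one(void)
{
	/*
	 * This isn't an RCU grace period and hence the page-tables cannot be
	 * assumed to be actually RCU-freed.
	 *
	 * It is however sufficient for software page-table walkers that rely
	 * on IRQ disabling.
	 */
	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
}

smp_call_function() with wait=1 only returns once every other CPU has run
the (empty) handler. A CPU executing GUP-fast has interrupts disabled and
therefore cannot take the IPI, so by the time the call returns, no CPU can
still be walking the old page table.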
  
David Hildenbrand Nov. 28, 2022, 5 p.m. UTC | #3
On 28.11.22 17:58, Jann Horn wrote:
> On Mon, Nov 28, 2022 at 2:46 PM David Hildenbrand <david@redhat.com> wrote:
>> On 25.11.22 22:37, Jann Horn wrote:
>>> Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
>>> collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
>>> ensure that the page table was not removed by khugepaged in between.
>>>
>>> However, lockless_pages_from_mm() still requires that the page table is not
>>> concurrently freed.
>>
>> That's an interesting point. For anon THPs, the page table won't get
>> immediately freed, but instead will be deposited in the "pgtable list"
>> stored alongside the THP.
>>
>> From there, it might get withdrawn (pgtable_trans_huge_withdraw()) and
>>
>> a) Reused as a page table when splitting the THP. That should be fine,
>> no garbage in it, simply a page table again.
> 
> Depends on the definition of "fine" - it will be a page table again,
> but deposited page tables are not associated with a specific address,
> so it might be reused at a different address. If GUP-fast on address A
> races with a page table from address A being deposited and reused at
> address B, and then GUP-fast returns something from address B, that's
> not exactly great either.

The "PMD changed" check should catch that. We only care about not 
dereferencing something that's garbage and not a page/folio if I 
remember the previous discussions on that correctly.

Anyhow, feel free to add my

Acked-by: David Hildenbrand <david@redhat.com>
  

Patch

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 492dce43236ea..cab7cfebf40bd 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -222,12 +222,16 @@  extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 #define tlb_needs_table_invalidate() (true)
 #endif
 
+void tlb_remove_table_sync_one(void);
+
 #else
 
 #ifdef tlb_needs_table_invalidate
 #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
 #endif
 
+static inline void tlb_remove_table_sync_one(void) { }
+
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 674b111a24fa7..c3d3ce596bff7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1057,6 +1057,7 @@  static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
 	spin_unlock(pmd_ptl);
 	mmu_notifier_invalidate_range_end(&range);
+	tlb_remove_table_sync_one();
 
 	spin_lock(pte_ptl);
 	result =  __collapse_huge_page_isolate(vma, address, pte, cc,
@@ -1415,6 +1416,7 @@  static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
 
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
+	tlb_remove_table_sync_one();
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index add4244e5790d..3a2c3f8cad2fe 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -153,7 +153,7 @@  static void tlb_remove_table_smp_sync(void *arg)
 	/* Simply deliver the interrupt */
 }
 
-static void tlb_remove_table_sync_one(void)
+void tlb_remove_table_sync_one(void)
 {
 	/*
 	 * This isn't an RCU grace period and hence the page-tables cannot be
@@ -177,8 +177,6 @@  static void tlb_remove_table_free(struct mmu_table_batch *batch)
 
 #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
-static void tlb_remove_table_sync_one(void) { }
-
 static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
 	__tlb_remove_table_free(batch);