[v2,1/2] mm: Call arch_swap_restore() from do_swap_page()

Message ID 20230516023514.2643054-2-pcc@google.com
State New
Series mm: Fix bug affecting swapping in MTE tagged pages

Commit Message

Peter Collingbourne May 16, 2023, 2:35 a.m. UTC
  Commit c145e0b47c77 ("mm: streamline COW logic in do_swap_page()") moved
the call to swap_free() before the call to set_pte_at(), which meant that
the MTE tags could end up being freed before set_pte_at() had a chance
to restore them. Fix it by adding a call to the arch_swap_restore() hook
before the call to swap_free().

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I6470efa669e8bd2f841049b8c61020c510678965
Cc: <stable@vger.kernel.org> # 6.1
Fixes: c145e0b47c77 ("mm: streamline COW logic in do_swap_page()")
Reported-by: Qun-wei Lin (林群崴) <Qun-wei.Lin@mediatek.com>
Link: https://lore.kernel.org/all/5050805753ac469e8d727c797c2218a9d780d434.camel@mediatek.com/
---
v2:
- Call arch_swap_restore() directly instead of via arch_do_swap_page()

 mm/memory.c | 7 +++++++
 1 file changed, 7 insertions(+)
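
For context, arch_swap_restore() is a no-op on architectures that keep no
out-of-line page metadata; arm64 overrides it to restore the MTE tags that
were saved when the page went out to swap. Roughly (simplified from
include/linux/pgtable.h and arch/arm64/include/asm/pgtable.h around this
kernel version; exact signatures may differ between releases):

/* Generic fallback in include/linux/pgtable.h: nothing to restore. */
#ifndef __HAVE_ARCH_SWAP_RESTORE
static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
{
}
#endif

/*
 * arm64 override: the saved tags are looked up by swap entry, which is
 * why the hook has to run before swap_free() invalidates that entry.
 */
#define __HAVE_ARCH_SWAP_RESTORE
static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
{
	if (system_supports_mte())
		mte_restore_tags(entry, &folio->page);
}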
  

Comments

David Hildenbrand May 16, 2023, 12:49 p.m. UTC | #1
On 16.05.23 04:35, Peter Collingbourne wrote:
> Commit c145e0b47c77 ("mm: streamline COW logic in do_swap_page()") moved
> the call to swap_free() before the call to set_pte_at(), which meant that
> the MTE tags could end up being freed before set_pte_at() had a chance
> to restore them. Fix it by adding a call to the arch_swap_restore() hook
> before the call to swap_free().
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/I6470efa669e8bd2f841049b8c61020c510678965
> Cc: <stable@vger.kernel.org> # 6.1
> Fixes: c145e0b47c77 ("mm: streamline COW logic in do_swap_page()")
> Reported-by: Qun-wei Lin (林群崴) <Qun-wei.Lin@mediatek.com>
> Link: https://lore.kernel.org/all/5050805753ac469e8d727c797c2218a9d780d434.camel@mediatek.com/
> ---
> v2:
> - Call arch_swap_restore() directly instead of via arch_do_swap_page()
> 
>   mm/memory.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 01a23ad48a04..a2d9e6952d31 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3914,6 +3914,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>   		}
>   	}
>   
> +	/*
> +	 * Some architectures may have to restore extra metadata to the page
> +	 * when reading from swap. This metadata may be indexed by swap entry
> +	 * so this must be called before swap_free().
> +	 */
> +	arch_swap_restore(entry, folio);
> +
>   	/*
>   	 * Remove the swap entry and conditionally try to free up the swapcache.
>   	 * We're already holding a reference on the page but haven't mapped it

Looks much better to me, thanks :)

... staring at unuse_pte(), I suspect it also doesn't take care of MTE 
tags and needs fixing?
  
Peter Collingbourne May 17, 2023, 2:21 a.m. UTC | #2
On Tue, May 16, 2023 at 5:49 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 16.05.23 04:35, Peter Collingbourne wrote:
> > Commit c145e0b47c77 ("mm: streamline COW logic in do_swap_page()") moved
> > the call to swap_free() before the call to set_pte_at(), which meant that
> > the MTE tags could end up being freed before set_pte_at() had a chance
> > to restore them. Fix it by adding a call to the arch_swap_restore() hook
> > before the call to swap_free().
> >
> > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > Link: https://linux-review.googlesource.com/id/I6470efa669e8bd2f841049b8c61020c510678965
> > Cc: <stable@vger.kernel.org> # 6.1
> > Fixes: c145e0b47c77 ("mm: streamline COW logic in do_swap_page()")
> > Reported-by: Qun-wei Lin (林群崴) <Qun-wei.Lin@mediatek.com>
> > Link: https://lore.kernel.org/all/5050805753ac469e8d727c797c2218a9d780d434.camel@mediatek.com/
> > ---
> > v2:
> > - Call arch_swap_restore() directly instead of via arch_do_swap_page()
> >
> >   mm/memory.c | 7 +++++++
> >   1 file changed, 7 insertions(+)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 01a23ad48a04..a2d9e6952d31 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3914,6 +3914,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >               }
> >       }
> >
> > +     /*
> > +      * Some architectures may have to restore extra metadata to the page
> > +      * when reading from swap. This metadata may be indexed by swap entry
> > +      * so this must be called before swap_free().
> > +      */
> > +     arch_swap_restore(entry, folio);
> > +
> >       /*
> >        * Remove the swap entry and conditionally try to free up the swapcache.
> >        * We're already holding a reference on the page but haven't mapped it
>
> Looks much better to me, thanks :)
>
> ... staring at unuse_pte(), I suspect it also doesn't take care of MTE
> tags and needs fixing?

Nice catch, I've fixed it in v3.

I don't think there are any other cases like this. I looked for code
that decrements the MM_SWAPENTS counter and we're already covering all
of them.

Peter
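
For reference, the same ordering rule applies in unuse_pte(): any
swap-entry-indexed metadata has to be restored while the entry is still
live, i.e. before swap_free(). A rough sketch of the shape such a fix
takes in mm/swapfile.c (illustrative only, not the literal v3 patch):

	/*
	 * Restore arch metadata (e.g. MTE tags) that is keyed by the swap
	 * entry before the entry is freed below.
	 */
	arch_swap_restore(entry, folio);

	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
	...
	set_pte_at(vma->vm_mm, addr, pte, new_pte);
	swap_free(entry);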
  

Patch

diff --git a/mm/memory.c b/mm/memory.c
index 01a23ad48a04..a2d9e6952d31 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3914,6 +3914,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		}
 	}
 
+	/*
+	 * Some architectures may have to restore extra metadata to the page
+	 * when reading from swap. This metadata may be indexed by swap entry
+	 * so this must be called before swap_free().
+	 */
+	arch_swap_restore(entry, folio);
+
 	/*
 	 * Remove the swap entry and conditionally try to free up the swapcache.
 	 * We're already holding a reference on the page but haven't mapped it