[00/12] mm: free retracted page table by RCU

Message ID 35e983f5-7ed3-b310-d949-9ae8b130cdab@google.com
Series: mm: free retracted page table by RCU

Message

Hugh Dickins May 29, 2023, 6:11 a.m. UTC
  Here is the third series of patches to mm (and a few architectures), based
on v6.4-rc3 with the preceding two series applied: in which khugepaged
takes advantage of pte_offset_map[_lock]() allowing for pmd transitions.

This follows on from the "arch: allow pte_offset_map[_lock]() to fail"
https://lore.kernel.org/linux-mm/77a5d8c-406b-7068-4f17-23b7ac53bc83@google.com/
series of 23 posted on 2023-05-09,
and the "mm: allow pte_offset_map[_lock]() to fail"
https://lore.kernel.org/linux-mm/68a97fbe-5c1e-7ac6-72c-7b9c6290b370@google.com/
series of 31 posted on 2023-05-21.

Those two series were "independent": neither depending for build or
correctness on the other, but both series needed before this third one
can safely make the effective changes.  I'll send v2 of those two series
in a couple of days, incorporating Acks and Revieweds and the minor fixes.

What is it all about?  Some mmap_lock avoidance, i.e. latency reduction.
Initially just for the case of collapsing shmem or file pages to THPs:
the usefulness of MADV_COLLAPSE on shmem is being limited by that
mmap_write_lock it currently requires.
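
For the record, the userspace trigger for that path is just the hint
itself; a minimal sketch (collapse_range(), buf and len are inventions
of this example, and MADV_COLLAPSE needs v6.1+ headers):

#include <sys/mman.h>
#include <stdio.h>

/* Ask the kernel to collapse an already-mapped range to THPs. */
static int collapse_range(void *buf, size_t len)
{
  if (madvise(buf, len, MADV_COLLAPSE)) {
    perror("madvise(MADV_COLLAPSE)");
    return -1;
  }
  return 0;
}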

Likely to be relied upon later in other contexts e.g. freeing of
empty page tables (but that's not work I'm doing).  mmap_write_lock
avoidance when collapsing to anon THPs?  Perhaps, but again that's not
work I've done: a quick attempt was not as easy as the shmem/file case.

These changes (though of course not these exact patches) have been in
Google's data centre kernel for three years now: we do rely upon them.

Based on the preceding two series over v6.4-rc3, but good over
v6.4-rc[1-4], current mm-everything or current linux-next.

01/12 mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s
02/12 mm/pgtable: add PAE safety to __pte_offset_map()
03/12 arm: adjust_pte() use pte_offset_map_nolock()
04/12 powerpc: assert_pte_locked() use pte_offset_map_nolock()
05/12 powerpc: add pte_free_defer() for pgtables sharing page
06/12 sparc: add pte_free_defer() for pgtables sharing page
07/12 s390: add pte_free_defer(), with use of mmdrop_async()
08/12 mm/pgtable: add pte_free_defer() for pgtable as page
09/12 mm/khugepaged: retract_page_tables() without mmap or vma lock
10/12 mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock()
11/12 mm/khugepaged: delete khugepaged_collapse_pte_mapped_thps()
12/12 mm: delete mmap_write_trylock() and vma_try_start_write()
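
The common mechanism in patches 05-08 is pte_free_defer(): rather than
freeing a just-retracted page table immediately, free it only after an
RCU grace period, so a racing lockless walker can still safely read
(and take the ptl of) the detached table.  The generic version in
08/12 has roughly this shape (a simplified sketch assuming pgtable_t
is struct page *, as on most architectures; not the exact patch):

static void pte_free_now(struct rcu_head *head)
{
  struct page *page = container_of(head, struct page, rcu_head);

  pte_free(NULL /* mm not passed and not used */, page);
}

void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
{
  call_rcu(&pgtable->rcu_head, pte_free_now);
}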

 arch/arm/mm/fault-armv.c            |   3 +-
 arch/powerpc/include/asm/pgalloc.h  |   4 +
 arch/powerpc/mm/pgtable-frag.c      |  18 ++
 arch/powerpc/mm/pgtable.c           |  16 +-
 arch/s390/include/asm/pgalloc.h     |   4 +
 arch/s390/mm/pgalloc.c              |  34 +++
 arch/sparc/include/asm/pgalloc_64.h |   4 +
 arch/sparc/mm/init_64.c             |  16 ++
 include/linux/mm.h                  |  17 --
 include/linux/mm_types.h            |   2 +-
 include/linux/mmap_lock.h           |  10 -
 include/linux/pgtable.h             |   6 +-
 include/linux/sched/mm.h            |   1 +
 kernel/fork.c                       |   2 +-
 mm/khugepaged.c                     | 425 ++++++++----------------------
 mm/pgtable-generic.c                |  44 +++-
 16 files changed, 253 insertions(+), 353 deletions(-)

Hugh
  

Comments

Jann Horn May 31, 2023, 5:59 p.m. UTC | #1
On Mon, May 29, 2023 at 8:11 AM Hugh Dickins <hughd@google.com> wrote:
> Here is the third series of patches to mm (and a few architectures), based
> on v6.4-rc3 with the preceding two series applied: in which khugepaged
> takes advantage of pte_offset_map[_lock]() allowing for pmd transitions.

To clarify: Part of the design here is that when you look up a user
page table with pte_offset_map_nolock() or pte_offset_map() without
holding mmap_lock in write mode, and you later lock the page table
yourself, you don't know whether you actually have the real page table
or a detached table that is currently in its RCU grace period, right?
And detached tables are supposed to consist of only zeroed entries,
and we assume that no relevant codepath will do anything bad if one of
these functions spuriously returns a pointer to a page table full of
zeroed entries?

So in particular, in handle_pte_fault() we can reach the "if
(unlikely(!pte_same(*vmf->pte, entry)))" with vmf->pte pointing to a
detached zeroed page table, but we're okay with that because in that
case we know that !pte_none(vmf->orig_pte) && pte_none(*vmf->pte),
which implies !pte_same(*vmf->pte, entry), which means we'll bail
out?

If that's the intent, it might be good to add some comments, because
at least to me that's not very obvious.
  
Hugh Dickins June 2, 2023, 4:37 a.m. UTC | #2
On Wed, 31 May 2023, Jann Horn wrote:
> On Mon, May 29, 2023 at 8:11 AM Hugh Dickins <hughd@google.com> wrote:
> > Here is the third series of patches to mm (and a few architectures), based
> > on v6.4-rc3 with the preceding two series applied: in which khugepaged
> > takes advantage of pte_offset_map[_lock]() allowing for pmd transitions.
> 
> To clarify: Part of the design here is that when you look up a user
> page table with pte_offset_map_nolock() or pte_offset_map() without
> holding mmap_lock in write mode, and you later lock the page table
> yourself, you don't know whether you actually have the real page table
> or a detached table that is currently in its RCU grace period, right?

Right.  (And I'd rather not assume anything of mmap_lock, but there are
one or two or three places that may still do so.)

> And detached tables are supposed to consist of only zeroed entries,
> and we assume that no relevant codepath will do anything bad if one of
> these functions spuriously returns a pointer to a page table full of
> zeroed entries?

(Nit that I expect you're well aware of: IIRC "zeroed" isn't 0 on s390.)

If someone is using pte_offset_map() without lock, they must be prepared
to accept page-table-like changes.  The limits of pte_offset_map_nolock()
with later spin_lock(ptl): I'm still exploring: there's certainly an
argument that one ought to do a pmd_same() check before proceeding,
but I don't think anywhere needs that at present.
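
A minimal sketch of the check being argued for (mm, pmd, addr, ptl and
the retry label are assumptions of this example, not code from the
series):

pmd_t pmdval = pmdp_get_lockless(pmd);
pte_t *pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);

if (!pte)
  goto retry;
spin_lock(ptl);
if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
  /* The table was retracted after we mapped it: start over. */
  pte_unmap_unlock(pte, ptl);
  goto retry;
}
/* pte now points into the table that pmd currently references. */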

Whether the page table has to be full of zeroed entries when detached:
I believe it is always like that at present (by the end of the series,
when the collapse_pte_mapped_thp() oddity is fixed), but whether it needs
to be so I'm not sure.  Quite likely it will need to be; but I'm open to
the possibility that all it needs is to be still a page table, with
perhaps new entries from a new usage in it.

The most obvious vital thing (in the split ptlock case) is that it
remains a struct page with a usable ptl spinlock embedded in it.
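
For reference, in the split-ptlock case that lock is reached through
the page table's own struct page; the shape (not verbatim) of
pte_lockptr() in include/linux/mm.h is:

static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
{
  /* The ptl is embedded in (or hangs off) the page table's own
   * struct page, so a waiter spinning on a detached table's lock
   * is still spinning on valid, not-yet-freed memory. */
  return ptlock_ptr(pmd_page(*pmd));
}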

The question becomes more urgent when/if extending to replacing the
pagetable pmd by huge pmd in one go, without any mmap_lock: powerpc
wants to deposit the page table for later use even in the shmem/file
case (and all arches in the anon case): I did work out the details once
before, but I'm not sure whether I would still agree with myself; and was
glad to leave replacement out of this series, to revisit some time later.

> 
> So in particular, in handle_pte_fault() we can reach the "if
> (unlikely(!pte_same(*vmf->pte, entry)))" with vmf->pte pointing to a
> detached zeroed page table, but we're okay with that because in that
> case we know that !pte_none(vmf->orig_pte) && pte_none(*vmf->pte),
> which implies !pte_same(*vmf->pte, entry), which means we'll bail
> out?

There is no current (even at end of series) circumstance in which we
could be pointing to a detached page table there; but yes, I want to
allow for that, and yes I agree with your analysis.  But with the
interesting unanswered question for the future, of what if the same
value could be found there: would that imply it's safe to proceed,
or would some further prevention be needed?

> 
> If that's the intent, it might be good to add some comments, because
> at least to me that's not very obvious.

That's a very fair request; but I shall have difficulty deciding where
to place such comments.  I shall have to try, then you redirect me.

And I think we approach this in opposite ways: my nature is to put some
infrastructure in place, and then look at it to see what we can get away
with; whereas your nature is to define upfront what the possibilities are.
We can expect some tussles!

Thanks,
Hugh
  
Jann Horn June 2, 2023, 3:26 p.m. UTC | #3
On Fri, Jun 2, 2023 at 6:37 AM Hugh Dickins <hughd@google.com> wrote:
> On Wed, 31 May 2023, Jann Horn wrote:
> > On Mon, May 29, 2023 at 8:11 AM Hugh Dickins <hughd@google.com> wrote:
> > > Here is the third series of patches to mm (and a few architectures), based
> > > on v6.4-rc3 with the preceding two series applied: in which khugepaged
> > > takes advantage of pte_offset_map[_lock]() allowing for pmd transitions.
> >
> > To clarify: Part of the design here is that when you look up a user
> > page table with pte_offset_map_nolock() or pte_offset_map() without
> > holding mmap_lock in write mode, and you later lock the page table
> > yourself, you don't know whether you actually have the real page table
> > or a detached table that is currently in its RCU grace period, right?
>
> Right.  (And I'd rather not assume anything of mmap_lock, but there are
> one or two or three places that may still do so.)
>
> > And detached tables are supposed to consist of only zeroed entries,
> > and we assume that no relevant codepath will do anything bad if one of
> > these functions spuriously returns a pointer to a page table full of
> > zeroed entries?
>
> (Nit that I expect you're well aware of: IIRC "zeroed" isn't 0 on s390.)

I was not aware, thanks. I only knew that on Intel's Knights Landing
CPUs, the A/D bits are ignored by pte_none() due to some erratum.

> If someone is using pte_offset_map() without lock, they must be prepared
> to accept page-table-like changes.  The limits of pte_offset_map_nolock()
> with later spin_lock(ptl): I'm still exploring: there's certainly an
> argument that one ought to do a pmd_same() check before proceeding,
> but I don't think anywhere needs that at present.
>
> Whether the page table has to be full of zeroed entries when detached:
> I believe it is always like that at present (by the end of the series,
> when the collapse_pte_mapped_thp() oddity is fixed), but whether it needs
> to be so I'm not sure.  Quite likely it will need to be; but I'm open to
> the possibility that all it needs is to be still a page table, with
> perhaps new entries from a new usage in it.

My understanding is that at least handle_pte_fault(), the way it is
currently written, would do bad things in that case:

// assume we enter with mmap_lock in read mode,
// for a write fault on a shared writable VMA without a page_mkwrite handler
static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
{
  pte_t entry;

  if (unlikely(pmd_none(*vmf->pmd))) {
    [ not executed ]
  } else {
    /*
     * A regular pmd is established and it can't morph into a huge
     * pmd by anon khugepaged, since that takes mmap_lock in write
     * mode; but shmem or file collapse to THP could still morph
     * it into a huge pmd: just retry later if so.
     */
    vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm, vmf->pmd,
             vmf->address, &vmf->ptl);
    if (unlikely(!vmf->pte))
      [not executed]
    [assume that at this point, a concurrent THP collapse operation
     removes the page table, and the page table has now been reused
     and contains a read-only PTE]
    // this reads page table contents protected solely by RCU
    vmf->orig_pte = ptep_get_lockless(vmf->pte);
    vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;

    if (pte_none(vmf->orig_pte)) {
      pte_unmap(vmf->pte);
      vmf->pte = NULL;
    }
  }

  if (!vmf->pte)
    [not executed]

  if (!pte_present(vmf->orig_pte))
    [not executed]

  if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
    [not executed]

  spin_lock(vmf->ptl);
  entry = vmf->orig_pte;
  if (unlikely(!pte_same(*vmf->pte, entry))) {
    [not executed]
  }
  if (vmf->flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
    if (!pte_write(entry))
      // This will go into wp_page_shared(),
      // which will call wp_page_reuse(),
      // which will upgrade the page to writable
      return do_wp_page(vmf);
    [ not executed ]
  }
  [ not executed ]
}

That looks like we could end up racing such that a read-only PTE in
the reused page table gets upgraded to writable, which would probably
be a security bug.

But I guess if you added bailout checks to every page table lock
operation, it'd be a different story, maybe?
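
For orientation, the collapse side of that race, as of patch 09/12,
looks roughly like this (a simplified sketch, not the exact patch;
pml, pgt_pmd, vma and addr as in retract_page_tables()):

/* Clear the pmd under its lock and flush, so that no new walker can
 * reach the page table; then free it via RCU. */
pml = pmd_lock(mm, pmd);
pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
spin_unlock(pml);

mm_dec_nr_ptes(mm);
pte_free_defer(mm, pmd_pgtable(pgt_pmd));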

> The most obvious vital thing (in the split ptlock case) is that it
> remains a struct page with a usable ptl spinlock embedded in it.
>
> The question becomes more urgent when/if extending to replacing the
> pagetable pmd by huge pmd in one go, without any mmap_lock: powerpc
> wants to deposit the page table for later use even in the shmem/file
> case (and all arches in the anon case): I did work out the details once
> before, but I'm not sure whether I would still agree with myself; and was
> glad to leave replacement out of this series, to revisit some time later.
>
> >
> > So in particular, in handle_pte_fault() we can reach the "if
> > (unlikely(!pte_same(*vmf->pte, entry)))" with vmf->pte pointing to a
> > detached zeroed page table, but we're okay with that because in that
> > case we know that !pte_none(vmf->orig_pte) && pte_none(*vmf->pte),
> > which implies !pte_same(*vmf->pte, entry), which means we'll bail
> > out?
>
> There is no current (even at end of series) circumstance in which we
> could be pointing to a detached page table there; but yes, I want to
> allow for that, and yes I agree with your analysis.

Hmm, what am I missing here?

static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
{
  pte_t entry;

  if (unlikely(pmd_none(*vmf->pmd))) {
    [not executed]
  } else {
    /*
     * A regular pmd is established and it can't morph into a huge
     * pmd by anon khugepaged, since that takes mmap_lock in write
     * mode; but shmem or file collapse to THP could still morph
     * it into a huge pmd: just retry later if so.
     */
    vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm, vmf->pmd,
             vmf->address, &vmf->ptl);
    if (unlikely(!vmf->pte))
      [not executed]
    // this reads a present readonly PTE
    vmf->orig_pte = ptep_get_lockless(vmf->pte);
    vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;

    if (pte_none(vmf->orig_pte)) {
      [not executed]
    }
  }

  [at this point, a concurrent THP collapse operation detaches the page table]
  // vmf->pte now points into a detached page table

  if (!vmf->pte)
    [not executed]

  if (!pte_present(vmf->orig_pte))
    [not executed]

  if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
    [not executed]

  spin_lock(vmf->ptl);
  entry = vmf->orig_pte;
  // vmf->pte still points into a detached page table
  if (unlikely(!pte_same(*vmf->pte, entry))) {
    update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
    goto unlock;
  }
  [...]
}

> But with the
> interesting unanswered question for the future, of what if the same
> value could be found there: would that imply it's safe to proceed,
> or would some further prevention be needed?

That would then hand the pointer to the detached page table to
functions like do_wp_page(), which I think will do bad things (see
above) if they are called on either a page table that has been reused
in a different VMA with different protection flags (which could, for
example, lead to pages becoming writable that should not be writable
or such things) or on a page table that is no longer in use (because
it would assume that PTEs referencing pages are refcounted when they
actually aren't).

> > If that's the intent, it might be good to add some comments, because
> > at least to me that's not very obvious.
>
> That's a very fair request; but I shall have difficulty deciding where
> to place such comments.  I shall have to try, then you redirect me.
>
> And I think we approach this in opposite ways: my nature is to put some
> infrastructure in place, and then look at it to see what we can get away
> with; whereas your nature is to define upfront what the possibilities are.
> We can expect some tussles!

Yeah. :P
One of my strongly-held beliefs is that it's important when making
changes to code to continuously ask oneself "If I had to explain the
rules by which this code operates (who has to take which locks, who
holds references to what, and so on), how complicated would those
rules be?", and if that turns into a series of exception cases, that
probably means there will be bugs, because someone will probably lose
track of one of those exceptions. So I would prefer it if we could
have some rule like "whenever you lock an L1 page table, you must
immediately recheck whether the page table is still referenced by the
L2 page table, unless you know that you have a stable page reference
for whatever reason", and then any code that operates on a locked page
table doesn't have to worry about whether the page table might be
detached.
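
That rule is, in effect, what pte_offset_map_lock() from the preceding
series already encodes; its lock-and-recheck core looks approximately
like this (lightly simplified from mm/pgtable-generic.c):

pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
         unsigned long addr, spinlock_t **ptlp)
{
  spinlock_t *ptl;
  pmd_t pmdval;
  pte_t *pte;
again:
  pte = __pte_offset_map(pmd, addr, &pmdval);
  if (unlikely(!pte))
    return pte;
  ptl = pte_lockptr(mm, &pmdval);
  spin_lock(ptl);
  if (likely(pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
    *ptlp = ptl;
    return pte;
  }
  /* Raced with retraction: unlock, unmap and try again. */
  pte_unmap_unlock(pte, ptl);
  goto again;
}

It is pte_offset_map_nolock() followed by a later manual spin_lock()
that omits such a recheck, which is why that combination is the
interesting case in this discussion.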
  
Hugh Dickins June 6, 2023, 6:28 a.m. UTC | #4
On Fri, 2 Jun 2023, Jann Horn wrote:
> On Fri, Jun 2, 2023 at 6:37 AM Hugh Dickins <hughd@google.com> wrote:
> 
> > The most obvious vital thing (in the split ptlock case) is that it
> > remains a struct page with a usable ptl spinlock embedded in it.
> >
> > The question becomes more urgent when/if extending to replacing the
> > pagetable pmd by huge pmd in one go, without any mmap_lock: powerpc
> > wants to deposit the page table for later use even in the shmem/file
> > case (and all arches in the anon case): I did work out the details once
> > before, but I'm not sure whether I would still agree with myself; and was
> > glad to leave replacement out of this series, to revisit some time later.
> >
> > >
> > > So in particular, in handle_pte_fault() we can reach the "if
> > > (unlikely(!pte_same(*vmf->pte, entry)))" with vmf->pte pointing to a
> > > detached zeroed page table, but we're okay with that because in that
> > > case we know that !pte_none(vmf->orig_pte) && pte_none(*vmf->pte),
> > > which implies !pte_same(*vmf->pte, entry), which means we'll bail
> > > out?
> >
> > There is no current (even at end of series) circumstance in which we
> > could be pointing to a detached page table there; but yes, I want to
> > allow for that, and yes I agree with your analysis.
> 
> Hmm, what am I missing here?

I spent quite a while trying to reconstruct what I had been thinking,
what meaning of "detached" or "there" I had in mind when I asserted so
confidently "There is no current (even at end of series) circumstance
in which we could be pointing to a detached page table there".

But I had to give up and get on with more useful work.
Of course you are right, and that is what this series is about.

Hugh

> 
> static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
> {
>   pte_t entry;
> 
>   if (unlikely(pmd_none(*vmf->pmd))) {
>     [not executed]
>   } else {
>     /*
>      * A regular pmd is established and it can't morph into a huge
>      * pmd by anon khugepaged, since that takes mmap_lock in write
>      * mode; but shmem or file collapse to THP could still morph
>      * it into a huge pmd: just retry later if so.
>      */
>     vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm, vmf->pmd,
>              vmf->address, &vmf->ptl);
>     if (unlikely(!vmf->pte))
>       [not executed]
>     // this reads a present readonly PTE
>     vmf->orig_pte = ptep_get_lockless(vmf->pte);
>     vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;
> 
>     if (pte_none(vmf->orig_pte)) {
>       [not executed]
>     }
>   }
> 
>   [at this point, a concurrent THP collapse operation detaches the page table]
>   // vmf->pte now points into a detached page table
> 
>   if (!vmf->pte)
>     [not executed]
> 
>   if (!pte_present(vmf->orig_pte))
>     [not executed]
> 
>   if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
>     [not executed]
> 
>   spin_lock(vmf->ptl);
>   entry = vmf->orig_pte;
>   // vmf->pte still points into a detached page table
>   if (unlikely(!pte_same(*vmf->pte, entry))) {
>     update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
>     goto unlock;
>   }
>   [...]
> }