[0/4] mm: Fix pmd_trans_unstable() call sites on retry

Message ID 20230602230552.350731-1-peterx@redhat.com
Series mm: Fix pmd_trans_unstable() call sites on retry

Message

Peter Xu June 2, 2023, 11:05 p.m. UTC
When we hit pmd_trans_unstable() under the mmap read lock, it means we
raced with something else.  Per the comment above the helper, we can
definitely treat it as some pmd state (none?), but the 100% correct way is
to always retry, and I don't think the retry should race again in most
cases.
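
Just to illustrate what I mean (this is only a sketch, not code from the
series, and walk_one_pmd() is a made-up helper), the retry re-evaluates the
pmd instead of silently skipping the range:

/*
 * Illustrative sketch: an unstable pmd means we raced with a THP
 * collapse or split, so re-evaluate the pmd rather than skip [addr, end).
 */
static int walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
			unsigned long addr, unsigned long end)
{
	spinlock_t *ptl;
	pte_t *pte;

again:
	if (pmd_none(*pmd))
		return 0;		/* nothing mapped, nothing to do */

	ptl = pmd_trans_huge_lock(pmd, vma);
	if (ptl) {
		/* Stable huge pmd: the THP case would be handled here */
		spin_unlock(ptl);
		return 0;
	}

	if (pmd_trans_unstable(pmd)) {
		/* Transient state (collapse/split in flight): retry */
		cond_resched();
		goto again;
	}

	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	/* ... process each pte in [addr, end) here ... */
	pte_unmap_unlock(pte, ptl);
	return 0;
}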

Not taking care of that retry can mean different things on different
paths.

For example, for smaps it means inaccurate accounting when we skip those
raced regions, but that is fine because the accounting is not meant to be
100% accurate anyway.

I think it is broken for pagemap, OTOH, because the pagemap buffer is
linear to the VA range we are scanning: if we skip some region, the
follow-up scans can fill in the wrong slots.  It means the pagemap results
returned to userspace will be wrong when we are very unlucky.
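
For context, the current pagemap pte walk looks roughly like below
(heavily simplified from fs/proc/task_mmu.c, so not verbatim): the early
return on the unstable pmd adds no entries for that range, so everything
after it in the same read() lands one pmd's worth of slots too early.

/* Heavily simplified sketch of the current code, not verbatim */
static int pagemap_pmd_range_sketch(pmd_t *pmdp, unsigned long addr,
				    unsigned long end, struct mm_walk *walk)
{
	struct pagemapread *pm = walk->private;
	pte_t *orig_pte, *pte;
	spinlock_t *ptl;

	if (pmd_trans_unstable(pmdp))
		return 0;	/* range skipped: later entries shift slots */

	orig_pte = pte = pte_offset_map_lock(walk->mm, pmdp, addr, &ptl);
	for (; addr < end; pte++, addr += PAGE_SIZE) {
		pagemap_entry_t pme;

		pme = pte_to_pagemap_entry(pm, walk->vma, addr, *pte);
		if (add_to_pagemap(addr, &pme, pm))
			break;
	}
	pte_unmap_unlock(orig_pte, ptl);
	return 0;
}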

This reminded me that I should have a look at all call sites of
pmd_trans_unstable().  Some of them are alright, but I do see many that
would be better off giving it another shot when hit.

This series tries to resolve all such call sites by adding that retry
attempt.

I really don't know whether I missed something, or, even if not, whether it
matters a lot to anyone.  Still, _if_ I'm correct it may be worth
considering a fix.  Happy to be proven wrong.  Then Muhammad should know
how to code his series.

The patchset is only smoke tested; I see nothing wrong so far.

Please have a look, thanks.

Peter Xu (4):
  mm/mprotect: Retry on pmd_trans_unstable()
  mm/migrate: Unify and retry an unstable pmd when hit
  mm: Warn for unstable pmd in move_page_tables()
  mm: Make most walk page paths with pmd_trans_unstable() to retry

 fs/proc/task_mmu.c  | 17 +++++++++++++----
 mm/madvise.c        |  8 ++++++--
 mm/memcontrol.c     |  8 ++++++--
 mm/memory-failure.c |  4 +++-
 mm/mempolicy.c      |  4 +++-
 mm/migrate_device.c |  9 ++++-----
 mm/mprotect.c       | 20 +++++++++++---------
 mm/mremap.c         |  4 ++--
 8 files changed, 48 insertions(+), 26 deletions(-)
  

Comments

Peter Xu June 7, 2023, 1:49 p.m. UTC | #1
On Fri, Jun 02, 2023 at 07:05:48PM -0400, Peter Xu wrote:
> Please have a look, thanks.

Hello, all,

This one seems to have more or less conflict with Hugh's rework on pmd
collapse.  Please hold off review or merging until I prepare another one
(probably based on Hugh's, after I have a closer read).

Sorry for the noise.
  
David Hildenbrand June 7, 2023, 3:45 p.m. UTC | #2
On 07.06.23 15:49, Peter Xu wrote:
> On Fri, Jun 02, 2023 at 07:05:48PM -0400, Peter Xu wrote:
>> Please have a look, thanks.
> 
> Hello, all,
> 
> This one seems to have more or less conflict with Hugh's rework on pmd
> collapse.  Please hold off review or merging until I prepare another one
> (probably based on Hugh's, after I have a closer read).
> 
> Sorry for the noise.
> 

[did not have time to look yet]

Are there any fixes buried in there that we'd like to have in earlier? I 
skimmed over the patches and all read like "cleanup" + "consistency", 
correct?
  
Peter Xu June 7, 2023, 4:21 p.m. UTC | #3
On Wed, Jun 07, 2023 at 05:45:28PM +0200, David Hildenbrand wrote:
> On 07.06.23 15:49, Peter Xu wrote:
> > On Fri, Jun 02, 2023 at 07:05:48PM -0400, Peter Xu wrote:
> > > Please have a look, thanks.
> > 
> > Hello, all,
> > 
> > This one seems to have more or less conflict with Hugh's rework on pmd
> > collapse.  Please hold off review or merging until I prepare another one
> > (probably based on Hugh's, after I have a closer read).
> > 
> > Sorry for the noise.
> > 
> 
> [did not have time to look yet]
> 
> Are there any fixes buried in there that we'd like to have in earlier? I
> skimmed over the patches and all read like "cleanup" + "consistency",
> correct?

There are bug fixes for the cases where we unluckily hit an unstable pmd, I
think; these ones are worth mentioning:

  - pagemap can be broken, causing subsequent reads to be shifted over to
    the next slots (wrong data read)

  - wrong memcg accounting, e.g., when moving a task from memcg1 to memcg2
    we can skip an unstable pmd while it could quickly contain something
    that still belongs to memcg1, I think.  This one needs some eyes from
    memcg developers.

I am not rushing to get them in because these are all theoretical: I saw no
bug report, wrote no reproducer, and only spotted them by reading the code.

The pagemap issue, at least, should have been there for 10+ years without
being noticed, even if it is rightfully spotted this time.  Meanwhile this
seems to conflict with Hugh's series, which was posted earlier - I still
need to check how that will affect this series, but have not yet.

That said, let me know if any of you hit any (potential) issue with the
above, or think that we should move this in earlier.

Thanks,
  
Yang Shi June 7, 2023, 4:39 p.m. UTC | #4
On Wed, Jun 7, 2023 at 9:21 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Jun 07, 2023 at 05:45:28PM +0200, David Hildenbrand wrote:
> > On 07.06.23 15:49, Peter Xu wrote:
> > > On Fri, Jun 02, 2023 at 07:05:48PM -0400, Peter Xu wrote:
> > > > Please have a look, thanks.
> > >
> > > Hello, all,
> > >
> > > This one seems to have more or less conflict with Hugh's rework on pmd
> > > collapse.  Please hold off review or merging until I prepare another one
> > > (probably based on Hugh's, after I have a closer read).
> > >
> > > Sorry for the noise.
> > >
> >
> > [did not have time to look yet]
> >
> > Are there any fixes buried in there that we'd like to have in earlier? I
> > skimmed over the patches and all read like "cleanup" + "consistency",
> > correct?
>
> There are bug fixes for the cases where we unluckily hit an unstable pmd, I
> think; these ones are worth mentioning:
>
>   - pagemap can be broken, causing subsequent reads to be shifted over to
>     the next slots (wrong data read)

Yes, it may corrupt the pagemap data. But anyway it seems like nobody
was busted by this one as you said.

>
>   - wrong memcg accounting, e.g., when moving a task from memcg1 to memcg2
>     we can skip an unstable pmd while it could quickly contain something
>     that still belongs to memcg1, I think.  This one needs some eyes from
>     memcg developers.

I don't think this is an important thing. There are plenty of other
conditions that could make the accounting inaccurate, for example,
isolating page from LRU fails, force charge, etc. And it seems like
nobody was bothered by this either.

>
> I am not rushing to get them in because these are all theoretical: I saw no
> bug report, wrote no reproducer, and only spotted them by reading the code.
>
> The pagemap issue, at least, should have been there for 10+ years without
> being noticed, even if it is rightfully spotted this time.  Meanwhile this
> seems to conflict with Hugh's series, which was posted earlier - I still
> need to check how that will affect this series, but have not yet.
>
> That said, let me know if any of you hit any (potential) issue with the
> above, or think that we should move this in earlier.
>
> Thanks,
>
> --
> Peter Xu
>
>
  
Peter Xu June 7, 2023, 6:22 p.m. UTC | #5
On Wed, Jun 07, 2023 at 09:39:44AM -0700, Yang Shi wrote:
> I don't think this is an important thing. There are plenty of other
> conditions that could make the accounting inaccurate, for example,
> isolating page from LRU fails, force charge, etc. And it seems like
> nobody was bothered by this either.

Yes, I read that a bit more and I agree.  So let me summarize, after having
just read Hugh's series.

With the prerequisite of the new __pte_offset_map() that Hugh proposed
here:

[PATCH 04/31] mm/pgtable: allow pte_offset_map[_lock]() to fail
https://lore.kernel.org/r/8218ffdc-8be-54e5-0a8-83f5542af283@google.com

We should not need pmd_trans_unstable() anymore, as Hugh pointed out, and I
fully agree.  I think Hugh has covered all the issues that this series
wanted to address, namely (a sketch of the resulting pattern follows the
list):

  Patch 1 (mprotect) is covered in:

  [PATCH 18/31] mm/mprotect: delete pmd_none_or_clear_bad_unless_trans_huge()
  https://lore.kernel.org/r/4a834932-9064-9ed7-3cd1-99466f549486@google.com

  There is no way to move the spinlock outside, so a single -EAGAIN retry
  is needed, which makes sense.

  Patch 2 (migrate_device) is covered in:

  [PATCH 24/31] mm/migrate_device: allow pte_offset_map_lock() to fail
  https://lore.kernel.org/r/ea51bb69-189c-229b-fc0-9d3e7be5d6b@google.com

  By a direct retry, with more code unified, so even better.

  Patch 3 (mremap) is covered in:

  [PATCH 19/31] mm/mremap: retry if either pte_offset_map_*lock() fails
  https://lore.kernel.org/r/2d3fbfea-5884-8211-0cc-954afe25ae9c@google.com

  Instead of adding a WARN_ON_ONCE(), the check got dropped, which looks
  all good too.

  Most of patch 4's changes are done in:

  [PATCH 09/31] mm/pagewalkers: ACTION_AGAIN if pte_offset_map_lock() fails
  https://lore.kernel.org/r/6265ac58-6018-a8c6-cf38-69cba698471@google.com

  There is some different handling of the memcg changes, which Hugh's
  series puts separately here:

  [PATCH 17/31] mm/various: give up if pte_offset_map[_lock]() fails
  https://lore.kernel.org/r/c299eba-4e17-c645-1115-ccd1fd9956bd@google.com

  After double checking, I agree with Yang's comment and Hugh's approach
  that no retry is needed for memcg.
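
To make the resulting pattern concrete, here is a rough sketch in my own
words (not verbatim from Hugh's patches; some_pte_range() is a placeholder
name): the walker no longer consults pmd_trans_unstable() at all, it just
notices that pte_offset_map_lock() failed and asks the walk core to retry:

/* Rough sketch of the post-series pattern, not Hugh's actual code */
static int some_pte_range(pmd_t *pmd, unsigned long addr,
			  unsigned long end, struct mm_walk *walk)
{
	pte_t *orig_pte, *pte;
	spinlock_t *ptl;

	orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (!pte) {
		/* The pmd changed under us (collapse/split/none): retry */
		walk->action = ACTION_AGAIN;
		return 0;
	}

	for (; addr < end; pte++, addr += PAGE_SIZE) {
		/* ... examine ptep_get(pte) for this address ... */
	}
	pte_unmap_unlock(orig_pte, ptl);
	return 0;
}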

Let's ignore this series; it is not needed anymore.

Thanks,