[RFC,v2,2/4] madvise: Use notify-able API to clear and flush page table entries

Message ID 20230721094043.2506691-3-fengwei.yin@intel.com
State New
Series fix large folio for madvise_cold_or_pageout()

Commit Message

Yin Fengwei July 21, 2023, 9:40 a.m. UTC
  Currently, in function madvise_cold_or_pageout_pte_range(), the
young bit of the pte/pmd is cleared without notifying the mmu notifier
subscriber.

Use the notify-able API to make sure the subscriber is signaled about
the young bit clearing.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/madvise.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)
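
For reference, the notify-able helpers used by this patch are thin wrappers
that combine the architecture's young-bit clearing with an mmu notifier
callback for secondary MMUs. Below is a rough paraphrase of
ptep_clear_flush_young_notify() from include/linux/mmu_notifier.h (exact
details vary between kernel versions); pmdp_clear_flush_young_notify() is
analogous, covering PMD_SIZE:

#define ptep_clear_flush_young_notify(__vma, __address, __ptep)          \
({                                                                        \
        int __young;                                                      \
        struct vm_area_struct *___vma = __vma;                            \
        unsigned long ___address = __address;                             \
        /* arch hook: clears the A-bit; may or may not flush the TLB */   \
        __young = ptep_clear_flush_young(___vma, ___address, __ptep);     \
        /* ask secondary MMUs (e.g. KVM) to clear/flush their copies */   \
        __young |= mmu_notifier_clear_flush_young(___vma->vm_mm,          \
                                                  ___address,             \
                                                  ___address + PAGE_SIZE);\
        __young;                                                          \
})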
  

Comments

Yu Zhao July 25, 2023, 5:55 a.m. UTC | #1
On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>
> Currently, in function madvise_cold_or_pageout_pte_range(), the
> young bit of pte/pmd is cleared notify subscripter.
>
> Using notify-able API to make sure the subscripter is signaled about
> the young bit clearing.
>
> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
> ---
>  mm/madvise.c | 18 ++----------------
>  1 file changed, 2 insertions(+), 16 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index f12933ebcc24..b236e201a738 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>                         return 0;
>                 }
>
> -               if (pmd_young(orig_pmd)) {
> -                       pmdp_invalidate(vma, addr, pmd);
> -                       orig_pmd = pmd_mkold(orig_pmd);
> -
> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> -               }
> -
> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
>                 folio_clear_referenced(folio);
>                 folio_test_clear_young(folio);
>                 if (folio_test_active(folio))
> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>
>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>
> -               if (pte_young(ptent)) {
> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
> -                                                       tlb->fullmm);
> -                       ptent = pte_mkold(ptent);
> -                       set_pte_at(mm, addr, pte, ptent);
> -                       tlb_remove_tlb_entry(tlb, pte, addr);
> -               }
> -
> +               ptep_clear_flush_young_notify(vma, addr, pte);

These two places are tricky.

I agree there is a problem here, i.e., we are not consulting the mmu
notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
known problem to me for a while (not a high priority one).

tlb_remove_tlb_entry() is a batched flush; ptep_clear_flush_young() is
not. But on x86, we might see a performance improvement since
ptep_clear_flush_young() doesn't flush the TLB at all. On ARM, there might
be regressions, though.

I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
prefers flush. So I'll let him chime in.

If we do end up with ptep_clear_young_notify(), please remove
mmu_gather -- it should have been done in this patch.
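
To illustrate the x86 point above: x86 overrides ptep_clear_flush_young() so
that it only clears the accessed bit and skips the TLB flush entirely. This
is a rough paraphrase of arch/x86/mm/pgtable.c (details may differ between
kernel versions):

int ptep_clear_flush_young(struct vm_area_struct *vma,
                           unsigned long address, pte_t *ptep)
{
        /*
         * Clearing the accessed bit without flushing the TLB is safe
         * here: at worst a stale TLB entry keeps reporting the page as
         * young a little longer, which only affects reclaim heuristics,
         * never correctness.
         */
        return ptep_test_and_clear_young(vma, address, ptep);
}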
  
Yin Fengwei July 26, 2023, 2:49 a.m. UTC | #2
On 7/25/23 13:55, Yu Zhao wrote:
> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>
>> Currently, in function madvise_cold_or_pageout_pte_range(), the
>> young bit of pte/pmd is cleared notify subscripter.
>>
>> Using notify-able API to make sure the subscripter is signaled about
>> the young bit clearing.
>>
>> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
>> ---
>>  mm/madvise.c | 18 ++----------------
>>  1 file changed, 2 insertions(+), 16 deletions(-)
>>
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index f12933ebcc24..b236e201a738 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>                         return 0;
>>                 }
>>
>> -               if (pmd_young(orig_pmd)) {
>> -                       pmdp_invalidate(vma, addr, pmd);
>> -                       orig_pmd = pmd_mkold(orig_pmd);
>> -
>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
>> -               }
>> -
>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
>>                 folio_clear_referenced(folio);
>>                 folio_test_clear_young(folio);
>>                 if (folio_test_active(folio))
>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>
>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>
>> -               if (pte_young(ptent)) {
>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
>> -                                                       tlb->fullmm);
>> -                       ptent = pte_mkold(ptent);
>> -                       set_pte_at(mm, addr, pte, ptent);
>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
>> -               }
>> -
>> +               ptep_clear_flush_young_notify(vma, addr, pte);
> 
> These two places are tricky.
> 
> I agree there is a problem here, i.e., we are not consulting the mmu
> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
> known problem to me for a while (not a high priority one).
> 
> tlb_remove_tlb_entry() is batched flush, ptep_clear_flush_young() is
> not. But, on x86, we might see a performance improvement since
> ptep_clear_flush_young() doesn't flush TLB at all. On ARM, there might
> be regressions though.
> 
> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
> prefers flush. So I'll let him chime in.
I am OK with either way, even though the no-flush way here is more efficient
for arm64. Let's wait for Minchan's comment.

> 
> If we do end up with ptep_clear_young_notify(), please remove
> mmu_gather -- it should have been done in this patch.

I suppose "remove mmu_gather" means to trigger flush tlb operation in
batched way to make sure no stale data in TLB for long time on arm64
platform.


Regards
Yin, Fengwei
  
Yu Zhao July 26, 2023, 3:26 a.m. UTC | #3
On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
>
>
> On 7/25/23 13:55, Yu Zhao wrote:
> > On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
> >>
> >> Currently, in function madvise_cold_or_pageout_pte_range(), the
> >> young bit of pte/pmd is cleared notify subscripter.
> >>
> >> Using notify-able API to make sure the subscripter is signaled about
> >> the young bit clearing.
> >>
> >> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
> >> ---
> >>  mm/madvise.c | 18 ++----------------
> >>  1 file changed, 2 insertions(+), 16 deletions(-)
> >>
> >> diff --git a/mm/madvise.c b/mm/madvise.c
> >> index f12933ebcc24..b236e201a738 100644
> >> --- a/mm/madvise.c
> >> +++ b/mm/madvise.c
> >> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>                         return 0;
> >>                 }
> >>
> >> -               if (pmd_young(orig_pmd)) {
> >> -                       pmdp_invalidate(vma, addr, pmd);
> >> -                       orig_pmd = pmd_mkold(orig_pmd);
> >> -
> >> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
> >> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> >> -               }
> >> -
> >> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
> >>                 folio_clear_referenced(folio);
> >>                 folio_test_clear_young(folio);
> >>                 if (folio_test_active(folio))
> >> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>
> >>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>
> >> -               if (pte_young(ptent)) {
> >> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
> >> -                                                       tlb->fullmm);
> >> -                       ptent = pte_mkold(ptent);
> >> -                       set_pte_at(mm, addr, pte, ptent);
> >> -                       tlb_remove_tlb_entry(tlb, pte, addr);
> >> -               }
> >> -
> >> +               ptep_clear_flush_young_notify(vma, addr, pte);
> >
> > These two places are tricky.
> >
> > I agree there is a problem here, i.e., we are not consulting the mmu
> > notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
> > known problem to me for a while (not a high priority one).
> >
> > tlb_remove_tlb_entry() is batched flush, ptep_clear_flush_young() is
> > not. But, on x86, we might see a performance improvement since
> > ptep_clear_flush_young() doesn't flush TLB at all. On ARM, there might
> > be regressions though.
> >
> > I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
> > prefers flush. So I'll let him chime in.
> I am OK with either way even no flush way here is more efficient for
> arm64. Let's wait for Minchan's comment.

Yes, and I don't think there would be any "negative" consequences
without tlb flushes when clearing the A-bit.

> > If we do end up with ptep_clear_young_notify(), please remove
> > mmu_gather -- it should have been done in this patch.
>
> I suppose "remove mmu_gather" means to trigger flush tlb operation in
> batched way to make sure no stale data in TLB for long time on arm64
> platform.

In madvise_cold_or_pageout_pte_range(), we only need struct
mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
tlb after clearing the A-bit. There is no correctness issue, e.g., potential
data corruption, involved there.
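
For context, the mmu_gather in question is set up by the callers around the
page-table walk. A simplified sketch of madvise_cold() from mm/madvise.c
(flag checks elided; details may differ between kernel versions);
madvise_pageout() is structured the same way:

static long madvise_cold(struct vm_area_struct *vma,
                         struct vm_area_struct **prev,
                         unsigned long start_addr, unsigned long end_addr)
{
        struct mm_struct *mm = vma->vm_mm;
        struct mmu_gather tlb;

        /* ... VMA flag checks elided ... */

        lru_add_drain();
        tlb_gather_mmu(&tlb, mm);       /* start batching TLB operations */
        madvise_cold_page_range(&tlb, vma, start_addr, end_addr);
        tlb_finish_mmu(&tlb);           /* flush whatever was batched */

        return 0;
}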
  
Yin Fengwei July 26, 2023, 4:44 a.m. UTC | #4
On 7/26/23 11:26, Yu Zhao wrote:
> On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>
>>
>> On 7/25/23 13:55, Yu Zhao wrote:
>>> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>>>
>>>> Currently, in function madvise_cold_or_pageout_pte_range(), the
>>>> young bit of pte/pmd is cleared notify subscripter.
>>>>
>>>> Using notify-able API to make sure the subscripter is signaled about
>>>> the young bit clearing.
>>>>
>>>> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
>>>> ---
>>>>  mm/madvise.c | 18 ++----------------
>>>>  1 file changed, 2 insertions(+), 16 deletions(-)
>>>>
>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>> index f12933ebcc24..b236e201a738 100644
>>>> --- a/mm/madvise.c
>>>> +++ b/mm/madvise.c
>>>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>                         return 0;
>>>>                 }
>>>>
>>>> -               if (pmd_young(orig_pmd)) {
>>>> -                       pmdp_invalidate(vma, addr, pmd);
>>>> -                       orig_pmd = pmd_mkold(orig_pmd);
>>>> -
>>>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
>>>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
>>>> -               }
>>>> -
>>>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
>>>>                 folio_clear_referenced(folio);
>>>>                 folio_test_clear_young(folio);
>>>>                 if (folio_test_active(folio))
>>>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>
>>>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>
>>>> -               if (pte_young(ptent)) {
>>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
>>>> -                                                       tlb->fullmm);
>>>> -                       ptent = pte_mkold(ptent);
>>>> -                       set_pte_at(mm, addr, pte, ptent);
>>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
>>>> -               }
>>>> -
>>>> +               ptep_clear_flush_young_notify(vma, addr, pte);
>>>
>>> These two places are tricky.
>>>
>>> I agree there is a problem here, i.e., we are not consulting the mmu
>>> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
>>> known problem to me for a while (not a high priority one).
>>>
>>> tlb_remove_tlb_entry() is batched flush, ptep_clear_flush_young() is
>>> not. But, on x86, we might see a performance improvement since
>>> ptep_clear_flush_young() doesn't flush TLB at all. On ARM, there might
>>> be regressions though.
>>>
>>> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
>>> prefers flush. So I'll let him chime in.
>> I am OK with either way even no flush way here is more efficient for
>> arm64. Let's wait for Minchan's comment.
> 
> Yes, and I don't think there would be any "negative" consequences
> without tlb flushes when clearing the A-bit.
> 
>>> If we do end up with ptep_clear_young_notify(), please remove
>>> mmu_gather -- it should have been done in this patch.
>>
>> I suppose "remove mmu_gather" means to trigger flush tlb operation in
>> batched way to make sure no stale data in TLB for long time on arm64
>> platform.
> 
> In madvise_cold_or_pageout_pte_range(), we only need struct
> mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
> tlb after clearing the A-bit. There is no correction, e.g., potential
> data corruption, involved there.

From https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
the reason arm64 didn't drop the TLB flush entirely in ptep_clear_flush_young()
is to avoid stale entries lingering in the TLB. I suppose there is no
correctness issue there either.

So why is it fine to keep stale entries in the TLB in
madvise_cold_or_pageout_pte_range()?
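
For reference, what arm64 kept in ptep_clear_flush_young() after that
discussion is roughly the following (paraphrased from
arch/arm64/include/asm/pgtable.h; details may differ between kernel
versions). The invalidation is still issued, but the DSB that waits for its
completion is elided, so it is only guaranteed to have finished by the next
context switch:

static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
                                         unsigned long address, pte_t *ptep)
{
        int young = ptep_test_and_clear_young(vma, address, ptep);

        if (young) {
                /*
                 * Issue the TLB invalidation but skip the trailing DSB:
                 * worst case, a CPU briefly keeps using the stale "young"
                 * entry. The window is bounded by the next context switch,
                 * which provides the DSB that completes the invalidation.
                 */
                flush_tlb_page_nosync(vma, address);
        }

        return young;
}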


Regards
Yin, Fengwei
  
Yu Zhao July 26, 2023, 5:40 a.m. UTC | #5
On Tue, Jul 25, 2023 at 10:44 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
>
>
>
> On 7/26/23 11:26, Yu Zhao wrote:
> > On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
> >>
> >>
> >> On 7/25/23 13:55, Yu Zhao wrote:
> >>> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
> >>>>
> >>>> Currently, in function madvise_cold_or_pageout_pte_range(), the
> >>>> young bit of pte/pmd is cleared notify subscripter.
> >>>>
> >>>> Using notify-able API to make sure the subscripter is signaled about
> >>>> the young bit clearing.
> >>>>
> >>>> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
> >>>> ---
> >>>>  mm/madvise.c | 18 ++----------------
> >>>>  1 file changed, 2 insertions(+), 16 deletions(-)
> >>>>
> >>>> diff --git a/mm/madvise.c b/mm/madvise.c
> >>>> index f12933ebcc24..b236e201a738 100644
> >>>> --- a/mm/madvise.c
> >>>> +++ b/mm/madvise.c
> >>>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>>>                         return 0;
> >>>>                 }
> >>>>
> >>>> -               if (pmd_young(orig_pmd)) {
> >>>> -                       pmdp_invalidate(vma, addr, pmd);
> >>>> -                       orig_pmd = pmd_mkold(orig_pmd);
> >>>> -
> >>>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
> >>>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> >>>> -               }
> >>>> -
> >>>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
> >>>>                 folio_clear_referenced(folio);
> >>>>                 folio_test_clear_young(folio);
> >>>>                 if (folio_test_active(folio))
> >>>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>>>
> >>>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>>>
> >>>> -               if (pte_young(ptent)) {
> >>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
> >>>> -                                                       tlb->fullmm);
> >>>> -                       ptent = pte_mkold(ptent);
> >>>> -                       set_pte_at(mm, addr, pte, ptent);
> >>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
> >>>> -               }
> >>>> -
> >>>> +               ptep_clear_flush_young_notify(vma, addr, pte);
> >>>
> >>> These two places are tricky.
> >>>
> >>> I agree there is a problem here, i.e., we are not consulting the mmu
> >>> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
> >>> known problem to me for a while (not a high priority one).
> >>>
> >>> tlb_remove_tlb_entry() is batched flush, ptep_clear_flush_young() is
> >>> not. But, on x86, we might see a performance improvement since
> >>> ptep_clear_flush_young() doesn't flush TLB at all. On ARM, there might
> >>> be regressions though.
> >>>
> >>> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
> >>> prefers flush. So I'll let him chime in.
> >> I am OK with either way even no flush way here is more efficient for
> >> arm64. Let's wait for Minchan's comment.
> >
> > Yes, and I don't think there would be any "negative" consequences
> > without tlb flushes when clearing the A-bit.
> >
> >>> If we do end up with ptep_clear_young_notify(), please remove
> >>> mmu_gather -- it should have been done in this patch.
> >>
> >> I suppose "remove mmu_gather" means to trigger flush tlb operation in
> >> batched way to make sure no stale data in TLB for long time on arm64
> >> platform.
> >
> > In madvise_cold_or_pageout_pte_range(), we only need struct
> > mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
> > tlb after clearing the A-bit. There is no correction, e.g., potential
> > data corruption, involved there.
>
> From https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
> the reason that arm64 didn't drop whole flush tlb in ptep_clear_flush_young()
> is to prevent the stale data in TLB. I suppose there is no correction issue
> there also.
>
> So why keep stale data in TLB in madvise_cold_or_pageout_pte_range() is fine?

Sorry, I'm not sure I understand your question here.

In this patch, you removed tlb_remove_tlb_entry(), so we don't need
struct mmu_gather *tlb any more.

If you are asking why I prefer ptep_clear_young_notify() (no flush),
which also doesn't need tlb_remove_tlb_entry(), then the answer is
that the TLB size doesn't scale like DRAM does: the gap has been
growing exponentially. So there is no way the TLB can hold stale entries
long enough to cause a measurable effect on the A-bit. This isn't a
conjecture -- it's been shown by the converse: we have encountered bugs
(almost every year) caused by missing TLB flushes that resulted in data
corruption. They were never easy to reproduce, meaning stale entries
never stayed in the TLB for long.
  
Yin Fengwei July 26, 2023, 6:21 a.m. UTC | #6
On 7/26/23 13:40, Yu Zhao wrote:
> On Tue, Jul 25, 2023 at 10:44 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>
>>
>>
>> On 7/26/23 11:26, Yu Zhao wrote:
>>> On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>>>
>>>>
>>>> On 7/25/23 13:55, Yu Zhao wrote:
>>>>> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>>>>>
>>>>>> Currently, in function madvise_cold_or_pageout_pte_range(), the
>>>>>> young bit of pte/pmd is cleared notify subscripter.
>>>>>>
>>>>>> Using notify-able API to make sure the subscripter is signaled about
>>>>>> the young bit clearing.
>>>>>>
>>>>>> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
>>>>>> ---
>>>>>>  mm/madvise.c | 18 ++----------------
>>>>>>  1 file changed, 2 insertions(+), 16 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>>>> index f12933ebcc24..b236e201a738 100644
>>>>>> --- a/mm/madvise.c
>>>>>> +++ b/mm/madvise.c
>>>>>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>                         return 0;
>>>>>>                 }
>>>>>>
>>>>>> -               if (pmd_young(orig_pmd)) {
>>>>>> -                       pmdp_invalidate(vma, addr, pmd);
>>>>>> -                       orig_pmd = pmd_mkold(orig_pmd);
>>>>>> -
>>>>>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
>>>>>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
>>>>>> -               }
>>>>>> -
>>>>>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
>>>>>>                 folio_clear_referenced(folio);
>>>>>>                 folio_test_clear_young(folio);
>>>>>>                 if (folio_test_active(folio))
>>>>>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>
>>>>>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>>
>>>>>> -               if (pte_young(ptent)) {
>>>>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
>>>>>> -                                                       tlb->fullmm);
>>>>>> -                       ptent = pte_mkold(ptent);
>>>>>> -                       set_pte_at(mm, addr, pte, ptent);
>>>>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>> -               }
>>>>>> -
>>>>>> +               ptep_clear_flush_young_notify(vma, addr, pte);
>>>>>
>>>>> These two places are tricky.
>>>>>
>>>>> I agree there is a problem here, i.e., we are not consulting the mmu
>>>>> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
>>>>> known problem to me for a while (not a high priority one).
>>>>>
>>>>> tlb_remove_tlb_entry() is batched flush, ptep_clear_flush_young() is
>>>>> not. But, on x86, we might see a performance improvement since
>>>>> ptep_clear_flush_young() doesn't flush TLB at all. On ARM, there might
>>>>> be regressions though.
>>>>>
>>>>> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
>>>>> prefers flush. So I'll let him chime in.
>>>> I am OK with either way even no flush way here is more efficient for
>>>> arm64. Let's wait for Minchan's comment.
>>>
>>> Yes, and I don't think there would be any "negative" consequences
>>> without tlb flushes when clearing the A-bit.
>>>
>>>>> If we do end up with ptep_clear_young_notify(), please remove
>>>>> mmu_gather -- it should have been done in this patch.
>>>>
>>>> I suppose "remove mmu_gather" means to trigger flush tlb operation in
>>>> batched way to make sure no stale data in TLB for long time on arm64
>>>> platform.
>>>
>>> In madvise_cold_or_pageout_pte_range(), we only need struct
>>> mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
>>> tlb after clearing the A-bit. There is no correction, e.g., potential
>>> data corruption, involved there.
>>
>> From https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
>> the reason that arm64 didn't drop whole flush tlb in ptep_clear_flush_young()
>> is to prevent the stale data in TLB. I suppose there is no correction issue
>> there also.
>>
>> So why keep stale data in TLB in madvise_cold_or_pageout_pte_range() is fine?
> 
> Sorry, I'm not sure I understand your question here.
Oh. Sorry for the confusion. I will explain my understanding and
question in detail.

> 
> In this patch, you removed tlb_remove_tlb_entry(), so we don't need
> struct mmu_gather *tlb any more.
Yes. You are right.

> 
> If you are asking why I prefer ptep_clear_young_notify() (no flush),
> which also doesn't need tlb_remove_tlb_entry(), then the answer is
> that the TLB size doesn't scale like DRAM does: the gap has been
> growing exponentially. So there is no way TLB can hold stale entries
> long enough to cause a measurable effect on the A-bit. This isn't a
> conjecture -- it's been proven conversely: we encountered bugs (almost
> every year) caused by missing TLB flushes and resulting in data
> corruption. They were never easy to reproduce, meaning stale entries
> never stayed long in TLB.

When I read https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
my understanding is that arm64 still keeps some flushing in
ptep_clear_flush_young(). The idea is that the TLB flush is completed by the
next context switch at the latest, making sure no stale entries survive in
the TLB across that context switch.

Should we still keep the same behavior (no stale entries in the TLB across
the next context switch) for madvise_cold_or_pageout_pte_range()?

So two versions work for me (assuming we should keep the same behavior):
  1. use the xxx_flush_xxx() version, but with a possible regression on arm64.
  2. use the non-flush version, but do a batched TLB flush (see the sketch
     below).
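
A rough, untested sketch of option 2 inside
madvise_cold_or_pageout_pte_range(): clear the A-bit without an immediate
flush, still notify secondary MMUs, and let the existing mmu_gather batch
the flush:

                /* option 2 (sketch): no immediate flush, batched via mmu_gather */
                if (ptep_clear_young_notify(vma, addr, pte))
                        /* queue this address; tlb_finish_mmu() flushes it later */
                        tlb_remove_tlb_entry(tlb, pte, addr);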

Hope this could make things clear. Thanks.

Regards
Yin, Fengwei
  
Yu Zhao July 27, 2023, 3:28 a.m. UTC | #7
On Wed, Jul 26, 2023 at 12:21 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>
>
> On 7/26/23 13:40, Yu Zhao wrote:
> > On Tue, Jul 25, 2023 at 10:44 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
> >>
> >>
> >> On 7/26/23 11:26, Yu Zhao wrote:
> >>> On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
> >>>>
> >>>>
> >>>> On 7/25/23 13:55, Yu Zhao wrote:
> >>>>> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
> >>>>>>
> >>>>>> Currently, in function madvise_cold_or_pageout_pte_range(), the
> >>>>>> young bit of pte/pmd is cleared notify subscripter.
> >>>>>>
> >>>>>> Using notify-able API to make sure the subscripter is signaled about
> >>>>>> the young bit clearing.
> >>>>>>
> >>>>>> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
> >>>>>> ---
> >>>>>>  mm/madvise.c | 18 ++----------------
> >>>>>>  1 file changed, 2 insertions(+), 16 deletions(-)
> >>>>>>
> >>>>>> diff --git a/mm/madvise.c b/mm/madvise.c
> >>>>>> index f12933ebcc24..b236e201a738 100644
> >>>>>> --- a/mm/madvise.c
> >>>>>> +++ b/mm/madvise.c
> >>>>>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>>>>>                         return 0;
> >>>>>>                 }
> >>>>>>
> >>>>>> -               if (pmd_young(orig_pmd)) {
> >>>>>> -                       pmdp_invalidate(vma, addr, pmd);
> >>>>>> -                       orig_pmd = pmd_mkold(orig_pmd);
> >>>>>> -
> >>>>>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
> >>>>>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> >>>>>> -               }
> >>>>>> -
> >>>>>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
> >>>>>>                 folio_clear_referenced(folio);
> >>>>>>                 folio_test_clear_young(folio);
> >>>>>>                 if (folio_test_active(folio))
> >>>>>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>>>>>
> >>>>>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>>>>>
> >>>>>> -               if (pte_young(ptent)) {
> >>>>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
> >>>>>> -                                                       tlb->fullmm);
> >>>>>> -                       ptent = pte_mkold(ptent);
> >>>>>> -                       set_pte_at(mm, addr, pte, ptent);
> >>>>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
> >>>>>> -               }
> >>>>>> -
> >>>>>> +               ptep_clear_flush_young_notify(vma, addr, pte);
> >>>>>
> >>>>> These two places are tricky.
> >>>>>
> >>>>> I agree there is a problem here, i.e., we are not consulting the mmu
> >>>>> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
> >>>>> known problem to me for a while (not a high priority one).
> >>>>>
> >>>>> tlb_remove_tlb_entry() is batched flush, ptep_clear_flush_young() is
> >>>>> not. But, on x86, we might see a performance improvement since
> >>>>> ptep_clear_flush_young() doesn't flush TLB at all. On ARM, there might
> >>>>> be regressions though.
> >>>>>
> >>>>> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
> >>>>> prefers flush. So I'll let him chime in.
> >>>> I am OK with either way even no flush way here is more efficient for
> >>>> arm64. Let's wait for Minchan's comment.
> >>>
> >>> Yes, and I don't think there would be any "negative" consequences
> >>> without tlb flushes when clearing the A-bit.
> >>>
> >>>>> If we do end up with ptep_clear_young_notify(), please remove
> >>>>> mmu_gather -- it should have been done in this patch.
> >>>>
> >>>> I suppose "remove mmu_gather" means to trigger flush tlb operation in
> >>>> batched way to make sure no stale data in TLB for long time on arm64
> >>>> platform.
> >>>
> >>> In madvise_cold_or_pageout_pte_range(), we only need struct
> >>> mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
> >>> tlb after clearing the A-bit. There is no correction, e.g., potential
> >>> data corruption, involved there.
> >>
> >> From https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
> >> the reason that arm64 didn't drop whole flush tlb in ptep_clear_flush_young()
> >> is to prevent the stale data in TLB. I suppose there is no correction issue
> >> there also.
> >>
> >> So why keep stale data in TLB in madvise_cold_or_pageout_pte_range() is fine?
> >
> > Sorry, I'm not sure I understand your question here.
> Oh. Sorry for the confusion. I will explain my understanding and
> question in detail.
>
> >
> > In this patch, you removed tlb_remove_tlb_entry(), so we don't need
> > struct mmu_gather *tlb any more.
> Yes. You are right.
>
> >
> > If you are asking why I prefer ptep_clear_young_notify() (no flush),
> > which also doesn't need tlb_remove_tlb_entry(), then the answer is
> > that the TLB size doesn't scale like DRAM does: the gap has been
> > growing exponentially. So there is no way TLB can hold stale entries
> > long enough to cause a measurable effect on the A-bit. This isn't a
> > conjecture -- it's been proven conversely: we encountered bugs (almost
> > every year) caused by missing TLB flushes and resulting in data
> > corruption. They were never easy to reproduce, meaning stale entries
> > never stayed long in TLB.
>
> when I read https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
>
> my understanding is that arm64 still keep something in ptep_clear_flush_young.
> The reason is finishing TLB flush by next context-switch to make sure no
> stale entries in TLB cross next context switch.
>
> Should we still keep same behavior (no stable entries in TLB cross next
> context switch) for madvise_cold_or_pageout_pte_range()?
>
> So two versions work (I assume we should keep same behavior) for me:
>   1. using xxx_flush_xxx() version. but with possible regression on arm64.
>   2. using none flush version. But do batched TLB flush.

I see. I don't think we need to flush at all, i.e., the no-flush
version *without* a batched TLB flush. So far nobody can actually prove
that flushing the TLB while clearing the A-bit is beneficial, not even in
theory :)
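
For illustration, the no-flush variant suggested here would reduce the two
hunks to something like the sketch below (untested; it also implies dropping
the mmu_gather plumbing from madvise_cold()/madvise_pageout()):

                /* PMD path: clear the A-bit and notify secondary MMUs, no TLB flush */
                pmdp_clear_young_notify(vma, addr, pmd);

                /* PTE path: same idea, one entry at a time */
                ptep_clear_young_notify(vma, addr, pte);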
  
Yin Fengwei July 28, 2023, 4:14 p.m. UTC | #8
On 7/27/2023 11:28 AM, Yu Zhao wrote:
> On Wed, Jul 26, 2023 at 12:21 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>
>>
>> On 7/26/23 13:40, Yu Zhao wrote:
>>> On Tue, Jul 25, 2023 at 10:44 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>>>
>>>>
>>>> On 7/26/23 11:26, Yu Zhao wrote:
>>>>> On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 7/25/23 13:55, Yu Zhao wrote:
>>>>>>> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>>>>>>>
>>>>>>>> Currently, in function madvise_cold_or_pageout_pte_range(), the
>>>>>>>> young bit of pte/pmd is cleared notify subscripter.
>>>>>>>>
>>>>>>>> Using notify-able API to make sure the subscripter is signaled about
>>>>>>>> the young bit clearing.
>>>>>>>>
>>>>>>>> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
>>>>>>>> ---
>>>>>>>>  mm/madvise.c | 18 ++----------------
>>>>>>>>  1 file changed, 2 insertions(+), 16 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>>>>>> index f12933ebcc24..b236e201a738 100644
>>>>>>>> --- a/mm/madvise.c
>>>>>>>> +++ b/mm/madvise.c
>>>>>>>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>>>                         return 0;
>>>>>>>>                 }
>>>>>>>>
>>>>>>>> -               if (pmd_young(orig_pmd)) {
>>>>>>>> -                       pmdp_invalidate(vma, addr, pmd);
>>>>>>>> -                       orig_pmd = pmd_mkold(orig_pmd);
>>>>>>>> -
>>>>>>>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
>>>>>>>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
>>>>>>>> -               }
>>>>>>>> -
>>>>>>>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
>>>>>>>>                 folio_clear_referenced(folio);
>>>>>>>>                 folio_test_clear_young(folio);
>>>>>>>>                 if (folio_test_active(folio))
>>>>>>>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>>>
>>>>>>>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>>>>
>>>>>>>> -               if (pte_young(ptent)) {
>>>>>>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
>>>>>>>> -                                                       tlb->fullmm);
>>>>>>>> -                       ptent = pte_mkold(ptent);
>>>>>>>> -                       set_pte_at(mm, addr, pte, ptent);
>>>>>>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>>>> -               }
>>>>>>>> -
>>>>>>>> +               ptep_clear_flush_young_notify(vma, addr, pte);
>>>>>>>
>>>>>>> These two places are tricky.
>>>>>>>
>>>>>>> I agree there is a problem here, i.e., we are not consulting the mmu
>>>>>>> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
>>>>>>> known problem to me for a while (not a high priority one).
>>>>>>>
>>>>>>> tlb_remove_tlb_entry() is batched flush, ptep_clear_flush_young() is
>>>>>>> not. But, on x86, we might see a performance improvement since
>>>>>>> ptep_clear_flush_young() doesn't flush TLB at all. On ARM, there might
>>>>>>> be regressions though.
>>>>>>>
>>>>>>> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
>>>>>>> prefers flush. So I'll let him chime in.
>>>>>> I am OK with either way even no flush way here is more efficient for
>>>>>> arm64. Let's wait for Minchan's comment.
>>>>>
>>>>> Yes, and I don't think there would be any "negative" consequences
>>>>> without tlb flushes when clearing the A-bit.
>>>>>
>>>>>>> If we do end up with ptep_clear_young_notify(), please remove
>>>>>>> mmu_gather -- it should have been done in this patch.
>>>>>>
>>>>>> I suppose "remove mmu_gather" means to trigger flush tlb operation in
>>>>>> batched way to make sure no stale data in TLB for long time on arm64
>>>>>> platform.
>>>>>
>>>>> In madvise_cold_or_pageout_pte_range(), we only need struct
>>>>> mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
>>>>> tlb after clearing the A-bit. There is no correction, e.g., potential
>>>>> data corruption, involved there.
>>>>
>>>> From https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
>>>> the reason that arm64 didn't drop whole flush tlb in ptep_clear_flush_young()
>>>> is to prevent the stale data in TLB. I suppose there is no correction issue
>>>> there also.
>>>>
>>>> So why keep stale data in TLB in madvise_cold_or_pageout_pte_range() is fine?
>>>
>>> Sorry, I'm not sure I understand your question here.
>> Oh. Sorry for the confusion. I will explain my understanding and
>> question in detail.
>>
>>>
>>> In this patch, you removed tlb_remove_tlb_entry(), so we don't need
>>> struct mmu_gather *tlb any more.
>> Yes. You are right.
>>
>>>
>>> If you are asking why I prefer ptep_clear_young_notify() (no flush),
>>> which also doesn't need tlb_remove_tlb_entry(), then the answer is
>>> that the TLB size doesn't scale like DRAM does: the gap has been
>>> growing exponentially. So there is no way TLB can hold stale entries
>>> long enough to cause a measurable effect on the A-bit. This isn't a
>>> conjecture -- it's been proven conversely: we encountered bugs (almost
>>> every year) caused by missing TLB flushes and resulting in data
>>> corruption. They were never easy to reproduce, meaning stale entries
>>> never stayed long in TLB.
>>
>> when I read https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
>>
>> my understanding is that arm64 still keep something in ptep_clear_flush_young.
>> The reason is finishing TLB flush by next context-switch to make sure no
>> stale entries in TLB cross next context switch.
>>
>> Should we still keep same behavior (no stable entries in TLB cross next
>> context switch) for madvise_cold_or_pageout_pte_range()?
>>
>> So two versions work (I assume we should keep same behavior) for me:
>>   1. using xxx_flush_xxx() version. but with possible regression on arm64.
>>   2. using none flush version. But do batched TLB flush.
> 
> I see. I don't think we need to flush at all, i.e., the no flush
> version *without* batched TLB flush. So far nobody can actually prove
> that flushing TLB while clearing the A-bit is beneficial, not even in
> theory :)

I will just send the fix for folio_mapcount() (with your reviewed-by), as
it's a bug fix and is better merged standalone.

The other three patches need more time for discussion.

Regards
Yin, Fengwei
  

Patch

diff --git a/mm/madvise.c b/mm/madvise.c
index f12933ebcc24..b236e201a738 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -403,14 +403,7 @@  static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			return 0;
 		}
 
-		if (pmd_young(orig_pmd)) {
-			pmdp_invalidate(vma, addr, pmd);
-			orig_pmd = pmd_mkold(orig_pmd);
-
-			set_pmd_at(mm, addr, pmd, orig_pmd);
-			tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
-		}
-
+		pmdp_clear_flush_young_notify(vma, addr, pmd);
 		folio_clear_referenced(folio);
 		folio_test_clear_young(folio);
 		if (folio_test_active(folio))
@@ -496,14 +489,7 @@  static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 
 		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
 
-		if (pte_young(ptent)) {
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-			ptent = pte_mkold(ptent);
-			set_pte_at(mm, addr, pte, ptent);
-			tlb_remove_tlb_entry(tlb, pte, addr);
-		}
-
+		ptep_clear_flush_young_notify(vma, addr, pte);
 		/*
 		 * We are deactivating a folio for accelerating reclaiming.
 		 * VM couldn't reclaim the folio unless we clear PG_young.