[v4,02/16] mm: Batch-copy PTE ranges during fork()

Message ID: 20231218105100.172635-3-ryan.roberts@arm.com
State: New
Series: Transparent Contiguous PTEs for User Mappings

Commit Message

Ryan Roberts Dec. 18, 2023, 10:50 a.m. UTC
Convert copy_pte_range() to copy a batch of ptes in one go. A given
batch is determined by the architecture with the new helper,
pte_batch_remaining(), and maps a physically contiguous block of memory,
all belonging to the same folio. A pte batch is then write-protected in
one go in the parent using the new helper, ptep_set_wrprotects(), and is
set in one go in the child using the new helper, set_ptes_full().

The primary motivation for this change is to reduce the number of tlb
maintenance operations that the arm64 backend has to perform during
fork, as it is about to add transparent support for the "contiguous bit"
in its ptes. By write-protecting the parent using the new
ptep_set_wrprotects() (note the 's' at the end) function, the backend
can avoid having to unfold contig ranges of PTEs, which is expensive,
when all ptes in the range are being write-protected. Similarly, by
using set_ptes_full() rather than set_pte_at() to set up ptes in the
child, the backend does not need to fold a contiguous range once they
are all populated - they can be initially populated as a contiguous
range in the first place.

This code is very performance sensitive, and a significant amount of
effort has been put into not regressing performance for the order-0
folio case. By default, pte_batch_remaining() is a compile-time constant 1,
which enables the compiler to simplify the extra loops that are added
for batching and produce code that is equivalent to, and just as performant
as, the previous implementation.

This change addresses the core-mm refactoring only and a separate change
will implement pte_batch_remaining(), ptep_set_wrprotects() and
set_ptes_full() in the arm64 backend to realize the performance
improvement as part of the work to enable contpte mappings.

To ensure arm64 remains performant once those helpers are implemented, this
change is careful to call ptep_get() only once per pte batch.
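
Illustratively, the inner loop ends up shaped roughly as below. This is a
simplified sketch only (not the actual patch): local names are hypothetical,
the argument order of ptep_set_wrprotects() and set_ptes_full() is assumed
from their names, all error/rss/rmap handling is omitted, and a COW mapping
is assumed so the whole batch is unconditionally write-protected:

        do {
                pte_t pte = ptep_get(src_pte);  /* one read per batch */
                int nr = pte_batch_remaining(pte, addr, end);

                /* Write-protect the whole batch in the parent in one go. */
                ptep_set_wrprotects(src_mm, addr, src_pte, nr);

                /* Install the batch in the child in one go. */
                pte = pte_mkold(pte_wrprotect(pte));
                set_ptes_full(dst_mm, addr, dst_pte, pte, nr);

                dst_pte += nr;
                src_pte += nr;
                addr += nr * PAGE_SIZE;
        } while (addr != end);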

The following microbenchmark results demonstrate that there is no
significant performance change after this patch. Fork is called in a
tight loop in a process with 1G of populated memory and the time for the
function to execute is measured. 100 iterations per run, 8 runs
performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests were
performed for the case where the 1G of memory is composed of order-0 folios
and for the case where it is composed of pte-mapped order-9 folios. Negative
is faster, positive is slower, compared to the baseline upon which the series
is based:

| Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
| fork          |-------------------|-------------------|
| microbench    |    mean |   stdev |    mean |   stdev |
|---------------|---------|---------|---------|---------|
| baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
| after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |

| Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
| fork          |-------------------|-------------------|
| microbench    |    mean |   stdev |    mean |   stdev |
|---------------|---------|---------|---------|---------|
| baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
| after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
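
For reference, the methodology above can be reproduced with a userspace
harness along the lines of the sketch below (the actual benchmark is not
included in this posting; the memory size and iteration count simply mirror
the description above):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define MEM_SIZE        (1UL << 30)     /* 1G of populated memory */
#define ITERS           100

int main(void)
{
        char *mem = mmap(NULL, MEM_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct timespec t0, t1;
        double total = 0;
        int i;

        if (mem == MAP_FAILED)
                return 1;
        memset(mem, 1, MEM_SIZE);       /* fault in every page */

        for (i = 0; i < ITERS; i++) {
                pid_t pid;

                clock_gettime(CLOCK_MONOTONIC, &t0);
                pid = fork();
                clock_gettime(CLOCK_MONOTONIC, &t1);

                if (pid == 0)
                        _exit(0);       /* child exits immediately */
                waitpid(pid, NULL, 0);

                total += (t1.tv_sec - t0.tv_sec) +
                         (t1.tv_nsec - t0.tv_nsec) / 1e9;
        }
        printf("mean fork() time: %f s\n", total / ITERS);
        return 0;
}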

Tested-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
 mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
 2 files changed, 139 insertions(+), 33 deletions(-)
  

Comments

David Hildenbrand Dec. 18, 2023, 5:47 p.m. UTC | #1
On 18.12.23 11:50, Ryan Roberts wrote:
> [...]
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index af7639c3b0a3..db93fb81465a 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>   #define arch_flush_lazy_mmu_mode()	do {} while (0)
>   #endif
>   
> +#ifndef pte_batch_remaining
> +/**
> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
> + * @pte: Page table entry for the first page.
> + * @addr: Address of the first page.
> + * @end: Batch ceiling (e.g. end of vma).
> + *
> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
> + * In such cases, this function returns the remaining number of pages to the end
> + * of the current batch, as defined by addr. This can be useful when iterating
> + * over ptes.
> + *
> + * May be overridden by the architecture, else batch size is always 1.
> + */
> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
> +						unsigned long end)
> +{
> +	return 1;
> +}
> +#endif

It's a shame we now lose the optimization for all other architectures.

Was there no way to have some basic batching mechanism that doesn't 
require arch specifics?

I'd have thought that something very basic would have worked like:

* Check if PTE is the same when setting the PFN to 0.
* Check that PFN is consecutive
* Check that all PFNs belong to the same folio
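
A rough sketch of what such a generic helper might look like is below. This
is purely illustrative (not an actual patch): the name folio_pte_batch() is
hypothetical, pte_next_pfn() is assumed to return the pte with its PFN
advanced by one, and the caller is expected to clamp max_nr so the scan stays
within both the folio and the range being copied. Comparing each entry
against an "expected" value whose PFN has been advanced folds the first two
checks into a single pte_same():

static inline int folio_pte_batch(pte_t *ptep, pte_t pte, int max_nr)
{
        pte_t expected = pte_next_pfn(pte);
        int nr = 1;

        while (nr < max_nr) {
                pte = ptep_get(++ptep);

                /* Identical apart from a consecutive PFN? Extend the batch. */
                if (!pte_same(pte, expected))
                        break;

                expected = pte_next_pfn(expected);
                nr++;
        }

        return nr;
}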
  
Ryan Roberts Dec. 19, 2023, 8:30 a.m. UTC | #2
On 18/12/2023 17:47, David Hildenbrand wrote:
> On 18.12.23 11:50, Ryan Roberts wrote:
>> [...]
> 
> It's a shame we now lose the optimization for all other architectures.
> 
> Was there no way to have some basic batching mechanism that doesn't require arch
> specifics?

I tried a bunch of things but ultimately the way I've done it was the only way
to reduce the order-0 fork regression to 0.

My original v3 posting was costing 5% extra and even my first attempt at an
arch-specific version that didn't resolve to a compile-time constant 1 still
cost an extra 3%.


> 
> I'd have thought that something very basic would have worked like:
> 
> * Check if PTE is the same when setting the PFN to 0.
> * Check that PFN is consecutive
> * Check that all PFNs belong to the same folio

I haven't tried this exact approach, but I'd be surprised if I can get the
regression under 4% with this. Further along the series I spent a lot of time
having to fiddle with the arm64 implementation; every conditional and every
memory read (even when in cache) was a problem. There is just so little in the
inner loop that every instruction matters. (At least on Ampere Altra and Apple M2).

Of course if you're willing to pay that 4-5% for order-0 then the benefit to
order-9 is around 10% in my measurements. Personally though, I'd prefer to play
safe and ensure the common order-0 case doesn't regress, as you previously
suggested.
  
David Hildenbrand Dec. 19, 2023, 11:29 a.m. UTC | #3
On 19.12.23 09:30, Ryan Roberts wrote:
> On 18/12/2023 17:47, David Hildenbrand wrote:
>> On 18.12.23 11:50, Ryan Roberts wrote:
>>> [...]
>>
>> It's a shame we now lose the optimization for all other architectures.
>>
>> Was there no way to have some basic batching mechanism that doesn't require arch
>> specifics?
> 
> I tried a bunch of things but ultimately the way I've done it was the only way
> to reduce the order-0 fork regression to 0.

Let me give it a churn today. I think we should really focus on having 
only a single folio_test_large() check on the fast path for order-0. And 
not even try doing batching for anything that works on bare PFNs.

Off to prototyping ... :)
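
Presumably that fast path boils down to something like the sketch below
(hypothetical helper names, reusing the folio_pte_batch() sketch from
earlier; max_nr is assumed to already be clamped to the end of the range
being copied):

static inline int pte_batch_nr(struct folio *folio, pte_t *ptep, pte_t pte,
                               int max_nr)
{
        /* order-0 fast path: one check, no batching, no extra PTE reads. */
        if (likely(!folio_test_large(folio)))
                return 1;

        /* Don't batch past the end of the folio. */
        max_nr = min_t(int, max_nr,
                       folio_pfn(folio) + folio_nr_pages(folio) - pte_pfn(pte));

        return folio_pte_batch(ptep, pte, max_nr);
}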
  
David Hildenbrand Dec. 19, 2023, 5:22 p.m. UTC | #4
On 19.12.23 09:30, Ryan Roberts wrote:
> On 18/12/2023 17:47, David Hildenbrand wrote:
>> On 18.12.23 11:50, Ryan Roberts wrote:
>>> [...]
>>
>> It's a shame we now lose the optimization for all other architectures.
>>
>> Was there no way to have some basic batching mechanism that doesn't require arch
>> specifics?
> 
> I tried a bunch of things but ultimately the way I've done it was the only way
> to reduce the order-0 fork regression to 0.
> 
> My original v3 posting was costing 5% extra and even my first attempt at an
> arch-specific version that didn't resolve to a compile-time constant 1 still
> cost an extra 3%.
> 
> 
>>
>> I'd have thought that something very basic would have worked like:
>>
>> * Check if PTE is the same when setting the PFN to 0.
>> * Check that PFN is consecutive
>> * Check that all PFNs belong to the same folio
> 
> I haven't tried this exact approach, but I'd be surprised if I can get the
> regression under 4% with this. Further along the series I spent a lot of time
> having to fiddle with the arm64 implementation; every conditional and every
> memory read (even when in cache) was a problem. There is just so little in the
> inner loop that every instruction matters. (At least on Ampere Altra and Apple M2).
> 
> Of course if you're willing to pay that 4-5% for order-0 then the benefit to
> order-9 is around 10% in my measurements. Personally though, I'd prefer to play
> safe and ensure the common order-0 case doesn't regress, as you previously
> suggested.
> 

I just hacked something up, on top of my beloved rmap cleanup/batching 
series. I implemented very generic and simple batching for large folios 
(all PTE bits except the PFN have to match).

Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R) 
Silver 4210R CPU.

order-0: 0.014210 -> 0.013969

-> Around 1.7 % faster

order-9: 0.014373 -> 0.009149

-> Around 36.3 % faster


But it's likely buggy, so don't trust the numbers just yet. If they 
actually hold up, we should probably do something like that ahead of 
time, before all the arm-specific cont-pte work.

I suspect you can easily extend that by arch hooks where reasonable.

The (3) patches on top of the rmap cleanups can be found at:

	https://github.com/davidhildenbrand/linux/tree/fork-batching
  
Ryan Roberts Dec. 19, 2023, 5:42 p.m. UTC | #5
On 19/12/2023 17:22, David Hildenbrand wrote:
> On 19.12.23 09:30, Ryan Roberts wrote:
>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>> [...]
>>>
>>> It's a shame we now lose the optimization for all other architectures.
>>>
>>> Was there no way to have some basic batching mechanism that doesn't require arch
>>> specifics?
>>
>> I tried a bunch of things but ultimately the way I've done it was the only way
>> to reduce the order-0 fork regression to 0.
>>
>> My original v3 posting was costing 5% extra and even my first attempt at an
>> arch-specific version that didn't resolve to a compile-time constant 1 still
>> cost an extra 3%.
>>
>>
>>>
>>> I'd have thought that something very basic would have worked like:
>>>
>>> * Check if PTE is the same when setting the PFN to 0.
>>> * Check that PFN is consecutive
>>> * Check that all PFNs belong to the same folio
>>
>> I haven't tried this exact approach, but I'd be surprised if I can get the
>> regression under 4% with this. Further along the series I spent a lot of time
>> having to fiddle with the arm64 implementation; every conditional and every
>> memory read (even when in cache) was a problem. There is just so little in the
>> inner loop that every instruction matters. (At least on Ampere Altra and Apple
>> M2).
>>
>> Of course if you're willing to pay that 4-5% for order-0 then the benefit to
>> order-9 is around 10% in my measurements. Personally though, I'd prefer to play
>> safe and ensure the common order-0 case doesn't regress, as you previously
>> suggested.
>>
> 
> I just hacked something up, on top of my beloved rmap cleanup/batching series. I
> implemented very generic and simple batching for large folios (all PTE bits
> except the PFN have to match).
> 
> Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R) Silver
> 4210R CPU.
> 
> order-0: 0.014210 -> 0.013969
> 
> -> Around 1.7 % faster
> 
> order-9: 0.014373 -> 0.009149
> 
> -> Around 36.3 % faster

Well I guess that shows me :)

I'll do a review and run the tests on my HW to see if it concurs.

> 
> 
> But it's likely buggy, so don't trust the numbers just yet. If they actually
> hold up, we should probably do something like that ahead of time, before all the
> arm-specific cont-pte work.
> 
> I suspect you can easily extend that by arch hooks where reasonable.
> 
> The (3) patches on top of the rmap cleanups can be found at:
> 
>     https://github.com/davidhildenbrand/linux/tree/fork-batching
>
  
David Hildenbrand Dec. 20, 2023, 9:17 a.m. UTC | #6
On 19.12.23 18:42, Ryan Roberts wrote:
> On 19/12/2023 17:22, David Hildenbrand wrote:
>> On 19.12.23 09:30, Ryan Roberts wrote:
>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>> [...]
>>>>
>>>> It's a shame we now lose the optimization for all other architectures.
>>>>
>>>> Was there no way to have some basic batching mechanism that doesn't require arch
>>>> specifics?
>>>
>>> I tried a bunch of things but ultimately the way I've done it was the only way
>>> to reduce the order-0 fork regression to 0.
>>>
>>> My original v3 posting was costing 5% extra and even my first attempt at an
>>> arch-specific version that didn't resolve to a compile-time constant 1 still
>>> cost an extra 3%.
>>>
>>>
>>>>
>>>> I'd have thought that something very basic would have worked like:
>>>>
>>>> * Check if PTE is the same when setting the PFN to 0.
>>>> * Check that PFN is consecutive
>>>> * Check that all PFNs belong to the same folio
>>>
>>> I haven't tried this exact approach, but I'd be surprised if I can get the
>>> regression under 4% with this. Further along the series I spent a lot of time
>>> having to fiddle with the arm64 implementation; every conditional and every
>>> memory read (even when in cache) was a problem. There is just so little in the
>>> inner loop that every instruction matters. (At least on Ampere Altra and Apple
>>> M2).
>>>
>>> Of course if you're willing to pay that 4-5% for order-0 then the benefit to
>>> order-9 is around 10% in my measurements. Personally though, I'd prefer to play
>>> safe and ensure the common order-0 case doesn't regress, as you previously
>>> suggested.
>>>
>>
>> I just hacked something up, on top of my beloved rmap cleanup/batching series. I
>> implemented very generic and simple batching for large folios (all PTE bits
>> except the PFN have to match).
>>
>> Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R) Silver
>> 4210R CPU.
>>
>> order-0: 0.014210 -> 0.013969
>>
>> -> Around 1.7 % faster
>>
>> order-9: 0.014373 -> 0.009149
>>
>> -> Around 36.3 % faster
> 
> Well I guess that shows me :)
> 
> I'll do a review and run the tests on my HW to see if it concurs.


I pushed a simple compile fixup (we need pte_next_pfn()).

Note that we should probably handle "ptep_set_wrprotects" rather like set_ptes:

#ifndef wrprotect_ptes
static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
                pte_t *ptep, unsigned int nr)
{
        for (;;) {
                ptep_set_wrprotect(mm, addr, ptep);
                if (--nr == 0)
                        break;
                ptep++;
                addr += PAGE_SIZE;
        }
}
#endif
  
Ryan Roberts Dec. 20, 2023, 9:51 a.m. UTC | #7
On 20/12/2023 09:17, David Hildenbrand wrote:
> On 19.12.23 18:42, Ryan Roberts wrote:
>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>> [...]
>>>>>
>>>>> It's a shame we now lose the optimization for all other architectures.
>>>>>
>>>>> Was there no way to have some basic batching mechanism that doesn't require
>>>>> arch
>>>>> specifics?
>>>>
>>>> I tried a bunch of things but ultimately the way I've done it was the only way
>>>> to reduce the order-0 fork regression to 0.
>>>>
>>>> My original v3 posting was costing 5% extra and even my first attempt at an
>>>> arch-specific version that didn't resolve to a compile-time constant 1 still
>>>> cost an extra 3%.
>>>>
>>>>
>>>>>
>>>>> I'd have thought that something very basic would have worked like:
>>>>>
>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>> * Check that PFN is consecutive
>>>>> * Check that all PFNs belong to the same folio
>>>>
>>>> I haven't tried this exact approach, but I'd be surprised if I can get the
>>>> regression under 4% with this. Further along the series I spent a lot of time
>>>> having to fiddle with the arm64 implementation; every conditional and every
>>>> memory read (even when in cache) was a problem. There is just so little in the
>>>> inner loop that every instruction matters. (At least on Ampere Altra and Apple
>>>> M2).
>>>>
>>>> Of course if you're willing to pay that 4-5% for order-0 then the benefit to
>>>> order-9 is around 10% in my measurements. Personally though, I'd prefer to play
>>>> safe and ensure the common order-0 case doesn't regress, as you previously
>>>> suggested.
>>>>
>>>
>>> I just hacked something up, on top of my beloved rmap cleanup/batching series. I
>>> implemented very generic and simple batching for large folios (all PTE bits
>>> except the PFN have to match).
>>>
>>> Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R) Silver
>>> 4210R CPU.
>>>
>>> order-0: 0.014210 -> 0.013969
>>>
>>> -> Around 1.7 % faster
>>>
>>> order-9: 0.014373 -> 0.009149
>>>
>>> -> Around 36.3 % faster
>>
>> Well I guess that shows me :)
>>
>> I'll do a review and run the tests on my HW to see if it concurs.
> 
> 
> I pushed a simple compile fixup (we need pte_next_pfn()).

I've just been trying to compile and noticed this. Will take a look at your update.

But upon review, I've noticed the part that I think makes this difficult for
arm64 with the contpte optimization: you are calling ptep_get() for every pte in
the batch. While this is functionally correct, once arm64 has the contpte
changes, its ptep_get() has to read every pte in the contpte block in order to
gather the access and dirty bits. So if your batching function ends up walking
a 16-entry contpte block, that will cause 16 x 16 reads, which kills
performance. That's why I added the arch-specific pte_batch_remaining()
function; this allows the core-mm to skip to the end of the contpte block and
avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s instead
of 256.

I considered making a ptep_get_noyoungdirty() variant, which would avoid the bit
gathering. But we have a similar problem in zap_pte_range() and that function
needs the dirty bit to update the folio. So it doesn't work there. (see patch 3
in my series).

I guess you are going to say that we should combine both approaches, so that
your batching loop can skip forward an arch-provided number of ptes? That would
certainly work, but feels like an orthogonal change to what I'm trying to
achieve :). Anyway, I'll spend some time playing with it today.
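
To make the cost concrete: the arm64 contpte version of ptep_get() has to do
something conceptually like the sketch below (illustrative only, not the
implementation from the series; CONT_PTES is 16 with 4K pages, and
contpte_align_down()/__ptep_get() are assumed helpers for finding the start
of the block and doing a raw per-entry read):

static inline pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
{
        pte_t pte;
        int i;

        /* Access/dirty may have been set on any entry of the contpte block. */
        ptep = contpte_align_down(ptep);

        for (i = 0; i < CONT_PTES; i++, ptep++) {
                pte = __ptep_get(ptep);

                if (pte_dirty(pte))
                        orig_pte = pte_mkdirty(orig_pte);
                if (pte_young(pte))
                        orig_pte = pte_mkyoung(orig_pte);
        }

        return orig_pte;
}

Calling that once per pte in the batch is what turns 16 reads into 256;
calling it once per batch keeps it at 16.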


> 
> Note that we should probably handle "ptep_set_wrprotects" rather like set_ptes:
> 
> #ifndef wrprotect_ptes
> static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>                pte_t *ptep, unsigned int nr)
> {
>        for (;;) {
>                ptep_set_wrprotect(mm, addr, ptep);
>                if (--nr == 0)
>                        break;
>                ptep++;
>                addr += PAGE_SIZE;
>        }
> }
> #endif
> 
>
  
David Hildenbrand Dec. 20, 2023, 9:54 a.m. UTC | #8
On 20.12.23 10:51, Ryan Roberts wrote:
> On 20/12/2023 09:17, David Hildenbrand wrote:
>> On 19.12.23 18:42, Ryan Roberts wrote:
>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of memory,
>>>>>>> all belonging to the same folio. A pte batch is then write-protected in
>>>>>>> one go in the parent using the new helper, ptep_set_wrprotects() and is
>>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>>
>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>> fork, as it is about to add transparent support for the "contiguous bit"
>>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>>> range in the first place.
>>>>>>>
>>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>> folio case. By default, pte_batch_remaining() is compile constant 1,
>>>>>>> which enables the compiler to simplify the extra loops that are added
>>>>>>> for batching and produce code that is equivalent (and equally
>>>>>>> performant) as the previous implementation.
>>>>>>>
>>>>>>> This change addresses the core-mm refactoring only and a separate change
>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>
>>>>>>> To ensure the arm64 is performant once implemented, this change is very
>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>
>>>>>>> The following microbenchmark results demonstate that there is no
>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>> tight loop in a process with 1G of populated memory and the time for the
>>>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>>> performed for case where 1G memory is comprised of order-0 folios and
>>>>>>> case where comprised of pte-mapped order-9 folios. Negative is faster,
>>>>>>> positive is slower, compared to baseline upon which the series is based:
>>>>>>>
>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>> | fork          |-------------------|-------------------|
>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>
>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>> | fork          |-------------------|-------------------|
>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>
>>>>>>> Tested-by: John Hubbard <jhubbard@nvidia.com>
>>>>>>> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>>>> ---
>>>>>>>  include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>  mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>>>>>  2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>
>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>> --- a/include/linux/pgtable.h
>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>  #define arch_flush_lazy_mmu_mode()	do {} while (0)
>>>>>>>  #endif
>>>>>>>  
>>>>>>> +#ifndef pte_batch_remaining
>>>>>>> +/**
>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>> + * @addr: Address of the first page.
>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>> + *
>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
>>>>>>> + * In such cases, this function returns the remaining number of pages to the end
>>>>>>> + * of the current batch, as defined by addr. This can be useful when iterating
>>>>>>> + * over ptes.
>>>>>>> + *
>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>> + */
>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>>>>> +					       unsigned long end)
>>>>>>> +{
>>>>>>> +	return 1;
>>>>>>> +}
>>>>>>> +#endif
>>>>>>
>>>>>> It's a shame we now lose the optimization for all other architectures.
>>>>>>
>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>> require arch specifics?
>>>>>
>>>>> I tried a bunch of things but ultimately the way I've done it was the only way
>>>>> to reduce the order-0 fork regression to 0.
>>>>>
>>>>> My original v3 posting was costing 5% extra and even my first attempt at an
>>>>> arch-specific version that didn't resolve to a compile-time constant 1 still
>>>>> cost an extra 3%.
>>>>>
>>>>>
>>>>>>
>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>
>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>> * Check that PFN is consecutive
>>>>>> * Check that all PFNs belong to the same folio
>>>>>
>>>>> I haven't tried this exact approach, but I'd be surprised if I could get the
>>>>> regression under 4% with this. Further along the series I spent a lot of time
>>>>> having to fiddle with the arm64 implementation; every conditional and every
>>>>> memory read (even when in cache) was a problem. There is just so little in the
>>>>> inner loop that every instruction matters. (At least on Ampere Altra and Apple
>>>>> M2).
>>>>>
>>>>> Of course, if you're willing to pay that 4-5% for order-0 then the benefit to
>>>>> order-9 is around 10% in my measurements. Personally though, I'd prefer to play
>>>>> it safe and ensure the common order-0 case doesn't regress, as you previously
>>>>> suggested.
>>>>>
>>>>
>>>> I just hacked something up, on top of my beloved rmap cleanup/batching series. I
>>>> implemented very generic and simple batching for large folios (all PTE bits
>>>> except the PFN have to match).
>>>>
>>>> Some very quick testing (don't trust every last %) on an Intel(R) Xeon(R)
>>>> Silver 4210R CPU.
>>>>
>>>> order-0: 0.014210 -> 0.013969
>>>>
>>>> -> Around 1.7 % faster
>>>>
>>>> order-9: 0.014373 -> 0.009149
>>>>
>>>> -> Around 36.3 % faster
>>>
>>> Well I guess that shows me :)
>>>
>>> I'll do a review and run the tests on my HW to see if it concurs.
>>
>>
>> I pushed a simple compile fixup (we need pte_next_pfn()).
> 
> I've just been trying to compile and noticed this. Will take a look at your update.
> 
> But upon review, I've noticed the part that I think makes this difficult for
> arm64 with the contpte optimization: you are calling ptep_get() for every pte in
> the batch. While this is functionally correct, once arm64 has the contpte
> changes, its ptep_get() has to read every pte in the contpte block in order to
> gather the access and dirty bits. So if your batching function ends up walking
> a 16-entry contpte block, that will cause 16 x 16 reads, which kills
> performance. That's why I added the arch-specific pte_batch_remaining()
> function; this allows the core-mm to skip to the end of the contpte block and
> avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s instead
> of 256.
> 
> I considered making a ptep_get_noyoungdirty() variant, which would avoid the bit
> gathering. But we have a similar problem in zap_pte_range() and that function
> needs the dirty bit to update the folio. So it doesn't work there. (see patch 3
> in my series).
> 
> I guess you are going to say that we should combine both approaches, so that
> your batching loop can skip forward an arch-provided number of ptes? That would
> certainly work, but feels like an orthogonal change to what I'm trying to
> achieve :). Anyway, I'll spend some time playing with it today.

You can overwrite the function or add special-casing internally, yes.

Right now, your patch is called "mm: Batch-copy PTE ranges during 
fork()" and it doesn't do any of that besides preparing for some arm64 work.
  
Ryan Roberts Dec. 20, 2023, 9:57 a.m. UTC | #9
On 20/12/2023 09:51, Ryan Roberts wrote:
> On 20/12/2023 09:17, David Hildenbrand wrote:
>> On 19.12.23 18:42, Ryan Roberts wrote:
>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>> [...]
>>
>>
>> I pushed a simple compile fixup (we need pte_next_pfn()).
> 
> I've just been trying to compile and noticed this. Will take a look at your update.

Took a look; there will still be arch work needed; arm64 doesn't define
PFN_PTE_SHIFT because it defines set_ptes(). I'm not sure if there are other
arches that also don't define PFN_PTE_SHIFT (or pte_next_pfn(), if the math is
more complex) - it will need an audit.
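
For anything with a simple linear pfn encoding, a generic fallback along these
lines should cover it (just a sketch; it assumes the arch provides
PFN_PTE_SHIFT, which is an assumption about those arches rather than something
from this series):

#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/* Assumes the pfn sits at PFN_PTE_SHIFT with no other encoding. */
	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#endif

Anything with a more exotic pte layout would need to supply its own
pte_next_pfn() as part of that audit.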

> 
> [...]
> 
> 
>>
>> Note that we should probably handle "ptep_set_wrprotects" rather like set_ptes:
>>
>> #ifndef wrprotect_ptes
>> static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>>                pte_t *ptep, unsigned int nr)
>> {
>>        for (;;) {
>>                ptep_set_wrprotect(mm, addr, ptep);
>>                if (--nr == 0)
>>                        break;
>>                ptep++;
>>                addr += PAGE_SIZE;
>>        }
>> }
>> #endif

Yes that's a much better name; I've also introduced clear_ptes() (in patch 3)
and set_ptes_full(), which takes a flag that allows arm64 to avoid trying to
fold a contpte block; needed to avoid regressing fork once the contpte changes
are present.
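
The generic fallback for set_ptes_full() can simply ignore the hint and defer
to set_ptes(), along these lines (the parameter order here is illustrative
rather than lifted from the series):

#ifndef set_ptes_full
#define set_ptes_full(mm, addr, ptep, pte, nr, full)	\
	set_ptes(mm, addr, ptep, pte, nr)
#endif

so only an arch that cares about the "full" information (arm64, to skip the
attempt to fold a contpte block) has to do anything with it.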

>>
>>
>
  
David Hildenbrand Dec. 20, 2023, 10 a.m. UTC | #10
On 20.12.23 10:57, Ryan Roberts wrote:
> On 20/12/2023 09:51, Ryan Roberts wrote:
>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>> [...]
>>>
>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>
>> I've just been trying to compile and noticed this. Will take a look at your update.
> 
> Took a look; there will still be arch work needed; arm64 doesn't define
> PFN_PTE_SHIFT because it defines set_ptes(). I'm not sure if there are other
> arches that also don't define PFN_PTE_SHIFT (or pte_next_pfn(), if the math is
> more complex) - it will need an audit.
> 

Right, likely many that have their own set_ptes() implementation right now.
  
Ryan Roberts Dec. 20, 2023, 10:11 a.m. UTC | #11
On 20/12/2023 09:54, David Hildenbrand wrote:
> On 20.12.23 10:51, Ryan Roberts wrote:
>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>> [...]
>>
>> I guess you are going to say that we should combine both approaches, so that
>> your batching loop can skip forward an arch-provided number of ptes? That would
>> certainly work, but feels like an orthogonal change to what I'm trying to
>> achieve :). Anyway, I'll spend some time playing with it today.
> 
> You can overwrite the function or add special-casing internally, yes.
> 
> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()" and it
> doesn't do any of that besides preparing for some arm64 work.
> 

Well it allows an arch to opt-in to batching. But I see your point.

How do you want to handle your patches? Do you want to clean them up and I'll
base my stuff on top? Or do you want me to take them and sort it all out?

As I see it at the moment, I would keep your folio_pte_batch() always core, but
in a subsequent patch have it use pte_batch_remaining() (the arch function I
have in my series, which defaults to 1). Then do a similar thing to what you
have done for fork in zap_pte_range() - also using folio_pte_batch(). Then lay
my series on top.
  
David Hildenbrand Dec. 20, 2023, 10:16 a.m. UTC | #12
On 20.12.23 11:11, Ryan Roberts wrote:
> On 20/12/2023 09:54, David Hildenbrand wrote:
>> On 20.12.23 10:51, Ryan Roberts wrote:
>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>> [...]
>>
>> You can overwrite the function or add special-casing internally, yes.
>>
>> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()" and it
>> doesn't do any of that besides preparing for some arm64 work.
>>
> 
> Well it allows an arch to opt-in to batching. But I see your point.
> 
> How do you want to handle your patches? Do you want to clean them up and I'll
> base my stuff on top? Or do you want me to take them and sort it all out?

Whatever you prefer, it was mostly a quick prototype to see if we can 
achieve decent performance.

I can fixup the arch thingies (most should be easy, some might require a 
custom pte_next_pfn()) and you can focus on getting cont-pte sorted out 
on top [I assume that's what you want to work on :) ].

> 
> As I see it at the moment, I would keep your folio_pte_batch() always core, but
> in a subsequent patch have it use pte_batch_remaining() (the arch function I
> have in my series, which defaults to 1). 

Just double-checking, how would it use pte_batch_remaining()?

> Then do a similar thing to what you have
> done for fork in zap_pte_range() - also using folio_pte_batch(). Then lay my
> series on top.

Yes, we should probably try to handle the zapping part similarly: make 
it benefit all archs first, then special-case on cont-pte. I can help 
there as well.
  
Ryan Roberts Dec. 20, 2023, 10:41 a.m. UTC | #13
On 20/12/2023 10:16, David Hildenbrand wrote:
> On 20.12.23 11:11, Ryan Roberts wrote:
>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>> [...]
>>
>> Well it allows an arch to opt-in to batching. But I see your point.
>>
>> How do you want to handle your patches? Do you want to clean them up and I'll
>> base my stuff on top? Or do you want me to take them and sort it all out?
> 
> Whatever you prefer, it was mostly a quick prototype to see if we can achieve
> decent performance.

I'm about to run it on Altra and M2. But I assume it will show similar results.

> 
> I can fixup the arch thingies (most should be easy, some might require a custom
> pte_next_pfn()) 

Well if you're happy to do that, great! I'm keen to get the contpte stuff into
v6.9 if at all possible, and I'm conscious that I'm introducing more
dependencies on you. And it's about to be holiday season...

> and you can focus on getting cont-pte sorted out on top [I
> assume that's what you want to work on :) ].

That's certainly what I'm focussed on. But I'm happy to do whatever is required
to get it over the line. I guess I'll start by finishing my review of your v1
rmap stuff.

> 
>>
>> As I see it at the moment, I would keep your folio_pte_batch() always core, but
>> in a subsequent patch have it use pte_batch_remaining() (the arch function I
>> have in my series, which defaults to 1). 
> 
> Just double-checking, how would it use pte_batch_remaining()?

I think something like this would do it (untested):

static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
		pte_t *start_ptep, pte_t pte, int max_nr)
{
	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
	unsigned long end = addr + (unsigned long)max_nr * PAGE_SIZE;
	pte_t expected_pte = pte;
	pte_t *ptep = start_ptep;
	int i, nr;

	for (;;) {
		/*
		 * Skip over the arch-defined batch (1 by default), clamped so
		 * we never run past the caller's limit or the folio itself.
		 */
		nr = min_t(int, max_nr, pte_batch_remaining(expected_pte, addr, end));
		nr = min_t(int, nr, folio_end_pfn - pte_pfn(expected_pte));
		ptep += nr;
		addr += nr * PAGE_SIZE;
		max_nr -= nr;

		if (max_nr == 0)
			break;

		/* The pte we expect to find at the new position. */
		for (i = 0; i < nr; i++)
			expected_pte = pte_next_pfn(expected_pte);

		pte = ptep_get(ptep);

		/* Do all PTE bits match, and the PFN is consecutive? */
		if (!pte_same(pte, expected_pte))
			break;

		/*
		 * Stop immediately once we step outside the folio. In corner
		 * cases the next PFN might fall into a different folio.
		 */
		if (pte_pfn(pte) >= folio_end_pfn)
			break;
	}

	return ptep - start_ptep;
}

Of course, if we have the concept of a "pte batch" in the core-mm, then we might
want to call the arch's thing something different; pte span? pte cont? pte cont
batch? ...


> 
>> Then do a similar thing to what you have
>> done for fork in zap_pte_range() - also using folio_pte_batch(). Then lay my
>> series on top.
> 
> Yes, we should probably try to handle the zapping part similarly: make it
> benefit all archs first, then special-case on cont-pte. I can help there as well.

OK great.
  
David Hildenbrand Dec. 20, 2023, 10:56 a.m. UTC | #14
On 20.12.23 11:41, Ryan Roberts wrote:
> On 20/12/2023 10:16, David Hildenbrand wrote:
>> On 20.12.23 11:11, Ryan Roberts wrote:
>>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of memory,
>>>>>>>>>>> all belonging to the same folio. A pte batch is then write-protected in
>>>>>>>>>>> one go in the parent using the new helper, ptep_set_wrprotects() and is
>>>>>>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>>>>>>
>>>>>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>>>>>> fork, as it is about to add transparent support for the "contiguous bit"
>>>>>>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>>>>>>> range in the first place.
>>>>>>>>>>>
>>>>>>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>>>>>> folio case. By default, pte_batch_remaining() is compile constant 1,
>>>>>>>>>>> which enables the compiler to simplify the extra loops that are added
>>>>>>>>>>> for batching and produce code that is equivalent (and equally
>>>>>>>>>>> performant) as the previous implementation.
>>>>>>>>>>>
>>>>>>>>>>> This change addresses the core-mm refactoring only and a separate change
>>>>>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>>>>>
>>>>>>>>>>> To ensure the arm64 is performant once implemented, this change is very
>>>>>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>>>>>
>>>>>>>>>>> The following microbenchmark results demonstate that there is no
>>>>>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>>>>>> tight loop in a process with 1G of populated memory and the time for the
>>>>>>>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>>>>>>> performed for case where 1G memory is comprised of order-0 folios and
>>>>>>>>>>> case where comprised of pte-mapped order-9 folios. Negative is faster,
>>>>>>>>>>> positive is slower, compared to baseline upon which the series is based:
>>>>>>>>>>>
>>>>>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>>>>>
>>>>>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>>>>>
>>>>>>>>>>> Tested-by: John Hubbard <jhubbard@nvidia.com>
>>>>>>>>>>> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>>>>>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>>>>>>>> ---
>>>>>>>>>>>        include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>>>>>        mm/memory.c             | 92
>>>>>>>>>>> ++++++++++++++++++++++++++---------------
>>>>>>>>>>>        2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>>>>>> --- a/include/linux/pgtable.h
>>>>>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>>>>>        #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>>>>>>        #endif
>>>>>>>>>>>        +#ifndef pte_batch_remaining
>>>>>>>>>>> +/**
>>>>>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch
>>>>>>>>>>> boundary.
>>>>>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>>>>>> + * @addr: Address of the first page.
>>>>>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>>>>>> + *
>>>>>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous
>>>>>>>>>>> batch of
>>>>>>>>>>> ptes.
>>>>>>>>>>> + * In such cases, this function returns the remaining number of pages to
>>>>>>>>>>> the end
>>>>>>>>>>> + * of the current batch, as defined by addr. This can be useful when
>>>>>>>>>>> iterating
>>>>>>>>>>> + * over ptes.
>>>>>>>>>>> + *
>>>>>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>>>>>> + */
>>>>>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long
>>>>>>>>>>> addr,
>>>>>>>>>>> +                        unsigned long end)
>>>>>>>>>>> +{
>>>>>>>>>>> +    return 1;
>>>>>>>>>>> +}
>>>>>>>>>>> +#endif
>>>>>>>>>>
>>>>>>>>>> It's a shame we now lose the optimization for all other archtiectures.
>>>>>>>>>>
>>>>>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>>>>>> require
>>>>>>>>>> arch
>>>>>>>>>> specifics?
>>>>>>>>>
>>>>>>>>> I tried a bunch of things but ultimately the way I've done it was the only
>>>>>>>>> way
>>>>>>>>> to reduce the order-0 fork regression to 0.
>>>>>>>>>
>>>>>>>>> My original v3 posting was costing 5% extra and even my first attempt at an
>>>>>>>>> arch-specific version that didn't resolve to a compile-time constant 1
>>>>>>>>> still
>>>>>>>>> cost an extra 3%.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>>>
>>>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>>>> * Check that PFN is consecutive
>>>>>>>>>> * Check that all PFNs belong to the same folio
>>>>>>>>>
>>>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can get the
>>>>>>>>> regression under 4% with this. Further along the series I spent a lot of
>>>>>>>>> time
>>>>>>>>> having to fiddle with the arm64 implementation; every conditional and every
>>>>>>>>> memory read (even when in cache) was a problem. There is just so little in
>>>>>>>>> the
>>>>>>>>> inner loop that every instruction matters. (At least on Ampere Altra and
>>>>>>>>> Apple
>>>>>>>>> M2).
>>>>>>>>>
>>>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>>>>>> benefit to
>>>>>>>>> order-9 is around 10% in my measurements. Personally though, I'd prefer to
>>>>>>>>> play
>>>>>>>>> safe and ensure the common order-0 case doesn't regress, as you previously
>>>>>>>>> suggested.
>>>>>>>>>
>>>>>>>>
>>>>>>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>>>>>>> series. I
>>>>>>>> implemented very generic and simple batching for large folios (all PTE bits
>>>>>>>> except the PFN have to match).
>>>>>>>>
>>>>>>>> Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R)
>>>>>>>> Silver
>>>>>>>> 4210R CPU.
>>>>>>>>
>>>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>>>
>>>>>>>> -> Around 1.7 % faster
>>>>>>>>
>>>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>>>
>>>>>>>> -> Around 36.3 % faster
>>>>>>>
>>>>>>> Well I guess that shows me :)
>>>>>>>
>>>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>>>
>>>>>>
>>>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>>>
>>>>> I've just been trying to compile and noticed this. Will take a look at your
>>>>> update.
>>>>>
>>>>> But upon review, I've noticed the part that I think makes this difficult for
>>>>> arm64 with the contpte optimization; You are calling ptep_get() for every
>>>>> pte in
>>>>> the batch. While this is functionally correct, once arm64 has the contpte
>>>>> changes, its ptep_get() has to read every pte in the contpte block in order to
>>>>> gather the access and dirty bits. So if your batching function ends up wealking
>>>>> a 16 entry contpte block, that will cause 16 x 16 reads, which kills
>>>>> performance. That's why I added the arch-specific pte_batch_remaining()
>>>>> function; this allows the core-mm to skip to the end of the contpte block and
>>>>> avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s
>>>>> instead
>>>>> of 256.
>>>>>
>>>>> I considered making a ptep_get_noyoungdirty() variant, which would avoid the
>>>>> bit
>>>>> gathering. But we have a similar problem in zap_pte_range() and that function
>>>>> needs the dirty bit to update the folio. So it doesn't work there. (see patch 3
>>>>> in my series).
>>>>>
>>>>> I guess you are going to say that we should combine both approaches, so that
>>>>> your batching loop can skip forward an arch-provided number of ptes? That would
>>>>> certainly work, but feels like an orthogonal change to what I'm trying to
>>>>> achieve :). Anyway, I'll spend some time playing with it today.
>>>>
>>>> You can overwrite the function or add special-casing internally, yes.
>>>>
>>>> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()" and it
>>>> doesn't do any of that besides preparing for some arm64 work.
>>>>
>>>
>>> Well it allows an arch to opt-in to batching. But I see your point.
>>>
>>> How do you want to handle your patches? Do you want to clean them up and I'll
>>> base my stuff on top? Or do you want me to take them and sort it all out?
>>
>> Whatever you prefer, it was mostly a quick prototype to see if we can achieve
>> decent performance.
> 
> I'm about to run it on Altra and M2. But I assume it will show similar results.
> 
>>
>> I can fixup the arch thingies (most should be easy, some might require a custom
>> pte_next_pfn())
> 
> Well if you're happy to do that, great! I'm keen to get the contpte stuff into
> v6.9 if at all possible, and I'm conscious that I'm introducing more dependencies
> on you. And it's about to be holiday season...

There is still plenty of time for 6.9. I'll try to get the rmap cleanup 
finished asap.

> 
>> and you can focus on getting cont-pte sorted out on top [I
>> assume that's what you want to work on :) ].
> 
> That's certainly what I'm focussed on. But I'm happy to do whatever is required
> to get it over the line. I guess I'll start by finishing my review of your v1
> rmap stuff.

I'm planning on sending out a new version today.

> 
>>
>>>
>>> As I see it at the moment, I would keep your folio_pte_batch() always core, but
>>> in subsequent patch, have it use pte_batch_remaining() (the arch function I have
>>> in my series, which defaults to one).
>>
>> Just double-checking, how would it use pte_batch_remaining() ?
> 
> I think something like this would do it (untested):
> 
> static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> 		pte_t *start_ptep, pte_t pte, int max_nr)
> {
> 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> 	pte_t expected_pte = pte_next_pfn(pte);
> 	pte_t *ptep = start_ptep;
> 	int nr;
> 
> 	for (;;) {
> 		nr = min(max_nr, pte_batch_remaining());
> 		ptep += nr;
> 		max_nr -= nr;
> 
> 		if (max_nr == 0)
> 			break;
> 

expected_pte would be messed up. We'd have to increment it a couple of 
times to make it match the nr of pages we're skipping.

> 		pte = ptep_get(ptep);
> 
> 		/* Do all PTE bits match, and the PFN is consecutive? */
> 		if (!pte_same(pte, expected_pte))
> 			break;
> 
> 		/*
> 		 * Stop immediately once we reached the end of the folio. In
> 		 * corner cases the next PFN might fall into a different
> 		 * folio.
> 		 */
> 		if (pte_pfn(pte) == folio_end_pfn - 1)
> 			break;
> 
> 		expected_pte = pte_next_pfn(expected_pte);
> 	}
> 
> 	return ptep - start_ptep;
> }
> 
> Of course, if we have the concept of a "pte batch" in the core-mm, then we might
> want to call the arch's thing something different; pte span? pte cont? pte cont
> batch? ...

So, you mean something like

/*
  * The architecture might be able to tell us efficiently using cont-pte
  * bits how many next PTEs are certainly compatible. So in that case,
  * simply skip forward.
  */
nr = min(max_nr, nr_cont_ptes(ptep));
...

I wonder if something simple at the start of the function might be good 
enough for arm with cont-pte as a first step:

nr = nr_cont_ptes(start_ptep)
if (nr != 1) {
	return min(max_nr, nr);
}

Which would get optimized out on other architectures.
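
Putting the two together, an untested sketch of what I have in mind.
nr_cont_ptes() and pte_advance_pfn() are made-up helpers here, and I'm
assuming the pte_next_pfn() from the compile fixup; on architectures
without cont-pte, nr_cont_ptes() would just be a constant 1:

static inline pte_t pte_advance_pfn(pte_t pte, unsigned int nr)
{
	while (nr--)
		pte = pte_next_pfn(pte);
	return pte;
}

static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
		pte_t *start_ptep, pte_t pte, int max_nr)
{
	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
	pte_t expected_pte = pte;
	pte_t *ptep = start_ptep;
	unsigned int nr;

	for (;;) {
		/*
		 * Consume the entries the architecture guarantees are
		 * compatible (assumes a contpte block never straddles a
		 * folio boundary).
		 */
		nr = min_t(unsigned int, max_nr, nr_cont_ptes(ptep));
		ptep += nr;
		max_nr -= nr;
		/* Keep expected_pte in sync with how far we skipped. */
		expected_pte = pte_advance_pfn(expected_pte, nr);

		if (max_nr == 0)
			break;

		/* The next PFN might fall into a different folio. */
		if (pte_pfn(expected_pte) >= folio_end_pfn)
			break;

		pte = ptep_get(ptep);

		/* Do all PTE bits match, and is the PFN consecutive? */
		if (!pte_same(pte, expected_pte))
			break;
	}

	return ptep - start_ptep;
}

The point being that expected_pte is advanced by the same nr that we
skipped, which avoids the sync problem above.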
  
Ryan Roberts Dec. 20, 2023, 11:28 a.m. UTC | #15
On 20/12/2023 10:56, David Hildenbrand wrote:
> On 20.12.23 11:41, Ryan Roberts wrote:
>> On 20/12/2023 10:16, David Hildenbrand wrote:
>>> On 20.12.23 11:11, Ryan Roberts wrote:
>>>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of
>>>>>>>>>>>> memory,
>>>>>>>>>>>> all belonging to the same folio. A pte batch is then write-protected in
>>>>>>>>>>>> one go in the parent using the new helper, ptep_set_wrprotects() and is
>>>>>>>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>>>>>>>
>>>>>>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>>>>>>> fork, as it is about to add transparent support for the "contiguous
>>>>>>>>>>>> bit"
>>>>>>>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>>>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>>>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>>>>>>>> range in the first place.
>>>>>>>>>>>>
>>>>>>>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>>>>>>> folio case. By default, pte_batch_remaining() is compile constant 1,
>>>>>>>>>>>> which enables the compiler to simplify the extra loops that are added
>>>>>>>>>>>> for batching and produce code that is equivalent (and equally
>>>>>>>>>>>> performant) as the previous implementation.
>>>>>>>>>>>>
>>>>>>>>>>>> This change addresses the core-mm refactoring only and a separate
>>>>>>>>>>>> change
>>>>>>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>>>>>>
>>>>>>>>>>>> To ensure the arm64 is performant once implemented, this change is very
>>>>>>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>>>>>>
>>>>>>>>>>>> The following microbenchmark results demonstate that there is no
>>>>>>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>>>>>>> tight loop in a process with 1G of populated memory and the time for
>>>>>>>>>>>> the
>>>>>>>>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>>>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>>>>>>>> performed for case where 1G memory is comprised of order-0 folios and
>>>>>>>>>>>> case where comprised of pte-mapped order-9 folios. Negative is faster,
>>>>>>>>>>>> positive is slower, compared to baseline upon which the series is
>>>>>>>>>>>> based:
>>>>>>>>>>>>
>>>>>>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>>>>>>
>>>>>>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>>>>>>
>>>>>>>>>>>> Tested-by: John Hubbard <jhubbard@nvidia.com>
>>>>>>>>>>>> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>>>>>>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>>>>>>>>> ---
>>>>>>>>>>>>        include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>>>>>>        mm/memory.c             | 92
>>>>>>>>>>>> ++++++++++++++++++++++++++---------------
>>>>>>>>>>>>        2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>>>>>>
>>>>>>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>>>>>>> --- a/include/linux/pgtable.h
>>>>>>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>>>>>>        #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>>>>>>>        #endif
>>>>>>>>>>>>        +#ifndef pte_batch_remaining
>>>>>>>>>>>> +/**
>>>>>>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch
>>>>>>>>>>>> boundary.
>>>>>>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>>>>>>> + * @addr: Address of the first page.
>>>>>>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous
>>>>>>>>>>>> batch of
>>>>>>>>>>>> ptes.
>>>>>>>>>>>> + * In such cases, this function returns the remaining number of
>>>>>>>>>>>> pages to
>>>>>>>>>>>> the end
>>>>>>>>>>>> + * of the current batch, as defined by addr. This can be useful when
>>>>>>>>>>>> iterating
>>>>>>>>>>>> + * over ptes.
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>>>>>>> + */
>>>>>>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned
>>>>>>>>>>>> long
>>>>>>>>>>>> addr,
>>>>>>>>>>>> +                        unsigned long end)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +    return 1;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +#endif
>>>>>>>>>>>
>>>>>>>>>>> It's a shame we now lose the optimization for all other archtiectures.
>>>>>>>>>>>
>>>>>>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>>>>>>> require
>>>>>>>>>>> arch
>>>>>>>>>>> specifics?
>>>>>>>>>>
>>>>>>>>>> I tried a bunch of things but ultimately the way I've done it was the
>>>>>>>>>> only
>>>>>>>>>> way
>>>>>>>>>> to reduce the order-0 fork regression to 0.
>>>>>>>>>>
>>>>>>>>>> My original v3 posting was costing 5% extra and even my first attempt
>>>>>>>>>> at an
>>>>>>>>>> arch-specific version that didn't resolve to a compile-time constant 1
>>>>>>>>>> still
>>>>>>>>>> cost an extra 3%.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>>>>
>>>>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>>>>> * Check that PFN is consecutive
>>>>>>>>>>> * Check that all PFNs belong to the same folio
>>>>>>>>>>
>>>>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can get
>>>>>>>>>> the
>>>>>>>>>> regression under 4% with this. Further along the series I spent a lot of
>>>>>>>>>> time
>>>>>>>>>> having to fiddle with the arm64 implementation; every conditional and
>>>>>>>>>> every
>>>>>>>>>> memory read (even when in cache) was a problem. There is just so
>>>>>>>>>> little in
>>>>>>>>>> the
>>>>>>>>>> inner loop that every instruction matters. (At least on Ampere Altra and
>>>>>>>>>> Apple
>>>>>>>>>> M2).
>>>>>>>>>>
>>>>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>>>>>>> benefit to
>>>>>>>>>> order-9 is around 10% in my measurements. Personally though, I'd
>>>>>>>>>> prefer to
>>>>>>>>>> play
>>>>>>>>>> safe and ensure the common order-0 case doesn't regress, as you
>>>>>>>>>> previously
>>>>>>>>>> suggested.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>>>>>>>> series. I
>>>>>>>>> implemented very generic and simple batching for large folios (all PTE
>>>>>>>>> bits
>>>>>>>>> except the PFN have to match).
>>>>>>>>>
>>>>>>>>> Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R)
>>>>>>>>> Silver
>>>>>>>>> 4210R CPU.
>>>>>>>>>
>>>>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>>>>
>>>>>>>>> -> Around 1.7 % faster
>>>>>>>>>
>>>>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>>>>
>>>>>>>>> -> Around 36.3 % faster
>>>>>>>>
>>>>>>>> Well I guess that shows me :)
>>>>>>>>
>>>>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>>>>
>>>>>>>
>>>>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>>>>
>>>>>> I've just been trying to compile and noticed this. Will take a look at your
>>>>>> update.
>>>>>>
>>>>>> But upon review, I've noticed the part that I think makes this difficult for
>>>>>> arm64 with the contpte optimization; You are calling ptep_get() for every
>>>>>> pte in
>>>>>> the batch. While this is functionally correct, once arm64 has the contpte
>>>>>> changes, its ptep_get() has to read every pte in the contpte block in
>>>>>> order to
>>>>>> gather the access and dirty bits. So if your batching function ends up
>>>>>> wealking
>>>>>> a 16 entry contpte block, that will cause 16 x 16 reads, which kills
>>>>>> performance. That's why I added the arch-specific pte_batch_remaining()
>>>>>> function; this allows the core-mm to skip to the end of the contpte block and
>>>>>> avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s
>>>>>> instead
>>>>>> of 256.
>>>>>>
>>>>>> I considered making a ptep_get_noyoungdirty() variant, which would avoid the
>>>>>> bit
>>>>>> gathering. But we have a similar problem in zap_pte_range() and that function
>>>>>> needs the dirty bit to update the folio. So it doesn't work there. (see
>>>>>> patch 3
>>>>>> in my series).
>>>>>>
>>>>>> I guess you are going to say that we should combine both approaches, so that
>>>>>> your batching loop can skip forward an arch-provided number of ptes? That
>>>>>> would
>>>>>> certainly work, but feels like an orthogonal change to what I'm trying to
>>>>>> achieve :). Anyway, I'll spend some time playing with it today.
>>>>>
>>>>> You can overwrite the function or add special-casing internally, yes.
>>>>>
>>>>> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()"
>>>>> and it
>>>>> doesn't do any of that besides preparing for some arm64 work.
>>>>>
>>>>
>>>> Well it allows an arch to opt-in to batching. But I see your point.
>>>>
>>>> How do you want to handle your patches? Do you want to clean them up and I'll
>>>> base my stuff on top? Or do you want me to take them and sort it all out?
>>>
>>> Whatever you prefer, it was mostly a quick prototype to see if we can achieve
>>> decent performance.
>>
>> I'm about to run it on Altra and M2. But I assume it will show similar results.

OK results in, not looking great, which aligns with my previous experience. That
said, I'm seeing some "BUG: Bad page state in process gmain  pfn:12a094" so
perhaps these results are not valid...

100 iterations per run, 8 runs over 2 reboots. Positive is slower than baseline,
negative is faster:

Fork, order-0, Apple M2 VM:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.8% |
| hugetlb-rmap-cleanups |       1.3% |      2.0% |
| fork-batching         |       3.5% |      1.2% |

Fork, order-9, Apple M2 VM:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.8% |
| hugetlb-rmap-cleanups |       0.9% |      0.9% |
| fork-batching         |     -35.6% |      2.0% |

Fork, order-0, Ampere Altra:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.7% |
| hugetlb-rmap-cleanups |       3.2% |      0.7% |
| fork-batching         |       5.5% |      1.1% |

Fork, order-9, Ampere Altra:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.1% |
| hugetlb-rmap-cleanups |       0.5% |      0.1% |
| fork-batching         |     -10.3% |      0.1% |


>>
>>>
>>> I can fixup the arch thingies (most should be easy, some might require a custom
>>> pte_next_pfn())
>>
>> Well if you're happy to do that, great! I'm keen to get the contpte stuff into
>> v6.9 if at all possible, and I'm conscious that I'm introducing more dependencies
>> on you. And it's about to be holiday season...
> 
> There is still plenty of time for 6.9. I'll try to get the rmap cleanup finished
> asap.
> 
>>
>>> and you can focus on getting cont-pte sorted out on top [I
>>> assume that's what you want to work on :) ].
>>
>> That's certainly what I'm focussed on. But I'm happy to do whatever is required
>> to get it over the line. I guess I'll start by finishing my review of your v1
>> rmap stuff.
> 
> I'm planning on sending out a new version today.
> 
>>
>>>
>>>>
>>>> As I see it at the moment, I would keep your folio_pte_batch() always core, but
>>>> in subsequent patch, have it use pte_batch_remaining() (the arch function I
>>>> have
>>>> in my series, which defaults to one).
>>>
>>> Just double-checking, how would it use pte_batch_remaining() ?
>>
>> I think something like this would do it (untested):
>>
>> static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>         pte_t *start_ptep, pte_t pte, int max_nr)
>> {
>>     unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>>     pte_t expected_pte = pte_next_pfn(pte);
>>     pte_t *ptep = start_ptep;
>>     int nr;
>>
>>     for (;;) {
>>         nr = min(max_nr, pte_batch_remaining());
>>         ptep += nr;
>>         max_nr -= nr;
>>
>>         if (max_nr == 0)
>>             break;
>>
> 
> expected_pte would be messed up. We'd have to increment it a couple of times to
> make it match the nr of pages we're skipping.

Ahh, good point.

> 
>>         pte = ptep_get(ptep);
>>
>>         /* Do all PTE bits match, and the PFN is consecutive? */
>>         if (!pte_same(pte, expected_pte))
>>             break;
>>
>>         /*
>>          * Stop immediately once we reached the end of the folio. In
>>          * corner cases the next PFN might fall into a different
>>          * folio.
>>          */
>>         if (pte_pfn(pte) == folio_end_pfn - 1)
>>             break;
>>
>>         expected_pte = pte_next_pfn(expected_pte);
>>     }
>>
>>     return ptep - start_ptep;
>> }
>>
>> Of course, if we have the concept of a "pte batch" in the core-mm, then we might
>> want to call the arch's thing something different; pte span? pte cont? pte cont
>> batch? ...
> 
> So, you mean something like
> 
> /*
>  * The architecture might be able to tell us efficiently using cont-pte
>  * bits how many next PTEs are certainly compatible. So in that case,
>  * simply skip forward.
>  */
> nr = min(max_nr, nr_cont_ptes(ptep));
> ...
> 
> I wonder if something simple at the start of the function might be good enough
> for arm with cont-pte as a first step:
> 
> nr = nr_cont_ptes(start_ptep)
> if (nr != 1) {
>     return min(max_nr, nr);
> }

Yeah, that would probably work. But we need to be careful about the case where
start_ptep is in the middle of a contpte block (which can happen: due to some
vma splitting operations, a contpte block can end up spanning 2 vmas). So
nr_cont_ptes() needs to be spec'ed either to return the contpte size only when
start_ptep points to the front of the block, and 1 at all other times, or to
return the number of ptes remaining to the end of the block (as it does in my
v4).
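
To illustrate the second option, a rough sketch of that "remaining entries"
semantic (untested, and not the v4 code; it borrows CONT_PTES and pte_cont()
from the arm64 headers for illustration):

static inline unsigned int nr_cont_ptes(pte_t *ptep)
{
	/*
	 * A single raw read is enough to test the contiguous bit; we
	 * don't need the full ptep_get(), which gathers access/dirty
	 * across the whole block.
	 */
	pte_t pte = READ_ONCE(*ptep);
	unsigned long idx;

	if (!pte_cont(pte))
		return 1;

	/* Offset of ptep within its naturally-aligned contpte block. */
	idx = ((unsigned long)ptep / sizeof(pte_t)) % CONT_PTES;

	return CONT_PTES - idx;
}

That way a caller that lands mid-block only ever skips to the end of that
block, never past it.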

But I guess we need to get to the bottom of my arm64 perf numbers first... I'll
debug those bugs and rerun.

> 
> Which would get optimized out on other architectures.
> 
>
  
David Hildenbrand Dec. 20, 2023, 11:36 a.m. UTC | #16
On 20.12.23 12:28, Ryan Roberts wrote:
> On 20/12/2023 10:56, David Hildenbrand wrote:
>> On 20.12.23 11:41, Ryan Roberts wrote:
>>> On 20/12/2023 10:16, David Hildenbrand wrote:
>>>> On 20.12.23 11:11, Ryan Roberts wrote:
>>>>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>>>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of
>>>>>>>>>>>>> memory,
>>>>>>>>>>>>> all belonging to the same folio. A pte batch is then write-protected in
>>>>>>>>>>>>> one go in the parent using the new helper, ptep_set_wrprotects() and is
>>>>>>>>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>>>>>>>>
>>>>>>>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>>>>>>>> fork, as it is about to add transparent support for the "contiguous
>>>>>>>>>>>>> bit"
>>>>>>>>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>>>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>>>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>>>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>>>>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>>>>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>>>>>>>>> range in the first place.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>>>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>>>>>>>> folio case. By default, pte_batch_remaining() is compile constant 1,
>>>>>>>>>>>>> which enables the compiler to simplify the extra loops that are added
>>>>>>>>>>>>> for batching and produce code that is equivalent (and equally
>>>>>>>>>>>>> performant) as the previous implementation.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This change addresses the core-mm refactoring only and a separate
>>>>>>>>>>>>> change
>>>>>>>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>>>>>>>
>>>>>>>>>>>>> To ensure the arm64 is performant once implemented, this change is very
>>>>>>>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The following microbenchmark results demonstate that there is no
>>>>>>>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>>>>>>>> tight loop in a process with 1G of populated memory and the time for
>>>>>>>>>>>>> the
>>>>>>>>>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>>>>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>>>>>>>>> performed for case where 1G memory is comprised of order-0 folios and
>>>>>>>>>>>>> case where comprised of pte-mapped order-9 folios. Negative is faster,
>>>>>>>>>>>>> positive is slower, compared to baseline upon which the series is
>>>>>>>>>>>>> based:
>>>>>>>>>>>>>
>>>>>>>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>>>>>>>
>>>>>>>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tested-by: John Hubbard <jhubbard@nvidia.com>
>>>>>>>>>>>>> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>>>>>>>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>         include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>>>>>>>         mm/memory.c             | 92
>>>>>>>>>>>>> ++++++++++++++++++++++++++---------------
>>>>>>>>>>>>>         2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>>>>>>>> --- a/include/linux/pgtable.h
>>>>>>>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>>>>>>>         #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>>>>>>>>         #endif
>>>>>>>>>>>>>         +#ifndef pte_batch_remaining
>>>>>>>>>>>>> +/**
>>>>>>>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch
>>>>>>>>>>>>> boundary.
>>>>>>>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>>>>>>>> + * @addr: Address of the first page.
>>>>>>>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous
>>>>>>>>>>>>> batch of
>>>>>>>>>>>>> ptes.
>>>>>>>>>>>>> + * In such cases, this function returns the remaining number of
>>>>>>>>>>>>> pages to
>>>>>>>>>>>>> the end
>>>>>>>>>>>>> + * of the current batch, as defined by addr. This can be useful when
>>>>>>>>>>>>> iterating
>>>>>>>>>>>>> + * over ptes.
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>>>>>>>> + */
>>>>>>>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned
>>>>>>>>>>>>> long
>>>>>>>>>>>>> addr,
>>>>>>>>>>>>> +                        unsigned long end)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +    return 1;
>>>>>>>>>>>>> +}
>>>>>>>>>>>>> +#endif
>>>>>>>>>>>>
>>>>>>>>>>>> It's a shame we now lose the optimization for all other archtiectures.
>>>>>>>>>>>>
>>>>>>>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>>>>>>>> require
>>>>>>>>>>>> arch
>>>>>>>>>>>> specifics?
>>>>>>>>>>>
>>>>>>>>>>> I tried a bunch of things but ultimately the way I've done it was the
>>>>>>>>>>> only
>>>>>>>>>>> way
>>>>>>>>>>> to reduce the order-0 fork regression to 0.
>>>>>>>>>>>
>>>>>>>>>>> My original v3 posting was costing 5% extra and even my first attempt
>>>>>>>>>>> at an
>>>>>>>>>>> arch-specific version that didn't resolve to a compile-time constant 1
>>>>>>>>>>> still
>>>>>>>>>>> cost an extra 3%.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>>>>>
>>>>>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>>>>>> * Check that PFN is consecutive
>>>>>>>>>>>> * Check that all PFNs belong to the same folio
>>>>>>>>>>>
>>>>>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can get
>>>>>>>>>>> the
>>>>>>>>>>> regression under 4% with this. Further along the series I spent a lot of
>>>>>>>>>>> time
>>>>>>>>>>> having to fiddle with the arm64 implementation; every conditional and
>>>>>>>>>>> every
>>>>>>>>>>> memory read (even when in cache) was a problem. There is just so
>>>>>>>>>>> little in
>>>>>>>>>>> the
>>>>>>>>>>> inner loop that every instruction matters. (At least on Ampere Altra and
>>>>>>>>>>> Apple
>>>>>>>>>>> M2).
>>>>>>>>>>>
>>>>>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>>>>>>>> benefit to
>>>>>>>>>>> order-9 is around 10% in my measurements. Personally though, I'd
>>>>>>>>>>> prefer to
>>>>>>>>>>> play
>>>>>>>>>>> safe and ensure the common order-0 case doesn't regress, as you
>>>>>>>>>>> previously
>>>>>>>>>>> suggested.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>>>>>>>>> series. I
>>>>>>>>>> implemented very generic and simple batching for large folios (all PTE
>>>>>>>>>> bits
>>>>>>>>>> except the PFN have to match).
>>>>>>>>>>
>>>>>>>>>> Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R)
>>>>>>>>>> Silver
>>>>>>>>>> 4210R CPU.
>>>>>>>>>>
>>>>>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>>>>>
>>>>>>>>>> -> Around 1.7 % faster
>>>>>>>>>>
>>>>>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>>>>>
>>>>>>>>>> -> Around 36.3 % faster
>>>>>>>>>
>>>>>>>>> Well I guess that shows me :)
>>>>>>>>>
>>>>>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>>>>>
>>>>>>>>
>>>>>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>>>>>
>>>>>>> I've just been trying to compile and noticed this. Will take a look at your
>>>>>>> update.
>>>>>>>
>>>>>>> But upon review, I've noticed the part that I think makes this difficult for
>>>>>>> arm64 with the contpte optimization; You are calling ptep_get() for every
>>>>>>> pte in
>>>>>>> the batch. While this is functionally correct, once arm64 has the contpte
>>>>>>> changes, its ptep_get() has to read every pte in the contpte block in
>>>>>>> order to
>>>>>>> gather the access and dirty bits. So if your batching function ends up
>>>>>>> wealking
>>>>>>> a 16 entry contpte block, that will cause 16 x 16 reads, which kills
>>>>>>> performance. That's why I added the arch-specific pte_batch_remaining()
>>>>>>> function; this allows the core-mm to skip to the end of the contpte block and
>>>>>>> avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s
>>>>>>> instead
>>>>>>> of 256.
>>>>>>>
>>>>>>> I considered making a ptep_get_noyoungdirty() variant, which would avoid the
>>>>>>> bit
>>>>>>> gathering. But we have a similar problem in zap_pte_range() and that function
>>>>>>> needs the dirty bit to update the folio. So it doesn't work there. (see
>>>>>>> patch 3
>>>>>>> in my series).
>>>>>>>
>>>>>>> I guess you are going to say that we should combine both approaches, so that
>>>>>>> your batching loop can skip forward an arch-provided number of ptes? That
>>>>>>> would
>>>>>>> certainly work, but feels like an orthogonal change to what I'm trying to
>>>>>>> achieve :). Anyway, I'll spend some time playing with it today.
>>>>>>
>>>>>> You can overwrite the function or add special-casing internally, yes.
>>>>>>
>>>>>> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()"
>>>>>> and it
>>>>>> doesn't do any of that besides preparing for some arm64 work.
>>>>>>
>>>>>
>>>>> Well it allows an arch to opt-in to batching. But I see your point.
>>>>>
>>>>> How do you want to handle your patches? Do you want to clean them up and I'll
>>>>> base my stuff on top? Or do you want me to take them and sort it all out?
>>>>
>>>> Whatever you prefer, it was mostly a quick prototype to see if we can achieve
>>>> decent performance.
>>>
>>> I'm about to run it on Altra and M2. But I assume it will show similar results.
> 
> OK results in, not looking great, which aligns with my previous experience. That
> said, I'm seeing some "BUG: Bad page state in process gmain  pfn:12a094" so
> perhaps these results are not valid...

I didn't see that so far on x86, maybe related to the PFN fixup?

> 
> 100 iterations per run, 8 runs over 2 reboots. Positive is slower than baseline,
> negative is faster:
> 
> Fork, order-0, Apple M2 VM:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.8% |
> | hugetlb-rmap-cleanups |       1.3% |      2.0% |
> | fork-batching         |       3.5% |      1.2% |
> 
> Fork, order-9, Apple M2 VM:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.8% |
> | hugetlb-rmap-cleanups |       0.9% |      0.9% |
> | fork-batching         |     -35.6% |      2.0% |
> 
> Fork, order-0, Ampere Altra:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.7% |
> | hugetlb-rmap-cleanups |       3.2% |      0.7% |
> | fork-batching         |       5.5% |      1.1% |
> 
> Fork, order-9, Ampere Altra:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.1% |
> | hugetlb-rmap-cleanups |       0.5% |      0.1% |
> | fork-batching         |     -10.3% |      0.1% |

It's weird that an effective folio_test_large() should affect 
performance that much. So far I haven't seen that behavior on x86; I 
wonder why arm64 should behave differently here (also for the rmap 
cleanups). Code layout/size?

I'll dig it up again and test on x86 once more.

[...]

> 
> Yeah, that would probably work. But we need to be careful about the case where
> start_ptep is in the middle of a contpte block (which can happen: due to some
> vma splitting operations, a contpte block can end up spanning 2 vmas). So
> nr_cont_ptes() needs to be spec'ed either to return the contpte size only when
> start_ptep points to the front of the block, and 1 at all other times, or to
> return the number of ptes remaining to the end of the block (as it does in my
> v4).
> 
> But I guess we need to get to the bottom of my arm64 perf numbers first... I'll
> debug those bugs and rerun.

Yes, I'll dig into it on x86 once more.
  
Ryan Roberts Dec. 20, 2023, 11:51 a.m. UTC | #17
On 20/12/2023 11:36, David Hildenbrand wrote:
> On 20.12.23 12:28, Ryan Roberts wrote:
>> On 20/12/2023 10:56, David Hildenbrand wrote:
>>> On 20.12.23 11:41, Ryan Roberts wrote:
>>>> On 20/12/2023 10:16, David Hildenbrand wrote:
>>>>> On 20.12.23 11:11, Ryan Roberts wrote:
>>>>>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>>>>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>>>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>>>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>>>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of
>>>>>>>>>>>>>> memory,
>>>>>>>>>>>>>> all belonging to the same folio. A pte batch is then
>>>>>>>>>>>>>> write-protected in
>>>>>>>>>>>>>> one go in the parent using the new helper, ptep_set_wrprotects()
>>>>>>>>>>>>>> and is
>>>>>>>>>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>>>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>>>>>>>>> fork, as it is about to add transparent support for the "contiguous
>>>>>>>>>>>>>> bit"
>>>>>>>>>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>>>>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>>>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>>>>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>>>>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>>>>>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>>>>>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>>>>>>>>>> range in the first place.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>>>>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>>>>>>>>> folio case. By default, pte_batch_remaining() is compile constant 1,
>>>>>>>>>>>>>> which enables the compiler to simplify the extra loops that are added
>>>>>>>>>>>>>> for batching and produce code that is equivalent (and equally
>>>>>>>>>>>>>> performant) as the previous implementation.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This change addresses the core-mm refactoring only and a separate
>>>>>>>>>>>>>> change
>>>>>>>>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>>>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>>>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> To ensure the arm64 is performant once implemented, this change is
>>>>>>>>>>>>>> very
>>>>>>>>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The following microbenchmark results demonstate that there is no
>>>>>>>>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>>>>>>>>> tight loop in a process with 1G of populated memory and the time for
>>>>>>>>>>>>>> the
>>>>>>>>>>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>>>>>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>>>>>>>>>> performed for case where 1G memory is comprised of order-0 folios and
>>>>>>>>>>>>>> case where comprised of pte-mapped order-9 folios. Negative is
>>>>>>>>>>>>>> faster,
>>>>>>>>>>>>>> positive is slower, compared to baseline upon which the series is
>>>>>>>>>>>>>> based:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>>>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>>>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tested-by: John Hubbard <jhubbard@nvidia.com>
>>>>>>>>>>>>>> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>>>>>>>>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>         include/linux/pgtable.h | 80
>>>>>>>>>>>>>> +++++++++++++++++++++++++++++++++++
>>>>>>>>>>>>>>         mm/memory.c             | 92
>>>>>>>>>>>>>> ++++++++++++++++++++++++++---------------
>>>>>>>>>>>>>>         2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>>>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>>>>>>>>> --- a/include/linux/pgtable.h
>>>>>>>>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>>>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>>>>>>>>         #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>>>>>>>>>         #endif
>>>>>>>>>>>>>>         +#ifndef pte_batch_remaining
>>>>>>>>>>>>>> +/**
>>>>>>>>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch
>>>>>>>>>>>>>> boundary.
>>>>>>>>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>>>>>>>>> + * @addr: Address of the first page.
>>>>>>>>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous
>>>>>>>>>>>>>> batch of
>>>>>>>>>>>>>> ptes.
>>>>>>>>>>>>>> + * In such cases, this function returns the remaining number of
>>>>>>>>>>>>>> pages to
>>>>>>>>>>>>>> the end
>>>>>>>>>>>>>> + * of the current batch, as defined by addr. This can be useful when
>>>>>>>>>>>>>> iterating
>>>>>>>>>>>>>> + * over ptes.
>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>> + * May be overridden by the architecture, else batch size is
>>>>>>>>>>>>>> always 1.
>>>>>>>>>>>>>> + */
>>>>>>>>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned
>>>>>>>>>>>>>> long
>>>>>>>>>>>>>> addr,
>>>>>>>>>>>>>> +                        unsigned long end)
>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>> +    return 1;
>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>> +#endif
>>>>>>>>>>>>>
>>>>>>>>>>>>> It's a shame we now lose the optimization for all other archtiectures.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>>>>>>>>> require
>>>>>>>>>>>>> arch
>>>>>>>>>>>>> specifics?
>>>>>>>>>>>>
>>>>>>>>>>>> I tried a bunch of things but ultimately the way I've done it was the
>>>>>>>>>>>> only
>>>>>>>>>>>> way
>>>>>>>>>>>> to reduce the order-0 fork regression to 0.
>>>>>>>>>>>>
>>>>>>>>>>>> My original v3 posting was costing 5% extra and even my first attempt
>>>>>>>>>>>> at an
>>>>>>>>>>>> arch-specific version that didn't resolve to a compile-time constant 1
>>>>>>>>>>>> still
>>>>>>>>>>>> cost an extra 3%.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>>>>>>
>>>>>>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>>>>>>> * Check that PFN is consecutive
>>>>>>>>>>>>> * Check that all PFNs belong to the same folio
>>>>>>>>>>>>
>>>>>>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can get
>>>>>>>>>>>> the
>>>>>>>>>>>> regression under 4% with this. Further along the series I spent a
>>>>>>>>>>>> lot of
>>>>>>>>>>>> time
>>>>>>>>>>>> having to fiddle with the arm64 implementation; every conditional and
>>>>>>>>>>>> every
>>>>>>>>>>>> memory read (even when in cache) was a problem. There is just so
>>>>>>>>>>>> little in
>>>>>>>>>>>> the
>>>>>>>>>>>> inner loop that every instruction matters. (At least on Ampere Altra
>>>>>>>>>>>> and
>>>>>>>>>>>> Apple
>>>>>>>>>>>> M2).
>>>>>>>>>>>>
>>>>>>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>>>>>>>>> benefit to
>>>>>>>>>>>> order-9 is around 10% in my measurements. Personally though, I'd
>>>>>>>>>>>> prefer to
>>>>>>>>>>>> play
>>>>>>>>>>>> safe and ensure the common order-0 case doesn't regress, as you
>>>>>>>>>>>> previously
>>>>>>>>>>>> suggested.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>>>>>>>>>> series. I
>>>>>>>>>>> implemented very generic and simple batching for large folios (all PTE
>>>>>>>>>>> bits
>>>>>>>>>>> except the PFN have to match).
>>>>>>>>>>>
>>>>>>>>>>> Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R)
>>>>>>>>>>> Silver
>>>>>>>>>>> 4210R CPU.
>>>>>>>>>>>
>>>>>>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>>>>>>
>>>>>>>>>>> -> Around 1.7 % faster
>>>>>>>>>>>
>>>>>>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>>>>>>
>>>>>>>>>>> -> Around 36.3 % faster
>>>>>>>>>>
>>>>>>>>>> Well I guess that shows me :)
>>>>>>>>>>
>>>>>>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>>>>>>
>>>>>>>> I've just been trying to compile and noticed this. Will take a look at your
>>>>>>>> update.
>>>>>>>>
>>>>>>>> But upon review, I've noticed the part that I think makes this difficult
>>>>>>>> for
>>>>>>>> arm64 with the contpte optimization; You are calling ptep_get() for every
>>>>>>>> pte in
>>>>>>>> the batch. While this is functionally correct, once arm64 has the contpte
>>>>>>>> changes, its ptep_get() has to read every pte in the contpte block in
>>>>>>>> order to
>>>>>>>> gather the access and dirty bits. So if your batching function ends up
>>>>>>>> wealking
>>>>>>>> a 16 entry contpte block, that will cause 16 x 16 reads, which kills
>>>>>>>> performance. That's why I added the arch-specific pte_batch_remaining()
>>>>>>>> function; this allows the core-mm to skip to the end of the contpte
>>>>>>>> block and
>>>>>>>> avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s
>>>>>>>> instead
>>>>>>>> of 256.
>>>>>>>>
>>>>>>>> I considered making a ptep_get_noyoungdirty() variant, which would
>>>>>>>> avoid the bit gathering. But we have a similar problem in
>>>>>>>> zap_pte_range() and that function needs the dirty bit to update the
>>>>>>>> folio. So it doesn't work there. (see patch 3 in my series).
>>>>>>>>
>>>>>>>> I guess you are going to say that we should combine both approaches, so
>>>>>>>> that your batching loop can skip forward an arch-provided number of
>>>>>>>> ptes? That would certainly work, but feels like an orthogonal change to
>>>>>>>> what I'm trying to achieve :). Anyway, I'll spend some time playing
>>>>>>>> with it today.
>>>>>>>
>>>>>>> You can overwrite the function or add special-casing internally, yes.
>>>>>>>
>>>>>>> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()"
>>>>>>> and it
>>>>>>> doesn't do any of that besides preparing for some arm64 work.
>>>>>>>
>>>>>>
>>>>>> Well it allows an arch to opt-in to batching. But I see your point.
>>>>>>
>>>>>> How do you want to handle your patches? Do you want to clean them up and I'll
>>>>>> base my stuff on top? Or do you want me to take them and sort it all out?
>>>>>
>>>>> Whatever you prefer, it was mostly a quick prototype to see if we can achieve
>>>>> decent performance.
>>>>
>>>> I'm about to run it on Altra and M2. But I assume it will show similar results.
>>
>> OK results in, not looking great, which aligns with my previous experience. That
>> said, I'm seeing some "BUG: Bad page state in process gmain  pfn:12a094" so
>> perhaps these results are not valid...
> 
> I didn't see that so far on x86, maybe related to the PFN fixup?

All I've done is define PFN_PTE_SHIFT for arm64 on top of your latest patch:

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b19a8aee684c..9eb0fd693df9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -359,6 +359,8 @@ static inline void set_ptes(struct mm_struct *mm,
 }
 #define set_ptes set_ptes
 
+#define PFN_PTE_SHIFT          PAGE_SHIFT
+
 /*
  * Huge pte definitions.
  */


As an aside, I think there is a bug in arm64's set_ptes() for PA > 48-bit case. But that won't affect this.


With VM_DEBUG on, this is the first warning I see during boot:


[    0.278110] page:00000000c7ced4e8 refcount:12 mapcount:0 mapping:00000000b2f9739b index:0x1a8 pfn:0x1bff30
[    0.278742] head:00000000c7ced4e8 order:2 entire_mapcount:0 nr_pages_mapped:2 pincount:0
[    0.279247] memcg:ffff1a678008a000
[    0.279518] aops:xfs_address_space_operations ino:b0f70c dentry name:"systemd"
[    0.279746] flags: 0xbfffc0000008068(uptodate|lru|private|head|node=0|zone=2|lastcpupid=0xffff)
[    0.280003] page_type: 0xffffffff()
[    0.280110] raw: 0bfffc0000008068 fffffc699effcb08 fffffc699effcd08 ffff1a678980a6b0
[    0.280338] raw: 00000000000001a8 ffff1a678a0f0200 0000000cffffffff ffff1a678008a000
[    0.280564] page dumped because: VM_WARN_ON_FOLIO((_Generic((page + nr_pages - 1), const struct page *: (const struct folio *)_compound_head(page + nr_pages - 1), struct page *: (struct folio *)_compound_head(page + nr_pages - 1))) != folio)
[    0.281196] ------------[ cut here ]------------
[    0.281349] WARNING: CPU: 2 PID: 1 at include/linux/rmap.h:208 __folio_rmap_sanity_checks.constprop.0+0x168/0x188
[    0.281650] Modules linked in:
[    0.281752] CPU: 2 PID: 1 Comm: systemd Not tainted 6.7.0-rc4-00345-gdb45492bba9d #7
[    0.281959] Hardware name: linux,dummy-virt (DT)
[    0.282079] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
[    0.282260] pc : __folio_rmap_sanity_checks.constprop.0+0x168/0x188
[    0.282421] lr : __folio_rmap_sanity_checks.constprop.0+0x168/0x188
[    0.282583] sp : ffff80008007b9e0
[    0.282670] x29: ffff80008007b9e0 x28: 0000aaaacbecb000 x27: fffffc699effccc0
[    0.282872] x26: 00600001bff33fc3 x25: 0000000000000001 x24: ffff1a678a302228
[    0.283062] x23: ffff1a678a326658 x22: 0000000000000000 x21: 0000000000000004
[    0.283246] x20: fffffc699effccc0 x19: fffffc699effcc00 x18: 0000000000000000
[    0.283435] x17: 3736613166666666 x16: 2066666666666666 x15: 0720072007200720
[    0.283679] x14: 0720072007200720 x13: 0720072007200720 x12: 0720072007200720
[    0.283933] x11: 0720072007200720 x10: ffffa89ecd79ba50 x9 : ffffa89ecab23054
[    0.284214] x8 : ffffa89ecd743a50 x7 : ffffa89ecd79ba50 x6 : 0000000000000000
[    0.284545] x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
[    0.284875] x2 : 0000000000000000 x1 : ffff1a6781420000 x0 : 00000000000000e5
[    0.285205] Call trace:
[    0.285320]  __folio_rmap_sanity_checks.constprop.0+0x168/0x188
[    0.285594]  copy_page_range+0x1180/0x1328
[    0.285788]  copy_process+0x1b04/0x1db8
[    0.285933]  kernel_clone+0x94/0x3f8
[    0.286078]  __do_sys_clone+0x58/0x88
[    0.286247]  __arm64_sys_clone+0x28/0x40
[    0.286430]  invoke_syscall+0x50/0x128
[    0.286607]  el0_svc_common.constprop.0+0x48/0xf0
[    0.286826]  do_el0_svc+0x24/0x38
[    0.286983]  el0_svc+0x34/0xb8
[    0.287142]  el0t_64_sync_handler+0xc0/0xc8
[    0.287339]  el0t_64_sync+0x190/0x198
[    0.287514] ---[ end trace 0000000000000000 ]---


> 
>>
>> 100 iterations per run, 8 runs over 2 reboots. Positive is slower than baseline,
>> negative is faster:
>>
>> Fork, order-0, Apple M2 VM:
>> | kernel                |   mean_rel |   std_rel |
>> |:----------------------|-----------:|----------:|
>> | mm-unstable           |       0.0% |      0.8% |
>> | hugetlb-rmap-cleanups |       1.3% |      2.0% |
>> | fork-batching         |       3.5% |      1.2% |
>>
>> Fork, order-9, Apple M2 VM:
>> | kernel                |   mean_rel |   std_rel |
>> |:----------------------|-----------:|----------:|
>> | mm-unstable           |       0.0% |      0.8% |
>> | hugetlb-rmap-cleanups |       0.9% |      0.9% |
>> | fork-batching         |     -35.6% |      2.0% |
>>
>> Fork, order-0, Ampere Altra:
>> | kernel                |   mean_rel |   std_rel |
>> |:----------------------|-----------:|----------:|
>> | mm-unstable           |       0.0% |      0.7% |
>> | hugetlb-rmap-cleanups |       3.2% |      0.7% |
>> | fork-batching         |       5.5% |      1.1% |
>>
>> Fork, order-9, Ampere Altra:
>> | kernel                |   mean_rel |   std_rel |
>> |:----------------------|-----------:|----------:|
>> | mm-unstable           |       0.0% |      0.1% |
>> | hugetlb-rmap-cleanups |       0.5% |      0.1% |
>> | fork-batching         |     -10.3% |      0.1% |
> 
> It's weird that an effective folio_test_large() should affect performance that
> much. So far I haven't seen that behavior on x86, I wonder why arm64 should
> behave here differently (also for the rmap cleanups). Code layout/size?
> 
> I'll dig it up again and test on x86 once more.
> 
> [...]
> 
>>
>> Yeah that would probably work. But we need to be careful for the case where
>> start_ptep is in the middle of a contpte block (which can happen - due to some
>> vma splitting operations, we can have a contpte block that spans 2 vmas). So
>> nr_cont_ptes() needs to either be spec'ed to only return the contpte size if
>> start_ptep is pointing to the front of the block, and all other times, return 1,
>> or it needs to return the number of ptes remaining to the end of the block (as
>> it does in my v4).
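
To make that concrete, a sketch of the "remaining to the end of the block"
flavour (illustrative only; it reuses arm64's existing CONT_PTE_SIZE,
CONT_PTE_MASK and pte_valid_cont() definitions, and is not the code from this
series):

/*
 * Sketch only: pages from addr to the end of the current contpte block,
 * clamped to end. A start_ptep in the middle of a block just gets the
 * remainder of that block; non-contiguous entries batch one at a time.
 */
static inline unsigned int pte_batch_remaining_sketch(pte_t pte,
				unsigned long addr, unsigned long end)
{
	unsigned long next;

	if (!pte_valid_cont(pte))
		return 1;

	next = min((addr & CONT_PTE_MASK) + CONT_PTE_SIZE, end);
	return (next - addr) >> PAGE_SHIFT;
}
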
>>
>> But I guess we need to get to the bottom of my arm64 perf numbers first... I'll
>> debug those bugs and rerun.
> 
> Yes, I'll dig into it on x86 once more.
>
  
David Hildenbrand Dec. 20, 2023, 11:58 a.m. UTC | #18
On 20.12.23 12:51, Ryan Roberts wrote:
> On 20/12/2023 11:36, David Hildenbrand wrote:
>> On 20.12.23 12:28, Ryan Roberts wrote:
>>> [...]
>>> OK results in, not looking great, which aligns with my previous experience. That
>>> said, I'm seeing some "BUG: Bad page state in process gmain  pfn:12a094" so
>>> perhaps these results are not valid...
>>
>> I didn't see that so far on x86, maybe related to the PFN fixup?
> 
> All I've done is define PFN_PTE_SHIFT for arm64 on top of your latest patch:
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index b19a8aee684c..9eb0fd693df9 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -359,6 +359,8 @@ static inline void set_ptes(struct mm_struct *mm,
>   }
>   #define set_ptes set_ptes
>   
> +#define PFN_PTE_SHIFT          PAGE_SHIFT
> +
>   /*
>    * Huge pte definitions.
>    */
> 
> 
> As an aside, I think there is a bug in arm64's set_ptes() for PA > 48-bit case. But that won't affect this.
> 
> 
> With VM_DEBUG on, this is the first warning I see during boot:
> 
> 
> [    0.278110] page:00000000c7ced4e8 refcount:12 mapcount:0 mapping:00000000b2f9739b index:0x1a8 pfn:0x1bff30
> [    0.278742] head:00000000c7ced4e8 order:2 entire_mapcount:0 nr_pages_mapped:2 pincount:0

^ Ah, you are running with mTHP. Let me play with that.

The warning would indicate that nr is too large (or something else is 
messed up).
  
Ryan Roberts Dec. 20, 2023, 12:04 p.m. UTC | #19
On 20/12/2023 11:58, David Hildenbrand wrote:
> On 20.12.23 12:51, Ryan Roberts wrote:
>> [...]
>> With VM_DEBUG on, this is the first warning I see during boot:
>>
>>
>> [    0.278110] page:00000000c7ced4e8 refcount:12 mapcount:0
>> mapping:00000000b2f9739b index:0x1a8 pfn:0x1bff30
>> [    0.278742] head:00000000c7ced4e8 order:2 entire_mapcount:0
>> nr_pages_mapped:2 pincount:0
> 
> ^ Ah, you are running with mTHP. Let me play with that.

Err... It's in mm-unstable, but I'm not enabling any sizes. It should only be set
up for PMD-sized THP.

I am using XFS though, so I imagine it's a file folio.

I've rebased your rmap cleanup and fork batching to the version of mm-unstable
that I was doing all my other testing with so I could compare numbers. But it's
not very old (perhaps a week). All the patches applied without any conflict.

> 
> The warning would indicate that nr is too large (or something else is messed up).
>
  
David Hildenbrand Dec. 20, 2023, 12:08 p.m. UTC | #20
On 20.12.23 13:04, Ryan Roberts wrote:
> On 20/12/2023 11:58, David Hildenbrand wrote:
>> On 20.12.23 12:51, Ryan Roberts wrote:
>>> [...]
>>> With VM_DEBUG on, this is the first warning I see during boot:
>>>
>>>
>>> [    0.278110] page:00000000c7ced4e8 refcount:12 mapcount:0
>>> mapping:00000000b2f9739b index:0x1a8 pfn:0x1bff30
>>> [    0.278742] head:00000000c7ced4e8 order:2 entire_mapcount:0
>>> nr_pages_mapped:2 pincount:0
>>
>> ^ Ah, you are running with mTHP. Let me play with that.
> 
> Err... It's in mm-unstable, but I'm not enabling any sizes. It should only be set
> up for PMD-sized THP.
> 
> I am using XFS though, so I imagine it's a file folio.
> 

Right, that's even weirder :)

I should have that in my environment as well. Let me dig.
  
David Hildenbrand Dec. 20, 2023, 12:54 p.m. UTC | #21
On 20.12.23 13:04, Ryan Roberts wrote:
> On 20/12/2023 11:58, David Hildenbrand wrote:
>> On 20.12.23 12:51, Ryan Roberts wrote:
>>> On 20/12/2023 11:36, David Hildenbrand wrote:
>>>> On 20.12.23 12:28, Ryan Roberts wrote:
>>>>> On 20/12/2023 10:56, David Hildenbrand wrote:
>>>>>> On 20.12.23 11:41, Ryan Roberts wrote:
>>>>>>> On 20/12/2023 10:16, David Hildenbrand wrote:
>>>>>>>> On 20.12.23 11:11, Ryan Roberts wrote:
>>>>>>>>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>>>>>>>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>>>>>>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>>>>>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>>>>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>>>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>>>>>>>>>> [...]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> It's a shame we now lose the optimization for all other
>>>>>>>>>>>>>>>> archtiectures.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>>>>>>>>>>>> require
>>>>>>>>>>>>>>>> arch
>>>>>>>>>>>>>>>> specifics?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I tried a bunch of things but ultimately the way I've done it was the
>>>>>>>>>>>>>>> only
>>>>>>>>>>>>>>> way
>>>>>>>>>>>>>>> to reduce the order-0 fork regression to 0.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> My original v3 posting was costing 5% extra and even my first attempt
>>>>>>>>>>>>>>> at an
>>>>>>>>>>>>>>> arch-specific version that didn't resolve to a compile-time
>>>>>>>>>>>>>>> constant 1
>>>>>>>>>>>>>>> still
>>>>>>>>>>>>>>> cost an extra 3%.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>>>>>>>>>> * Check that PFN is consecutive
>>>>>>>>>>>>>>>> * Check that all PFNs belong to the same folio
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can
>>>>>>>>>>>>>>> get
>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>> regression under 4% with this. Further along the series I spent a
>>>>>>>>>>>>>>> lot of
>>>>>>>>>>>>>>> time
>>>>>>>>>>>>>>> having to fiddle with the arm64 implementation; every conditional and
>>>>>>>>>>>>>>> every
>>>>>>>>>>>>>>> memory read (even when in cache) was a problem. There is just so
>>>>>>>>>>>>>>> little in
>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>> inner loop that every instruction matters. (At least on Ampere Altra
>>>>>>>>>>>>>>> and
>>>>>>>>>>>>>>> Apple
>>>>>>>>>>>>>>> M2).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>>>>>>>>>>>> benefit to
>>>>>>>>>>>>>>> order-9 is around 10% in my measurements. Personally though, I'd
>>>>>>>>>>>>>>> prefer to
>>>>>>>>>>>>>>> play
>>>>>>>>>>>>>>> safe and ensure the common order-0 case doesn't regress, as you
>>>>>>>>>>>>>>> previously
>>>>>>>>>>>>>>> suggested.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>>>>>>>>>>>>> series. I
>>>>>>>>>>>>>> implemented very generic and simple batching for large folios (all PTE
>>>>>>>>>>>>>> bits
>>>>>>>>>>>>>> except the PFN have to match).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Some very quick testing (don't trust each last % ) on Intel(R) Xeon(R)
>>>>>>>>>>>>>> Silver
>>>>>>>>>>>>>> 4210R CPU.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -> Around 1.7 % faster
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -> Around 36.3 % faster
>>>>>>>>>>>>>
>>>>>>>>>>>>> Well I guess that shows me :)
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>>>>>>>>>
>>>>>>>>>>> I've just been trying to compile and noticed this. Will take a look at
>>>>>>>>>>> your
>>>>>>>>>>> update.
>>>>>>>>>>>
>>>>>>>>>>> But upon review, I've noticed the part that I think makes this difficult
>>>>>>>>>>> for
>>>>>>>>>>> arm64 with the contpte optimization; You are calling ptep_get() for every
>>>>>>>>>>> pte in
>>>>>>>>>>> the batch. While this is functionally correct, once arm64 has the contpte
>>>>>>>>>>> changes, its ptep_get() has to read every pte in the contpte block in
>>>>>>>>>>> order to
>>>>>>>>>>> gather the access and dirty bits. So if your batching function ends up
>>>>>>>>>>> wealking
>>>>>>>>>>> a 16 entry contpte block, that will cause 16 x 16 reads, which kills
>>>>>>>>>>> performance. That's why I added the arch-specific pte_batch_remaining()
>>>>>>>>>>> function; this allows the core-mm to skip to the end of the contpte
>>>>>>>>>>> block and
>>>>>>>>>>> avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s
>>>>>>>>>>> instead
>>>>>>>>>>> of 256.
>>>>>>>>>>>
>>>>>>>>>>> I considered making a ptep_get_noyoungdirty() variant, which would avoid
>>>>>>>>>>> the
>>>>>>>>>>> bit
>>>>>>>>>>> gathering. But we have a similar problem in zap_pte_range() and that
>>>>>>>>>>> function
>>>>>>>>>>> needs the dirty bit to update the folio. So it doesn't work there. (see
>>>>>>>>>>> patch 3
>>>>>>>>>>> in my series).
>>>>>>>>>>>
>>>>>>>>>>> I guess you are going to say that we should combine both approaches, so
>>>>>>>>>>> that
>>>>>>>>>>> your batching loop can skip forward an arch-provided number of ptes? That
>>>>>>>>>>> would
>>>>>>>>>>> certainly work, but feels like an orthogonal change to what I'm trying to
>>>>>>>>>>> achieve :). Anyway, I'll spend some time playing with it today.
>>>>>>>>>>
>>>>>>>>>> You can overwrite the function or add special-casing internally, yes.
>>>>>>>>>>
>>>>>>>>>> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()"
>>>>>>>>>> and it
>>>>>>>>>> doesn't do any of that besides preparing for some arm64 work.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Well it allows an arch to opt-in to batching. But I see your point.
>>>>>>>>>
>>>>>>>>> How do you want to handle your patches? Do you want to clean them up and
>>>>>>>>> I'll
>>>>>>>>> base my stuff on top? Or do you want me to take them and sort it all out?
>>>>>>>>
>>>>>>>> Whatever you prefer, it was mostly a quick prototype to see if we can
>>>>>>>> achieve
>>>>>>>> decent performance.
>>>>>>>
>>>>>>> I'm about to run it on Altra and M2. But I assume it will show similar
>>>>>>> results.
>>>>>
>>>>> OK results in, not looking great, which aligns with my previous experience.
>>>>> That
>>>>> said, I'm seeing some "BUG: Bad page state in process gmain  pfn:12a094" so
>>>>> perhaps these results are not valid...
>>>>
>>>> I didn't see that so far on x86, maybe related to the PFN fixup?
>>>
>>> All I've done is define PFN_PTE_SHIFT for arm64 on top of your latest patch:
>>>
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index b19a8aee684c..9eb0fd693df9 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -359,6 +359,8 @@ static inline void set_ptes(struct mm_struct *mm,
>>>    }
>>>    #define set_ptes set_ptes
>>>    +#define PFN_PTE_SHIFT          PAGE_SHIFT
>>> +
>>>    /*
>>>     * Huge pte definitions.
>>>     */
>>>
>>>
>>> As an aside, I think there is a bug in arm64's set_ptes() for PA > 48-bit
>>> case. But that won't affect this.
>>>
>>>
>>> With VM_DEBUG on, this is the first warning I see during boot:
>>>
>>>
>>> [    0.278110] page:00000000c7ced4e8 refcount:12 mapcount:0
>>> mapping:00000000b2f9739b index:0x1a8 pfn:0x1bff30
>>> [    0.278742] head:00000000c7ced4e8 order:2 entire_mapcount:0
>>> nr_pages_mapped:2 pincount:0
>>
>> ^ Ah, you are running with mTHP. Let me play with that.
> 
> Err... It's in mm-unstable, but I'm not enabling any sizes. It should only be set
> up for PMD-sized THP.
> 
> I am using XFS though, so I imagine it's a file folio.
> 
> I've rebased your rmap cleanup and fork batching to the version of mm-unstable
> that I was doing all my other testing with so I could compare numbers. But it's
> not very old (perhaps a week). All the patches applied without any conflict.

I think it was something stupid: I would get "17" from folio_pte_batch() 
for an order-4 folio, but only sometimes. The rmap sanity checks were 
definitely worth it :)

I guess we hit the case "next mapped folio is actually the next physical 
folio" and the detection for that was off by one.

diff --git a/mm/memory.c b/mm/memory.c
index 187d1b9b70e2..2af34add7ed7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -975,7 +975,7 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
                  * corner cases the next PFN might fall into a different
                  * folio.
                  */
-               if (pte_pfn(pte) == folio_end_pfn - 1)
+               if (pte_pfn(pte) == folio_end_pfn)
                         break;

Briefly tested, have to do more testing.

I only tested with order-9, which means max_nr would cap at 512. 
Shouldn't affect the performance measurements, will redo them.
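
For anyone following along, a minimal sketch of the boundary condition in 
question (illustrative only, not the actual prototype code; it only tracks 
PFNs and skips the pte_same()/bit checks the real function does). With 
folio_end_pfn being the first PFN *after* the folio, the walk has to stop 
once the next expected PFN reaches folio_end_pfn; otherwise a physically 
consecutive PTE that already maps the next folio gets counted, which is 
exactly how an order-4 folio can come back as "17".

/*
 * Sketch only: count how many consecutive PTEs, starting at start_ptep
 * (already known to map a page of @folio), map consecutive pages that
 * still lie inside @folio. Stops before walking into the next physical
 * folio, even when its PTEs happen to be PFN-consecutive.
 */
static inline int folio_batch_sketch(struct folio *folio, pte_t *start_ptep,
                                     pte_t pte, int max_nr)
{
        unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
        unsigned long expected_pfn = pte_pfn(pte) + 1;
        int nr = 1;

        while (nr < max_nr && expected_pfn < folio_end_pfn) {
                pte = ptep_get(start_ptep + nr);
                if (!pte_present(pte) || pte_pfn(pte) != expected_pfn)
                        break;
                expected_pfn++;
                nr++;
        }
        return nr;      /* can never run past the end of @folio */
}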
  
Ryan Roberts Dec. 20, 2023, 1:02 p.m. UTC | #22
On 20/12/2023 12:54, David Hildenbrand wrote:
> [...]
> 
> I think it was something stupid: I would get "17" from folio_pte_batch() for an
> order-4 folio, but only sometimes. The rmap sanity checks were definitely worth
> it :)
> 
> I guess we hit the case "next mapped folio is actually the next physical folio"
> and the detection for that was off by one.
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 187d1b9b70e2..2af34add7ed7 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -975,7 +975,7 @@ static inline int folio_pte_batch(struct folio *folio,
> unsigned long addr,
>                  * corner cases the next PFN might fall into a different
>                  * folio.
>                  */
> -               if (pte_pfn(pte) == folio_end_pfn - 1)
> +               if (pte_pfn(pte) == folio_end_pfn)
>                         break;
> 

haha, of course! I've been staring at this for an hour and didn't notice.

I no longer see any warnings during boot with debug enabled. Will rerun perf
measurements.


> Briefly tested, have to do more testing.
> 
> I only tested with order-9, which means max_nr would cap at 512. Shouldn't
> affect the performance measurements, will redo them.
>
  
David Hildenbrand Dec. 20, 2023, 1:06 p.m. UTC | #23
On 20.12.23 13:04, Ryan Roberts wrote:
>>> [...]
>>>
>>> With VM_DEBUG on, this is the first warning I see during boot:
>>>
>>>
>>> [    0.278110] page:00000000c7ced4e8 refcount:12 mapcount:0
>>> mapping:00000000b2f9739b index:0x1a8 pfn:0x1bff30
>>> [    0.278742] head:00000000c7ced4e8 order:2 entire_mapcount:0
>>> nr_pages_mapped:2 pincount:0
>>
>> ^ Ah, you are running with mTHP. Let me play with that.
> 
> Err... It's in mm-unstable, but I'm not enabling any sizes. It should only be set
> up for PMD-sized THP.
> 
> I am using XFS though, so I imagine it's a file folio.
> 
> I've rebased your rmap cleanup and fork batching to the version of mm-unstable
> that I was doing all my other testing with so I could compare numbers. But it's
> not very old (perhaps a week). All the patches applied without any conflict.


It would also be interesting to know if the compiler on arm64 decides to 
do something stupid: like not inline wrprotect_ptes().

Because with an effective unlikely(folio_test_large(folio)) we shouldn't 
see that much overhead.
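
Roughly the shape intended, as a sketch (the helper body and the exact 
folio_pte_batch() arguments below are assumptions for illustration, not 
the code in the branch): the batching work only sits behind the unlikely() 
large-folio test, and the wrprotect helper is small enough that, if 
inlined, the nr == 1 case should fold back down to a single 
ptep_set_wrprotect().

/* Sketch: generic batch wrprotect; the real helper's body is assumed here. */
static __always_inline void wrprotect_ptes(struct mm_struct *mm,
                unsigned long addr, pte_t *ptep, unsigned int nr)
{
        for (; nr; nr--, ptep++, addr += PAGE_SIZE)
                ptep_set_wrprotect(mm, addr, ptep);
}

/* Hypothetical call-site wrapper, for shape only (not copy_pte_range() itself). */
static inline unsigned int copy_wrprotect_batch(struct mm_struct *src_mm,
                struct folio *folio, unsigned long addr, pte_t *src_pte,
                pte_t pte, unsigned int max_nr)
{
        unsigned int nr = 1;

        /* Order-0 folios never take this branch, so they keep the old cost. */
        if (unlikely(folio_test_large(folio)))
                nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr); /* args assumed */

        wrprotect_ptes(src_mm, addr, src_pte, nr);
        return nr;
}

If the compiler ends up emitting an out-of-line call or a real loop for 
nr == 1, the order-0 path pays for work it never needs, which could 
explain the gap between the x86 and arm64 numbers.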
  
Ryan Roberts Dec. 20, 2023, 1:10 p.m. UTC | #24
On 20/12/2023 13:06, David Hildenbrand wrote:
> On 20.12.23 13:04, Ryan Roberts wrote:
>> On 20/12/2023 11:58, David Hildenbrand wrote:
>>> On 20.12.23 12:51, Ryan Roberts wrote:
>>>> On 20/12/2023 11:36, David Hildenbrand wrote:
>>>>> On 20.12.23 12:28, Ryan Roberts wrote:
>>>>>> On 20/12/2023 10:56, David Hildenbrand wrote:
>>>>>>> On 20.12.23 11:41, Ryan Roberts wrote:
>>>>>>>> On 20/12/2023 10:16, David Hildenbrand wrote:
>>>>>>>>> On 20.12.23 11:11, Ryan Roberts wrote:
>>>>>>>>>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>>>>>>>>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>>>>>>>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>>>>>>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>>>>>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>>>>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>>>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>>>>>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A
>>>>>>>>>>>>>>>>>> given
>>>>>>>>>>>>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>>>>>>>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of
>>>>>>>>>>>>>>>>>> memory,
>>>>>>>>>>>>>>>>>> all belonging to the same folio. A pte batch is then
>>>>>>>>>>>>>>>>>> write-protected in
>>>>>>>>>>>>>>>>>> one go in the parent using the new helper, ptep_set_wrprotects()
>>>>>>>>>>>>>>>>>> and is
>>>>>>>>>>>>>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The primary motivation for this change is to reduce the number
>>>>>>>>>>>>>>>>>> of tlb
>>>>>>>>>>>>>>>>>> maintenance operations that the arm64 backend has to perform
>>>>>>>>>>>>>>>>>> during
>>>>>>>>>>>>>>>>>> fork, as it is about to add transparent support for the
>>>>>>>>>>>>>>>>>> "contiguous
>>>>>>>>>>>>>>>>>> bit"
>>>>>>>>>>>>>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>>>>>>>>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the
>>>>>>>>>>>>>>>>>> backend
>>>>>>>>>>>>>>>>>> can avoid having to unfold contig ranges of PTEs, which is
>>>>>>>>>>>>>>>>>> expensive,
>>>>>>>>>>>>>>>>>> when all ptes in the range are being write-protected.
>>>>>>>>>>>>>>>>>> Similarly, by
>>>>>>>>>>>>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>> child, the backend does not need to fold a contiguous range once
>>>>>>>>>>>>>>>>>> they
>>>>>>>>>>>>>>>>>> are all populated - they can be initially populated as a
>>>>>>>>>>>>>>>>>> contiguous
>>>>>>>>>>>>>>>>>> range in the first place.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This code is very performance sensitive, and a significant
>>>>>>>>>>>>>>>>>> amount of
>>>>>>>>>>>>>>>>>> effort has been put into not regressing performance for the
>>>>>>>>>>>>>>>>>> order-0
>>>>>>>>>>>>>>>>>> folio case. By default, pte_batch_remaining() is compile
>>>>>>>>>>>>>>>>>> constant 1,
>>>>>>>>>>>>>>>>>> which enables the compiler to simplify the extra loops that are
>>>>>>>>>>>>>>>>>> added
>>>>>>>>>>>>>>>>>> for batching and produce code that is equivalent (and equally
>>>>>>>>>>>>>>>>>> performant) as the previous implementation.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This change addresses the core-mm refactoring only and a separate
>>>>>>>>>>>>>>>>>> change
>>>>>>>>>>>>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>>>>>>>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>>>>>>>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> To ensure the arm64 is performant once implemented, this
>>>>>>>>>>>>>>>>>> change is
>>>>>>>>>>>>>>>>>> very
>>>>>>>>>>>>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The following microbenchmark results demonstate that there is no
>>>>>>>>>>>>>>>>>> significant performance change after this patch. Fork is called
>>>>>>>>>>>>>>>>>> in a
>>>>>>>>>>>>>>>>>> tight loop in a process with 1G of populated memory and the time
>>>>>>>>>>>>>>>>>> for
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>>>>>>>>>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal).
>>>>>>>>>>>>>>>>>> Tests
>>>>>>>>>>>>>>>>>> performed for case where 1G memory is comprised of order-0
>>>>>>>>>>>>>>>>>> folios and
>>>>>>>>>>>>>>>>>> case where comprised of pte-mapped order-9 folios. Negative is
>>>>>>>>>>>>>>>>>> faster,
>>>>>>>>>>>>>>>>>> positive is slower, compared to baseline upon which the series is
>>>>>>>>>>>>>>>>>> based:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>>>>>>>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>>>>>>>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Tested-by: John Hubbard <jhubbard@nvidia.com>
>>>>>>>>>>>>>>>>>> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>>>>>>>>>>>>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>>>>           include/linux/pgtable.h | 80
>>>>>>>>>>>>>>>>>> +++++++++++++++++++++++++++++++++++
>>>>>>>>>>>>>>>>>>           mm/memory.c             | 92
>>>>>>>>>>>>>>>>>> ++++++++++++++++++++++++++---------------
>>>>>>>>>>>>>>>>>>           2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>>>>>>>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>>>>>>>>>>>>> --- a/include/linux/pgtable.h
>>>>>>>>>>>>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>>>>>>>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>>>>>>>>>>>>           #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>>>>>>>>>>>>>           #endif
>>>>>>>>>>>>>>>>>>           +#ifndef pte_batch_remaining
>>>>>>>>>>>>>>>>>> +/**
>>>>>>>>>>>>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch
>>>>>>>>>>>>>>>>>> boundary.
>>>>>>>>>>>>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>>>>>>>>>>>>> + * @addr: Address of the first page.
>>>>>>>>>>>>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>>>>> + * Some architectures (arm64) can efficiently modify a
>>>>>>>>>>>>>>>>>> contiguous
>>>>>>>>>>>>>>>>>> batch of
>>>>>>>>>>>>>>>>>> ptes.
>>>>>>>>>>>>>>>>>> + * In such cases, this function returns the remaining number of
>>>>>>>>>>>>>>>>>> pages to
>>>>>>>>>>>>>>>>>> the end
>>>>>>>>>>>>>>>>>> + * of the current batch, as defined by addr. This can be useful
>>>>>>>>>>>>>>>>>> when
>>>>>>>>>>>>>>>>>> iterating
>>>>>>>>>>>>>>>>>> + * over ptes.
>>>>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>>>>> + * May be overridden by the architecture, else batch size is
>>>>>>>>>>>>>>>>>> always 1.
>>>>>>>>>>>>>>>>>> + */
>>>>>>>>>>>>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte,
>>>>>>>>>>>>>>>>>> unsigned
>>>>>>>>>>>>>>>>>> long
>>>>>>>>>>>>>>>>>> addr,
>>>>>>>>>>>>>>>>>> +                        unsigned long end)
>>>>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>>>>> +    return 1;
>>>>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>>>>> +#endif
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> It's a shame we now lose the optimization for all other
>>>>>>>>>>>>>>>>> archtiectures.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Was there no way to have some basic batching mechanism that
>>>>>>>>>>>>>>>>> doesn't
>>>>>>>>>>>>>>>>> require
>>>>>>>>>>>>>>>>> arch
>>>>>>>>>>>>>>>>> specifics?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I tried a bunch of things but ultimately the way I've done it
>>>>>>>>>>>>>>>> was the
>>>>>>>>>>>>>>>> only
>>>>>>>>>>>>>>>> way
>>>>>>>>>>>>>>>> to reduce the order-0 fork regression to 0.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> My original v3 posting was costing 5% extra and even my first
>>>>>>>>>>>>>>>> attempt
>>>>>>>>>>>>>>>> at an
>>>>>>>>>>>>>>>> arch-specific version that didn't resolve to a compile-time
>>>>>>>>>>>>>>>> constant 1
>>>>>>>>>>>>>>>> still
>>>>>>>>>>>>>>>> cost an extra 3%.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>>>>>>>>>>> * Check that PFN is consecutive
>>>>>>>>>>>>>>>>> * Check that all PFNs belong to the same folio
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can
>>>>>>>>>>>>>>>> get
>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>> regression under 4% with this. Further along the series I spent a
>>>>>>>>>>>>>>>> lot of
>>>>>>>>>>>>>>>> time
>>>>>>>>>>>>>>>> having to fiddle with the arm64 implementation; every
>>>>>>>>>>>>>>>> conditional and
>>>>>>>>>>>>>>>> every
>>>>>>>>>>>>>>>> memory read (even when in cache) was a problem. There is just so
>>>>>>>>>>>>>>>> little in
>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>> inner loop that every instruction matters. (At least on Ampere
>>>>>>>>>>>>>>>> Altra
>>>>>>>>>>>>>>>> and
>>>>>>>>>>>>>>>> Apple
>>>>>>>>>>>>>>>> M2).
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>>>>>>>>>>>>> benefit to
>>>>>>>>>>>>>>>> order-9 is around 10% in my measurements. Personally though, I'd
>>>>>>>>>>>>>>>> prefer to
>>>>>>>>>>>>>>>> play
>>>>>>>>>>>>>>>> safe and ensure the common order-0 case doesn't regress, as you
>>>>>>>>>>>>>>>> previously
>>>>>>>>>>>>>>>> suggested.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I just hacked something up, on top of my beloved rmap
>>>>>>>>>>>>>>> cleanup/batching
>>>>>>>>>>>>>>> series. I
>>>>>>>>>>>>>>> implemented very generic and simple batching for large folios
>>>>>>>>>>>>>>> (all PTE
>>>>>>>>>>>>>>> bits
>>>>>>>>>>>>>>> except the PFN have to match).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Some very quick testing (don't trust each last % ) on Intel(R)
>>>>>>>>>>>>>>> Xeon(R)
>>>>>>>>>>>>>>> Silver
>>>>>>>>>>>>>>> 4210R CPU.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -> Around 1.7 % faster
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -> Around 36.3 % faster
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Well I guess that shows me :)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>>>>>>>>>>
>>>>>>>>>>>> I've just been trying to compile and noticed this. Will take a look at
>>>>>>>>>>>> your
>>>>>>>>>>>> update.
>>>>>>>>>>>>
>>>>>>>>>>>> But upon review, I've noticed the part that I think makes this
>>>>>>>>>>>> difficult
>>>>>>>>>>>> for
>>>>>>>>>>>> arm64 with the contpte optimization; You are calling ptep_get() for
>>>>>>>>>>>> every
>>>>>>>>>>>> pte in
>>>>>>>>>>>> the batch. While this is functionally correct, once arm64 has the
>>>>>>>>>>>> contpte
>>>>>>>>>>>> changes, its ptep_get() has to read every pte in the contpte block in
>>>>>>>>>>>> order to
>>>>>>>>>>>> gather the access and dirty bits. So if your batching function ends up
>>>>>>>>>>>> wealking
>>>>>>>>>>>> a 16 entry contpte block, that will cause 16 x 16 reads, which kills
>>>>>>>>>>>> performance. That's why I added the arch-specific pte_batch_remaining()
>>>>>>>>>>>> function; this allows the core-mm to skip to the end of the contpte
>>>>>>>>>>>> block and
>>>>>>>>>>>> avoid ptep_get() for the 15 tail ptes. So we end up with 16
>>>>>>>>>>>> READ_ONCE()s
>>>>>>>>>>>> instead
>>>>>>>>>>>> of 256.
>>>>>>>>>>>>
>>>>>>>>>>>> I considered making a ptep_get_noyoungdirty() variant, which would
>>>>>>>>>>>> avoid
>>>>>>>>>>>> the
>>>>>>>>>>>> bit
>>>>>>>>>>>> gathering. But we have a similar problem in zap_pte_range() and that
>>>>>>>>>>>> function
>>>>>>>>>>>> needs the dirty bit to update the folio. So it doesn't work there. (see
>>>>>>>>>>>> patch 3
>>>>>>>>>>>> in my series).
>>>>>>>>>>>>
>>>>>>>>>>>> I guess you are going to say that we should combine both approaches, so
>>>>>>>>>>>> that
>>>>>>>>>>>> your batching loop can skip forward an arch-provided number of ptes?
>>>>>>>>>>>> That
>>>>>>>>>>>> would
>>>>>>>>>>>> certainly work, but feels like an orthogonal change to what I'm
>>>>>>>>>>>> trying to
>>>>>>>>>>>> achieve :). Anyway, I'll spend some time playing with it today.
>>>>>>>>>>>
>>>>>>>>>>> You can overwrite the function or add special-casing internally, yes.
>>>>>>>>>>>
>>>>>>>>>>> Right now, your patch is called "mm: Batch-copy PTE ranges during
>>>>>>>>>>> fork()"
>>>>>>>>>>> and it
>>>>>>>>>>> doesn't do any of that besides preparing for some arm64 work.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Well it allows an arch to opt-in to batching. But I see your point.
>>>>>>>>>>
>>>>>>>>>> How do you want to handle your patches? Do you want to clean them up and
>>>>>>>>>> I'll
>>>>>>>>>> base my stuff on top? Or do you want me to take them and sort it all out?
>>>>>>>>>
>>>>>>>>> Whatever you prefer, it was mostly a quick prototype to see if we can
>>>>>>>>> achieve
>>>>>>>>> decent performance.
>>>>>>>>
>>>>>>>> I'm about to run it on Altra and M2. But I assume it will show similar
>>>>>>>> results.
>>>>>>
>>>>>> OK results in, not looking great, which aligns with my previous experience.
>>>>>> That
>>>>>> said, I'm seeing some "BUG: Bad page state in process gmain  pfn:12a094" so
>>>>>> perhaps these results are not valid...
>>>>>
>>>>> I didn't see that so far on x86, maybe related to the PFN fixup?
>>>>
>>>> All I've done is define PFN_PTE_SHIFT for arm64 on top of your latest patch:
>>>>
>>>> diff --git a/arch/arm64/include/asm/pgtable.h
>>>> b/arch/arm64/include/asm/pgtable.h
>>>> index b19a8aee684c..9eb0fd693df9 100644
>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>> @@ -359,6 +359,8 @@ static inline void set_ptes(struct mm_struct *mm,
>>>>    }
>>>>    #define set_ptes set_ptes
>>>>    +#define PFN_PTE_SHIFT          PAGE_SHIFT
>>>> +
>>>>    /*
>>>>     * Huge pte definitions.
>>>>     */
>>>>
>>>>
>>>> As an aside, I think there is a bug in arm64's set_ptes() for PA > 48-bit
>>>> case. But that won't affect this.
>>>>
>>>>
>>>> With VM_DEBUG on, this is the first warning I see during boot:
>>>>
>>>>
>>>> [    0.278110] page:00000000c7ced4e8 refcount:12 mapcount:0
>>>> mapping:00000000b2f9739b index:0x1a8 pfn:0x1bff30
>>>> [    0.278742] head:00000000c7ced4e8 order:2 entire_mapcount:0
>>>> nr_pages_mapped:2 pincount:0
>>>
>>> ^ Ah, you are running with mTHP. Let me play with that.
>>
>> Err... It's in mm-unstable, but I'm not enabling any sizes. It should only be set
>> up for PMD-sized THP.
>>
>> I am using XFS though, so I imagine it's a file folio.
>>
>> I've rebased your rmap cleanup and fork batching to the version of mm-unstable
>> that I was doing all my other testing with so I could compare numbers. But it's
>> not very old (perhaps a week). All the patches applied without any conflict.
> 
> 
> It would also be interesting to know if the compiler on arm64 decides to do
> something stupid: like not inline wrprotect_ptes().
> 
> Because with an effective unlikely(folio_test_large(folio)) we shouldn't see
> that much overhead.
> 

What version of gcc are you using? I must confess I'm using the Ubuntu 20.04
default version:

aarch64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0

Perhaps I should grab something a bit newer?
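
For context, the arch override described in the quoted discussion above (the one
that lets the copy loop issue 16 READ_ONCE()s instead of 256 for a 16-entry
contpte block) could look roughly like the sketch below. It reuses arm64's
existing CONT_PTES and pte_cont() definitions, but it is only an illustration of
the idea, not the implementation posted in the contpte series:

/*
 * Illustrative sketch only: how a contpte-aware architecture might
 * override the generic helper so that the core-mm loop skips the tail
 * ptes of a contiguous block rather than calling ptep_get() on each.
 */
#define pte_batch_remaining pte_batch_remaining
static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
					       unsigned long end)
{
	unsigned long pages_left = (end - addr) >> PAGE_SHIFT;
	unsigned int to_boundary;

	/* Non-contiguous mappings are still walked one pte at a time. */
	if (!pte_cont(pte))
		return 1;

	/* Pages from addr up to the next CONT_PTE_SIZE-aligned boundary. */
	to_boundary = CONT_PTES - ((addr >> PAGE_SHIFT) & (CONT_PTES - 1));

	/* Never run past the caller's ceiling (e.g. the end of the vma). */
	return pages_left < to_boundary ? pages_left : to_boundary;
}

With something of this shape, copy_pte_range() advances a whole block per
iteration, so only the head pte of each block is read via ptep_get().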
  
David Hildenbrand Dec. 20, 2023, 1:13 p.m. UTC | #25
On 20.12.23 14:10, Ryan Roberts wrote:
> [...]
> 
> What version of gcc are you using? I must confess I'm using the Ubuntu 20.04
> default version:
> 
> aarch64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
> 
> Perhaps I should grab something a bit newer?
> 

gcc version 13.2.1 20231011 (Red Hat 13.2.1-4) (GCC)

 From Fedora 38. So "a bit" newer :P
  
Ryan Roberts Dec. 20, 2023, 1:33 p.m. UTC | #26
On 20/12/2023 13:13, David Hildenbrand wrote:
> [...]
> 
> gcc version 13.2.1 20231011 (Red Hat 13.2.1-4) (GCC)
> 
> From Fedora 38. So "a bit" newer :P
> 

I'll retry with newer toolchain.

FWIW, with the code fix and the original compiler:

Fork, order-0, Apple M2:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.8% |
| hugetlb-rmap-cleanups |       1.3% |      2.0% |
| fork-batching         |       4.3% |      1.0% |

Fork, order-9, Apple M2:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.8% |
| hugetlb-rmap-cleanups |       0.9% |      0.9% |
| fork-batching         |     -37.3% |      1.0% |

Fork, order-0, Ampere Altra:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.7% |
| hugetlb-rmap-cleanups |       3.2% |      0.7% |
| fork-batching         |       5.5% |      1.1% |

Fork, order-9, Ampere Altra:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.1% |
| hugetlb-rmap-cleanups |       0.5% |      0.1% |
| fork-batching         |     -10.4% |      0.1% |
  
David Hildenbrand Dec. 20, 2023, 2 p.m. UTC | #27
[...]

>>>
>>
>> gcc version 13.2.1 20231011 (Red Hat 13.2.1-4) (GCC)
>>
>>  From Fedora 38. So "a bit" newer :P
>>
> 
> I'll retry with newer toolchain.
> 
> FWIW, with the code fix and the original compiler:
> 
> Fork, order-0, Apple M2:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.8% |
> | hugetlb-rmap-cleanups |       1.3% |      2.0% |
> | fork-batching         |       4.3% |      1.0% |
> 
> Fork, order-9, Apple M2:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.8% |
> | hugetlb-rmap-cleanups |       0.9% |      0.9% |
> | fork-batching         |     -37.3% |      1.0% |
> 
> Fork, order-0, Ampere Altra:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.7% |
> | hugetlb-rmap-cleanups |       3.2% |      0.7% |
> | fork-batching         |       5.5% |      1.1% |
> 
> Fork, order-9, Ampere Altra:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.1% |
> | hugetlb-rmap-cleanups |       0.5% |      0.1% |
> | fork-batching         |     -10.4% |      0.1% |
> 

I just gave it another quick benchmark run on that Intel system.

hugetlb-rmap-cleanups -> fork-batching

order-0: 0.014114 -> 0.013848

-1.9%

order-9: 0.014262 -> 0.009410

-34%

Note that I disable SMT and turbo, and pin the test to one CPU, to make 
the results as stable as possible. My kernel config has anything related 
to debugging disabled.
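
For anyone wanting to reproduce the shape of the test, a minimal sketch of this
kind of fork() microbenchmark (1G of populated anonymous memory, fork() timed in
a tight loop with the child exiting immediately) is below; the size, iteration
count and timing method are assumptions, not the harness that produced the
numbers in this thread:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define MEM_SIZE (1UL << 30)	/* 1G of populated memory */
#define ITERS    100

int main(void)
{
	char *mem = mmap(NULL, MEM_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct timespec t0, t1;
	double total = 0;
	int i;

	if (mem == MAP_FAILED)
		return 1;
	memset(mem, 1, MEM_SIZE);	/* fault in every page up front */

	for (i = 0; i < ITERS; i++) {
		pid_t pid;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		pid = fork();
		if (pid == 0)
			_exit(0);	/* child exits straight away */
		clock_gettime(CLOCK_MONOTONIC, &t1);

		if (pid < 0)
			return 1;
		waitpid(pid, NULL, 0);
		total += (t1.tv_sec - t0.tv_sec) +
			 (t1.tv_nsec - t0.tv_nsec) / 1e9;
	}

	printf("mean fork() time: %f s\n", total / ITERS);
	return 0;
}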
  
Ryan Roberts Dec. 20, 2023, 3:05 p.m. UTC | #28
On 20/12/2023 14:00, David Hildenbrand wrote:
> [...]
> 
> I just gave it another quick benchmark run on that Intel system.
> 
> hugetlb-rmap-cleanups -> fork-batching
> 
> order-0: 0.014114 -> 0.013848
> 
> -1.9%
> 
> order-9: 0.014262 -> 0.009410
> 
> -34%
> 
> Note that I disable SMT and turbo, and pin the test to one CPU, to make the
> results as stable as possible. My kernel config has anything related to
> debugging disabled.
> 

And with gcc 13.2 on arm64:

Fork, order-0, Apple M2 VM:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      1.5% |
| hugetlb-rmap-cleanups |      -3.3% |      1.1% |
| fork-batching         |      -3.6% |      1.4% |

Fork, order-9, Apple M2 VM:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      1.8% |
| hugetlb-rmap-cleanups |      -5.8% |      1.3% |
| fork-batching         |     -38.1% |      2.3% |

Fork, order-0, Ampere Altra:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      1.3% |
| hugetlb-rmap-cleanups |      -0.1% |      0.4% |
| fork-batching         |      -0.4% |      0.5% |

Fork, order-9, Ampere Altra:
| kernel                |   mean_rel |   std_rel |
|:----------------------|-----------:|----------:|
| mm-unstable           |       0.0% |      0.1% |
| hugetlb-rmap-cleanups |      -0.1% |      0.1% |
| fork-batching         |     -13.9% |      0.1% |


So all looking good. Compiler was the issue. Sorry for the noise.

So please go ahead with your rmap v2 stuff, and I'll wait for you to post the
fork and zap batching patches properly, then rebase my arm64 contpte stuff on
top and remeasure everything.

Thanks,
Ryan
  
David Hildenbrand Dec. 20, 2023, 3:35 p.m. UTC | #29
On 20.12.23 16:05, Ryan Roberts wrote:
> [...]
> 
> So all looking good. Compiler was the issue. Sorry for the noise.

No need to be sorry, good that we figured out what's going wrong here.

Weird that the compiler makes such a difference here.

> 
> So please go ahead with your rmap v2 stuff, and I'll wait for you to post the
> fork and zap batching patches properly, then rebase my arm64 contpte stuff on
> top and remeasure everything.

Yes, will get rmap v2 out soon, then start working on fork, and then try 
tackling zap. I have some holiday coming up, so it might take some time 
-- but there is plenty of time left.
  
Ryan Roberts Dec. 20, 2023, 3:59 p.m. UTC | #30
On 20/12/2023 15:35, David Hildenbrand wrote:
> On 20.12.23 16:05, Ryan Roberts wrote:
>> [...]
>>
>> So all looking good. Compiler was the issue. Sorry for the noise.
> 
> No need to be sorry, good that we figured out what's going wrong here.
> 
> Weird that the compiler makes such a difference here.
> 
>>
>> So please go ahead with your rmap v2 stuff, and I'll wait for you to post the
>> fork and zap batching patches properly, then rebase my arm64 contpte stuff on
>> top and remeasure everything.
> 
> Yes, will get rmap v2 out soon, then start working on fork, and then try
> tackling zap. I have some holiday coming up, so it might take some time -- but
> there is plenty of time left.

Me too, I'll be out from end of Friday, returning on 2nd Jan.

Happy Christmas!

>
  

Patch

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index af7639c3b0a3..db93fb81465a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -205,6 +205,27 @@  static inline int pmd_young(pmd_t pmd)
 #define arch_flush_lazy_mmu_mode()	do {} while (0)
 #endif
 
+#ifndef pte_batch_remaining
+/**
+ * pte_batch_remaining - Number of pages from addr to next batch boundary.
+ * @pte: Page table entry for the first page.
+ * @addr: Address of the first page.
+ * @end: Batch ceiling (e.g. end of vma).
+ *
+ * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
+ * In such cases, this function returns the remaining number of pages to the end
+ * of the current batch, as defined by addr. This can be useful when iterating
+ * over ptes.
+ *
+ * May be overridden by the architecture, else batch size is always 1.
+ */
+static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
+						unsigned long end)
+{
+	return 1;
+}
+#endif
+
 #ifndef set_ptes
 
 #ifndef pte_next_pfn
@@ -246,6 +267,34 @@  static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 #endif
 #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
+#ifndef set_ptes_full
+/**
+ * set_ptes_full - Map consecutive pages to a contiguous range of addresses.
+ * @mm: Address space to map the pages into.
+ * @addr: Address to map the first page at.
+ * @ptep: Page table pointer for the first entry.
+ * @pte: Page table entry for the first page.
+ * @nr: Number of pages to map.
+ * @full: True if systematically setting all ptes and their previous values
+ *        were known to be none (e.g. part of fork).
+ *
+ * Some architectures (arm64) can optimize the implementation if copying ptes
+ * batch-by-batch from the parent, where a batch is defined by
+ * pte_batch_remaining().
+ *
+ * May be overridden by the architecture, else full is ignored and call is
+ * forwarded to set_ptes().
+ *
+ * Context: The caller holds the page table lock.  The pages all belong to the
+ * same folio.  The PTEs are all in the same PMD.
+ */
+static inline void set_ptes_full(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr, int full)
+{
+	set_ptes(mm, addr, ptep, pte, nr);
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 extern int ptep_set_access_flags(struct vm_area_struct *vma,
 				 unsigned long address, pte_t *ptep,
@@ -622,6 +671,37 @@  static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 }
 #endif
 
+#ifndef ptep_set_wrprotects
+struct mm_struct;
+/**
+ * ptep_set_wrprotects - Write protect a consecutive set of pages.
+ * @mm: Address space that the pages are mapped into.
+ * @address: Address of first page to write protect.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of pages to write protect.
+ * @full: True if systematically write protecting all ptes (e.g. part of fork).
+ *
+ * Some architectures (arm64) can optimize the implementation if
+ * write-protecting ptes batch-by-batch, where a batch is defined by
+ * pte_batch_remaining().
+ *
+ * May be overridden by the architecture, else implemented as a loop over
+ * ptep_set_wrprotect().
+ *
+ * Context: The caller holds the page table lock. The PTEs are all in the same
+ * PMD.
+ */
+static inline void ptep_set_wrprotects(struct mm_struct *mm,
+				unsigned long address, pte_t *ptep,
+				unsigned int nr, int full)
+{
+	unsigned int i;
+
+	for (i = 0; i < nr; i++, address += PAGE_SIZE, ptep++)
+		ptep_set_wrprotect(mm, address, ptep);
+}
+#endif
+
 /*
  * On some architectures hardware does not set page access bit when accessing
  * memory page, it is responsibility of software setting this bit. It brings
diff --git a/mm/memory.c b/mm/memory.c
index 809746555827..111f8feeb56e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -929,42 +929,60 @@  copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 }
 
 /*
- * Copy one pte.  Returns 0 if succeeded, or -EAGAIN if one preallocated page
- * is required to copy this pte.
+ * Copy set of contiguous ptes.  Returns number of ptes copied if succeeded
+ * (always gte 1), or -EAGAIN if one preallocated page is required to copy the
+ * first pte.
  */
 static inline int
-copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
-		 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
-		 struct folio **prealloc)
+copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+		  pte_t *dst_pte, pte_t *src_pte, pte_t pte,
+		  unsigned long addr, unsigned long end,
+		  int *rss, struct folio **prealloc)
 {
 	struct mm_struct *src_mm = src_vma->vm_mm;
 	unsigned long vm_flags = src_vma->vm_flags;
-	pte_t pte = ptep_get(src_pte);
 	struct page *page;
 	struct folio *folio;
+	int nr, i, ret;
+
+	nr = pte_batch_remaining(pte, addr, end);
 
 	page = vm_normal_page(src_vma, addr, pte);
-	if (page)
+	if (page) {
 		folio = page_folio(page);
+		folio_ref_add(folio, nr);
+	}
 	if (page && folio_test_anon(folio)) {
-		/*
-		 * If this page may have been pinned by the parent process,
-		 * copy the page immediately for the child so that we'll always
-		 * guarantee the pinned page won't be randomly replaced in the
-		 * future.
-		 */
-		folio_get(folio);
-		if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
-			/* Page may be pinned, we have to copy. */
-			folio_put(folio);
-			return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
-						 addr, rss, prealloc, page);
+		for (i = 0; i < nr; i++, page++) {
+			/*
+			 * If this page may have been pinned by the parent
+			 * process, copy the page immediately for the child so
+			 * that we'll always guarantee the pinned page won't be
+			 * randomly replaced in the future.
+			 */
+			if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
+				if (i != 0)
+					break;
+				/* Page may be pinned, we have to copy. */
+				folio_ref_sub(folio, nr);
+				ret = copy_present_page(dst_vma, src_vma,
+							dst_pte, src_pte, addr,
+							rss, prealloc, page);
+				return ret == 0 ? 1 : ret;
+			}
+			VM_BUG_ON(PageAnonExclusive(page));
 		}
-		rss[MM_ANONPAGES]++;
+
+		if (unlikely(i < nr)) {
+			folio_ref_sub(folio, nr - i);
+			nr = i;
+		}
+
+		rss[MM_ANONPAGES] += nr;
 	} else if (page) {
-		folio_get(folio);
-		page_dup_file_rmap(page, false);
-		rss[mm_counter_file(page)]++;
+		for (i = 0; i < nr; i++)
+			page_dup_file_rmap(page + i, false);
+		rss[mm_counter_file(page)] += nr;
 	}
 
 	/*
@@ -972,10 +990,9 @@  copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	 * in the parent and the child
 	 */
 	if (is_cow_mapping(vm_flags) && pte_write(pte)) {
-		ptep_set_wrprotect(src_mm, addr, src_pte);
+		ptep_set_wrprotects(src_mm, addr, src_pte, nr, true);
 		pte = pte_wrprotect(pte);
 	}
-	VM_BUG_ON(page && folio_test_anon(folio) && PageAnonExclusive(page));
 
 	/*
 	 * If it's a shared mapping, mark it clean in
@@ -988,8 +1005,8 @@  copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	if (!userfaultfd_wp(dst_vma))
 		pte = pte_clear_uffd_wp(pte);
 
-	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
-	return 0;
+	set_ptes_full(dst_vma->vm_mm, addr, dst_pte, pte, nr, true);
+	return nr;
 }
 
 static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
@@ -1030,6 +1047,7 @@  copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	int rss[NR_MM_COUNTERS];
 	swp_entry_t entry = (swp_entry_t){0};
 	struct folio *prealloc = NULL;
+	int nr_ptes;
 
 again:
 	progress = 0;
@@ -1060,6 +1078,8 @@  copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	arch_enter_lazy_mmu_mode();
 
 	do {
+		nr_ptes = 1;
+
 		/*
 		 * We are holding two locks at this point - either of them
 		 * could generate latencies in another task on another CPU.
@@ -1095,16 +1115,21 @@  copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 			 * the now present pte.
 			 */
 			WARN_ON_ONCE(ret != -ENOENT);
+			ret = 0;
 		}
-		/* copy_present_pte() will clear `*prealloc' if consumed */
-		ret = copy_present_pte(dst_vma, src_vma, dst_pte, src_pte,
-				       addr, rss, &prealloc);
+		/* copy_present_ptes() will clear `*prealloc' if consumed */
+		nr_ptes = copy_present_ptes(dst_vma, src_vma, dst_pte, src_pte,
+					    ptent, addr, end, rss, &prealloc);
+
 		/*
 		 * If we need a pre-allocated page for this pte, drop the
 		 * locks, allocate, and try again.
 		 */
-		if (unlikely(ret == -EAGAIN))
+		if (unlikely(nr_ptes == -EAGAIN)) {
+			ret = -EAGAIN;
 			break;
+		}
+
 		if (unlikely(prealloc)) {
 			/*
 			 * pre-alloc page cannot be reused by next time so as
@@ -1115,8 +1140,9 @@  copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 			folio_put(prealloc);
 			prealloc = NULL;
 		}
-		progress += 8;
-	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
+		progress += 8 * nr_ptes;
+	} while (dst_pte += nr_ptes, src_pte += nr_ptes,
+		 addr += PAGE_SIZE * nr_ptes, addr != end);
 
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_src_pte, src_ptl);