[v4,01/16] mm: thp: Batch-collapse PMD with set_ptes()
Commit Message
Refactor __split_huge_pmd_locked() so that a present PMD can be
collapsed to PTEs in a single batch using set_ptes(). It also provides a
future opportunity to batch-add the folio to the rmap using David's new
batched rmap APIs.
This should improve performance a little bit, but the real motivation is
to remove the need for the arm64 backend to fold the contpte entries.
Instead, since the ptes are set as a batch, the contpte blocks can be
initially set up pre-folded (once the arm64 contpte support is added in
the next few patches). This leads to a noticeable performance improvement
during split.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/huge_memory.c | 59 ++++++++++++++++++++++++++++--------------------
1 file changed, 34 insertions(+), 25 deletions(-)
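For readers unfamiliar with set_ptes(): it takes the PTE for the first subpage and writes a run of consecutive entries itself, advancing the PFN for each one, so the architecture sees the whole range in a single call instead of HPAGE_PMD_NR individual set_pte_at() calls. A minimal sketch of that shape (illustrative only, not the patch itself; the helper name below is made up):

static void remap_pmd_as_ptes_sketch(struct mm_struct *mm,
				     struct vm_area_struct *vma,
				     unsigned long haddr, pte_t *pte,
				     struct page *page)
{
	/* One template entry for the head page; set_ptes() advances the PFN. */
	pte_t entry = mk_pte(page, READ_ONCE(vma->vm_page_prot));

	/*
	 * Old shape: one arch call per subpage.
	 *
	 *	for (i = 0; i < HPAGE_PMD_NR; i++)
	 *		set_pte_at(mm, haddr + i * PAGE_SIZE, pte + i,
	 *			   mk_pte(page + i, READ_ONCE(vma->vm_page_prot)));
	 */

	/* New shape: one batched call covering the whole former PMD range. */
	set_ptes(mm, haddr, pte, entry, HPAGE_PMD_NR);
}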
Comments
On 18.12.23 11:50, Ryan Roberts wrote:
> Refactor __split_huge_pmd_locked() so that a present PMD can be
> collapsed to PTEs in a single batch using set_ptes(). It also provides a
> future opportunity to batch-add the folio to the rmap using David's new
> batched rmap APIs.
I'd drop that sentence and rather just say "In the future, we might get
rid of the remaining manual loop by using rmap batching.".
>
> This should improve performance a little bit, but the real motivation is
> to remove the need for the arm64 backend to fold the contpte entries.
> Instead, since the ptes are set as a batch, the contpte blocks can be
> initially set up pre-folded (once the arm64 contpte support is added in
> the next few patches). This leads to a noticeable performance improvement
> during split.
>
Acked-by: David Hildenbrand <david@redhat.com>
On 18/12/2023 17:40, David Hildenbrand wrote:
> On 18.12.23 11:50, Ryan Roberts wrote:
>> Refactor __split_huge_pmd_locked() so that a present PMD can be
>> collapsed to PTEs in a single batch using set_ptes(). It also provides a
>> future opportunity to batch-add the folio to the rmap using David's new
>> batched rmap APIs.
>
> I'd drop that sentence and rather just say "In the future, we might get rid of
> the remaining manual loop by using rmap batching.".
OK fair enough. Will fix for next version.
>
>>
>> This should improve performance a little bit, but the real motivation is
>> to remove the need for the arm64 backend to fold the contpte entries.
>> Instead, since the ptes are set as a batch, the contpte blocks can be
>> initially set up pre-folded (once the arm64 contpte support is added in
>> the next few patches). This leads to a noticeable performance improvement
>> during split.
>>
> Acked-by: David Hildenbrand <david@redhat.com>
Thanks!
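For context on the rmap batching mentioned above: the idea is that the remaining per-subpage page_add_anon_rmap() loop could be replaced by a single call that accounts all HPAGE_PMD_NR subpages of the folio at once. A rough sketch of that shape, assuming a batched anon rmap API along the lines of folio_add_anon_rmap_ptes() (name and signature assumed here, not part of this patch):

static void add_anon_rmap_batched_sketch(struct folio *folio, struct page *page,
					 struct vm_area_struct *vma,
					 unsigned long haddr)
{
	/*
	 * Would replace:
	 *
	 *	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE)
	 *		page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
	 */
	folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR, vma, haddr, RMAP_NONE);
}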
@@ -2535,15 +2535,16 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 
 	pte = pte_offset_map(&_pmd, haddr);
 	VM_BUG_ON(!pte);
-	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
-		pte_t entry;
-		/*
-		 * Note that NUMA hinting access restrictions are not
-		 * transferred to avoid any possibility of altering
-		 * permissions across VMAs.
-		 */
-		if (freeze || pmd_migration) {
+
+	/*
+	 * Note that NUMA hinting access restrictions are not transferred to
+	 * avoid any possibility of altering permissions across VMAs.
+	 */
+	if (freeze || pmd_migration) {
+		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
+			pte_t entry;
 			swp_entry_t swp_entry;
+
 			if (write)
 				swp_entry = make_writable_migration_entry(
 							page_to_pfn(page + i));
@@ -2562,28 +2563,36 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 				entry = pte_swp_mksoft_dirty(entry);
 			if (uffd_wp)
 				entry = pte_swp_mkuffd_wp(entry);
-		} else {
-			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
-			if (write)
-				entry = pte_mkwrite(entry, vma);
+
+			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
+			set_pte_at(mm, addr, pte + i, entry);
+		}
+	} else {
+		pte_t entry;
+
+		entry = mk_pte(page, READ_ONCE(vma->vm_page_prot));
+		if (write)
+			entry = pte_mkwrite(entry, vma);
+		if (!young)
+			entry = pte_mkold(entry);
+		/* NOTE: this may set soft-dirty too on some archs */
+		if (dirty)
+			entry = pte_mkdirty(entry);
+		if (soft_dirty)
+			entry = pte_mksoft_dirty(entry);
+		if (uffd_wp)
+			entry = pte_mkuffd_wp(entry);
+
+		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
 			if (anon_exclusive)
 				SetPageAnonExclusive(page + i);
-			if (!young)
-				entry = pte_mkold(entry);
-			/* NOTE: this may set soft-dirty too on some archs */
-			if (dirty)
-				entry = pte_mkdirty(entry);
-			if (soft_dirty)
-				entry = pte_mksoft_dirty(entry);
-			if (uffd_wp)
-				entry = pte_mkuffd_wp(entry);
 			page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
+			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
 		}
-		VM_BUG_ON(!pte_none(ptep_get(pte)));
-		set_pte_at(mm, addr, pte, entry);
-		pte++;
+
+		set_ptes(mm, haddr, pte, entry, HPAGE_PMD_NR);
 	}
-	pte_unmap(pte - 1);
+	pte_unmap(pte);
 
 	if (!pmd_migration)
 		page_remove_rmap(page, vma, true);
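To make the contpte motivation concrete: arm64's contiguous bit hints to the TLB that CONT_PTES adjacent PTEs map adjacent physical pages with identical attributes. If the PTEs arrive one at a time via set_pte_at(), the backend can only discover such a block after the fact and fold it; if they arrive as one set_ptes() call covering whole, naturally aligned blocks, it can write the entries with the hint already set. A very rough sketch of that idea (illustrative only; the actual contpte implementation added later in the series is more involved, e.g. around break-before-make):

static void contpte_set_ptes_sketch(struct mm_struct *mm, unsigned long addr,
				    pte_t *ptep, pte_t pte, unsigned int nr)
{
	/* Only hint contiguity when the span is whole, aligned contpte blocks. */
	bool cont = nr >= CONT_PTES &&
		    IS_ALIGNED(addr, CONT_PTES * PAGE_SIZE) &&
		    (nr % CONT_PTES) == 0;
	unsigned int i;

	for (i = 0; i < nr; i++) {
		/* Same attributes for every entry; the PFN advances per page. */
		pte_t entry = pfn_pte(pte_pfn(pte) + i, pte_pgprot(pte));

		if (cont)
			entry = pte_mkcont(entry);
		set_pte_at(mm, addr + i * PAGE_SIZE, ptep + i, entry);
	}
}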