[RFC,v2,4/5] mm/autonuma: call .numa_protect() when page is protected for NUMA migration
Commit Message
Call the mmu notifier's .numa_protect() callback in the change_pmd_range()
path once a page is confirmed to be protected by PROT_NONE for NUMA
migration.
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
mm/huge_memory.c | 1 +
mm/mprotect.c | 1 +
2 files changed, 2 insertions(+)
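The mmu_notifier_numa_protect() helper called below is introduced in the
previous patch of this series (3/5), which is not shown here. For readers
without that patch at hand, here is a minimal sketch of what the plumbing
plausibly looks like, assuming it mirrors the existing notifier dispatchers
in mm/mmu_notifier.c; apart from mmu_notifier_numa_protect() and the
.numa_protect op themselves, the bodies below follow those conventions and
are not taken verbatim from this series:

/*
 * Sketch only: a wrapper in include/linux/mmu_notifier.h mirroring the
 * existing hooks -- skip the SRCU walk when no notifiers are registered.
 */
static inline void mmu_notifier_numa_protect(struct mm_struct *mm,
					     unsigned long start,
					     unsigned long end)
{
	if (mm_has_notifiers(mm))
		__mmu_notifier_numa_protect(mm, start, end);
}

/*
 * Sketch only: the dispatcher in mm/mmu_notifier.c, following the same
 * SRCU-protected list walk as __mmu_notifier_change_pte() and friends.
 */
void __mmu_notifier_numa_protect(struct mm_struct *mm,
				 unsigned long start, unsigned long end)
{
	struct mmu_notifier *subscription;
	int id;

	id = srcu_read_lock(&srcu);
	hlist_for_each_entry_rcu(subscription,
				 &mm->notifier_subscriptions->list, hlist,
				 srcu_read_lock_held(&srcu)) {
		if (subscription->ops->numa_protect)
			subscription->ops->numa_protect(subscription, mm,
							start, end);
	}
	srcu_read_unlock(&srcu, id);
}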
Comments
> On Aug 10, 2023, at 2:00 AM, Yan Zhao <yan.y.zhao@intel.com> wrote:
>
> Call the mmu notifier's .numa_protect() callback in the change_pmd_range()
> path once a page is confirmed to be protected by PROT_NONE for NUMA
> migration.
Consider squashing with the previous patch. It’s better to see the user
(caller) with the new functionality.
It would be useful to describe the expected course of action that the
numa_protect callback should take.
On Fri, Aug 11, 2023 at 11:52:53AM -0700, Nadav Amit wrote:
>
> > On Aug 10, 2023, at 2:00 AM, Yan Zhao <yan.y.zhao@intel.com> wrote:
> >
> > Call the mmu notifier's .numa_protect() callback in the change_pmd_range()
> > path once a page is confirmed to be protected by PROT_NONE for NUMA
> > migration.
>
> Consider squashing with the previous patch. It’s better to see the user
> (caller) with the new functionality.
>
> It would be useful to describe the expected course of action that the
> numa_protect callback should take.
Thanks! I'll do it this way when I prepare patches in the future :)
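To make the reviewer's point concrete, the expected course of action is
roughly the following. This is a hypothetical sketch (every demo_* name is
invented for illustration; the real in-tree user is added later in this
series), assuming the (notifier, mm, start, end) callback signature used
above: on .numa_protect(), the secondary MMU learns that [start, end) was
made PROT_NONE only for NUMA-balancing access sampling, so it can drop just
its own translations for that range, or skip pages that cannot be migrated
anyway (e.g. DMA-pinned pages), rather than treating the event as a generic
invalidation.

/* Hypothetical secondary-MMU user; every demo_* name is illustrative. */
static void demo_numa_protect(struct mmu_notifier *mn, struct mm_struct *mm,
			      unsigned long start, unsigned long end)
{
	struct demo_dev *dev = container_of(mn, struct demo_dev, mn);

	/*
	 * [start, end) was made PROT_NONE only to sample NUMA accesses;
	 * drop the device's translations for that range so the next device
	 * access faults and can be re-resolved after a possible migration.
	 */
	demo_unmap_device_range(dev, start, end);
}

static const struct mmu_notifier_ops demo_mmu_notifier_ops = {
	/* ... other callbacks ... */
	.numa_protect	= demo_numa_protect,
};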
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1892,6 +1892,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 		    !toptier)
 			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
+		mmu_notifier_numa_protect(vma->vm_mm, addr, addr + PMD_SIZE);
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
diff --git a/mm/mprotect.c b/mm/mprotect.c
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -164,6 +164,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 				    !toptier)
 					xchg_page_access_time(page,
 						jiffies_to_msecs(jiffies));
+				mmu_notifier_numa_protect(vma->vm_mm, addr, addr + PAGE_SIZE);
 			}
 
 			oldpte = ptep_modify_prot_start(vma, addr, pte);
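Note that the range passed to the notifier matches the granularity of the
protection change: addr + PMD_SIZE in the change_huge_pmd() hunk and
addr + PAGE_SIZE in the change_pte_range() hunk.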