[-V2] mm,unmap: avoid flushing TLB in batch if PTE is inaccessible
Commit Message
Hi, Andrew,
Version 1 of this patch has been merged into the mm-unstable branch. If
you want to move that patch into mm-stable soon, it may be better to
update it with this new version first. If you prefer to do that after
v6.4-rc1, I will rebase this patch and resend it once v6.4-rc1 is
released.
Hi, Amit,
The patch has been changed based on Xin's comments. I have kept your
"Reviewed-by" because I think the change is trivial, but if you consider
that inappropriate, I will drop it.
Best Regards,
Huang, Ying
------------------------------->8------------------------------------------
0Day/LKP reported a performance regression for commit
7e12beb8ca2a ("migrate_pages: batch flushing TLB"). That commit batches
the TLB flushing during page migration, so in try_to_migrate_one()
ptep_clear_flush() is replaced with set_tlb_ubc_flush_pending(). Further
investigation found that ptep_clear_flush() avoids the TLB flush
entirely when the PTE is inaccessible, and the batched TLB flushing
path can apply the same optimization to improve performance.
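For reference, the generic ptep_clear_flush() in mm/pgtable-generic.c
already contains this optimization; it behaves roughly like the
following (a simplified sketch, not a verbatim copy of any particular
kernel version):

pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
		       pte_t *ptep)
{
	struct mm_struct *mm = (vma)->vm_mm;
	pte_t pte;

	pte = ptep_get_and_clear(mm, address, ptep);
	/* No CPU can hold a TLB entry for a PTE that was not accessible. */
	if (pte_accessible(mm, pte))
		flush_tlb_page(vma, address);
	return pte;
}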
So in this patch, set_tlb_ubc_flush_pending() takes the PTE value and
returns early, without queuing a batched TLB flush, when
pte_accessible() reports that the PTE is inaccessible; this covers both
try_to_unmap_one() and try_to_migrate_one(). Tests show that with the
patch the benchmark score of the anon-cow-rand-mt test case of the
vm-scalability test suite improves by up to 2.1% on an Intel server
machine, and the number of TLB-flush IPIs is reduced by up to 44.3%.
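What "inaccessible" means is architecture specific. On x86,
pte_accessible() behaves roughly as sketched below (simplified; the
exact helpers differ across kernel versions): a PTE may only be cached
in the TLB if it is present, or if it is PROT_NONE while another thread
still has a TLB flush pending for the mm.

static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
{
	/* Present PTEs may be cached in the TLB. */
	if (pte_flags(a) & _PAGE_PRESENT)
		return true;

	/*
	 * A PROT_NONE PTE may still be cached on other CPUs until the
	 * pending TLB flush for this mm completes.
	 */
	if ((pte_flags(a) & _PAGE_PROTNONE) && mm_tlb_flush_pending(mm))
		return true;

	return false;
}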
Link: https://lore.kernel.org/oe-lkp/202303192325.ecbaf968-yujie.liu@intel.com
Link: https://lore.kernel.org/oe-lkp/ab92aaddf1b52ede15e2c608696c36765a2602c1.camel@intel.com/
Fixes: 7e12beb8ca2a ("migrate_pages: batch flushing TLB")
Reported-by: kernel test robot <yujie.liu@intel.com>
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Nadav Amit <namit@vmware.com>
Cc: haoxin <xhao@linux.alibaba.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
---
mm/rmap.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
Comments
On 2023/4/24 2:54 PM, Huang Ying wrote:
> [...]
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 8632e02661ac..be19232e94f4 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -641,10 +641,14 @@ void try_to_unmap_flush_dirty(void)
> #define TLB_FLUSH_BATCH_PENDING_LARGE \
> (TLB_FLUSH_BATCH_PENDING_MASK / 2)
>
> -static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
> +static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
> {
> struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
> int batch, nbatch;
> + bool writable = pte_dirty(pteval);
> +
> + if (!pte_accessible(mm, pteval))
> + return;
LGTM
Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
>
> arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
> tlb_ubc->flush_required = true;
> @@ -731,7 +735,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
> }
> }
> #else
> -static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
> +static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
> {
> }
>
> @@ -1582,7 +1586,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> */
> pteval = ptep_get_and_clear(mm, address, pvmw.pte);
>
> - set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
> + set_tlb_ubc_flush_pending(mm, pteval);
> } else {
> pteval = ptep_clear_flush(vma, address, pvmw.pte);
> }
> @@ -1963,7 +1967,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> */
> pteval = ptep_get_and_clear(mm, address, pvmw.pte);
>
> - set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
> + set_tlb_ubc_flush_pending(mm, pteval);
> } else {
> pteval = ptep_clear_flush(vma, address, pvmw.pte);
> }
@@ -641,10 +641,14 @@ void try_to_unmap_flush_dirty(void)
#define TLB_FLUSH_BATCH_PENDING_LARGE \
(TLB_FLUSH_BATCH_PENDING_MASK / 2)
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
int batch, nbatch;
+ bool writable = pte_dirty(pteval);
+
+ if (!pte_accessible(mm, pteval))
+ return;
arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
tlb_ubc->flush_required = true;
@@ -731,7 +735,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
}
}
#else
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
{
}
@@ -1582,7 +1586,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
*/
pteval = ptep_get_and_clear(mm, address, pvmw.pte);
- set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+ set_tlb_ubc_flush_pending(mm, pteval);
} else {
pteval = ptep_clear_flush(vma, address, pvmw.pte);
}
@@ -1963,7 +1967,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
*/
pteval = ptep_get_and_clear(mm, address, pvmw.pte);
- set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+ set_tlb_ubc_flush_pending(mm, pteval);
} else {
pteval = ptep_clear_flush(vma, address, pvmw.pte);
}
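For context on why skipping set_tlb_ubc_flush_pending() also avoids the
later IPIs: the pending batch is drained by try_to_unmap_flush() in
mm/rmap.c, which behaves roughly like this (simplified sketch):

void try_to_unmap_flush(void)
{
	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;

	/* Nothing was queued, so no cross-CPU flush (and no IPI) is needed. */
	if (!tlb_ubc->flush_required)
		return;

	arch_tlbbatch_flush(&tlb_ubc->arch);
	tlb_ubc->flush_required = false;
	tlb_ubc->writable = false;
}

If none of the cleared PTEs were accessible, flush_required stays false
and the whole deferred flush, including the IPIs measured above, is
skipped.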