[v9,1/5] ksm: support unsharing KSM-placed zero pages

Message ID 20230524055711.20387-1-yang.yang29@zte.com.cn
State New
Series ksm: support tracking KSM-placed zero-pages

Commit Message

Yang Yang May 24, 2023, 5:57 a.m. UTC
  From: xu xin <xu.xin16@zte.com.cn>

When KSM's use_zero_pages is enabled, madvise(addr, len, MADV_UNMERGEABLE)
and other ways of triggering unsharing (like writing 2 to
/sys/kernel/mm/ksm/run) will *not* actually unshare the shared zeropages
placed by KSM (which contradicts the MADV_UNMERGEABLE documentation). As
these KSM-placed zero pages are out of KSM's control, the related counts of
KSM pages don't expose how many zero pages were placed by KSM (these special
zero pages are different from ordinary, initially mapped zero pages, because
zero pages mapped into MADV_UNMERGEABLE areas are expected to be complete,
unshared pages).
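
For reference, below is a minimal user-space sketch of the call sequence this
paragraph refers to. It is illustrative only and not part of the patch: it
assumes CONFIG_KSM, root privileges for the sysfs writes, relies on a crude
sleep() to give ksmd time to scan, and the ksm_write() helper is hypothetical.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

/* Hypothetical helper for this sketch: write a value to a KSM sysfs knob. */
static void ksm_write(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (f) {
		fputs(val, f);
		fclose(f);
	}
}

int main(void)
{
	size_t len = 16 * 4096;
	char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (addr == MAP_FAILED)
		return 1;

	ksm_write("/sys/kernel/mm/ksm/use_zero_pages", "1");
	ksm_write("/sys/kernel/mm/ksm/run", "1");

	memset(addr, 0, len);			/* fault in all-zero anonymous pages */
	madvise(addr, len, MADV_MERGEABLE);	/* let ksmd map the shared zeropage over them */
	sleep(5);				/* crude: give ksmd time to scan */
	madvise(addr, len, MADV_UNMERGEABLE);	/* expected to leave complete, unshared pages behind */
	return 0;
}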

To avoid blindly unsharing all shared zero pages in applicable VMAs, the
patch uses pte_mkdirty (which is architecture-dependent) to mark KSM-placed
zero pages. Thus, MADV_UNMERGEABLE will only unshare those KSM-placed zero
pages.

In addition, later patches will reuse this mechanism to reliably identify
KSM-placed zero pages and account for them properly (e.g., when calculating
the KSM profit including zeropages).

The patch does not degrade the performance of use_zero_pages, since it
doesn't change how empty pages are merged by that feature.

Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Suggested-by: David Hildenbrand <david@redhat.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: Xuexin Jiang <jiang.xuexin@zte.com.cn>
Reviewed-by: Xiaokai Ran <ran.xiaokai@zte.com.cn>
Reviewed-by: Yang Yang <yang.yang29@zte.com.cn>
---
 include/linux/ksm.h |  8 ++++++++
 mm/ksm.c            | 11 ++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)
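
Condensed, for orientation while reading the diff: the two halves of the
mechanism, pulled from the hunks shown below. This is a sketch, not a
compilable unit on its own; the surrounding locking, rmap and accounting
context is omitted.

/* include/linux/ksm.h: a dirty PTE mapping the shared zeropage means
 * "this zeropage was placed by KSM while deduplicating memory". */
#define is_ksm_zero_pte(pte)	(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))

/* mm/ksm.c, replace_page(): when mapping the shared zeropage, also set the
 * dirty bit so this mapping can later be told apart from other zeropage
 * mappings. */
newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage),
					   vma->vm_page_prot)));

/* mm/ksm.c, break_ksm_pmd_entry(): unsharing now also triggers for
 * KSM-placed zeropages, not only for regular KSM pages. */
ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);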
  

Comments

David Hildenbrand May 24, 2023, 7:10 a.m. UTC | #1
On 24.05.23 07:57, Yang Yang wrote:
> From: xu xin <xu.xin16@zte.com.cn>
> 
> When KSM's use_zero_pages is enabled, madvise(addr, len, MADV_UNMERGEABLE)
> and other ways of triggering unsharing (like writing 2 to
> /sys/kernel/mm/ksm/run) will *not* actually unshare the shared zeropages
> placed by KSM (which contradicts the MADV_UNMERGEABLE documentation). As
> these KSM-placed zero pages are out of KSM's control, the related counts of
> KSM pages don't expose how many zero pages were placed by KSM (these special
> zero pages are different from ordinary, initially mapped zero pages, because
> zero pages mapped into MADV_UNMERGEABLE areas are expected to be complete,
> unshared pages).
> 
> To avoid blindly unsharing all shared zero pages in applicable VMAs, the
> patch uses pte_mkdirty (which is architecture-dependent) to mark KSM-placed
> zero pages. Thus, MADV_UNMERGEABLE will only unshare those KSM-placed zero
> pages.
> 
> In addition, later patches will reuse this mechanism to reliably identify
> KSM-placed zero pages and account for them properly (e.g., when calculating
> the KSM profit including zeropages).
> 
> The patch does not degrade the performance of use_zero_pages, since it
> doesn't change how empty pages are merged by that feature.
> 
> Signed-off-by: xu xin <xu.xin16@zte.com.cn>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Cc: Xuexin Jiang <jiang.xuexin@zte.com.cn>
> Reviewed-by: Xiaokai Ran <ran.xiaokai@zte.com.cn>
> Reviewed-by: Yang Yang <yang.yang29@zte.com.cn>
> ---
>   include/linux/ksm.h |  8 ++++++++
>   mm/ksm.c            | 11 ++++++++---
>   2 files changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 899a314bc487..4fd5f4a50bac 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -26,6 +26,12 @@ int ksm_disable(struct mm_struct *mm);
>   
>   int __ksm_enter(struct mm_struct *mm);
>   void __ksm_exit(struct mm_struct *mm);
> +/*
> + * To identify zeropages that were mapped by KSM, we reuse the dirty bit
> + * in the PTE. If the PTE is dirty, the zeropage was mapped by KSM when
> + * deduplicating memory.
> + */
> +#define is_ksm_zero_pte(pte)	(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
>   
>   static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>   {
> @@ -95,6 +101,8 @@ static inline void ksm_exit(struct mm_struct *mm)
>   {
>   }
>   
> +#define is_ksm_zero_pte(pte)	0
> +

Not required in this patch (and AFAIKS in the others). So you can drop that.

Reviewed-by: David Hildenbrand <david@redhat.com>
  

Patch

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 899a314bc487..4fd5f4a50bac 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -26,6 +26,12 @@  int ksm_disable(struct mm_struct *mm);
 
 int __ksm_enter(struct mm_struct *mm);
 void __ksm_exit(struct mm_struct *mm);
+/*
+ * To identify zeropages that were mapped by KSM, we reuse the dirty bit
+ * in the PTE. If the PTE is dirty, the zeropage was mapped by KSM when
+ * deduplicating memory.
+ */
+#define is_ksm_zero_pte(pte)	(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
 
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
@@ -95,6 +101,8 @@  static inline void ksm_exit(struct mm_struct *mm)
 {
 }
 
+#define is_ksm_zero_pte(pte)	0
+
 #ifdef CONFIG_MEMORY_FAILURE
 static inline void collect_procs_ksm(struct page *page,
 				     struct list_head *to_kill, int force_early)
diff --git a/mm/ksm.c b/mm/ksm.c
index 0156bded3a66..f31c789406b1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -447,7 +447,8 @@  static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long nex
 		if (is_migration_entry(entry))
 			page = pfn_swap_entry_to_page(entry);
 	}
-	ret = page && PageKsm(page);
+	/* return 1 if the page is a normal KSM page or a KSM-placed zero page */
+	ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);
 	pte_unmap_unlock(pte, ptl);
 	return ret;
 }
@@ -1220,8 +1221,12 @@  static int replace_page(struct vm_area_struct *vma, struct page *page,
 		page_add_anon_rmap(kpage, vma, addr, RMAP_NONE);
 		newpte = mk_pte(kpage, vma->vm_page_prot);
 	} else {
-		newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
-					       vma->vm_page_prot));
+		/*
+		 * Use pte_mkdirty to mark the zero page mapped by KSM, and then
+		 * we can easily track all KSM-placed zero pages by checking if
+		 * the dirty bit in the zero page's PTE is set.
+		 */
+		newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot)));
 		/*
 		 * We're replacing an anonymous page with a zero page, which is
 		 * not anonymous. We need to do proper accounting otherwise we