[v7,1/6] ksm: support unsharing KSM-placed zero pages
From: xu xin <xu.xin16@zte.com.cn>
When use_zero_pages of KSM is enabled, madvise(addr, len, MADV_UNMERGEABLE)
and other unsharing triggers (like writing 2 to /sys/kernel/mm/ksm/run) will
*not* actually unshare the shared zeropages placed by KSM (which is against
the MADV_UNMERGEABLE documentation). Because these KSM-placed zero pages are
out of KSM's control, the related KSM page counts do not expose how many
zero pages were placed by KSM. (These special zero pages are different from
zero pages initially mapped by the application, because a zero page mapped
into a MADV_UNMERGEABLE area is expected to be a complete and unshared
page.)
To avoid blindly unsharing all shared zero pages in applicable VMAs, this
patch uses pte_mkdirty (which is architecture-dependent) to mark KSM-placed
zero pages. Thus, MADV_UNMERGEABLE will only unshare zero pages that were
actually placed by KSM.
The architecture must guarantee that pte_mkdirty won't treat the pte as
writable. Otherwise, it would break the write-protected state of KSM pages
and affect KSM functionality. For safety, this feature is restricted to the
tested and known-working architectures for now.
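For illustration, a minimal sketch of the scheme (simplified; the actual
helpers are introduced in include/linux/ksm.h below):

    /* Mark: when KSM maps the shared zeropage, the pte is made
     * special and dirty (but, per the guarantee above, not writable). */
    pte_t pte = pfn_pte(my_zero_pfn(addr), vma->vm_page_prot);
    pte = pte_mkdirty(pte_mkspecial(pte));

    /* Detect: a pte refers to a KSM-placed zero page iff it maps
     * the zero pfn and carries the dirty bit. */
    bool ksm_placed = is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte);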
This patch does not degrade the performance of use_zero_pages, as it does
not change how that feature merges empty pages.
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Suggested-by: David Hildenbrand <david@redhat.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: Xuexin Jiang <jiang.xuexin@zte.com.cn>
Reviewed-by: Xiaokai Ran <ran.xiaokai@zte.com.cn>
Reviewed-by: Yang Yang <yang.yang29@zte.com.cn>
---
include/linux/ksm.h | 9 +++++++++
mm/Kconfig | 24 +++++++++++++++++++++++-
mm/ksm.c | 5 +++--
3 files changed, 35 insertions(+), 3 deletions(-)
@@ -95,4 +95,13 @@ static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
#endif /* CONFIG_MMU */
#endif /* !CONFIG_KSM */
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+/* use pte_mkdirty to track a KSM-placed zero page */
+#define set_pte_ksm_zero(pte) pte_mkdirty(pte_mkspecial(pte))
+#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
+#else /* !CONFIG_KSM_ZERO_PAGES_TRACK */
+#define set_pte_ksm_zero(pte) pte_mkspecial(pte)
+#define is_ksm_zero_pte(pte) 0
+#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
+
#endif /* __LINUX_KSM_H */
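For reference, a sketch of how these helpers are meant to be used; the
actual call sites are in the mm/ksm.c hunks below:

    /* In replace_page(): install a marked zero-page pte. */
    newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
                                      vma->vm_page_prot));

    /* In break_ksm_pmd_entry(): also report KSM-placed zero ptes so
     * that unsharing breaks COW and leaves a private, unshared page. */
    ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);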
@@ -666,7 +666,7 @@ config MMU_NOTIFIER
bool
select INTERVAL_TREE
-config KSM
+menuconfig KSM
bool "Enable KSM for page merging"
depends on MMU
select XXHASH
@@ -681,6 +681,28 @@ config KSM
until a program has madvised that an area is MADV_MERGEABLE, and
root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
+if KSM
+
+config KSM_ZERO_PAGES_TRACK
+ bool "Support tracking KSM-placed zero pages"
+ depends on KSM
+ depends on ARM || ARM64 || X86
+ default y
+ help
+ This allows KSM to track KSM-placed zero pages, including support
+ for unsharing and counting them. If you say N, then
+ madvise(,,MADV_UNMERGEABLE) cannot unshare the KSM-placed zero pages,
+ and users cannot know how many zero pages were placed by KSM. This
+ feature depends on pte_mkdirty (which is architecture-dependent) to
+ mark KSM-placed zero pages.
+
+ The architecture must guarantee that pte_mkdirty won't treat the pte
+ as writable. Otherwise, it would break the write-protected state of
+ KSM pages and affect KSM functionality. For safety, this feature is
+ restricted to the tested and known-working architectures.
+
+endif # KSM
+
config DEFAULT_MMAP_MIN_ADDR
int "Low address space to protect from user allocation"
depends on MMU
@@ -447,7 +447,8 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long nex
if (is_migration_entry(entry))
page = pfn_swap_entry_to_page(entry);
}
- ret = page && PageKsm(page);
+ /* return 1 if the page is a normal KSM page or a KSM-placed zero page */
+ ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);
pte_unmap_unlock(pte, ptl);
return ret;
}
@@ -1240,7 +1241,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
page_add_anon_rmap(kpage, vma, addr, RMAP_NONE);
newpte = mk_pte(kpage, vma->vm_page_prot);
} else {
- newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
+ newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
vma->vm_page_prot));
/*
* We're replacing an anonymous page with a zero page, which is