From patchwork Fri Feb 10 01:16:42 2023
X-Patchwork-Submitter: Yang Yang
X-Patchwork-Id: 55178
Date: Fri, 10 Feb 2023 09:16:42 +0800 (CST)
Message-ID: <202302100916423431376@zte.com.cn>
Subject: [PATCH v6 1/6] ksm: abstract the function try_to_get_old_rmap_item
From: xu xin

A new function, try_to_get_old_rmap_item(), is abstracted out of
get_next_rmap_item(). It will be reused by the subsequent patches on
counting ksm_zero_pages. This improves the readability and reusability
of the KSM code.

Signed-off-by: xu xin
Cc: David Hildenbrand
Cc: Claudio Imbrenda
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang

v5->v6: Modify some comments according to David's suggestions.
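As background, a minimal userspace sketch (not part of this patch; the
2 MiB size, the fill pattern and the getchar() pause are arbitrary
illustration choices) of how a range becomes subject to KSM scanning
and therefore gets rmap_items like the ones handled here:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t size = 2 * 1024 * 1024;
	char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0xaa, size);			/* identical, non-zero content */
	if (madvise(buf, size, MADV_MERGEABLE)) {	/* hand the range to ksmd */
		perror("madvise(MADV_MERGEABLE)");
		return 1;
	}
	/*
	 * With "echo 1 > /sys/kernel/mm/ksm/run" (as root), ksmd scans the
	 * range, creates rmap_items for it and merges the identical pages.
	 */
	getchar();
	return 0;
}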
---
 mm/ksm.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 83e2f74ae7da..905a79d213da 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2214,23 +2214,38 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_item)
 	}
 }
 
-static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
-					    struct ksm_rmap_item **rmap_list,
-					    unsigned long addr)
+static struct ksm_rmap_item *try_to_get_old_rmap_item(unsigned long addr,
+					 struct ksm_rmap_item **rmap_list)
 {
-	struct ksm_rmap_item *rmap_item;
-
 	while (*rmap_list) {
-		rmap_item = *rmap_list;
+		struct ksm_rmap_item *rmap_item = *rmap_list;
+
 		if ((rmap_item->address & PAGE_MASK) == addr)
 			return rmap_item;
 		if (rmap_item->address > addr)
 			break;
 		*rmap_list = rmap_item->rmap_list;
+		/*
+		 * If we end up here, the VMA is MADV_UNMERGEABLE or its page
+		 * is ineligible or discarded, e.g. MADV_DONTNEED.
+		 */
 		remove_rmap_item_from_tree(rmap_item);
 		free_rmap_item(rmap_item);
 	}
+
+	return NULL;
+}
+
+static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
+					    struct ksm_rmap_item **rmap_list,
+					    unsigned long addr)
+{
+	struct ksm_rmap_item *rmap_item;
+
+	rmap_item = try_to_get_old_rmap_item(addr, rmap_list);
+	if (rmap_item)
+		return rmap_item;
+
 	rmap_item = alloc_rmap_item();
 	if (rmap_item) {
 		/* It has already been zeroed */

From patchwork Fri Feb 10 01:17:51 2023
X-Patchwork-Submitter: Yang Yang
X-Patchwork-Id: 55181
Date: Fri, 10 Feb 2023 09:17:51 +0800 (CST)
Message-ID: <202302100917515661425@zte.com.cn>
Subject: [PATCH v6 2/6] ksm: support unsharing zero pages placed by KSM
From: xu xin

use_zero_pages may be very useful, not just because of cache colouring
as described in the documentation, but also because it can accelerate
the merging of empty pages when there are plenty of them (pages full of
zeros), since the time spent on page-by-page comparisons
(unstable_tree_search_insert) is saved.

But when use_zero_pages is enabled, madvise(addr, len,
MADV_UNMERGEABLE) and the other unsharing triggers (such as writing 2
to /sys/kernel/mm/ksm/run) do *not* actually unshare the shared zero
pages placed by KSM, which contradicts the MADV_UNMERGEABLE
documentation.
These KSM-placed zero pages are out of KSM's control, and the existing
KSM counters do not tell how many zero pages have been placed by KSM
(these special zero pages differ from ordinarily mapped zero pages,
because a zero page mapped into a MADV_UNMERGEABLE area is expected to
be a complete, unshared page).

To avoid blindly unsharing all shared zero pages in applicable VMAs,
this patch introduces a dedicated flag, ZERO_PAGE_FLAG, to mark the
rmap_items of those shared zero pages, and guarantees that these
rmap_items are not freed as long as their zero pages have not been
written to, so that only the *KSM-placed* zero pages are unshared.

The patch does not degrade the performance of use_zero_pages, as it
does not change how empty pages are merged under use_zero_pages.

Signed-off-by: xu xin
Reported-by: David Hildenbrand
Cc: Claudio Imbrenda
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang
---
 mm/ksm.c | 141 +++++++++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 111 insertions(+), 30 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 905a79d213da..ab04b44679c8 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -214,6 +214,7 @@ struct ksm_rmap_item {
 #define SEQNR_MASK	0x0ff	/* low bits of unstable tree seqnr */
 #define UNSTABLE_FLAG	0x100	/* is a node of the unstable tree */
 #define STABLE_FLAG	0x200	/* is listed from the stable tree */
+#define ZERO_PAGE_FLAG	0x400	/* is zero page placed by KSM */
 
 /* The stable and unstable tree heads */
 static struct rb_root one_stable_tree[1] = { RB_ROOT };
@@ -420,6 +421,11 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
 	return atomic_read(&mm->mm_users) == 0;
 }
 
+enum break_ksm_pmd_entry_return_flag {
+	HAVE_KSM_PAGE = 1,
+	HAVE_ZERO_PAGE
+};
+
 static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 			struct mm_walk *walk)
 {
@@ -427,6 +433,7 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 	spinlock_t *ptl;
 	pte_t *pte;
 	int ret;
+	bool is_zero_page = false;
 
 	if (pmd_leaf(*pmd) || !pmd_present(*pmd))
 		return 0;
@@ -434,6 +441,8 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	if (pte_present(*pte)) {
 		page = vm_normal_page(walk->vma, addr, *pte);
+		if (!page)
+			is_zero_page = is_zero_pfn(pte_pfn(*pte));
 	} else if (!pte_none(*pte)) {
 		swp_entry_t entry = pte_to_swp_entry(*pte);
 
@@ -444,7 +453,14 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 		if (is_migration_entry(entry))
 			page = pfn_swap_entry_to_page(entry);
 	}
-	ret = page && PageKsm(page);
+
+	if (page && PageKsm(page))
+		ret = HAVE_KSM_PAGE;
+	else if (is_zero_page)
+		ret = HAVE_ZERO_PAGE;
+	else
+		ret = 0;
+
 	pte_unmap_unlock(pte, ptl);
 	return ret;
 }
@@ -466,19 +482,22 @@ static const struct mm_walk_ops break_ksm_ops = {
  * of the process that owns 'vma'.  We also do not want to enforce
  * protection keys here anyway.
  */
-static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
+static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
+		     bool unshare_zero_page)
 {
 	vm_fault_t ret = 0;
 
 	do {
-		int ksm_page;
+		int walk_result;
 
 		cond_resched();
-		ksm_page = walk_page_range_vma(vma, addr, addr + 1,
+		walk_result = walk_page_range_vma(vma, addr, addr + 1,
 					       &break_ksm_ops, NULL);
-		if (WARN_ON_ONCE(ksm_page < 0))
-			return ksm_page;
-		if (!ksm_page)
+		if (WARN_ON_ONCE(walk_result < 0))
+			return walk_result;
+		if (!walk_result)
+			return 0;
+		if (walk_result == HAVE_ZERO_PAGE && !unshare_zero_page)
 			return 0;
 		ret = handle_mm_fault(vma, addr,
 				      FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE,
@@ -539,7 +558,7 @@ static void break_cow(struct ksm_rmap_item *rmap_item)
 	mmap_read_lock(mm);
 	vma = find_mergeable_vma(mm, addr);
 	if (vma)
-		break_ksm(vma, addr);
+		break_ksm(vma, addr, false);
 	mmap_read_unlock(mm);
 }
 
@@ -764,6 +783,30 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
 	return NULL;
 }
 
+/*
+ * Cleaning the rmap_item's ZERO_PAGE_FLAG
+ * This function will be called when unshare or writing on zero pages.
+ */
+static inline void clean_rmap_item_zero_flag(struct ksm_rmap_item *rmap_item)
+{
+	if (rmap_item->address & ZERO_PAGE_FLAG)
+		rmap_item->address &= PAGE_MASK;
+}
+
+/* Only called when rmap_item is going to be freed */
+static inline void unshare_zero_pages(struct ksm_rmap_item *rmap_item)
+{
+	struct vm_area_struct *vma;
+
+	if (rmap_item->address & ZERO_PAGE_FLAG) {
+		vma = vma_lookup(rmap_item->mm, rmap_item->address);
+		if (vma && !ksm_test_exit(rmap_item->mm))
+			break_ksm(vma, rmap_item->address, true);
+	}
+	/* Put at last. */
+	clean_rmap_item_zero_flag(rmap_item);
+}
+
 /*
  * Removing rmap_item from stable or unstable tree.
  * This function will clean the information from the stable/unstable tree.
@@ -824,6 +867,7 @@ static void remove_trailing_rmap_items(struct ksm_rmap_item **rmap_list)
 		struct ksm_rmap_item *rmap_item = *rmap_list;
 		*rmap_list = rmap_item->rmap_list;
 		remove_rmap_item_from_tree(rmap_item);
+		unshare_zero_pages(rmap_item);
 		free_rmap_item(rmap_item);
 	}
 }
@@ -853,7 +897,7 @@ static int unmerge_ksm_pages(struct vm_area_struct *vma,
 		if (signal_pending(current))
 			err = -ERESTARTSYS;
 		else
-			err = break_ksm(vma, addr);
+			err = break_ksm(vma, addr, false);
 	}
 	return err;
 }
@@ -2044,6 +2088,39 @@ static void stable_tree_append(struct ksm_rmap_item *rmap_item,
 	rmap_item->mm->ksm_merging_pages++;
 }
 
+static int try_to_merge_with_kernel_zero_page(struct ksm_rmap_item *rmap_item,
+					      struct page *page)
+{
+	struct mm_struct *mm = rmap_item->mm;
+	int err = 0;
+
+	/*
+	 * It should not take ZERO_PAGE_FLAG because on one hand,
+	 * get_next_rmap_item don't return zero pages' rmap_item.
+	 * On the other hand, even if zero page was writen as
+	 * anonymous page, rmap_item has been cleaned after
+	 * stable_tree_search
+	 */
+	if (!WARN_ON_ONCE(rmap_item->address & ZERO_PAGE_FLAG)) {
+		struct vm_area_struct *vma;
+
+		mmap_read_lock(mm);
+		vma = find_mergeable_vma(mm, rmap_item->address);
+		if (vma) {
+			err = try_to_merge_one_page(vma, page,
+						    ZERO_PAGE(rmap_item->address));
+			if (!err)
+				rmap_item->address |= ZERO_PAGE_FLAG;
+		} else {
+			/* If the vma is out of date, we do not need to continue. */
+			err = 0;
+		}
+		mmap_read_unlock(mm);
+	}
+
+	return err;
+}
+
 /*
  * cmp_and_merge_page - first see if page can be merged into the stable tree;
  * if not, compare checksum to previous and if it's the same, see if page can
@@ -2055,7 +2132,6 @@ static void stable_tree_append(struct ksm_rmap_item *rmap_item,
  */
 static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_item)
 {
-	struct mm_struct *mm = rmap_item->mm;
 	struct ksm_rmap_item *tree_rmap_item;
 	struct page *tree_page = NULL;
 	struct ksm_stable_node *stable_node;
@@ -2092,6 +2168,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_item)
 	}
 
 	remove_rmap_item_from_tree(rmap_item);
+	clean_rmap_item_zero_flag(rmap_item);
 
 	if (kpage) {
 		if (PTR_ERR(kpage) == -EBUSY)
@@ -2128,29 +2205,16 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_item)
 	 * Same checksum as an empty page. We attempt to merge it with the
 	 * appropriate zero page if the user enabled this via sysfs.
 	 */
-	if (ksm_use_zero_pages && (checksum == zero_checksum)) {
-		struct vm_area_struct *vma;
-
-		mmap_read_lock(mm);
-		vma = find_mergeable_vma(mm, rmap_item->address);
-		if (vma) {
-			err = try_to_merge_one_page(vma, page,
-					ZERO_PAGE(rmap_item->address));
-		} else {
+	if (ksm_use_zero_pages) {
+		if (checksum == zero_checksum)
 			/*
-			 * If the vma is out of date, we do not need to
-			 * continue.
+			 * In case of failure, the page was not really empty, so we
+			 * need to continue. Otherwise we're done.
 			 */
-			err = 0;
-		}
-		mmap_read_unlock(mm);
-		/*
-		 * In case of failure, the page was not really empty, so we
-		 * need to continue. Otherwise we're done.
-		 */
-		if (!err)
-			return;
+			if (!try_to_merge_with_kernel_zero_page(rmap_item, page))
+				return;
 	}
+
 	tree_rmap_item =
 		unstable_tree_search_insert(rmap_item, page, &tree_page);
 	if (tree_rmap_item) {
@@ -2230,6 +2294,7 @@ static struct ksm_rmap_item *try_to_get_old_rmap_item(unsigned long addr,
 		 * is ineligible or discarded, e.g. MADV_DONTNEED.
 		 */
 		remove_rmap_item_from_tree(rmap_item);
+		unshare_zero_pages(rmap_item);
 		free_rmap_item(rmap_item);
 	}
 
@@ -2352,6 +2417,22 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		}
 		if (is_zone_device_page(*page))
 			goto next_page;
+		if (is_zero_pfn(page_to_pfn(*page))) {
+			/*
+			 * To monitor ksm zero pages which becomes non-anonymous,
+			 * we have to save each rmap_item of zero pages by
+			 * try_to_get_old_rmap_item() walking on
+			 * ksm_scan.rmap_list, otherwise their rmap_items will be
+			 * freed by the next turn of get_next_rmap_item(). The
+			 * function get_next_rmap_item() will free all "skipped"
+			 * rmap_items because it thinks its areas as UNMERGEABLE.
+			 */
+			rmap_item = try_to_get_old_rmap_item(ksm_scan.address,
+							     ksm_scan.rmap_list);
+			if (rmap_item && (rmap_item->address & ZERO_PAGE_FLAG))
+				ksm_scan.rmap_list = &rmap_item->rmap_list;
+			goto next_page;
+		}
 		if (PageAnon(*page)) {
 			flush_anon_page(vma, *page, ksm_scan.address);
 			flush_dcache_page(*page);
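To illustrate the behaviour this patch enables, a userspace sketch (not
part of the patch; the size and the sleep are arbitrary stand-ins):
with use_zero_pages enabled, ksmd points the zero-filled range at the
shared kernel zero page, and MADV_UNMERGEABLE afterwards now breaks
those zero-page mappings as well.

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t size = 2 * 1024 * 1024;
	char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0, size);			/* zero-filled: mergeable with the zero page */
	madvise(buf, size, MADV_MERGEABLE);	/* ksmd may now place shared zero pages here */
	sleep(10);				/* crude stand-in for "wait for some full scans" */
	madvise(buf, size, MADV_UNMERGEABLE);	/* with this patch, also unshares KSM zero pages */
	return 0;
}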
From patchwork Fri Feb 10 01:18:52 2023
X-Patchwork-Submitter: Yang Yang
X-Patchwork-Id: 55176
Date: Fri, 10 Feb 2023 09:18:52 +0800 (CST)
Message-ID: <202302100918524481474@zte.com.cn>
Subject: [PATCH v6 3/6] ksm: count all zero pages placed by KSM
From: xu xin

Because pages_sharing and pages_shared do not include the number of
zero pages merged by KSM, we cannot tell how many zero pages have been
placed by KSM when use_zero_pages is enabled, so KSM is not transparent
about all of the pages it has actually merged.

In the early days of use_zero_pages, these zero pages could not be
unshared by means such as MADV_UNMERGEABLE, so it was hard to count how
many times one of them was subsequently unmerged. But now that
KSM-placed zero pages can be unshared accurately, we can easily count
both how many times a page full of zeroes was merged with the zero page
and how many times one of those pages was then unmerged.
This helps to estimate memory demands when each and every shared page
could get unshared.

So add zero_pages_sharing under /sys/kernel/mm/ksm/ to show the number
of all zero pages placed by KSM.

Signed-off-by: xu xin
Cc: Claudio Imbrenda
Cc: David Hildenbrand
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang

v4->v5: fix warning mm/ksm.c:3238:9: warning: no previous prototype for
'zero_pages_sharing_show' [-Wmissing-prototypes].

Reviewed-by: Claudio Imbrenda
---
 mm/ksm.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index ab04b44679c8..1fa668e1fe82 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -276,6 +276,9 @@ static unsigned int zero_checksum __read_mostly;
 /* Whether to merge empty (zeroed) pages with actual zero pages */
 static bool ksm_use_zero_pages __read_mostly;
 
+/* The number of zero pages placed by KSM use_zero_pages */
+static unsigned long ksm_zero_pages_sharing;
+
 #ifdef CONFIG_NUMA
 /* Zeroed when merging across nodes is not allowed */
 static unsigned int ksm_merge_across_nodes = 1;
@@ -789,8 +792,10 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
  */
 static inline void clean_rmap_item_zero_flag(struct ksm_rmap_item *rmap_item)
 {
-	if (rmap_item->address & ZERO_PAGE_FLAG)
+	if (rmap_item->address & ZERO_PAGE_FLAG) {
+		ksm_zero_pages_sharing--;
 		rmap_item->address &= PAGE_MASK;
+	}
 }
 
 /* Only called when rmap_item is going to be freed */
@@ -2109,8 +2114,10 @@ static int try_to_merge_with_kernel_zero_page(struct ksm_rmap_item *rmap_item,
 		if (vma) {
 			err = try_to_merge_one_page(vma, page,
 						    ZERO_PAGE(rmap_item->address));
-			if (!err)
+			if (!err) {
 				rmap_item->address |= ZERO_PAGE_FLAG;
+				ksm_zero_pages_sharing++;
+			}
 		} else {
 			/* If the vma is out of date, we do not need to continue. */
 			err = 0;
@@ -3230,6 +3237,13 @@ static ssize_t pages_volatile_show(struct kobject *kobj,
 }
 KSM_ATTR_RO(pages_volatile);
 
+static ssize_t zero_pages_sharing_show(struct kobject *kobj,
+				       struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%ld\n", ksm_zero_pages_sharing);
+}
+KSM_ATTR_RO(zero_pages_sharing);
+
 static ssize_t stable_node_dups_show(struct kobject *kobj,
 				     struct kobj_attribute *attr, char *buf)
 {
@@ -3285,6 +3299,7 @@ static struct attribute *ksm_attrs[] = {
 	&pages_sharing_attr.attr,
 	&pages_unshared_attr.attr,
 	&pages_volatile_attr.attr,
+	&zero_pages_sharing_attr.attr,
 	&full_scans_attr.attr,
 #ifdef CONFIG_NUMA
 	&merge_across_nodes_attr.attr,
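A small sketch (not part of the patch) of how the new global counter
could be read from userspace once this series is applied:

#include <stdio.h>

int main(void)
{
	unsigned long zero_pages_sharing = 0;
	FILE *f = fopen("/sys/kernel/mm/ksm/zero_pages_sharing", "r");

	if (!f)
		return 1;	/* kernel without this series */
	if (fscanf(f, "%lu", &zero_pages_sharing) == 1)
		printf("zero pages placed by KSM: %lu\n", zero_pages_sharing);
	fclose(f);
	return 0;
}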
From patchwork Fri Feb 10 01:19:49 2023
X-Patchwork-Submitter: Yang Yang
X-Patchwork-Id: 55180
Date: Fri, 10 Feb 2023 09:19:49 +0800 (CST)
Message-ID: <202302100919492571517@zte.com.cn>
Subject: [PATCH v6 4/6] ksm: count zero pages for each process
From: xu xin

Because the number of KSM zero pages is not included in a process's
ksm_merging_pages when use_zero_pages is enabled, it is unclear how
many pages KSM has actually merged for that process. To let users
accurately estimate their memory demands when unsharing KSM zero pages,
it is necessary to expose the number of KSM zero pages per process.

Since KSM-placed zero pages can now be unshared accurately, tracking
the merging and unmerging of empty pages is no longer difficult. As we
already have /proc/<pid>/ksm_stat, just add the zero_pages_sharing
information to it.
Cc: Claudio Imbrenda
Cc: David Hildenbrand
Cc: Xuexin Jiang
Cc: Xiaokai Ran
Cc: Yang Yang
Signed-off-by: xu xin
Reviewed-by: Claudio Imbrenda
---
 fs/proc/base.c           | 1 +
 include/linux/mm_types.h | 7 ++++++-
 mm/ksm.c                 | 2 ++
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index 9e479d7d202b..ac9ebe972be0 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -3207,6 +3207,7 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
 	mm = get_task_mm(task);
 	if (mm) {
 		seq_printf(m, "ksm_rmap_items %lu\n", mm->ksm_rmap_items);
+		seq_printf(m, "zero_pages_sharing %lu\n", mm->ksm_zero_pages_sharing);
 		mmput(mm);
 	}
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4e1031626403..5c734ebc1890 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -776,7 +776,7 @@ struct mm_struct {
 #ifdef CONFIG_KSM
 		/*
 		 * Represent how many pages of this process are involved in KSM
-		 * merging.
+		 * merging (not including ksm_zero_pages_sharing).
 		 */
 		unsigned long ksm_merging_pages;
 		/*
@@ -784,6 +784,11 @@ struct mm_struct {
 		 * including merged and not merged.
 		 */
 		unsigned long ksm_rmap_items;
+		/*
+		 * Represent how many empty pages are merged with kernel zero
+		 * pages when enabling KSM use_zero_pages.
+		 */
+		unsigned long ksm_zero_pages_sharing;
 #endif
 #ifdef CONFIG_LRU_GEN
 		struct {
diff --git a/mm/ksm.c b/mm/ksm.c
index 1fa668e1fe82..42dbcc3ec90d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -794,6 +794,7 @@ static inline void clean_rmap_item_zero_flag(struct ksm_rmap_item *rmap_item)
 {
 	if (rmap_item->address & ZERO_PAGE_FLAG) {
 		ksm_zero_pages_sharing--;
+		rmap_item->mm->ksm_zero_pages_sharing--;
 		rmap_item->address &= PAGE_MASK;
 	}
 }
@@ -2117,6 +2118,7 @@ static int try_to_merge_with_kernel_zero_page(struct ksm_rmap_item *rmap_item,
 		if (!err) {
 			rmap_item->address |= ZERO_PAGE_FLAG;
 			ksm_zero_pages_sharing++;
+			rmap_item->mm->ksm_zero_pages_sharing++;
 		}
 	} else {
 		/* If the vma is out of date, we do not need to continue. */
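A small sketch (not part of the patch) of reading the new per-process
field; it simply parses the zero_pages_sharing line that this patch
adds to /proc/<pid>/ksm_stat (shown here for the current process):

#include <stdio.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/proc/self/ksm_stat", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		unsigned long val;

		if (sscanf(line, "zero_pages_sharing %lu", &val) == 1)
			printf("zero pages placed by KSM in this mm: %lu\n", val);
	}
	fclose(f);
	return 0;
}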
From patchwork Fri Feb 10 01:20:42 2023
X-Patchwork-Submitter: Yang Yang
X-Patchwork-Id: 55182
Date: Fri, 10 Feb 2023 09:20:42 +0800 (CST)
Message-ID: <202302100920429071565@zte.com.cn>
Subject: [PATCH v6 5/6] ksm: add zero_pages_sharing documentation
From: xu xin

When use_zero_pages is enabled, pages_sharing by itself cannot tell how
much memory is really saved; pages_sharing plus zero_pages_sharing can.
Add a description of zero_pages_sharing to the KSM documentation.

Cc: Xiaokai Ran
Cc: Yang Yang
Cc: Jiang Xuexin
Cc: Claudio Imbrenda
Cc: David Hildenbrand
Signed-off-by: xu xin
---
 Documentation/admin-guide/mm/ksm.rst | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index fb6ba2002a4b..f160f9487a90 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -173,6 +173,13 @@ stable_node_chains
 	the number of KSM pages that hit the ``max_page_sharing`` limit
 stable_node_dups
 	number of duplicated KSM pages
+zero_pages_sharing
+	how many empty pages are sharing kernel zero page(s) instead of
+	with each other as it would happen normally. Only effective when
+	enabling ``use_zero_pages`` knob.
+
+When enabling ``use_zero_pages``, the sum of ``pages_sharing`` +
+``zero_pages_sharing`` represents how much really saved by KSM.
 
 A high ratio of ``pages_sharing`` to ``pages_shared`` indicates good sharing, but
 a high ratio of ``pages_unshared`` to ``pages_sharing``
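A sketch (not part of the patch) that applies the documented formula by
summing the two sysfs counters; the helper name is only an example:

#include <stdio.h>

static unsigned long read_ksm_counter(const char *name)
{
	char path[128];
	unsigned long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
	f = fopen(path, "r");
	if (!f)
		return 0;	/* counter not available on this kernel */
	if (fscanf(f, "%lu", &val) != 1)
		val = 0;
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long saved = read_ksm_counter("pages_sharing") +
			      read_ksm_counter("zero_pages_sharing");

	printf("pages saved by KSM: %lu\n", saved);
	return 0;
}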
From patchwork Fri Feb 10 01:21:57 2023
X-Patchwork-Submitter: Yang Yang
X-Patchwork-Id: 55179
Date: Fri, 10 Feb 2023 09:21:57 +0800 (CST)
Message-ID: <202302100921574141612@zte.com.cn>
Subject: [PATCH v6 6/6] selftest: add testing unsharing and counting ksm zero page
From: xu xin

Add a function, test_unmerge_zero_page(), to test the unsharing and
counting of KSM-placed zero pages introduced by this patch series.
test_unmerge_zero_page() covers three test objectives:

1) whether the count of ksm zero pages reacts correctly to COW
   (copy on write);
2) whether the count of ksm zero pages reacts correctly to unmerging;
3) whether ksm zero pages are really unmerged.

Signed-off-by: xu xin
Cc: Claudio Imbrenda
Cc: David Hildenbrand
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang

v5->v6: According to David's suggestions, the following changes are made:
1) Rename check_ksm_zero_pages_count() -> ksm_get_zero_pages(), and do
   the comparison outside.
2) Open all global fds from main() rather than from the test case.
3) Remove the COW-related test code and focus on explicit unmerging here.
4) Add some comments to explain why wait_two_full_scans() is required.
5) Clean up some unneeded changes.

v4->v5: fix error of "} while (end_scans < start_scans + 20);" to
"} while (end_scans < start_scans + 2);" in wait_two_full_scans().
---
 tools/testing/selftests/vm/ksm_functional_tests.c | 96 ++++++++++++++++++++++-
 1 file changed, 92 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/vm/ksm_functional_tests.c b/tools/testing/selftests/vm/ksm_functional_tests.c
index b11b7e5115dc..3033cd6ed3b4 100644
--- a/tools/testing/selftests/vm/ksm_functional_tests.c
+++ b/tools/testing/selftests/vm/ksm_functional_tests.c
@@ -24,9 +24,12 @@
 
 #define KiB 1024u
 #define MiB (1024 * KiB)
+#define PageSize (4 * KiB)
 
 static int ksm_fd;
 static int ksm_full_scans_fd;
+static int ksm_zero_pages_fd;
+static int ksm_use_zero_pages_fd;
 static int pagemap_fd;
 static size_t pagesize;
 
@@ -57,6 +60,21 @@ static bool range_maps_duplicates(char *addr, unsigned long size)
 	return false;
 }
 
+static long ksm_get_zero_pages(void)
+{
+	char buf[20];
+	ssize_t read_size;
+	unsigned long ksm_zero_pages;
+
+	read_size = pread(ksm_zero_pages_fd, buf, sizeof(buf) - 1, 0);
+	if (read_size < 0)
+		return -errno;
+	buf[read_size] = 0;
+	ksm_zero_pages = strtol(buf, NULL, 10);
+
+	return ksm_zero_pages;
+}
+
 static long ksm_get_full_scans(void)
 {
 	char buf[10];
@@ -70,15 +88,12 @@ static long ksm_get_full_scans(void)
 	return strtol(buf, NULL, 10);
 }
 
-static int ksm_merge(void)
+static int wait_two_full_scans(void)
 {
 	long start_scans, end_scans;
 
-	/* Wait for two full scans such that any possible merging happened. */
 	start_scans = ksm_get_full_scans();
 	if (start_scans < 0)
-		return start_scans;
-	if (write(ksm_fd, "1", 1) != 1)
 		return -errno;
 	do {
 		end_scans = ksm_get_full_scans();
@@ -89,6 +104,34 @@
 	return 0;
 }
 
+static inline int ksm_merge(void)
+{
+	/* Wait for two full scans such that any possible merging happened. */
+	if (write(ksm_fd, "1", 1) != 1)
+		return -errno;
+
+	return wait_two_full_scans();
+}
+
+static int unmerge_zero_page(char *start, unsigned long size)
+{
+	int ret;
+
+	ret = madvise(start, size, MADV_UNMERGEABLE);
+	if (ret) {
+		ksft_test_result_fail("MADV_UNMERGEABLE failed\n");
+		return ret;
+	}
+
+	/*
+	 * Wait for two full scans such that any possible unmerging of zero
+	 * pages happened. Why? Because the unmerge action of zero pages is not
+	 * done in the context of madvise(), but in the context of
+	 * unshare_zero_pages() of the ksmd thread.
+	 */
+	return wait_two_full_scans();
+}
+
 static char *mmap_and_merge_range(char val, unsigned long size)
 {
 	char *map;
@@ -146,6 +189,48 @@ static void test_unmerge(void)
 	munmap(map, size);
 }
 
+static void test_unmerge_zero_pages(void)
+{
+	const unsigned int size = 2 * MiB;
+	char *map;
+	unsigned long pages_expected;
+
+	ksft_print_msg("[RUN] %s\n", __func__);
+
+	/* Confirm the interfaces*/
+	if (ksm_zero_pages_fd < 0) {
+		ksft_test_result_skip("open(\"/sys/kernel/mm/ksm/zero_pages_sharing\") failed\n");
+		return;
+	}
+	if (ksm_use_zero_pages_fd < 0) {
+		ksft_test_result_skip("open \"/sys/kernel/mm/ksm/use_zero_pages\" failed\n");
+		return;
+	}
+	if (write(ksm_use_zero_pages_fd, "1", 1) != 1) {
+		ksft_test_result_skip("write \"/sys/kernel/mm/ksm/use_zero_pages\" failed\n");
+		return;
+	}
+
+	/* Mmap zero pages*/
+	map = mmap_and_merge_range(0x00, size);
+	if (map == MAP_FAILED)
+		return;
+
+	if (unmerge_zero_page(map + size / 2, size / 2))
+		goto unmap;
+
+	/* Check if zero_pages_sharing can be update correctly when unmerge */
+	pages_expected = (size / 2) / PageSize;
+	ksft_test_result(pages_expected == ksm_get_zero_pages(),
+			 "zero page count react to unmerge\n");
+
+	/* Check if ksm zero pages are really unmerged */
+	ksft_test_result(!range_maps_duplicates(map + size / 2, size / 2),
+			 "KSM zero pages were unmerged\n");
+unmap:
+	munmap(map, size);
+}
+
 static void test_unmerge_discarded(void)
 {
 	const unsigned int size = 2 * MiB;
@@ -264,8 +349,11 @@ int main(int argc, char **argv)
 	pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
 	if (pagemap_fd < 0)
 		ksft_exit_skip("open(\"/proc/self/pagemap\") failed\n");
+	ksm_zero_pages_fd = open("/sys/kernel/mm/ksm/zero_pages_sharing", O_RDONLY);
+	ksm_use_zero_pages_fd = open("/sys/kernel/mm/ksm/use_zero_pages", O_RDWR);
 
 	test_unmerge();
+	test_unmerge_zero_pages();
 	test_unmerge_discarded();
 #ifdef __NR_userfaultfd
 	test_unmerge_uffd_wp();