From patchwork Tue Jun 13 03:09:28 2023
X-Patchwork-Submitter: Yang Yang
X-Patchwork-Id: 107049
From: Yang Yang
To: akpm@linux-foundation.org, david@redhat.com
Cc: yang.yang29@zte.com.cn, imbrenda@linux.ibm.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, ran.xiaokai@zte.com.cn, xu.xin.sc@gmail.com, xu.xin16@zte.com.cn, Xuexin Jiang
Subject: [PATCH RESEND v10 1/5] ksm: support unsharing KSM-placed zero pages
Date: Tue, 13 Jun 2023 11:09:28 +0800
Message-Id: <20230613030928.185882-1-yang.yang29@zte.com.cn>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <202306131104554703428@zte.com.cn>
References: <202306131104554703428@zte.com.cn>

From: xu xin

When use_zero_pages of KSM is enabled, madvise(addr, len, MADV_UNMERGEABLE) and the other ways of triggering unsharing (such as writing 2 to /sys/kernel/mm/ksm/run) will *not* actually unshare the shared zeropages placed by KSM, which contradicts the MADV_UNMERGEABLE documentation. Because these KSM-placed zero pages are outside KSM's control, the related KSM page counters also cannot expose how many zero pages were placed by KSM. (These special zero pages differ from the initially mapped zero pages: a zero page mapped into an MADV_UNMERGEABLE area is expected to become a complete, unshared page.)

Rather than blindly unsharing all shared zero pages in the applicable VMAs, this patch uses pte_mkdirty (which is architecture-dependent) to mark KSM-placed zero pages, so that MADV_UNMERGEABLE unshares only those. In addition, later patches in this series reuse this mechanism to reliably identify KSM-placed zeropages and account for them properly (e.g., when calculating a KSM profit that includes zeropages).
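For illustration only (not part of this patch): a minimal user-space sketch of the unsharing path described above. It assumes CONFIG_KSM, ksmd running (/sys/kernel/mm/ksm/run set to 1) with use_zero_pages enabled, and a pages_to_scan setting large enough that the fixed sleep (a crude stand-in for waiting on a full scan) gives ksmd time to merge the region.

	#include <err.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 2 * 1024 * 1024;
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			err(1, "mmap");

		/* Write-fault the pages so there are real, zero-filled
		 * anonymous pages for ksmd to deduplicate. */
		memset(buf, 0, len);

		/* With use_zero_pages enabled, ksmd maps these pages to the
		 * shared zeropage; with this patch the resulting PTEs are
		 * additionally marked dirty. */
		if (madvise(buf, len, MADV_MERGEABLE))
			err(1, "madvise(MADV_MERGEABLE)");
		sleep(5); /* crude: give ksmd time to scan the region */

		/* Before this patch, this left KSM-placed zeropages mapped;
		 * now break_ksm_pmd_entry() recognizes them via the dirty
		 * bit and unshares them, as the documentation promises. */
		if (madvise(buf, len, MADV_UNMERGEABLE))
			err(1, "madvise(MADV_UNMERGEABLE)");
		return 0;
	}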
This patch does not degrade the performance of use_zero_pages, as it does not change how that feature merges empty pages.

Signed-off-by: xu xin
Acked-by: David Hildenbrand
Cc: Claudio Imbrenda
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang
---
 include/linux/ksm.h |  6 ++++++
 mm/ksm.c            | 11 ++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 899a314bc487..98878107244f 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -26,6 +26,12 @@ int ksm_disable(struct mm_struct *mm);
 int __ksm_enter(struct mm_struct *mm);
 void __ksm_exit(struct mm_struct *mm);
 
+/*
+ * To identify zeropages that were mapped by KSM, we reuse the dirty bit
+ * in the PTE. If the PTE is dirty, the zeropage was mapped by KSM when
+ * deduplicating memory.
+ */
+#define is_ksm_zero_pte(pte)	(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
 
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
diff --git a/mm/ksm.c b/mm/ksm.c
index 0156bded3a66..f31c789406b1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -447,7 +447,8 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 		if (is_migration_entry(entry))
 			page = pfn_swap_entry_to_page(entry);
 	}
-	ret = page && PageKsm(page);
+	/* return 1 if the page is a normal KSM page or a KSM-placed zero page */
+	ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);
 	pte_unmap_unlock(pte, ptl);
 	return ret;
 }
@@ -1220,8 +1221,12 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 		page_add_anon_rmap(kpage, vma, addr, RMAP_NONE);
 		newpte = mk_pte(kpage, vma->vm_page_prot);
 	} else {
-		newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
-					       vma->vm_page_prot));
+		/*
+		 * Use pte_mkdirty to mark the zero page mapped by KSM, and then
+		 * we can easily track all KSM-placed zero pages by checking if
+		 * the dirty bit in the zero page's PTE is set.
+		 */
+		newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot)));
 		/*
 		 * We're replacing an anonymous page with a zero page, which is
 		 * not anonymous. We need to do proper accounting otherwise we
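For completeness, the other unsharing trigger named in the commit message can be exercised the same way. A hypothetical root-only helper (the sysfs path comes from the commit message; the rest is illustrative):

	#include <err.h>
	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		/* Writing "2" to /sys/kernel/mm/ksm/run asks KSM to unmerge
		 * all merged pages system-wide; with this patch that now
		 * includes the dirty-bit-marked, KSM-placed zeropages. */
		int fd = open("/sys/kernel/mm/ksm/run", O_WRONLY);
		if (fd < 0)
			err(1, "open /sys/kernel/mm/ksm/run");
		if (write(fd, "2", 1) != 1)
			err(1, "write");
		close(fd);
		return 0;
	}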