From patchwork Thu Jul 27 14:18:35 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 126990
From: Ryan Roberts
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
    Yu Zhao, Yang Shi, "Huang, Ying", Zi Yan, Nathan Chancellor,
    Alexander Gordeev, Gerald Schaefer
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 1/3] mm: Allow deferred splitting of arbitrary large anon folios
Date: Thu, 27 Jul 2023 15:18:35 +0100
Message-Id: <20230727141837.3386072-2-ryan.roberts@arm.com>
In-Reply-To: <20230727141837.3386072-1-ryan.roberts@arm.com>
References: <20230727141837.3386072-1-ryan.roberts@arm.com>

In preparation for the introduction of large folios for anonymous
memory, we would like to be able to split them when they have unmapped
subpages, in order to free those unused pages under memory pressure. So
remove the artificial requirement that the large folio be at least
PMD-sized.

Reviewed-by: Yu Zhao
Reviewed-by: Yin Fengwei
Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: David Hildenbrand
Signed-off-by: Ryan Roberts
---
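For context, a sketch of why the new check is strictly more permissive
(illustrative only, not part of the patch; simplified from the kernel's
folio helpers): folio_test_pmd_mappable() requires folio_order() >=
HPAGE_PMD_ORDER, whereas folio_test_large() is true for any folio of
order > 0, so e.g. an order-2 anon folio now also qualifies for
deferred splitting. The helper names below are hypothetical:

	/* Illustrative sketch only; not code from this patch. */
	static inline bool defer_split_old(struct folio *folio)
	{
		/* Before: only PMD-sized (e.g. 2M on x86-64) anon THP. */
		return folio_test_pmd_mappable(folio) && folio_test_anon(folio);
	}

	static inline bool defer_split_new(struct folio *folio)
	{
		/* After: any large (order > 0) anon folio. */
		return folio_test_large(folio) && folio_test_anon(folio);
	}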
 mm/rmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 0c0d8857dfce..eb0bb00dae34 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1426,11 +1426,11 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 		__lruvec_stat_mod_folio(folio, idx, -nr);

 		/*
-		 * Queue anon THP for deferred split if at least one
+		 * Queue anon large folio for deferred split if at least one
 		 * page of the folio is unmapped and at least one page
 		 * is still mapped.
 		 */
-		if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
+		if (folio_test_large(folio) && folio_test_anon(folio))
 			if (!compound || nr < nr_pmdmapped)
 				deferred_split_folio(folio);
 	}

From patchwork Thu Jul 27 14:18:36 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 126991
From: Ryan Roberts
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
    Yu Zhao, Yang Shi, "Huang, Ying", Zi Yan, Nathan Chancellor,
    Alexander Gordeev, Gerald Schaefer
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 2/3] mm: Implement folio_remove_rmap_range()
Date: Thu, 27 Jul 2023 15:18:36 +0100
Message-Id: <20230727141837.3386072-3-ryan.roberts@arm.com>
In-Reply-To: <20230727141837.3386072-1-ryan.roberts@arm.com>
References: <20230727141837.3386072-1-ryan.roberts@arm.com>

Like page_remove_rmap() but batch-removes the rmap for a range of pages
belonging to a folio. This can provide a small speedup due to less
manipulation of the various counters. But more crucially, when removing
the rmap for all pages of a folio in a batch, there is no need to
(spuriously) add it to the deferred split list, which saves significant
cost when there is contention for the split queue lock.

All contained pages are accounted using the order-0 folio (or base
page) scheme.

page_remove_rmap() is refactored so that it forwards to
folio_remove_rmap_range() for !compound cases, and both functions now
share a common epilogue function. The intention here is to avoid
duplication of code.

Signed-off-by: Ryan Roberts
---
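As a rough illustration of the intended calling convention (a sketch,
not code from this patch; assume "nr" pages PTE-mapped from the start
of "folio", with the PTE lock held), a caller currently has to do:

	/* Before: one rmap update, and one deferred-split check, per page. */
	for (i = 0; i < nr; i++)
		page_remove_rmap(&folio->page + i, vma, false);

and can now batch the whole range in a single call:

	/* After: counters adjusted once for the whole range; no spurious
	 * deferred split when the folio goes fully unmapped in one batch. */
	folio_remove_rmap_range(folio, &folio->page, nr, vma);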
 include/linux/rmap.h |   2 +
 mm/rmap.c            | 125 ++++++++++++++++++++++++++++++++-----------
 2 files changed, 97 insertions(+), 30 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b87d01660412..f578975c12c0 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -200,6 +200,8 @@ void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+		int nr, struct vm_area_struct *vma);
 void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
diff --git a/mm/rmap.c b/mm/rmap.c
index eb0bb00dae34..c3ef56f7ec15 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1359,6 +1359,94 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	mlock_vma_folio(folio, vma, compound);
 }

+/**
+ * __remove_rmap_finish - common operations when taking down a mapping.
+ * @folio: Folio containing all pages taken down.
+ * @vma: The VM area containing the range.
+ * @compound: True if pages were taken down from PMD or false if from PTE(s).
+ * @nr_unmapped: Number of pages within folio that are now unmapped.
+ * @nr_mapped: Number of pages within folio that are still mapped.
+ */
+static void __remove_rmap_finish(struct folio *folio,
+				struct vm_area_struct *vma, bool compound,
+				int nr_unmapped, int nr_mapped)
+{
+	enum node_stat_item idx;
+
+	if (nr_unmapped) {
+		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
+		__lruvec_stat_mod_folio(folio, idx, -nr_unmapped);
+
+		/*
+		 * Queue large anon folio for deferred split if at least one
+		 * page of the folio is unmapped and at least one page is still
+		 * mapped.
+		 */
+		if (folio_test_large(folio) &&
+		    folio_test_anon(folio) && nr_mapped)
+			deferred_split_folio(folio);
+	}
+
+	/*
+	 * It would be tidy to reset folio_test_anon mapping when fully
+	 * unmapped, but that might overwrite a racing page_add_anon_rmap
+	 * which increments mapcount after us but sets mapping before us:
+	 * so leave the reset to free_pages_prepare, and remember that
+	 * it's only reliable while mapped.
+	 */
+
+	munlock_vma_folio(folio, vma, compound);
+}
+
+/**
+ * folio_remove_rmap_range - Take down PTE mappings from a range of pages.
+ * @folio: Folio containing all pages in range.
+ * @page: First page in range to unmap.
+ * @nr: Number of pages to unmap.
+ * @vma: The VM area containing the range.
+ *
+ * All pages in the range must belong to the same VMA & folio. They must be
+ * mapped with PTEs, not a PMD.
+ *
+ * Context: Caller holds the pte lock.
+ */
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+					int nr, struct vm_area_struct *vma)
+{
+	atomic_t *mapped = &folio->_nr_pages_mapped;
+	int nr_unmapped = 0;
+	int nr_mapped = 0;
+	bool last;
+
+	if (unlikely(folio_test_hugetlb(folio))) {
+		VM_WARN_ON_FOLIO(1, folio);
+		return;
+	}
+
+	VM_WARN_ON_ONCE(page < &folio->page ||
+			page + nr > (&folio->page + folio_nr_pages(folio)));
+
+	if (!folio_test_large(folio)) {
+		/* Is this the page's last map to be removed? */
+		last = atomic_add_negative(-1, &page->_mapcount);
+		nr_unmapped = last;
+	} else {
+		for (; nr != 0; nr--, page++) {
+			/* Is this the page's last map to be removed? */
+			last = atomic_add_negative(-1, &page->_mapcount);
+			if (last)
+				nr_unmapped++;
+		}
+
+		/* Pages still mapped if folio mapped entirely */
+		nr_mapped = atomic_sub_return_relaxed(nr_unmapped, mapped);
+		if (nr_mapped >= COMPOUND_MAPPED)
+			nr_unmapped = 0;
+	}
+
+	__remove_rmap_finish(folio, vma, false, nr_unmapped, nr_mapped);
+}
+
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page: page to remove mapping from
@@ -1385,15 +1473,13 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 		return;
 	}

-	/* Is page being unmapped by PTE? Is this its last map to be removed? */
+	/* Is page being unmapped by PTE? */
 	if (likely(!compound)) {
-		last = atomic_add_negative(-1, &page->_mapcount);
-		nr = last;
-		if (last && folio_test_large(folio)) {
-			nr = atomic_dec_return_relaxed(mapped);
-			nr = (nr < COMPOUND_MAPPED);
-		}
-	} else if (folio_test_pmd_mappable(folio)) {
+		folio_remove_rmap_range(folio, page, 1, vma);
+		return;
+	}
+
+	if (folio_test_pmd_mappable(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */

 		last = atomic_add_negative(-1, &folio->_entire_mapcount);
@@ -1421,29 +1507,8 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 			idx = NR_FILE_PMDMAPPED;
 		__lruvec_stat_mod_folio(folio, idx, -nr_pmdmapped);
 	}
-	if (nr) {
-		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
-		__lruvec_stat_mod_folio(folio, idx, -nr);
-
-		/*
-		 * Queue anon large folio for deferred split if at least one
-		 * page of the folio is unmapped and at least one page
-		 * is still mapped.
-		 */
-		if (folio_test_large(folio) && folio_test_anon(folio))
-			if (!compound || nr < nr_pmdmapped)
-				deferred_split_folio(folio);
-	}
-
-	/*
-	 * It would be tidy to reset folio_test_anon mapping when fully
-	 * unmapped, but that might overwrite a racing page_add_anon_rmap
-	 * which increments mapcount after us but sets mapping before us:
-	 * so leave the reset to free_pages_prepare, and remember that
-	 * it's only reliable while mapped.
-	 */
-	munlock_vma_folio(folio, vma, compound);
+	__remove_rmap_finish(folio, vma, compound, nr, nr_pmdmapped - nr);
 }

 /*
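To summarise the refactoring (an illustrative call graph, not code from
the patch): all PTE-mapped (!compound) cases now funnel through the new
batch function with nr == 1, and both paths end in the shared epilogue:

	page_remove_rmap(page, vma, /* compound = */ false)
	  -> folio_remove_rmap_range(folio, page, 1, vma)
	       -> __remove_rmap_finish(folio, vma, false, ...)

	page_remove_rmap(page, vma, /* compound = */ true)
	  -> inline _entire_mapcount handling for the PMD-mapped case
	  -> __remove_rmap_finish(folio, vma, true, nr, nr_pmdmapped - nr)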
From patchwork Thu Jul 27 14:18:37 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 127000
From: Ryan Roberts
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
    Yu Zhao, Yang Shi, "Huang, Ying", Zi Yan, Nathan Chancellor,
    Alexander Gordeev, Gerald Schaefer
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 3/3] mm: Batch-zap large anonymous folio PTE mappings
Date: Thu, 27 Jul 2023 15:18:37 +0100
Message-Id: <20230727141837.3386072-4-ryan.roberts@arm.com>
In-Reply-To: <20230727141837.3386072-1-ryan.roberts@arm.com>
References: <20230727141837.3386072-1-ryan.roberts@arm.com>

This allows batching the rmap removal with folio_remove_rmap_range(),
which means we avoid spuriously adding a partially unmapped folio to
the deferred split queue in the common case, reducing split queue lock
contention.

Previously each page was removed from the rmap individually with
page_remove_rmap(). If the first page belonged to a large folio, this
would cause page_remove_rmap() to conclude that the folio was now
partially mapped and add the folio to the deferred split queue. But
subsequent calls would cause the folio to become fully unmapped,
meaning there was no value in adding it to the split queue.

A complicating factor is that for platforms where MMU_GATHER_NO_GATHER
is enabled (e.g. s390), __tlb_remove_page() drops a reference to the
page. This means that the folio reference count could drop to zero
while still in use (i.e. before folio_remove_rmap_range() is called).
This does not happen on other platforms because the actual page freeing
is deferred.

Solve this by appropriately getting/putting the folio to guarantee it
does not get freed early. Given the need to get/put the folio in the
batch path, we stick to the non-batched path if the folio is not large.
While the batched path is functionally correct for a folio with 1 page,
it is unlikely to be as efficient as the existing non-batched path in
this case.

Signed-off-by: Ryan Roberts
---
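A minimal sketch of the lifetime hazard described above and its fix
(illustrative only; the real implementation is try_zap_anon_pte_range()
in the diff below):

	int i;

	folio_get(folio);	/* pin the folio across the batch */

	for (i = 0; i < nr_pages; i++) {
		/* ...clear PTE, queue TLB flush for this page... */
		__tlb_remove_page(tlb, page + i, 0);	/* with
			MMU_GATHER_NO_GATHER this may drop a page ref */
	}

	/* Without the pin, the last ref could already be gone by here. */
	folio_remove_rmap_range(folio, page, i, vma);

	folio_put(folio);	/* any final freeing now happens safely */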
 mm/memory.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 132 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 01f39e8144ef..d35bd8d2b855 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1391,6 +1391,99 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
 	pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
 }

+static inline unsigned long page_cont_mapped_vaddr(struct page *page,
+				struct page *anchor, unsigned long anchor_vaddr)
+{
+	unsigned long offset;
+	unsigned long vaddr;
+
+	offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
+	vaddr = anchor_vaddr + offset;
+
+	if (anchor > page) {
+		if (vaddr > anchor_vaddr)
+			return 0;
+	} else {
+		if (vaddr < anchor_vaddr)
+			return ULONG_MAX;
+	}
+
+	return vaddr;
+}
+
+static int folio_nr_pages_cont_mapped(struct folio *folio,
+				      struct page *page, pte_t *pte,
+				      unsigned long addr, unsigned long end)
+{
+	pte_t ptent;
+	int floops;
+	int i;
+	unsigned long pfn;
+	struct page *folio_end;
+
+	if (!folio_test_large(folio))
+		return 1;
+
+	folio_end = &folio->page + folio_nr_pages(folio);
+	end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
+	floops = (end - addr) >> PAGE_SHIFT;
+	pfn = page_to_pfn(page);
+	pfn++;
+	pte++;
+
+	for (i = 1; i < floops; i++) {
+		ptent = ptep_get(pte);
+
+		if (!pte_present(ptent) || pte_pfn(ptent) != pfn)
+			break;
+
+		pfn++;
+		pte++;
+	}
+
+	return i;
+}
+
+static unsigned long try_zap_anon_pte_range(struct mmu_gather *tlb,
+					    struct vm_area_struct *vma,
+					    struct folio *folio,
+					    struct page *page, pte_t *pte,
+					    unsigned long addr, int nr_pages,
+					    struct zap_details *details)
+{
+	struct mm_struct *mm = tlb->mm;
+	pte_t ptent;
+	bool full;
+	int i;
+
+	/* __tlb_remove_page may drop a ref; prevent going to 0 while in use. */
+	folio_get(folio);
+
+	for (i = 0; i < nr_pages;) {
+		ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
+		tlb_remove_tlb_entry(tlb, pte, addr);
+		zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
+		full = __tlb_remove_page(tlb, page, 0);
+
+		if (unlikely(page_mapcount(page) < 1))
+			print_bad_pte(vma, addr, ptent, page);
+
+		i++;
+		page++;
+		pte++;
+		addr += PAGE_SIZE;
+
+		if (unlikely(full))
+			break;
+	}
+
+	folio_remove_rmap_range(folio, page - i, i, vma);
+
+	folio_put(folio);
+
+	return i;
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
@@ -1428,6 +1521,45 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(!should_zap_page(details, page)))
 				continue;
+
+			/*
+			 * Batch zap large anonymous folio mappings. This allows
+			 * batching the rmap removal, which means we avoid
+			 * spuriously adding a partially unmapped folio to the
+			 * deferred split queue in the common case, which
+			 * reduces split queue lock contention.
+			 */
+			if (page && PageAnon(page)) {
+				struct folio *folio = page_folio(page);
+
+				if (folio_test_large(folio)) {
+					int nr_pages_req, nr_pages;
+					int counter = mm_counter(page);
+
+					nr_pages_req = folio_nr_pages_cont_mapped(
+							folio, page, pte, addr,
+							end);
+
+					/* folio may be freed on return. */
+					nr_pages = try_zap_anon_pte_range(
+							tlb, vma, folio, page,
+							pte, addr, nr_pages_req,
+							details);
+
+					rss[counter] -= nr_pages;
+					nr_pages--;
+					pte += nr_pages;
+					addr += nr_pages << PAGE_SHIFT;
+
+					if (unlikely(nr_pages < nr_pages_req)) {
+						force_flush = 1;
+						addr += PAGE_SIZE;
+						break;
+					}
+					continue;
+				}
+			}
+
 			ptent = ptep_get_and_clear_full(mm, addr, pte,
 							tlb->fullmm);
 			tlb_remove_tlb_entry(tlb, pte, addr);