From patchwork Wed Aug 30 09:50:10 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Roberts <ryan.roberts@arm.com>
X-Patchwork-Id: 137212
From: Ryan Roberts <ryan.roberts@arm.com>
To: Will Deacon, "Aneesh Kumar K.V", Andrew Morton, Nick Piggin,
    Peter Zijlstra, Christian Borntraeger, Sven Schnelle, Arnd Bergmann,
    "Matthew Wilcox (Oracle)", David Hildenbrand, Yu Zhao,
    "Kirill A. Shutemov", Yin Fengwei, Yang Shi, "Huang, Ying", Zi Yan
Cc: Ryan Roberts <ryan.roberts@arm.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/5] mm: Refactor release_pages()
Date: Wed, 30 Aug 2023 10:50:10 +0100
Message-Id: <20230830095011.1228673-5-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230830095011.1228673-1-ryan.roberts@arm.com>
References: <20230830095011.1228673-1-ryan.roberts@arm.com>

In preparation for implementing folios_put_refs() in the next patch,
refactor release_pages() into a set of reusable helper functions. The only
difference between release_pages() and folios_put_refs() is how they
iterate over the set of folios; the per-folio actions are identical.

No functional change intended.
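
For illustration only, a minimal sketch of how a caller such as the next
patch's folios_put_refs() could be layered on top of these helpers. The
signature and the per-folio refs array below are assumptions made for this
sketch, not the interface the next patch actually adds:

	/*
	 * Hypothetical caller: drop refs[i] references from each folio,
	 * reusing the __folios_put_refs_*() helpers introduced below.
	 */
	void folios_put_refs(struct folio **folios, int *refs, int nr)
	{
		struct folios_put_refs_ctx ctx;
		int i;

		__folios_put_refs_init(&ctx);

		for (i = 0; i < nr; i++)
			__folios_put_refs_do_one(&ctx, folios[i],
						 refs ? refs[i] : 1);

		__folios_put_refs_complete(&ctx);
	}

The only per-caller variation is the iteration; all per-folio work stays in
__folios_put_refs_do_one(), which is the point of the refactor.
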
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/swap.c | 167 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 97 insertions(+), 70 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index b05cce475202..5d3e35668929 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -945,6 +945,98 @@ void lru_cache_disable(void)
 #endif
 }
 
+struct folios_put_refs_ctx {
+	struct list_head pages_to_free;
+	struct lruvec *lruvec;
+	unsigned long flags;
+	unsigned int lock_batch;
+};
+
+static void __folios_put_refs_init(struct folios_put_refs_ctx *ctx)
+{
+	*ctx = (struct folios_put_refs_ctx) {
+		.pages_to_free = LIST_HEAD_INIT(ctx->pages_to_free),
+		.lruvec = NULL,
+		.flags = 0,
+	};
+}
+
+static void __folios_put_refs_complete(struct folios_put_refs_ctx *ctx)
+{
+	if (ctx->lruvec)
+		unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+
+	mem_cgroup_uncharge_list(&ctx->pages_to_free);
+	free_unref_page_list(&ctx->pages_to_free);
+}
+
+static void __folios_put_refs_do_one(struct folios_put_refs_ctx *ctx,
+				     struct folio *folio, int refs)
+{
+	/*
+	 * Make sure the IRQ-safe lock-holding time does not get
+	 * excessive with a continuous string of pages from the
+	 * same lruvec. The lock is held only if lruvec != NULL.
+	 */
+	if (ctx->lruvec && ++ctx->lock_batch == SWAP_CLUSTER_MAX) {
+		unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+		ctx->lruvec = NULL;
+	}
+
+	if (is_huge_zero_page(&folio->page))
+		return;
+
+	if (folio_is_zone_device(folio)) {
+		if (ctx->lruvec) {
+			unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+			ctx->lruvec = NULL;
+		}
+		if (put_devmap_managed_page_refs(&folio->page, refs))
+			return;
+		if (folio_ref_sub_and_test(folio, refs))
+			free_zone_device_page(&folio->page);
+		return;
+	}
+
+	if (!folio_ref_sub_and_test(folio, refs))
+		return;
+
+	if (folio_test_large(folio)) {
+		if (ctx->lruvec) {
+			unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+			ctx->lruvec = NULL;
+		}
+		__folio_put_large(folio);
+		return;
+	}
+
+	if (folio_test_lru(folio)) {
+		struct lruvec *prev_lruvec = ctx->lruvec;
+
+		ctx->lruvec = folio_lruvec_relock_irqsave(folio, ctx->lruvec,
+							  &ctx->flags);
+		if (prev_lruvec != ctx->lruvec)
+			ctx->lock_batch = 0;
+
+		lruvec_del_folio(ctx->lruvec, folio);
+		__folio_clear_lru_flags(folio);
+	}
+
+	/*
+	 * In rare cases, when truncation or holepunching raced with
+	 * munlock after VM_LOCKED was cleared, Mlocked may still be
+	 * found set here. This does not indicate a problem, unless
+	 * "unevictable_pgs_cleared" appears worryingly large.
+	 */
+	if (unlikely(folio_test_mlocked(folio))) {
+		__folio_clear_mlocked(folio);
+		zone_stat_sub_folio(folio, NR_MLOCK);
+		count_vm_event(UNEVICTABLE_PGCLEARED);
+	}
+
+	list_add(&folio->lru, &ctx->pages_to_free);
+}
+
 /**
  * release_pages - batched put_page()
  * @arg: array of pages to release
@@ -959,10 +1051,9 @@ void release_pages(release_pages_arg arg, int nr)
 {
 	int i;
 	struct page **pages = arg.pages;
-	LIST_HEAD(pages_to_free);
-	struct lruvec *lruvec = NULL;
-	unsigned long flags = 0;
-	unsigned int lock_batch;
+	struct folios_put_refs_ctx ctx;
+
+	__folios_put_refs_init(&ctx);
 
 	for (i = 0; i < nr; i++) {
 		struct folio *folio;
@@ -970,74 +1061,10 @@ void release_pages(release_pages_arg arg, int nr)
 		/* Turn any of the argument types into a folio */
 		folio = page_folio(pages[i]);
 
-		/*
-		 * Make sure the IRQ-safe lock-holding time does not get
-		 * excessive with a continuous string of pages from the
-		 * same lruvec. The lock is held only if lruvec != NULL.
-		 */
-		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
-			unlock_page_lruvec_irqrestore(lruvec, flags);
-			lruvec = NULL;
-		}
-
-		if (is_huge_zero_page(&folio->page))
-			continue;
-
-		if (folio_is_zone_device(folio)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
-			if (put_devmap_managed_page(&folio->page))
-				continue;
-			if (folio_put_testzero(folio))
-				free_zone_device_page(&folio->page);
-			continue;
-		}
-
-		if (!folio_put_testzero(folio))
-			continue;
-
-		if (folio_test_large(folio)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
-			__folio_put_large(folio);
-			continue;
-		}
-
-		if (folio_test_lru(folio)) {
-			struct lruvec *prev_lruvec = lruvec;
-
-			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
-							     &flags);
-			if (prev_lruvec != lruvec)
-				lock_batch = 0;
-
-			lruvec_del_folio(lruvec, folio);
-			__folio_clear_lru_flags(folio);
-		}
-
-		/*
-		 * In rare cases, when truncation or holepunching raced with
-		 * munlock after VM_LOCKED was cleared, Mlocked may still be
-		 * found set here. This does not indicate a problem, unless
-		 * "unevictable_pgs_cleared" appears worryingly large.
-		 */
-		if (unlikely(folio_test_mlocked(folio))) {
-			__folio_clear_mlocked(folio);
-			zone_stat_sub_folio(folio, NR_MLOCK);
-			count_vm_event(UNEVICTABLE_PGCLEARED);
-		}
-
-		list_add(&folio->lru, &pages_to_free);
+		__folios_put_refs_do_one(&ctx, folio, 1);
 	}
 
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
-	mem_cgroup_uncharge_list(&pages_to_free);
-	free_unref_page_list(&pages_to_free);
+	__folios_put_refs_complete(&ctx);
 }
 EXPORT_SYMBOL(release_pages);