From patchwork Wed Jun 21 16:45:55 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 111201
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Cc: "Matthew Wilcox (Oracle)", dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 11/13] mm: Rename invalidate_mapping_pagevec to mapping_try_invalidate
Date: Wed, 21 Jun 2023 17:45:55 +0100
Message-Id: <20230621164557.3510324-12-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>
References: <20230621164557.3510324-1-willy@infradead.org>

We don't use pagevecs for the LRU cache any more, and we don't know
that the failed invalidations were due to the folio being in an LRU
cache.  So rename it to be more accurate.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/fadvise.c  | 16 +++++++---------
 mm/internal.h |  4 ++--
 mm/truncate.c | 25 ++++++++++++-------------
 3 files changed, 21 insertions(+), 24 deletions(-)

diff --git a/mm/fadvise.c b/mm/fadvise.c
index fb7c5f43fd2a..f684ffd7f9c9 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -143,7 +143,7 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 		}
 
 		if (end_index >= start_index) {
-			unsigned long nr_pagevec = 0;
+			unsigned long nr_failed = 0;
 
 			/*
 			 * It's common to FADV_DONTNEED right after
@@ -156,17 +156,15 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 			 */
 			lru_add_drain();
 
-			invalidate_mapping_pagevec(mapping,
-					start_index, end_index,
-					&nr_pagevec);
+			mapping_try_invalidate(mapping, start_index, end_index,
+					&nr_failed);
 
 			/*
-			 * If fewer pages were invalidated than expected then
-			 * it is possible that some of the pages were on
-			 * a per-cpu pagevec for a remote CPU. Drain all
-			 * pagevecs and try again.
+			 * The failures may be due to the folio being
+			 * in the LRU cache of a remote CPU. Drain all
+			 * caches and try again.
 			 */
-			if (nr_pagevec) {
+			if (nr_failed) {
 				lru_add_drain_all();
 				invalidate_mapping_pages(mapping, start_index,
 						end_index);
diff --git a/mm/internal.h b/mm/internal.h
index 119a8241f9d9..2ff7587b4045 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -133,8 +133,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
 		loff_t end);
 long invalidate_inode_page(struct page *page);
-unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
-		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec);
+unsigned long mapping_try_invalidate(struct address_space *mapping,
+		pgoff_t start, pgoff_t end, unsigned long *nr_failed);
 
 /**
  * folio_evictable - Test whether a folio is evictable.
diff --git a/mm/truncate.c b/mm/truncate.c
index 86de31ed4d32..4a917570887f 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -486,18 +486,17 @@ void truncate_inode_pages_final(struct address_space *mapping)
 EXPORT_SYMBOL(truncate_inode_pages_final);
 
 /**
- * invalidate_mapping_pagevec - Invalidate all the unlocked pages of one inode
- * @mapping: the address_space which holds the pages to invalidate
+ * mapping_try_invalidate - Invalidate all the evictable folios of one inode
+ * @mapping: the address_space which holds the folios to invalidate
  * @start: the offset 'from' which to invalidate
  * @end: the offset 'to' which to invalidate (inclusive)
- * @nr_pagevec: invalidate failed page number for caller
+ * @nr_failed: How many folio invalidations failed
  *
- * This helper is similar to invalidate_mapping_pages(), except that it accounts
- * for pages that are likely on a pagevec and counts them in @nr_pagevec, which
- * will be used by the caller.
+ * This function is similar to invalidate_mapping_pages(), except that it
+ * returns the number of folios which could not be evicted in @nr_failed.
  */
-unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
-		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec)
+unsigned long mapping_try_invalidate(struct address_space *mapping,
+		pgoff_t start, pgoff_t end, unsigned long *nr_failed)
 {
 	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio_batch fbatch;
@@ -527,9 +526,9 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 		 */
 		if (!ret) {
 			deactivate_file_folio(folio);
-			/* It is likely on the pagevec of a remote CPU */
-			if (nr_pagevec)
-				(*nr_pagevec)++;
+			/* Likely in the lru cache of a remote CPU */
+			if (nr_failed)
+				(*nr_failed)++;
 		}
 		count += ret;
 	}
@@ -552,12 +551,12 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
  * If you want to remove all the pages of one inode, regardless of
  * their use and writeback state, use truncate_inode_pages().
  *
- * Return: the number of the cache entries that were invalidated
+ * Return: The number of indices that had their contents invalidated
  */
 unsigned long invalidate_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t end)
 {
-	return invalidate_mapping_pagevec(mapping, start, end, NULL);
+	return mapping_try_invalidate(mapping, start, end, NULL);
 }
 EXPORT_SYMBOL(invalidate_mapping_pages);
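
Not part of the patch, but for illustration: a minimal sketch of the caller
pattern this rename leaves behind, modeled on the mm/fadvise.c hunk above.
The wrapper name drop_mapping_range() is hypothetical, and
mapping_try_invalidate() stays an mm-internal helper declared in
mm/internal.h, so this only builds inside mm/; it is not exported API.

/* Kernel-context sketch, assuming a translation unit under mm/. */
#include <linux/pagemap.h>
#include <linux/swap.h>
#include "internal.h"

static void drop_mapping_range(struct address_space *mapping,
			       pgoff_t start, pgoff_t end)
{
	unsigned long nr_failed = 0;

	/* Drain this CPU's LRU cache so recently-added folios can be dropped. */
	lru_add_drain();
	mapping_try_invalidate(mapping, start, end, &nr_failed);

	/*
	 * Any failures may be folios sitting in the LRU cache of a remote
	 * CPU; drain every CPU and try once more.
	 */
	if (nr_failed) {
		lru_add_drain_all();
		invalidate_mapping_pages(mapping, start, end);
	}
}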