Message ID: 20231107181805.4188397-1-shr@devkernel.io
State: New
Headers:
From: Stefan Roesch <shr@devkernel.io>
To: kernel-team@fb.com
Cc: shr@devkernel.io, akpm@linux-foundation.org, hannes@cmpxchg.org, riel@surriel.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, stable@vger.kernel.org
Subject: [PATCH v2] mm: Fix for negative counter: nr_file_hugepages
Date: Tue, 7 Nov 2023 10:18:05 -0800
Message-Id: <20231107181805.4188397-1-shr@devkernel.io>
X-Mailer: git-send-email 2.39.3
Series: [v2] mm: Fix for negative counter: nr_file_hugepages
Commit Message
Stefan Roesch
Nov. 7, 2023, 6:18 p.m. UTC
While qualifying the 6.4 release, the following warning was detected in
messages:

vmstat_refresh: nr_file_hugepages -15664

The warning is caused by incorrect updating of the NR_FILE_THPS counter
in the function split_huge_page_to_list(). The if branch checks
folio_test_swapbacked(), but the else branch is missing the check for
folio_test_pmd_mappable(). The other functions that manipulate the
counter, __filemap_add_folio() and filemap_unaccount_folio(), have the
corresponding check.

I have a test case which reproduces the problem. It can be found here:
https://github.com/sroeschus/testcase/blob/main/vmstat_refresh/madv.c

The test case reproduces on an XFS filesystem. Running the same test
case on a BTRFS filesystem does not reproduce the problem.

AFAIK versions 6.1 through 6.6 are affected by this problem.

Signed-off-by: Stefan Roesch <shr@devkernel.io>
Co-debugged-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

base-commit: ffc253263a1375a65fa6c9f62a893e9767fbebfa
Comments
On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> +++ b/mm/huge_memory.c
> @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> 	if (folio_test_swapbacked(folio)) {
> 		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> 					-nr);
> -	} else {
> +	} else if (folio_test_pmd_mappable(folio)) {
> 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> 					-nr);
> 		filemap_nr_thps_dec(mapping);

As I said, we also need the folio_test_pmd_mappable() for swapbacked.
Not because there's currently a problem, but because we don't leave
landmines for other people to trip over in future!
On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > +++ b/mm/huge_memory.c
> > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > 	if (folio_test_swapbacked(folio)) {
> > 		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> > 					-nr);
> > -	} else {
> > +	} else if (folio_test_pmd_mappable(folio)) {
> > 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> > 					-nr);
> > 		filemap_nr_thps_dec(mapping);
>
> As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> Not because there's currently a problem, but because we don't leave
> landmines for other people to trip over in future!

Do we need to fix filemap_unaccount_folio() as well?
On Tue, Nov 07, 2023 at 03:06:16PM -0500, Johannes Weiner wrote:
> On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> > On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > > +++ b/mm/huge_memory.c
> > > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > > 	if (folio_test_swapbacked(folio)) {
> > > 		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> > > 					-nr);
> > > -	} else {
> > > +	} else if (folio_test_pmd_mappable(folio)) {
> > > 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> > > 					-nr);
> > > 		filemap_nr_thps_dec(mapping);
> >
> > As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> > Not because there's currently a problem, but because we don't leave
> > landmines for other people to trip over in future!
>
> Do we need to fix filemap_unaccount_folio() as well?

Looks to me like it is already correct?

	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
	if (folio_test_swapbacked(folio)) {
		__lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
		if (folio_test_pmd_mappable(folio))
			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
	} else if (folio_test_pmd_mappable(folio)) {
		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
		filemap_nr_thps_dec(mapping);
	}
On Tue, Nov 07, 2023 at 08:07:59PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 07, 2023 at 03:06:16PM -0500, Johannes Weiner wrote:
> > On Tue, Nov 07, 2023 at 07:35:37PM +0000, Matthew Wilcox wrote:
> > > On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > > > 	if (folio_test_swapbacked(folio)) {
> > > > 		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> > > > 					-nr);
> > > > -	} else {
> > > > +	} else if (folio_test_pmd_mappable(folio)) {
> > > > 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> > > > 					-nr);
> > > > 		filemap_nr_thps_dec(mapping);
> > >
> > > As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> > > Not because there's currently a problem, but because we don't leave
> > > landmines for other people to trip over in future!
> >
> > Do we need to fix filemap_unaccount_folio() as well?
>
> Looks to me like it is already correct?
>
> 	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
> 	if (folio_test_swapbacked(folio)) {
> 		__lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
> 		if (folio_test_pmd_mappable(folio))
> 			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
> 	} else if (folio_test_pmd_mappable(folio)) {
> 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
> 		filemap_nr_thps_dec(mapping);
> 	}

Argh, I overlooked it because it's nested further in due to that NR_SHMEM
update. Sorry about the noise.
Matthew Wilcox <willy@infradead.org> writes:

> On Tue, Nov 07, 2023 at 10:18:05AM -0800, Stefan Roesch wrote:
>> +++ b/mm/huge_memory.c
>> @@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>> 	if (folio_test_swapbacked(folio)) {
>> 		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
>> 					-nr);
>> -	} else {
>> +	} else if (folio_test_pmd_mappable(folio)) {
>> 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
>> 					-nr);
>> 		filemap_nr_thps_dec(mapping);
>
> As I said, we also need the folio_test_pmd_mappable() for swapbacked.
> Not because there's currently a problem, but because we don't leave
> landmines for other people to trip over in future!

I'll add it in the next version.
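For reference, guarding the swapbacked branch the same way, as Matthew asks, might look like the following. This is a sketch of a possible follow-up hunk, not the posted v2 and not compilable on its own; it mirrors the shape of filemap_unaccount_folio() quoted above, on the assumption that NR_SHMEM_THPS is likewise only incremented for pmd-mappable folios:

```c
	/* In split_huge_page_to_list(): only decrement THP counters
	 * that were incremented for pmd-mappable folios on the add side. */
	if (folio_test_swapbacked(folio)) {
		if (folio_test_pmd_mappable(folio))
			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
						-nr);
	} else if (folio_test_pmd_mappable(folio)) {
		__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
					-nr);
		filemap_nr_thps_dec(mapping);
	}
```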
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 064fbd90822b4..9dbd5ef5a3902 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2740,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (folio_test_swapbacked(folio)) {
 		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
 					-nr);
-	} else {
+	} else if (folio_test_pmd_mappable(folio)) {
 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
 					-nr);
 		filemap_nr_thps_dec(mapping);