Message ID: <20240213215520.1048625-1-zi.yan@sent.com>
From: Zi Yan <zi.yan@sent.com>
To: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>, linux-mm@kvack.org
Cc: Zi Yan <ziy@nvidia.com>, "Matthew Wilcox (Oracle)" <willy@infradead.org>,
    David Hildenbrand <david@redhat.com>, Yang Shi <shy828301@gmail.com>,
    Yu Zhao <yuzhao@google.com>, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
    Ryan Roberts <ryan.roberts@arm.com>, Michal Koutný <mkoutny@suse.com>,
    Roman Gushchin <roman.gushchin@linux.dev>, "Zach O'Keefe" <zokeefe@google.com>,
    Hugh Dickins <hughd@google.com>, Mcgrof Chamberlain <mcgrof@kernel.org>,
    Andrew Morton <akpm@linux-foundation.org>, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 0/7] Split a folio to any lower order folios
Date: Tue, 13 Feb 2024 16:55:13 -0500
Reply-To: Zi Yan <ziy@nvidia.com>
Series: Split a folio to any lower order folios
Message
Zi Yan, Feb. 13, 2024, 9:55 p.m. UTC
From: Zi Yan <ziy@nvidia.com>
Hi all,
File folios support any order and multi-size THP is now upstream[1], so both
file and anonymous folios can have an order greater than 0. Currently,
split_huge_page() only splits a huge page to order-0 pages, but splitting to
orders higher than 0 would make better use of large folios. In addition, the
Large Block Sizes (LBS) support in XFS would benefit from it[2]. This patchset
adds support for splitting a large folio to any lower order folios and uses it
during file folio truncate operations.
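
For readers less familiar with folio orders, a small illustration of what
"splitting to a lower order" buys; this is arithmetic over the orders
discussed above, not code from the series:

/*
 * Illustration only, not from the patches: splitting an order-`old_order`
 * folio down to order-`new_order` produces 1 << (old_order - new_order)
 * folios. For example, an order-9 folio (2MB with 4KB base pages) split to
 * order-4 (64KB) becomes 32 folios instead of 512 order-0 pages.
 */
static inline unsigned int nr_folios_after_split(unsigned int old_order,
						 unsigned int new_order)
{
	return 1U << (old_order - new_order);
}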
For Patch 6, Hugh did not like my approach to minimize the number of
folios for truncate[3]. I would like to get more feedback, especially
from FS people, on it to decide whether to keep it or not.
The patchset is on top of mm-everything-2024-02-13-01-26.
Changelog
===
Since v3
---
1. Excluded shmem folios and pagecache folios without FS support from
splitting to any order (per Hugh Dickins).
2. Allowed splitting anonymous large folio to any lower order since
multi-size THP is upstreamed.
3. Adapted selftests code to new framework.
Since v2
---
1. Fixed an issue in __split_page_owner() introduced during my rebase
Since v1
---
1. Changed the split_page_memcg() and split_page_owner() parameters to use order.
2. Used folio_test_pmd_mappable() in place of the equivalent code.
Details
===
* Patch 1 changes split_page_memcg() to use order instead of nr_pages.
* Patch 2 changes split_page_owner() to use order instead of nr_pages.
* Patches 3 and 4 add a new_order parameter to split_page_memcg() and
  split_page_owner() to prepare for the upcoming changes.
* Patch 5 adds split_huge_page_to_list_to_order() to split a huge page
  to any lower order. The original split_huge_page_to_list() calls
  split_huge_page_to_list_to_order() with new_order = 0 (see the sketch
  after this list).
* Patch 6 uses split_huge_page_to_list_to_order() in large pagecache folio
  truncation instead of splitting the large folio all the way down to order-0.
* Patch 7 adds a test API to debugfs and test cases in the
  split_huge_page_test selftests.
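
As a rough orientation, here is a sketch of how the reworked interfaces fit
together. It is reconstructed from the descriptions above rather than copied
from the patches, so the exact signatures in the series may differ:

/*
 * Sketch only: reconstructed from the cover letter, not taken from the
 * patches. The real declarations live in include/linux/memcontrol.h,
 * include/linux/page_owner.h and include/linux/huge_mm.h in the series.
 */

/* Patches 1-4: the memcg and page_owner split hooks take orders, not page counts. */
void split_page_memcg(struct page *head, int old_order, int new_order);
void split_page_owner(struct page *head, int old_order, int new_order);

/* Patch 5: splitting to order 0 becomes a special case of the new interface. */
int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
				     unsigned int new_order);

static inline int split_huge_page_to_list(struct page *page,
					  struct list_head *list)
{
	/* Per the cover letter, the old entry point splits to order 0. */
	return split_huge_page_to_list_to_order(page, list, 0);
}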
Comments and/or suggestions are welcome.
[1] https://lore.kernel.org/all/20231207161211.2374093-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/linux-mm/qzbcjn4gcyxla4gwuj6smlnwknz2wvo5wrjctin6eengjfqjei@lzkxv3iy6bol/
[3] https://lore.kernel.org/linux-mm/9dd96da-efa2-5123-20d4-4992136ef3ad@google.com/
Zi Yan (7):
mm/memcg: use order instead of nr in split_page_memcg()
mm/page_owner: use order instead of nr in split_page_owner()
mm: memcg: make memcg huge page split support any order split.
mm: page_owner: add support for splitting to any order in split
page_owner.
mm: thp: split huge page to any lower order pages (except order-1).
mm: truncate: split huge page cache page to a non-zero order if
possible.
mm: huge_memory: enable debugfs to split huge pages to any order.
include/linux/huge_mm.h | 21 +-
include/linux/memcontrol.h | 4 +-
include/linux/page_owner.h | 10 +-
mm/huge_memory.c | 149 +++++++++---
mm/memcontrol.c | 10 +-
mm/page_alloc.c | 8 +-
mm/page_owner.c | 8 +-
mm/truncate.c | 21 +-
.../selftests/mm/split_huge_page_test.c | 223 +++++++++++++++++-
9 files changed, 382 insertions(+), 72 deletions(-)
Comments
David Hildenbrand replied:

On 13.02.24 22:55, Zi Yan wrote:
> [...]
> For Patch 6, Hugh did not like my approach to minimize the number of
> folios for truncate[3]. I would like to get more feedback, especially
> from FS people, on it to decide whether to keep it or not.

I'm curious, would it make sense to exclude the "more" controversial parts
(i.e., patch #6) for now, and focus on the XFS use case only?
Zi Yan replied:

On 13 Feb 2024, at 17:21, David Hildenbrand wrote:
> On 13.02.24 22:55, Zi Yan wrote:
>> [...]
>> For Patch 6, Hugh did not like my approach to minimize the number of
>> folios for truncate[3]. I would like to get more feedback, especially
>> from FS people, on it to decide whether to keep it or not.
>
> I'm curious, would it make sense to exclude the "more" controversial parts
> (i.e., patch #6) for now, and focus on the XFS use case only?

Sure. Patch 6 was there to make use of split_huge_page_to_list_to_order().
Now we have multi-size THP and XFS use cases, it can be dropped.

--
Best Regards,
Yan, Zi
Ryan Roberts replied:

On 13/02/2024 22:31, Zi Yan wrote:
> On 13 Feb 2024, at 17:21, David Hildenbrand wrote:
>> [...]
>> I'm curious, would it make sense to exclude the "more" controversial parts
>> (i.e., patch #6) for now, and focus on the XFS use case only?
>
> Sure. Patch 6 was there to make use of split_huge_page_to_list_to_order().
> Now we have multi-size THP and XFS use cases, it can be dropped.

What are your plans for how to determine when to split THP and to what order?
I don't see anything in this series that would split anon THP to non-zero
order?

We have talked about using hints from user space in the past (e.g. mremap,
munmap, madvise, etc). But chrome has a use case where it temporarily
mprotects a single (4K) page as part of garbage collection (IIRC). If you
eagerly split on that hint, you will have lost the benefits of the large
folio when it later mprotects back to the original setting.

I guess David will suggest this would be a good use case for the
khugepaged-lite mechanism we have been talking about. I dunno - it seems
wasteful to split then collapse again. Or perhaps you're considering doing
something clever in deferred split?
David Hildenbrand replied:

On 14.02.24 11:50, Ryan Roberts wrote:
> [...]
> We have talked about using hints from user space in the past (e.g. mremap,
> munmap, madvise, etc). But chrome has a use case where it temporarily
> mprotects a single (4K) page as part of garbage collection (IIRC). If you
> eagerly split on that hint, you will have lost the benefits of the large
> folio when it later mprotects back to the original setting.

Not only that, splitting will make some of these operations more expensive,
possibly with no actual benefit.

> I guess David will suggest this would be a good use case for the
> khugepaged-lite mechanism we have been talking about. I dunno - it seems
> wasteful to split then collapse again.

I agree. mprotect() and even madvise(), ... might not be good candidates for
splitting. mremap() likely is, if the folio is mapped exclusively.
MADV_DONTNEED/munmap()/mlock() might be good candidates (again, if mapped
exclusively). This will need a lot of thought I'm afraid (as you say,
deferred splitting is another example).
Zi Yan replied:

On 14 Feb 2024, at 5:55, David Hildenbrand wrote:
> On 14.02.24 11:50, Ryan Roberts wrote:
>> [...]
>> What are your plans for how to determine when to split THP and to what
>> order? I don't see anything in this series that would split anon THP to
>> non-zero order?
> [...]
> I agree. mprotect() and even madvise(), ... might not be good candidates
> for splitting. mremap() likely is, if the folio is mapped exclusively.
> MADV_DONTNEED/munmap()/mlock() might be good candidates (again, if mapped
> exclusively). This will need a lot of thought I'm afraid (as you say,
> deferred splitting is another example).

My initial use was for splitting 1GB THP to 2MB THP, but 1GB THP is not
upstream yet. So for now, this might only be used by XFS. For anonymous large
folios, we will use this when we find a justified use case. What I can think
of is when a PMD-mapped THP happens to be split and the resulting order can be
a HW/SW favored order, like 64KB or 32KB (to be able to use contig PTE), we
split to that order; otherwise, we still split to order-0.

--
Best Regards,
Yan, Zi
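To make the orders mentioned above concrete, a back-of-the-envelope
illustration assuming a 4KB base page size (arithmetic only, not code from
the series):

	2MB PMD-mapped THP = 512 x 4KB pages -> order 9
	64KB (contig PTE)  =  16 x 4KB pages -> order 4
	32KB               =   8 x 4KB pages -> order 3

So splitting a PMD-mapped THP to order 4 would leave 32 folios of 64KB that
can still use contiguous PTE mappings, rather than 512 separate order-0 pages.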
Zi Yan replied:

Hi Pankaj,

On 13 Feb 2024, at 16:55, Zi Yan wrote:
> [...]

Just talked to Matthew about his order-1 pagecache folio; I am planning to
grab that into this one, so that I can remove the restriction in my patches
and you guys do not need to do that in your patchset. Let me know if it works
for you.

--
Best Regards,
Yan, Zi
Zi Yan replied:

On 16 Feb 2024, at 5:06, Pankaj Raghav (Samsung) wrote:
> Hi Zi Yan,
>
> On Tue, Feb 13, 2024 at 04:55:13PM -0500, Zi Yan wrote:
>> [...]
>
> I added your patches on top of my patches, but removed patch 6 and I
> added this instead:
>
> diff --git a/mm/truncate.c b/mm/truncate.c
> index 725b150e47ac..dd07e2e327a8 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -239,7 +239,8 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>  	folio_invalidate(folio, offset, length);
>  	if (!folio_test_large(folio))
>  		return true;
> -	if (split_folio(folio) == 0)
> +	if (split_folio_to_order(folio,
> +			mapping_min_folio_order(folio->mapping)) == 0)
>  		return true;
>  	if (folio_test_dirty(folio))
>  		return false;
>
> I ran the generic/476 fstest[1] with SOAK_DURATION set to 360 seconds. This
> test uses fsstress to do a lot of writes, truncate operations, etc. I ran
> this on XFS with **64k block size on a 4k page size system**.
>
> I recorded the vm events for split page and this was the result I got:
>
> Before your patches:
> root@debian:~/xfstests# cat /proc/vmstat | grep split
> thp_split_page 0
> thp_split_page_failed 5819
>
> After your patches:
> root@debian:~/xfstests# cat /proc/vmstat | grep split
> thp_split_page 5846
> thp_split_page_failed 20
>
> Your patch series definitely helps with splitting the folios while still
> maintaining the min_folio_order that LBS requires.

Sounds great! Thanks for testing.

> We are still discussing how to quantify this benefit in terms of some
> metric with this support. If you have some ideas here, let me know.

From my understanding, the benefit will come from the page cache folio size
being bigger with LBS (plus this patchset) after truncate. I assume any
benchmark testing read/write throughput after truncate operations might be
helpful.

> I will run the whole xfstests tonight to check for any regressions.

Can you use the updated patches from:
https://github.com/x-y-z/linux-1gb-thp/tree/split_thp_to_any_order_v5-mm-everything-2024-02-16-01-35?
It contains changes and fixes based on the feedback from this version. I am
planning to send this new version out soon.

> --
> Pankaj
>
> [1] https://github.com/kdave/xfstests/blob/master/tests/generic/476

--
Best Regards,
Yan, Zi