Message ID: <20231122162950.3854897-1-ryan.roberts@arm.com>
Headers:
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, Yin Fengwei <fengwei.yin@intel.com>, David Hildenbrand <david@redhat.com>, Yu Zhao <yuzhao@google.com>, Catalin Marinas <catalin.marinas@arm.com>, Anshuman Khandual <anshuman.khandual@arm.com>, Yang Shi <shy828301@gmail.com>, "Huang, Ying" <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>, Luis Chamberlain <mcgrof@kernel.org>, Itaru Kitayama <itaru.kitayama@gmail.com>, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, John Hubbard <jhubbard@nvidia.com>, David Rientjes <rientjes@google.com>, Vlastimil Babka <vbabka@suse.cz>, Hugh Dickins <hughd@google.com>, Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [RESEND PATCH v7 00/10] Small-sized THP for anonymous memory
Date: Wed, 22 Nov 2023 16:29:40 +0000
Message-Id: <20231122162950.3854897-1-ryan.roberts@arm.com>
List-ID: <linux-kernel.vger.kernel.org>
Series: Small-sized THP for anonymous memory
Message
Ryan Roberts
Nov. 22, 2023, 4:29 p.m. UTC
Note: I'm resending this at Andrew's suggestion due to having originally sent
it during LPC. I'm hoping it's in a position where the feedback is minor enough
that I can rework it in time for v6.8, but so far haven't had any.

Hi All,

This is v7 of a series to implement small-sized THP for anonymous memory
(previously called "large anonymous folios"). The objective is to improve
performance by allocating larger chunks of memory during anonymous page
faults:

1) Since SW (the kernel) is dealing with larger chunks of memory than base
   pages, there are efficiency savings to be had; fewer page faults, batched
   PTE and RMAP manipulation, reduced lru list, etc. In short, we reduce
   kernel overhead. This should benefit all architectures.

2) Since we are now mapping physically contiguous chunks of memory, we can
   take advantage of HW TLB compression techniques. A reduction in TLB
   pressure speeds up kernel and user space. arm64 systems have 2 mechanisms
   to coalesce TLB entries; "the contiguous bit" (architectural) and HPA
   (uarch).

The major change in this revision is the migration to a new sysfs interface
as recommended by David Hildenbrand - thanks to David for the suggestion!
This interface is inspired by the existing per-hugepage-size sysfs interface
used by hugetlb, provides full backwards compatibility with the existing
PMD-size THP interface, and provides a base for future extensibility. See [7]
for detailed discussion of the interface.

By default, the existing behaviour (and performance) is maintained. The user
must explicitly enable small-sized THP to see the performance benefit.

The series has also become heavy with mm selftest changes: these all relate
to enlightenment of the cow and khugepaged tests to explicitly test with
small-sized THP.

This series is based on mm-unstable (60df8b4235f5).

Prerequisites
=============

Some work items identified as being prerequisites are listed on page 3 at [8].
The summary is:

| item                          | status                  |
|:------------------------------|:------------------------|
| mlock                         | In mainline (v6.7)      |
| madvise                       | In mainline (v6.6)      |
| compaction                    | v1 posted [9]           |
| numa balancing                | Investigated: see below |
| user-triggered page migration | In mainline (v6.7)      |
| khugepaged collapse           | In mainline (NOP)       |

On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters,
John Hubbard has investigated this and concluded that A) it is not clear at
the moment what a better policy might be for PTE-mapped THP, and B) he
questions whether this should really be considered a prerequisite, given that
no regression is caused for the default "small-sized THP disabled" case, and
there is no correctness issue when it is enabled - it's just a potential for
non-optimal performance. (John, please do elaborate if I haven't captured
this correctly!)

If there are no disagreements about removing numa balancing from the list,
then that just leaves compaction, which is in review on-list at the moment.

I really would like to get this series (and its remaining compaction
prerequisite) in for v6.8. I accept that it may be a bit optimistic at this
point, but let's see where we get to with review?

Testing
=======

The series includes patches for mm selftests to enlighten the cow and
khugepaged tests to explicitly test with small-order THP, in the same way
that PMD-order THP is tested. The new tests all pass, and no regressions are
observed in the mm selftest suite. I've also run my usual kernel compilation
and java script benchmarks without any issues.

Refer to my performance numbers posted with v6 [6]. (These are for
small-sized THP only - they do not include the arm64 contpte follow-on
series).

John Hubbard at Nvidia has indicated dramatic 10x performance improvements
for some workloads at [10]. (Observed using v6 of this series as well as the
arm64 contpte series).
Kefeng Wang at Huawei has also indicated he sees improvements at [11],
although there are some latency regressions also.

Changes since v6 [6]
====================

- Refactored vmf_pte_range_changed() to remove uffd special-case (suggested
  by JohnH)
- Dropped accounting patch (#3 in v6) (suggested by DavidH)
  - Continue to account *PMD-sized* THP only for now
  - Can add more counters in future if needed
  - Page cache large folios haven't needed any new counters yet
- Pivot to sysfs ABI proposed by DavidH
  - per-size directories in a similar shape to that used by hugetlb
- Dropped "recommend" keyword patch (#6 in v6) (suggested by DavidH, Yu Zhou)
  - For now, users need to understand implicitly which sizes are beneficial
    to their HW/SW
- Dropped arch_wants_pte_order() patch (#7 in v6)
  - No longer needed due to dropping the "recommend" keyword patch
- Enlightened khugepaged mm selftest to explicitly test with small-size THP
- Scrubbed commit logs to use "small-sized THP" consistently (suggested by
  DavidH)

Changes since v5 [5]
====================

- Added accounting for PTE-mapped THPs (patch 3)
- Added runtime control mechanism via sysfs as extension to THP (patch 4)
- Minor refactoring of alloc_anon_folio() to integrate with runtime controls
- Stripped out hardcoded policy for allocation order; it's now all user space
  controlled (although user space can request "recommend" which will
  configure the HW-preferred order)

Changes since v4 [4]
====================

- Removed "arm64: mm: Override arch_wants_pte_order()" patch; arm64 now uses
  the default order-3 size. I have moved this patch over to the contpte
  series.
- Added "mm: Allow deferred splitting of arbitrary large anon folios" back
  into series. I originally removed this at v2 to add to a separate series,
  but that series has transformed significantly and it no longer fits, so
  bringing it back here.
- Reintroduced dependency on set_ptes(); originally dropped this at v2, but
  set_ptes() is in mm-unstable now.
- Updated policy for when to allocate LAF; only fallback to order-0 if
  MADV_NOHUGEPAGE is present or if THP is disabled via prctl; no longer rely
  on sysfs's never/madvise/always knob.
- Fallback to order-0 whenever uffd is armed for the vma, not just when
  uffd-wp is set on the pte.
- alloc_anon_folio() now returns `struct folio *`, where errors are encoded
  with ERR_PTR().

The last 3 changes were proposed by Yu Zhao - thanks!

Changes since v3 [3]
====================

- Renamed feature from FLEXIBLE_THP to LARGE_ANON_FOLIO.
- Removed `flexthp_unhinted_max` boot parameter. Discussion concluded that a
  sysctl is preferable but we will wait until a real workload needs it.
- Fixed uninitialized `addr` on read fault path in do_anonymous_page().
- Added mm selftests for large anon folios in cow test suite.

Changes since v2 [2]
====================

- Dropped commit "Allow deferred splitting of arbitrary large anon folios"
  - Huang, Ying suggested the "batch zap" work (which I dropped from this
    series after v1) is a prerequisite for merging FLEXIBLE_THP, so I've
    moved the deferred split patch to a separate series along with the batch
    zap changes. I plan to submit this series early next week.
- Changed folio order fallback policy
  - We no longer iterate from preferred to 0 looking for acceptable policy
  - Instead we iterate through preferred, PAGE_ALLOC_COSTLY_ORDER and 0 only
- Removed vma parameter from arch_wants_pte_order()
- Added command line parameter `flexthp_unhinted_max`
  - clamps preferred order when vma hasn't explicitly opted-in to THP
- Never allocate large folio for MADV_NOHUGEPAGE vma (or when THP is disabled
  for process or system).
- Simplified implementation and integration with do_anonymous_page()
- Removed dependency on set_ptes()

Changes since v1 [1]
====================

- removed changes to arch-dependent vma_alloc_zeroed_movable_folio()
  - replaced with arch-independent alloc_anon_folio()
    - follows THP allocation approach
- no longer retry with intermediate orders if allocation fails
  - fallback directly to order-0
- remove folio_add_new_anon_rmap_range() patch
  - instead add its new functionality to folio_add_new_anon_rmap()
- remove batch-zap pte mappings optimization patch
  - remove enabler folio_remove_rmap_range() patch too
  - These offer real perf improvement so will submit separately
- simplify Kconfig
  - single FLEXIBLE_THP option, which is independent of arch
  - depends on TRANSPARENT_HUGEPAGE
  - when enabled default to max anon folio size of 64K unless arch explicitly
    overrides
- simplify changes to do_anonymous_page():
  - no more retry loop

[1] https://lore.kernel.org/linux-mm/20230626171430.3167004-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/linux-mm/20230703135330.1865927-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/linux-mm/20230714160407.4142030-1-ryan.roberts@arm.com/
[4] https://lore.kernel.org/linux-mm/20230726095146.2826796-1-ryan.roberts@arm.com/
[5] https://lore.kernel.org/linux-mm/20230810142942.3169679-1-ryan.roberts@arm.com/
[6] https://lore.kernel.org/linux-mm/20230929114421.3761121-1-ryan.roberts@arm.com/
[7] https://lore.kernel.org/linux-mm/6d89fdc9-ef55-d44e-bf12-fafff318aef8@redhat.com/
[8] https://drive.google.com/file/d/1GnfYFpr7_c1kA41liRUW5YtCb8Cj18Ud/view?usp=sharing&resourcekey=0-U1Mj3-RhLD1JV6EThpyPyA
[9] https://lore.kernel.org/linux-mm/20231113170157.280181-1-zi.yan@sent.com/
[10] https://lore.kernel.org/linux-mm/c507308d-bdd4-5f9e-d4ff-e96e4520be85@nvidia.com/
[11] https://lore.kernel.org/linux-mm/479b3e2b-456d-46c1-9677-38f6c95a0be8@huawei.com/

Thanks,
Ryan

Ryan Roberts (10):
  mm: Allow deferred splitting of arbitrary anon large folios
  mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
  mm: thp: Introduce per-size thp sysfs interface
  mm: thp: Support allocation of anonymous small-sized THP
  selftests/mm/kugepaged: Restore thp settings at exit
  selftests/mm: Factor out thp settings management
  selftests/mm: Support small-sized THP interface in thp_settings
  selftests/mm/khugepaged: Enlighten for small-sized THP
  selftests/mm/cow: Generalize do_run_with_thp() helper
  selftests/mm/cow: Add tests for anonymous small-sized THP

 Documentation/admin-guide/mm/transhuge.rst |  74 +++-
 Documentation/filesystems/proc.rst         |   6 +-
 fs/proc/task_mmu.c                         |   3 +-
 include/linux/huge_mm.h                    | 102 +++--
 mm/huge_memory.c                           | 263 +++++++++++--
 mm/khugepaged.c                            |  16 +-
 mm/memory.c                                | 112 +++++-
 mm/page_vma_mapped.c                       |   3 +-
 mm/rmap.c                                  |  32 +-
 tools/testing/selftests/mm/Makefile        |   4 +-
 tools/testing/selftests/mm/cow.c           | 215 +++++++---
 tools/testing/selftests/mm/khugepaged.c    | 410 ++++-----------------
 tools/testing/selftests/mm/run_vmtests.sh  |   2 +
 tools/testing/selftests/mm/thp_settings.c  | 349 ++++++++++++++++++
 tools/testing/selftests/mm/thp_settings.h  |  80 ++++
 15 files changed, 1160 insertions(+), 511 deletions(-)
 create mode 100644 tools/testing/selftests/mm/thp_settings.c
 create mode 100644 tools/testing/selftests/mm/thp_settings.h

--
2.25.1
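To make the per-size sysfs interface described above concrete, here is a
minimal sketch. The `hugepages-64kB` directory name and the `enabled` file are
assumptions modelled on hugetlb's per-size directory layout that the cover
letter cites as inspiration; this thread does not spell out the final paths,
so the real interface may differ. The snippet operates on a mock root so it
can run anywhere; on a kernel with the series applied, the writes would target
/sys/kernel/mm/transparent_hugepage/ instead (and require root).

```shell
# Sketch only: directory/file names below are assumptions modelled on
# hugetlb's per-size interface, not confirmed by this thread.
THP=/sys/kernel/mm/transparent_hugepage

# Mock sysfs root so the sketch is runnable without the patched kernel.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/hugepages-64kB"

# The existing global control keeps working (backwards compatibility):
echo always > "$ROOT/enabled"

# Per-size control: enable 64K small-sized THP for MADV_HUGEPAGE regions only:
echo madvise > "$ROOT/hugepages-64kB/enabled"

cat "$ROOT/hugepages-64kB/enabled"    # prints "madvise"
```

On a real system the same two writes would go to "$THP/enabled" and
"$THP/hugepages-64kB/enabled"; by default nothing changes, matching the "user
must explicitly enable" behaviour described above.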
Comments
On 22.11.23 17:29, Ryan Roberts wrote:
> Note: I'm resending this at Andrew's suggestion due to having originally sent
> it during LPC. I'm hoping its in a position where the feedback is minor enough
> that I can rework in time for v6.8, but so far haven't had any.

I'll have a look either this week or next week. Very high on my todo list :)
On 11/22/23 08:29, Ryan Roberts wrote:
...
> Prerequisites
> =============
>
> Some work items identified as being prerequisites are listed on page 3 at [8].
> The summary is:
>
> | item                          | status                  |
> |:------------------------------|:------------------------|
> | mlock                         | In mainline (v6.7)      |
> | madvise                       | In mainline (v6.6)      |
> | compaction                    | v1 posted [9]           |
> | numa balancing                | Investigated: see below |
> | user-triggered page migration | In mainline (v6.7)      |
> | khugepaged collapse           | In mainline (NOP)       |
>
> On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters,
> John Hubbard has investigated this and concluded that it is A) not clear at the
> moment what a better policy might be for PTE-mapped THP and B) questions whether
> this should really be considered a prerequisite given no regression is caused
> for the default "small-sized THP disabled" case, and there is no correctness
> issue when it is enabled - its just a potential for non-optimal performance.
> (John please do elaborate if I haven't captured this correctly!)

That's accurate. I actually want to continue looking into this (Mel Gorman's
recent replies to v6 provided helpful touchstones to the NUMA reasoning
leading up to the present day), and maybe at least bring pte-thps into rough
parity with THPs with respect to NUMA. But that really doesn't seem like
something that needs to happen first, especially since the outcome might even
be, "first, do no harm"--as in, it's better as-is. We'll see.

> If there are no disagreements about removing numa balancing from the list, then
> that just leaves compaction which is in review on list at the moment.
>
> I really would like to get this series (and its remaining comapction
> prerequisite) in for v6.8. I accept that it may be a bit optimistic at this
> point, but lets see where we get to with review?
>
> Testing
> =======
>
> The series includes patches for mm selftests to enlighten the cow and khugepaged
> tests to explicitly test with small-order THP, in the same way that PMD-order
> THP is tested. The new tests all pass, and no regressions are observed in the mm
> selftest suite. I've also run my usual kernel compilation and java script
> benchmarks without any issues.
>
> Refer to my performance numbers posted with v6 [6]. (These are for small-sized
> THP only - they do not include the arm64 contpte follow-on series).
>
> John Hubbard at Nvidia has indicated dramatic 10x performance improvements for
> some workloads at [10]. (Observed using v6 of this series as well as the arm64
> contpte series).

Testing continues. Some workloads do even much better than 10x, it's quite
remarkable and glorious to see. :) I can send more perf data perhaps in a few
days or a week, if there is still doubt about the benefits.

That was with the v6 series, though. I'm about to set up and run with v7, and
expect to provide a tested-by tag for functionality, sometime soon (in the
next few days), if machine availability works out as expected.

thanks,
On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote:
> Note: I'm resending this at Andrew's suggestion due to having originally sent
> it during LPC. I'm hoping its in a position where the feedback is minor enough
> that I can rework in time for v6.8, but so far haven't had any.
>
> Hi All,
>
> This is v7 of a series to implement small-sized THP for anonymous memory
> (previously called "large anonymous folios"). The objective of this is to

I'm still against small-sized THP. We've now got people asking whether the
THP counters should be updated when dealing with large folios that are
smaller than PMD sized. It's sowing confusion, and we should go back to large
anon folios as a name.
On 23.11.23 16:59, Matthew Wilcox wrote:
> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote:
>> Note: I'm resending this at Andrew's suggestion due to having originally sent
>> it during LPC. I'm hoping its in a position where the feedback is minor enough
>> that I can rework in time for v6.8, but so far haven't had any.
>>
>> Hi All,
>>
>> This is v7 of a series to implement small-sized THP for anonymous memory
>> (previously called "large anonymous folios"). The objective of this is to
>
> I'm still against small-sized THP. We've now got people asking whether
> the THP counters should be updated when dealing with large folios that
> are smaller than PMD sized. It's sowing confusion, and we should go
> back to large anon folios as a name.

I disagree.

https://lore.kernel.org/all/65dbdf2a-9281-a3c3-b7e3-a79c5b60b357@redhat.com/
On Thu, Nov 23, 2023 at 05:05:37PM +0100, David Hildenbrand wrote:
> On 23.11.23 16:59, Matthew Wilcox wrote:
> > On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote:
> > > Note: I'm resending this at Andrew's suggestion due to having originally sent
> > > it during LPC. I'm hoping its in a position where the feedback is minor enough
> > > that I can rework in time for v6.8, but so far haven't had any.
> > >
> > > Hi All,
> > >
> > > This is v7 of a series to implement small-sized THP for anonymous memory
> > > (previously called "large anonymous folios"). The objective of this is to
> >
> > I'm still against small-sized THP. We've now got people asking whether
> > the THP counters should be updated when dealing with large folios that
> > are smaller than PMD sized. It's sowing confusion, and we should go
> > back to large anon folios as a name.
>
> I disagree.
>
> https://lore.kernel.org/all/65dbdf2a-9281-a3c3-b7e3-a79c5b60b357@redhat.com/

And yet:
https://lore.kernel.org/linux-mm/20231106193315.GB3661273@cmpxchg.org/

"This is a small THP so we don't account it as a THP, we only account
normal THPs as THPs" is a bizarre position to take.

Not to mention that saying a foo is a small huge baz is just bizarre.
Am I a small giant? Or just a large human?
On 23.11.23 17:18, Matthew Wilcox wrote:
> On Thu, Nov 23, 2023 at 05:05:37PM +0100, David Hildenbrand wrote:
>> On 23.11.23 16:59, Matthew Wilcox wrote:
>>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote:
>>>> Note: I'm resending this at Andrew's suggestion due to having originally sent
>>>> it during LPC. I'm hoping its in a position where the feedback is minor enough
>>>> that I can rework in time for v6.8, but so far haven't had any.
>>>>
>>>> Hi All,
>>>>
>>>> This is v7 of a series to implement small-sized THP for anonymous memory
>>>> (previously called "large anonymous folios"). The objective of this is to
>>>
>>> I'm still against small-sized THP. We've now got people asking whether
>>> the THP counters should be updated when dealing with large folios that
>>> are smaller than PMD sized. It's sowing confusion, and we should go
>>> back to large anon folios as a name.
>>
>> I disagree.
>>
>> https://lore.kernel.org/all/65dbdf2a-9281-a3c3-b7e3-a79c5b60b357@redhat.com/
>
> And yet:
> https://lore.kernel.org/linux-mm/20231106193315.GB3661273@cmpxchg.org/
>
> "This is a small THP so we don't account it as a THP, we only account
> normal THPs as THPs" is a bizarre position to take.
>
> Not to mention that saying a foo is a small huge baz is just bizarre.
> Am I a small giant? Or just a large human?

I like that analogy. Yet, "small giant" sounds "bigger" in some way IMHO ;)

I'll note that "small-sized THP" is just a temporary feature name, it won't
be exposed as such to the user in sysfs etc. In a couple of years, it will be
forgotten.

To me it makes sense: it's a hugepage (not a page) but smaller compared to
what we previously had. But again, there won't be a "small_thp" toggle
anywhere.

Long-term it's simply going to be a THP.

Quoting from my writeup:

"Nowadays, when somebody says that they are using hugetlb huge pages, the
first question frequently is "which huge page size?". The same will happen
with transparent huge pages I believe."

Regarding the accounting: as I said a couple of times, "AnonHugePages"
should have been called "AnonPmdMapped" or similar; that's what it really is:
as soon as a THP is PTE-mapped, it's not accounted there. But we can't fix
that I guess, unless we add some "world switch" for any workloads that would
care about a different accounting.

So we're really only concerned about:
* AnonHugePages
* ShmemHugePages
* FileHugePages

The question is if we really want to continue extending/adjusting the old
meminfo interfaces and talk about how to perform accounting there.

Because, as we learned, we might get a new file-based sysfs based interface,
because Greg seems to be against exposing new values in the old
single-file-based one.

In a new one, we have all freedom to expose what we actually want nowadays,
and can just document that the old interface was designed with the assumption
that there is only a single THP size.

... like hugetlb, where we also only expose the "default hugetlb size"
parameters for legacy reasons:

HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
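The meminfo fields under discussion can be inspected directly on any Linux
system; a minimal sketch (plain grep, nothing specific to this series) that
pulls the hugepage-related counters out of /proc/meminfo:

```shell
# Show the existing hugepage accounting fields from /proc/meminfo.
# Note the wrinkle being discussed above: AnonHugePages only counts
# PMD-mapped THP, so PTE-mapped (small-sized) THP does not appear here.
grep -E '^(AnonHugePages|ShmemHugePages|FileHugePages|HugePages_|Hugepagesize)' /proc/meminfo
```

The HugePages_* and Hugepagesize lines are the hugetlb "default size" legacy
fields quoted above; values will vary per system.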
On 11/23/23 08:50, David Hildenbrand wrote:
> On 23.11.23 17:18, Matthew Wilcox wrote:
>> On Thu, Nov 23, 2023 at 05:05:37PM +0100, David Hildenbrand wrote:
>>> On 23.11.23 16:59, Matthew Wilcox wrote:
>>>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote:
>>>>> Note: I'm resending this at Andrew's suggestion due to having
>>>>> originally sent it during LPC. I'm hoping its in a position where the
>>>>> feedback is minor enough that I can rework in time for v6.8, but so
>>>>> far haven't had any.
>>>>>
>>>>> Hi All,
>>>>>
>>>>> This is v7 of a series to implement small-sized THP for anonymous
>>>>> memory (previously called "large anonymous folios"). The objective of
>>>>> this is to
>>>>
>>>> I'm still against small-sized THP. We've now got people asking whether
>>>> the THP counters should be updated when dealing with large folios that
>>>> are smaller than PMD sized. It's sowing confusion, and we should go
>>>> back to large anon folios as a name.
>>>
>>> I disagree.
>>>
>>> https://lore.kernel.org/all/65dbdf2a-9281-a3c3-b7e3-a79c5b60b357@redhat.com/
>>
>> And yet:
>> https://lore.kernel.org/linux-mm/20231106193315.GB3661273@cmpxchg.org/
>>
>> "This is a small THP so we don't account it as a THP, we only account
>> normal THPs as THPs" is a bizarre position to take.
>>
>> Not to mention that saying a foo is a small huge baz is just bizarre.
>> Am I a small giant? Or just a large human?
>
> I like that analogy. Yet, "small giant" sounds "bigger" in some way IMHO ;)
>
> I'll note that "small-sized THP" is just a temporary feature name, it
> won't be exposed as such to the user in sysfs etc. In a couple of years,
> it will be forgotten.
>
> To me it makes sense: it's a hugepage (not a page) but smaller compared
> to what we previously had. But again, there won't be a "small_thp"
> toggle anywhere.
>
> Long-term it's simply going to be a THP.
>
> Quoting from my writeup:
>
> "Nowadays, when somebody says that they are using hugetlb huge pages,
> the first question frequently is "which huge page size?". The same will
> happen with transparent huge pages I believe.".
>
> Regarding the accounting: as I said a couple of times, "AnonHugePages"
> should have been called "AnonPmdMapped" or similar; that's what it
> really is: as soon as a THP is PTE-mapped, it's not accounted there. But
> we can't fix that I guess, unless we add some "world switch" for any
> workloads that would care about a different accounting.
>
> So we're really only concerned about:
> * AnonHugePages
> * ShmemHugePages
> * FileHugePages

The v6 patchset had these counters:

/proc/vmstat:  nr_anon_thp_pte
/proc/meminfo: AnonHugePteMap

...which leads to another naming possibility: pte-thp, or pte-mapped-thp,
something along those lines. pte-thp avoids the "small huge" complaint, at
least.

thanks,
On 23 Nov 2023, at 11:50, David Hildenbrand wrote:
> On 23.11.23 17:18, Matthew Wilcox wrote:
>> On Thu, Nov 23, 2023 at 05:05:37PM +0100, David Hildenbrand wrote:
>>> On 23.11.23 16:59, Matthew Wilcox wrote:
>>>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote:
>>>>> Note: I'm resending this at Andrew's suggestion due to having originally sent
>>>>> it during LPC. I'm hoping its in a position where the feedback is minor enough
>>>>> that I can rework in time for v6.8, but so far haven't had any.
>>>>>
>>>>> Hi All,
>>>>>
>>>>> This is v7 of a series to implement small-sized THP for anonymous memory
>>>>> (previously called "large anonymous folios"). The objective of this is to
>>>>
>>>> I'm still against small-sized THP. We've now got people asking whether
>>>> the THP counters should be updated when dealing with large folios that
>>>> are smaller than PMD sized. It's sowing confusion, and we should go
>>>> back to large anon folios as a name.
>>>
>>> I disagree.
>>>
>>> https://lore.kernel.org/all/65dbdf2a-9281-a3c3-b7e3-a79c5b60b357@redhat.com/
>>
>> And yet:
>> https://lore.kernel.org/linux-mm/20231106193315.GB3661273@cmpxchg.org/
>>
>> "This is a small THP so we don't account it as a THP, we only account
>> normal THPs as THPs" is a bizarre position to take.
>>
>> Not to mention that saying a foo is a small huge baz is just bizarre.
>> Am I a small giant? Or just a large human?
>
> I like that analogy. Yet, "small giant" sounds "bigger" in some way IMHO ;)
>
> I'll note that "small-sized THP" is just a temporary feature name, it won't
> be exposed as such to the user in sysfs etc. In a couple of years, it will
> be forgotten.
>
> To me it makes sense: it's a hugepage (not a page) but smaller compared to
> what we previously had. But again, there won't be a "small_thp" toggle
> anywhere.
>
> Long-term it's simply going to be a THP.
>
> Quoting from my writeup:
>
> "Nowadays, when somebody says that they are using hugetlb huge pages, the
> first question frequently is "which huge page size?". The same will
> happen with transparent huge pages I believe.".

I agree. Especially since our ultimate goal is to auto-tune THP sizes to give
the best performance to the user. Having a separate name for small-sized THP
is beneficial to kernel developers, since we want to use the right THP size
for the right workloads/scenarios. But for the average user, it is better to
keep the interface as simple as possible, so that they can just turn on THP
and get a good performance boost. For ninja users, I assume they know the
differences between THP sizes well enough not to confuse themselves, and we
can expose fine-tuning interfaces if really necessary.

> Regarding the accounting: as I said a couple of times, "AnonHugePages"
> should have been called "AnonPmdMapped" or similar; that's what it really
> is: as soon as a THP is PTE-mapped, it's not accounted there. But we can't
> fix that I guess, unless we add some "world switch" for any workloads that
> would care about a different accounting.
>
> So we're really only concerned about:
> * AnonHugePages
> * ShmemHugePages
> * FileHugePages
>
> The question is if we really want to continue extending/adjusting the old
> meminfo interfaces and talk about how to perform accounting there.
>
> Because, as we learned, we might get a new file-based sysfs based
> interface, because Greg seems to be against exposing new values in the old
> single-file-based one.

I am not aware of this. And it is interesting. Do you have a pointer?

> In a new one, we have all freedom to expose what we actually want nowadays,
> and can just document that the old interface was designed with the
> assumption that there is only a single THP size.

This sounds like a good strategy and hopefully we could design the new THP
interface to be more future proof.

> ... like hugetlb, where we also only expose the "default hugetlb size"
> parameters for legacy reasons:
>
> HugePages_Total:       0
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
>
> --
> Cheers,
>
> David / dhildenb

--
Best Regards,
Yan, Zi
>> So we're really only concerned about: >> * AnonHugePages >> * ShmemHugePages >> * FileHugePages >> >> The question is if we really want to continue extending/adjusting the old meminfo interfaces and talk about how to perform accounting there. >> >> Because, as we learned, we might get a new file-based sysfs based interface, because Greg seems to be against exposing new values in the old single-file-based one. > > I am not aware of this. And it is interesting. Do you have a pointer? Sure: https://lore.kernel.org/all/2023110216-labrador-neurosis-1e6e@gregkh/T/#u > >> >> In a new one, we have all freedom to expose what we actually want nowadays, and can just document that the old interface was designed with the assumption that there is only a single THP size. > > This sounds like a good strategy and hopefully we could design the new THP interface > more future proof. Yes!
On 23/11/2023 15:59, Matthew Wilcox wrote: > On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: >> Note: I'm resending this at Andrew's suggestion due to having originally sent >> it during LPC. I'm hoping its in a position where the feedback is minor enough >> that I can rework in time for v6.8, but so far haven't had any. >> >> Hi All, >> >> This is v7 of a series to implement small-sized THP for anonymous memory >> (previously called "large anonymous folios"). The objective of this is to > > I'm still against small-sized THP. We've now got people asking whether > the THP counters should be updated when dealing with large folios that > are smaller than PMD sized. It's sowing confusion, and we should go > back to large anon folios as a name. I suspect I'm labouring the point here, but I'd like to drill into exactly what you are objecting to. Is it: A) Using the name "small-sized THP" (which is currently only used in the commit logs and a couple of times in the documentation). B) Exposing the controls for this feature as an extension to the existing /sys/kernel/mm/transparent_hugepage/* sysfs interface (note the interface never uses the term "small-sized"). If A) then this is easily solved by choosing another descriptive name and updating those places. Personally I think it would be best to continue to use "THP" since we are exposing the feature through that interface. Perhaps "large folio THP". If B) we could move the interface to /sys/kernel/mm/large_folio/*, but that introduces many more banana skins than the current approach IMHO: - We would still want to expose the PMD-size large folio through this new interface and so would still need "global" or equivalent for at least PMD size, but "global" now points to a completely different sibling directory structure. 
And it probably doesn't make any sense for the non-PMD-sizes to have "global" because that would imply the THP interface could control the non-PMD-sizes, which is what we are trying to separate in the first place. So we end up with an asymmetry. - When we get to adding other feature support for the smaller sizes (e.g. khugepaged), we will end up having to duplicate all the controls from transparent_hugepage/* to large_folio/*, then we have the problem that e.g. scan rates could differ and we would end up needing 2 separate daemons. On the interface, David and I did request feedback on the proposal a number of times before I coded it up. I'm sure it's all solvable eventually, but I personally think it is overall simpler and more understandable as it is. I also agree with the other points raised in favor of "small-sized THP". Thanks, Ryan
On Fri, Nov 24, 2023 at 09:56:37AM +0000, Ryan Roberts wrote: > On 23/11/2023 15:59, Matthew Wilcox wrote: > > On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: > >> This is v7 of a series to implement small-sized THP for anonymous memory > >> (previously called "large anonymous folios"). The objective of this is to > > > > I'm still against small-sized THP. We've now got people asking whether > > the THP counters should be updated when dealing with large folios that > > are smaller than PMD sized. It's sowing confusion, and we should go > > back to large anon folios as a name. > > I suspect I'm labouring the point here, but I'd like to drill into exactly what > you are objecting to. Is it: > > A) Using the name "small-sized THP" (which is currently only used in the commit > logs and a couple of times in the documentation). Yes, this is what I'm objecting to. > B) Exposing the controls for this feature as an extension to the existing > /sys/kernel/mm/transparent_hugepage/* sysfs interface (note the interface never > uses the term "small-sized"). I don't object to the controls being here. I still wish we didn't need an interface to control them at all, but I don't have the time to become an expert in anonymous memory and figure out how to make that happen. > If A) then this is easily solved by choosing another descriptive name and > updating those places. Personally I think it would be best to continue to use > "THP" since we are exposing the feature through that interface. Perhaps "large > folio THP". I think that continues the confusion about the existing interfaces we have which count THP (and mean "PMD sized THP"). I'd really prefer the term "THP" to unambiguously mean PMD sized THP. I don't understand why you felt the need to move away from Large Anon Folios as a name.
On 24/11/2023 15:13, Matthew Wilcox wrote: > On Fri, Nov 24, 2023 at 09:56:37AM +0000, Ryan Roberts wrote: >> On 23/11/2023 15:59, Matthew Wilcox wrote: >>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: >>>> This is v7 of a series to implement small-sized THP for anonymous memory >>>> (previously called "large anonymous folios"). The objective of this is to >>> >>> I'm still against small-sized THP. We've now got people asking whether >>> the THP counters should be updated when dealing with large folios that >>> are smaller than PMD sized. It's sowing confusion, and we should go >>> back to large anon folios as a name. >> >> I suspect I'm labouring the point here, but I'd like to drill into exactly what >> you are objecting to. Is it: >> >> A) Using the name "small-sized THP" (which is currently only used in the commit >> logs and a couple of times in the documentation). > > Yes, this is what I'm objecting to. > >> B) Exposing the controls for this feature as an extension to the existing >> /sys/kernel/mm/transparent_hugepage/* sysfs interface (note the interface never >> uses the term "small-sized"). > > I don't object to the controls being here. I still wish we didn't need > an interface to control them at all, but I don't have the time to become > an expert in anonymous memory and figure out how to make that happen. > >> If A) then this is easily solved by choosing another descriptive name and >> updating those places. Personally I think it would be best to continue to use >> "THP" since we are exposing the feature through that interface. Perhaps "large >> folio THP". > > I think that continues the confusion about the existing interfaces we > have which count THP (and mean "PMD sized THP"). I'd really prefer the > term "THP" to unambiguously mean PMD sized THP. I don't understand why > you felt the need to move away from Large Anon Folios as a name. 
> Because the controls are exposed in the sysfs THP directory (and therefore documented in the transhuge.rst document). It seems odd to refer to them as large anon folios within the kernel but expose them as part of the THP interface. But I'm certainly open to the idea of changing the name in the commit logs and being careful to distance it from THP in transhuge.rst if that's the consensus. I am opposed to moving/changing the interface though - that's actually what I thought you were suggesting.
On 24.11.23 16:13, Matthew Wilcox wrote: > On Fri, Nov 24, 2023 at 09:56:37AM +0000, Ryan Roberts wrote: >> On 23/11/2023 15:59, Matthew Wilcox wrote: >>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: >>>> This is v7 of a series to implement small-sized THP for anonymous memory >>>> (previously called "large anonymous folios"). The objective of this is to >>> >>> I'm still against small-sized THP. We've now got people asking whether >>> the THP counters should be updated when dealing with large folios that >>> are smaller than PMD sized. It's sowing confusion, and we should go >>> back to large anon folios as a name. >> >> I suspect I'm labouring the point here, but I'd like to drill into exactly what >> you are objecting to. Is it: >> >> A) Using the name "small-sized THP" (which is currently only used in the commit >> logs and a couple of times in the documentation). > > Yes, this is what I'm objecting to. I'll just repeat that "large anon folio" is misleading, because * we already have "large anon folios" in hugetlb * we already have PMD-sized "large anon folios" in THP But in the end, I don't care what we call this in a commit message. Just sticking to what we have right now makes the most sense to me. I know, as the creator of the term "folio" you have to object :P Sorry ;)
On Fri, Nov 24, 2023 at 04:25:38PM +0100, David Hildenbrand wrote: > On 24.11.23 16:13, Matthew Wilcox wrote: > > On Fri, Nov 24, 2023 at 09:56:37AM +0000, Ryan Roberts wrote: > > > On 23/11/2023 15:59, Matthew Wilcox wrote: > > > > On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: > > > > > This is v7 of a series to implement small-sized THP for anonymous memory > > > > > (previously called "large anonymous folios"). The objective of this is to > > > > > > > > I'm still against small-sized THP. We've now got people asking whether > > > > the THP counters should be updated when dealing with large folios that > > > > are smaller than PMD sized. It's sowing confusion, and we should go > > > > back to large anon folios as a name. > > > > > > I suspect I'm labouring the point here, but I'd like to drill into exactly what > > > you are objecting to. Is it: > > > > > > A) Using the name "small-sized THP" (which is currently only used in the commit > > > logs and a couple of times in the documentation). > > > > Yes, this is what I'm objecting to. > > I'll just repeat that "large anon folio" is misleading, because > * we already have "large anon folios" in hugetlb We do? Where? > * we already have PMD-sized "large anon folios" in THP Right, those are already accounted as THP, and that's what users expect. If we're allocating 1024 x 64kB chunks of memory, the user won't be able to distinguish that from 32 x 2MB chunks of memory, and yet the performance profile for some applications will be very different. > But inn the end, I don't care how we will call this in a commit message. > > Just sticking to what we have right now makes most sense to me. > > I know, as the creator of the term "folio" you have to object :P Sorry ;) I don't care if it's called something to do with folios or not. I am objecting to the use of the term "small THP" on the grounds of confusion and linguistic nonsense.
On 24.11.23 16:53, Matthew Wilcox wrote: > On Fri, Nov 24, 2023 at 04:25:38PM +0100, David Hildenbrand wrote: >> On 24.11.23 16:13, Matthew Wilcox wrote: >>> On Fri, Nov 24, 2023 at 09:56:37AM +0000, Ryan Roberts wrote: >>>> On 23/11/2023 15:59, Matthew Wilcox wrote: >>>>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: >>>>>> This is v7 of a series to implement small-sized THP for anonymous memory >>>>>> (previously called "large anonymous folios"). The objective of this is to >>>>> >>>>> I'm still against small-sized THP. We've now got people asking whether >>>>> the THP counters should be updated when dealing with large folios that >>>>> are smaller than PMD sized. It's sowing confusion, and we should go >>>>> back to large anon folios as a name. >>>> >>>> I suspect I'm labouring the point here, but I'd like to drill into exactly what >>>> you are objecting to. Is it: >>>> >>>> A) Using the name "small-sized THP" (which is currently only used in the commit >>>> logs and a couple of times in the documentation). >>> >>> Yes, this is what I'm objecting to. >> >> I'll just repeat that "large anon folio" is misleading, because >> * we already have "large anon folios" in hugetlb > > We do? Where? MAP_PRIVATE of hugetlb. hugepage_add_anon_rmap() instantiates them. Hugetlb is likely one of the oldest users of compound pages, aka large folios. > >> * we already have PMD-sized "large anon folios" in THP > > Right, those are already accounted as THP, and that's what users expect. > If we're allocating 1024 x 64kB chunks of memory, the user won't be able > to distinguish that from 32 x 2MB chunks of memory, and yet the > performance profile for some applications will be very different. Very right, and because there will be a difference between 1024 x 64kB, 2048 x 32 kB and so forth, we need new memory stats either way.
Ryan had some ideas on that, but currently, that's considered future work, just like it likely is for the pagecache as well, and it needs much more thought. Initially, the admin will have to enable all that for anon either way. It all boils down to one memory statistic for anon memory (AnonHugePages) that's messed-up already. > >> But inn the end, I don't care how we will call this in a commit message. >> >> Just sticking to what we have right now makes most sense to me. >> >> I know, as the creator of the term "folio" you have to object :P Sorry ;) > > I don't care if it's called something to do with folios or not. I Good! > am objecting to the use of the term "small THP" on the grounds of > confusion and linguistic nonsense. Maybe that's the reason why FreeBSD calls them "medium-sized superpages", because "Medium-sized" seems to be more appropriate to express something "in between". So far I thought the reason was because they focused on 64k only. Never trust a German guy on naming suggestions. John has so far been my naming expert, so I'm hoping he can help. "Sub-pmd-sized THP" is just a mouthful. But then again, this would just be a temporary name, and in the future THP will just naturally come in multiple sizes (and others here seem to agree on that). But just to repeat: I don't think there is a need to come up with new terminology or that there will be mass-confusion. So far I've not heard a compelling argument besides "one memory counter could confuse an admin that explicitly enables that new behavior.". Side note: I'm happy that we've reached a stage where we're nitpicking on names :)
On 22.11.23 17:29, Ryan Roberts wrote: > In preparation for supporting anonymous small-sized THP, improve > folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be > passed to it. In this case, all contained pages are accounted using the > order-0 folio (or base page) scheme. > > Reviewed-by: Yu Zhao <yuzhao@google.com> > Reviewed-by: Yin Fengwei <fengwei.yin@intel.com> > Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> > --- > mm/rmap.c | 28 ++++++++++++++++++++-------- > 1 file changed, 20 insertions(+), 8 deletions(-) > > diff --git a/mm/rmap.c b/mm/rmap.c > index 49e4d86a4f70..b086dc957b0c 100644 > --- a/mm/rmap.c > +++ b/mm/rmap.c > @@ -1305,32 +1305,44 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma, > * This means the inc-and-test can be bypassed. > * The folio does not have to be locked. > * > - * If the folio is large, it is accounted as a THP. As the folio > + * If the folio is pmd-mappable, it is accounted as a THP. As the folio > * is new, it's assumed to be mapped exclusively by a single process. > */ > void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma, > unsigned long address) > { > - int nr; > + int nr = folio_nr_pages(folio); > > - VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma); > + VM_BUG_ON_VMA(address < vma->vm_start || > + address + (nr << PAGE_SHIFT) > vma->vm_end, vma); > __folio_set_swapbacked(folio); > + __folio_set_anon(folio, vma, address, true); Likely the changed order doesn't matter. LGTM Reviewed-by: David Hildenbrand <david@redhat.com>
David Hildenbrand <david@redhat.com> writes: > On 24.11.23 16:53, Matthew Wilcox wrote: >> On Fri, Nov 24, 2023 at 04:25:38PM +0100, David Hildenbrand wrote: >>> On 24.11.23 16:13, Matthew Wilcox wrote: >>>> On Fri, Nov 24, 2023 at 09:56:37AM +0000, Ryan Roberts wrote: >>>>> On 23/11/2023 15:59, Matthew Wilcox wrote: >>>>>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: >>>>>>> This is v7 of a series to implement small-sized THP for anonymous memory >>>>>>> (previously called "large anonymous folios"). The objective of this is to >>>>>> >>>>>> I'm still against small-sized THP. We've now got people asking whether >>>>>> the THP counters should be updated when dealing with large folios that >>>>>> are smaller than PMD sized. It's sowing confusion, and we should go >>>>>> back to large anon folios as a name. >>>>> >>>>> I suspect I'm labouring the point here, but I'd like to drill into exactly what >>>>> you are objecting to. Is it: >>>>> >>>>> A) Using the name "small-sized THP" (which is currently only used in the commit >>>>> logs and a couple of times in the documentation). >>>> >>>> Yes, this is what I'm objecting to. >>> >>> I'll just repeat that "large anon folio" is misleading, because >>> * we already have "large anon folios" in hugetlb >> We do? Where? > > MAP_PRIVATE of hugetlb. hugepage_add_anon_rmap() instantiates them. > > Hugetlb is likely one of the oldest user of compund pages aka large folios. I don't like "large anon folios" because it seems to confuse colleagues when explaining that large anon folios are actually smaller than the existing Hugetlb/THP size. I suspect this is because they already assume large folios are used for THP. I guess this wouldn't be an issue if everyone assumed THP was implemented with huge folios, but that doesn't seem to be the case for me at least. Likely because the default THP size is often 2MB, which is hardly huge.
>> >>> * we already have PMD-sized "large anon folios" in THP >> Right, those are already accounted as THP, and that's what users >> expect. >> If we're allocating 1024 x 64kB chunks of memory, the user won't be able >> to distinguish that from 32 x 2MB chunks of memory, and yet the >> performance profile for some applications will be very different. > > Very right, and because there will be a difference between 1024 x > 64kB, 2048 x 32 kB and so forth, we need new memory stats either way. > > Ryan had some ideas on that, but currently, that's considered future > work, just like it likely is for the pagecache as well and needs much > more thoughts. > > Initially, the admin will have to enable all that for anon either > way. It all boils down to one memory statistic for anon memory > (AnonHugePages) that's messed-up already. > >> >>> But inn the end, I don't care how we will call this in a commit message. >>> >>> Just sticking to what we have right now makes most sense to me. >>> >>> I know, as the creator of the term "folio" you have to object :P Sorry ;) >> I don't care if it's called something to do with folios or not. I > > Good! > >> am objecting to the use of the term "small THP" on the grounds of >> confusion and linguistic nonsense. > > Maybe that's the reason why FreeBSD calls them "medium-sized > superpages", because "Medium-sized" seems to be more appropriate to > express something "in between". Transparent Medium Pages? > So far I thought the reason was because they focused on 64k only. > > Never trust a German guy on naming suggestions. John has so far been > my naming expert, so I'm hoping he can help. Likewise :-) > "Sub-pmd-sized THP" is just mouthful. But then, again, this is would > just be a temporary name, and in the future THP will just naturally > come in multiple sizes (and others here seem to agree on that). > > > But just to repeat: I don't think there is need to come up with new > terminology and that there will be mass-confusion. 
So far I've not > heard a compelling argument besides "one memory counter could confuse > an admin that explicitly enables that new behavior.". > > Side note: I'm, happy that we've reached a stage where we're > nitpicking on names :)
On 27/11/2023 08:20, Alistair Popple wrote: > > David Hildenbrand <david@redhat.com> writes: > >> On 24.11.23 16:53, Matthew Wilcox wrote: >>> On Fri, Nov 24, 2023 at 04:25:38PM +0100, David Hildenbrand wrote: >>>> On 24.11.23 16:13, Matthew Wilcox wrote: >>>>> On Fri, Nov 24, 2023 at 09:56:37AM +0000, Ryan Roberts wrote: >>>>>> On 23/11/2023 15:59, Matthew Wilcox wrote: >>>>>>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: >>>>>>>> This is v7 of a series to implement small-sized THP for anonymous memory >>>>>>>> (previously called "large anonymous folios"). The objective of this is to >>>>>>> >>>>>>> I'm still against small-sized THP. We've now got people asking whether >>>>>>> the THP counters should be updated when dealing with large folios that >>>>>>> are smaller than PMD sized. It's sowing confusion, and we should go >>>>>>> back to large anon folios as a name. >>>>>> >>>>>> I suspect I'm labouring the point here, but I'd like to drill into exactly what >>>>>> you are objecting to. Is it: >>>>>> >>>>>> A) Using the name "small-sized THP" (which is currently only used in the commit >>>>>> logs and a couple of times in the documentation). >>>>> >>>>> Yes, this is what I'm objecting to. >>>> >>>> I'll just repeat that "large anon folio" is misleading, because >>>> * we already have "large anon folios" in hugetlb >>> We do? Where? >> >> MAP_PRIVATE of hugetlb. hugepage_add_anon_rmap() instantiates them. >> >> Hugetlb is likely one of the oldest user of compund pages aka large folios. > > I don't like "large anon folios" because it seems to confuse collegaues > when explaining that large anon folios are actually smaller than the > existing Hugetlb/THP size. I suspect this is because they already assume > large folios are used for THP. I guess this wouldn't be an issue if > everyone assumed THP was implemented with huge folios, but that doesn't > seem to be the case for me at least. 
Likely because the default THP size > is often 2MB, which is hardly huge. > >>> >>>> * we already have PMD-sized "large anon folios" in THP >>> Right, those are already accounted as THP, and that's what users >>> expect. >>> If we're allocating 1024 x 64kB chunks of memory, the user won't be able >>> to distinguish that from 32 x 2MB chunks of memory, and yet the >>> performance profile for some applications will be very different. >> >> Very right, and because there will be a difference between 1024 x >> 64kB, 2048 x 32 kB and so forth, we need new memory stats either way. >> >> Ryan had some ideas on that, but currently, that's considered future >> work, just like it likely is for the pagecache as well and needs much >> more thoughts. >> >> Initially, the admin will have to enable all that for anon either >> way. It all boils down to one memory statistic for anon memory >> (AnonHugePages) that's messed-up already. >> >>> >>>> But inn the end, I don't care how we will call this in a commit message. >>>> >>>> Just sticking to what we have right now makes most sense to me. >>>> >>>> I know, as the creator of the term "folio" you have to object :P Sorry ;) >>> I don't care if it's called something to do with folios or not. I >> >> Good! >> >>> am objecting to the use of the term "small THP" on the grounds of >>> confusion and linguistic nonsense. >> >> Maybe that's the reason why FreeBSD calls them "medium-sized >> superpages", because "Medium-sized" seems to be more appropriate to >> express something "in between". > > Transparent Medium Pages? I don't think this is future proof; if we are going to invent a new term, it needs to be independent of size to include all sizes, including PMD-size and, perhaps in future, bigger-than-PMD-size. I think generalizing the meaning of "huge" in THP to mean "bigger than the base page" is the best way to do this. Then as David says, over time people will qualify it with a specific size when appropriate.
> >> So far I thought the reason was because they focused on 64k only. >> >> Never trust a German guy on naming suggestions. John has so far been >> my naming expert, so I'm hoping he can help. > > Likewise :-) > >> "Sub-pmd-sized THP" is just mouthful. But then, again, this is would >> just be a temporary name, and in the future THP will just naturally >> come in multiple sizes (and others here seem to agree on that). I actually don't mind "sub-pmd-sized THP" given the few locations it's actually going to live. >> >> >> But just to repeat: I don't think there is need to come up with new >> terminology and that there will be mass-confusion. So far I've not >> heard a compelling argument besides "one memory counter could confuse >> an admin that explicitly enables that new behavior.". >> >> Side note: I'm, happy that we've reached a stage where we're >> nitpicking on names :) > Agreed. We are bikeshedding here. But if we really can't swallow "small-sized THP" then perhaps the most efficient way to move this forward is to review the documentation (where "small-sized THP" appears twice in order to differentiate from PMD-sized THP) - it's in patch 3. Perhaps it will be easier to come up with a good description in the context of that prose? Then once we have that, hopefully a term will fall out that I'll update the commit logs with.
On 11/27/23 02:31, Ryan Roberts wrote: > On 27/11/2023 08:20, Alistair Popple wrote: >> David Hildenbrand <david@redhat.com> writes: >>> On 24.11.23 16:53, Matthew Wilcox wrote: >>>> On Fri, Nov 24, 2023 at 04:25:38PM +0100, David Hildenbrand wrote: >>>>> On 24.11.23 16:13, Matthew Wilcox wrote: >>>>>> On Fri, Nov 24, 2023 at 09:56:37AM +0000, Ryan Roberts wrote: >>>>>>> On 23/11/2023 15:59, Matthew Wilcox wrote: >>>>>>>> On Wed, Nov 22, 2023 at 04:29:40PM +0000, Ryan Roberts wrote: ... >>> Maybe that's the reason why FreeBSD calls them "medium-sized >>> superpages", because "Medium-sized" seems to be more appropriate to >>> express something "in between". >> >> Transparent Medium Pages? I enjoyed this suggestion, because the resulting acronym is TMP. Which *might* occasionally lead to confusion. haha :) > > I don't think this is future proof; If we are going to invent a new term, it > needs to be indpendent of size to include all sizes including PMD-size and > perhaps in future, bigger-than-PMD-size. I think generalizing the meaning of > "huge" in THP to mean "bigger than the base page" is the best way to do this. > Then as David says, over time people will qualify it with a specific size when > appropriate. > >> >>> So far I thought the reason was because they focused on 64k only. >>> >>> Never trust a German guy on naming suggestions. John has so far been >>> my naming expert, so I'm hoping he can help. >> >> Likewise :-) >> I appreciate the call-out, although my latest suggestion seems to have gotten buried in the avalanche of discussions. I'm going to revive it and try again, though. >>> "Sub-pmd-sized THP" is just mouthful. But then, again, this is would >>> just be a temporary name, and in the future THP will just naturally >>> come in multiple sizes (and others here seem to agree on that). > > I actually don't mind "sub-pmd-sized THP" given the few locations its actually > going to live. 
> >>> >>> >>> But just to repeat: I don't think there is need to come up with new >>> terminology and that there will be mass-confusion. So far I've not >>> heard a compelling argument besides "one memory counter could confuse >>> an admin that explicitly enables that new behavior.". >>> >>> Side note: I'm, happy that we've reached a stage where we're >>> nitpicking on names :) >> > > Agreed. We are bikeshedding here. But if we really can't swallow "small-sized > THP" then perhaps the most efficient way to move this forwards is to review the > documentation (where "small-sized THP" appears twice in order to differentiate > from PMD-sized THP) - its in patch 3. Perhaps it will be easier to come up with > a good description in the context of those prose? Then once we have that, > hopefully a term will fall out that I'll update the commit logs with. > I will see you over in patch 3, then. I've already looked at it and am going to suggest a long and a short name. The long name is for use in comments and documentation, and the short name is for variable fragments: Long name: "pte-mapped THPs" Short names: pte_thp, or pte-thp thanks,
On Fri, Nov 24, 2023 at 06:34:10PM +0100, David Hildenbrand wrote: > On 24.11.23 16:53, Matthew Wilcox wrote: > > > * we already have PMD-sized "large anon folios" in THP > > > > Right, those are already accounted as THP, and that's what users expect. > > If we're allocating 1024 x 64kB chunks of memory, the user won't be able > > to distinguish that from 32 x 2MB chunks of memory, and yet the > > performance profile for some applications will be very different. > > Very right, and because there will be a difference between 1024 x 64kB, 2048 > x 32 kB and so forth, we need new memory stats either way. > > Ryan had some ideas on that, but currently, that's considered future work, > just like it likely is for the pagecache as well and needs much more > thoughts. > > Initially, the admin will have to enable all that for anon either way. It > all boils down to one memory statistic for anon memory (AnonHugePages) > that's messed-up already. So we have FileHugePages which is very carefully only PMD-sized large folios. If people start making AnonHugePages count non-PMD-sized large folios, that's going to be inconsistent. > > am objecting to the use of the term "small THP" on the grounds of > > confusion and linguistic nonsense. > > Maybe that's the reason why FreeBSD calls them "medium-sized superpages", > because "Medium-sized" seems to be more appropriate to express something "in > between". I don't mind "medium" in the name. > So far I thought the reason was because they focused on 64k only. > > Never trust a German guy on naming suggestions. John has so far been my > naming expert, so I'm hoping he can help. > > "Sub-pmd-sized THP" is just mouthful. But then, again, this is would just be > a temporary name, and in the future THP will just naturally come in multiple > sizes (and others here seem to agree on that). I do not. If we'd come to this fifteen years ago, maybe, but people now have an understanding that THPs are necessarily PMD sized.
On Mon, Nov 27, 2023 at 07:20:26PM +1100, Alistair Popple wrote: > I don't like "large anon folios" because it seems to confuse collegaues > when explaining that large anon folios are actually smaller than the > existing Hugetlb/THP size. I suspect this is because they already assume > large folios are used for THP. I guess this wouldn't be an issue if > everyone assumed THP was implemented with huge folios, but that doesn't > seem to be the case for me at least. Likely because the default THP size > is often 2MB, which is hardly huge. I find your colleagues confusing. To me, "huge" seems bigger than "large". I don't seem to be the only one: https://www.quora.com/What-is-the-difference-among-big-large-huge-enormous-and-giant (for example) Perhaps the problem is that people have turned "THP" into a thing in its own right. So they feel comfortable talking about small THP, medium THP and large THP and ignoring that there's already a "huge" embedded in THP. Now if you'll excuse me, I have to put my PIN number into the ATM machine.
On 28.11.23 05:05, Matthew Wilcox wrote:
> On Fri, Nov 24, 2023 at 06:34:10PM +0100, David Hildenbrand wrote:
>> On 24.11.23 16:53, Matthew Wilcox wrote:
>>>> * we already have PMD-sized "large anon folios" in THP
>>>
>>> Right, those are already accounted as THP, and that's what users expect.
>>> If we're allocating 1024 x 64kB chunks of memory, the user won't be able
>>> to distinguish that from 32 x 2MB chunks of memory, and yet the
>>> performance profile for some applications will be very different.
>>
>> Very right, and because there will be a difference between 1024 x 64kB,
>> 2048 x 32kB and so forth, we need new memory stats either way.
>>
>> Ryan had some ideas on that, but currently that's considered future
>> work, just like it likely is for the pagecache as well, and needs much
>> more thought.
>>
>> Initially, the admin will have to enable all that for anon either way.
>> It all boils down to one memory statistic for anon memory
>> (AnonHugePages) that's messed-up already.
>
> So we have FileHugePages which is very carefully only PMD-sized large
> folios. If people start making AnonHugePages count non-PMD-sized
> large folios, that's going to be inconsistent.

Right, and that's why we decided to leave these counters alone for now
and rather document that they only apply to PMD-sized THP for historical
reasons.

We'll want new stats either way. Hopefully we'll make it more
future-proof this time.

>>> am objecting to the use of the term "small THP" on the grounds of
>>> confusion and linguistic nonsense.
>>
>> Maybe that's the reason why FreeBSD calls them "medium-sized
>> superpages", because "medium-sized" seems more appropriate to express
>> something "in between".
>
> I don't mind "medium" in the name.
>
>> So far I thought the reason was because they focused on 64k only.
>>
>> Never trust a German guy on naming suggestions. John has so far been my
>> naming expert, so I'm hoping he can help.
>>
>> "Sub-pmd-sized THP" is just a mouthful. But then again, this would just
>> be a temporary name, and in the future THP will just naturally come in
>> multiple sizes (and others here seem to agree on that).
>
> I do not. If we'd come to this fifteen years ago, maybe, but people now
> have an understanding that THPs are necessarily PMD sized.

Well, I still find people being confused about THP vs. hugetlb, so likely
some confusion is unavoidable. :)

In your other mail you write "Perhaps the problem is that people have
turned "THP" into a thing in its own right."

I think that's exactly the case, and I see how that can be confusing when
spelling out THP and reading "small-huge: does it cancel out?".
>>
>> Agreed. We are bikeshedding here. But if we really can't swallow
>> "small-sized THP" then perhaps the most efficient way to move this
>> forwards is to review the documentation (where "small-sized THP"
>> appears twice in order to differentiate from PMD-sized THP) - it's in
>> patch 3. Perhaps it will be easier to come up with a good description
>> in the context of that prose? Then once we have that, hopefully a term
>> will fall out that I'll update the commit logs with.
>>
>
> I will see you over in patch 3, then. I've already looked at it and am
> going to suggest a long and a short name. The long name is for use in
> comments and documentation, and the short name is for variable fragments:
>
> Long name: "pte-mapped THPs"
> Short names: pte_thp, or pte-thp

The issue is that any THP can be pte-mapped, even a PMD-sized THP.
However, the "natural" way to map a PMD-sized THP is using a PMD.
On 28/11/2023 08:48, David Hildenbrand wrote:
>
>>>
>>> Agreed. We are bikeshedding here. But if we really can't swallow
>>> "small-sized THP" then perhaps the most efficient way to move this
>>> forwards is to review the documentation (where "small-sized THP"
>>> appears twice in order to differentiate from PMD-sized THP) - it's in
>>> patch 3. Perhaps it will be easier to come up with a good description
>>> in the context of that prose? Then once we have that, hopefully a term
>>> will fall out that I'll update the commit logs with.
>>>
>>
>> I will see you over in patch 3, then. I've already looked at it and am
>> going to suggest a long and a short name. The long name is for use in
>> comments and documentation, and the short name is for variable fragments:
>>
>> Long name: "pte-mapped THPs"
>> Short names: pte_thp, or pte-thp
>
> The issue is that any THP can be pte-mapped, even a PMD-sized THP.
> However, the "natural" way to map a PMD-sized THP is using a PMD.
>

How about we just stop trying to come up with a term for the "small-sized
THP" vs "PMD-sized THP" and instead invent a name that covers ALL THP:

"multi-size THP" vs "PMD-sized THP".

Then in the docs we can talk about how multi-size THP introduces the
ability to allocate memory in blocks that are bigger than a base page but
smaller than traditional PMD-size, in increments of a power-of-2 number
of pages.
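[For readers following along: concretely, with a 4 kB base page and a 2 MB
PMD, "a power-of-2 number of pages" gives the candidate sizes enumerated by
this small sketch. The constants are assumptions for a typical x86-64/arm64
4k config, and the kernel may only support a subset of these orders.]

```python
# Sketch: the folio sizes "multi-size THP" could cover on a config with
# 4 kB base pages and 2 MB PMDs: power-of-2 multiples of the base page
# size, from 2 pages up to the PMD-mappable size.
PAGE_SIZE_KB = 4      # assumed base page size
PMD_SIZE_KB = 2048    # assumed PMD-mappable size (2 MB)

def mthp_candidate_sizes_kb(page_kb=PAGE_SIZE_KB, pmd_kb=PMD_SIZE_KB):
    """Power-of-2 sizes larger than a base page, up to PMD size."""
    sizes = []
    size = page_kb * 2
    while size <= pmd_kb:
        sizes.append(size)
        size *= 2
    return sizes

print(mthp_candidate_sizes_kb())
# -> [8, 16, 32, 64, 128, 256, 512, 1024, 2048]
```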
On 28.11.23 13:15, Ryan Roberts wrote:
> On 28/11/2023 08:48, David Hildenbrand wrote:
>>
>>>>
>>>> Agreed. We are bikeshedding here. But if we really can't swallow
>>>> "small-sized THP" then perhaps the most efficient way to move this
>>>> forwards is to review the documentation (where "small-sized THP"
>>>> appears twice in order to differentiate from PMD-sized THP) - it's in
>>>> patch 3. Perhaps it will be easier to come up with a good description
>>>> in the context of that prose? Then once we have that, hopefully a
>>>> term will fall out that I'll update the commit logs with.
>>>>
>>>
>>> I will see you over in patch 3, then. I've already looked at it and am
>>> going to suggest a long and a short name. The long name is for use in
>>> comments and documentation, and the short name is for variable
>>> fragments:
>>>
>>> Long name: "pte-mapped THPs"
>>> Short names: pte_thp, or pte-thp
>>
>> The issue is that any THP can be pte-mapped, even a PMD-sized THP.
>> However, the "natural" way to map a PMD-sized THP is using a PMD.
>>
>
> How about we just stop trying to come up with a term for the
> "small-sized THP" vs "PMD-sized THP" and instead invent a name that
> covers ALL THP:
>
> "multi-size THP" vs "PMD-sized THP".
>
> Then in the docs we can talk about how multi-size THP introduces the
> ability to allocate memory in blocks that are bigger than a base page
> but smaller than traditional PMD-size, in increments of a power-of-2
> number of pages.

So you're thinking of something like "multi-size THP" as a feature name,
and stating that for now we limit it to <= PMD size. mTHP would be the
short name?

For the stats, we'd document that "AnonHugePages" and friends only count
traditional PMD-sized THP for historical reasons -- and that
AnonHugePages should have been called AnonHugePmdMapped (which we could
still add as an alias and document why AnonHugePages is weird).

Regarding new stats, maybe an interface that indicates the actual sizes
would be best. As discussed, extending the existing single-large-file
statistics might not be possible and we'd have to come up with a new
interface that maybe completely lacks "AnonHugePages" and directly goes
for the individual sizes.
On 28/11/2023 14:09, David Hildenbrand wrote:
> On 28.11.23 13:15, Ryan Roberts wrote:
>> On 28/11/2023 08:48, David Hildenbrand wrote:
>>>
>>>>>
>>>>> Agreed. We are bikeshedding here. But if we really can't swallow
>>>>> "small-sized THP" then perhaps the most efficient way to move this
>>>>> forwards is to review the documentation (where "small-sized THP"
>>>>> appears twice in order to differentiate from PMD-sized THP) - it's
>>>>> in patch 3. Perhaps it will be easier to come up with a good
>>>>> description in the context of that prose? Then once we have that,
>>>>> hopefully a term will fall out that I'll update the commit logs with.
>>>>>
>>>>
>>>> I will see you over in patch 3, then. I've already looked at it and
>>>> am going to suggest a long and a short name. The long name is for use
>>>> in comments and documentation, and the short name is for variable
>>>> fragments:
>>>>
>>>> Long name: "pte-mapped THPs"
>>>> Short names: pte_thp, or pte-thp
>>>
>>> The issue is that any THP can be pte-mapped, even a PMD-sized THP.
>>> However, the "natural" way to map a PMD-sized THP is using a PMD.
>>>
>>
>> How about we just stop trying to come up with a term for the
>> "small-sized THP" vs "PMD-sized THP" and instead invent a name that
>> covers ALL THP:
>>
>> "multi-size THP" vs "PMD-sized THP".
>>
>> Then in the docs we can talk about how multi-size THP introduces the
>> ability to allocate memory in blocks that are bigger than a base page
>> but smaller than traditional PMD-size, in increments of a power-of-2
>> number of pages.
>
> So you're thinking of something like "multi-size THP" as a feature name,
> and stating that for now we limit it to <= PMD size. mTHP would be the
> short name?

Sure.

>
> For the stats, we'd document that "AnonHugePages" and friends only count
> traditional PMD-sized THP for historical reasons -- and that
> AnonHugePages should have been called AnonHugePmdMapped (which we could
> still add as an alias and document why AnonHugePages is weird).

Sounds good to me.

>
> Regarding new stats, maybe an interface that indicates the actual sizes
> would be best. As discussed, extending the existing single-large-file
> statistics might not be possible and we'd have to come up with a new
> interface that maybe completely lacks "AnonHugePages" and directly goes
> for the individual sizes.

Yes, but I think we are agreed this is future work.
>> Regarding new stats, maybe an interface that indicates the actual sizes
>> would be best. As discussed, extending the existing single-large-file
>> statistics might not be possible and we'd have to come up with a new
>> interface, that maybe completely lacks "AnonHugePages" and directly
>> goes for the individual sizes.
>
> Yes, but I think we are agreed this is future work.
>

Yes, indeed, just spelling it out.
On 11/28/23 07:34, Ryan Roberts wrote:
> On 28/11/2023 14:09, David Hildenbrand wrote:
>> On 28.11.23 13:15, Ryan Roberts wrote:
>>> On 28/11/2023 08:48, David Hildenbrand wrote:
>>> How about we just stop trying to come up with a term for the
>>> "small-sized THP" vs "PMD-sized THP" and instead invent a name that
>>> covers ALL THP:
>>>
>>> "multi-size THP" vs "PMD-sized THP".
>>>
>>> Then in the docs we can talk about how multi-size THP introduces the
>>> ability to allocate memory in blocks that are bigger than a base page
>>> but smaller than traditional PMD-size, in increments of a power-of-2
>>> number of pages.
>>
>> So you're thinking of something like "multi-size THP" as a feature
>> name, and stating that for now we limit it to <= PMD size. mTHP would
>> be the short name?
>
> Sure.

Sounds workable to me, too.

>
>>
>> For the stats, we'd document that "AnonHugePages" and friends only
>> count traditional PMD-sized THP for historical reasons -- and that
>> AnonHugePages should have been called AnonHugePmdMapped (which we could
>> still add as an alias and document why AnonHugePages is weird).
>
> Sounds good to me.

OK.

>
>>
>> Regarding new stats, maybe an interface that indicates the actual sizes
>> would be best. As discussed, extending the existing single-large-file
>> statistics might not be possible and we'd have to come up with a new
>> interface, that maybe completely lacks "AnonHugePages" and directly
>> goes for the individual sizes.
>
> Yes, but I think we are agreed this is future work.
>

We do want to have at least some way to verify that mTHP is active from
day 0, though.

thanks,
On 28/11/2023 18:39, John Hubbard wrote:
> On 11/28/23 07:34, Ryan Roberts wrote:
>> On 28/11/2023 14:09, David Hildenbrand wrote:
>>> On 28.11.23 13:15, Ryan Roberts wrote:
>>>> On 28/11/2023 08:48, David Hildenbrand wrote:
>>>> How about we just stop trying to come up with a term for the
>>>> "small-sized THP" vs "PMD-sized THP" and instead invent a name that
>>>> covers ALL THP:
>>>>
>>>> "multi-size THP" vs "PMD-sized THP".
>>>>
>>>> Then in the docs we can talk about how multi-size THP introduces the
>>>> ability to allocate memory in blocks that are bigger than a base page
>>>> but smaller than traditional PMD-size, in increments of a power-of-2
>>>> number of pages.
>>>
>>> So you're thinking of something like "multi-size THP" as a feature
>>> name, and stating that for now we limit it to <= PMD size. mTHP would
>>> be the short name?
>>
>> Sure.
>
> Sounds workable to me, too.
>
>>
>>>
>>> For the stats, we'd document that "AnonHugePages" and friends only
>>> count traditional PMD-sized THP for historical reasons -- and that
>>> AnonHugePages should have been called AnonHugePmdMapped (which we
>>> could still add as an alias and document why AnonHugePages is weird).
>>
>> Sounds good to me.
>
> OK.
>
>>
>>>
>>> Regarding new stats, maybe an interface that indicates the actual
>>> sizes would be best. As discussed, extending the existing
>>> single-large-file statistics might not be possible and we'd have to
>>> come up with a new interface, that maybe completely lacks
>>> "AnonHugePages" and directly goes for the individual sizes.
>>
>> Yes, but I think we are agreed this is future work.
>>
>
> We do want to have at least some way to verify that mTHP is active from
> day 0, though.

Could you clarify what you mean by "active"?

Current plan is that there will be a per-size
transparent_hugepage/hugepages-<size>kB/enabled sysfs file that can be
queried to see if the size is enabled (available for the kernel to use).

But for this initial submission, we previously agreed (well, at least
David and I) that not having a full set of stats is not a problem - they
can come later. So the only way to verify that the kernel is allocating
and mapping a particular THP size is to parse /proc/<pid>/pagemap and
look at the PFNs for now. Is that sufficient?

>
> thanks,
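[For readers following along: a minimal sketch of the pagemap check
described above. It decodes the 64-bit /proc/<pid>/pagemap entries (bits
0-54 hold the PFN, bit 63 the present flag) and tests whether a virtual
range is backed by consecutive PFNs, as a single large folio would be.
Helper names are hypothetical; reading non-zero PFNs requires
CAP_SYS_ADMIN, and PAGE_SIZE is assumed to be 4 kB.]

```python
# Decode pagemap entries and check PFN contiguity for a virtual range.
import struct

PAGE_SIZE = 4096              # assumed base page size
PM_PFN_MASK = (1 << 55) - 1   # bits 0-54: page frame number
PM_PRESENT = 1 << 63          # bit 63: page present in RAM

def decode_entry(raw):
    """Decode one 8-byte little-endian pagemap entry to (present, pfn)."""
    (entry,) = struct.unpack("<Q", raw)
    return bool(entry & PM_PRESENT), entry & PM_PFN_MASK

def range_is_contiguous(pid, vaddr, npages):
    """True if npages starting at vaddr are present and map to
    consecutive PFNs (suggesting one large folio backs the range)."""
    pfns = []
    with open(f"/proc/{pid}/pagemap", "rb") as f:
        f.seek((vaddr // PAGE_SIZE) * 8)   # one 8-byte entry per page
        for _ in range(npages):
            present, pfn = decode_entry(f.read(8))
            if not present or pfn == 0:    # pfn==0: no CAP_SYS_ADMIN
                return False
            pfns.append(pfn)
    return all(b == a + 1 for a, b in zip(pfns, pfns[1:]))
```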
On 11/29/23 01:59, Ryan Roberts wrote:
...
>>>> Regarding new stats, maybe an interface that indicates the actual
>>>> sizes would be best. As discussed, extending the existing
>>>> single-large-file statistics might not be possible and we'd have to
>>>> come up with a new interface, that maybe completely lacks
>>>> "AnonHugePages" and directly goes for the individual sizes.
>>>
>>> Yes, but I think we are agreed this is future work.
>>>
>>
>> We do want to have at least some way to verify that mTHP is active from
>> day 0, though.
>
> Could you clarify what you mean by "active"?

I was thinking of the *pte* counters that we had in v6, in /proc/vmstat
and /proc/meminfo. I missed those; they were helpful in confirming that
the test was actually using the new feature. It's easy to misconfigure
these tests because there are so many settings (in addition to kernel
settings), and people were having some difficulty.

>
> Current plan is that there will be a per-size
> transparent_hugepage/hugepages-<size>kB/enabled sysfs file that can be
> queried to see if the size is enabled (available for the kernel to use).
>
> But for this initial submission, we previously agreed (well, at least
> David and I) that not having a full set of stats is not a problem - they
> can come later. So the only way to verify that the kernel is allocating
> and mapping a particular THP size is to parse /proc/<pid>/pagemap and
> look at the PFNs for now. Is that sufficient?
>

ugh, that's a little rough for just a command line sysadmin or QA person,
isn't it? Still, I expect we can survive without it for an initial
release.

thanks,
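[For readers following along: the per-size sysfs interface discussed in the
thread could be queried along these lines. This is a sketch, not code from
the patch set: the directory layout follows the
transparent_hugepage/hugepages-<size>kB/enabled plan described above, and
the bracketed-value format is assumed to follow the existing
transparent_hugepage/enabled convention, e.g. "always madvise [never]".]

```python
# Query which mTHP sizes are enabled via the proposed per-size sysfs files.
import glob
import os
import re

THP_DIR = "/sys/kernel/mm/transparent_hugepage"

def active_policy(text):
    """Return the bracketed (selected) policy from an 'enabled' file,
    e.g. 'always madvise [never]' -> 'never'; None if no match."""
    m = re.search(r"\[(\w+)\]", text)
    return m.group(1) if m else None

def enabled_mthp_sizes_kb(base=THP_DIR):
    """List the sizes (in kB) whose selected policy is not 'never',
    i.e. sizes the kernel is allowed to allocate."""
    sizes = []
    for path in glob.glob(os.path.join(base, "hugepages-*kB", "enabled")):
        m = re.search(r"hugepages-(\d+)kB", path)
        with open(path) as f:
            policy = active_policy(f.read())
        if m and policy not in (None, "never"):
            sizes.append(int(m.group(1)))
    return sorted(sizes)
```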