Message ID | 20231204102027.57185-1-ryan.roberts@arm.com |
---|---|
Series | Multi-size THP for anonymous memory |
Message
Ryan Roberts
Dec. 4, 2023, 10:20 a.m. UTC
Hi All,

A new week, a new version, a new name... This is v8 of a series to implement
multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP"
and "large anonymous folios"). Matthew objected to "small huge" so hopefully
this fares better.

The objective of this is to improve performance by allocating larger chunks of
memory during anonymous page faults:

1) Since SW (the kernel) is dealing with larger chunks of memory than base
   pages, there are efficiency savings to be had; fewer page faults, batched PTE
   and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel
   overhead. This should benefit all architectures.
2) Since we are now mapping physically contiguous chunks of memory, we can take
   advantage of HW TLB compression techniques. A reduction in TLB pressure
   speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce
   TLB entries; "the contiguous bit" (architectural) and HPA (uarch).

This version changes the name and tidies up some of the kernel code and test
code, based on feedback against v7 (see change log for details).

By default, the existing behaviour (and performance) is maintained. The user
must explicitly enable multi-size THP to see the performance benefit. This is
done via a new sysfs interface (as recommended by David Hildenbrand - thanks to
David for the suggestion)! This interface is inspired by the existing
per-hugepage-size sysfs interface used by hugetlb, provides full backwards
compatibility with the existing PMD-size THP interface, and provides a base for
future extensibility. See [8] for detailed discussion of the interface.

This series is based on mm-unstable (715b67adf4c8).


Prerequisites
=============

Some work items identified as being prerequisites are listed on page 3 at [9].
The summary is:

| item                          | status                  |
|:------------------------------|:------------------------|
| mlock                         | In mainline (v6.7)      |
| madvise                       | In mainline (v6.6)      |
| compaction                    | v1 posted [10]          |
| numa balancing                | Investigated: see below |
| user-triggered page migration | In mainline (v6.7)      |
| khugepaged collapse           | In mainline (NOP)       |

On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters,
John Hubbard has investigated this and concluded that it is A) not clear at the
moment what a better policy might be for PTE-mapped THP and B) questions whether
this should really be considered a prerequisite given no regression is caused
for the default "multi-size THP disabled" case, and there is no correctness
issue when it is enabled - it's just a potential for non-optimal performance.

If there are no disagreements about removing numa balancing from the list (none
were raised when I first posted this comment against v7), then that just leaves
compaction, which is in review on list at the moment.

I really would like to get this series (and its remaining compaction
prerequisite) in for v6.8. I accept that it may be a bit optimistic at this
point, but let's see where we get to with review?


Testing
=======

The series includes patches for mm selftests to enlighten the cow and khugepaged
tests to explicitly test with multi-size THP, in the same way that PMD-sized
THP is tested. The new tests all pass, and no regressions are observed in the mm
selftest suite. I've also run my usual kernel compilation and JavaScript
benchmarks without any issues.

Refer to my performance numbers posted with v6 [6]. (These are for multi-size
THP only - they do not include the arm64 contpte follow-on series).

John Hubbard at Nvidia has indicated dramatic 10x performance improvements for
some workloads at [11]. (Observed using v6 of this series as well as the arm64
contpte series).

Kefeng Wang at Huawei has also indicated he sees improvements at [12] although
there are some latency regressions also.


Changes since v7 [7]
====================

- Renamed "small-sized THP" -> "multi-size THP" in commit logs
- Added various Reviewed-by/Tested-by tags (Barry, David, Alistair)
- Patch 3:
    - Fine-tuned transhuge documentation for multi-size THP (JohnH)
    - Converted hugepage_global_enabled() and hugepage_global_always() macros
      to static inline functions (JohnH)
    - Renamed hugepage_vma_check() to thp_vma_allowable_orders() (JohnH)
    - Renamed transhuge_vma_suitable() to thp_vma_suitable_orders() (JohnH)
    - Renamed "global" enabled sysfs file option to "inherit" (JohnH)
- Patch 9:
    - cow selftest: Renamed param size -> thpsize (David)
    - cow selftest: Changed test fail to assert() (David)
    - cow selftest: Log PMD size separately from all the supported THP sizes
      (David)
- Patch 10:
    - cow selftest: No longer special case pmdsize; keep all THP sizes in
      thpsizes[]


Changes since v6 [6]
====================

- Refactored vmf_pte_range_changed() to remove uffd special-case (suggested by
  JohnH)
- Dropped accounting patch (#3 in v6) (suggested by DavidH)
    - Continue to account *PMD-sized* THP only for now
    - Can add more counters in future if needed
    - Page cache large folios haven't needed any new counters yet
- Pivot to sysfs ABI proposed by DavidH
    - per-size directories in a similar shape to that used by hugetlb
- Dropped "recommend" keyword patch (#6 in v6) (suggested by DavidH, Yu Zhao)
    - For now, users need to understand implicitly which sizes are beneficial
      to their HW/SW
- Dropped arch_wants_pte_order() patch (#7 in v6)
    - No longer needed due to dropping the "recommend" keyword patch
- Enlightened khugepaged mm selftest to explicitly test with small-size THP
- Scrubbed commit logs to use "small-sized THP" consistently (suggested by
  DavidH)


Changes since v5 [5]
====================

- Added accounting for PTE-mapped THPs (patch 3)
- Added runtime control mechanism via sysfs as extension to THP (patch 4)
- Minor refactoring of alloc_anon_folio() to integrate with runtime controls
- Stripped out hardcoded policy for allocation order; it's now all user space
  controlled (although user space can request "recommend" which will configure
  the HW-preferred order)


Changes since v4 [4]
====================

- Removed "arm64: mm: Override arch_wants_pte_order()" patch; arm64
  now uses the default order-3 size. I have moved this patch over to
  the contpte series.
- Added "mm: Allow deferred splitting of arbitrary large anon folios" back
  into series. I originally removed this at v2 to add to a separate series,
  but that series has transformed significantly and it no longer fits, so
  bringing it back here.
- Reintroduced dependency on set_ptes(); Originally dropped this at v2, but
  set_ptes() is in mm-unstable now.
- Updated policy for when to allocate LAF; only fallback to order-0 if
  MADV_NOHUGEPAGE is present or if THP disabled via prctl; no longer rely on
  sysfs's never/madvise/always knob.
- Fallback to order-0 whenever uffd is armed for the vma, not just when
  uffd-wp is set on the pte.
- alloc_anon_folio() now returns `struct folio *`, where errors are encoded
  with ERR_PTR().

The last 3 changes were proposed by Yu Zhao - thanks!


Changes since v3 [3]
====================

- Renamed feature from FLEXIBLE_THP to LARGE_ANON_FOLIO.
- Removed `flexthp_unhinted_max` boot parameter. Discussion concluded that a
  sysctl is preferable but we will wait until a real workload needs it.
- Fixed uninitialized `addr` on read fault path in do_anonymous_page().
- Added mm selftests for large anon folios in cow test suite.


Changes since v2 [2]
====================

- Dropped commit "Allow deferred splitting of arbitrary large anon folios"
    - Huang, Ying suggested the "batch zap" work (which I dropped from this
      series after v1) is a prerequisite for merging FLEXIBLE_THP, so I've
      moved the deferred split patch to a separate series along with the batch
      zap changes. I plan to submit this series early next week.
- Changed folio order fallback policy
    - We no longer iterate from preferred to 0 looking for acceptable policy
    - Instead we iterate through preferred, PAGE_ALLOC_COSTLY_ORDER and 0 only
- Removed vma parameter from arch_wants_pte_order()
- Added command line parameter `flexthp_unhinted_max`
    - clamps preferred order when vma hasn't explicitly opted-in to THP
- Never allocate large folio for MADV_NOHUGEPAGE vma (or when THP is disabled
  for process or system).
- Simplified implementation and integration with do_anonymous_page()
- Removed dependency on set_ptes()


Changes since v1 [1]
====================

- removed changes to arch-dependent vma_alloc_zeroed_movable_folio()
    - replaced with arch-independent alloc_anon_folio()
        - follows THP allocation approach
- no longer retry with intermediate orders if allocation fails
    - fallback directly to order-0
- remove folio_add_new_anon_rmap_range() patch
    - instead add its new functionality to folio_add_new_anon_rmap()
- remove batch-zap pte mappings optimization patch
    - remove enabler folio_remove_rmap_range() patch too
    - These offer real perf improvement so will submit separately
- simplify Kconfig
    - single FLEXIBLE_THP option, which is independent of arch
        - depends on TRANSPARENT_HUGEPAGE
    - when enabled default to max anon folio size of 64K unless arch
      explicitly overrides
- simplify changes to do_anonymous_page():
    - no more retry loop


[1] https://lore.kernel.org/linux-mm/20230626171430.3167004-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/linux-mm/20230703135330.1865927-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/linux-mm/20230714160407.4142030-1-ryan.roberts@arm.com/
[4] https://lore.kernel.org/linux-mm/20230726095146.2826796-1-ryan.roberts@arm.com/
[5] https://lore.kernel.org/linux-mm/20230810142942.3169679-1-ryan.roberts@arm.com/
[6] https://lore.kernel.org/linux-mm/20230929114421.3761121-1-ryan.roberts@arm.com/
[7] https://lore.kernel.org/linux-mm/20231122162950.3854897-1-ryan.roberts@arm.com/
[8] https://lore.kernel.org/linux-mm/6d89fdc9-ef55-d44e-bf12-fafff318aef8@redhat.com/
[9] https://drive.google.com/file/d/1GnfYFpr7_c1kA41liRUW5YtCb8Cj18Ud/view?usp=sharing&resourcekey=0-U1Mj3-RhLD1JV6EThpyPyA
[10] https://lore.kernel.org/linux-mm/20231113170157.280181-1-zi.yan@sent.com/
[11] https://lore.kernel.org/linux-mm/c507308d-bdd4-5f9e-d4ff-e96e4520be85@nvidia.com/
[12] https://lore.kernel.org/linux-mm/479b3e2b-456d-46c1-9677-38f6c95a0be8@huawei.com/


Thanks,
Ryan

Ryan Roberts (10):
  mm: Allow deferred splitting of arbitrary anon large folios
  mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
  mm: thp: Introduce multi-size THP sysfs interface
  mm: thp: Support allocation of anonymous multi-size THP
  selftests/mm/kugepaged: Restore thp settings at exit
  selftests/mm: Factor out thp settings management
  selftests/mm: Support multi-size THP interface in thp_settings
  selftests/mm/khugepaged: Enlighten for multi-size THP
  selftests/mm/cow: Generalize do_run_with_thp() helper
  selftests/mm/cow: Add tests for anonymous multi-size THP

 Documentation/admin-guide/mm/transhuge.rst |  97 ++++-
 Documentation/filesystems/proc.rst         |   6 +-
 fs/proc/task_mmu.c                         |   3 +-
 include/linux/huge_mm.h                    | 116 ++++--
 mm/huge_memory.c                           | 268 ++++++++++++--
 mm/khugepaged.c                            |  20 +-
 mm/memory.c                                | 114 +++++-
 mm/page_vma_mapped.c                       |   3 +-
 mm/rmap.c                                  |  32 +-
 tools/testing/selftests/mm/Makefile        |   4 +-
 tools/testing/selftests/mm/cow.c           | 185 +++++++---
 tools/testing/selftests/mm/khugepaged.c    | 410 ++++----------------
 tools/testing/selftests/mm/run_vmtests.sh  |   2 +
 tools/testing/selftests/mm/thp_settings.c  | 349 ++++++++++++++++++
 tools/testing/selftests/mm/thp_settings.h  |  80 ++++
 15 files changed, 1177 insertions(+), 512 deletions(-)
 create mode 100644 tools/testing/selftests/mm/thp_settings.c
 create mode 100644 tools/testing/selftests/mm/thp_settings.h

--
2.25.1
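As a usage illustration of the per-size sysfs interface described above (not
part of the patches themselves), the short C sketch below enables one mTHP size
directly. The "hugepage-64kB" directory name is an assumption; the directories
actually exposed, and their exact naming, depend on the base page size and PMD
size of the running kernel.

/*
 * Illustrative sketch only: enable a single multi-size THP size via the
 * per-size sysfs interface added by this series. The directory name
 * "hugepage-64kB" is an assumption; list /sys/kernel/mm/transparent_hugepage/
 * to see which sizes your kernel exposes.
 */
#include <stdio.h>

int main(void)
{
        const char *path =
                "/sys/kernel/mm/transparent_hugepage/hugepage-64kB/enabled";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return 1;
        }
        /*
         * Per the series, the per-size file accepts always/madvise/never plus
         * "inherit" (follow the top-level enabled setting).
         */
        if (fputs("always", f) == EOF || fclose(f) == EOF) {
                perror(path);
                return 1;
        }
        printf("64K mTHP enabled\n");
        return 0;
}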
Comments
On Mon, 4 Dec 2023 10:20:17 +0000 Ryan Roberts <ryan.roberts@arm.com> wrote:

> Hi All,
>
> Prerequisites
> =============
>
> Some work items identified as being prerequisites are listed on page 3 at [9].
> The summary is:
>
> | item                          | status                  |
> |:------------------------------|:------------------------|
> | mlock                         | In mainline (v6.7)      |
> | madvise                       | In mainline (v6.6)      |
> | compaction                    | v1 posted [10]          |
> | numa balancing                | Investigated: see below |
> | user-triggered page migration | In mainline (v6.7)      |
> | khugepaged collapse           | In mainline (NOP)       |

What does "prerequisites" mean here? Won't compile without? Kernel
crashes without? Nice-to-have-after? Please expand on this.

I looked at [9], but access is denied.

> [9] https://drive.google.com/file/d/1GnfYFpr7_c1kA41liRUW5YtCb8Cj18Ud/view?usp=sharing&resourcekey=0-U1Mj3-RhLD1JV6EThpyPyA
On Mon, Dec 4, 2023 at 11:20 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> Hi All,
>
> A new week, a new version, a new name... This is v8 of a series to implement
> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP"
> and "large anonymous folios"). Matthew objected to "small huge" so hopefully
> this fares better.
>
> [...]
>
> I really would like to get this series (and its remaining compaction
> prerequisite) in for v6.8. I accept that it may be a bit optimistic at this
> point, but let's see where we get to with review?

Hi Ryan,

A question, but I don't think it should block this series: do we have any plan
to extend /proc/meminfo, /proc/pid/smaps and /proc/vmstat to present some
information regarding the new multi-size THP? E.g. how many folios of each size
exist in the system, how many multi-size folios are on the LRU, how many large
folios are in each VMA, etc.

In products and labs, we need some health monitors to make sure the system
status is visible and works as expected. Right now, I feel like I am blindly
exploring the system without those statistics.

> [...]

Thanks
Barry
On 12/4/23 02:20, Ryan Roberts wrote:
> Hi All,
>
> A new week, a new version, a new name... This is v8 of a series to implement
> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP"
> and "large anonymous folios"). Matthew objected to "small huge" so hopefully
> this fares better.
>
> [...]
>
> This version changes the name and tidies up some of the kernel code and test
> code, based on feedback against v7 (see change log for details).

Using a couple of Armv8 systems, I've tested this patchset. I applied it
to top of tree (Linux 6.7-rc4), on top of your latest contig pte series
[1].

With those two patchsets applied, the mm selftests look OK--or at least
as OK as they normally do. I compared test runs between THP/mTHP set to
"always", vs "never", to verify that there were no new test failures.
Details: specifically, I set one particular page size (2 MB) to
"inherit", and then toggled /sys/kernel/mm/transparent_hugepage/enabled
between "always" and "never".

I also re-ran my usual compute/AI benchmark, and I'm still seeing the
same 10x performance improvement that I reported for the v6 patchset.

So for this patchset and for [1] as well, please feel free to add:

Tested-by: John Hubbard <jhubbard@nvidia.com>

[1] https://lore.kernel.org/all/20231204105440.61448-1-ryan.roberts@arm.com/

thanks,
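For anyone wanting to reproduce the toggle John describes, here is a minimal
sketch (not from the series itself) that sets one per-size entry to "inherit"
and then flips the top-level control. The "hugepage-2048kB" directory name is
an assumption matching a 4K-base-page kernel; the exact set of directories
depends on the kernel build.

/*
 * Rough sketch of the toggle described above: set the 2 MB per-size entry to
 * "inherit", then flip the top-level THP control between "always" and "never".
 */
#include <stdio.h>

static int write_knob(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f || fputs(val, f) == EOF) {
                perror(path);
                if (f)
                        fclose(f);
                return -1;
        }
        return fclose(f);   /* flush; returns 0 on success */
}

int main(int argc, char **argv)
{
        /* pass "always" or "never" on the command line; default to "never" */
        const char *mode = argc > 1 ? argv[1] : "never";

        if (write_knob("/sys/kernel/mm/transparent_hugepage/hugepage-2048kB/enabled",
                       "inherit"))
                return 1;
        return write_knob("/sys/kernel/mm/transparent_hugepage/enabled", mode) ? 1 : 0;
}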
On 04/12/2023 19:30, Andrew Morton wrote:
> On Mon, 4 Dec 2023 10:20:17 +0000 Ryan Roberts <ryan.roberts@arm.com> wrote:
>
>> Prerequisites
>> =============
>>
>> Some work items identified as being prerequisites are listed on page 3 at [9].
>> The summary is:
>>
>> [...]
>
> What does "prerequisites" mean here? Won't compile without? Kernel
> crashes without? Nice-to-have-after? Please expand on this.

Short answer: It's supposed to mean things that either need to be done to
prevent the mm from regressing (both correctness and performance) when
multi-size THP is present but disabled, or things that need to be done to make
the mm robust (but not necessarily optimally performant) when multi-size THP is
enabled. But in reality, all of the things on the list could really be
reclassified as "nice-to-have-after", IMHO; their absence will neither cause
compilation nor runtime errors.

Longer answer: When I first started looking at this, I was advised that there
were likely a number of corners which made assumptions about large folios
always being PMD-sized, and if not found and fixed, could lead to stability
issues. At the time I was also pursuing a strategy of multi-size THP being a
compile-time feature with no runtime control, so I decided it was important for
multi-size THP to not effectively disable other features (e.g. various madvise
ops used to ignore PTE-mapped large folios). This list represents all the
things that I could find based on code review, as well as things suggested by
others, and in the end, they all fall into that last category of "PTE-mapped
large folios effectively disable existing features".

But given we now have runtime controls to opt-in to multi-size THP, I'm not
sure we need to classify these as prerequisites. But I didn't want to
unilaterally make that decision, given this list has previously been discussed
and agreed by others.

It's also worth noting that in the case of compaction, that's already a problem
for large folios in the page cache; large folios will be skipped.

> I looked at [9], but access is denied.

Sorry about that; it's owned by David Rientjes so I can't fix that for you.
It's a PDF of a slide with the following table:

| Item | Description | Assignee | Status |
|------|-------------|----------|--------|
| mlock | Large, pte-mapped folios are ignored when mlock is requested. Code comment for mlock_vma_folio() says "...filter out pte mappings of THPs which cannot be consistently counted: a pte mapping of the THP head cannot be distinguished by the page alone." | Yin, Fengwei | In mainline (v6.7) |
| madvise | MADV_COLD, MADV_PAGEOUT, MADV_FREE: For large folios, code assumes exclusive only if mapcount==1, else skips remainder of operation. For large, pte-mapped folios, exclusive folios can have mapcount up to nr_pages and still be exclusive. Even better; don't split the folio if it fits entirely within the range. | Yin, Fengwei | In mainline (v6.6) |
| compaction | Raised at LSFMM: Compaction skips non-order-0 pages. Already a problem for page-cache pages today. | Zi Yan | v1 posted |
| numa balancing | Large, pte-mapped folios are ignored by numa-balancing code. Commit comment (e81c480): "We're going to have THP mapped with PTEs. It will confuse numabalancing. Let's skip them for now." | John Hubbard | Investigated: Not prerequisite |
| user-triggered page migration | mm/migrate.c (migrate_pages syscall): we don't want to migrate a folio that is shared. | Kefeng Wang | In mainline (v6.7) |
| khugepaged collapse | Collapse small-sized THP to PMD-sized THP in khugepaged/MADV_COLLAPSE. Kirill thinks khugepaged should already be able to collapse small large folios to PMD-sized THP; verification required. | Ryan Roberts | In mainline (NOP) |

Thanks,
Ryan

>> [9] https://drive.google.com/file/d/1GnfYFpr7_c1kA41liRUW5YtCb8Cj18Ud/view?usp=sharing&resourcekey=0-U1Mj3-RhLD1JV6EThpyPyA
On 05/12/2023 03:28, Barry Song wrote:
> On Mon, Dec 4, 2023 at 11:20 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> Hi All,
>>
>> [...]
>
> Hi Ryan,
>
> A question, but I don't think it should block this series: do we have any plan
> to extend /proc/meminfo, /proc/pid/smaps and /proc/vmstat to present some
> information regarding the new multi-size THP? E.g. how many folios of each size
> exist in the system, how many multi-size folios are on the LRU, how many large
> folios are in each VMA, etc.
>
> In products and labs, we need some health monitors to make sure the system
> status is visible and works as expected. Right now, I feel like I am blindly
> exploring the system without those statistics.

Yes it's definitely on the list. I had a patch in v6 that added various stats.
But after discussion with David, it became clear there were a few issues with
the implementation and I ripped it out. We also decided that since the page
cache already uses large folios and we don't have counters for those, we could
probably live (initially at least) without counters for multi-size THP too. But
you are the second person to raise this in as many weeks, so clearly this
should be at the top of the list for enhancements after this initial merge.

For now, you can parse /proc/<pid>/pagemap to see how well multi-size THP is
being utilized. That's not a simple interface though. Yu Zhao shared a Python
script a while back. I wonder if there is value in tidying that up and putting
it in tools/mm in the short term? (A rough sketch of this kind of pagemap
parsing is appended after this reply.)

There were 2 main issues with the previous implementation:

1) What should the semantic of a counter be? Because a PTE-mapped THP can be
partially unmapped or mremapped. So should we count the number of pages from a
folio of a given size that are mapped (easy) or should we only count when the
whole folio is contiguously mapped? (I'm sure there are many other semantics we
could consider). The latter is not easy to spot at the moment - perhaps the
work David has been doing on tidying up the rmap functions might help?

2) How should we expose the info? There has been pushback for extending files
in sysfs that expose multiple pieces of data, so David has suggested that in
the long term it might be good to completely redesign the stats interface. It's
certainly something that needs a lot more discussion - input encouraged!

Thanks,
Ryan

>> [...]
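As a rough illustration of the pagemap-based approach Ryan mentions above (and
not the Python script he refers to), the C sketch below faults in an anonymous
buffer and then reads /proc/self/pagemap to find the longest run of physically
contiguous pages backing it. PFNs in pagemap are only exposed to privileged
readers (CAP_SYS_ADMIN), so it needs to run as root; the buffer size and the
"longest run" heuristic are arbitrary choices for the example.

/*
 * Rough sketch: estimate how much of an anonymous buffer is backed by
 * physically contiguous runs of pages by reading /proc/self/pagemap.
 * Each pagemap entry is 64 bits: bit 63 = page present, bits 0-54 = PFN.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN (2UL * 1024 * 1024)	/* 2M anonymous buffer for the demo */

int main(void)
{
        long psize = sysconf(_SC_PAGESIZE);
        size_t npages = LEN / psize;
        char *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int fd = open("/proc/self/pagemap", O_RDONLY);
        uint64_t prev_pfn = 0, run = 0, longest = 0;
        size_t i;

        if (buf == MAP_FAILED || fd < 0) {
                perror("setup");
                return 1;
        }
        memset(buf, 1, LEN);	/* fault the whole range in */

        for (i = 0; i < npages; i++) {
                uint64_t ent, pfn;
                off_t off = ((uintptr_t)buf / psize + i) * sizeof(ent);

                if (pread(fd, &ent, sizeof(ent), off) != sizeof(ent))
                        break;
                if (!(ent & (1ULL << 63))) {	/* page not present */
                        run = 0;
                        continue;
                }
                pfn = ent & ((1ULL << 55) - 1);
                run = (run && pfn == prev_pfn + 1) ? run + 1 : 1;
                if (run > longest)
                        longest = run;
                prev_pfn = pfn;
        }
        printf("longest physically contiguous run: %llu pages\n",
               (unsigned long long)longest);
        close(fd);
        return 0;
}

A more complete utilization check would also require each run to be naturally
aligned to a candidate folio size; this sketch only measures raw contiguity.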
On 05/12/2023 03:37, John Hubbard wrote:
> On 12/4/23 02:20, Ryan Roberts wrote:
>> [...]
>
> Using a couple of Armv8 systems, I've tested this patchset. I applied it
> to top of tree (Linux 6.7-rc4), on top of your latest contig pte series
> [1].
>
> With those two patchsets applied, the mm selftests look OK--or at least
> as OK as they normally do. I compared test runs between THP/mTHP set to
> "always", vs "never", to verify that there were no new test failures.
> Details: specifically, I set one particular page size (2 MB) to
> "inherit", and then toggled /sys/kernel/mm/transparent_hugepage/enabled
> between "always" and "never".

Excellent - I'm guessing this was for 64K base pages?

> I also re-ran my usual compute/AI benchmark, and I'm still seeing the
> same 10x performance improvement that I reported for the v6 patchset.
>
> So for this patchset and for [1] as well, please feel free to add:
>
> Tested-by: John Hubbard <jhubbard@nvidia.com>

Thanks!

> [1] https://lore.kernel.org/all/20231204105440.61448-1-ryan.roberts@arm.com/
>
> thanks,
On 05.12.23 11:48, Ryan Roberts wrote: > On 05/12/2023 01:24, Barry Song wrote: >> On Tue, Dec 5, 2023 at 9:15 AM Barry Song <21cnbao@gmail.com> wrote: >>> >>> On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts <ryan.roberts@arm.com> wrote: >>>> >>>> Introduce the logic to allow THP to be configured (through the new sysfs >>>> interface we just added) to allocate large folios to back anonymous >>>> memory, which are larger than the base page size but smaller than >>>> PMD-size. We call this new THP extension "multi-size THP" (mTHP). >>>> >>>> mTHP continues to be PTE-mapped, but in many cases can still provide >>>> similar benefits to traditional PMD-sized THP: Page faults are >>>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on >>>> the configured order), but latency spikes are much less prominent >>>> because the size of each page isn't as huge as the PMD-sized variant and >>>> there is less memory to clear in each page fault. The number of per-page >>>> operations (e.g. ref counting, rmap management, lru list management) are >>>> also significantly reduced since those ops now become per-folio. >>>> >>>> Some architectures also employ TLB compression mechanisms to squeeze >>>> more entries in when a set of PTEs are virtually and physically >>>> contiguous and approporiately aligned. In this case, TLB misses will >>>> occur less often. >>>> >>>> The new behaviour is disabled by default, but can be enabled at runtime >>>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled >>>> (see documentation in previous commit). The long term aim is to change >>>> the default to include suitable lower orders, but there are some risks >>>> around internal fragmentation that need to be better understood first. >>>> >>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> >>>> --- >>>> include/linux/huge_mm.h | 6 ++- >>>> mm/memory.c | 106 ++++++++++++++++++++++++++++++++++++---- >>>> 2 files changed, 101 insertions(+), 11 deletions(-) >>>> >>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h >>>> index bd0eadd3befb..91a53b9835a4 100644 >>>> --- a/include/linux/huge_mm.h >>>> +++ b/include/linux/huge_mm.h >>>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr; >>>> #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER) >>>> >>>> /* >>>> - * Mask of all large folio orders supported for anonymous THP. >>>> + * Mask of all large folio orders supported for anonymous THP; all orders up to >>>> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1 >>>> + * (which is a limitation of the THP implementation). >>>> */ >>>> -#define THP_ORDERS_ALL_ANON BIT(PMD_ORDER) >>>> +#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) >>>> >>>> /* >>>> * Mask of all large folio orders supported for file THP. 
>>>> diff --git a/mm/memory.c b/mm/memory.c >>>> index 3ceeb0f45bf5..bf7e93813018 100644 >>>> --- a/mm/memory.c >>>> +++ b/mm/memory.c >>>> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) >>>> return ret; >>>> } >>>> >>>> +static bool pte_range_none(pte_t *pte, int nr_pages) >>>> +{ >>>> + int i; >>>> + >>>> + for (i = 0; i < nr_pages; i++) { >>>> + if (!pte_none(ptep_get_lockless(pte + i))) >>>> + return false; >>>> + } >>>> + >>>> + return true; >>>> +} >>>> + >>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE >>>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf) >>>> +{ >>>> + gfp_t gfp; >>>> + pte_t *pte; >>>> + unsigned long addr; >>>> + struct folio *folio; >>>> + struct vm_area_struct *vma = vmf->vma; >>>> + unsigned long orders; >>>> + int order; >>>> + >>>> + /* >>>> + * If uffd is active for the vma we need per-page fault fidelity to >>>> + * maintain the uffd semantics. >>>> + */ >>>> + if (userfaultfd_armed(vma)) >>>> + goto fallback; >>>> + >>>> + /* >>>> + * Get a list of all the (large) orders below PMD_ORDER that are enabled >>>> + * for this vma. Then filter out the orders that can't be allocated over >>>> + * the faulting address and still be fully contained in the vma. >>>> + */ >>>> + orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true, >>>> + BIT(PMD_ORDER) - 1); >>>> + orders = thp_vma_suitable_orders(vma, vmf->address, orders); >>>> + >>>> + if (!orders) >>>> + goto fallback; >>>> + >>>> + pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); >>>> + if (!pte) >>>> + return ERR_PTR(-EAGAIN); >>>> + >>>> + order = first_order(orders); >>>> + while (orders) { >>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); >>>> + vmf->pte = pte + pte_index(addr); >>>> + if (pte_range_none(vmf->pte, 1 << order)) >>>> + break; >>>> + order = next_order(&orders, order); >>>> + } >>>> + >>>> + vmf->pte = NULL; >>>> + pte_unmap(pte); >>>> + >>>> + gfp = vma_thp_gfp_mask(vma); >>>> + >>>> + while (orders) { >>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); >>>> + folio = vma_alloc_folio(gfp, order, vma, addr, true); >>>> + if (folio) { >>>> + clear_huge_page(&folio->page, addr, 1 << order); >>> >>> Minor. >>> >>> Do we have to constantly clear a huge page? Is it possible to let >>> post_alloc_hook() >>> finish this job by using __GFP_ZERO/__GFP_ZEROTAGS as >>> vma_alloc_zeroed_movable_folio() is doing? > > I'm currently following the same allocation pattern as is done for PMD-sized > THP. In earlier versions of this patch I was trying to be smarter and use the > __GFP_ZERO/__GFP_ZEROTAGS as you suggest, but I was advised to keep it simple > and follow the existing pattern. Yes, this should be optimized on top IMHO.
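As a side note for readers following the allocation logic quoted above, here is a minimal user-space model of the two loops in alloc_anon_folio(): the first walks the enabled orders from highest to lowest until it finds one whose naturally aligned range of PTEs is entirely empty, and the second tries to allocate at that order, dropping to smaller orders on failure. This is only an illustration under assumed values (PMD_ORDER = 9, i.e. 4K base pages); the pte_used[] array, pick_order() and malloc() are stand-ins invented here, not kernel API.

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define PMD_ORDER 9			/* assumed: 4K base pages, 2M PMD */

	/* Toy "PTE table": true means the slot is already populated (not pte_none). */
	static bool pte_used[1 << PMD_ORDER];

	static bool range_none(unsigned int idx, unsigned int nr)
	{
		for (unsigned int i = 0; i < nr; i++)
			if (pte_used[idx + i])
				return false;
		return true;
	}

	/* First loop: largest enabled order whose aligned range around @fault_idx is free. */
	static int pick_order(unsigned long orders, unsigned int fault_idx)
	{
		for (int order = PMD_ORDER - 1; order >= 2; order--) {
			unsigned int start = fault_idx & ~((1U << order) - 1);

			if ((orders & (1UL << order)) && range_none(start, 1U << order))
				return order;
		}
		return 0;			/* fall back to a single base page */
	}

	int main(void)
	{
		unsigned long orders = (1UL << 4) | (1UL << 6);	/* e.g. 64K and 256K enabled */
		void *mem = NULL;
		int order;

		pte_used[100] = true;		/* pretend one nearby PTE is already mapped */

		order = pick_order(orders, 70);
		printf("fault at index 70 -> order %d (%d pages)\n", order, 1 << order);

		/* Second loop: try the chosen order first, then smaller orders. */
		while (order >= 2 && !(mem = malloc((size_t)4096 << order)))
			order--;
		printf("allocated order %d\n", mem ? order : 0);
		free(mem);
		return 0;
	}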
On 04.12.23 11:20, Ryan Roberts wrote: > Introduce the logic to allow THP to be configured (through the new sysfs > interface we just added) to allocate large folios to back anonymous > memory, which are larger than the base page size but smaller than > PMD-size. We call this new THP extension "multi-size THP" (mTHP). > > mTHP continues to be PTE-mapped, but in many cases can still provide > similar benefits to traditional PMD-sized THP: Page faults are > significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on > the configured order), but latency spikes are much less prominent > because the size of each page isn't as huge as the PMD-sized variant and > there is less memory to clear in each page fault. The number of per-page > operations (e.g. ref counting, rmap management, lru list management) are > also significantly reduced since those ops now become per-folio. > > Some architectures also employ TLB compression mechanisms to squeeze > more entries in when a set of PTEs are virtually and physically > contiguous and approporiately aligned. In this case, TLB misses will > occur less often. > > The new behaviour is disabled by default, but can be enabled at runtime > by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled > (see documentation in previous commit). The long term aim is to change > the default to include suitable lower orders, but there are some risks > around internal fragmentation that need to be better understood first. > > Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> In general, looks good to me, some comments/nits. And the usual "let's make sure we don't degrade order-0 and keep that as fast as possible" comment. > --- > include/linux/huge_mm.h | 6 ++- > mm/memory.c | 106 ++++++++++++++++++++++++++++++++++++---- > 2 files changed, 101 insertions(+), 11 deletions(-) > > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h > index bd0eadd3befb..91a53b9835a4 100644 > --- a/include/linux/huge_mm.h > +++ b/include/linux/huge_mm.h > @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr; > #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER) > > /* > - * Mask of all large folio orders supported for anonymous THP. > + * Mask of all large folio orders supported for anonymous THP; all orders up to > + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1 > + * (which is a limitation of the THP implementation). > */ > -#define THP_ORDERS_ALL_ANON BIT(PMD_ORDER) > +#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) > > /* > * Mask of all large folio orders supported for file THP. > diff --git a/mm/memory.c b/mm/memory.c > index 3ceeb0f45bf5..bf7e93813018 100644 > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > return ret; > } > > +static bool pte_range_none(pte_t *pte, int nr_pages) > +{ > + int i; > + > + for (i = 0; i < nr_pages; i++) { > + if (!pte_none(ptep_get_lockless(pte + i))) > + return false; > + } > + > + return true; > +} > + > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE > +static struct folio *alloc_anon_folio(struct vm_fault *vmf) > +{ > + gfp_t gfp; > + pte_t *pte; > + unsigned long addr; > + struct folio *folio; > + struct vm_area_struct *vma = vmf->vma; > + unsigned long orders; > + int order; Nit: reverse christmas tree encouraged ;) > + > + /* > + * If uffd is active for the vma we need per-page fault fidelity to > + * maintain the uffd semantics. 
> + */ > + if (userfaultfd_armed(vma)) Nit: unlikely() > + goto fallback; > + > + /* > + * Get a list of all the (large) orders below PMD_ORDER that are enabled > + * for this vma. Then filter out the orders that can't be allocated over > + * the faulting address and still be fully contained in the vma. > + */ > + orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true, > + BIT(PMD_ORDER) - 1); > + orders = thp_vma_suitable_orders(vma, vmf->address, orders); Comment: Both will eventually loop over all orders, correct? Could eventually be sped up in the future. Nit: the orders = ... order = ... looks like this might deserve a helper function that makes this easier to read. Nit: Why call thp_vma_suitable_orders if the orders are already 0? Again, some helper might be reasonable where that is handled internally. Comment: For order-0 we'll always perform a function call to both thp_vma_allowable_orders() / thp_vma_suitable_orders(). We should perform some fast and efficient check if any <PMD THP are even enabled in the system / for this VMA, and in that case just fallback before doing more expensive checks. > + > + if (!orders) > + goto fallback; > + > + pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); > + if (!pte) > + return ERR_PTR(-EAGAIN); > + > + order = first_order(orders); > + while (orders) { > + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > + vmf->pte = pte + pte_index(addr); > + if (pte_range_none(vmf->pte, 1 << order)) > + break; Comment: Likely it would make sense to scan only once and determine the "largest none range" around that address, having the largest suitable order in mind. > + order = next_order(&orders, order); > + } > + > + vmf->pte = NULL; Nit: Can you elaborate why you are messing with vmf->pte here? A simple helper variable will make this code look less magical. Unless I am missing something important :) > + pte_unmap(pte); > + > + gfp = vma_thp_gfp_mask(vma); > + > + while (orders) { > + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > + folio = vma_alloc_folio(gfp, order, vma, addr, true); > + if (folio) { > + clear_huge_page(&folio->page, addr, 1 << order); > + return folio; > + } > + order = next_order(&orders, order); > + } > + Queestion: would it make sense to combine both loops? I suspect memory allocations with pte_offset_map()/kmao are problematic. > +fallback: > + return vma_alloc_zeroed_movable_folio(vma, vmf->address); > +} > +#else > +#define alloc_anon_folio(vmf) \ > + vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address) > +#endif > + > /* > * We enter with non-exclusive mmap_lock (to exclude vma changes, > * but allow concurrent faults), and pte mapped but not yet locked. > @@ -4132,6 +4210,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > */ > static vm_fault_t do_anonymous_page(struct vm_fault *vmf) > { > + int i; > + int nr_pages = 1; > + unsigned long addr = vmf->address; > bool uffd_wp = vmf_orig_pte_uffd_wp(vmf); > struct vm_area_struct *vma = vmf->vma; > struct folio *folio; Nit: reverse christmas tree :) > @@ -4176,10 +4257,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) > /* Allocate our own private page. 
*/ > if (unlikely(anon_vma_prepare(vma))) > goto oom; > - folio = vma_alloc_zeroed_movable_folio(vma, vmf->address); > + folio = alloc_anon_folio(vmf); > + if (IS_ERR(folio)) > + return 0; > if (!folio) > goto oom; > > + nr_pages = folio_nr_pages(folio); > + addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE); > + > if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL)) > goto oom_free_page; > folio_throttle_swaprate(folio, GFP_KERNEL); > @@ -4196,12 +4282,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) > if (vma->vm_flags & VM_WRITE) > entry = pte_mkwrite(pte_mkdirty(entry), vma); > > - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, > - &vmf->ptl); > + vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl); > if (!vmf->pte) > goto release; > - if (vmf_pte_changed(vmf)) { > - update_mmu_tlb(vma, vmf->address, vmf->pte); > + if ((nr_pages == 1 && vmf_pte_changed(vmf)) || > + (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages))) { > + for (i = 0; i < nr_pages; i++) > + update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i); Comment: separating the order-0 case from the other case might make this easier to read. > goto release; > } > > @@ -4216,16 +4303,17 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) > return handle_userfault(vmf, VM_UFFD_MISSING); > } > > - inc_mm_counter(vma->vm_mm, MM_ANONPAGES); > - folio_add_new_anon_rmap(folio, vma, vmf->address); > + folio_ref_add(folio, nr_pages - 1); > + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages); > + folio_add_new_anon_rmap(folio, vma, addr); > folio_add_lru_vma(folio, vma); > setpte: > if (uffd_wp) > entry = pte_mkuffd_wp(entry); > - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); > + set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages); > > /* No need to invalidate - it was non-present before */ > - update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1); > + update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages); > unlock: > if (vmf->pte) > pte_unmap_unlock(vmf->pte, vmf->ptl); Benchmarking order-0 allocations might be interesting. There will be some added checks + multiple loops/conditionals for order-0 that could be avoided by having two separate code paths. If we can't measure a difference, all good.
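One small aid for decoding the new THP_ORDERS_ALL_ANON definition quoted at the top of this review: assuming PMD_ORDER = 9 (4K base pages with a 2M PMD), the expression selects exactly orders 2 through 9, which a quick user-space check confirms:

	#include <stdio.h>

	#define BIT(n)		(1UL << (n))
	#define PMD_ORDER	9	/* assumed: 4K base pages, 2M PMD */

	/* Same expression as the patch: all orders up to PMD_ORDER, minus orders 0 and 1. */
	#define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))

	int main(void)
	{
		printf("mask = %#lx\n", THP_ORDERS_ALL_ANON);	/* prints 0x3fc */
		for (int order = 0; order <= PMD_ORDER; order++)
			if (THP_ORDERS_ALL_ANON & BIT(order))
				printf("order %d -> %d KB folio\n", order, 4 << order);
		return 0;
	}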
> Comment: Both will eventually loop over all orders, correct? Could > eventually be sped up in the future. > > Nit: the orders = ... order = ... looks like this might deserve a helper > function that makes this easier to read. > > Nit: Why call thp_vma_suitable_orders if the orders are already 0? > Again, some helper might be reasonable where that is handled internally. > > Comment: For order-0 we'll always perform a function call to both > thp_vma_allowable_orders() / thp_vma_suitable_orders(). We should > perform some fast and efficient check if any <PMD THP are even enabled > in the system / for this VMA, and in that case just fallback before > doing more expensive checks. Correction: only a call to thp_vma_allowable_orders(). I wonder if we can move some of thp_vma_allowable_orders() into the header file where we'd have a fast and efficient check for "no THP < PMD_THP enabled" on this system.
On 04.12.23 11:20, Ryan Roberts wrote: > Hi All, > > A new week, a new version, a new name... This is v8 of a series to implement > multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" > and "large anonymous folios"). Matthew objected to "small huge" so hopefully > this fares better. > > The objective of this is to improve performance by allocating larger chunks of > memory during anonymous page faults: > > 1) Since SW (the kernel) is dealing with larger chunks of memory than base > pages, there are efficiency savings to be had; fewer page faults, batched PTE > and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel > overhead. This should benefit all architectures. > 2) Since we are now mapping physically contiguous chunks of memory, we can take > advantage of HW TLB compression techniques. A reduction in TLB pressure > speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce > TLB entries; "the contiguous bit" (architectural) and HPA (uarch). > > This version changes the name and tidies up some of the kernel code and test > code, based on feedback against v7 (see change log for details). > > By default, the existing behaviour (and performance) is maintained. The user > must explicitly enable multi-size THP to see the performance benefit. This is > done via a new sysfs interface (as recommended by David Hildenbrand - thanks to > David for the suggestion)! This interface is inspired by the existing > per-hugepage-size sysfs interface used by hugetlb, provides full backwards > compatibility with the existing PMD-size THP interface, and provides a base for > future extensibility. See [8] for detailed discussion of the interface. > > This series is based on mm-unstable (715b67adf4c8). I took a look at the core pieces. Some things might want some smaller tweaks, but nothing that should stop this from having fun in mm-unstable, and replacing the smaller things as we move forward.
On 12/5/23 3:13 AM, Ryan Roberts wrote: > On 05/12/2023 03:37, John Hubbard wrote: >> On 12/4/23 02:20, Ryan Roberts wrote: ... >> With those two patchsets applied, the mm selftests look OK--or at least >> as OK as they normally do. I compared test runs between THP/mTHP set to >> "always", vs "never", to verify that there were no new test failures. >> Details: specifically, I set one particular page size (2 MB) to >> "inherit", and then toggled /sys/kernel/mm/transparent_hugepage/enabled >> between "always" and "never". > > Excellent - I'm guessing this was for 64K base pages? Yes. thanks,
On Tue, Dec 5, 2023 at 11:48 PM Ryan Roberts <ryan.roberts@arm.com> wrote: > > On 05/12/2023 01:24, Barry Song wrote: > > On Tue, Dec 5, 2023 at 9:15 AM Barry Song <21cnbao@gmail.com> wrote: > >> > >> On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts <ryan.roberts@arm.com> wrote: > >>> > >>> Introduce the logic to allow THP to be configured (through the new sysfs > >>> interface we just added) to allocate large folios to back anonymous > >>> memory, which are larger than the base page size but smaller than > >>> PMD-size. We call this new THP extension "multi-size THP" (mTHP). > >>> > >>> mTHP continues to be PTE-mapped, but in many cases can still provide > >>> similar benefits to traditional PMD-sized THP: Page faults are > >>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on > >>> the configured order), but latency spikes are much less prominent > >>> because the size of each page isn't as huge as the PMD-sized variant and > >>> there is less memory to clear in each page fault. The number of per-page > >>> operations (e.g. ref counting, rmap management, lru list management) are > >>> also significantly reduced since those ops now become per-folio. > >>> > >>> Some architectures also employ TLB compression mechanisms to squeeze > >>> more entries in when a set of PTEs are virtually and physically > >>> contiguous and approporiately aligned. In this case, TLB misses will > >>> occur less often. > >>> > >>> The new behaviour is disabled by default, but can be enabled at runtime > >>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled > >>> (see documentation in previous commit). The long term aim is to change > >>> the default to include suitable lower orders, but there are some risks > >>> around internal fragmentation that need to be better understood first. > >>> > >>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> > >>> --- > >>> include/linux/huge_mm.h | 6 ++- > >>> mm/memory.c | 106 ++++++++++++++++++++++++++++++++++++---- > >>> 2 files changed, 101 insertions(+), 11 deletions(-) > >>> > >>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h > >>> index bd0eadd3befb..91a53b9835a4 100644 > >>> --- a/include/linux/huge_mm.h > >>> +++ b/include/linux/huge_mm.h > >>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr; > >>> #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER) > >>> > >>> /* > >>> - * Mask of all large folio orders supported for anonymous THP. > >>> + * Mask of all large folio orders supported for anonymous THP; all orders up to > >>> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1 > >>> + * (which is a limitation of the THP implementation). > >>> */ > >>> -#define THP_ORDERS_ALL_ANON BIT(PMD_ORDER) > >>> +#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) > >>> > >>> /* > >>> * Mask of all large folio orders supported for file THP. 
> >>> diff --git a/mm/memory.c b/mm/memory.c > >>> index 3ceeb0f45bf5..bf7e93813018 100644 > >>> --- a/mm/memory.c > >>> +++ b/mm/memory.c > >>> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > >>> return ret; > >>> } > >>> > >>> +static bool pte_range_none(pte_t *pte, int nr_pages) > >>> +{ > >>> + int i; > >>> + > >>> + for (i = 0; i < nr_pages; i++) { > >>> + if (!pte_none(ptep_get_lockless(pte + i))) > >>> + return false; > >>> + } > >>> + > >>> + return true; > >>> +} > >>> + > >>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE > >>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf) > >>> +{ > >>> + gfp_t gfp; > >>> + pte_t *pte; > >>> + unsigned long addr; > >>> + struct folio *folio; > >>> + struct vm_area_struct *vma = vmf->vma; > >>> + unsigned long orders; > >>> + int order; > >>> + > >>> + /* > >>> + * If uffd is active for the vma we need per-page fault fidelity to > >>> + * maintain the uffd semantics. > >>> + */ > >>> + if (userfaultfd_armed(vma)) > >>> + goto fallback; > >>> + > >>> + /* > >>> + * Get a list of all the (large) orders below PMD_ORDER that are enabled > >>> + * for this vma. Then filter out the orders that can't be allocated over > >>> + * the faulting address and still be fully contained in the vma. > >>> + */ > >>> + orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true, > >>> + BIT(PMD_ORDER) - 1); > >>> + orders = thp_vma_suitable_orders(vma, vmf->address, orders); > >>> + > >>> + if (!orders) > >>> + goto fallback; > >>> + > >>> + pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); > >>> + if (!pte) > >>> + return ERR_PTR(-EAGAIN); > >>> + > >>> + order = first_order(orders); > >>> + while (orders) { > >>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > >>> + vmf->pte = pte + pte_index(addr); > >>> + if (pte_range_none(vmf->pte, 1 << order)) > >>> + break; > >>> + order = next_order(&orders, order); > >>> + } > >>> + > >>> + vmf->pte = NULL; > >>> + pte_unmap(pte); > >>> + > >>> + gfp = vma_thp_gfp_mask(vma); > >>> + > >>> + while (orders) { > >>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > >>> + folio = vma_alloc_folio(gfp, order, vma, addr, true); > >>> + if (folio) { > >>> + clear_huge_page(&folio->page, addr, 1 << order); > >> > >> Minor. > >> > >> Do we have to constantly clear a huge page? Is it possible to let > >> post_alloc_hook() > >> finish this job by using __GFP_ZERO/__GFP_ZEROTAGS as > >> vma_alloc_zeroed_movable_folio() is doing? > > I'm currently following the same allocation pattern as is done for PMD-sized > THP. In earlier versions of this patch I was trying to be smarter and use the > __GFP_ZERO/__GFP_ZEROTAGS as you suggest, but I was advised to keep it simple > and follow the existing pattern. > > I have a vague recollection __GFP_ZERO is not preferred for large folios because > of some issue with virtually indexed caches? (Matthew: did I see you mention > that in some other context?) > > That said, I wasn't aware that Android ships with > CONFIG_INIT_ON_ALLOC_DEFAULT_ON (I thought it was only used as a debug option), > so I can see the potential for some overhead reduction here. > > Options: > > 1) leave it as is and accept the duplicated clearing > 2) Pass __GFP_ZERO and remove clear_huge_page() > 3) define __GFP_SKIP_ZERO even when kasan is not enabled and pass it down so > clear_huge_page() is the only clear > 4) make clear_huge_page() conditional on !want_init_on_alloc() > > I prefer option 4. What do you think? 
either 1 or 4 is ok to me if we will finally remove this duplicated clear_huge_page on top. 4 is even better as it can at least temporarily resolve the problem. In Android gki_defconfig, https://android.googlesource.com/kernel/common/+/refs/heads/android14-6.1-lts/arch/arm64/configs/gki_defconfig Android always has the below, CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y Here is some explanation for the reason, https://source.android.com/docs/security/test/memory-safety/zero-initialized-memory > > As an aside, I've also noticed that clear_huge_page() should take vmf->address > so that it clears the faulting page last to keep the cache hot. If we decide on > an option that keeps clear_huge_page(), I'll also make that change. > > Thanks, > Ryan > > >> Thanks Barry
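On the aside above about passing vmf->address into clear_huge_page(): the point is simply to order the per-page clears so that the page the user is actually faulting on is written last and is therefore the hottest in the cache when the access retries. A minimal user-space illustration of that ordering (the real clear_huge_page()/process_huge_page() machinery is more elaborate, clearing outlying pages first and approaching the target; clear_pages_fault_last() here is an invented stand-in):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define PAGE_SIZE 4096

	/* Clear an @nr_pages buffer, leaving the page containing the fault until last
	 * so it is the most recently touched (cache-hot) page when the fault retries. */
	static void clear_pages_fault_last(char *buf, int nr_pages, int fault_idx)
	{
		for (int i = 0; i < nr_pages; i++)
			if (i != fault_idx)
				memset(buf + (size_t)i * PAGE_SIZE, 0, PAGE_SIZE);
		memset(buf + (size_t)fault_idx * PAGE_SIZE, 0, PAGE_SIZE);
	}

	int main(void)
	{
		int nr_pages = 16;	/* e.g. a 64K folio built from 4K pages */
		char *buf = malloc((size_t)nr_pages * PAGE_SIZE);

		if (!buf)
			return 1;
		clear_pages_fault_last(buf, nr_pages, 5);	/* fault landed in page 5 */
		printf("cleared %d pages, faulting page cleared last\n", nr_pages);
		free(buf);
		return 0;
	}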
On 06.12.23 11:13, Ryan Roberts wrote: > On 05/12/2023 17:21, David Hildenbrand wrote: >> On 04.12.23 11:20, Ryan Roberts wrote: >>> Hi All, >>> >>> A new week, a new version, a new name... This is v8 of a series to implement >>> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" >>> and "large anonymous folios"). Matthew objected to "small huge" so hopefully >>> this fares better. >>> >>> The objective of this is to improve performance by allocating larger chunks of >>> memory during anonymous page faults: >>> >>> 1) Since SW (the kernel) is dealing with larger chunks of memory than base >>> pages, there are efficiency savings to be had; fewer page faults, batched PTE >>> and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel >>> overhead. This should benefit all architectures. >>> 2) Since we are now mapping physically contiguous chunks of memory, we can take >>> advantage of HW TLB compression techniques. A reduction in TLB pressure >>> speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce >>> TLB entries; "the contiguous bit" (architectural) and HPA (uarch). >>> >>> This version changes the name and tidies up some of the kernel code and test >>> code, based on feedback against v7 (see change log for details). >>> >>> By default, the existing behaviour (and performance) is maintained. The user >>> must explicitly enable multi-size THP to see the performance benefit. This is >>> done via a new sysfs interface (as recommended by David Hildenbrand - thanks to >>> David for the suggestion)! This interface is inspired by the existing >>> per-hugepage-size sysfs interface used by hugetlb, provides full backwards >>> compatibility with the existing PMD-size THP interface, and provides a base for >>> future extensibility. See [8] for detailed discussion of the interface. >>> >>> This series is based on mm-unstable (715b67adf4c8). >> >> I took a look at the core pieces. Some things might want some smaller tweaks, >> but nothing that should stop this from having fun in mm-unstable, and replacing >> the smaller things as we move forward. >> > > Thanks! I'll address your comments and see if I can post another (final??) > version next week. It's always possible to do incremental changes on top that Andrew will squash in the end. I even recall that he prefers that way once a series has been in mm-unstable for a bit, so one can better observe the diff and which effects they have.
On Wed, Dec 6, 2023 at 11:16 PM Ryan Roberts <ryan.roberts@arm.com> wrote: > > On 05/12/2023 20:16, Barry Song wrote: > > On Tue, Dec 5, 2023 at 11:48 PM Ryan Roberts <ryan.roberts@arm.com> wrote: > >> > >> On 05/12/2023 01:24, Barry Song wrote: > >>> On Tue, Dec 5, 2023 at 9:15 AM Barry Song <21cnbao@gmail.com> wrote: > >>>> > >>>> On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts <ryan.roberts@arm.com> wrote: > >>>>> > >>>>> Introduce the logic to allow THP to be configured (through the new sysfs > >>>>> interface we just added) to allocate large folios to back anonymous > >>>>> memory, which are larger than the base page size but smaller than > >>>>> PMD-size. We call this new THP extension "multi-size THP" (mTHP). > >>>>> > >>>>> mTHP continues to be PTE-mapped, but in many cases can still provide > >>>>> similar benefits to traditional PMD-sized THP: Page faults are > >>>>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on > >>>>> the configured order), but latency spikes are much less prominent > >>>>> because the size of each page isn't as huge as the PMD-sized variant and > >>>>> there is less memory to clear in each page fault. The number of per-page > >>>>> operations (e.g. ref counting, rmap management, lru list management) are > >>>>> also significantly reduced since those ops now become per-folio. > >>>>> > >>>>> Some architectures also employ TLB compression mechanisms to squeeze > >>>>> more entries in when a set of PTEs are virtually and physically > >>>>> contiguous and approporiately aligned. In this case, TLB misses will > >>>>> occur less often. > >>>>> > >>>>> The new behaviour is disabled by default, but can be enabled at runtime > >>>>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled > >>>>> (see documentation in previous commit). The long term aim is to change > >>>>> the default to include suitable lower orders, but there are some risks > >>>>> around internal fragmentation that need to be better understood first. > >>>>> > >>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> > >>>>> --- > >>>>> include/linux/huge_mm.h | 6 ++- > >>>>> mm/memory.c | 106 ++++++++++++++++++++++++++++++++++++---- > >>>>> 2 files changed, 101 insertions(+), 11 deletions(-) > >>>>> > >>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h > >>>>> index bd0eadd3befb..91a53b9835a4 100644 > >>>>> --- a/include/linux/huge_mm.h > >>>>> +++ b/include/linux/huge_mm.h > >>>>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr; > >>>>> #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER) > >>>>> > >>>>> /* > >>>>> - * Mask of all large folio orders supported for anonymous THP. > >>>>> + * Mask of all large folio orders supported for anonymous THP; all orders up to > >>>>> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1 > >>>>> + * (which is a limitation of the THP implementation). > >>>>> */ > >>>>> -#define THP_ORDERS_ALL_ANON BIT(PMD_ORDER) > >>>>> +#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) > >>>>> > >>>>> /* > >>>>> * Mask of all large folio orders supported for file THP. 
> >>>>> diff --git a/mm/memory.c b/mm/memory.c > >>>>> index 3ceeb0f45bf5..bf7e93813018 100644 > >>>>> --- a/mm/memory.c > >>>>> +++ b/mm/memory.c > >>>>> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > >>>>> return ret; > >>>>> } > >>>>> > >>>>> +static bool pte_range_none(pte_t *pte, int nr_pages) > >>>>> +{ > >>>>> + int i; > >>>>> + > >>>>> + for (i = 0; i < nr_pages; i++) { > >>>>> + if (!pte_none(ptep_get_lockless(pte + i))) > >>>>> + return false; > >>>>> + } > >>>>> + > >>>>> + return true; > >>>>> +} > >>>>> + > >>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE > >>>>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf) > >>>>> +{ > >>>>> + gfp_t gfp; > >>>>> + pte_t *pte; > >>>>> + unsigned long addr; > >>>>> + struct folio *folio; > >>>>> + struct vm_area_struct *vma = vmf->vma; > >>>>> + unsigned long orders; > >>>>> + int order; > >>>>> + > >>>>> + /* > >>>>> + * If uffd is active for the vma we need per-page fault fidelity to > >>>>> + * maintain the uffd semantics. > >>>>> + */ > >>>>> + if (userfaultfd_armed(vma)) > >>>>> + goto fallback; > >>>>> + > >>>>> + /* > >>>>> + * Get a list of all the (large) orders below PMD_ORDER that are enabled > >>>>> + * for this vma. Then filter out the orders that can't be allocated over > >>>>> + * the faulting address and still be fully contained in the vma. > >>>>> + */ > >>>>> + orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true, > >>>>> + BIT(PMD_ORDER) - 1); > >>>>> + orders = thp_vma_suitable_orders(vma, vmf->address, orders); > >>>>> + > >>>>> + if (!orders) > >>>>> + goto fallback; > >>>>> + > >>>>> + pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); > >>>>> + if (!pte) > >>>>> + return ERR_PTR(-EAGAIN); > >>>>> + > >>>>> + order = first_order(orders); > >>>>> + while (orders) { > >>>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > >>>>> + vmf->pte = pte + pte_index(addr); > >>>>> + if (pte_range_none(vmf->pte, 1 << order)) > >>>>> + break; > >>>>> + order = next_order(&orders, order); > >>>>> + } > >>>>> + > >>>>> + vmf->pte = NULL; > >>>>> + pte_unmap(pte); > >>>>> + > >>>>> + gfp = vma_thp_gfp_mask(vma); > >>>>> + > >>>>> + while (orders) { > >>>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > >>>>> + folio = vma_alloc_folio(gfp, order, vma, addr, true); > >>>>> + if (folio) { > >>>>> + clear_huge_page(&folio->page, addr, 1 << order); > >>>> > >>>> Minor. > >>>> > >>>> Do we have to constantly clear a huge page? Is it possible to let > >>>> post_alloc_hook() > >>>> finish this job by using __GFP_ZERO/__GFP_ZEROTAGS as > >>>> vma_alloc_zeroed_movable_folio() is doing? > >> > >> I'm currently following the same allocation pattern as is done for PMD-sized > >> THP. In earlier versions of this patch I was trying to be smarter and use the > >> __GFP_ZERO/__GFP_ZEROTAGS as you suggest, but I was advised to keep it simple > >> and follow the existing pattern. > >> > >> I have a vague recollection __GFP_ZERO is not preferred for large folios because > >> of some issue with virtually indexed caches? (Matthew: did I see you mention > >> that in some other context?) > >> > >> That said, I wasn't aware that Android ships with > >> CONFIG_INIT_ON_ALLOC_DEFAULT_ON (I thought it was only used as a debug option), > >> so I can see the potential for some overhead reduction here. 
> >> > >> Options: > >> > >> 1) leave it as is and accept the duplicated clearing > >> 2) Pass __GFP_ZERO and remove clear_huge_page() > >> 3) define __GFP_SKIP_ZERO even when kasan is not enabled and pass it down so > >> clear_huge_page() is the only clear > >> 4) make clear_huge_page() conditional on !want_init_on_alloc() > >> > >> I prefer option 4. What do you think? > > > > either 1 and 4 is ok to me if we will finally remove this duplicated > > clear_huge_page on top. > > 4 is even better as it can at least temporarily resolve the problem. > > I'm going to stick with option 1 for this series. Then we can fix it uniformly > here and for PMD-sized THP in a separate patch (possibly with the approach > suggested in 4). Ok. Thanks. there is no one fixing PMD-sized THP, probably because PMD-sized THP is shutdown immediately after Android boots :-) > > > > > in Android gki_defconfig, > > https://android.googlesource.com/kernel/common/+/refs/heads/android14-6.1-lts/arch/arm64/configs/gki_defconfig > > > > Android always has the below, > > CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y > > > > here is some explanation for the reason, > > https://source.android.com/docs/security/test/memory-safety/zero-initialized-memory > > > >> > >> As an aside, I've also noticed that clear_huge_page() should take vmf->address > >> so that it clears the faulting page last to keep the cache hot. If we decide on > >> an option that keeps clear_huge_page(), I'll also make that change. > > I'll make this change for the next version. > > >> > >> Thanks, > >> Ryan > >> Thanks Barry
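For concreteness, option 4 discussed above would amount to roughly the following change in the allocation loop. This is only a sketch of the possible follow-up, not part of the posted series; it assumes want_init_on_alloc() is the right predicate for "the page allocator has already zeroed this memory" and leaves aside details such as MTE tag zeroing:

	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio) {
			/* Skip the explicit clear when init_on_alloc already zeroed the folio. */
			if (!want_init_on_alloc(gfp))
				clear_huge_page(&folio->page, addr, 1 << order);
			return folio;
		}
		order = next_order(&orders, order);
	}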
On 07.12.23 11:37, Ryan Roberts wrote: > On 06/12/2023 15:44, Ryan Roberts wrote: >> On 06/12/2023 14:19, Ryan Roberts wrote: >>> On 05/12/2023 16:32, David Hildenbrand wrote: >>>> On 04.12.23 11:20, Ryan Roberts wrote: >>>>> Introduce the logic to allow THP to be configured (through the new sysfs >>>>> interface we just added) to allocate large folios to back anonymous >>>>> memory, which are larger than the base page size but smaller than >>>>> PMD-size. We call this new THP extension "multi-size THP" (mTHP). >>>>> >>>>> mTHP continues to be PTE-mapped, but in many cases can still provide >>>>> similar benefits to traditional PMD-sized THP: Page faults are >>>>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on >>>>> the configured order), but latency spikes are much less prominent >>>>> because the size of each page isn't as huge as the PMD-sized variant and >>>>> there is less memory to clear in each page fault. The number of per-page >>>>> operations (e.g. ref counting, rmap management, lru list management) are >>>>> also significantly reduced since those ops now become per-folio. >>>>> >>>>> Some architectures also employ TLB compression mechanisms to squeeze >>>>> more entries in when a set of PTEs are virtually and physically >>>>> contiguous and approporiately aligned. In this case, TLB misses will >>>>> occur less often. >>>>> >>>>> The new behaviour is disabled by default, but can be enabled at runtime >>>>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled >>>>> (see documentation in previous commit). The long term aim is to change >>>>> the default to include suitable lower orders, but there are some risks >>>>> around internal fragmentation that need to be better understood first. >>>>> >>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> >>>> >>>> In general, looks good to me, some comments/nits. And the usual "let's make sure >>>> we don't degrade order-0 and keep that as fast as possible" comment. >>>> >>>>> --- >>>>> include/linux/huge_mm.h | 6 ++- >>>>> mm/memory.c | 106 ++++++++++++++++++++++++++++++++++++---- >>>>> 2 files changed, 101 insertions(+), 11 deletions(-) >>>>> >>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h >>>>> index bd0eadd3befb..91a53b9835a4 100644 >>>>> --- a/include/linux/huge_mm.h >>>>> +++ b/include/linux/huge_mm.h >>>>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr; >>>>> #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER) >>>>> /* >>>>> - * Mask of all large folio orders supported for anonymous THP. >>>>> + * Mask of all large folio orders supported for anonymous THP; all orders up to >>>>> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1 >>>>> + * (which is a limitation of the THP implementation). >>>>> */ >>>>> -#define THP_ORDERS_ALL_ANON BIT(PMD_ORDER) >>>>> +#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) >>>>> /* >>>>> * Mask of all large folio orders supported for file THP. 
>>>>> diff --git a/mm/memory.c b/mm/memory.c >>>>> index 3ceeb0f45bf5..bf7e93813018 100644 >>>>> --- a/mm/memory.c >>>>> +++ b/mm/memory.c >>>>> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) >>>>> return ret; >>>>> } >>>>> +static bool pte_range_none(pte_t *pte, int nr_pages) >>>>> +{ >>>>> + int i; >>>>> + >>>>> + for (i = 0; i < nr_pages; i++) { >>>>> + if (!pte_none(ptep_get_lockless(pte + i))) >>>>> + return false; >>>>> + } >>>>> + >>>>> + return true; >>>>> +} >>>>> + >>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE >>>>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf) >>>>> +{ >>>>> + gfp_t gfp; >>>>> + pte_t *pte; >>>>> + unsigned long addr; >>>>> + struct folio *folio; >>>>> + struct vm_area_struct *vma = vmf->vma; >>>>> + unsigned long orders; >>>>> + int order; >>>> >>>> Nit: reverse christmas tree encouraged ;) >>> >>> ACK will fix. >>> >>>> >>>>> + >>>>> + /* >>>>> + * If uffd is active for the vma we need per-page fault fidelity to >>>>> + * maintain the uffd semantics. >>>>> + */ >>>>> + if (userfaultfd_armed(vma)) >>>> >>>> Nit: unlikely() >>> >>> ACK will fix. >>> >>>> >>>>> + goto fallback; >>>>> + >>>>> + /* >>>>> + * Get a list of all the (large) orders below PMD_ORDER that are enabled >>>>> + * for this vma. Then filter out the orders that can't be allocated over >>>>> + * the faulting address and still be fully contained in the vma. >>>>> + */ >>>>> + orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true, >>>>> + BIT(PMD_ORDER) - 1); >>>>> + orders = thp_vma_suitable_orders(vma, vmf->address, orders); >>>> >>>> Comment: Both will eventually loop over all orders, correct? Could eventually be >>>> sped up in the future. >>> >>> No only thp_vma_suitable_orders() will loop. thp_vma_allowable_orders() only >>> loops if in_pf=false (it's true here). >>> >>>> >>>> Nit: the orders = ... order = ... looks like this might deserve a helper >>>> function that makes this easier to read. >>> >>> To be honest, the existing function that I've modified is a bit of a mess. >>> thp_vma_allowable_orders() calls thp_vma_suitable_orders() if we are not in a >>> page fault, because the page fault handlers already do that check themselves. It >>> would be nice to refactor the whole thing so that thp_vma_allowable_orders() is >>> a strict superset of thp_vma_suitable_orders(). Then this can just call >>> thp_vma_allowable_orders(). But that's going to start touching the PMD and PUD >>> handlers, so prefer if we leave that for a separate patch set. >>> >>>> >>>> Nit: Why call thp_vma_suitable_orders if the orders are already 0? Again, some >>>> helper might be reasonable where that is handled internally. >>> >>> Because thp_vma_suitable_orders() will handle it safely and is inline, so it >>> should just as efficient? This would go away with the refactoring described above. >>> >>>> >>>> Comment: For order-0 we'll always perform a function call to both >>>> thp_vma_allowable_orders() / thp_vma_suitable_orders(). We should perform some >>>> fast and efficient check if any <PMD THP are even enabled in the system / for >>>> this VMA, and in that case just fallback before doing more expensive checks. >>> >> >> I just noticed I got these functions round the wrong way in my previous response: >> >>> thp_vma_allowable_orders() is inline as you mentioned. >> >> ^ Meant thp_vma_suitable_orders() here. >> >>> >>> I was deliberately trying to keep all the decision logic in one place >>> (thp_vma_suitable_orders) because it's already pretty complicated. 
But if you >> >> ^ Meant thp_vma_allowable_orders() here. >> >> Sorry for the confusion. >> >>> insist, how about this in the header: >>> >>> static inline >>> unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma, >>> unsigned long vm_flags, bool smaps, >>> bool in_pf, bool enforce_sysfs, >>> unsigned long orders) >>> { >>> /* Optimization to check if required orders are enabled early. */ >>> if (enforce_sysfs && vma_is_anonymous(vma)) { >>> unsigned long mask = READ_ONCE(huge_anon_orders_always); >>> >>> if (vm_flags & VM_HUGEPAGE) >>> mask |= READ_ONCE(huge_anon_orders_madvise); >>> if (hugepage_global_always() || >>> ((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled())) >>> mask |= READ_ONCE(huge_anon_orders_inherit); >>> >>> orders &= mask; >>> if (!orders) >>> return 0; >>> >>> enforce_sysfs = false; >>> } >>> >>> return __thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, >>> enforce_sysfs, orders); >>> } >>> >>> Then the above check can be removed from __thp_vma_allowable_orders() - it will >>> still retain the `if (enforce_sysfs && !vma_is_anonymous(vma))` part. >>> >>> >>>> >>>>> + >>>>> + if (!orders) >>>>> + goto fallback; >>>>> + >>>>> + pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); >>>>> + if (!pte) >>>>> + return ERR_PTR(-EAGAIN); >>>>> + >>>>> + order = first_order(orders); >>>>> + while (orders) { >>>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); >>>>> + vmf->pte = pte + pte_index(addr); >>>>> + if (pte_range_none(vmf->pte, 1 << order)) >>>>> + break; >>>> >>>> Comment: Likely it would make sense to scan only once and determine the "largest >>>> none range" around that address, having the largest suitable order in mind. >>> >>> Yes, that's how I used to do it, but Yu Zhou requested simplifying to this, >>> IIRC. Perhaps this an optimization opportunity for later? >>> >>>> >>>>> + order = next_order(&orders, order); >>>>> + } >>>>> + >>>>> + vmf->pte = NULL; >>>> >>>> Nit: Can you elaborate why you are messing with vmf->pte here? A simple helper >>>> variable will make this code look less magical. Unless I am missing something >>>> important :) >>> >>> Gahh, I used to pass the vmf to what pte_range_none() was refactored into (an >>> approach that was suggested by Yu Zhou IIRC). But since I did some refactoring >>> based on some comments from JohnH, I see I don't need that anymore. Agreed; it >>> will be much clearer just to use a local variable. Will fix. >>> >>>> >>>>> + pte_unmap(pte); >>>>> + >>>>> + gfp = vma_thp_gfp_mask(vma); >>>>> + >>>>> + while (orders) { >>>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); >>>>> + folio = vma_alloc_folio(gfp, order, vma, addr, true); >>>>> + if (folio) { >>>>> + clear_huge_page(&folio->page, addr, 1 << order); >>>>> + return folio; >>>>> + } >>>>> + order = next_order(&orders, order); >>>>> + } >>>>> + >>>> >>>> Queestion: would it make sense to combine both loops? I suspect memory >>>> allocations with pte_offset_map()/kmao are problematic. >>> >>> They are both operating on separate orders; next_order() is "consuming" an order >>> by removing the current one from the orders bitfield and returning the next one. >>> >>> So the first loop starts at the highest order and keeps checking lower orders >>> until one fully fits in the VMA. And the second loop starts at the first order >>> that was found to fully fits and loops to lower orders until an allocation is >>> successful. >>> >>> So I don't see a need to combine the loops. 
>>> >>>> >>>>> +fallback: >>>>> + return vma_alloc_zeroed_movable_folio(vma, vmf->address); >>>>> +} >>>>> +#else >>>>> +#define alloc_anon_folio(vmf) \ >>>>> + vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address) >>>>> +#endif >>>>> + >>>>> /* >>>>> * We enter with non-exclusive mmap_lock (to exclude vma changes, >>>>> * but allow concurrent faults), and pte mapped but not yet locked. >>>>> @@ -4132,6 +4210,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) >>>>> */ >>>>> static vm_fault_t do_anonymous_page(struct vm_fault *vmf) >>>>> { >>>>> + int i; >>>>> + int nr_pages = 1; >>>>> + unsigned long addr = vmf->address; >>>>> bool uffd_wp = vmf_orig_pte_uffd_wp(vmf); >>>>> struct vm_area_struct *vma = vmf->vma; >>>>> struct folio *folio; >>>> >>>> Nit: reverse christmas tree :) >>> >>> ACK >>> >>>> >>>>> @@ -4176,10 +4257,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) >>>>> /* Allocate our own private page. */ >>>>> if (unlikely(anon_vma_prepare(vma))) >>>>> goto oom; >>>>> - folio = vma_alloc_zeroed_movable_folio(vma, vmf->address); >>>>> + folio = alloc_anon_folio(vmf); >>>>> + if (IS_ERR(folio)) >>>>> + return 0; >>>>> if (!folio) >>>>> goto oom; >>>>> + nr_pages = folio_nr_pages(folio); >>>>> + addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE); >>>>> + >>>>> if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL)) >>>>> goto oom_free_page; >>>>> folio_throttle_swaprate(folio, GFP_KERNEL); >>>>> @@ -4196,12 +4282,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) >>>>> if (vma->vm_flags & VM_WRITE) >>>>> entry = pte_mkwrite(pte_mkdirty(entry), vma); >>>>> - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, >>>>> - &vmf->ptl); >>>>> + vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl); >>>>> if (!vmf->pte) >>>>> goto release; >>>>> - if (vmf_pte_changed(vmf)) { >>>>> - update_mmu_tlb(vma, vmf->address, vmf->pte); >>>>> + if ((nr_pages == 1 && vmf_pte_changed(vmf)) || >>>>> + (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages))) { >>>>> + for (i = 0; i < nr_pages; i++) >>>>> + update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i); >>>> >>>> Comment: separating the order-0 case from the other case might make this easier >>>> to read. >>> >>> Yeah fair enough. Will fix. >>> >>>> >>>>> goto release; >>>>> } >>>>> @@ -4216,16 +4303,17 @@ static vm_fault_t do_anonymous_page(struct vm_fault >>>>> *vmf) >>>>> return handle_userfault(vmf, VM_UFFD_MISSING); >>>>> } >>>>> - inc_mm_counter(vma->vm_mm, MM_ANONPAGES); >>>>> - folio_add_new_anon_rmap(folio, vma, vmf->address); >>>>> + folio_ref_add(folio, nr_pages - 1); >>>>> + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages); >>>>> + folio_add_new_anon_rmap(folio, vma, addr); >>>>> folio_add_lru_vma(folio, vma); >>>>> setpte: >>>>> if (uffd_wp) >>>>> entry = pte_mkuffd_wp(entry); >>>>> - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); >>>>> + set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages); >>>>> /* No need to invalidate - it was non-present before */ >>>>> - update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1); >>>>> + update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages); >>>>> unlock: >>>>> if (vmf->pte) >>>>> pte_unmap_unlock(vmf->pte, vmf->ptl); >>>> >>>> Benchmarking order-0 allocations might be interesting. There will be some added >>>> checks + multiple loops/conditionals for order-0 that could be avoided by having >>>> two separate code paths. If we can't measure a difference, all good. 
>>> >>> Yep will do - will post numbers once I have them. I've been assuming that the >>> major cost is clearing the page, but perhaps I'm wrong. >>> > > I added a "write-fault-byte" benchmark to the microbenchmark tool you gave me. > This elides the normal memset page population routine, and instead writes the > first byte of every page while the timer is running. > > I ran with 100 iterations per run, then ran the whole thing 16 times. I ran it > for a baseline kernel, as well as v8 (this series) and v9 (with changes from > your review). I repeated on Ampere Altra (bare metal) and Apple M2 (VM): > > | | m2 vm | altra | > |--------------|---------------------|--------------------:| > | kernel | mean | std_rel | mean | std_rel | > |--------------|----------|----------|----------|---------:| > | baseline | 0.000% | 0.341% | 0.000% | 3.581% | > | anonfolio-v8 | 0.005% | 0.272% | 5.068% | 1.128% | > | anonfolio-v9 | -0.013% | 0.442% | 0.107% | 1.788% | > > No measurable difference on M2, but altra has a slow down in v8 which is fixed > in v9; Looking at the changes, this is either down to the new unlikely() for the > uffd or due to moving the THP order check to be inline within > thp_vma_allowable_orders(). I suspect the last one. > > So I have all the changes done and perf numbers to show no regression for > order-0. I'm gonna do a final check and post v9 later today. Good! Let me catch up on your comments real quick.
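For anyone trying to follow what the inline wrapper Ryan proposes earlier in this exchange actually computes, here is a user-space model of how the per-size sysfs settings ("always", "madvise", "inherit") and the global transparent_hugepage/enabled setting combine into an order mask for a given VMA. The enum, function name and example values are invented for illustration; the real state lives in huge_anon_orders_always/madvise/inherit and the global enabled knob:

	#include <stdbool.h>
	#include <stdio.h>

	enum global_setting { GLOBAL_ALWAYS, GLOBAL_MADVISE, GLOBAL_NEVER };

	static unsigned long effective_orders(unsigned long requested,
					      unsigned long always_orders,
					      unsigned long madvise_orders,
					      unsigned long inherit_orders,
					      enum global_setting global,
					      bool vma_has_madv_hugepage)
	{
		unsigned long mask = always_orders;

		if (vma_has_madv_hugepage)
			mask |= madvise_orders;
		/* "inherit" sizes follow the global setting for this VMA. */
		if (global == GLOBAL_ALWAYS ||
		    (vma_has_madv_hugepage && global != GLOBAL_NEVER))
			mask |= inherit_orders;

		return requested & mask;
	}

	int main(void)
	{
		/* Example: hugepage-64kB=always (order 4), hugepage-2048kB=inherit (order 9),
		 * global enabled=madvise, and the VMA was MADV_HUGEPAGE'd. */
		unsigned long orders = effective_orders(~0UL, 1UL << 4, 0, 1UL << 9,
							GLOBAL_MADVISE, true);

		printf("effective orders mask: %#lx\n", orders);	/* 0x210: orders 4 and 9 */
		return 0;
	}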
[...] >> >> Nit: the orders = ... order = ... looks like this might deserve a helper >> function that makes this easier to read. > > To be honest, the existing function that I've modified is a bit of a mess. It's all an ugly mess and I hate it. It would be cleanest if we'd just have "thp_vma_configured_orders()" that gives us all configured orders for the given VMA+flags combination. No passing in of orders, try handling the masking in the caller. Then, we move that nasty "transhuge_vma_suitable" handling for !in_pf out of there and handle that in the callers. The comment "Huge fault does the check in fault handlers. And this check is not suitable for huge PUD fault handlers." already makes me angry, what a mess. Then, we'd have a thp_vma_fitting_orders() / thp_vma_is_fitting_order() function that does the filtering only based on the given address + vma size/alignment. That's roughly "thp_vma_suitable_orders()". Finding a good name to combine both could be something like "thp_vma_possible_orders()". Would make more sense to me (but again, German guy, so it's probably all wrong). > thp_vma_allowable_orders() calls thp_vma_suitable_orders() if we are not in a > page fault, because the page fault handlers already do that check themselves. It > would be nice to refactor the whole thing so that thp_vma_allowable_orders() is > a strict superset of thp_vma_suitable_orders(). Then this can just call > thp_vma_allowable_orders(). But that's going to start touching the PMD and PUD > handlers, so prefer if we leave that for a separate patch set. > >> >> Nit: Why call thp_vma_suitable_orders if the orders are already 0? Again, some >> helper might be reasonable where that is handled internally. > > Because thp_vma_suitable_orders() will handle it safely and is inline, so it > should just as efficient? This would go away with the refactoring described above. Right. Won't win in a beauty contest. Some simple helper might make this much easier to digest. > >> >> Comment: For order-0 we'll always perform a function call to both >> thp_vma_allowable_orders() / thp_vma_suitable_orders(). We should perform some >> fast and efficient check if any <PMD THP are even enabled in the system / for >> this VMA, and in that case just fallback before doing more expensive checks. > > thp_vma_allowable_orders() is inline as you mentioned. > > I was deliberately trying to keep all the decision logic in one place > (thp_vma_suitable_orders) because it's already pretty complicated. But if you > insist, how about this in the header: > > static inline > unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma, > unsigned long vm_flags, bool smaps, > bool in_pf, bool enforce_sysfs, > unsigned long orders) > { > /* Optimization to check if required orders are enabled early. */ > if (enforce_sysfs && vma_is_anonymous(vma)) { > unsigned long mask = READ_ONCE(huge_anon_orders_always); > > if (vm_flags & VM_HUGEPAGE) > mask |= READ_ONCE(huge_anon_orders_madvise); > if (hugepage_global_always() || > ((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled())) > mask |= READ_ONCE(huge_anon_orders_inherit); > > orders &= mask; > if (!orders) > return 0; > > enforce_sysfs = false; > } > > return __thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, > enforce_sysfs, orders); > } > > Then the above check can be removed from __thp_vma_allowable_orders() - it will > still retain the `if (enforce_sysfs && !vma_is_anonymous(vma))` part. > Better. I still kind-of hate having to pass in orders here. 
Such masking is better done in the caller (see above how it might be done when moving the transhuge_vma_suitable() check out). > >> >>> + >>> + if (!orders) >>> + goto fallback; >>> + >>> + pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); >>> + if (!pte) >>> + return ERR_PTR(-EAGAIN); >>> + >>> + order = first_order(orders); >>> + while (orders) { >>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); >>> + vmf->pte = pte + pte_index(addr); >>> + if (pte_range_none(vmf->pte, 1 << order)) >>> + break; >> >> Comment: Likely it would make sense to scan only once and determine the "largest >> none range" around that address, having the largest suitable order in mind. > > Yes, that's how I used to do it, but Yu Zhou requested simplifying to this, > IIRC. Perhaps this an optimization opportunity for later? Yes, definetly. > >> >>> + order = next_order(&orders, order); >>> + } >>> + >>> + vmf->pte = NULL; >> >> Nit: Can you elaborate why you are messing with vmf->pte here? A simple helper >> variable will make this code look less magical. Unless I am missing something >> important :) > > Gahh, I used to pass the vmf to what pte_range_none() was refactored into (an > approach that was suggested by Yu Zhou IIRC). But since I did some refactoring > based on some comments from JohnH, I see I don't need that anymore. Agreed; it > will be much clearer just to use a local variable. Will fix. > >> >>> + pte_unmap(pte); >>> + >>> + gfp = vma_thp_gfp_mask(vma); >>> + >>> + while (orders) { >>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); >>> + folio = vma_alloc_folio(gfp, order, vma, addr, true); >>> + if (folio) { >>> + clear_huge_page(&folio->page, addr, 1 << order); >>> + return folio; >>> + } >>> + order = next_order(&orders, order); >>> + } >>> + >> >> Queestion: would it make sense to combine both loops? I suspect memory >> allocations with pte_offset_map()/kmao are problematic. > > They are both operating on separate orders; next_order() is "consuming" an order > by removing the current one from the orders bitfield and returning the next one. > > So the first loop starts at the highest order and keeps checking lower orders > until one fully fits in the VMA. And the second loop starts at the first order > that was found to fully fits and loops to lower orders until an allocation is > successful. Right, but you know from the first loop which order is applicable (and will be fed to the second loop) and could just pte_unmap(pte) + tryalloc. If that fails, remap and try with the next orders. That would make the code certainly easier to understand. That "orders" magic of constructing, filtering, walking is confusing :) I might find some time today to see if there is an easy way to cleanup all what I spelled out above. It really is a mess. But likely that cleanup could be deferred (but you're touching it, so ... :) ).
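On the "orders magic" itself: the bitfield walking is small once separated out. Here is a user-space rendering of the helper pair, modelled on the highest_order()/next_order() helpers used by the loops (first_order() in the v8 posting), with a GCC/Clang builtin standing in for fls_long():

	#include <stdio.h>

	/* Highest set bit = largest remaining order. */
	static int highest_order(unsigned long orders)
	{
		return 8 * (int)sizeof(orders) - 1 - __builtin_clzl(orders);
	}

	/* Drop the current order from the set and return the next lower one, or -1. */
	static int next_order(unsigned long *orders, int order)
	{
		*orders &= ~(1UL << order);
		return *orders ? highest_order(*orders) : -1;
	}

	int main(void)
	{
		unsigned long orders = (1UL << 9) | (1UL << 6) | (1UL << 4);
		int order = highest_order(orders);

		while (orders) {
			printf("trying order %d\n", order);	/* prints 9, then 6, then 4 */
			order = next_order(&orders, order);
		}
		return 0;
	}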
>> >> Right, but you know from the first loop which order is applicable (and will be >> fed to the second loop) and could just pte_unmap(pte) + tryalloc. If that fails, >> remap and try with the next orders. > > You mean something like this? > > pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); > if (!pte) > return ERR_PTR(-EAGAIN); > > order = highest_order(orders); > while (orders) { > addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > if (!pte_range_none(pte + pte_index(addr), 1 << order)) { > order = next_order(&orders, order); > continue; > } > > pte_unmap(pte); > > folio = vma_alloc_folio(gfp, order, vma, addr, true); > if (folio) { > clear_huge_page(&folio->page, vmf->address, 1 << order); > return folio; > } > > pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); > if (!pte) > return ERR_PTR(-EAGAIN); > > order = next_order(&orders, order); > } > > pte_unmap(pte); > > I don't really like that because if high order folio allocations fail, then you > are calling pte_range_none() again for the next lower order; once that check has > succeeded for an order it shouldn't be required for any lower orders. In this > case you also have lots of pte map/unmap. I see what you mean. > > The original version feels more efficient to me. Yes it is. Adding in some comments might help, like /* * Find the largest order where the aligned range is completely prot_none(). Note * that all remaining orders will be completely prot_none(). */ ... /* Try allocating the largest of the remaining orders. */ > >> >> That would make the code certainly easier to understand. That "orders" magic of >> constructing, filtering, walking is confusing :) >> >> >> I might find some time today to see if there is an easy way to cleanup all what >> I spelled out above. It really is a mess. But likely that cleanup could be >> deferred (but you're touching it, so ... :) ). > > I'm going to ignore the last 5 words. I heard the "that cleanup could be > deferred" part loud and clear though :) :) If we could stop passing orders into thp_vma_allowable_orders(), that would probably be the biggest win. It's just all a confusing mess.
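Putting the two comments suggested above into the version being kept, the core of alloc_anon_folio() would read roughly like this (the same loops already quoted in this thread, only the comments are new; "empty" here means pte_none()):

	/*
	 * Find the largest order where the naturally aligned range around the
	 * fault address is entirely empty. Note that once this holds for some
	 * order, it also holds for every remaining (smaller) order, so the
	 * second loop does not need to re-check the PTEs.
	 */
	order = first_order(orders);
	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		vmf->pte = pte + pte_index(addr);
		if (pte_range_none(vmf->pte, 1 << order))
			break;
		order = next_order(&orders, order);
	}

	vmf->pte = NULL;
	pte_unmap(pte);

	gfp = vma_thp_gfp_mask(vma);

	/* Try allocating the largest of the remaining orders. */
	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio) {
			clear_huge_page(&folio->page, addr, 1 << order);
			return folio;
		}
		order = next_order(&orders, order);
	}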
On 07.12.23 15:45, Ryan Roberts wrote: > On 07/12/2023 13:28, David Hildenbrand wrote: >>>> >>>> Right, but you know from the first loop which order is applicable (and will be >>>> fed to the second loop) and could just pte_unmap(pte) + tryalloc. If that fails, >>>> remap and try with the next orders. >>> >>> You mean something like this? >>> >>> pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); >>> if (!pte) >>> return ERR_PTR(-EAGAIN); >>> >>> order = highest_order(orders); >>> while (orders) { >>> addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); >>> if (!pte_range_none(pte + pte_index(addr), 1 << order)) { >>> order = next_order(&orders, order); >>> continue; >>> } >>> >>> pte_unmap(pte); >>> >>> folio = vma_alloc_folio(gfp, order, vma, addr, true); >>> if (folio) { >>> clear_huge_page(&folio->page, vmf->address, 1 << order); >>> return folio; >>> } >>> >>> pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); >>> if (!pte) >>> return ERR_PTR(-EAGAIN); >>> >>> order = next_order(&orders, order); >>> } >>> >>> pte_unmap(pte); >>> >>> I don't really like that because if high order folio allocations fail, then you >>> are calling pte_range_none() again for the next lower order; once that check has >>> succeeded for an order it shouldn't be required for any lower orders. In this >>> case you also have lots of pte map/unmap. >> >> I see what you mean. >> >>> >>> The original version feels more efficient to me. >> Yes it is. Adding in some comments might help, like >> >> /* >> * Find the largest order where the aligned range is completely prot_none(). Note >> * that all remaining orders will be completely prot_none(). >> */ >> ... >> >> /* Try allocating the largest of the remaining orders. */ > > OK added. > >> >>> >>>> >>>> That would make the code certainly easier to understand. That "orders" magic of >>>> constructing, filtering, walking is confusing :) >>>> >>>> >>>> I might find some time today to see if there is an easy way to cleanup all what >>>> I spelled out above. It really is a mess. But likely that cleanup could be >>>> deferred (but you're touching it, so ... :) ). >>> >>> I'm going to ignore the last 5 words. I heard the "that cleanup could be >>> deferred" part loud and clear though :) >> >> :) >> >> If we could stop passing orders into thp_vma_allowable_orders(), that would >> probably >> be the biggest win. It's just all a confusing mess. > > > > I tried an approach like you suggested in the other thread originally, but I > struggled to define exactly what "thp_vma_configured_orders()" should mean; > Ideally, I just want "all the THP orders that are currently enabled for this > VMA+flags". But some callers want to enforce_sysfs and others don't, so you > probably have to at least pass that flag. Then you have DAX which explicitly Yes, the flags would still be passed. It's kind of the "context". > ignores enforce_sysfs, but only in a page fault. And shmem, which ignores > enforce_sysfs, but only outside of a page fault. So it quickly becomes pretty > complex. It is basically thp_vma_allowable_orders() as currently defined. Yeah, but moving the "can we actually fit a THP in there" check out of the picture. > > If this could be a simple function then it could be inline and as you say, we > can do the masking in the caller and exit early for the order-0 case. 
But it is > very complex (at least if you want to retain the equivalent logic to what > thp_vma_allowable_orders() has) so I'm not sure how to do the order-0 early exit > without passing in the orders bitfield. And we are unlikely to exit early > because PMD-sized THP is likely enabled and because we didn't pass in a orders > bitfield, that wasn't filtered out. > > In short, I can't see a solution that's better than the one I have. But if you > have something in mind, if you can spell it out, then I'll have a go at tidying > it up and integrating it into the series. Otherwise I really would prefer to > leave it for a separate series. I'm playing with some cleanups, but they can all be built on top if they materialize.