[v9,00/10] Multi-size THP for anonymous memory

Message ID 20231207161211.2374093-1-ryan.roberts@arm.com

Ryan Roberts Dec. 7, 2023, 4:12 p.m. UTC
  Hi All,

This is v9 (and hopefully the last) of a series to implement multi-size THP
(mTHP) for anonymous memory (previously called "small-sized THP" and "large
anonymous folios").

The objective of this is to improve performance by allocating larger chunks of
memory during anonymous page faults:

1) Since SW (the kernel) is dealing with larger chunks of memory than base
   pages, there are efficiency savings to be had; fewer page faults, batched PTE
   and RMAP manipulation, reduced lru list manipulation, etc. In short, we reduce
   kernel overhead. This should benefit all architectures.
2) Since we are now mapping physically contiguous chunks of memory, we can take
   advantage of HW TLB compression techniques. A reduction in TLB pressure
   speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce
   TLB entries; "the contiguous bit" (architectural) and HPA (uarch).

This version incorporates David's feedback on the core patches (#3, #4) and adds
some Reviewed-by and Tested-by tags (see change log for details).

By default, the existing behaviour (and performance) is maintained. The user
must explicitly enable multi-size THP to see the performance benefit. This is
done via a new sysfs interface (as recommended by David Hildenbrand - thanks to
David for the suggestion!). This interface is inspired by the existing
per-hugepage-size sysfs interface used by hugetlb, provides full backwards
compatibility with the existing PMD-size THP interface, and provides a base for
future extensibility. See [9] for detailed discussion of the interface.
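
For example, on a system with 4K base pages the interface looks roughly like
this (illustrative only; the exact set of hugepage-XXkB directories depends on
the base page size and PMD size):

  /sys/kernel/mm/transparent_hugepage/enabled
  /sys/kernel/mm/transparent_hugepage/hugepage-64kB/enabled
  /sys/kernel/mm/transparent_hugepage/hugepage-2048kB/enabled

Each per-size "enabled" file accepts one of "always", "inherit", "madvise" or
"never", where "inherit" follows the top-level enabled setting.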

This series is based on mm-unstable (715b67adf4c8).


Prerequisites
=============

I'm removing this section because I no longer believe that what we were
previously calling prerequisites really are prerequisites. We originally
defined them when mTHP was a compile-time feature. There is now a runtime
control to opt in to mTHP; when disabled, correctness and performance are as
before. When enabled, the code is still correct/robust, but in the absence of
the one remaining item (compaction) there may be a performance impact in some
corners. See the old list in the v8 cover letter at [8], and a longer
explanation of my thinking at [10].

SUMMARY: I don't think we should hold this series up waiting for the items on
the prerequisites list. I believe the series is ready now, so hopefully it can
be added to mm-unstable for some testing, then fingers crossed for v6.8.


Testing
=======

The series includes patches for mm selftests to enlighten the cow and khugepaged
tests to explicitly test with multi-size THP, in the same way that PMD-sized
THP is tested. The new tests all pass, and no regressions are observed in the mm
selftest suite. I've also run my usual kernel compilation and JavaScript
benchmarks without any issues.

Refer to my performance numbers posted with v6 [6]. (These are for multi-size
THP only - they do not include the arm64 contpte follow-on series).

John Hubbard at Nvidia has indicated dramatic 10x performance improvements for
some workloads at [11]. (Observed using v6 of this series as well as the arm64
contpte series).

Kefeng Wang at Huawei has also indicated he sees improvements at [12], although
there are also some latency regressions.

I've also used a microbenchmark to check that there is no regression in the
write fault path when mTHP is disabled. I ran it for a baseline kernel, as well
as v8 and v9, on both Ampere Altra (bare metal) and Apple M2 (VM):

|              |        m2 vm        |        altra        |
|--------------|---------------------|---------------------|
| kernel       |     mean |  std_rel |     mean |  std_rel |
|--------------|----------|----------|----------|----------|
| baseline     |   0.000% |   0.341% |   0.000% |   3.581% |
| anonfolio-v8 |   0.005% |   0.272% |   5.068% |   1.128% |
| anonfolio-v9 |  -0.013% |   0.442% |   0.107% |   1.788% |

There is no measurable difference on M2, but the Altra shows a slowdown in v8
which is fixed in v9 by moving the THP order check to be inline within
thp_vma_allowable_orders(), as suggested by David.
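
For reference, the shape of that change is roughly as follows (paraphrased for
illustration; the names match patch 3, but see the patch itself for the real
hunk):

static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
		unsigned long vm_flags, bool smaps, bool in_pf,
		bool enforce_sysfs, unsigned long orders)
{
	/* Inline fast path: mask orders against the sysfs controls first. */
	if (enforce_sysfs && vma_is_anonymous(vma)) {
		unsigned long mask = READ_ONCE(huge_anon_orders_always);

		if (vm_flags & VM_HUGEPAGE)
			mask |= READ_ONCE(huge_anon_orders_madvise);
		if (hugepage_global_always() ||
		    ((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled()))
			mask |= READ_ONCE(huge_anon_orders_inherit);

		orders &= mask;
		if (!orders)
			return 0;	/* common order-0 case: no out-of-line call */
	}

	/* Slow path: the full checks live out of line. */
	return __thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf,
					  enforce_sysfs, orders);
}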


Changes since v8 [8]
====================

  - Added various Reviewed-by/Tested-by tags (Barry, David, Kefeng, John)
  - Patch 3:
      - Renamed first_order() -> highest_order() (David)
      - Made helpers for thp_vma_suitable_orders() and thp_vma_allowable_orders()
        that take a single unencoded order parameter: thp_vma_suitable_order()
        and thp_vma_allowable_order(), and use them to aid readability (David)
      - Split thp_vma_allowable_orders() into an order-0 fast-path inline and
        slow-path __thp_vma_allowable_orders() part (David)
      - Added spin lock to serialize changes to huge_anon_orders_* fields to
        prevent possibility of clearing all bits when threads are racing (David)
  - Patch 4:
      - Pass address of faulting page (not start of folio) to clear_huge_page()
      - Reverse xmas tree for variable lists (David)
      - Added unlikely() for uffd check (David)
      - Tidied up a local variable in alloc_anon_folio() (David)
      - Separated update_mmu_tlb() handling for nr_pages == 1 vs > 1 (David)


Changes since v7 [7]
====================

  - Renamed "small-sized THP" -> "multi-size THP" in commit logs
  - Added various Reviewed-by/Tested-by tags (Barry, David, Alistair)
  - Patch 3:
      - Fine-tuned transhuge documentation for multi-size THP (JohnH)
      - Converted hugepage_global_enabled() and hugepage_global_always() macros
        to static inline functions (JohnH)
      - Renamed hugepage_vma_check() to thp_vma_allowable_orders() (JohnH)
      - Renamed transhuge_vma_suitable() to thp_vma_suitable_orders() (JohnH)
      - Renamed "global" enabled sysfs file option to "inherit" (JohnH)
  - Patch 9:
      - cow selftest: Renamed param size -> thpsize (David)
      - cow selftest: Changed test fail to assert() (David)
      - cow selftest: Log PMD size separately from all the supported THP sizes
        (David)
  - Patch 10:
      - cow selftest: No longer special case pmdsize; keep all THP sizes in
        thpsizes[]


Changes since v6 [6]
====================

  - Refactored vmf_pte_range_changed() to remove uffd special-case (suggested by
    JohnH)
  - Dropped accounting patch (#3 in v6) (suggested by DavidH)
      - Continue to account *PMD-sized* THP only for now
      - Can add more counters in future if needed
      - Page cache large folios haven't needed any new counters yet
  - Pivot to sysfs ABI proposed by DavidH
      - per-size directories in a similar shape to that used by hugetlb
  - Dropped "recommend" keyword patch (#6 in v6) (suggested by DavidH, Yu Zhao)
      - For now, users need to understand implicitly which sizes are beneficial
        to their HW/SW
  - Dropped arch_wants_pte_order() patch (#7 in v6)
      - No longer needed after dropping the "recommend" keyword patch
  - Enlightened khugepaged mm selftest to explicitly test with small-size THP
  - Scrubbed commit logs to use "small-sized THP" consistently (suggested by
    DavidH)


Changes since v5 [5]
====================

  - Added accounting for PTE-mapped THPs (patch 3)
  - Added runtime control mechanism via sysfs as an extension to THP (patch 4)
  - Minor refactoring of alloc_anon_folio() to integrate with runtime controls
  - Stripped out hardcoded policy for allocation order; it's now all user space
    controlled (although user space can request "recommend" which will configure
    the HW-preferred order)


Changes since v4 [4]
====================

  - Removed "arm64: mm: Override arch_wants_pte_order()" patch; arm64
    now uses the default order-3 size. I have moved this patch over to
    the contpte series.
  - Added "mm: Allow deferred splitting of arbitrary large anon folios" back
    into the series. I originally removed this at v2 to add to a separate series,
    but that series has transformed significantly and it no longer fits, so I'm
    bringing it back here.
  - Reintroduced dependency on set_ptes(); Originally dropped this at v2, but
    set_ptes() is in mm-unstable now.
  - Updated policy for when to allocate LAF; only fall back to order-0 if
    MADV_NOHUGEPAGE is present or if THP is disabled via prctl; no longer rely on
    sysfs's never/madvise/always knob.
  - Fallback to order-0 whenever uffd is armed for the vma, not just when
    uffd-wp is set on the pte.
  - alloc_anon_folio() now returns `struct folio *`, where errors are encoded
    with ERR_PTR() (the resulting call-site pattern is sketched just below).

  The last 3 changes were proposed by Yu Zhao - thanks!
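
To illustrate that last change, the call site in do_anonymous_page() ends up
with a pattern along these lines (simplified; see patch 4 for the actual hunk):

	folio = alloc_anon_folio(vmf);
	if (IS_ERR(folio))
		return 0;	/* lost a race; let the fault be retried */
	if (!folio)
		goto oom;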


Changes since v3 [3]
====================

  - Renamed feature from FLEXIBLE_THP to LARGE_ANON_FOLIO.
  - Removed `flexthp_unhinted_max` boot parameter. Discussion concluded that a
    sysctl is preferable but we will wait until a real workload needs it.
  - Fixed uninitialized `addr` on read fault path in do_anonymous_page().
  - Added mm selftests for large anon folios in cow test suite.


Changes since v2 [2]
====================

  - Dropped commit "Allow deferred splitting of arbitrary large anon folios"
      - Huang, Ying suggested the "batch zap" work (which I dropped from this
        series after v1) is a prerequisite for merging FLEXIBLE_THP, so I've
        moved the deferred split patch to a separate series along with the batch
        zap changes. I plan to submit this series early next week.
  - Changed folio order fallback policy
      - We no longer iterate from preferred to 0 looking for an acceptable policy
      - Instead we iterate through preferred, PAGE_ALLOC_COSTLY_ORDER and 0 only
  - Removed vma parameter from arch_wants_pte_order()
  - Added command line parameter `flexthp_unhinted_max`
      - clamps preferred order when vma hasn't explicitly opted-in to THP
  - Never allocate large folio for MADV_NOHUGEPAGE vma (or when THP is disabled
    for process or system).
  - Simplified implementation and integration with do_anonymous_page()
  - Removed dependency on set_ptes()


Changes since v1 [1]
====================

  - removed changes to arch-dependent vma_alloc_zeroed_movable_folio()
  - replaced with arch-independent alloc_anon_folio()
      - follows THP allocation approach
  - no longer retry with intermediate orders if allocation fails
      - fallback directly to order-0
  - remove folio_add_new_anon_rmap_range() patch
      - instead add its new functionality to folio_add_new_anon_rmap()
  - remove batch-zap pte mappings optimization patch
      - remove enabler folio_remove_rmap_range() patch too
      - These offer a real perf improvement so I will submit them separately
  - simplify Kconfig
      - single FLEXIBLE_THP option, which is independent of arch
      - depends on TRANSPARENT_HUGEPAGE
      - when enabled default to max anon folio size of 64K unless arch
        explicitly overrides
  - simplify changes to do_anonymous_page():
      - no more retry loop


[1] https://lore.kernel.org/linux-mm/20230626171430.3167004-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/linux-mm/20230703135330.1865927-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/linux-mm/20230714160407.4142030-1-ryan.roberts@arm.com/
[4] https://lore.kernel.org/linux-mm/20230726095146.2826796-1-ryan.roberts@arm.com/
[5] https://lore.kernel.org/linux-mm/20230810142942.3169679-1-ryan.roberts@arm.com/
[6] https://lore.kernel.org/linux-mm/20230929114421.3761121-1-ryan.roberts@arm.com/
[7] https://lore.kernel.org/linux-mm/20231122162950.3854897-1-ryan.roberts@arm.com/
[8] https://lore.kernel.org/linux-mm/20231204102027.57185-1-ryan.roberts@arm.com/
[9] https://lore.kernel.org/linux-mm/6d89fdc9-ef55-d44e-bf12-fafff318aef8@redhat.com/
[10] https://lore.kernel.org/linux-mm/2de0617e-d1d7-49ec-9cb8-206eaf37caed@arm.com/
[11] https://lore.kernel.org/linux-mm/c507308d-bdd4-5f9e-d4ff-e96e4520be85@nvidia.com/
[12] https://lore.kernel.org/linux-mm/b8f5a47a-af1e-44ed-a89b-460d0be56d2c@huawei.com/


Thanks,
Ryan

Ryan Roberts (10):
  mm: Allow deferred splitting of arbitrary anon large folios
  mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
  mm: thp: Introduce multi-size THP sysfs interface
  mm: thp: Support allocation of anonymous multi-size THP
  selftests/mm/kugepaged: Restore thp settings at exit
  selftests/mm: Factor out thp settings management
  selftests/mm: Support multi-size THP interface in thp_settings
  selftests/mm/khugepaged: Enlighten for multi-size THP
  selftests/mm/cow: Generalize do_run_with_thp() helper
  selftests/mm/cow: Add tests for anonymous multi-size THP

 Documentation/admin-guide/mm/transhuge.rst |  97 ++++-
 Documentation/filesystems/proc.rst         |   6 +-
 fs/proc/task_mmu.c                         |   3 +-
 include/linux/huge_mm.h                    | 183 +++++++--
 mm/huge_memory.c                           | 231 ++++++++++--
 mm/khugepaged.c                            |  20 +-
 mm/memory.c                                | 117 +++++-
 mm/page_vma_mapped.c                       |   3 +-
 mm/rmap.c                                  |  32 +-
 tools/testing/selftests/mm/Makefile        |   4 +-
 tools/testing/selftests/mm/cow.c           | 185 +++++++---
 tools/testing/selftests/mm/khugepaged.c    | 410 ++++-----------------
 tools/testing/selftests/mm/run_vmtests.sh  |   2 +
 tools/testing/selftests/mm/thp_settings.c  | 349 ++++++++++++++++++
 tools/testing/selftests/mm/thp_settings.h  |  80 ++++
 15 files changed, 1218 insertions(+), 504 deletions(-)
 create mode 100644 tools/testing/selftests/mm/thp_settings.c
 create mode 100644 tools/testing/selftests/mm/thp_settings.h

--
2.25.1
  

Comments

Andrew Morton Dec. 7, 2023, 10:05 p.m. UTC | #1
On Thu,  7 Dec 2023 16:12:01 +0000 Ryan Roberts <ryan.roberts@arm.com> wrote:

> Hi All,
> 
> This is v9 (and hopefully the last) of a series to implement multi-size THP
> (mTHP) for anonymous memory (previously called "small-sized THP" and "large
> anonymous folios").

A general point on the [0/N] intro.  Bear in mind that this is
(intended to be) for ever.  Five years hence, people won't be
interested in knowing which version the patchset was, in seeing what
changed from the previous iteration, etc.  This is all important and
useful info, of course.  But it's best suited for being below the
"^---$" separator.

Also, those five-years-from-now people won't want to have to go click
on some link to find the performance testing results and suchlike. 
It's better to paste such important info right into their faces.
  
Ryan Roberts Dec. 11, 2023, 11:51 a.m. UTC | #2
On 07/12/2023 22:05, Andrew Morton wrote:
> On Thu,  7 Dec 2023 16:12:01 +0000 Ryan Roberts <ryan.roberts@arm.com> wrote:
> 
>> Hi All,
>>
>> This is v9 (and hopefully the last) of a series to implement multi-size THP
>> (mTHP) for anonymous memory (previously called "small-sized THP" and "large
>> anonymous folios").
> 
> A general point on the [0/N] intro.  Bear in mind that this is
> (intended to be) for ever.  Five years hence, people won't be
> interested in knowing which version the patchset was, in seeing what
> changed from the previous iteration, etc.  This is all important and
> useful info, of course.  But it's best suited for being below the
> "^---$" separator.
> 
> Also, those five-years-from-now people won't want to have to go click
> on some link to find the performance testing results and suchlike. 
> It's better to paste such important info right into their faces.

Sorry about this, Andrew - you've given me feedback on this before, and I've
been trying to improve this. I'm obviously not there yet. Will fix for next time.
  
David Hildenbrand Dec. 12, 2023, 3:02 p.m. UTC | #3
On 07.12.23 17:12, Ryan Roberts wrote:
> Introduce the logic to allow THP to be configured (through the new sysfs
> interface we just added) to allocate large folios to back anonymous
> memory, which are larger than the base page size but smaller than
> PMD-size. We call this new THP extension "multi-size THP" (mTHP).
> 
> mTHP continues to be PTE-mapped, but in many cases can still provide
> similar benefits to traditional PMD-sized THP: Page faults are
> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
> the configured order), but latency spikes are much less prominent
> because the size of each page isn't as huge as the PMD-sized variant and
> there is less memory to clear in each page fault. The number of per-page
> operations (e.g. ref counting, rmap management, lru list management) is
> also significantly reduced since those ops now become per-folio.

I'll note that with always-pte-mapped-thp it will be much easier to support
incremental page clearing (e.g., zero only parts of the folio and map the
remainder in a prot-none-like fashion whereby we'll zero on the next page fault).
With a PMD-sized thp, you have to eventually place/rip out page tables to
achieve that.

> 
> Some architectures also employ TLB compression mechanisms to squeeze
> more entries in when a set of PTEs are virtually and physically
> contiguous and appropriately aligned. In this case, TLB misses will
> occur less often.
> 
> The new behaviour is disabled by default, but can be enabled at runtime
> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
> (see documentation in previous commit). The long term aim is to change
> the default to include suitable lower orders, but there are some risks
> around internal fragmentation that need to be better understood first.
> 
> Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Tested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>   include/linux/huge_mm.h |   6 ++-
>   mm/memory.c             | 111 ++++++++++++++++++++++++++++++++++++----
>   2 files changed, 106 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 609c153bae57..fa7a38a30fc6 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>   #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)

[...]

> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> +{
> +	struct vm_area_struct *vma = vmf->vma;
> +	unsigned long orders;
> +	struct folio *folio;
> +	unsigned long addr;
> +	pte_t *pte;
> +	gfp_t gfp;
> +	int order;
> +
> +	/*
> +	 * If uffd is active for the vma we need per-page fault fidelity to
> +	 * maintain the uffd semantics.
> +	 */
> +	if (unlikely(userfaultfd_armed(vma)))
> +		goto fallback;
> +
> +	/*
> +	 * Get a list of all the (large) orders below PMD_ORDER that are enabled
> +	 * for this vma. Then filter out the orders that can't be allocated over
> +	 * the faulting address and still be fully contained in the vma.
> +	 */
> +	orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
> +					  BIT(PMD_ORDER) - 1);
> +	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
> +
> +	if (!orders)
> +		goto fallback;
> +
> +	pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
> +	if (!pte)
> +		return ERR_PTR(-EAGAIN);
> +
> +	/*
> +	 * Find the highest order where the aligned range is completely
> +	 * pte_none(). Note that all remaining orders will be completely
> +	 * pte_none().
> +	 */
> +	order = highest_order(orders);
> +	while (orders) {
> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> +		if (pte_range_none(pte + pte_index(addr), 1 << order))
> +			break;
> +		order = next_order(&orders, order);
> +	}
> +
> +	pte_unmap(pte);
> +
> +	/* Try allocating the highest of the remaining orders. */
> +	gfp = vma_thp_gfp_mask(vma);
> +	while (orders) {
> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> +		folio = vma_alloc_folio(gfp, order, vma, addr, true);
> +		if (folio) {
> +			clear_huge_page(&folio->page, vmf->address, 1 << order);
> +			return folio;
> +		}
> +		order = next_order(&orders, order);
> +	}
> +
> +fallback:
> +	return vma_alloc_zeroed_movable_folio(vma, vmf->address);
> +}
> +#else
> +#define alloc_anon_folio(vmf) \
> +		vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
> +#endif

A neater alternative might be

static struct folio *alloc_anon_folio(struct vm_fault *vmf)
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* magic */
fallback:
#endif
	return vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address);
}

[...]

Acked-by: David Hildenbrand <david@redhat.com>
  
David Hildenbrand Dec. 12, 2023, 4:35 p.m. UTC | #4
On 12.12.23 16:38, Ryan Roberts wrote:
> On 12/12/2023 15:02, David Hildenbrand wrote:
>> On 07.12.23 17:12, Ryan Roberts wrote:
>>> Introduce the logic to allow THP to be configured (through the new sysfs
>>> interface we just added) to allocate large folios to back anonymous
>>> memory, which are larger than the base page size but smaller than
>>> PMD-size. We call this new THP extension "multi-size THP" (mTHP).
>>>
>>> mTHP continues to be PTE-mapped, but in many cases can still provide
>>> similar benefits to traditional PMD-sized THP: Page faults are
>>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
>>> the configured order), but latency spikes are much less prominent
>>> because the size of each page isn't as huge as the PMD-sized variant and
>>> there is less memory to clear in each page fault. The number of per-page
>>> operations (e.g. ref counting, rmap management, lru list management) is
>>> also significantly reduced since those ops now become per-folio.
>>
>> I'll note that with always-pte-mapped-thp it will be much easier to support
>> incremental page clearing (e.g., zero only parts of the folio and map the
>> remainder in a prot-none-like fashion whereby we'll zero on the next page fault).
>> With a PMD-sized thp, you have to eventually place/rip out page tables to
>> achieve that.
> 
> But then you lose the benefits of reduced number of page faults; reducing page
> faults gives a big speed up for workloads with lots of short lived processes
> like compiling.

Well, you can do interesting things like "allocate order-5", but zero in
order-3 chunks. You get fewer page faults and pay for alloc/rmap only once.

But yes, all has pros and cons.

[...]

>>
>>>
>>> Some architectures also employ TLB compression mechanisms to squeeze
>>> more entries in when a set of PTEs are virtually and physically
>>> contiguous and appropriately aligned. In this case, TLB misses will
>>> occur less often.
>>>
>>> The new behaviour is disabled by default, but can be enabled at runtime
>>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
>>> (see documentation in previous commit). The long term aim is to change
>>> the default to include suitable lower orders, but there are some risks
>>> around internal fragmentation that need to be better understood first.
>>>
>>> Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>> Tested-by: John Hubbard <jhubbard@nvidia.com>
>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>> ---
>>>    include/linux/huge_mm.h |   6 ++-
>>>    mm/memory.c             | 111 ++++++++++++++++++++++++++++++++++++----
>>>    2 files changed, 106 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>> index 609c153bae57..fa7a38a30fc6 100644
>>> --- a/include/linux/huge_mm.h
>>> +++ b/include/linux/huge_mm.h
>>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>>>    #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>>
>> [...]
>>
>>> +
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>>> +{
>>> +    struct vm_area_struct *vma = vmf->vma;
>>> +    unsigned long orders;
>>> +    struct folio *folio;
>>> +    unsigned long addr;
>>> +    pte_t *pte;
>>> +    gfp_t gfp;
>>> +    int order;
>>> +
>>> +    /*
>>> +     * If uffd is active for the vma we need per-page fault fidelity to
>>> +     * maintain the uffd semantics.
>>> +     */
>>> +    if (unlikely(userfaultfd_armed(vma)))
>>> +        goto fallback;
>>> +
>>> +    /*
>>> +     * Get a list of all the (large) orders below PMD_ORDER that are enabled
>>> +     * for this vma. Then filter out the orders that can't be allocated over
>>> +     * the faulting address and still be fully contained in the vma.
>>> +     */
>>> +    orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
>>> +                      BIT(PMD_ORDER) - 1);
>>> +    orders = thp_vma_suitable_orders(vma, vmf->address, orders);
>>> +
>>> +    if (!orders)
>>> +        goto fallback;
>>> +
>>> +    pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
>>> +    if (!pte)
>>> +        return ERR_PTR(-EAGAIN);
>>> +
>>> +    /*
>>> +     * Find the highest order where the aligned range is completely
>>> +     * pte_none(). Note that all remaining orders will be completely
>>> +     * pte_none().
>>> +     */
>>> +    order = highest_order(orders);
>>> +    while (orders) {
>>> +        addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>>> +        if (pte_range_none(pte + pte_index(addr), 1 << order))
>>> +            break;
>>> +        order = next_order(&orders, order);
>>> +    }
>>> +
>>> +    pte_unmap(pte);
>>> +
>>> +    /* Try allocating the highest of the remaining orders. */
>>> +    gfp = vma_thp_gfp_mask(vma);
>>> +    while (orders) {
>>> +        addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>>> +        folio = vma_alloc_folio(gfp, order, vma, addr, true);
>>> +        if (folio) {
>>> +            clear_huge_page(&folio->page, vmf->address, 1 << order);
>>> +            return folio;
>>> +        }
>>> +        order = next_order(&orders, order);
>>> +    }
>>> +
>>> +fallback:
>>> +    return vma_alloc_zeroed_movable_folio(vma, vmf->address);
>>> +}
>>> +#else
>>> +#define alloc_anon_folio(vmf) \
>>> +        vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
>>> +#endif
>>
>> A neater alternative might be
>>
>> static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>> {
>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>      /* magic */
>> fallback:
>> #endif
>>      return vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address);
>> }
> 
> I guess beauty lies in the eye of the beholder... I don't find it much neater
> personally :). But happy to make the change if you insist; what's the process
> now that its in mm-unstable? Just send a patch to Andrew for squashing?

That way it is clear that the fallback for thp is just what !thp does.

But either is fine for me; no need to change if you disagree.
  
Dan Carpenter Dec. 13, 2023, 7:21 a.m. UTC | #5
On Thu, Dec 07, 2023 at 04:12:05PM +0000, Ryan Roberts wrote:
> @@ -4176,10 +4260,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	/* Allocate our own private page. */
>  	if (unlikely(anon_vma_prepare(vma)))
>  		goto oom;
> -	folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
> +	folio = alloc_anon_folio(vmf);
> +	if (IS_ERR(folio))
> +		return 0;
>  	if (!folio)
>  		goto oom;

Returning zero is weird.  I think it should be a vm_fault_t code.

This mixing of error pointers and NULL is going to cause problems.
Normally when we have a mix of error pointers and NULL then the NULL is
not an error but instead means that the feature has been deliberately
turned off.  I'm unable to figure out what the meaning is here.

It should return one or the other, or if it's a mix then add a giant
comment explaining what they mean.

regards,
dan carpenter

>  
> +	nr_pages = folio_nr_pages(folio);
> +	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
> +
>  	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
>  		goto oom_free_page;
>  	folio_throttle_swaprate(folio, GFP_KERNEL);
  
Dan Carpenter Dec. 14, 2023, 11:30 a.m. UTC | #6
On Thu, Dec 14, 2023 at 10:54:19AM +0000, Ryan Roberts wrote:
> On 13/12/2023 07:21, Dan Carpenter wrote:
> > On Thu, Dec 07, 2023 at 04:12:05PM +0000, Ryan Roberts wrote:
> >> @@ -4176,10 +4260,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >>  	/* Allocate our own private page. */
> >>  	if (unlikely(anon_vma_prepare(vma)))
> >>  		goto oom;
> >> -	folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
> >> +	folio = alloc_anon_folio(vmf);
> >> +	if (IS_ERR(folio))
> >> +		return 0;
> >>  	if (!folio)
> >>  		goto oom;
> > 
> > Returning zero is weird.  I think it should be a vm_fault_t code.
> 
> It's the same pattern that the existing code a little further down this function
> already implements:
> 
> 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
> 	if (!vmf->pte)
> 		goto release;
> 
> If we fail to map/lock the pte (due to a race), then we return 0 to allow user
> space to rerun the faulting instruction and cause the fault to happen again. The
> above code ends up calling "return ret;" and ret is 0.
> 

Ah, okay.  Thanks!

> > 
> > This mixing of error pointers and NULL is going to cause problems.
> > Normally when we have a mix of error pointers and NULL then the NULL is
> > not an error but instead means that the feature has been deliberately
> > turned off.  I'm unable to figure out what the meaning is here.
> 
> There are 3 conditions that the function can return:
> 
>  - folio successfully allocated
>  - folio failed to be allocated due to OOM
>  - fault needs to be tried again due to losing race
> 
> Previously only the first 2 conditions were possible and they were indicated by
> NULL/not-NULL. The new 3rd condition is only possible when THP is compile-time
> enabled. So it keeps the logic simpler to keep the NULL/not-NULL distinction for
> the first 2, and use the error code for the final one.
> 
> There are IS_ERR() and IS_ERR_OR_NULL() variants so I assume a pattern where you
> can have pointer, error or NULL is somewhat common already?

People are confused by this a lot so I have written a blog about it:

https://staticthinking.wordpress.com/2022/08/01/mixing-error-pointers-and-null/

The IS_ERR_OR_NULL() function should be used like this:

int blink_leds(void)
{
	led = get_leds();
	if (IS_ERR_OR_NULL(led))
		return PTR_ERR(led);  /* NULL means zero/success */
	return led->blink();
}

In the case of alloc_anon_folio(), I would be tempted to create a
wrapper around it where NULL becomes ERR_PTR(-ENOMEM).  But this is
obviously fast path code and I haven't benchmarked it.
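
For illustration, a minimal sketch of such a wrapper might be (hypothetical
name; not part of the series):

static struct folio *alloc_anon_folio_checked(struct vm_fault *vmf)
{
	struct folio *folio = alloc_anon_folio(vmf);

	/* Fold the NULL (OOM) case into the error-pointer convention. */
	return folio ? folio : ERR_PTR(-ENOMEM);
}

The caller would then check only IS_ERR(), mapping -EAGAIN to "retry the
fault" and -ENOMEM to the oom path.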

Adding a comment is the other option.

regards,
dan carpenter