[RFC,v1,0/2] Swap-out small-sized THP without splitting

Message ID 20231010142111.3997780-1-ryan.roberts@arm.com
Series: Swap-out small-sized THP without splitting

Message

Ryan Roberts Oct. 10, 2023, 2:21 p.m. UTC
  Hi All,

This is an RFC for a small series to add support for swapping out small-sized
THP without needing to first split the large folio via __split_huge_page(). It
closely follows the approach already used by PMD-sized THP.

"Small-sized THP" is an upcoming feature that enables performance improvements
by allocating large folios for anonymous memory, where the large folio size is
smaller than the traditional PMD-size. See [1].

In some circumstances I've observed a performance regression (see patch 2 for
details), and this series is an attempt to fix the regression in advance of
merging small-sized THP support.

I've done what I thought was the smallest change possible, and as a result, this
approach is only employed when the swap is backed by a non-rotating block device
(just as PMD-sized THP is supported today). However, I have a few questions on
whether we should consider relaxing those requirements in certain circumstances:


1) block-backed vs file-backed
==============================

The code only attempts to allocate a contiguous set of entries if swap is backed
by a block device (i.e. not file-backed). The original commit, f0eea189e8e9
("mm, THP, swap: don't allocate huge cluster for file backed swap device"),
stated "It's hard to write a whole transparent huge page (THP) to a file backed
swap device", but didn't state why. Does this imply there is a size limit at
which it becomes hard? And does that therefore imply that for "small enough"
sizes we should now allow use with file-backed swap?

That original commit was subsequently fixed by commit 41663430588c ("mm, THP,
swap: fix allocating cluster for swapfile by mistake"), which said the original
commit was using the wrong flag to determine if the swap was a block device,
and therefore in some cases was actually doing large allocations for a
file-backed swap device, which caused file-system corruption. But that implies
some sort of correctness issue to me, rather than the performance issue I
inferred from the original commit.

If anyone can offer an explanation, that would be helpful in determining if we
should allow some large sizes for file-backed swap.


2) rotating vs non-rotating
===========================

I notice that the clustered approach is only used for non-rotating swap. That
implies that for rotating media we will always fail a large allocation and fall
back to splitting THPs into single pages, which in turn implies that the
regression I'm fixing here may still be present on rotating media? Or perhaps
rotating disks are so slow that the cost of writing the data out dominates the
cost of splitting?

I considered that the free swap entry search algorithm used in this case could
potentially be modified to look for (small) contiguous runs of entries; up to
~16 pages (order-4) could be checked with 2x 64-bit reads from the swap map
instead of single-byte reads (see the sketch below).
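
A minimal userspace sketch of what I mean (assuming, as today, that a free
entry is recorded as a zero byte in the swap map; the function name is made
up):

#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Return the offset of the first naturally aligned run of 16 free entries
 * in a byte-per-entry swap map, or -1 if there is none. */
static long find_free_run16(const unsigned char *map, size_t nr_entries)
{
	for (size_t off = 0; off + 16 <= nr_entries; off += 16) {
		uint64_t lo, hi;

		memcpy(&lo, map + off, sizeof(lo));     /* entries off..off+7  */
		memcpy(&hi, map + off + 8, sizeof(hi)); /* entries off+8..off+15 */

		if ((lo | hi) == 0)
			return (long)off;       /* 16 contiguous free slots */
	}
	return -1;
}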

I haven't looked into this idea in detail, but wonder if anybody thinks it is
worth the effort? Or perhaps it would end up causing bad fragmentation.


Finally on testing, I've run the mm selftests and see no regressions, but I
don't think there is anything in there specifically aimed towards swap? Are
there any functional or performance tests that I should run? It would certainly
be good to confirm I haven't regressed PMD-size THP swap performance.

Thanks,
Ryan

[1] https://lore.kernel.org/linux-mm/15a52c3d-9584-449b-8228-1335e0753b04@arm.com/

Ryan Roberts (2):
  mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
  mm: swap: Swap-out small-sized THP without splitting

 include/linux/swap.h |  17 +++----
 mm/huge_memory.c     |   3 --
 mm/swapfile.c        | 105 ++++++++++++++++++++++---------------------
 mm/vmscan.c          |  10 +++--
 4 files changed, 66 insertions(+), 69 deletions(-)

--
2.25.1
  

Comments

Huang, Ying Oct. 11, 2023, 6:37 a.m. UTC | #1
Ryan Roberts <ryan.roberts@arm.com> writes:

> Hi All,
>
> This is an RFC for a small series to add support for swapping out small-sized
> THP without needing to first split the large folio via __split_huge_page(). It
> closely follows the approach already used by PMD-sized THP.
>
> "Small-sized THP" is an upcoming feature that enables performance improvements
> by allocating large folios for anonymous memory, where the large folio size is
> smaller than the traditional PMD-size. See [1].
>
> In some circumstances I've observed a performance regression (see patch 2 for
> details), and this series is an attempt to fix the regression in advance of
> merging small-sized THP support.
>
> I've done what I thought was the smallest change possible, and as a result, this
> approach is only employed when the swap is backed by a non-rotating block device
> (just as PMD-sized THP is supported today). However, I have a few questions on
> whether we should consider relaxing those requirements in certain circumstances:
>
>
> 1) block-backed vs file-backed
> ==============================
>
> The code only attempts to allocate a contiguous set of entries if swap is backed
> by a block device (i.e. not file-backed). The original commit, f0eea189e8e9
> ("mm, THP, swap: don't allocate huge cluster for file backed swap device"),
> stated "It's hard to write a whole transparent huge page (THP) to a file backed
> swap device". But didn't state why. Does this imply there is a size limit at
> which it becomes hard? And does that therefore imply that for "small enough"
> sizes we should now allow use with file-back swap?
>
> This original commit was subsequently fixed with commit 41663430588c ("mm, THP,
> swap: fix allocating cluster for swapfile by mistake"), which said the original
> commit was using the wrong flag to determine if it was a block device and
> therefore in some cases was actually doing large allocations for a file-backed
> swap device, and this was causing file-system corruption. But that implies some
> sort of correctness issue to me, rather than the performance issue I inferred
> from the original commit.
>
> If anyone can offer an explanation, that would be helpful in determining if we
> should allow some large sizes for file-backed swap.

Swap uses 'swap extents' (swap_info_struct.swap_extent_root) to map from a
swap offset to a storage block number.  For block-backed swap, the mapping is
purely linear, so you can use an arbitrarily large page size.  But for
file-backed swap, only PAGE_SIZE alignment is guaranteed.
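
As a toy model (illustrative only, not the kernel's actual swap_extent code):
each extent maps a run of swap page offsets to a run of device blocks, and a
multi-page write is only straightforward when the whole range falls inside one
extent.

struct toy_extent {
	unsigned long start_page;   /* first swap page offset covered */
	unsigned long nr_pages;     /* number of page offsets covered */
	unsigned long start_block;  /* device block backing start_page */
};

/*
 * A swap entry range [offset, offset + nr) maps to contiguous blocks only if
 * it fits entirely within one extent.  Block-backed swap is effectively a
 * single linear extent, so any range qualifies; a swapfile may be split
 * across many extents with only PAGE_SIZE granularity guaranteed.
 */
static int range_maps_contiguously(const struct toy_extent *ext, int nr_ext,
				   unsigned long offset, unsigned long nr)
{
	for (int i = 0; i < nr_ext; i++) {
		if (offset >= ext[i].start_page &&
		    offset + nr <= ext[i].start_page + ext[i].nr_pages)
			return 1;
	}
	return 0;
}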

> 2) rotating vs non-rotating
> ===========================
>
> I notice that the clustered approach is only used for non-rotating swap. That
> implies that for rotating media, we will always fail a large allocation, and
> fall back to splitting THPs to single pages. Which implies that the regression
> I'm fixing here may still be present on rotating media? Or perhaps rotating disk
> is so slow that the cost of writing the data out dominates the cost of
> splitting?
>
> I considered that potentially the free swap entry search algorithm that is used
> in this case could be modified to look for (small) contiguous runs of entries;
> Up to ~16 pages (order-4) could be done by doing 2x 64bit reads from map instead
> of single byte.
>
> I haven't looked into this idea in detail, but wonder if anybody thinks it is
> worth the effort? Or perhaps it would end up causing bad fragmentation.

I doubt anybody will use rotating storage to back swap now.

> Finally on testing, I've run the mm selftests and see no regressions, but I
> don't think there is anything in there specifically aimed towards swap? Are
> there any functional or performance tests that I should run? It would certainly
> be good to confirm I haven't regressed PMD-size THP swap performance.

I have used the swap sub test case of vm-scalability to test.

https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/

--
Best Regards,
Huang, Ying
  
Ryan Roberts Oct. 11, 2023, 7:42 a.m. UTC | #2
On 11/10/2023 07:37, Huang, Ying wrote:
> Ryan Roberts <ryan.roberts@arm.com> writes:
> 
>> Hi All,
>>
>> This is an RFC for a small series to add support for swapping out small-sized
>> THP without needing to first split the large folio via __split_huge_page(). It
>> closely follows the approach already used by PMD-sized THP.
>>
>> "Small-sized THP" is an upcoming feature that enables performance improvements
>> by allocating large folios for anonymous memory, where the large folio size is
>> smaller than the traditional PMD-size. See [1].
>>
>> In some circumstances I've observed a performance regression (see patch 2 for
>> details), and this series is an attempt to fix the regression in advance of
>> merging small-sized THP support.
>>
>> I've done what I thought was the smallest change possible, and as a result, this
>> approach is only employed when the swap is backed by a non-rotating block device
>> (just as PMD-sized THP is supported today). However, I have a few questions on
>> whether we should consider relaxing those requirements in certain circumstances:
>>
>>
>> 1) block-backed vs file-backed
>> ==============================
>>
>> The code only attempts to allocate a contiguous set of entries if swap is backed
>> by a block device (i.e. not file-backed). The original commit, f0eea189e8e9
>> ("mm, THP, swap: don't allocate huge cluster for file backed swap device"),
>> stated "It's hard to write a whole transparent huge page (THP) to a file backed
>> swap device". But didn't state why. Does this imply there is a size limit at
>> which it becomes hard? And does that therefore imply that for "small enough"
>> sizes we should now allow use with file-back swap?
>>
>> This original commit was subsequently fixed with commit 41663430588c ("mm, THP,
>> swap: fix allocating cluster for swapfile by mistake"), which said the original
>> commit was using the wrong flag to determine if it was a block device and
>> therefore in some cases was actually doing large allocations for a file-backed
>> swap device, and this was causing file-system corruption. But that implies some
>> sort of correctness issue to me, rather than the performance issue I inferred
>> from the original commit.
>>
>> If anyone can offer an explanation, that would be helpful in determining if we
>> should allow some large sizes for file-backed swap.
> 
> swap use 'swap extent' (swap_info_struct.swap_extent_root) to map from
> swap offset to storage block number.  For block-backed swap, the mapping
> is pure linear.  So, you can use arbitrary large page size.  But for
> file-backed swap, only PAGE_SIZE alignment is guaranteed.

Ahh, I see, so it's a correctness issue then. Thanks!


> 
>> 2) rotating vs non-rotating
>> ===========================
>>
>> I notice that the clustered approach is only used for non-rotating swap. That
>> implies that for rotating media, we will always fail a large allocation, and
>> fall back to splitting THPs to single pages. Which implies that the regression
>> I'm fixing here may still be present on rotating media? Or perhaps rotating disk
>> is so slow that the cost of writing the data out dominates the cost of
>> splitting?
>>
>> I considered that potentially the free swap entry search algorithm that is used
>> in this case could be modified to look for (small) contiguous runs of entries;
>> Up to ~16 pages (order-4) could be done by doing 2x 64bit reads from map instead
>> of single byte.
>>
>> I haven't looked into this idea in detail, but wonder if anybody thinks it is
>> worth the effort? Or perhaps it would end up causing bad fragmentation.
> 
> I doubt anybody will use rotating storage to back swap now.

I'm often using a QEMU VM for testing, with an Ubuntu install. The disk
enumerates as rotating storage and the swap device is file-backed. But I guess
the former issue, at least, is down to me setting up QEMU with the wrong
options.

> 
>> Finally on testing, I've run the mm selftests and see no regressions, but I
>> don't think there is anything in there specifically aimed towards swap? Are
>> there any functional or performance tests that I should run? It would certainly
>> be good to confirm I haven't regressed PMD-size THP swap performance.
> 
> I have used swap sub test case of vm-scalbility to test.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/

Great - I shall take a look!

> 
> --
> Best Regards,
> Huang, Ying
  
Ryan Roberts Oct. 13, 2023, 4:31 p.m. UTC | #3
On 11/10/2023 07:37, Huang, Ying wrote:
> Ryan Roberts <ryan.roberts@arm.com> writes:
> 
> [...]
> 
>> Finally on testing, I've run the mm selftests and see no regressions, but I
>> don't think there is anything in there specifically aimed towards swap? Are
>> there any functional or performance tests that I should run? It would certainly
>> be good to confirm I haven't regressed PMD-size THP swap performance.
> 
> I have used swap sub test case of vm-scalbility to test.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/

I ended up using `usemem`, which is the core of this test suite, but deviated
from the pre-canned test case to allow me to use anonymous memory and get
numbers for small-sized THP. (This is a very useful tool - thanks for pointing
it out!)

I've run the tests on Ampere Altra, set up with a 35G block ram device as the
swap device and from inside a memcg limited to 40G memory. I've then run
`usemem` with 70 processes (each has its own core), each allocating and writing
1G of memory. I've repeated everything 5 times and taken the mean and stdev:


Mean Performance Improvement vs 4K/baseline

| alloc size |            baseline |    remove-huge-flag | swap-file-small-thp |
|            |  v6.6-rc4+anonfolio |           + patch 1 |           + patch 2 |
|:-----------|--------------------:|--------------------:|--------------------:|
| 4K Page    |                0.0% |                2.3% |                9.1% |
| 64K THP    |              -44.1% |              -46.3% |               30.6% |
| 2M THP     |               56.0% |               54.2% |               60.1% |


Standard Deviation as Percentage of Mean

| alloc size |            baseline |    remove-huge-flag | swap-file-small-thp |
|            |  v6.6-rc4+anonfolio |           + patch 1 |           + patch 2 |
|:-----------|--------------------:|--------------------:|--------------------:|
| 4K Page    |                3.4% |                7.1% |                1.7% |
| 64K THP    |                1.9% |                5.6% |                7.7% |
| 2M THP     |                1.9% |                2.1% |                3.2% |


I don't see any meaningful performance cost to removing the HUGE flag, so
hopefully this gives us confidence to move forward with patch 1.

You can indeed see the performance regression in the baseline when THP is
configured to allocate small-sized THP only (in this case 64K). And you can see
the regression is fixed by patch 2, which avoids splitting the THP and thus
avoids the extra TLBIs. This correlates with what I saw in the kernel
compilation workload.

Huang Ying, based on these results, do you still want me to pursue a per-cpu
solution to avoid potential contention on the swap info lock? If so, I proposed
in the thread against patch 2 to do this in the swap_slots layer rather than in
swapfile.c directly (I'm not sure how your original proposal would actually
work?). But based on these results, it's not obvious to me that there is a
definite problem here, and it might be simpler to avoid the complexity?

Thanks,
Ryan

> 
> --
> Best Regards,
> Huang, Ying