[v4,6/7] mm: truncate: split huge page cache page to a non-zero order if possible.

Message ID 20240213215520.1048625-7-zi.yan@sent.com
State New
Series Split a folio to any lower order folios

Commit Message

Zi Yan Feb. 13, 2024, 9:55 p.m. UTC
  From: Zi Yan <ziy@nvidia.com>

To minimize the number of pages after a huge page truncation, we do not
need to split it all the way down to order-0. The huge page has at most
three parts: the part before the offset, the part to be truncated, and
the part remaining at the end. Find the greatest common divisor of their
sizes and calculate the new page order from it, so we can split the huge
page to this order and keep the remaining pages as large and as few as
possible.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/truncate.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)
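
For illustration, here is a minimal userspace sketch of the order
calculation this patch performs (not kernel code; PAGE_SIZE, the 2MB
folio, and the offsets are assumed example values, and gcd(),
round_up(), and ilog2() are simplified stand-ins for the kernel
helpers):

#include <stdio.h>

#define PAGE_SIZE 4096u

/* Simplified stand-ins for the kernel's gcd(), round_up(), and ilog2(). */
static unsigned int gcd(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned int round_up(unsigned int x, unsigned int m)
{
	return (x + m - 1) / m * m;
}

static unsigned int ilog2(unsigned int x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned int folio_size = 512 * PAGE_SIZE;	/* 2MB, order-9 folio */
	unsigned int offset = 128 * PAGE_SIZE;		/* 512KB kept before the hole */
	unsigned int length = 64 * PAGE_SIZE;		/* 256KB truncated */
	unsigned int remaining = folio_size - offset - length; /* 1280KB kept after */
	unsigned int new_order;

	/*
	 * gcd(128, 64, 320) pages = 64 pages. The round_up() only matters
	 * when the truncation is not page aligned and the gcd is sub-page,
	 * in which case it forces order-0 instead of ilog2(0).
	 */
	new_order = ilog2(round_up(gcd(gcd(offset, length), remaining),
				   PAGE_SIZE) / PAGE_SIZE);

	printf("new_order = %u\n", new_order);	/* prints 6 */
	return 0;
}

With these example values the three parts are 128, 64, and 320 pages, so
the 2MB folio is split into order-6 (64-page) folios: two kept before the
hole, one covering the truncated range, and five kept at the end.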
  

Comments

Zi Yan Feb. 14, 2024, 4:19 p.m. UTC | #1
On 14 Feb 2024, at 5:43, Ryan Roberts wrote:

> On 13/02/2024 21:55, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> To minimize the number of pages after a huge page truncation, we do not
>> need to split it all the way down to order-0. The huge page has at most
>> three parts: the part before the offset, the part to be truncated, and
>> the part remaining at the end. Find the greatest common divisor of their
>> sizes and calculate the new page order from it, so we can split the huge
>> page to this order and keep the remaining pages as large and as few as
>> possible.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>  mm/truncate.c | 21 +++++++++++++++++++--
>>  1 file changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/truncate.c b/mm/truncate.c
>> index 725b150e47ac..49ddbbf7a617 100644
>> --- a/mm/truncate.c
>> +++ b/mm/truncate.c
>> @@ -21,6 +21,7 @@
>>  #include <linux/task_io_accounting_ops.h>
>>  #include <linux/shmem_fs.h>
>>  #include <linux/rmap.h>
>> +#include <linux/gcd.h>
>>  #include "internal.h"
>>
>>  /*
>> @@ -210,7 +211,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
>>  bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>>  {
>>  	loff_t pos = folio_pos(folio);
>> -	unsigned int offset, length;
>> +	unsigned int offset, length, remaining;
>> +	unsigned int new_order = folio_order(folio);
>>
>>  	if (pos < start)
>>  		offset = start - pos;
>> @@ -221,6 +223,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>>  		length = length - offset;
>>  	else
>>  		length = end + 1 - pos - offset;
>> +	remaining = folio_size(folio) - offset - length;
>>
>>  	folio_wait_writeback(folio);
>>  	if (length == folio_size(folio)) {
>> @@ -235,11 +238,25 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>>  	 */
>>  	folio_zero_range(folio, offset, length);
>>
>> +	/*
>> +	 * Use the greatest common divisor of offset, length, and remaining
>> +	 * as the smallest page size and compute the new order from it, so we
>> +	 * can truncate subpages that are as large as possible. Round up gcd to
>> +	 * PAGE_SIZE; otherwise ilog2 can give -1 when gcd/PAGE_SIZE is 0.
>> +	 */
>> +	new_order = ilog2(round_up(gcd(gcd(offset, length), remaining),
>> +				   PAGE_SIZE) / PAGE_SIZE);
>
> Given you have up to 2 regions remaining, isn't it possible that you want a
> different order for each of those regions (or even multiple orders within the
> same region)? I guess you just chose gcd for simplicity?

Right. You raise the same concern as Hugh[1]. I am minimizing the number
of calls to split_huge_page_to_list_to_order(), whereas you and Hugh want
to minimize the number of folios after the split. Your approach would give
a better outcome after the split, but requires either multiple calls or a
more sophisticated implementation of page split[2]. We can probably
revisit this once splitting to any order gets wider use.
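
For a concrete example (numbers assumed for illustration): truncating a
single page at page index 1 of a 2MB (512-page) folio gives offset =
length = PAGE_SIZE and remaining = 510 * PAGE_SIZE, so gcd = PAGE_SIZE
and the whole folio is split into 512 order-0 folios, whereas a
region-aware split could keep the 510 trailing pages in just eight
buddy-aligned folios of orders 1 through 8.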

>> +
>> +	/* order-1 THP not supported, downgrade to order-0 */
>> +	if (new_order == 1)
>> +		new_order = 0;
>
> I guess this would need to change if supporting order-1 file folios?

Right.

>> +
>> +
>>  	if (folio_has_private(folio))
>>  		folio_invalidate(folio, offset, length);
>>  	if (!folio_test_large(folio))
>>  		return true;
>> -	if (split_folio(folio) == 0)
>> +	if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
>
> I know you are discussing removing this patch, but since you created
> split_folio_to_order() wouldn't that be better here?

Sure. Will change the patch locally.
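
A sketch of the updated hunk (assuming split_folio_to_order() from
earlier in this series keeps its (folio, new_order) signature):

-	if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
+	if (split_folio_to_order(folio, new_order) == 0)
 		return true;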

[1] https://lore.kernel.org/linux-mm/9dd96da-efa2-5123-20d4-4992136ef3ad@google.com/
[2] https://lore.kernel.org/linux-mm/0AC0520E-1BD2-497E-A7ED-05394400BFC9@nvidia.com/

--
Best Regards,
Yan, Zi
  
Ryan Roberts Feb. 14, 2024, 4:25 p.m. UTC | #2
On 14/02/2024 16:19, Zi Yan wrote:
> On 14 Feb 2024, at 5:43, Ryan Roberts wrote:
> 
>> On 13/02/2024 21:55, Zi Yan wrote:
>>> From: Zi Yan <ziy@nvidia.com>
>>>
>>> To minimize the number of pages after a huge page truncation, we do not
>>> need to split it all the way down to order-0. The huge page has at most
>>> three parts: the part before the offset, the part to be truncated, and
>>> the part remaining at the end. Find the greatest common divisor of their
>>> sizes and calculate the new page order from it, so we can split the huge
>>> page to this order and keep the remaining pages as large and as few as
>>> possible.
>>>
>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>> ---
>>>  mm/truncate.c | 21 +++++++++++++++++++--
>>>  1 file changed, 19 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/truncate.c b/mm/truncate.c
>>> index 725b150e47ac..49ddbbf7a617 100644
>>> --- a/mm/truncate.c
>>> +++ b/mm/truncate.c
>>> @@ -21,6 +21,7 @@
>>>  #include <linux/task_io_accounting_ops.h>
>>>  #include <linux/shmem_fs.h>
>>>  #include <linux/rmap.h>
>>> +#include <linux/gcd.h>
>>>  #include "internal.h"
>>>
>>>  /*
>>> @@ -210,7 +211,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
>>>  bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>>>  {
>>>  	loff_t pos = folio_pos(folio);
>>> -	unsigned int offset, length;
>>> +	unsigned int offset, length, remaining;
>>> +	unsigned int new_order = folio_order(folio);
>>>
>>>  	if (pos < start)
>>>  		offset = start - pos;
>>> @@ -221,6 +223,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>>>  		length = length - offset;
>>>  	else
>>>  		length = end + 1 - pos - offset;
>>> +	remaining = folio_size(folio) - offset - length;
>>>
>>>  	folio_wait_writeback(folio);
>>>  	if (length == folio_size(folio)) {
>>> @@ -235,11 +238,25 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>>>  	 */
>>>  	folio_zero_range(folio, offset, length);
>>>
>>> +	/*
>>> +	 * Use the greatest common divisor of offset, length, and remaining
>>> +	 * as the smallest page size and compute the new order from it, so we
>>> +	 * can truncate subpages that are as large as possible. Round up gcd to
>>> +	 * PAGE_SIZE; otherwise ilog2 can give -1 when gcd/PAGE_SIZE is 0.
>>> +	 */
>>> +	new_order = ilog2(round_up(gcd(gcd(offset, length), remaining),
>>> +				   PAGE_SIZE) / PAGE_SIZE);
>>
>> Given you have up to 2 regions remaining, isn't it possible that you want a
>> different order for each of those regions (or even multiple orders within the
>> same region)? I guess you just chose gcd for simplicity?
> 
> Right. You raise the same concern as Hugh[1]. I am minimizing the number
> of calls to split_huge_page_to_list_to_order(), whereas you and Hugh want
> to minimize the number of folios after the split. Your approach would give
> a better outcome after the split, but requires either multiple calls or a
> more sophisticated implementation of page split[2]. We can probably
> revisit this once splitting to any order gets wider use.

Yeah, fair enough. Sorry, I hadn't read Hugh's original feedback.

> 
>>> +
>>> +	/* order-1 THP not supported, downgrade to order-0 */
>>> +	if (new_order == 1)
>>> +		new_order = 0;
>>
>> I guess this would need to change if supporting order-1 file folios?
> 
> Right.
> 
>>> +
>>> +
>>>  	if (folio_has_private(folio))
>>>  		folio_invalidate(folio, offset, length);
>>>  	if (!folio_test_large(folio))
>>>  		return true;
>>> -	if (split_folio(folio) == 0)
>>> +	if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
>>
>> I know you are discussing removing this patch, but since you created
>> split_folio_to_order() wouldn't that be better here?
> 
> Sure. Will change the patch locally.
> 
> [1] https://lore.kernel.org/linux-mm/9dd96da-efa2-5123-20d4-4992136ef3ad@google.com/
> [2] https://lore.kernel.org/linux-mm/0AC0520E-1BD2-497E-A7ED-05394400BFC9@nvidia.com/
> 
> --
> Best Regards,
> Yan, Zi
  

Patch

diff --git a/mm/truncate.c b/mm/truncate.c
index 725b150e47ac..49ddbbf7a617 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -21,6 +21,7 @@
 #include <linux/task_io_accounting_ops.h>
 #include <linux/shmem_fs.h>
 #include <linux/rmap.h>
+#include <linux/gcd.h>
 #include "internal.h"
 
 /*
@@ -210,7 +211,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 {
 	loff_t pos = folio_pos(folio);
-	unsigned int offset, length;
+	unsigned int offset, length, remaining;
+	unsigned int new_order = folio_order(folio);
 
 	if (pos < start)
 		offset = start - pos;
@@ -221,6 +223,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 		length = length - offset;
 	else
 		length = end + 1 - pos - offset;
+	remaining = folio_size(folio) - offset - length;
 
 	folio_wait_writeback(folio);
 	if (length == folio_size(folio)) {
@@ -235,11 +238,25 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	 */
 	folio_zero_range(folio, offset, length);
 
+	/*
+	 * Use the greatest common divisor of offset, length, and remaining
+	 * as the smallest page size and compute the new order from it, so we
+	 * can truncate subpages that are as large as possible. Round up gcd to
+	 * PAGE_SIZE; otherwise ilog2 can give -1 when gcd/PAGE_SIZE is 0.
+	 */
+	new_order = ilog2(round_up(gcd(gcd(offset, length), remaining),
+				   PAGE_SIZE) / PAGE_SIZE);
+
+	/* order-1 THP not supported, downgrade to order-0 */
+	if (new_order == 1)
+		new_order = 0;
+
+
 	if (folio_has_private(folio))
 		folio_invalidate(folio, offset, length);
 	if (!folio_test_large(folio))
 		return true;
-	if (split_folio(folio) == 0)
+	if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
 		return true;
 	if (folio_test_dirty(folio))
 		return false;