[v1,04/10] mm: Implement folio_add_new_anon_rmap_range()

Message ID 20230626171430.3167004-5-ryan.roberts@arm.com
State New
Series variable-order, large folios for anonymous memory

Commit Message

Ryan Roberts June 26, 2023, 5:14 p.m. UTC
  Like folio_add_new_anon_rmap() but batch-rmaps a range of pages
belonging to a folio, for efficiency savings. All pages are accounted as
small pages.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+)
  

Comments

Yu Zhao June 27, 2023, 7:08 a.m. UTC | #1
On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> Like folio_add_new_anon_rmap() but batch-rmaps a range of pages
> belonging to a folio, for efficiency savings. All pages are accounted as
> small pages.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  include/linux/rmap.h |  2 ++
>  mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 45 insertions(+)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index a3825ce81102..15433a3d0cbf 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -196,6 +196,8 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
>                 unsigned long address);
>  void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>                 unsigned long address);
> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
> +               int nr, struct vm_area_struct *vma, unsigned long address);

We should update folio_add_new_anon_rmap() to support large() &&
!folio_test_pmd_mappable() folios instead.

I double checked all places currently using folio_add_new_anon_rmap(),
and as expected, none actually allocates large() &&
!folio_test_pmd_mappable() and maps it one by one, which makes the
cases simpler, i.e.,
  if (!large())
    // the existing basepage case
  else if (!folio_test_pmd_mappable())
    // our new case
  else
    // the existing THP case
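
For concreteness, a rough sketch of what this merged folio_add_new_anon_rmap()
might look like, following the three cases above. This is illustrative only,
not a tested patch; it reuses helpers already present in mm/rmap.c at this
point in the series, and omits the range/VMA assertions:

	void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
			unsigned long address)
	{
		int nr = folio_nr_pages(folio);

		__folio_set_swapbacked(folio);

		if (likely(!folio_test_large(folio))) {
			/* the existing basepage case */
			atomic_set(&folio->_mapcount, 0);	/* count starts at -1 */
			__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
		} else if (!folio_test_pmd_mappable(folio)) {
			/* the new case: a large folio mapped by PTEs */
			int i;

			for (i = 0; i < nr; i++) {
				struct page *page = folio_page(folio, i);

				/* increment count (starts at -1) */
				atomic_set(&page->_mapcount, 0);
				__page_set_anon_rmap(folio, page, vma,
						address + (i << PAGE_SHIFT), 1);
			}
			atomic_set(&folio->_nr_pages_mapped, nr);
		} else {
			/* the existing THP case */
			atomic_set(&folio->_entire_mapcount, 0);
			atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
			__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
			__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
		}

		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
	}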

>  void page_add_file_rmap(struct page *, struct vm_area_struct *,
>                 bool compound);
>  void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 1d8369549424..4050bcea7ae7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1305,6 +1305,49 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>         __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>  }
>
> +/**
> + * folio_add_new_anon_rmap_range - Add mapping to a set of pages within a new
> + * anonymous potentially large folio.
> + * @folio:      The folio containing the pages to be mapped
> + * @page:       First page in the folio to be mapped
> + * @nr:         Number of pages to be mapped
> + * @vma:        the vm area in which the mapping is added
> + * @address:    the user virtual address of the first page to be mapped
> + *
> + * Like folio_add_new_anon_rmap() but batch-maps a range of pages within a folio
> + * using non-THP accounting. Like folio_add_new_anon_rmap(), the inc-and-test is
> + * bypassed and the folio does not have to be locked. All pages in the folio are
> + * individually accounted.
> + *
> + * As the folio is new, it's assumed to be mapped exclusively by a single
> + * process.
> + */
> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
> +               int nr, struct vm_area_struct *vma, unsigned long address)
> +{
> +       int i;
> +
> +       VM_BUG_ON_VMA(address < vma->vm_start ||
> +                     address + (nr << PAGE_SHIFT) > vma->vm_end, vma);

BTW, VM_BUG_ON* shouldn't be used in new code:
Documentation/process/coding-style.rst
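
Per that document, new code should prefer a non-fatal assertion over the BUG()
variants. As an illustrative example only, the same range check could instead
be written as a warn-and-continue check:

	VM_WARN_ON_ONCE(address < vma->vm_start ||
			address + (nr << PAGE_SHIFT) > vma->vm_end);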
  
Ryan Roberts June 27, 2023, 8:09 a.m. UTC | #2
On 27/06/2023 08:08, Yu Zhao wrote:
> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> Like folio_add_new_anon_rmap() but batch-rmaps a range of pages
>> belonging to a folio, for efficiency savings. All pages are accounted as
>> small pages.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>  include/linux/rmap.h |  2 ++
>>  mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 45 insertions(+)
>>
>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>> index a3825ce81102..15433a3d0cbf 100644
>> --- a/include/linux/rmap.h
>> +++ b/include/linux/rmap.h
>> @@ -196,6 +196,8 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
>>                 unsigned long address);
>>  void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>>                 unsigned long address);
>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>> +               int nr, struct vm_area_struct *vma, unsigned long address);
> 
> We should update folio_add_new_anon_rmap() to support large() &&
> !folio_test_pmd_mappable() folios instead.
> 
> I double checked all places currently using folio_add_new_anon_rmap(),
> and as expected, none actually allocates large() &&
> !folio_test_pmd_mappable() and maps it one by one, which makes the
> cases simpler, i.e.,
>   if (!large())
>     // the existing basepage case
>   else if (!folio_test_pmd_mappable())
>     // our new case
>   else
>     // the existing THP case

I don't have a strong opinion either way. Happy to go with this suggestion. But
the reason I did it as a new function is that I was following the pattern in
[1], which adds a new folio_add_file_rmap_range() function.

[1] https://lore.kernel.org/linux-mm/20230315051444.3229621-35-willy@infradead.org/


> 
>>  void page_add_file_rmap(struct page *, struct vm_area_struct *,
>>                 bool compound);
>>  void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 1d8369549424..4050bcea7ae7 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1305,6 +1305,49 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>         __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>  }
>>
>> +/**
>> + * folio_add_new_anon_rmap_range - Add mapping to a set of pages within a new
>> + * anonymous potentially large folio.
>> + * @folio:      The folio containing the pages to be mapped
>> + * @page:       First page in the folio to be mapped
>> + * @nr:         Number of pages to be mapped
>> + * @vma:        the vm area in which the mapping is added
>> + * @address:    the user virtual address of the first page to be mapped
>> + *
>> + * Like folio_add_new_anon_rmap() but batch-maps a range of pages within a folio
>> + * using non-THP accounting. Like folio_add_new_anon_rmap(), the inc-and-test is
>> + * bypassed and the folio does not have to be locked. All pages in the folio are
>> + * individually accounted.
>> + *
>> + * As the folio is new, it's assumed to be mapped exclusively by a single
>> + * process.
>> + */
>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>> +               int nr, struct vm_area_struct *vma, unsigned long address)
>> +{
>> +       int i;
>> +
>> +       VM_BUG_ON_VMA(address < vma->vm_start ||
>> +                     address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
> 
> BTW, VM_BUG_ON* shouldn't be used in new code:
> Documentation/process/coding-style.rst

Thanks, sorry about that. Was copy-pasting from folio_add_new_anon_rmap().
  
Yin Fengwei June 28, 2023, 2:17 a.m. UTC | #3
On 6/27/23 15:08, Yu Zhao wrote:
> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> Like folio_add_new_anon_rmap() but batch-rmaps a range of pages
>> belonging to a folio, for efficiency savings. All pages are accounted as
>> small pages.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>  include/linux/rmap.h |  2 ++
>>  mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 45 insertions(+)
>>
>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>> index a3825ce81102..15433a3d0cbf 100644
>> --- a/include/linux/rmap.h
>> +++ b/include/linux/rmap.h
>> @@ -196,6 +196,8 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
>>                 unsigned long address);
>>  void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>>                 unsigned long address);
>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>> +               int nr, struct vm_area_struct *vma, unsigned long address);
> 
> We should update folio_add_new_anon_rmap() to support large() &&
> !folio_test_pmd_mappable() folios instead.
> 
> I double checked all places currently using folio_add_new_anon_rmap(),
> and as expected, none actually allocates large() &&
> !folio_test_pmd_mappable() and maps it one by one, which makes the
> cases simpler, i.e.,
>   if (!large())
>     // the existing basepage case
>   else if (!folio_test_pmd_mappable())
>     // our new case
>   else
>     // the existing THP case
I suppose we can merge the new case and the existing THP case.


Regards
Yin, Fengwei

> 
>>  void page_add_file_rmap(struct page *, struct vm_area_struct *,
>>                 bool compound);
>>  void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 1d8369549424..4050bcea7ae7 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1305,6 +1305,49 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>         __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>  }
>>
>> +/**
>> + * folio_add_new_anon_rmap_range - Add mapping to a set of pages within a new
>> + * anonymous potentially large folio.
>> + * @folio:      The folio containing the pages to be mapped
>> + * @page:       First page in the folio to be mapped
>> + * @nr:         Number of pages to be mapped
>> + * @vma:        the vm area in which the mapping is added
>> + * @address:    the user virtual address of the first page to be mapped
>> + *
>> + * Like folio_add_new_anon_rmap() but batch-maps a range of pages within a folio
>> + * using non-THP accounting. Like folio_add_new_anon_rmap(), the inc-and-test is
>> + * bypassed and the folio does not have to be locked. All pages in the folio are
>> + * individually accounted.
>> + *
>> + * As the folio is new, it's assumed to be mapped exclusively by a single
>> + * process.
>> + */
>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>> +               int nr, struct vm_area_struct *vma, unsigned long address)
>> +{
>> +       int i;
>> +
>> +       VM_BUG_ON_VMA(address < vma->vm_start ||
>> +                     address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
> 
> BTW, VM_BUG_ON* shouldn't be used in new code:
> Documentation/process/coding-style.rst
  
Yin Fengwei June 28, 2023, 2:20 a.m. UTC | #4
On 6/27/23 16:09, Ryan Roberts wrote:
> On 27/06/2023 08:08, Yu Zhao wrote:
>> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>
>>> Like folio_add_new_anon_rmap() but batch-rmaps a range of pages
>>> belonging to a folio, for efficiency savings. All pages are accounted as
>>> small pages.
>>>
>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>> ---
>>>  include/linux/rmap.h |  2 ++
>>>  mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
>>>  2 files changed, 45 insertions(+)
>>>
>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>> index a3825ce81102..15433a3d0cbf 100644
>>> --- a/include/linux/rmap.h
>>> +++ b/include/linux/rmap.h
>>> @@ -196,6 +196,8 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
>>>                 unsigned long address);
>>>  void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>>>                 unsigned long address);
>>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>>> +               int nr, struct vm_area_struct *vma, unsigned long address);
>>
>> We should update folio_add_new_anon_rmap() to support large() &&
>> !folio_test_pmd_mappable() folios instead.
>>
>> I double checked all places currently using folio_add_new_anon_rmap(),
>> and as expected, none actually allocates large() &&
>> !folio_test_pmd_mappable() and maps it one by one, which makes the
>> cases simpler, i.e.,
>>   if (!large())
>>     // the existing basepage case
>>   else if (!folio_test_pmd_mappable())
>>     // our new case
>>   else
>>     // the existing THP case
> 
> I don't have a strong opinion either way. Happy to go with this suggestion. But
> the reason I did it as a new function is that I was following the pattern in
> [1], which adds a new folio_add_file_rmap_range() function.
> 
> [1] https://lore.kernel.org/linux-mm/20230315051444.3229621-35-willy@infradead.org/
Oh. There is a difference here:
For the page cache, a large folio could be created by a previous file access, but a
later access by another process may need to map only part of that large folio. In
this case, we need _range for filemap.

But for anonymous memory, I suppose we always map the whole folio in. So I agree
with Yu. We don't need _range for folio_add_new_anon_rmap(). Thanks.


Regards
Yin, Fengwei

> 
> 
>>
>>>  void page_add_file_rmap(struct page *, struct vm_area_struct *,
>>>                 bool compound);
>>>  void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 1d8369549424..4050bcea7ae7 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1305,6 +1305,49 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>>         __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>>  }
>>>
>>> +/**
>>> + * folio_add_new_anon_rmap_range - Add mapping to a set of pages within a new
>>> + * anonymous potentially large folio.
>>> + * @folio:      The folio containing the pages to be mapped
>>> + * @page:       First page in the folio to be mapped
>>> + * @nr:         Number of pages to be mapped
>>> + * @vma:        the vm area in which the mapping is added
>>> + * @address:    the user virtual address of the first page to be mapped
>>> + *
>>> + * Like folio_add_new_anon_rmap() but batch-maps a range of pages within a folio
>>> + * using non-THP accounting. Like folio_add_new_anon_rmap(), the inc-and-test is
>>> + * bypassed and the folio does not have to be locked. All pages in the folio are
>>> + * individually accounted.
>>> + *
>>> + * As the folio is new, it's assumed to be mapped exclusively by a single
>>> + * process.
>>> + */
>>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>>> +               int nr, struct vm_area_struct *vma, unsigned long address)
>>> +{
>>> +       int i;
>>> +
>>> +       VM_BUG_ON_VMA(address < vma->vm_start ||
>>> +                     address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>
>> BTW, VM_BUG_ON* shouldn't be used in new code:
>> Documentation/process/coding-style.rst
> 
> Thanks, sorry about that. Was copy-pasting from folio_add_new_anon_rmap().
>
  
Ryan Roberts June 28, 2023, 11:09 a.m. UTC | #5
On 28/06/2023 03:20, Yin Fengwei wrote:
> 
> 
> On 6/27/23 16:09, Ryan Roberts wrote:
>> On 27/06/2023 08:08, Yu Zhao wrote:
>>> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>>
>>>> Like folio_add_new_anon_rmap() but batch-rmaps a range of pages
>>>> belonging to a folio, for efficiency savings. All pages are accounted as
>>>> small pages.
>>>>
>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>> ---
>>>>  include/linux/rmap.h |  2 ++
>>>>  mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
>>>>  2 files changed, 45 insertions(+)
>>>>
>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>>> index a3825ce81102..15433a3d0cbf 100644
>>>> --- a/include/linux/rmap.h
>>>> +++ b/include/linux/rmap.h
>>>> @@ -196,6 +196,8 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
>>>>                 unsigned long address);
>>>>  void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>>>>                 unsigned long address);
>>>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>>>> +               int nr, struct vm_area_struct *vma, unsigned long address);
>>>
>>> We should update folio_add_new_anon_rmap() to support large() &&
>>> !folio_test_pmd_mappable() folios instead.
>>>
>>> I double checked all places currently using folio_add_new_anon_rmap(),
>>> and as expected, none actually allocates large() &&
>>> !folio_test_pmd_mappable() and maps it one by one, which makes the
>>> cases simpler, i.e.,
>>>   if (!large())
>>>     // the existing basepage case
>>>   else if (!folio_test_pmd_mappable())
>>>     // our new case
>>>   else
>>>     // the existing THP case
>>
>> I don't have a strong opinion either way. Happy to go with this suggestion. But
>> the reason I did it as a new function is that I was following the pattern in
>> [1], which adds a new folio_add_file_rmap_range() function.
>>
>> [1] https://lore.kernel.org/linux-mm/20230315051444.3229621-35-willy@infradead.org/
> Oh. There is a difference here:
> For the page cache, a large folio could be created by a previous file access, but a
> later access by another process may need to map only part of that large folio. In
> this case, we need _range for filemap.
> 
> But for anonymous memory, I suppose we always map the whole folio in. So I agree
> with Yu. We don't need _range for folio_add_new_anon_rmap(). Thanks.

Yes, that makes sense - thanks. I'll merge the new case into
folio_add_new_anon_rmap() for v2.

> 
> 
> Regards
> Yin, Fengwei
> 
>>
>>
>>>
>>>>  void page_add_file_rmap(struct page *, struct vm_area_struct *,
>>>>                 bool compound);
>>>>  void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index 1d8369549424..4050bcea7ae7 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -1305,6 +1305,49 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>>>         __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>>>  }
>>>>
>>>> +/**
>>>> + * folio_add_new_anon_rmap_range - Add mapping to a set of pages within a new
>>>> + * anonymous potentially large folio.
>>>> + * @folio:      The folio containing the pages to be mapped
>>>> + * @page:       First page in the folio to be mapped
>>>> + * @nr:         Number of pages to be mapped
>>>> + * @vma:        the vm area in which the mapping is added
>>>> + * @address:    the user virtual address of the first page to be mapped
>>>> + *
>>>> + * Like folio_add_new_anon_rmap() but batch-maps a range of pages within a folio
>>>> + * using non-THP accounting. Like folio_add_new_anon_rmap(), the inc-and-test is
>>>> + * bypassed and the folio does not have to be locked. All pages in the folio are
>>>> + * individually accounted.
>>>> + *
>>>> + * As the folio is new, it's assumed to be mapped exclusively by a single
>>>> + * process.
>>>> + */
>>>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>>>> +               int nr, struct vm_area_struct *vma, unsigned long address)
>>>> +{
>>>> +       int i;
>>>> +
>>>> +       VM_BUG_ON_VMA(address < vma->vm_start ||
>>>> +                     address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>>
>>> BTW, VM_BUG_ON* shouldn't be used in new code:
>>> Documentation/process/coding-style.rst
>>
>> Thanks, sorry about that. Was copy-pasting from folio_add_new_anon_rmap().
>>
  

Patch

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index a3825ce81102..15433a3d0cbf 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -196,6 +196,8 @@  void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long address);
 void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
+void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
+		int nr, struct vm_area_struct *vma, unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
diff --git a/mm/rmap.c b/mm/rmap.c
index 1d8369549424..4050bcea7ae7 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1305,6 +1305,49 @@  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
 }
 
+/**
+ * folio_add_new_anon_rmap_range - Add mapping to a set of pages within a new
+ * anonymous potentially large folio.
+ * @folio:      The folio containing the pages to be mapped
+ * @page:       First page in the folio to be mapped
+ * @nr:         Number of pages to be mapped
+ * @vma:        the vm area in which the mapping is added
+ * @address:    the user virtual address of the first page to be mapped
+ *
+ * Like folio_add_new_anon_rmap() but batch-maps a range of pages within a folio
+ * using non-THP accounting. Like folio_add_new_anon_rmap(), the inc-and-test is
+ * bypassed and the folio does not have to be locked. All pages in the folio are
+ * individually accounted.
+ *
+ * As the folio is new, it's assumed to be mapped exclusively by a single
+ * process.
+ */
+void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
+		int nr, struct vm_area_struct *vma, unsigned long address)
+{
+	int i;
+
+	VM_BUG_ON_VMA(address < vma->vm_start ||
+		      address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
+	__folio_set_swapbacked(folio);
+
+	if (folio_test_large(folio)) {
+		/* increment count (starts at 0) */
+		atomic_set(&folio->_nr_pages_mapped, nr);
+	}
+
+	for (i = 0; i < nr; i++) {
+		/* increment count (starts at -1) */
+		atomic_set(&page->_mapcount, 0);
+		__page_set_anon_rmap(folio, page, vma, address, 1);
+		page++;
+		address += PAGE_SIZE;
+	}
+
+	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
+
+}
+
 /**
  * folio_add_file_rmap_range - add pte mapping to page range of a folio
  * @folio:	The folio to add the mapping to
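
For context, a minimal sketch of how a caller might use the new function when
faulting in a freshly allocated large anonymous folio. This caller is
hypothetical (the real call site is added later in this series); order, addr
and vma are assumed to be in scope, and folio zeroing and PTE setup are elided:

	struct folio *folio;
	int nr;

	/* addr is the suitably aligned base of the range being faulted in */
	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, order, vma, addr, true);
	if (!folio)
		return VM_FAULT_OOM;
	nr = folio_nr_pages(folio);

	/* new folio: no folio lock or inc-and-test needed, per the kerneldoc above */
	folio_add_new_anon_rmap_range(folio, &folio->page, nr, vma, addr);
	folio_add_lru_vma(folio, vma);
	/* ...then install one PTE per page under the page table lock... */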