[v3,32/60] arm64: head: allocate more pages for the kernel mapping

Message ID 20230307140522.2311461-33-ardb@kernel.org
State New
Series arm64: Add support for LPA2 at stage1 and WXN

Commit Message

Ard Biesheuvel March 7, 2023, 2:04 p.m. UTC
  In preparation for switching to an early kernel mapping routine that
maps each segment according to its precise boundaries, and with the
correct attributes, let's allocate some extra pages for page tables for
the 4k page size configuration. This is necessary because the start and
end of each segment may not be aligned to the block size, and so we'll
need an extra page table at each segment boundary.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/kernel-pgtable.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
  

Comments

Ryan Roberts April 17, 2023, 3:48 p.m. UTC | #1
On 07/03/2023 14:04, Ard Biesheuvel wrote:
> In preparation for switching to an early kernel mapping routine that
> maps each segment according to its precise boundaries, and with the
> correct attributes, let's allocate some extra pages for page tables for
> the 4k page size configuration. This is necessary because the start and
> end of each segment may not be aligned to the block size, and so we'll
> need an extra page table at each segment boundary.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/include/asm/kernel-pgtable.h | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
> index 4d13c73171e1e360..50b5c145358a5d8e 100644
> --- a/arch/arm64/include/asm/kernel-pgtable.h
> +++ b/arch/arm64/include/asm/kernel-pgtable.h
> @@ -80,7 +80,7 @@
>  			+ EARLY_PGDS((vstart), (vend), add) 	/* each PGDIR needs a next level page table */	\
>  			+ EARLY_PUDS((vstart), (vend), add)	/* each PUD needs a next level page table */	\
>  			+ EARLY_PMDS((vstart), (vend), add))	/* each PMD needs a next level page table */
> -#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR))
> +#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES))
>  
>  /* the initial ID map may need two extra pages if it needs to be extended */
>  #if VA_BITS < 48
> @@ -101,6 +101,15 @@
>  #define SWAPPER_TABLE_SHIFT	PMD_SHIFT
>  #endif
>  
> +/* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */
> +#define KERNEL_SEGMENT_COUNT	5
> +
> +#if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN
> +#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1)

I'm guessing the block size for 4K pages is PMD, so you need these extra pages
to define PTEs for the case where the section start/end addresses are not on
exact 2MB boundaries? But in that case, isn't it possible that you would need 2
extra PTE tables per segment - one for the start and one for the end?

> +#else
> +#define EARLY_SEGMENT_EXTRA_PAGES 0
> +#endif
> +
>  /*
>   * Initial memory map attributes.
>   */
  
Ard Biesheuvel April 17, 2023, 4:11 p.m. UTC | #2
On Mon, 17 Apr 2023 at 17:48, Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 07/03/2023 14:04, Ard Biesheuvel wrote:
> > In preparation for switching to an early kernel mapping routine that
> > maps each segment according to its precise boundaries, and with the
> > correct attributes, let's allocate some extra pages for page tables for
> > the 4k page size configuration. This is necessary because the start and
> > end of each segment may not be aligned to the block size, and so we'll
> > need an extra page table at each segment boundary.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> >  arch/arm64/include/asm/kernel-pgtable.h | 11 ++++++++++-
> >  1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
> > index 4d13c73171e1e360..50b5c145358a5d8e 100644
> > --- a/arch/arm64/include/asm/kernel-pgtable.h
> > +++ b/arch/arm64/include/asm/kernel-pgtable.h
> > @@ -80,7 +80,7 @@
> >                       + EARLY_PGDS((vstart), (vend), add)     /* each PGDIR needs a next level page table */  \
> >                       + EARLY_PUDS((vstart), (vend), add)     /* each PUD needs a next level page table */    \
> >                       + EARLY_PMDS((vstart), (vend), add))    /* each PMD needs a next level page table */
> > -#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR))
> > +#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES))
> >
> >  /* the initial ID map may need two extra pages if it needs to be extended */
> >  #if VA_BITS < 48
> > @@ -101,6 +101,15 @@
> >  #define SWAPPER_TABLE_SHIFT  PMD_SHIFT
> >  #endif
> >
> > +/* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */
> > +#define KERNEL_SEGMENT_COUNT 5
> > +
> > +#if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN
> > +#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1)
>
> I'm guessing the block size for 4K pages is PMD, so you need these extra pages
> to define PTEs for the case where the section start/end addresses are not on
> exact 2MB boundaries? But in that case, isn't it possible that you would need 2
> extra PTE tables per segment - one for the start and one for the end?
>

The end of one segment is the start of another, so we need one at the
start, plus one for the end of each segment.
  
Ryan Roberts April 17, 2023, 4:18 p.m. UTC | #3
On 17/04/2023 17:11, Ard Biesheuvel wrote:
> On Mon, 17 Apr 2023 at 17:48, Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 07/03/2023 14:04, Ard Biesheuvel wrote:
>>> In preparation for switching to an early kernel mapping routine that
>>> maps each segment according to its precise boundaries, and with the
>>> correct attributes, let's allocate some extra pages for page tables for
>>> the 4k page size configuration. This is necessary because the start and
>>> end of each segment may not be aligned to the block size, and so we'll
>>> need an extra page table at each segment boundary.
>>>
>>> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>>> ---
>>>  arch/arm64/include/asm/kernel-pgtable.h | 11 ++++++++++-
>>>  1 file changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
>>> index 4d13c73171e1e360..50b5c145358a5d8e 100644
>>> --- a/arch/arm64/include/asm/kernel-pgtable.h
>>> +++ b/arch/arm64/include/asm/kernel-pgtable.h
>>> @@ -80,7 +80,7 @@
>>>                       + EARLY_PGDS((vstart), (vend), add)     /* each PGDIR needs a next level page table */  \
>>>                       + EARLY_PUDS((vstart), (vend), add)     /* each PUD needs a next level page table */    \
>>>                       + EARLY_PMDS((vstart), (vend), add))    /* each PMD needs a next level page table */
>>> -#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR))
>>> +#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES))
>>>
>>>  /* the initial ID map may need two extra pages if it needs to be extended */
>>>  #if VA_BITS < 48
>>> @@ -101,6 +101,15 @@
>>>  #define SWAPPER_TABLE_SHIFT  PMD_SHIFT
>>>  #endif
>>>
>>> +/* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */
>>> +#define KERNEL_SEGMENT_COUNT 5
>>> +
>>> +#if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN
>>> +#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1)
>>
>> I'm guessing the block size for 4K pages is PMD, so you need these extra pages
>> to define PTEs for the case where the section start/end addresses are not on
>> exact 2MB boundaries? But in that case, isn't it possible that you would need 2
>> extra PTE tables per segment - one for the start and one for the end?
>>
> 
> The end of one segment is the start of another, so we need one at the
> start, plus one for the end of each segment.

Ahh, of course. Thanks.
  

Patch

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 4d13c73171e1e360..50b5c145358a5d8e 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -80,7 +80,7 @@ 
 			+ EARLY_PGDS((vstart), (vend), add) 	/* each PGDIR needs a next level page table */	\
 			+ EARLY_PUDS((vstart), (vend), add)	/* each PUD needs a next level page table */	\
 			+ EARLY_PMDS((vstart), (vend), add))	/* each PMD needs a next level page table */
-#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR))
+#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES))
 
 /* the initial ID map may need two extra pages if it needs to be extended */
 #if VA_BITS < 48
@@ -101,6 +101,15 @@ 
 #define SWAPPER_TABLE_SHIFT	PMD_SHIFT
 #endif
 
+/* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */
+#define KERNEL_SEGMENT_COUNT	5
+
+#if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN
+#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1)
+#else
+#define EARLY_SEGMENT_EXTRA_PAGES 0
+#endif
+
 /*
  * Initial memory map attributes.
  */