x86/boot/compressed: Reserve more memory for page tables

Message ID 20230914123001.27659-1-kirill.shutemov@linux.intel.com
State New
Series x86/boot/compressed: Reserve more memory for page tables

Commit Message

Kirill A. Shutemov Sept. 14, 2023, 12:30 p.m. UTC
The decompressor has a hard limit on the number of page tables it can
allocate. This limit is defined at compile-time and will cause boot
failure if it is reached.
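
For illustration, the shape of the problem is roughly the following
fixed-pool allocator; the names, pool size and error path below are
illustrative sketches, not the actual arch/x86/boot/compressed code:

/*
 * Minimal sketch of a fixed-pool page-table allocator, modelled on the
 * decompressor's behaviour.  SKETCH_* names are made up for this example.
 */
#define SKETCH_PGT_POOL_SIZE	(32 * 4096)	/* compile-time limit, cf. BOOT_PGT_SIZE */

static unsigned char pgt_pool[SKETCH_PGT_POOL_SIZE];
static unsigned long pgt_pool_offset;

static void *alloc_pgt_page_sketch(void)
{
	/* Once the pool is exhausted there is no fallback: boot fails. */
	if (pgt_pool_offset + 4096 > SKETCH_PGT_POOL_SIZE)
		return NULL;		/* caller treats this as fatal */

	pgt_pool_offset += 4096;
	return pgt_pool + pgt_pool_offset - 4096;
}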

The kernel is very strict and calculates the limit precisely for the
worst-case scenario based on the current configuration. However, it is
easy to forget to adjust the limit when a new use-case arises, and the
worst-case scenario is rarely exercised by routine sanity checks.

In the case of enabling 5-level paging, a use-case was overlooked. The
limit needs to be increased by one to accommodate the additional level.
This oversight went unnoticed until Aaron attempted to run the kernel
via kexec with 5-level paging and unaccepted memory enabled.

To address this issue, let's allocate some extra space for page tables.
128K should be sufficient for any use-case. The logic can be simplified
by using a single value for all kernel configurations.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Aaron Lu <aaron.lu@intel.com>
Fixes: 34bbb0009f3b ("x86/boot/compressed: Enable 5-level paging during decompression stage")
---
 arch/x86/include/asm/boot.h | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)
  

Comments

Dave Hansen Sept. 14, 2023, 3:51 p.m. UTC | #1
On 9/14/23 05:30, Kirill A. Shutemov wrote:
> +/*
> + * Total number of page table kernel_add_identity_map() can allocate,
> + * including page tables consumed by startup_32().
> + */
> +# define BOOT_PGT_SIZE		(32*4096)

I agree that needing to know this in advance *exactly* is troublesome.

But I do think that we should preserve the comment about the worst-case
scenario.  Also, I thought this was triggered by unaccepted memory.  Am
I remembering it wrong?  How was it in play?

Either way, I think your general approach here is sound.  But let's add
one little tweak to at least warn when we're getting close to the limit.
Now that nobody has to worry about the limit for the immediate future,
it's a guarantee that in the long term someone will plow through it
accidentally.

Let's add a soft warning when we're nearing the limit so that there's a
chance to catch these things in the future.
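
One possible shape for such a warning (a sketch only; the function name,
threshold and output path are assumptions, not existing kernel code):

/*
 * Report once when page-table pool usage crosses ~75% of the limit, so
 * a future change that approaches BOOT_PGT_SIZE is noticed before it
 * turns into a boot failure.
 */
static void check_pgt_pressure(unsigned long used, unsigned long total)
{
	static int warned;

	if (!warned && used > total - total / 4) {
		warned = 1;
		/* e.g. print via the decompressor's console output path */
	}
}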
  
Kirill A. Shutemov Sept. 14, 2023, 5:07 p.m. UTC | #2
On Thu, Sep 14, 2023 at 08:51:50AM -0700, Dave Hansen wrote:
> On 9/14/23 05:30, Kirill A. Shutemov wrote:
> > +/*
> > + * Total number of page table kernel_add_identity_map() can allocate,
> > + * including page tables consumed by startup_32().
> > + */
> > +# define BOOT_PGT_SIZE		(32*4096)
> 
> I agree that needing to know this in advance *exactly* is troublesome.
> 
> But I do think that we should preserve the comment about the worst-case
> scenario.

Want me to send v2 for that?

> Also, I thought this was triggered by unaccepted memory.  Am
> I remembering it wrong?  How was it in play?

Unaccepted memory touched the EFI system table. I was able to reproduce
the problem without unaccepted memory enabled, if get_rsdp_addr() takes
the efi_get_rsdp_addr() path. So it is not the root cause, just a trigger.

So we need several things to run into the problem:

- The system supports 5-level paging and it is enabled;

- The decompressor takes control in 64-bit mode, so it uses the page tables
  inherited from the bootloader until initialize_identity_maps().

  In initialize_identity_maps() the kernel resets the page tables, rebuilding
  them from scratch. Here we only map what is definitely required: kernel,
  cmdline, boot_params, setup_data.

  Entering in 32-bit mode would make startup_32() map the first 4G
  unconditionally, but in this setup we rely on #PF handling to fill the
  page tables on demand. That masks the problem, since we rarely need all
  four PMD tables (see the sketch after this list).

- The kernel touches at least one page per gigabyte in the first 4G.

  In our case, the unaccepted memory path was the last straw: it triggered
  allocation of the fourth PMD table, which failed.
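
For reference, a back-of-the-envelope count of the page-table pages needed
for that 4G identity map (a hand calculation; the macro names are
illustrative):

/*
 * Pages needed to identity-map the first 4G with 2M pages under 4-level
 * paging:
 *
 *   1 level-4 table (PGD)
 *   1 level-3 table (PUD)  -- four 1G entries cover 4G
 *   4 level-2 tables (PMD) -- one per 1G, the "four PMD tables" above
 */
#define SKETCH_PGD_PAGES	1
#define SKETCH_PUD_PAGES	1
#define SKETCH_PMD_PAGES	4
#define SKETCH_INIT_PGT_PAGES	(SKETCH_PGD_PAGES + SKETCH_PUD_PAGES + SKETCH_PMD_PAGES)
/* 6 pages total, i.e. 6*4096, matching BOOT_INIT_PGT_SIZE in the patch */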

We can increase the constant by one and it will work as long as nobody needs
anything beyond the first 4G (or any 1G-aligned 4G region where we got
loaded, I guess). I am not sure we can guarantee this with (potentially
buggy) ACPI and EFI in the picture.

> Either way, I think your general approach here is sound.  But let's add
> one little tweak to at least warn when we're getting close to the limit.

Yeah, makes sense.
  

Patch

diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
index 9191280d9ea3..aaf1b2fc6ede 100644
--- a/arch/x86/include/asm/boot.h
+++ b/arch/x86/include/asm/boot.h
@@ -40,23 +40,20 @@ 
 #ifdef CONFIG_X86_64
 # define BOOT_STACK_SIZE	0x4000
 
-# define BOOT_INIT_PGT_SIZE	(6*4096)
-# ifdef CONFIG_RANDOMIZE_BASE
 /*
- * Assuming all cross the 512GB boundary:
- * 1 page for level4
- * (2+2)*4 pages for kernel, param, cmd_line, and randomized kernel
- * 2 pages for first 2M (video RAM: CONFIG_X86_VERBOSE_BOOTUP).
- * Total is 19 pages.
+ * Used by decompressor's startup_32() to allocate page tables for identity
+ * mapping of the 4G of RAM in 4-level paging mode.
+ *
+ * The additional page table needed for 5-level paging is allocated from
+ * trampoline_32bit memory.
  */
-#  ifdef CONFIG_X86_VERBOSE_BOOTUP
-#   define BOOT_PGT_SIZE	(19*4096)
-#  else /* !CONFIG_X86_VERBOSE_BOOTUP */
-#   define BOOT_PGT_SIZE	(17*4096)
-#  endif
-# else /* !CONFIG_RANDOMIZE_BASE */
-#  define BOOT_PGT_SIZE		BOOT_INIT_PGT_SIZE
-# endif
+# define BOOT_INIT_PGT_SIZE	(6*4096)
+
+/*
+ * Total number of page table kernel_add_identity_map() can allocate,
+ * including page tables consumed by startup_32().
+ */
+# define BOOT_PGT_SIZE		(32*4096)
 
 #else /* !CONFIG_X86_64 */
 # define BOOT_STACK_SIZE	0x1000