Message ID | 20230307140522.2311461-26-ardb@kernel.org |
---|---|
State | New |
Series | arm64: Add support for LPA2 at stage1 and WXN |
Commit Message
Ard Biesheuvel
March 7, 2023, 2:04 p.m. UTC
We will move the CPU feature overrides into BSS in a subsequent patch,
and this requires that BSS is zeroed before the feature override
detection code runs. So let's map BSS read-write in the ID map, and zero
it via this mapping.
Since the kernel page tables are right next to it, and also zeroed via
the ID map, let's drop the separate clear_page_tables() function, and
just zero everything in one go.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/kernel/head.S | 33 +++++++-------------
1 file changed, 11 insertions(+), 22 deletions(-)
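
As a reading aid before the diff: in rough C terms, the change collapses the old clear_page_tables() and the later BSS clear into a single zeroing pass over the contiguous [__bss_start, init_pg_end) range. The helper below is an illustrative sketch, not code from this patch:

#include <string.h>

/* Illustrative sketch only -- the real code is the head.S assembly in
 * the diff below. __bss_start and init_pg_end are the linker-provided
 * markers; .bss and the init page tables sit back to back in the
 * image, so one memset covers both. */
extern char __bss_start[], init_pg_end[];

static void zero_bss_and_init_pgtables(void)
{
	memset(__bss_start, 0, init_pg_end - __bss_start);
	/* head.S follows this with "dsb ishst" so the zeroed page
	 * tables are visible to the page table walker before use. */
}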
Comments
On 07/03/2023 14:04, Ard Biesheuvel wrote:
> We will move the CPU feature overrides into BSS in a subsequent patch,
> and this requires that BSS is zeroed before the feature override
> detection code runs. So let's map BSS read-write in the ID map, and zero
> it via this mapping.
>
> Since the kernel page tables are right next to it, and also zeroed via
> the ID map, let's drop the separate clear_page_tables() function, and
> just zero everything in one go.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/kernel/head.S | 33 +++++++-------------
>  1 file changed, 11 insertions(+), 22 deletions(-)
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 0fa44b3188c1e204..ade0cb99c8a83a3d 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -177,17 +177,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
>  	ret
>  SYM_CODE_END(preserve_boot_args)
>
> -SYM_FUNC_START_LOCAL(clear_page_tables)
> -	/*
> -	 * Clear the init page tables.
> -	 */
> -	adrp	x0, init_pg_dir
> -	adrp	x1, init_pg_end
> -	sub	x2, x1, x0
> -	mov	x1, xzr
> -	b	__pi_memset			// tail call
> -SYM_FUNC_END(clear_page_tables)
> -
>  /*
>   * Macro to populate page table entries, these entries can be pointers to the next level
>   * or last level entries pointing to physical memory.
> @@ -386,9 +375,9 @@ SYM_FUNC_START_LOCAL(create_idmap)
>
>  	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
>
> -	/* Remap the kernel page tables r/w in the ID map */
> +	/* Remap BSS and the kernel page tables r/w in the ID map */
>  	adrp	x1, _text
> -	adrp	x2, init_pg_dir
> +	adrp	x2, __bss_start
>  	adrp	x3, _end
>  	bic	x4, x2, #SWAPPER_BLOCK_SIZE - 1
>  	mov	x5, SWAPPER_RW_MMUFLAGS
> @@ -489,14 +478,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
>  	mov	x0, x20
>  	bl	set_cpu_boot_mode_flag
>
> -	// Clear BSS
> -	adr_l	x0, __bss_start
> -	mov	x1, xzr
> -	adr_l	x2, __bss_stop
> -	sub	x2, x2, x0
> -	bl	__pi_memset
> -	dsb	ishst				// Make zero page visible to PTW
> -
>  #if VA_BITS > 48
>  	adr_l	x8, vabits_actual	// Set this early so KASAN early init
>  	str	x25, [x8]		// ... observes the correct value
> @@ -780,6 +761,15 @@ SYM_FUNC_START_LOCAL(__primary_switch)
>  	adrp	x1, reserved_pg_dir
>  	adrp	x2, init_idmap_pg_dir
>  	bl	__enable_mmu
> +
> +	// Clear BSS
> +	adrp	x0, __bss_start
> +	mov	x1, xzr
> +	adrp	x2, init_pg_end
> +	sub	x2, x2, x0
> +	bl	__pi_memset
> +	dsb	ishst				// Make zero page visible to PTW

Is it possible to add an assert somewhere (or at the very least a comment in
vmlinux.lds.S) to ensure that nothing gets inserted between the BSS and the page
tables? It feels a bit fragile otherwise.

I also wonder what's the point in calling __pi_memset() from here? Why not just
do it all in C?

> +
> #ifdef CONFIG_RELOCATABLE
>  	adrp	x23, KERNEL_START
>  	and	x23, x23, MIN_KIMG_ALIGN - 1
> @@ -794,7 +784,6 @@ SYM_FUNC_START_LOCAL(__primary_switch)
>  	orr	x23, x23, x0		// record kernel offset
> #endif
> #endif
> -	bl	clear_page_tables
>  	bl	create_kernel_mapping
>
>  	adrp	x1, init_pg_dir
On Mon, 17 Apr 2023 at 16:00, Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 07/03/2023 14:04, Ard Biesheuvel wrote:
> > We will move the CPU feature overrides into BSS in a subsequent patch,
> > and this requires that BSS is zeroed before the feature override
> > detection code runs. So let's map BSS read-write in the ID map, and zero
> > it via this mapping.
> >
> > Since the kernel page tables are right next to it, and also zeroed via
> > the ID map, let's drop the separate clear_page_tables() function, and
> > just zero everything in one go.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> >  arch/arm64/kernel/head.S | 33 +++++++-------------
> >  1 file changed, 11 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index 0fa44b3188c1e204..ade0cb99c8a83a3d 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -177,17 +177,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
> >  	ret
> >  SYM_CODE_END(preserve_boot_args)
> >
> > -SYM_FUNC_START_LOCAL(clear_page_tables)
> > -	/*
> > -	 * Clear the init page tables.
> > -	 */
> > -	adrp	x0, init_pg_dir
> > -	adrp	x1, init_pg_end
> > -	sub	x2, x1, x0
> > -	mov	x1, xzr
> > -	b	__pi_memset			// tail call
> > -SYM_FUNC_END(clear_page_tables)
> > -
> >  /*
> >   * Macro to populate page table entries, these entries can be pointers to the next level
> >   * or last level entries pointing to physical memory.
> > @@ -386,9 +375,9 @@ SYM_FUNC_START_LOCAL(create_idmap)
> >
> >  	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
> >
> > -	/* Remap the kernel page tables r/w in the ID map */
> > +	/* Remap BSS and the kernel page tables r/w in the ID map */
> >  	adrp	x1, _text
> > -	adrp	x2, init_pg_dir
> > +	adrp	x2, __bss_start
> >  	adrp	x3, _end
> >  	bic	x4, x2, #SWAPPER_BLOCK_SIZE - 1
> >  	mov	x5, SWAPPER_RW_MMUFLAGS
> > @@ -489,14 +478,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
> >  	mov	x0, x20
> >  	bl	set_cpu_boot_mode_flag
> >
> > -	// Clear BSS
> > -	adr_l	x0, __bss_start
> > -	mov	x1, xzr
> > -	adr_l	x2, __bss_stop
> > -	sub	x2, x2, x0
> > -	bl	__pi_memset
> > -	dsb	ishst				// Make zero page visible to PTW
> > -
> >  #if VA_BITS > 48
> >  	adr_l	x8, vabits_actual	// Set this early so KASAN early init
> >  	str	x25, [x8]		// ... observes the correct value
> > @@ -780,6 +761,15 @@ SYM_FUNC_START_LOCAL(__primary_switch)
> >  	adrp	x1, reserved_pg_dir
> >  	adrp	x2, init_idmap_pg_dir
> >  	bl	__enable_mmu
> > +
> > +	// Clear BSS
> > +	adrp	x0, __bss_start
> > +	mov	x1, xzr
> > +	adrp	x2, init_pg_end
> > +	sub	x2, x2, x0
> > +	bl	__pi_memset
> > +	dsb	ishst				// Make zero page visible to PTW
>
> Is it possible to add an assert somewhere (or at the very least a comment in
> vmlinux.lds.S) to ensure that nothing gets inserted between the BSS and the page
> tables? It feels a bit fragile otherwise.
>

I'm not sure that matters. The contents are not covered by the loaded
image so they are undefined otherwise in any case.

> I also wonder what's the point in calling __pi_memset() from here? Why not just
> do it all in C?
>

That happens in one of the subsequent patches.
On 17/04/2023 15:02, Ard Biesheuvel wrote:
> On Mon, 17 Apr 2023 at 16:00, Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 07/03/2023 14:04, Ard Biesheuvel wrote:
>>> We will move the CPU feature overrides into BSS in a subsequent patch,
>>> and this requires that BSS is zeroed before the feature override
>>> detection code runs. So let's map BSS read-write in the ID map, and zero
>>> it via this mapping.
>>>
>>> Since the kernel page tables are right next to it, and also zeroed via
>>> the ID map, let's drop the separate clear_page_tables() function, and
>>> just zero everything in one go.
>>>
>>> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>>> ---
>>>  arch/arm64/kernel/head.S | 33 +++++++-------------
>>>  1 file changed, 11 insertions(+), 22 deletions(-)
>>>
>>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>>> index 0fa44b3188c1e204..ade0cb99c8a83a3d 100644
>>> --- a/arch/arm64/kernel/head.S
>>> +++ b/arch/arm64/kernel/head.S
>>> @@ -177,17 +177,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
>>>  	ret
>>>  SYM_CODE_END(preserve_boot_args)
>>>
>>> -SYM_FUNC_START_LOCAL(clear_page_tables)
>>> -	/*
>>> -	 * Clear the init page tables.
>>> -	 */
>>> -	adrp	x0, init_pg_dir
>>> -	adrp	x1, init_pg_end
>>> -	sub	x2, x1, x0
>>> -	mov	x1, xzr
>>> -	b	__pi_memset			// tail call
>>> -SYM_FUNC_END(clear_page_tables)
>>> -
>>>  /*
>>>   * Macro to populate page table entries, these entries can be pointers to the next level
>>>   * or last level entries pointing to physical memory.
>>> @@ -386,9 +375,9 @@ SYM_FUNC_START_LOCAL(create_idmap)
>>>
>>>  	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
>>>
>>> -	/* Remap the kernel page tables r/w in the ID map */
>>> +	/* Remap BSS and the kernel page tables r/w in the ID map */
>>>  	adrp	x1, _text
>>> -	adrp	x2, init_pg_dir
>>> +	adrp	x2, __bss_start
>>>  	adrp	x3, _end
>>>  	bic	x4, x2, #SWAPPER_BLOCK_SIZE - 1
>>>  	mov	x5, SWAPPER_RW_MMUFLAGS
>>> @@ -489,14 +478,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
>>>  	mov	x0, x20
>>>  	bl	set_cpu_boot_mode_flag
>>>
>>> -	// Clear BSS
>>> -	adr_l	x0, __bss_start
>>> -	mov	x1, xzr
>>> -	adr_l	x2, __bss_stop
>>> -	sub	x2, x2, x0
>>> -	bl	__pi_memset
>>> -	dsb	ishst				// Make zero page visible to PTW
>>> -
>>>  #if VA_BITS > 48
>>>  	adr_l	x8, vabits_actual	// Set this early so KASAN early init
>>>  	str	x25, [x8]		// ... observes the correct value
>>> @@ -780,6 +761,15 @@ SYM_FUNC_START_LOCAL(__primary_switch)
>>>  	adrp	x1, reserved_pg_dir
>>>  	adrp	x2, init_idmap_pg_dir
>>>  	bl	__enable_mmu
>>> +
>>> +	// Clear BSS
>>> +	adrp	x0, __bss_start
>>> +	mov	x1, xzr
>>> +	adrp	x2, init_pg_end
>>> +	sub	x2, x2, x0
>>> +	bl	__pi_memset
>>> +	dsb	ishst				// Make zero page visible to PTW
>>
>> Is it possible to add an assert somewhere (or at the very least a comment in
>> vmlinux.lds.S) to ensure that nothing gets inserted between the BSS and the page
>> tables? It feels a bit fragile otherwise.
>>
>
> I'm not sure that matters. The contents are not covered by the loaded
> image so they are undefined otherwise in any case.

OK, so you couldn't accidentally zero anything in the image. But it could
represent a performance regression if something big was added between them that
doesn't need to be zeroed. All hypothetical, but this is currently an unstated
assumption that I think is worth stating at least as a comment in the linker
script.

>
>> I also wonder what's the point in calling __pi_memset() from here? Why not just
>> do it all in C?
>>
>
> That happens in one of the subsequent patches.
Ahh, cheers... Haven't got that far yet. (very impressive that you immediately knew that given you posted the series 6 weeks ago! I usually can't remember what I did yesterday ;-)
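
For concreteness, the kind of link-time guard Ryan is asking for might look roughly like this in arch/arm64/kernel/vmlinux.lds.S. This is a hypothetical sketch, not part of the series, and it assumes only page-alignment padding separates .bss from init_pg_dir (as in the current layout):

/* Hypothetical assertion -- not from this series. head.S now zeroes
 * [__bss_start, init_pg_end) as one range, so catch at link time any
 * section that creeps in between .bss and the init page tables. */
ASSERT(init_pg_dir >= __bss_stop, "init_pg_dir precedes the end of .bss")
ASSERT(init_pg_dir - __bss_stop < PAGE_SIZE,
       "unexpected gap between .bss and the init page tables")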
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 0fa44b3188c1e204..ade0cb99c8a83a3d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -177,17 +177,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	ret
 SYM_CODE_END(preserve_boot_args)
 
-SYM_FUNC_START_LOCAL(clear_page_tables)
-	/*
-	 * Clear the init page tables.
-	 */
-	adrp	x0, init_pg_dir
-	adrp	x1, init_pg_end
-	sub	x2, x1, x0
-	mov	x1, xzr
-	b	__pi_memset			// tail call
-SYM_FUNC_END(clear_page_tables)
-
 /*
  * Macro to populate page table entries, these entries can be pointers to the next level
  * or last level entries pointing to physical memory.
@@ -386,9 +375,9 @@ SYM_FUNC_START_LOCAL(create_idmap)
 
 	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
 
-	/* Remap the kernel page tables r/w in the ID map */
+	/* Remap BSS and the kernel page tables r/w in the ID map */
 	adrp	x1, _text
-	adrp	x2, init_pg_dir
+	adrp	x2, __bss_start
 	adrp	x3, _end
 	bic	x4, x2, #SWAPPER_BLOCK_SIZE - 1
 	mov	x5, SWAPPER_RW_MMUFLAGS
@@ -489,14 +478,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
 
-	// Clear BSS
-	adr_l	x0, __bss_start
-	mov	x1, xzr
-	adr_l	x2, __bss_stop
-	sub	x2, x2, x0
-	bl	__pi_memset
-	dsb	ishst				// Make zero page visible to PTW
-
 #if VA_BITS > 48
 	adr_l	x8, vabits_actual	// Set this early so KASAN early init
 	str	x25, [x8]		// ... observes the correct value
@@ -780,6 +761,15 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	adrp	x1, reserved_pg_dir
 	adrp	x2, init_idmap_pg_dir
 	bl	__enable_mmu
+
+	// Clear BSS
+	adrp	x0, __bss_start
+	mov	x1, xzr
+	adrp	x2, init_pg_end
+	sub	x2, x2, x0
+	bl	__pi_memset
+	dsb	ishst				// Make zero page visible to PTW
+
 #ifdef CONFIG_RELOCATABLE
 	adrp	x23, KERNEL_START
 	and	x23, x23, MIN_KIMG_ALIGN - 1
@@ -794,7 +784,6 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	orr	x23, x23, x0		// record kernel offset
 #endif
 #endif
-	bl	clear_page_tables
 	bl	create_kernel_mapping
 
 	adrp	x1, init_pg_dir
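
One detail in the create_idmap hunk worth calling out: the r/w remap must start on a SWAPPER_BLOCK_SIZE boundary, which the bic instruction achieves by rounding __bss_start down. A rough C rendering of that mask (illustrative only; the helper name is not from the kernel):

#include <stdint.h>

/* Sketch of the rounding done by "bic x4, x2, #SWAPPER_BLOCK_SIZE - 1":
 * with SWAPPER_BLOCK_SIZE a power of two, clearing the low bits rounds
 * the remap base down to a block boundary, so the r/w window covers
 * everything from the start of .bss up to _end. */
static inline uintptr_t rw_remap_base(uintptr_t bss_start,
				      uintptr_t swapper_block_size)
{
	return bss_start & ~(swapper_block_size - 1);
}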