From patchwork Tue Mar 7 14:04:22 2023
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook
Subject: [PATCH v3 00/60] arm64: Add support for LPA2 at stage1 and WXN
Date: Tue, 7 Mar 2023 15:04:22 +0100
Message-Id: <20230307140522.2311461-1-ardb@kernel.org>

This is a followup to [0], which was a lot smaller. Thanks to Ryan for feedback and review.
This series is independent from Ryan's work on adding support for LPA2 to KVM - the only potential source of conflict should be the patch "arm64: kvm: Limit HYP VA and host S2 range to 48 bits when LPA2 is in effect", which could simply be dropped in favour of the KVM changes to make it support LPA2.

The first ~15 patches of this series rework how the kernel VA space is organized, so that the vmemmap region does not take up more space than necessary, and so that most of it can be reclaimed when running a build capable of 52-bit virtual addressing on hardware that is not. This is needed because the vmemmap region will take up a substantial part of the upper VA region that it shares with the kernel, modules and vmalloc/vmap mappings once we enable LPA2 with 4k pages.

The next ~30 patches rework the early init code, reimplementing most of the page table and relocation handling in C code. There are several reasons why this is beneficial:

- we generally prefer C code over asm for these things, and the macros that currently exist in head.S for creating the kernel page tables are a good example why;
- we no longer need to create the kernel mapping in two passes, which means we can remove the logic that copies parts of the fixmap and the KAsan shadow from one set of page tables to the other; this is especially advantageous for KAsan with LPA2, which needs more elaborate shadow handling across multiple levels, since the KAsan region cannot be placed on exact pgd_t boundaries in that case;
- we can read the ID registers and parse command line overrides before creating the page tables, which simplifies the LPA2 case, as flicking the global TCR_EL1.DS bit at a later stage would require elaborate repainting of all page table descriptors, some of which with the MMU disabled;
- we can use more elaborate logic to create the mappings, which means we can use more precise mappings for code and data sections even when using 2 MiB granularity, and this is a prerequisite for running with WXN.

As part of the ID map changes, we decouple the ID map size from the kernel VA size, and switch to a 48-bit VA map for all configurations.

The next 18 patches rework the existing LVA support as a CPU feature, which simplifies some code and gets rid of the vabits_actual variable. Then, LPA2 support is implemented in the same vein. This requires adding support for 5 level paging as well, given that LPA2 introduces a new paging level '-1' when using 4k pages.

Combined with the vmemmap changes at the start of the series, the resulting LPA2/4k pages configuration will have the exact same VA space layout as the ordinary 4k/4 levels configuration, and so LPA2 support can reasonably be enabled by default, as the fallback is seamless on non-LPA2 hardware.

In the 16k/LPA2 case, the fallback also reduces the number of paging levels, resulting in a 47-bit VA space. This is based on the assumption that hybrid LPA2/non-LPA2 16k pages kernels in production use would prefer not to take the performance hit of 4 level paging to gain only a single additional bit of VA space. (Note that generic Android kernels use only 3 levels of paging today.) Bespoke 16k configurations can still configure 48-bit virtual addressing as before.

Finally, the last two patches enable support for running with the WXN control enabled. This was previously part of a separate series [1], but given that the delta is tiny, it is included here as well.
[0] https://lore.kernel.org/all/20221124123932.2648991-1-ardb@kernel.org/
[1] https://lore.kernel.org/all/20221111171201.2088501-1-ardb@kernel.org/

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Mark Rutland
Cc: Ryan Roberts
Cc: Anshuman Khandual
Cc: Kees Cook

Anshuman Khandual (2):
  arm64/mm: Add FEAT_LPA2 specific TCR_EL1.DS field
  arm64/mm: Add FEAT_LPA2 specific ID_AA64MMFR0.TGRAN[2]

Ard Biesheuvel (57):
  // KASLR / vmemmap reorg
  arm64: kernel: Disable latent_entropy GCC plugin in early C runtime
  arm64: mm: Take potential load offset into account when KASLR is off
  arm64: mm: get rid of kimage_vaddr global variable
  arm64: mm: Move PCI I/O emulation region above the vmemmap region
  arm64: mm: Move fixmap region above vmemmap region
  arm64: ptdump: Allow VMALLOC_END to be defined at boot
  arm64: ptdump: Discover start of vmemmap region at runtime
  arm64: vmemmap: Avoid base2 order of struct page size to dimension region
  arm64: mm: Reclaim unused vmemmap region for vmalloc use
  arm64: kaslr: Adjust randomization range dynamically
  arm64: kaslr: drop special case for ThunderX in kaslr_requires_kpti()
  arm64: kvm: honour 'nokaslr' command line option for the HYP VA space
  // Reimplement page table creation code in C
  arm64: kernel: Manage absolute relocations in code built under pi/
  arm64: kernel: Don't rely on objcopy to make code under pi/ __init
  arm64: head: move relocation handling to C code
  arm64: idreg-override: Omit non-NULL checks for override pointer
  arm64: idreg-override: Prepare for place relative reloc patching
  arm64: idreg-override: Avoid parameq() and parameqn()
  arm64: idreg-override: avoid strlen() to check for empty strings
  arm64: idreg-override: Avoid sprintf() for simple string concatenation
  arm64: idreg-override: Avoid kstrtou64() to parse a single hex digit
  arm64: idreg-override: Move to early mini C runtime
  arm64: kernel: Remove early fdt remap code
  arm64: head: Clear BSS and the kernel page tables in one go
  arm64: Move feature overrides into the BSS section
  arm64: head: Run feature override detection before mapping the kernel
  arm64: head: move dynamic shadow call stack patching into early C runtime
  arm64: kaslr: Use feature override instead of parsing the cmdline again
  arm64: idreg-override: Create a pseudo feature for rodata=off
  arm64: Add helpers to probe local CPU for PAC/BTI/E0PD support
  arm64: head: allocate more pages for the kernel mapping
  arm64: head: move memstart_offset_seed handling to C code
  arm64: head: Move early kernel mapping routines into C code
  arm64: mm: Use 48-bit virtual addressing for the permanent ID map
  arm64: pgtable: Decouple PGDIR size macros from PGD/PUD/PMD levels
  arm64: kernel: Create initial ID map from C code
  arm64: mm: avoid fixmap for early swapper_pg_dir updates
  arm64: mm: omit redundant remap of kernel image
  arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()"
  // Implement LPA2 support
  arm64: mm: Handle LVA support as a CPU feature
  arm64: mm: Add feature override support for LVA
  arm64: mm: Wire up TCR.DS bit to PTE shareability fields
  arm64: mm: Add LPA2 support to phys<->pte conversion routines
  arm64: mm: Add definitions to support 5 levels of paging
  arm64: mm: add LPA2 and 5 level paging support to G-to-nG conversion
  arm64: Enable LPA2 at boot if supported by the system
  arm64: mm: Add 5 level paging support to fixmap and swapper handling
  arm64: kasan: Reduce minimum shadow alignment and enable 5 level paging
  arm64: mm: Add support for folding PUDs at runtime
  arm64: ptdump: Disregard unaddressable VA space
  arm64: ptdump: Deal with translation levels folded at runtime
  arm64: kvm: avoid CONFIG_PGTABLE_LEVELS for runtime levels
  arm64: kvm: Limit HYP VA and host S2 range to 48 bits when LPA2 is in effect
  arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs
  arm64: defconfig: Enable LPA2 support
  // Allow WXN control to be enabled at boot
  mm: add arch hook to validate mmap() prot flags
  arm64: mm: add support for WXN memory translation attribute
Marc Zyngier (1):
  arm64: Turn kaslr_feature_override into a generic SW feature override

 arch/arm64/Kconfig | 34 +-
 arch/arm64/configs/defconfig | 2 +-
 arch/arm64/include/asm/assembler.h | 55 +--
 arch/arm64/include/asm/cpufeature.h | 102 +++++
 arch/arm64/include/asm/fixmap.h | 1 +
 arch/arm64/include/asm/kasan.h | 2 -
 arch/arm64/include/asm/kernel-pgtable.h | 104 ++---
 arch/arm64/include/asm/memory.h | 50 +--
 arch/arm64/include/asm/mman.h | 36 ++
 arch/arm64/include/asm/mmu.h | 26 +-
 arch/arm64/include/asm/mmu_context.h | 49 ++-
 arch/arm64/include/asm/pgalloc.h | 53 ++-
 arch/arm64/include/asm/pgtable-hwdef.h | 33 +-
 arch/arm64/include/asm/pgtable-prot.h | 18 +-
 arch/arm64/include/asm/pgtable-types.h | 6 +
 arch/arm64/include/asm/pgtable.h | 229 +++++++++-
 arch/arm64/include/asm/scs.h | 34 +-
 arch/arm64/include/asm/setup.h | 3 -
 arch/arm64/include/asm/sysreg.h | 2 +
 arch/arm64/include/asm/tlb.h | 3 +-
 arch/arm64/kernel/Makefile | 7 +-
 arch/arm64/kernel/cpu_errata.c | 2 +-
 arch/arm64/kernel/cpufeature.c | 90 ++--
 arch/arm64/kernel/head.S | 465 ++------------------
 arch/arm64/kernel/idreg-override.c | 322 --------------
 arch/arm64/kernel/image-vars.h | 32 ++
 arch/arm64/kernel/kaslr.c | 4 +-
 arch/arm64/kernel/module.c | 2 +-
 arch/arm64/kernel/pi/Makefile | 28 +-
 arch/arm64/kernel/pi/idreg-override.c | 396 +++++++++++++++++
 arch/arm64/kernel/pi/kaslr_early.c | 78 +---
 arch/arm64/kernel/pi/map_kernel.c | 284 ++++++++++++
 arch/arm64/kernel/pi/map_range.c | 104 +++++
 arch/arm64/kernel/{ => pi}/patch-scs.c | 36 +-
 arch/arm64/kernel/pi/pi.h | 30 ++
 arch/arm64/kernel/pi/relacheck.c | 130 ++++++
 arch/arm64/kernel/pi/relocate.c | 64 +++
 arch/arm64/kernel/setup.c | 22 -
 arch/arm64/kernel/sleep.S | 3 -
 arch/arm64/kernel/suspend.c | 2 +-
 arch/arm64/kernel/vmlinux.lds.S | 14 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 2 +
 arch/arm64/kvm/mmu.c | 22 +-
 arch/arm64/kvm/va_layout.c | 10 +-
 arch/arm64/mm/init.c | 2 +-
 arch/arm64/mm/kasan_init.c | 154 +++++--
 arch/arm64/mm/mmap.c | 4 +
 arch/arm64/mm/mmu.c | 268 ++++++-----
 arch/arm64/mm/pgd.c | 17 +-
 arch/arm64/mm/proc.S | 106 ++++-
 arch/arm64/mm/ptdump.c | 43 +-
 arch/arm64/tools/cpucaps | 1 +
 include/linux/mman.h | 15 +
 mm/mmap.c | 3 +
 54 files changed, 2259 insertions(+), 1345 deletions(-)
 delete mode 100644 arch/arm64/kernel/idreg-override.c
 create mode 100644 arch/arm64/kernel/pi/idreg-override.c
 create mode 100644 arch/arm64/kernel/pi/map_kernel.c
 create mode 100644 arch/arm64/kernel/pi/map_range.c
 rename arch/arm64/kernel/{ => pi}/patch-scs.c (89%)
 create mode 100644 arch/arm64/kernel/pi/pi.h
 create mode 100644 arch/arm64/kernel/pi/relacheck.c
 create mode 100644 arch/arm64/kernel/pi/relocate.c