From patchwork Fri Dec 16 16:21:36 2022
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 34005
Received: from alex-rivos.home (lfbn-lyo-1-450-160.w2-7.abo.wanadoo.fr.
[2.7.42.160]) by smtp.gmail.com with ESMTPSA id h16-20020a05600c351000b003d23a3b783bsm3444035wmq.10.2022.12.16.08.22.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 16 Dec 2022 08:22:52 -0800 (PST) From: Alexandre Ghiti To: Paul Walmsley , Palmer Dabbelt , Albert Ou , Andrey Ryabinin , Alexander Potapenko , Andrey Konovalov , Dmitry Vyukov , Vincenzo Frascino , Ard Biesheuvel , linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-efi@vger.kernel.org Cc: Alexandre Ghiti Subject: [PATCH 1/6] riscv: Split early and final KASAN population functions Date: Fri, 16 Dec 2022 17:21:36 +0100 Message-Id: <20221216162141.1701255-2-alexghiti@rivosinc.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com> References: <20221216162141.1701255-1-alexghiti@rivosinc.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1752388713579833022?= X-GMAIL-MSGID: =?utf-8?q?1752388713579833022?= This is a preliminary work that allows to make the code more understandable. Signed-off-by: Alexandre Ghiti --- arch/riscv/mm/kasan_init.c | 181 +++++++++++++++++++++++-------------- 1 file changed, 114 insertions(+), 67 deletions(-) diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c index a22e418dbd82..a7314ffe7d76 100644 --- a/arch/riscv/mm/kasan_init.c +++ b/arch/riscv/mm/kasan_init.c @@ -95,23 +95,13 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned } static void __init kasan_populate_pud(pgd_t *pgd, - unsigned long vaddr, unsigned long end, - bool early) + unsigned long vaddr, unsigned long end) { phys_addr_t phys_addr; pud_t *pudp, *base_pud; unsigned long next; - if (early) { - /* - * We can't use pgd_page_vaddr here as it would return a linear - * mapping address but it is not mapped yet, but when populating - * early_pg_dir, we need the physical address and when populating - * swapper_pg_dir, we need the kernel virtual address so use - * pt_ops facility. - */ - base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd))); - } else if (pgd_none(*pgd)) { + if (pgd_none(*pgd)) { base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); } else { base_pud = (pud_t *)pgd_page_vaddr(*pgd); @@ -128,16 +118,10 @@ static void __init kasan_populate_pud(pgd_t *pgd, next = pud_addr_end(vaddr, end); if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) { - if (early) { - phys_addr = __pa(((uintptr_t)kasan_early_shadow_pmd)); - set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE)); + phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE); + if (phys_addr) { + set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL)); continue; - } else { - phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE); - if (phys_addr) { - set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL)); - continue; - } } } @@ -150,32 +134,19 @@ static void __init kasan_populate_pud(pgd_t *pgd, * it entirely, memblock could allocate a page at a physical address * where KASAN is not populated yet and then we'd get a page fault. 
*/ - if (!early) - set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE)); + set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE)); } static void __init kasan_populate_p4d(pgd_t *pgd, - unsigned long vaddr, unsigned long end, - bool early) + unsigned long vaddr, unsigned long end) { phys_addr_t phys_addr; p4d_t *p4dp, *base_p4d; unsigned long next; - if (early) { - /* - * We can't use pgd_page_vaddr here as it would return a linear - * mapping address but it is not mapped yet, but when populating - * early_pg_dir, we need the physical address and when populating - * swapper_pg_dir, we need the kernel virtual address so use - * pt_ops facility. - */ - base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgd))); - } else { - base_p4d = (p4d_t *)pgd_page_vaddr(*pgd); - if (base_p4d == lm_alias(kasan_early_shadow_p4d)) - base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE); - } + base_p4d = (p4d_t *)pgd_page_vaddr(*pgd); + if (base_p4d == lm_alias(kasan_early_shadow_p4d)) + base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE); p4dp = base_p4d + p4d_index(vaddr); @@ -183,20 +154,14 @@ static void __init kasan_populate_p4d(pgd_t *pgd, next = p4d_addr_end(vaddr, end); if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE) { - if (early) { - phys_addr = __pa(((uintptr_t)kasan_early_shadow_pud)); - set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE)); + phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE); + if (phys_addr) { + set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL)); continue; - } else { - phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE); - if (phys_addr) { - set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL)); - continue; - } } } - kasan_populate_pud((pgd_t *)p4dp, vaddr, next, early); + kasan_populate_pud((pgd_t *)p4dp, vaddr, next); } while (p4dp++, vaddr = next, vaddr != end); /* @@ -205,8 +170,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd, * it entirely, memblock could allocate a page at a physical address * where KASAN is not populated yet and then we'd get a page fault. */ - if (!early) - set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE)); + set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE)); } #define kasan_early_shadow_pgd_next (pgtable_l5_enabled ? \ @@ -214,16 +178,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd, (pgtable_l4_enabled ? \ (uintptr_t)kasan_early_shadow_pud : \ (uintptr_t)kasan_early_shadow_pmd)) -#define kasan_populate_pgd_next(pgdp, vaddr, next, early) \ +#define kasan_populate_pgd_next(pgdp, vaddr, next) \ (pgtable_l5_enabled ? \ - kasan_populate_p4d(pgdp, vaddr, next, early) : \ + kasan_populate_p4d(pgdp, vaddr, next) : \ (pgtable_l4_enabled ? 
\ - kasan_populate_pud(pgdp, vaddr, next, early) : \ + kasan_populate_pud(pgdp, vaddr, next) : \ kasan_populate_pmd((pud_t *)pgdp, vaddr, next))) static void __init kasan_populate_pgd(pgd_t *pgdp, - unsigned long vaddr, unsigned long end, - bool early) + unsigned long vaddr, unsigned long end) { phys_addr_t phys_addr; unsigned long next; @@ -232,11 +195,7 @@ static void __init kasan_populate_pgd(pgd_t *pgdp, next = pgd_addr_end(vaddr, end); if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) { - if (early) { - phys_addr = __pa((uintptr_t)kasan_early_shadow_pgd_next); - set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE)); - continue; - } else if (pgd_page_vaddr(*pgdp) == + if (pgd_page_vaddr(*pgdp) == (unsigned long)lm_alias(kasan_early_shadow_pgd_next)) { /* * pgdp can't be none since kasan_early_init @@ -253,7 +212,95 @@ static void __init kasan_populate_pgd(pgd_t *pgdp, } } - kasan_populate_pgd_next(pgdp, vaddr, next, early); + kasan_populate_pgd_next(pgdp, vaddr, next); + } while (pgdp++, vaddr = next, vaddr != end); +} + +static void __init kasan_early_populate_pud(p4d_t *p4dp, + unsigned long vaddr, + unsigned long end) +{ + pud_t *pudp, *base_pud; + phys_addr_t phys_addr; + unsigned long next; + + if (!pgtable_l4_enabled) { + pudp = (pud_t *)p4dp; + } else { + base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp))); + pudp = base_pud + pud_index(vaddr); + } + + do { + next = pud_addr_end(vaddr, end); + + if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) && + (next - vaddr) >= PUD_SIZE) { + phys_addr = __pa((uintptr_t)kasan_early_shadow_pmd); + set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE)); + continue; + } + + BUG(); + } while (pudp++, vaddr = next, vaddr != end); +} + +static void __init kasan_early_populate_p4d(pgd_t *pgdp, + unsigned long vaddr, + unsigned long end) +{ + p4d_t *p4dp, *base_p4d; + phys_addr_t phys_addr; + unsigned long next; + + /* + * We can't use pgd_page_vaddr here as it would return a linear + * mapping address but it is not mapped yet, but when populating + * early_pg_dir, we need the physical address and when populating + * swapper_pg_dir, we need the kernel virtual address so use + * pt_ops facility. 
+ * Note that this test is then completely equivalent to + * p4dp = p4d_offset(pgdp, vaddr) + */ + if (!pgtable_l5_enabled) { + p4dp = (p4d_t *)pgdp; + } else { + base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp))); + p4dp = base_p4d + p4d_index(vaddr); + } + + do { + next = p4d_addr_end(vaddr, end); + + if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) && + (next - vaddr) >= P4D_SIZE) { + phys_addr = __pa((uintptr_t)kasan_early_shadow_pud); + set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE)); + continue; + } + + kasan_early_populate_pud(p4dp, vaddr, next); + } while (p4dp++, vaddr = next, vaddr != end); +} + +static void __init kasan_early_populate_pgd(pgd_t *pgdp, + unsigned long vaddr, + unsigned long end) +{ + phys_addr_t phys_addr; + unsigned long next; + + do { + next = pgd_addr_end(vaddr, end); + + if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) && + (next - vaddr) >= PGDIR_SIZE) { + phys_addr = __pa((uintptr_t)kasan_early_shadow_p4d); + set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE)); + continue; + } + + kasan_early_populate_p4d(pgdp, vaddr, next); } while (pgdp++, vaddr = next, vaddr != end); } @@ -290,16 +337,16 @@ asmlinkage void __init kasan_early_init(void) PAGE_TABLE)); } - kasan_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START), - KASAN_SHADOW_START, KASAN_SHADOW_END, true); + kasan_early_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START), + KASAN_SHADOW_START, KASAN_SHADOW_END); local_flush_tlb_all(); } void __init kasan_swapper_init(void) { - kasan_populate_pgd(pgd_offset_k(KASAN_SHADOW_START), - KASAN_SHADOW_START, KASAN_SHADOW_END, true); + kasan_early_populate_pgd(pgd_offset_k(KASAN_SHADOW_START), + KASAN_SHADOW_START, KASAN_SHADOW_END); local_flush_tlb_all(); } @@ -309,7 +356,7 @@ static void __init kasan_populate(void *start, void *end) unsigned long vaddr = (unsigned long)start & PAGE_MASK; unsigned long vend = PAGE_ALIGN((unsigned long)end); - kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend, false); + kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend); local_flush_tlb_all(); memset(start, KASAN_SHADOW_INIT, end - start); From patchwork Fri Dec 16 16:21:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandre Ghiti X-Patchwork-Id: 34006 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:e747:0:0:0:0:0 with SMTP id c7csp1064986wrn; Fri, 16 Dec 2022 08:28:40 -0800 (PST) X-Google-Smtp-Source: AMrXdXsmpsZqMdAgYKcC6OAfYuYU8DMUd+cCvKEAbUu/ebMH5n+j6Ypg/8tEPTHPskG7Ki75B/an X-Received: by 2002:a17:906:2802:b0:7c0:b3a3:9b70 with SMTP id r2-20020a170906280200b007c0b3a39b70mr2297512ejc.62.1671208120621; Fri, 16 Dec 2022 08:28:40 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1671208120; cv=none; d=google.com; s=arc-20160816; b=bU+dd4N2ADk+QyUD0ob9nTx0FR8BlcupoZMGfXIZunmyN+IavD+ULRHgj+nZX6HGvB UxUHZOz6nIFY3O86O1UhQPyyxUDTtBkr7ZwjK5Ru+/C10g2piUs00TIqwn9HnFwjiZ5X /Qj1gpQJvwPeDu2ovluDXnhvkMmukUWJM4z4UXWreqOS1iwSlF2VeGzzhJt0/1aRQzm8 glenQz4lhZ/K/SCSB/o+RQkOdIajf9MDfEBYOch3Nt/NwO41eiSUP5Qmurxmg6v4egdM 8o7N79gw7mkU8LsT2r1aRFmu93/QsTvsrny1l+6C2QssykIyW/dee772jFr8Z5NkqMQV nCpQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=yJQoFuKpH7yTfAbQG+gRnfL9cAhkMBaH0dMNqiGISLg=; b=Wl2iJ2CHBHtlz6LJB8YXqHvPT868BhVoH3f/0+I0nCrvn/px+T/AU+Mr7YTldzwhOO 
Received: from
alex-rivos.home (lfbn-lyo-1-450-160.w2-7.abo.wanadoo.fr. [2.7.42.160]) by smtp.gmail.com with ESMTPSA id k6-20020a5d66c6000000b00242271fd2besm2656662wrw.89.2022.12.16.08.23.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 16 Dec 2022 08:23:53 -0800 (PST) From: Alexandre Ghiti To: Paul Walmsley , Palmer Dabbelt , Albert Ou , Andrey Ryabinin , Alexander Potapenko , Andrey Konovalov , Dmitry Vyukov , Vincenzo Frascino , Ard Biesheuvel , linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-efi@vger.kernel.org Cc: Alexandre Ghiti Subject: [PATCH 2/6] riscv: Rework kasan population functions Date: Fri, 16 Dec 2022 17:21:37 +0100 Message-Id: <20221216162141.1701255-3-alexghiti@rivosinc.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com> References: <20221216162141.1701255-1-alexghiti@rivosinc.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1752388726467205983?= X-GMAIL-MSGID: =?utf-8?q?1752388726467205983?= Our previous kasan population implementation used to have the final kasan shadow region mapped with kasan_early_shadow_page, because we did not clean the early mapping and then we had to populate the kasan region "in-place" which made the code cumbersome. So now we clear the early mapping, establish a temporary mapping while we populate the kasan shadow region with just the kernel regions that will be used. This new version uses the "generic" way of going through a page table that may be folded at runtime (avoid the XXX_next macros). It was tested with outline instrumentation on an Ubuntu kernel configuration successfully. Signed-off-by: Alexandre Ghiti --- arch/riscv/mm/kasan_init.c | 358 +++++++++++++++++++------------------ 1 file changed, 184 insertions(+), 174 deletions(-) diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c index a7314ffe7d76..5c7b1d07faf2 100644 --- a/arch/riscv/mm/kasan_init.c +++ b/arch/riscv/mm/kasan_init.c @@ -18,58 +18,48 @@ * For sv39, the region is aligned on PGDIR_SIZE so we only need to populate * the page global directory with kasan_early_shadow_pmd. * - * For sv48 and sv57, the region is not aligned on PGDIR_SIZE so the mapping - * must be divided as follows: - * - the first PGD entry, although incomplete, is populated with - * kasan_early_shadow_pud/p4d - * - the PGD entries in the middle are populated with kasan_early_shadow_pud/p4d - * - the last PGD entry is shared with the kernel mapping so populated at the - * lower levels pud/p4d - * - * In addition, when shallow populating a kasan region (for example vmalloc), - * this region may also not be aligned on PGDIR size, so we must go down to the - * pud level too. + * For sv48 and sv57, the region start is aligned on PGDIR_SIZE whereas the end + * region is not and then we have to go down to the PUD level. 
*/ extern pgd_t early_pg_dir[PTRS_PER_PGD]; +pgd_t tmp_pg_dir[PTRS_PER_PGD] __page_aligned_bss; +p4d_t tmp_p4d[PTRS_PER_P4D] __page_aligned_bss; +pud_t tmp_pud[PTRS_PER_PUD] __page_aligned_bss; static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned long end) { phys_addr_t phys_addr; - pte_t *ptep, *base_pte; + pte_t *ptep, *p; - if (pmd_none(*pmd)) - base_pte = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE); - else - base_pte = (pte_t *)pmd_page_vaddr(*pmd); + if (pmd_none(*pmd)) { + p = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE); + set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(p)), PAGE_TABLE)); + } - ptep = base_pte + pte_index(vaddr); + ptep = pte_offset_kernel(pmd, vaddr); do { if (pte_none(*ptep)) { phys_addr = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE); set_pte(ptep, pfn_pte(PFN_DOWN(phys_addr), PAGE_KERNEL)); + memset(__va(phys_addr), KASAN_SHADOW_INIT, PAGE_SIZE); } } while (ptep++, vaddr += PAGE_SIZE, vaddr != end); - - set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(base_pte)), PAGE_TABLE)); } static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned long end) { phys_addr_t phys_addr; - pmd_t *pmdp, *base_pmd; + pmd_t *pmdp, *p; unsigned long next; if (pud_none(*pud)) { - base_pmd = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE); - } else { - base_pmd = (pmd_t *)pud_pgtable(*pud); - if (base_pmd == lm_alias(kasan_early_shadow_pmd)) - base_pmd = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE); + p = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE); + set_pud(pud, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE)); } - pmdp = base_pmd + pmd_index(vaddr); + pmdp = pmd_offset(pud, vaddr); do { next = pmd_addr_end(vaddr, end); @@ -78,41 +68,28 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned phys_addr = memblock_phys_alloc(PMD_SIZE, PMD_SIZE); if (phys_addr) { set_pmd(pmdp, pfn_pmd(PFN_DOWN(phys_addr), PAGE_KERNEL)); + memset(__va(phys_addr), KASAN_SHADOW_INIT, PMD_SIZE); continue; } } kasan_populate_pte(pmdp, vaddr, next); } while (pmdp++, vaddr = next, vaddr != end); - - /* - * Wait for the whole PGD to be populated before setting the PGD in - * the page table, otherwise, if we did set the PGD before populating - * it entirely, memblock could allocate a page at a physical address - * where KASAN is not populated yet and then we'd get a page fault. 
- */ - set_pud(pud, pfn_pud(PFN_DOWN(__pa(base_pmd)), PAGE_TABLE)); } -static void __init kasan_populate_pud(pgd_t *pgd, +static void __init kasan_populate_pud(p4d_t *p4d, unsigned long vaddr, unsigned long end) { phys_addr_t phys_addr; - pud_t *pudp, *base_pud; + pud_t *pudp, *p; unsigned long next; - if (pgd_none(*pgd)) { - base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); - } else { - base_pud = (pud_t *)pgd_page_vaddr(*pgd); - if (base_pud == lm_alias(kasan_early_shadow_pud)) { - base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); - memcpy(base_pud, (void *)kasan_early_shadow_pud, - sizeof(pud_t) * PTRS_PER_PUD); - } + if (p4d_none(*p4d)) { + p = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); + set_p4d(p4d, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE)); } - pudp = base_pud + pud_index(vaddr); + pudp = pud_offset(p4d, vaddr); do { next = pud_addr_end(vaddr, end); @@ -121,34 +98,28 @@ static void __init kasan_populate_pud(pgd_t *pgd, phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE); if (phys_addr) { set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL)); + memset(__va(phys_addr), KASAN_SHADOW_INIT, PUD_SIZE); continue; } } kasan_populate_pmd(pudp, vaddr, next); } while (pudp++, vaddr = next, vaddr != end); - - /* - * Wait for the whole PGD to be populated before setting the PGD in - * the page table, otherwise, if we did set the PGD before populating - * it entirely, memblock could allocate a page at a physical address - * where KASAN is not populated yet and then we'd get a page fault. - */ - set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE)); } static void __init kasan_populate_p4d(pgd_t *pgd, unsigned long vaddr, unsigned long end) { phys_addr_t phys_addr; - p4d_t *p4dp, *base_p4d; + p4d_t *p4dp, *p; unsigned long next; - base_p4d = (p4d_t *)pgd_page_vaddr(*pgd); - if (base_p4d == lm_alias(kasan_early_shadow_p4d)) - base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE); + if (pgd_none(*pgd)) { + p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE); + set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE)); + } - p4dp = base_p4d + p4d_index(vaddr); + p4dp = p4d_offset(pgd, vaddr); do { next = p4d_addr_end(vaddr, end); @@ -157,34 +128,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd, phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE); if (phys_addr) { set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL)); + memset(__va(phys_addr), KASAN_SHADOW_INIT, P4D_SIZE); continue; } } - kasan_populate_pud((pgd_t *)p4dp, vaddr, next); + kasan_populate_pud(p4dp, vaddr, next); } while (p4dp++, vaddr = next, vaddr != end); - - /* - * Wait for the whole P4D to be populated before setting the P4D in - * the page table, otherwise, if we did set the P4D before populating - * it entirely, memblock could allocate a page at a physical address - * where KASAN is not populated yet and then we'd get a page fault. - */ - set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE)); } -#define kasan_early_shadow_pgd_next (pgtable_l5_enabled ? \ - (uintptr_t)kasan_early_shadow_p4d : \ - (pgtable_l4_enabled ? \ - (uintptr_t)kasan_early_shadow_pud : \ - (uintptr_t)kasan_early_shadow_pmd)) -#define kasan_populate_pgd_next(pgdp, vaddr, next) \ - (pgtable_l5_enabled ? \ - kasan_populate_p4d(pgdp, vaddr, next) : \ - (pgtable_l4_enabled ? 
\ - kasan_populate_pud(pgdp, vaddr, next) : \ - kasan_populate_pmd((pud_t *)pgdp, vaddr, next))) - static void __init kasan_populate_pgd(pgd_t *pgdp, unsigned long vaddr, unsigned long end) { @@ -194,25 +146,86 @@ static void __init kasan_populate_pgd(pgd_t *pgdp, do { next = pgd_addr_end(vaddr, end); - if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) { - if (pgd_page_vaddr(*pgdp) == - (unsigned long)lm_alias(kasan_early_shadow_pgd_next)) { - /* - * pgdp can't be none since kasan_early_init - * initialized all KASAN shadow region with - * kasan_early_shadow_pud: if this is still the - * case, that means we can try to allocate a - * hugepage as a replacement. - */ - phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE); - if (phys_addr) { - set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL)); - continue; - } + if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) && + (next - vaddr) >= PGDIR_SIZE) { + phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE); + if (phys_addr) { + set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL)); + memset(__va(phys_addr), KASAN_SHADOW_INIT, PGDIR_SIZE); + continue; } } - kasan_populate_pgd_next(pgdp, vaddr, next); + kasan_populate_p4d(pgdp, vaddr, next); + } while (pgdp++, vaddr = next, vaddr != end); +} + +static void __init kasan_early_clear_pud(p4d_t *p4dp, + unsigned long vaddr, unsigned long end) +{ + pud_t *pudp, *base_pud; + unsigned long next; + + if (!pgtable_l4_enabled) { + pudp = (pud_t *)p4dp; + } else { + base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp))); + pudp = base_pud + pud_index(vaddr); + } + + do { + next = pud_addr_end(vaddr, end); + + if (IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) { + pud_clear(pudp); + continue; + } + + BUG(); + } while (pudp++, vaddr = next, vaddr != end); +} + +static void __init kasan_early_clear_p4d(pgd_t *pgdp, + unsigned long vaddr, unsigned long end) +{ + p4d_t *p4dp, *base_p4d; + unsigned long next; + + if (!pgtable_l5_enabled) { + p4dp = (p4d_t *)pgdp; + } else { + base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp))); + p4dp = base_p4d + p4d_index(vaddr); + } + + do { + next = p4d_addr_end(vaddr, end); + + if (pgtable_l4_enabled && IS_ALIGNED(vaddr, P4D_SIZE) && + (next - vaddr) >= P4D_SIZE) { + p4d_clear(p4dp); + continue; + } + + kasan_early_clear_pud(p4dp, vaddr, next); + } while (p4dp++, vaddr = next, vaddr != end); +} + +static void __init kasan_early_clear_pgd(pgd_t *pgdp, + unsigned long vaddr, unsigned long end) +{ + unsigned long next; + + do { + next = pgd_addr_end(vaddr, end); + + if (pgtable_l5_enabled && IS_ALIGNED(vaddr, PGDIR_SIZE) && + (next - vaddr) >= PGDIR_SIZE) { + pgd_clear(pgdp); + continue; + } + + kasan_early_clear_p4d(pgdp, vaddr, next); } while (pgdp++, vaddr = next, vaddr != end); } @@ -357,117 +370,64 @@ static void __init kasan_populate(void *start, void *end) unsigned long vend = PAGE_ALIGN((unsigned long)end); kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend); - - local_flush_tlb_all(); - memset(start, KASAN_SHADOW_INIT, end - start); } -static void __init kasan_shallow_populate_pmd(pgd_t *pgdp, +static void __init kasan_shallow_populate_pud(p4d_t *p4d, unsigned long vaddr, unsigned long end) { unsigned long next; - pmd_t *pmdp, *base_pmd; - bool is_kasan_pte; - - base_pmd = (pmd_t *)pgd_page_vaddr(*pgdp); - pmdp = base_pmd + pmd_index(vaddr); - - do { - next = pmd_addr_end(vaddr, end); - is_kasan_pte = (pmd_pgtable(*pmdp) == lm_alias(kasan_early_shadow_pte)); - - if (is_kasan_pte) - pmd_clear(pmdp); - } while 
(pmdp++, vaddr = next, vaddr != end); -} - -static void __init kasan_shallow_populate_pud(pgd_t *pgdp, - unsigned long vaddr, unsigned long end) -{ - unsigned long next; - pud_t *pudp, *base_pud; - pmd_t *base_pmd; - bool is_kasan_pmd; - - base_pud = (pud_t *)pgd_page_vaddr(*pgdp); - pudp = base_pud + pud_index(vaddr); + void *p; + pud_t *pud_k = pud_offset(p4d, vaddr); do { next = pud_addr_end(vaddr, end); - is_kasan_pmd = (pud_pgtable(*pudp) == lm_alias(kasan_early_shadow_pmd)); - if (!is_kasan_pmd) - continue; - - base_pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE); - set_pud(pudp, pfn_pud(PFN_DOWN(__pa(base_pmd)), PAGE_TABLE)); - - if (IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) + if (pud_none(*pud_k)) { + p = memblock_alloc(PAGE_SIZE, PAGE_SIZE); + set_pud(pud_k, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE)); continue; + } - memcpy(base_pmd, (void *)kasan_early_shadow_pmd, PAGE_SIZE); - kasan_shallow_populate_pmd((pgd_t *)pudp, vaddr, next); - } while (pudp++, vaddr = next, vaddr != end); + BUG(); + } while (pud_k++, vaddr = next, vaddr != end); } -static void __init kasan_shallow_populate_p4d(pgd_t *pgdp, +static void __init kasan_shallow_populate_p4d(pgd_t *pgd, unsigned long vaddr, unsigned long end) { unsigned long next; - p4d_t *p4dp, *base_p4d; - pud_t *base_pud; - bool is_kasan_pud; - - base_p4d = (p4d_t *)pgd_page_vaddr(*pgdp); - p4dp = base_p4d + p4d_index(vaddr); + void *p; + p4d_t *p4d_k = p4d_offset(pgd, vaddr); do { next = p4d_addr_end(vaddr, end); - is_kasan_pud = (p4d_pgtable(*p4dp) == lm_alias(kasan_early_shadow_pud)); - - if (!is_kasan_pud) - continue; - - base_pud = memblock_alloc(PAGE_SIZE, PAGE_SIZE); - set_p4d(p4dp, pfn_p4d(PFN_DOWN(__pa(base_pud)), PAGE_TABLE)); - if (IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE) + if (p4d_none(*p4d_k)) { + p = memblock_alloc(PAGE_SIZE, PAGE_SIZE); + set_p4d(p4d_k, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE)); continue; + } - memcpy(base_pud, (void *)kasan_early_shadow_pud, PAGE_SIZE); - kasan_shallow_populate_pud((pgd_t *)p4dp, vaddr, next); - } while (p4dp++, vaddr = next, vaddr != end); + kasan_shallow_populate_pud(p4d_k, vaddr, end); + } while (p4d_k++, vaddr = next, vaddr != end); } -#define kasan_shallow_populate_pgd_next(pgdp, vaddr, next) \ - (pgtable_l5_enabled ? \ - kasan_shallow_populate_p4d(pgdp, vaddr, next) : \ - (pgtable_l4_enabled ? 
\ - kasan_shallow_populate_pud(pgdp, vaddr, next) : \ - kasan_shallow_populate_pmd(pgdp, vaddr, next))) - static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long end) { unsigned long next; void *p; pgd_t *pgd_k = pgd_offset_k(vaddr); - bool is_kasan_pgd_next; do { next = pgd_addr_end(vaddr, end); - is_kasan_pgd_next = (pgd_page_vaddr(*pgd_k) == - (unsigned long)lm_alias(kasan_early_shadow_pgd_next)); - if (is_kasan_pgd_next) { + if (pgd_none(*pgd_k)) { p = memblock_alloc(PAGE_SIZE, PAGE_SIZE); set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE)); - } - - if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) continue; + } - memcpy(p, (void *)kasan_early_shadow_pgd_next, PAGE_SIZE); - kasan_shallow_populate_pgd_next(pgd_k, vaddr, next); + kasan_shallow_populate_p4d(pgd_k, vaddr, next); } while (pgd_k++, vaddr = next, vaddr != end); } @@ -477,7 +437,37 @@ static void __init kasan_shallow_populate(void *start, void *end) unsigned long vend = PAGE_ALIGN((unsigned long)end); kasan_shallow_populate_pgd(vaddr, vend); - local_flush_tlb_all(); +} + +void create_tmp_mapping(void) +{ + void *ptr; + p4d_t *base_p4d; + + /* + * We need to clean the early mapping: this is hard to achieve "in-place", + * so install a temporary mapping like arm64 and x86 do. + */ + memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(pgd_t) * PTRS_PER_PGD); + + /* Copy the last p4d since it is shared with the kernel mapping. */ + if (pgtable_l5_enabled) { + ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END)); + memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D); + set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)], + pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE)); + base_p4d = tmp_p4d; + } else { + base_p4d = (p4d_t *)tmp_pg_dir; + } + + /* Copy the last pud since it is shared with the kernel mapping. 
*/ + if (pgtable_l4_enabled) { + ptr = (pud_t *)p4d_page_vaddr(*(base_p4d + p4d_index(KASAN_SHADOW_END))); + memcpy(tmp_pud, ptr, sizeof(pud_t) * PTRS_PER_PUD); + set_p4d(&base_p4d[p4d_index(KASAN_SHADOW_END)], + pfn_p4d(PFN_DOWN(__pa(tmp_pud)), PAGE_TABLE)); + } } void __init kasan_init(void) @@ -485,10 +475,27 @@ void __init kasan_init(void) phys_addr_t p_start, p_end; u64 i; - if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) + create_tmp_mapping(); + csr_write(CSR_SATP, PFN_DOWN(__pa(tmp_pg_dir)) | satp_mode); + + kasan_early_clear_pgd(pgd_offset_k(KASAN_SHADOW_START), + KASAN_SHADOW_START, KASAN_SHADOW_END); + + kasan_populate_early_shadow((void *)kasan_mem_to_shadow((void *)FIXADDR_START), + (void *)kasan_mem_to_shadow((void *)VMALLOC_START)); + + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) { kasan_shallow_populate( (void *)kasan_mem_to_shadow((void *)VMALLOC_START), (void *)kasan_mem_to_shadow((void *)VMALLOC_END)); + /* Shallow populate modules and BPF which are vmalloc-allocated */ + kasan_shallow_populate( + (void *)kasan_mem_to_shadow((void *)MODULES_VADDR), + (void *)kasan_mem_to_shadow((void *)MODULES_END)); + } else { + kasan_populate_early_shadow((void *)kasan_mem_to_shadow((void *)VMALLOC_START), + (void *)kasan_mem_to_shadow((void *)VMALLOC_END)); + } /* Populate the linear mapping */ for_each_mem_range(i, &p_start, &p_end) { @@ -501,8 +508,8 @@ void __init kasan_init(void) kasan_populate(kasan_mem_to_shadow(start), kasan_mem_to_shadow(end)); } - /* Populate kernel, BPF, modules mapping */ - kasan_populate(kasan_mem_to_shadow((const void *)MODULES_VADDR), + /* Populate kernel */ + kasan_populate(kasan_mem_to_shadow((const void *)MODULES_END), kasan_mem_to_shadow((const void *)MODULES_VADDR + SZ_2G)); for (i = 0; i < PTRS_PER_PTE; i++) @@ -513,4 +520,7 @@ void __init kasan_init(void) memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE); init_task.kasan_depth = 0; + + csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode); + local_flush_tlb_all(); } From patchwork Fri Dec 16 16:21:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandre Ghiti X-Patchwork-Id: 34007 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:e747:0:0:0:0:0 with SMTP id c7csp1065014wrn; Fri, 16 Dec 2022 08:28:43 -0800 (PST) X-Google-Smtp-Source: AA0mqf6VXhrL81oMid2AZejc78PXqbVCcfGARkL62LQDJyEBob+OCRVU47OIf0+iPnKXdUpPvnOp X-Received: by 2002:a17:906:f106:b0:7c0:aea2:e910 with SMTP id gv6-20020a170906f10600b007c0aea2e910mr39469720ejb.3.1671208123512; Fri, 16 Dec 2022 08:28:43 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1671208123; cv=none; d=google.com; s=arc-20160816; b=HUaMko0oUqkJW1vKsbG/50YgFNFl7oWRuVR0kBaGQkAZRJLgCwshaeDL59h5Nm7iDW 64mmIonvOwbsRrLW25XdkCYCZimpPucUcS6YjQH5wbKGOdY54y4CXUYBAVHfR/BoqzR4 KAyKV/ZAJpF15s3ZncH1xZcGaAxmiy8INioeoRC2+/FDyMPESna1lMOlhJEsS813L/yM 2ssoN2XDOtdiBtFDrw8hmgmY17KuqYf0t6Wi9PDFkXsxlrkkD8Tboz2WE00/F6pXzdIs +w1ZWEycPNkF8NRvceoqXYek3T6Ig93QKl/k0UClYeRQEXzye7RhYSAP+lJui9/pdBQG qEnA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=xKAmpdKbBscT+wI0zsC3ixLztutjRoSSvZIlGNHmM4Q=; b=Wvr1wANpb/r1u1ZTEwxOcVSKU+Rux0HnWi15ON6O63V+weQvvuLBh5Z0aZmcD36PhA ZEN4sIXQo0WPUxfhAWJwzswLtP87ci6roSq2gncSyNNGSM7lNmmUOTi6oxrGckyPN4nQ RAx68j2af4tOZ+rfuIa5ashU3NACMherqgNj4lGtpyZrh4k2W9fT57PMdopJtLjCfXKj 
From: Alexandre Ghiti
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-efi@vger.kernel.org
Cc: Alexandre Ghiti
Subject: [PATCH 3/6] riscv: Move DTB_EARLY_BASE_VA to the kernel address space
Date: Fri, 16 Dec 2022 17:21:38 +0100
Message-Id: <20221216162141.1701255-4-alexghiti@rivosinc.com>
In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com>
References: <20221216162141.1701255-1-alexghiti@rivosinc.com>

The early fdt virtual address must lie in the kernel address space for inline KASAN instrumentation to work: otherwise KASAN tries to dereference an address that is not mapped, since KASAN only covers the *kernel* address space, not userspace. Simply use the very first address of the kernel address space for the early fdt mapping. This allowed an Ubuntu kernel configuration to boot successfully with inline instrumentation.
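As a worked illustration (an editor's sketch derived from the standard sv39 layout, not text taken from the patch), the new definition of DTB_EARLY_BASE_VA in the diff below evaluates to the first kernel virtual address:

/*
 * Illustration only, assuming sv39: PGDIR_SIZE = 1 GiB, PTRS_PER_PGD = 512,
 * ADDRESS_SPACE_END = 0xffffffffffffffff.
 *
 *   DTB_EARLY_BASE_VA = ADDRESS_SPACE_END - (PTRS_PER_PGD / 2 * PGDIR_SIZE) + 1
 *                     = 0xffffffffffffffff - 256 GiB + 1
 *                     = 0xffffffc000000000
 *
 * i.e. the start of the kernel half of the virtual address space, which KASAN
 * shadows, instead of the old value PGDIR_SIZE (0x40000000, a user-space
 * address for which no shadow is mapped).
 */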
Signed-off-by: Alexandre Ghiti --- arch/riscv/mm/init.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 58bcf395efdc..d5aa6ca732f2 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -57,7 +57,7 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] EXPORT_SYMBOL(empty_zero_page); extern char _start[]; -#define DTB_EARLY_BASE_VA PGDIR_SIZE +#define DTB_EARLY_BASE_VA (ADDRESS_SPACE_END - (PTRS_PER_PGD / 2 * PGDIR_SIZE) + 1) void *_dtb_early_va __initdata; uintptr_t _dtb_early_pa __initdata; From patchwork Fri Dec 16 16:21:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandre Ghiti X-Patchwork-Id: 34008 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:e747:0:0:0:0:0 with SMTP id c7csp1065289wrn; Fri, 16 Dec 2022 08:29:16 -0800 (PST) X-Google-Smtp-Source: AA0mqf65EJlmK9FtQ79FGys1jpg32wn+9oXuHyEqeX6z4g05ZsdAMfDg9qlUZIyb64LQtv2sOOFO X-Received: by 2002:a17:907:1110:b0:7c0:fd1a:79ef with SMTP id qu16-20020a170907111000b007c0fd1a79efmr27590959ejb.48.1671208156633; Fri, 16 Dec 2022 08:29:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1671208156; cv=none; d=google.com; s=arc-20160816; b=tA+szKRZhoDJBtMX/ClfS5LxSXg+ZQiylZRAwxuHWKiLCq0I4jLp7Ngy/FXj4gdCSB vS1MZH+XTPSj9JgZfwpo8CnYnHupM6B6lthhVE0jYdvGAqcwqkL3Zz12zo1t8wQaSxDK Of8+ol8OvZuvCPsLhdMjloszcNDfrTZZZe0YV29TnbdnfdHdSGiQ3VOdqSI+H9DtQpVH OkHUXiCugNXdeoFFTO92yIwtclDnSVBdpW9IFBygNxShho+UqsW5KPLZqUK72F9BHXNf oKfRRN0QgkMvZaUb7N+Mr0o46GWrTo3Dka0g/A0IaHKz/pnwunQ+OC+gb8A9Sr5ST45/ OiAg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=Ny2TExHkEZdNiam4GgngpmtWZKCcqjrEUo0Wf99rwww=; b=bvdXw7p2dzBHqLi8SpKKoXxuX9cQ985FQQrmIQzqOjf2WVIsk3KSvScpmzKXuQrG3w uMu5q0m2ZerBJW0sBmsac8fVzSUVkEBjrRUNtyAtC8daGpd9dnXu9OSVIY8SzlwjjMNR DVceHJg8/CG9tQufZuR0gmemcXH/9Dp5ng27rJWUqJebKvB9/Y/86J8HdHK0SprjwHen aCnalkSdUQ3UEZO1+Zg0uqBCMenrSFmybyXfYAUlsMdATZ5T9auGCyhlgjW062sJVvQj 6NPQ8F3MbYM754HcxEFwRmv5vom8aOR3ATDZou3DXwO2aI79TGXYpVeg/K1p4AKBwWBf jm6w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@rivosinc-com.20210112.gappssmtp.com header.s=20210112 header.b=6wp5stVF; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP; Fri, 16 Dec 2022 08:29:16 -0800 (PST)
Received: from alex-rivos.home (lfbn-lyo-1-450-160.w2-7.abo.wanadoo.fr.
[2.7.42.160]) by smtp.gmail.com with ESMTPSA id j9-20020a05600c190900b003b4cba4ef71sm11838404wmq.41.2022.12.16.08.25.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 16 Dec 2022 08:25:55 -0800 (PST) From: Alexandre Ghiti To: Paul Walmsley , Palmer Dabbelt , Albert Ou , Andrey Ryabinin , Alexander Potapenko , Andrey Konovalov , Dmitry Vyukov , Vincenzo Frascino , Ard Biesheuvel , linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-efi@vger.kernel.org Cc: Alexandre Ghiti Subject: [PATCH 4/6] riscv: Fix EFI stub usage of KASAN instrumented string functions Date: Fri, 16 Dec 2022 17:21:39 +0100 Message-Id: <20221216162141.1701255-5-alexghiti@rivosinc.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com> References: <20221216162141.1701255-1-alexghiti@rivosinc.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1752388763865814762?= X-GMAIL-MSGID: =?utf-8?q?1752388763865814762?= The EFI stub must not use any KASAN instrumented code as the kernel proper did not initialize the thread pointer and the mapping for the KASAN shadow region. Avoid using generic string functions by copying stub dependencies from lib/string.c to drivers/firmware/efi/libstub/string.c as RISC-V does not implement architecture-specific versions of those functions. Signed-off-by: Alexandre Ghiti --- arch/riscv/kernel/image-vars.h | 8 -- drivers/firmware/efi/libstub/Makefile | 7 +- drivers/firmware/efi/libstub/string.c | 133 ++++++++++++++++++++++++++ 3 files changed, 137 insertions(+), 11 deletions(-) diff --git a/arch/riscv/kernel/image-vars.h b/arch/riscv/kernel/image-vars.h index d6e5f739905e..15616155008c 100644 --- a/arch/riscv/kernel/image-vars.h +++ b/arch/riscv/kernel/image-vars.h @@ -23,14 +23,6 @@ * linked at. The routines below are all implemented in assembler in a * position independent manner */ -__efistub_memcmp = memcmp; -__efistub_memchr = memchr; -__efistub_strlen = strlen; -__efistub_strnlen = strnlen; -__efistub_strcmp = strcmp; -__efistub_strncmp = strncmp; -__efistub_strrchr = strrchr; - __efistub__start = _start; __efistub__start_kernel = _start_kernel; __efistub__end = _end; diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile index b1601aad7e1a..031d2268bab5 100644 --- a/drivers/firmware/efi/libstub/Makefile +++ b/drivers/firmware/efi/libstub/Makefile @@ -130,9 +130,10 @@ STUBCOPY_RELOC-$(CONFIG_ARM) := R_ARM_ABS # also means that we need to be extra careful to make sure that the stub does # not rely on any absolute symbol references, considering that the virtual # kernel mapping that the linker uses is not active yet when the stub is -# executing. So build all C dependencies of the EFI stub into libstub, and do -# a verification pass to see if any absolute relocations exist in any of the -# object files. +# executing. In addition, we need to make sure that the stub does not use KASAN +# instrumented code like the generic string functions. 
So build all C +# dependencies of the EFI stub into libstub, and do a verification pass to see +# if any absolute relocations exist in any of the object files. # STUBCOPY_FLAGS-$(CONFIG_ARM64) += --prefix-alloc-sections=.init \ --prefix-symbols=__efistub_ diff --git a/drivers/firmware/efi/libstub/string.c b/drivers/firmware/efi/libstub/string.c index 5d13e43869ee..5154ae6e7f10 100644 --- a/drivers/firmware/efi/libstub/string.c +++ b/drivers/firmware/efi/libstub/string.c @@ -113,3 +113,136 @@ long simple_strtol(const char *cp, char **endp, unsigned int base) return simple_strtoull(cp, endp, base); } + +#ifndef __HAVE_ARCH_STRLEN +/** + * strlen - Find the length of a string + * @s: The string to be sized + */ +size_t strlen(const char *s) +{ + const char *sc; + + for (sc = s; *sc != '\0'; ++sc) + /* nothing */; + return sc - s; +} +EXPORT_SYMBOL(strlen); +#endif + +#ifndef __HAVE_ARCH_STRNLEN +/** + * strnlen - Find the length of a length-limited string + * @s: The string to be sized + * @count: The maximum number of bytes to search + */ +size_t strnlen(const char *s, size_t count) +{ + const char *sc; + + for (sc = s; count-- && *sc != '\0'; ++sc) + /* nothing */; + return sc - s; +} +EXPORT_SYMBOL(strnlen); +#endif + +#ifndef __HAVE_ARCH_STRCMP +/** + * strcmp - Compare two strings + * @cs: One string + * @ct: Another string + */ +int strcmp(const char *cs, const char *ct) +{ + unsigned char c1, c2; + + while (1) { + c1 = *cs++; + c2 = *ct++; + if (c1 != c2) + return c1 < c2 ? -1 : 1; + if (!c1) + break; + } + return 0; +} +EXPORT_SYMBOL(strcmp); +#endif + +#ifndef __HAVE_ARCH_STRRCHR +/** + * strrchr - Find the last occurrence of a character in a string + * @s: The string to be searched + * @c: The character to search for + */ +char *strrchr(const char *s, int c) +{ + const char *last = NULL; + do { + if (*s == (char)c) + last = s; + } while (*s++); + return (char *)last; +} +EXPORT_SYMBOL(strrchr); +#endif + +#ifndef __HAVE_ARCH_MEMCMP +/** + * memcmp - Compare two areas of memory + * @cs: One area of memory + * @ct: Another area of memory + * @count: The size of the area. + */ +#undef memcmp +__visible int memcmp(const void *cs, const void *ct, size_t count) +{ + const unsigned char *su1, *su2; + int res = 0; + +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS + if (count >= sizeof(unsigned long)) { + const unsigned long *u1 = cs; + const unsigned long *u2 = ct; + do { + if (get_unaligned(u1) != get_unaligned(u2)) + break; + u1++; + u2++; + count -= sizeof(unsigned long); + } while (count >= sizeof(unsigned long)); + cs = u1; + ct = u2; + } +#endif + for (su1 = cs, su2 = ct; 0 < count; ++su1, ++su2, count--) + if ((res = *su1 - *su2) != 0) + break; + return res; +} +EXPORT_SYMBOL(memcmp); +#endif + +#ifndef __HAVE_ARCH_MEMCHR +/** + * memchr - Find a character in an area of memory. + * @s: The memory area + * @c: The byte to search for + * @n: The size of the area. 
+ *
+ * returns the address of the first occurrence of @c, or %NULL
+ * if @c is not found
+ */
+void *memchr(const void *s, int c, size_t n)
+{
+	const unsigned char *p = s;
+	while (n-- != 0) {
+		if ((unsigned char)c == *p++) {
+			return (void *)(p - 1);
+		}
+	}
+	return NULL;
+}
+EXPORT_SYMBOL(memchr);
+#endif
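To see why the stub cannot keep calling the kernel's string routines when KASAN is on: the compiler wraps every load and store in instrumented code with a check against KASAN's shadow memory, and that shadow mapping (like the thread pointer the commit message mentions) is only set up later by the kernel proper. The standalone sketch below models that behaviour; the shadow pool, helper names and constants are invented for illustration and are not the kernel's implementation.

/*
 * Standalone model of a KASAN-style shadow check (illustrative only: the
 * pool, the shadow array and the helpers below are made up, not kernel code).
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SHADOW_SCALE_SHIFT 3	/* one shadow byte covers 8 bytes of memory */
#define POOL_SIZE 4096

static uint8_t pool[POOL_SIZE];				/* "kernel" memory */
static int8_t shadow[POOL_SIZE >> SHADOW_SCALE_SHIFT];	/* its shadow */
static bool shadow_mapped;	/* set once "the kernel proper" maps the shadow */

static int8_t *mem_to_shadow(const uint8_t *addr)
{
	return &shadow[(size_t)(addr - pool) >> SHADOW_SCALE_SHIFT];
}

/* Roughly what the compiler-inserted check before a one-byte load does. */
static void check_byte(const uint8_t *addr)
{
	assert(shadow_mapped);			/* no shadow mapping yet -> fault */
	assert(*mem_to_shadow(addr) == 0);	/* poisoned shadow -> KASAN report */
}

/* An "instrumented" strlen(): every byte it reads goes through the check. */
static size_t instrumented_strlen(const uint8_t *s)
{
	const uint8_t *sc;

	for (sc = s; check_byte(sc), *sc != '\0'; ++sc)
		/* nothing */;
	return sc - s;
}

int main(void)
{
	strcpy((char *)pool, "efistub");

	/*
	 * With shadow_mapped still false this call would trip the first
	 * assert; that is the situation the EFI stub runs in. Uncomment:
	 * instrumented_strlen(pool);
	 */

	shadow_mapped = true;	/* the kernel proper has initialized KASAN */
	printf("%zu\n", instrumented_strlen(pool));
	return 0;
}

Building the copied, uninstrumented implementations into libstub sidesteps the problem: the stub never executes a shadow check before the shadow region exists.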
From patchwork Fri Dec 16 16:21:40 2022
From: Alexandre Ghiti
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin,
    Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
    Ard Biesheuvel, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    linux-efi@vger.kernel.org
Cc: Alexandre Ghiti
Subject: [PATCH 5/6] riscv: Fix ptdump when KASAN is enabled
Date: Fri, 16 Dec 2022 17:21:40 +0100
Message-Id: <20221216162141.1701255-6-alexghiti@rivosinc.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com>
References: <20221216162141.1701255-1-alexghiti@rivosinc.com>
MIME-Version: 1.0

The KASAN shadow region was moved next to the kernel mapping, but the ptdump
code was not updated, which breaks the dump of the kernel page table. Fix
this by moving the KASAN shadow region accordingly in ptdump.

Fixes: f7ae02333d13 ("riscv: Move KASAN mapping next to the kernel mapping")
Signed-off-by: Alexandre Ghiti
---
 arch/riscv/mm/ptdump.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
index 830e7de65e3a..20a9f991a6d7 100644
--- a/arch/riscv/mm/ptdump.c
+++ b/arch/riscv/mm/ptdump.c
@@ -59,10 +59,6 @@ struct ptd_mm_info {
 };
 
 enum address_markers_idx {
-#ifdef CONFIG_KASAN
-	KASAN_SHADOW_START_NR,
-	KASAN_SHADOW_END_NR,
-#endif
 	FIXMAP_START_NR,
 	FIXMAP_END_NR,
 	PCI_IO_START_NR,
@@ -74,6 +70,10 @@ enum address_markers_idx {
 	VMALLOC_START_NR,
 	VMALLOC_END_NR,
 	PAGE_OFFSET_NR,
+#ifdef CONFIG_KASAN
+	KASAN_SHADOW_START_NR,
+	KASAN_SHADOW_END_NR,
+#endif
 #ifdef CONFIG_64BIT
 	MODULES_MAPPING_NR,
 	KERNEL_MAPPING_NR,
@@ -82,10 +82,6 @@ enum address_markers_idx {
 };
 
 static struct addr_marker address_markers[] = {
-#ifdef CONFIG_KASAN
-	{0, "Kasan shadow start"},
-	{0, "Kasan shadow end"},
-#endif
 	{0, "Fixmap start"},
 	{0, "Fixmap end"},
 	{0, "PCI I/O start"},
@@ -97,6 +93,10 @@ static struct addr_marker address_markers[] = {
 	{0, "vmalloc() area"},
 	{0, "vmalloc() end"},
 	{0, "Linear mapping"},
+#ifdef CONFIG_KASAN
+	{0, "Kasan shadow start"},
+	{0, "Kasan shadow end"},
+#endif
 #ifdef CONFIG_64BIT
 	{0, "Modules/BPF mapping"},
 	{0, "Kernel mapping"},
@@ -362,10 +362,6 @@ static int __init ptdump_init(void)
 {
 	unsigned int i, j;
 
-#ifdef CONFIG_KASAN
-	address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
-	address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
-#endif
 	address_markers[FIXMAP_START_NR].start_address = FIXADDR_START;
 	address_markers[FIXMAP_END_NR].start_address = FIXADDR_TOP;
 	address_markers[PCI_IO_START_NR].start_address = PCI_IO_START;
@@ -377,6 +373,10 @@ static int __init ptdump_init(void)
 	address_markers[VMALLOC_START_NR].start_address = VMALLOC_START;
 	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
 	address_markers[PAGE_OFFSET_NR].start_address = PAGE_OFFSET;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
+	address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
+#endif
 #ifdef CONFIG_64BIT
 	address_markers[MODULES_MAPPING_NR].start_address = MODULES_VADDR;
 	address_markers[KERNEL_MAPPING_NR].start_address = kernel_map.virt_addr;
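For context on why the entries move rather than simply gain new values: the ptdump code walks the page tables from low to high virtual addresses and only ever advances through address_markers[] in array order, so the table has to stay sorted by start_address. Once the KASAN shadow sits next to the kernel mapping, markers left at the front of the table are out of order and the dump comes out mislabelled. A minimal standalone model of that assumption follows; the addresses are invented for illustration and are not the real RISC-V layout.

/*
 * Minimal model of a ptdump-style marker walk (illustrative only: simplified
 * types and made-up addresses, not the kernel's ptdump implementation).
 * The walker only moves forward through the marker table, so a marker whose
 * start_address is listed out of order is silently skipped.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct addr_marker {
	uint64_t start_address;
	const char *name;
};

/* Sorted low to high, with the KASAN shadow above the linear mapping. */
static const struct addr_marker markers[] = {
	{ 0x3000000000ULL, "vmalloc() area" },
	{ 0x4000000000ULL, "Linear mapping" },
	{ 0x6000000000ULL, "Kasan shadow start" },
	{ 0x7000000000ULL, "Kasan shadow end" },
	{ 0x8000000000ULL, "Kernel mapping" },
};

#define NR_MARKERS (sizeof(markers) / sizeof(markers[0]))

static void walk(uint64_t start, uint64_t end, uint64_t step)
{
	const struct addr_marker *m = markers;
	uint64_t addr;

	for (addr = start; addr < end; addr += step) {
		/* Advance to the region containing this address... */
		while (m + 1 < markers + NR_MARKERS &&
		       addr >= (m + 1)->start_address)
			m++;
		/* ...and label the start of each region as it is crossed. */
		if (addr == m->start_address)
			printf("---[ %s ]--- at 0x%llx\n", m->name,
			       (unsigned long long)addr);
	}
}

int main(void)
{
	walk(0x3000000000ULL, 0x9000000000ULL, 0x1000000000ULL);
	return 0;
}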
From patchwork Fri Dec 16 16:21:41 2022
From: Alexandre Ghiti
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin,
    Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
    Ard Biesheuvel, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    linux-efi@vger.kernel.org
Cc: Alexandre Ghiti
Subject: [PATCH 6/6] riscv: Unconditionally select KASAN_VMALLOC if KASAN
Date: Fri, 16 Dec 2022 17:21:41 +0100
Message-Id: <20221216162141.1701255-7-alexghiti@rivosinc.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com>
References: <20221216162141.1701255-1-alexghiti@rivosinc.com>
MIME-Version: 1.0

If KASAN is enabled, VMAP_STACK depends on KASAN_VMALLOC, so select
KASAN_VMALLOC together with KASAN so that VMAP_STACK can be enabled by
default.

Signed-off-by: Alexandre Ghiti
---
 arch/riscv/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 6b48a3ae9843..2be0d0d230df 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -113,6 +113,7 @@ config RISCV
 	select HAVE_RSEQ
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
+	select KASAN_VMALLOC if KASAN
 	select MODULES_USE_ELF_RELA if MODULES
 	select MODULE_SECTIONS if MODULES
 	select OF
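The constraint this select satisfies, in the commit's own terms: with VMAP_STACK the task stacks live in the vmalloc area, so VMAP_STACK is only allowed when KASAN is either disabled or able to cover vmalloc addresses via KASAN_VMALLOC. A tiny sketch of that constraint; the struct and helper below are invented for illustration, and the real dependency is expressed in Kconfig, not C.

/*
 * Conceptual model of the Kconfig dependency (illustrative only; the types
 * and helper are made up, the real check lives in the generic Kconfig logic).
 */
#include <stdbool.h>
#include <stdio.h>

struct riscv_config {
	bool kasan;		/* CONFIG_KASAN */
	bool kasan_vmalloc;	/* CONFIG_KASAN_VMALLOC */
};

/* Mirrors "if KASAN is enabled, VMAP_STACK depends on KASAN_VMALLOC". */
static bool vmap_stack_allowed(const struct riscv_config *c)
{
	return !c->kasan || c->kasan_vmalloc;
}

int main(void)
{
	/* Before: KASAN=y without KASAN_VMALLOC keeps VMAP_STACK off. */
	struct riscv_config before = { .kasan = true, .kasan_vmalloc = false };
	/* After "select KASAN_VMALLOC if KASAN": VMAP_STACK can be on by default. */
	struct riscv_config after = { .kasan = true, .kasan_vmalloc = true };

	printf("before: VMAP_STACK allowed? %d\n", vmap_stack_allowed(&before));
	printf("after:  VMAP_STACK allowed? %d\n", vmap_stack_allowed(&after));
	return 0;
}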