From patchwork Mon Oct 17 23:37:00 2022
X-Patchwork-Submitter: Giulio Benetti
X-Patchwork-Id: 3831
From: Giulio Benetti
To: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Cc: Russell King,
	Giulio Benetti,
	Anshuman Khandual,
	Andrew Morton,
	Kefeng Wang,
	Russell King,
	Arnd Bergmann,
	Will Deacon
Subject: [PATCH] ARM: mm: fix no-MMU ZERO_PAGE() implementation
Date: Tue, 18 Oct 2022 01:37:00 +0200
Message-Id: <20221017233700.84918-1-giulio.benetti@benettiengineering.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

Currently, on no-MMU SoCs (e.g. i.MXRT) ZERO_PAGE(vaddr) expands to:
```
virt_to_page(0)
```
which in turn expands to:
```
pfn_to_page(virt_to_pfn(0))
```
and virt_to_pfn(0) then expands to:
```
(((unsigned long)(0) - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET
```
where PAGE_OFFSET and PHYS_PFN_OFFSET both correspond to the DRAM offset
(0x80000000) and PAGE_SHIFT is 12. This way we obtain 16MB (0x01000000)
added to the base of DRAM (0x80000000).
When ZERO_PAGE(0) is then used, for example in bio_add_page(), the page
gets an address that is out of DRAM bounds.

So instead of using the fake virtual page 0, let's allocate a dedicated
zero_page during paging_init() and assign it to the global
'struct page *empty_zero_page', the same way mmu.c does. Then let's move
the ZERO_PAGE() definition to the top of pgtable.h so it is shared
between mmu.c and nommu.c.

Signed-off-by: Giulio Benetti
---
 arch/arm/include/asm/pgtable-nommu.h |  6 ------
 arch/arm/include/asm/pgtable.h       | 16 +++++++++-------
 arch/arm/mm/nommu.c                  | 19 +++++++++++++++++++
 3 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/pgtable-nommu.h b/arch/arm/include/asm/pgtable-nommu.h
index d16aba48fa0a..090011394477 100644
--- a/arch/arm/include/asm/pgtable-nommu.h
+++ b/arch/arm/include/asm/pgtable-nommu.h
@@ -44,12 +44,6 @@
 
 typedef pte_t *pte_addr_t;
 
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-#define ZERO_PAGE(vaddr)	(virt_to_page(0))
-
 /*
  * Mark the prot value as uncacheable and unbufferable.
  */
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 78a532068fec..ef48a55e9af8 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -10,6 +10,15 @@
 #include <linux/const.h>
 #include <asm/proc-fns.h>
 
+#ifndef __ASSEMBLY__
+/*
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern struct page *empty_zero_page;
+#define ZERO_PAGE(vaddr)	(empty_zero_page)
+#endif
+
 #ifndef CONFIG_MMU
 
 #include <asm-generic/pgtable-nopud.h>
@@ -139,13 +148,6 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
  */
 
 #ifndef __ASSEMBLY__
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern struct page *empty_zero_page;
-#define ZERO_PAGE(vaddr)	(empty_zero_page)
-
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index c42debaded95..c1494a4dee25 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -26,6 +26,13 @@
 
 unsigned long vectors_base;
 
+/*
+ * empty_zero_page is a special page that is used for
+ * zero-initialized data and COW.
+ */
+struct page *empty_zero_page;
+EXPORT_SYMBOL(empty_zero_page);
+
 #ifdef CONFIG_ARM_MPU
 struct mpu_rgn_info mpu_rgn_info;
 #endif
@@ -148,9 +155,21 @@ void __init adjust_lowmem_bounds(void)
  */
 void __init paging_init(const struct machine_desc *mdesc)
 {
+	void *zero_page;
+
 	early_trap_init((void *)vectors_base);
 	mpu_setup();
+
+	/* allocate the zero page. */
+	zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+	if (!zero_page)
+		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+		      __func__, PAGE_SIZE, PAGE_SIZE);
+
 	bootmem_init();
+
+	empty_zero_page = virt_to_page(zero_page);
+	flush_dcache_page(empty_zero_page);
 }
 
 /*