Message ID | 20230529144022.42927-1-wangkefeng.wang@huawei.com |
---|---
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org>
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>, Baoquan He <bhe@redhat.com>
Subject: [PATCH -next] mm: page_alloc: simplify has_managed_dma()
Date: Mon, 29 May 2023 22:40:22 +0800
Message-ID: <20230529144022.42927-1-wangkefeng.wang@huawei.com> |
Series |
[-next] mm: page_alloc: simplify has_managed_dma()
Commit Message
Kefeng Wang
May 29, 2023, 2:40 p.m. UTC
ZONE_DMA should exist only on node 0, so checking NODE_DATA(0)
is enough; simplify has_managed_dma() and make it inline.
Cc: Baoquan He <bhe@redhat.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
include/linux/mmzone.h | 21 +++++++++++----------
mm/page_alloc.c | 15 ---------------
2 files changed, 11 insertions(+), 25 deletions(-)
Comments
On Mon, May 29, 2023 at 10:40:22PM +0800, Kefeng Wang wrote:
> The ZONE_DMA should only exists on Node 0, only check NODE_DATA(0)
> is enough, so simplify has_managed_dma() and make it inline.

That's true on x86, but is it true on all architectures?
On 2023/5/29 22:26, Matthew Wilcox wrote:
> On Mon, May 29, 2023 at 10:40:22PM +0800, Kefeng Wang wrote:
>> The ZONE_DMA should only exists on Node 0, only check NODE_DATA(0)
>> is enough, so simplify has_managed_dma() and make it inline.
>
> That's true on x86, but is it true on all architectures?

There is no document about numa node info for the DMA_ZONE, + Mike

I used 'git grep -w ZONE_DMA arch/'

1) the following archs without NUMA support, so it's true for them,

arch/alpha/mm/init.c: max_zone_pfn[ZONE_DMA] = dma_pfn;
arch/arm/mm/init.c: max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low);
arch/m68k/mm/init.c: max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
arch/m68k/mm/mcfmmu.c: max_zone_pfn[ZONE_DMA] = PFN_DOWN(_ramend);
arch/m68k/mm/motorola.c: max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM();
arch/m68k/mm/sun3mmu.c: max_zone_pfn[ZONE_DMA] = ((unsigned long)high_memory) >> PAGE_SHIFT;
arch/microblaze/mm/init.c: zones_size[ZONE_DMA] = max_low_pfn;
arch/microblaze/mm/init.c: zones_size[ZONE_DMA] = max_pfn;

2) Simple check following archs, it seems that it is yes to them too.

arch/mips/mm/init.c: max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
arch/powerpc/mm/mem.c: max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
arch/s390/mm/init.c: max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
arch/sparc/mm/srmmu.c: max_zone_pfn[ZONE_DMA] = max_low_pfn;
arch/x86/mm/init.c: max_zone_pfns[ZONE_DMA] = min(MAX_DMA_PFN, max_low_pfn);
arch/arm64/mm/init.c: max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
arch/loongarch/mm/init.c: max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
On 05/30/23 at 10:10am, Kefeng Wang wrote:
> On 2023/5/29 22:26, Matthew Wilcox wrote:
>> On Mon, May 29, 2023 at 10:40:22PM +0800, Kefeng Wang wrote:
>>> The ZONE_DMA should only exists on Node 0, only check NODE_DATA(0)
>>> is enough, so simplify has_managed_dma() and make it inline.
>>
>> That's true on x86, but is it true on all architectures?
>
> There is no document about numa node info for the DMA_ZONE, + Mike
>
> I used 'git grep -w ZONE_DMA arch/'

willy is right. max_zone_pfn can only limit the range of zone, but
can't decide which zone is put on which node. The memory layout is
decided by firmware. I searched commit log to get below commit which
can give a good example.

commit c1d0da83358a2316d9be7f229f26126dbaa07468
Author: Laurent Dufour <ldufour@linux.ibm.com>
Date:   Fri Sep 25 21:19:28 2020 -0700

    mm: replace memmap_context by meminit_context

    Patch series "mm: fix memory to node bad links in sysfs", v3.

    Sometimes, firmware may expose interleaved memory layout like this:

     Early memory node ranges
       node   1: [mem 0x0000000000000000-0x000000011fffffff]
       node   2: [mem 0x0000000120000000-0x000000014fffffff]
       node   1: [mem 0x0000000150000000-0x00000001ffffffff]
       node   0: [mem 0x0000000200000000-0x000000048fffffff]
       node   2: [mem 0x0000000490000000-0x00000007ffffffff]

> 1) the following archs without NUMA support, so it's true for them,
>
> arch/alpha/mm/init.c: max_zone_pfn[ZONE_DMA] = dma_pfn;
> arch/arm/mm/init.c: max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low);
> arch/m68k/mm/init.c: max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
> arch/m68k/mm/mcfmmu.c: max_zone_pfn[ZONE_DMA] = PFN_DOWN(_ramend);
> arch/m68k/mm/motorola.c: max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM();
> arch/m68k/mm/sun3mmu.c: max_zone_pfn[ZONE_DMA] = ((unsigned long)high_memory) >> PAGE_SHIFT;
> arch/microblaze/mm/init.c: zones_size[ZONE_DMA] = max_low_pfn;
> arch/microblaze/mm/init.c: zones_size[ZONE_DMA] = max_pfn;
>
> 2) Simple check following archs, it seems that it is yes to them too.
>
> arch/mips/mm/init.c: max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
> arch/powerpc/mm/mem.c: max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
> arch/s390/mm/init.c: max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
> arch/sparc/mm/srmmu.c: max_zone_pfn[ZONE_DMA] = max_low_pfn;
> arch/x86/mm/init.c: max_zone_pfns[ZONE_DMA] = min(MAX_DMA_PFN, max_low_pfn);
> arch/arm64/mm/init.c: max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
> arch/loongarch/mm/init.c: max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
On 2023/5/30 12:18, Baoquan He wrote:
> On 05/30/23 at 10:10am, Kefeng Wang wrote:
>> On 2023/5/29 22:26, Matthew Wilcox wrote:
>>> On Mon, May 29, 2023 at 10:40:22PM +0800, Kefeng Wang wrote:
>>>> The ZONE_DMA should only exists on Node 0, only check NODE_DATA(0)
>>>> is enough, so simplify has_managed_dma() and make it inline.
>>>
>>> That's true on x86, but is it true on all architectures?
>>
>> There is no document about numa node info for the DMA_ZONE, + Mike
>>
>> I used 'git grep -w ZONE_DMA arch/'
>
> willy is right. max_zone_pfn can only limit the range of zone, but
> can't decide which zone is put on which node. The memory layout is
> decided by firmware. I searched commit log to get below commit which
> can give a good example.
>
> commit c1d0da83358a2316d9be7f229f26126dbaa07468
> Author: Laurent Dufour <ldufour@linux.ibm.com>
> Date:   Fri Sep 25 21:19:28 2020 -0700
>
>     mm: replace memmap_context by meminit_context
>
>     Patch series "mm: fix memory to node bad links in sysfs", v3.
>
>     Sometimes, firmware may expose interleaved memory layout like this:
>
>      Early memory node ranges
>        node   1: [mem 0x0000000000000000-0x000000011fffffff]
>        node   2: [mem 0x0000000120000000-0x000000014fffffff]
>        node   1: [mem 0x0000000150000000-0x00000001ffffffff]
>        node   0: [mem 0x0000000200000000-0x000000048fffffff]
>        node   2: [mem 0x0000000490000000-0x00000007ffffffff]

Oh, it looks strange, but it does occur if firmware reports memory this way.
Thanks Willy and Baoquan, please ignore the patch.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5a7ada0413da..48e9fd8eccb4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1503,16 +1503,6 @@ static inline int is_highmem(struct zone *zone)
 	return is_highmem_idx(zone_idx(zone));
 }
 
-#ifdef CONFIG_ZONE_DMA
-bool has_managed_dma(void);
-#else
-static inline bool has_managed_dma(void)
-{
-	return false;
-}
-#endif
-
-
 #ifndef CONFIG_NUMA
 
 extern struct pglist_data contig_page_data;
@@ -1527,6 +1517,17 @@ static inline struct pglist_data *NODE_DATA(int nid)
 
 #endif /* !CONFIG_NUMA */
 
+static inline bool has_managed_dma(void)
+{
+#ifdef CONFIG_ZONE_DMA
+	struct zone *zone = NODE_DATA(0)->node_zones + ZONE_DMA;
+
+	if (managed_zone(zone))
+		return true;
+#endif
+	return false;
+}
+
 extern struct pglist_data *first_online_pgdat(void);
 extern struct pglist_data *next_online_pgdat(struct pglist_data *pgdat);
 extern struct zone *next_zone(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e671c747892f..e847b39939b8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6613,18 +6613,3 @@ bool put_page_back_buddy(struct page *page)
 	return ret;
 }
 #endif
-
-#ifdef CONFIG_ZONE_DMA
-bool has_managed_dma(void)
-{
-	struct pglist_data *pgdat;
-
-	for_each_online_pgdat(pgdat) {
-		struct zone *zone = &pgdat->node_zones[ZONE_DMA];
-
-		if (managed_zone(zone))
-			return true;
-	}
-	return false;
-}
-#endif /* CONFIG_ZONE_DMA */