Message ID | 20231202134224.4029-1-jszhang@kernel.org |
---|---|
State | New |
Headers |
From: Jisheng Zhang <jszhang@kernel.org>
To: Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2] riscv: mm: still create swiotlb buffer for kmalloc() bouncing if required
Date: Sat, 2 Dec 2023 21:42:24 +0800
Message-Id: <20231202134224.4029-1-jszhang@kernel.org> |
Series | [v2] riscv: mm: still create swiotlb buffer for kmalloc() bouncing if required |
Commit Message
Jisheng Zhang
Dec. 2, 2023, 1:42 p.m. UTC
After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC
for !dma_coherent"), for non-coherent platforms with less than 4GB
memory, we rely on users to pass the "swiotlb=mmnn,force" kernel parameter
to enable DMA bouncing for unaligned kmalloc() buffers. Now let's go
further: If no bouncing is needed for ZONE_DMA, let the kernel automatically
allocate a 1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing on
non-coherent platforms, so that there is no longer any need to pass
"swiotlb=mmnn,force".
The math of "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing"
is taken from arm64. Users can still force a smaller swiotlb buffer by
passing "swiotlb=mmnn".
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
---
since v2:
- fix build error if CONFIG_RISCV_DMA_NONCOHERENT=n
arch/riscv/include/asm/cache.h | 2 +-
arch/riscv/mm/init.c | 16 +++++++++++++++-
2 files changed, 16 insertions(+), 2 deletions(-)
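For a quick sense of the sizing heuristic in the commit message — 1MB of swiotlb per 1GB of RAM, capped at the default swiotlb size — here is a minimal standalone sketch. The 64MB default and the helper name below are assumptions made for this example, not values taken from the patch.

```c
/*
 * Illustrative sketch (not kernel code) of the "1MB per 1GB" sizing rule:
 * memory size / 1024, rounded up, then capped at the default swiotlb size.
 * The 64MB default here is an assumption for the example.
 */
#include <stdio.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d)     (((n) + (d) - 1) / (d))
#define DEFAULT_SWIOTLB_BYTES  (64ULL << 20)    /* assumed default */

static uint64_t kmalloc_bounce_bytes(uint64_t ram_bytes)
{
	uint64_t size = DIV_ROUND_UP(ram_bytes, 1024);   /* 1/1024 of RAM */

	return size < DEFAULT_SWIOTLB_BYTES ? size : DEFAULT_SWIOTLB_BYTES;
}

int main(void)
{
	/* 1GB of RAM -> 1MB bounce buffer, 2GB -> 2MB, and so on. */
	printf("%llu bytes\n", (unsigned long long)kmalloc_bounce_bytes(1ULL << 30));
	printf("%llu bytes\n", (unsigned long long)kmalloc_bounce_bytes(2ULL << 30));
	return 0;
}
```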
Comments
On Sat, Dec 02, 2023 at 09:42:24PM +0800, Jisheng Zhang wrote:
> After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC
> for !dma_coherent"), for non-coherent platforms with less than 4GB
> memory, we rely on users to pass "swiotlb=mmnn,force" kernel parameters
> to enable DMA bouncing for unaligned kmalloc() buffers. Now let's go
> further: If no bouncing needed for ZONE_DMA, let kernel automatically
> allocate 1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing on
> non-coherent platforms, so that no need to pass "swiotlb=mmnn,force"
> any more.
>
> The math of "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing"
> is taken from arm64. Users can still force smaller swiotlb buffer by
> passing "swiotlb=mmnn".

This one seems to have been missed as well. Let me know if anything needs
to be done for merging.

Thanks in advance,

> Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
> ---
>
> since v2:
> - fix build error if CONFIG_RISCV_DMA_NONCOHERENT=n
>
> arch/riscv/include/asm/cache.h | 2 +-
> arch/riscv/mm/init.c | 16 +++++++++++++++-
> 2 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
> index 2174fe7bac9a..570e9d8acad1 100644
> --- a/arch/riscv/include/asm/cache.h
> +++ b/arch/riscv/include/asm/cache.h
> @@ -26,8 +26,8 @@
>
> #ifndef __ASSEMBLY__
>
> -#ifdef CONFIG_RISCV_DMA_NONCOHERENT
> extern int dma_cache_alignment;
> +#ifdef CONFIG_RISCV_DMA_NONCOHERENT
> #define dma_get_cache_alignment dma_get_cache_alignment
> static inline int dma_get_cache_alignment(void)
> {
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index 2e011cbddf3a..cbcb9918f721 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -162,11 +162,25 @@ static void print_vm_layout(void) { }
>
> void __init mem_init(void)
> {
> +	bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit);
> #ifdef CONFIG_FLATMEM
> 	BUG_ON(!mem_map);
> #endif /* CONFIG_FLATMEM */
>
> -	swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE);
> +	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb &&
> +	    dma_cache_alignment != 1) {
> +		/*
> +		 * If no bouncing needed for ZONE_DMA, allocate 1MB swiotlb
> +		 * buffer per 1GB of RAM for kmalloc() bouncing on
> +		 * non-coherent platforms.
> +		 */
> +		unsigned long size =
> +			DIV_ROUND_UP(memblock_phys_mem_size(), 1024);
> +		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
> +		swiotlb = true;
> +	}
> +
> +	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
> 	memblock_free_all();
>
> 	print_vm_layout();
> --
> 2.42.0
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
Hi Jisheng, On 02/12/2023 14:42, Jisheng Zhang wrote: > After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC > for !dma_coherent"), for non-coherent platforms with less than 4GB > memory, we rely on users to pass "swiotlb=mmnn,force" kernel parameters > to enable DMA bouncing for unaligned kmalloc() buffers. Now let's go > further: If no bouncing needed for ZONE_DMA, let kernel automatically > allocate 1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing on > non-coherent platforms, so that no need to pass "swiotlb=mmnn,force" > any more. IIUC, DMA_BOUNCE_UNALIGNED_KMALLOC is enabled for all non-coherent platforms, even those with less than 4GB of memory. But the DMA bouncing (which is necessary to enable kmalloc-8/16/32/96...) was not enabled unless the user specified "swiotlb=mmnn,force" on the kernel command line. But does that mean that if the user did not specify "swiotlb=mmnn,force", the kmalloc-8/16/32/96 were enabled anyway and the behaviour was wrong (by lack of DMA bouncing)? I'm trying to understand if that's a fix or an enhancement. Thanks, Alex > > The math of "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing" > is taken from arm64. Users can still force smaller swiotlb buffer by > passing "swiotlb=mmnn". > > Signed-off-by: Jisheng Zhang <jszhang@kernel.org> > --- > > since v2: > - fix build error if CONFIG_RISCV_DMA_NONCOHERENT=n > > arch/riscv/include/asm/cache.h | 2 +- > arch/riscv/mm/init.c | 16 +++++++++++++++- > 2 files changed, 16 insertions(+), 2 deletions(-) > > diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h > index 2174fe7bac9a..570e9d8acad1 100644 > --- a/arch/riscv/include/asm/cache.h > +++ b/arch/riscv/include/asm/cache.h > @@ -26,8 +26,8 @@ > > #ifndef __ASSEMBLY__ > > -#ifdef CONFIG_RISCV_DMA_NONCOHERENT > extern int dma_cache_alignment; > +#ifdef CONFIG_RISCV_DMA_NONCOHERENT > #define dma_get_cache_alignment dma_get_cache_alignment > static inline int dma_get_cache_alignment(void) > { > diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c > index 2e011cbddf3a..cbcb9918f721 100644 > --- a/arch/riscv/mm/init.c > +++ b/arch/riscv/mm/init.c > @@ -162,11 +162,25 @@ static void print_vm_layout(void) { } > > void __init mem_init(void) > { > + bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit); > #ifdef CONFIG_FLATMEM > BUG_ON(!mem_map); > #endif /* CONFIG_FLATMEM */ > > - swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE); > + if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb && > + dma_cache_alignment != 1) { > + /* > + * If no bouncing needed for ZONE_DMA, allocate 1MB swiotlb > + * buffer per 1GB of RAM for kmalloc() bouncing on > + * non-coherent platforms. > + */ > + unsigned long size = > + DIV_ROUND_UP(memblock_phys_mem_size(), 1024); > + swiotlb_adjust_size(min(swiotlb_size_or_default(), size)); > + swiotlb = true; > + } > + > + swiotlb_init(swiotlb, SWIOTLB_VERBOSE); > memblock_free_all(); > > print_vm_layout();
On Tue, Jan 16, 2024 at 09:23:47AM +0100, Alexandre Ghiti wrote: > Hi Jisheng, > > On 02/12/2023 14:42, Jisheng Zhang wrote: > > After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC > > for !dma_coherent"), for non-coherent platforms with less than 4GB > > memory, we rely on users to pass "swiotlb=mmnn,force" kernel parameters > > to enable DMA bouncing for unaligned kmalloc() buffers. Now let's go > > further: If no bouncing needed for ZONE_DMA, let kernel automatically > > allocate 1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing on > > non-coherent platforms, so that no need to pass "swiotlb=mmnn,force" > > any more. > > IIUC, DMA_BOUNCE_UNALIGNED_KMALLOC is enabled for all non-coherent > platforms, even those with less than 4GB of memory. But the DMA bouncing > (which is necessary to enable kmalloc-8/16/32/96...) was not enabled unless > the user specified "swiotlb=mmnn,force" on the kernel command line. But does > that mean that if the user did not specify "swiotlb=mmnn,force", the > kmalloc-8/16/32/96 were enabled anyway and the behaviour was wrong (by lack > of DMA bouncing)? Hi Alex, For coherent platforms, kmalloc-8/16/32/96 was enabled. For non-coherent platforms, if memory is more than 4GB, kmalloc-8/16/32/96 was enabled. For non-coherent platforms, if memory is less than 4GB, kmalloc-8/16/32/96 was not enabled. If users want kmalloc-8/16/32/96, we rely on users to pass "swiotlb=mmnn,force" This patch tries to remove the "swiotlb=mmnn,force" requirement for the last case. After this patch, kernel automatically uses "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing" by default. So this is an enhancement. Thanks > > I'm trying to understand if that's a fix or an enhancement. > > Thanks, > > Alex > > > > > > The math of "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing" > > is taken from arm64. Users can still force smaller swiotlb buffer by > > passing "swiotlb=mmnn". > > > > Signed-off-by: Jisheng Zhang <jszhang@kernel.org> > > --- > > > > since v2: > > - fix build error if CONFIG_RISCV_DMA_NONCOHERENT=n > > > > arch/riscv/include/asm/cache.h | 2 +- > > arch/riscv/mm/init.c | 16 +++++++++++++++- > > 2 files changed, 16 insertions(+), 2 deletions(-) > > > > diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h > > index 2174fe7bac9a..570e9d8acad1 100644 > > --- a/arch/riscv/include/asm/cache.h > > +++ b/arch/riscv/include/asm/cache.h > > @@ -26,8 +26,8 @@ > > #ifndef __ASSEMBLY__ > > -#ifdef CONFIG_RISCV_DMA_NONCOHERENT > > extern int dma_cache_alignment; > > +#ifdef CONFIG_RISCV_DMA_NONCOHERENT > > #define dma_get_cache_alignment dma_get_cache_alignment > > static inline int dma_get_cache_alignment(void) > > { > > diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c > > index 2e011cbddf3a..cbcb9918f721 100644 > > --- a/arch/riscv/mm/init.c > > +++ b/arch/riscv/mm/init.c > > @@ -162,11 +162,25 @@ static void print_vm_layout(void) { } > > void __init mem_init(void) > > { > > + bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit); > > #ifdef CONFIG_FLATMEM > > BUG_ON(!mem_map); > > #endif /* CONFIG_FLATMEM */ > > - swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE); > > + if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb && > > + dma_cache_alignment != 1) { > > + /* > > + * If no bouncing needed for ZONE_DMA, allocate 1MB swiotlb > > + * buffer per 1GB of RAM for kmalloc() bouncing on > > + * non-coherent platforms. 
> > + */ > > + unsigned long size = > > + DIV_ROUND_UP(memblock_phys_mem_size(), 1024); > > + swiotlb_adjust_size(min(swiotlb_size_or_default(), size)); > > + swiotlb = true; > > + } > > + > > + swiotlb_init(swiotlb, SWIOTLB_VERBOSE); > > memblock_free_all(); > > print_vm_layout();
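To restate the three cases Jisheng enumerates above, the sketch below summarizes when a swiotlb bounce buffer ends up being created after this patch. It is an illustration only: the function and constant are hypothetical, and in the kernel the non-coherent check is `dma_cache_alignment != 1` guarded by `CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC`, as the diff shows.

```c
/* Rough summary of the post-patch behaviour; not kernel code. */
#include <stdbool.h>
#include <stdint.h>

#define SZ_4G  (4ULL << 30)

/* Is a swiotlb bounce buffer created at boot? */
static bool swiotlb_created(bool dma_coherent, uint64_t ram_bytes)
{
	if (ram_bytes > SZ_4G)
		return true;    /* needed anyway for ZONE_DMA32 bouncing */
	if (!dma_coherent)
		return true;    /* new: small buffer sized for kmalloc() bouncing */
	return false;           /* coherent, <4GB: kmalloc-8/16/32/96 need no bouncing */
}
```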
On 16/01/2024 09:47, Jisheng Zhang wrote: > On Tue, Jan 16, 2024 at 09:23:47AM +0100, Alexandre Ghiti wrote: >> Hi Jisheng, >> >> On 02/12/2023 14:42, Jisheng Zhang wrote: >>> After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC >>> for !dma_coherent"), for non-coherent platforms with less than 4GB >>> memory, we rely on users to pass "swiotlb=mmnn,force" kernel parameters >>> to enable DMA bouncing for unaligned kmalloc() buffers. Now let's go >>> further: If no bouncing needed for ZONE_DMA, let kernel automatically >>> allocate 1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing on >>> non-coherent platforms, so that no need to pass "swiotlb=mmnn,force" >>> any more. >> IIUC, DMA_BOUNCE_UNALIGNED_KMALLOC is enabled for all non-coherent >> platforms, even those with less than 4GB of memory. But the DMA bouncing >> (which is necessary to enable kmalloc-8/16/32/96...) was not enabled unless >> the user specified "swiotlb=mmnn,force" on the kernel command line. But does >> that mean that if the user did not specify "swiotlb=mmnn,force", the >> kmalloc-8/16/32/96 were enabled anyway and the behaviour was wrong (by lack >> of DMA bouncing)? > Hi Alex, > > For coherent platforms, kmalloc-8/16/32/96 was enabled. > > For non-coherent platforms, if memory is more than 4GB, kmalloc-8/16/32/96 was enabled. > > For non-coherent platforms, if memory is less than 4GB, kmalloc-8/16/32/96 was not > enabled. If users want kmalloc-8/16/32/96, we rely on users to pass "swiotlb=mmnn,force" That's what I was unsure of :) > > This patch tries to remove the "swiotlb=mmnn,force" requirement for the > last case. After this patch, kernel automatically uses "1MB swiotlb buffer per > 1GB of RAM for kmalloc() bouncing" by default. > > So this is an enhancement. Great, so you can add: Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Thanks, Alex > > Thanks >> I'm trying to understand if that's a fix or an enhancement. >> >> Thanks, >> >> Alex >> >> >>> The math of "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing" >>> is taken from arm64. Users can still force smaller swiotlb buffer by >>> passing "swiotlb=mmnn". 
>>> >>> Signed-off-by: Jisheng Zhang <jszhang@kernel.org> >>> --- >>> >>> since v2: >>> - fix build error if CONFIG_RISCV_DMA_NONCOHERENT=n >>> >>> arch/riscv/include/asm/cache.h | 2 +- >>> arch/riscv/mm/init.c | 16 +++++++++++++++- >>> 2 files changed, 16 insertions(+), 2 deletions(-) >>> >>> diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h >>> index 2174fe7bac9a..570e9d8acad1 100644 >>> --- a/arch/riscv/include/asm/cache.h >>> +++ b/arch/riscv/include/asm/cache.h >>> @@ -26,8 +26,8 @@ >>> #ifndef __ASSEMBLY__ >>> -#ifdef CONFIG_RISCV_DMA_NONCOHERENT >>> extern int dma_cache_alignment; >>> +#ifdef CONFIG_RISCV_DMA_NONCOHERENT >>> #define dma_get_cache_alignment dma_get_cache_alignment >>> static inline int dma_get_cache_alignment(void) >>> { >>> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c >>> index 2e011cbddf3a..cbcb9918f721 100644 >>> --- a/arch/riscv/mm/init.c >>> +++ b/arch/riscv/mm/init.c >>> @@ -162,11 +162,25 @@ static void print_vm_layout(void) { } >>> void __init mem_init(void) >>> { >>> + bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit); >>> #ifdef CONFIG_FLATMEM >>> BUG_ON(!mem_map); >>> #endif /* CONFIG_FLATMEM */ >>> - swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE); >>> + if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb && >>> + dma_cache_alignment != 1) { >>> + /* >>> + * If no bouncing needed for ZONE_DMA, allocate 1MB swiotlb >>> + * buffer per 1GB of RAM for kmalloc() bouncing on >>> + * non-coherent platforms. >>> + */ >>> + unsigned long size = >>> + DIV_ROUND_UP(memblock_phys_mem_size(), 1024); >>> + swiotlb_adjust_size(min(swiotlb_size_or_default(), size)); >>> + swiotlb = true; >>> + } >>> + >>> + swiotlb_init(swiotlb, SWIOTLB_VERBOSE); >>> memblock_free_all(); >>> print_vm_layout();
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
index 2174fe7bac9a..570e9d8acad1 100644
--- a/arch/riscv/include/asm/cache.h
+++ b/arch/riscv/include/asm/cache.h
@@ -26,8 +26,8 @@
 
 #ifndef __ASSEMBLY__
 
-#ifdef CONFIG_RISCV_DMA_NONCOHERENT
 extern int dma_cache_alignment;
+#ifdef CONFIG_RISCV_DMA_NONCOHERENT
 #define dma_get_cache_alignment dma_get_cache_alignment
 static inline int dma_get_cache_alignment(void)
 {
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 2e011cbddf3a..cbcb9918f721 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -162,11 +162,25 @@ static void print_vm_layout(void) { }
 
 void __init mem_init(void)
 {
+	bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit);
 #ifdef CONFIG_FLATMEM
 	BUG_ON(!mem_map);
 #endif /* CONFIG_FLATMEM */
 
-	swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE);
+	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb &&
+	    dma_cache_alignment != 1) {
+		/*
+		 * If no bouncing needed for ZONE_DMA, allocate 1MB swiotlb
+		 * buffer per 1GB of RAM for kmalloc() bouncing on
+		 * non-coherent platforms.
+		 */
+		unsigned long size =
+			DIV_ROUND_UP(memblock_phys_mem_size(), 1024);
+		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
+		swiotlb = true;
+	}
+
+	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
 	memblock_free_all();
 
 	print_vm_layout();
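A side note on the cache.h hunk: moving `extern int dma_cache_alignment;` above the `#ifdef CONFIG_RISCV_DMA_NONCOHERENT` guard is presumably what the "fix build error if CONFIG_RISCV_DMA_NONCOHERENT=n" changelog entry refers to — the new `IS_ENABLED(...)` condition in mem_init() references the variable, so the declaration has to be visible even in configurations where the branch is compiled out. Below is a generic sketch of that idiom with made-up names; the kernel builds with optimization, so the constant-false branch and its reference are discarded before link time.

```c
/* Generic sketch of the IS_ENABLED()-style idiom; names are hypothetical. */
#include <stdbool.h>

/* Declared unconditionally so references always compile ... */
extern int some_tunable;

/* ... even though it is only defined when the feature is built in. */
#define FEATURE_ENABLED 0   /* stand-in for IS_ENABLED(CONFIG_SOME_FEATURE) */

static inline bool feature_setup_needed(void)
{
	/*
	 * With FEATURE_ENABLED fixed at 0 and optimization enabled, the
	 * compiler folds the expression to false and drops the load of
	 * some_tunable, so no definition of it is needed at link time.
	 */
	return FEATURE_ENABLED && some_tunable != 1;
}
```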