Message ID | 20231127030930.1074374-1-zhaoyang.huang@unisoc.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org> Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp2822124vqx; Sun, 26 Nov 2023 19:10:26 -0800 (PST) X-Google-Smtp-Source: AGHT+IHwHD/1qpJ0aSL/0bxRY9aPaGQ5ys8gsgTTU98oLxArgklJca3zohcM63LTyZiVASIe4tKt X-Received: by 2002:a05:6a00:22d3:b0:6cb:e635:f493 with SMTP id f19-20020a056a0022d300b006cbe635f493mr13560434pfj.9.1701054626520; Sun, 26 Nov 2023 19:10:26 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701054626; cv=none; d=google.com; s=arc-20160816; b=Z3/A8hdYyqhcnZNlHMpogPltEWq7aea7Rb5qFcY0pPJMRwEiJxIGHkw8DcSpyPID5u Ylr9+kFkwRVCXBeeDLfUE6ReG4gB66d/rOo2CU4QRv2FzGvK944HoCThjoeV1Av0Hf5x GvqnwS+8Cc+jNr7tEVsl72Gs59/bfDpDPB0hnXlczGvQvJaRUo+bQOk+JTjZuM/j+dT8 Ozxb0YpBkfriRfFf5h6wxCPC2WrjLTxnoHt0+dFdxTDS4wOTGkuhs91ZwnXORQsWT9uB goXfMTwPkB7NXBBFVPHYiULbv5HfNqB/E3HENYtwG0PcGAOOWEh889Aau46D5kFqFujq LlvQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :message-id:date:subject:to:from; bh=gslNZfHvbURZK+yh+d+fEE4aM7/rVgbzHav5dEsWqNE=; fh=CslJHvE4pJCtvM75IQVWRnCNiwO9JnTlLxYKIiIeY3Q=; b=n4HcETI57f6R27WQu1QuAPWrmiWVDQqRpVmkBvmKHGtc5pYjIpSp0vLCg3K4PwTJFk InZoKV58YSO2ULn6GKBrX+dParJoZU0mfYHrWrIgir5I3HV320GASImyxcTq02E7mgLp LNtyIw4MvdrGw8UTRTt0EO19Qrd+DcypJ36PQoO6TUlvoe0oC/aTIrs/U+nKulBl4r2G 1fYGjfpBqFzet4IIzSOTYS/fvIAW3kTTPMoW5vqMELCMzF5GBZIjImyndZqiP5r8X/Xn +hL/Hd4A/rPV2hTpXM+03oSxk/Pc/BOOx2ZAIcDJGnGnm+gw56+o6Lvp1sCHfr+uttJa z3wQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:5 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from groat.vger.email (groat.vger.email. 
[2620:137:e000::3:5]) by mx.google.com with ESMTPS id h22-20020a056a00219600b006cbb2cd545esi8984063pfi.5.2023.11.26.19.10.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 26 Nov 2023 19:10:26 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:5 as permitted sender) client-ip=2620:137:e000::3:5; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:5 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by groat.vger.email (Postfix) with ESMTP id 628008087DE3; Sun, 26 Nov 2023 19:10:19 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at groat.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229527AbjK0DKK (ORCPT <rfc822;toshivichauhan@gmail.com> + 99 others); Sun, 26 Nov 2023 22:10:10 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48190 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229379AbjK0DKJ (ORCPT <rfc822;linux-kernel@vger.kernel.org>); Sun, 26 Nov 2023 22:10:09 -0500 Received: from SHSQR01.spreadtrum.com (mx1.unisoc.com [222.66.158.135]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 88789C8 for <linux-kernel@vger.kernel.org>; Sun, 26 Nov 2023 19:10:14 -0800 (PST) Received: from dlp.unisoc.com ([10.29.3.86]) by SHSQR01.spreadtrum.com with ESMTP id 3AR39aK1072979; Mon, 27 Nov 2023 11:09:36 +0800 (+08) (envelope-from zhaoyang.huang@unisoc.com) Received: from SHDLP.spreadtrum.com (bjmbx01.spreadtrum.com [10.0.64.7]) by dlp.unisoc.com (SkyGuard) with ESMTPS id 4Sdr5d2Pt5z2K85d4; Mon, 27 Nov 2023 11:04:05 +0800 (CST) Received: from bj03382pcu01.spreadtrum.com (10.0.73.40) by BJMBX01.spreadtrum.com (10.0.64.7) with Microsoft SMTP Server (TLS) id 15.0.1497.23; Mon, 27 Nov 2023 11:09:33 +0800 From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com> To: Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, Robin Murphy <robin.murphy@arm.com>, <iommu@lists.linux.dev>, <linux-kernel@vger.kernel.org>, Zhaoyang Huang <huangzhaoyang@gmail.com>, <steve.kang@unisoc.com> Subject: [PATCH] kernel: dma: let dma use vmalloc area Date: Mon, 27 Nov 2023 11:09:30 +0800 Message-ID: <20231127030930.1074374-1-zhaoyang.huang@unisoc.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 Content-Transfer-Encoding: 7BIT Content-Type: text/plain; charset=US-ASCII X-Originating-IP: [10.0.73.40] X-ClientProxiedBy: SHCAS01.spreadtrum.com (10.0.1.201) To BJMBX01.spreadtrum.com (10.0.64.7) X-MAIL: SHSQR01.spreadtrum.com 3AR39aK1072979 X-Spam-Status: No, score=-0.8 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on groat.vger.email Precedence: bulk List-ID: <linux-kernel.vger.kernel.org> X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (groat.vger.email [0.0.0.0]); Sun, 26 Nov 2023 19:10:19 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783685056336385886 X-GMAIL-MSGID: 1783685056336385886 |
Series | kernel: dma: let dma use vmalloc area |
Commit Message
zhaoyang.huang
Nov. 27, 2023, 3:09 a.m. UTC
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

memremap within dma_init_coherent_memory will map the given phys_addr
into the vmalloc area if the PA is not found while iterating
iomem_resources, which conflicts with the rejection of vmalloc
addresses in dma_map_single_attrs. IMO, it is fine to let all valid
virtual addresses be valid for DMA, as the user will keep the
corresponding RAM safe for the transfer.

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 include/linux/dma-mapping.h | 12 +++++++-----
 kernel/dma/debug.c          |  4 ----
 2 files changed, 7 insertions(+), 9 deletions(-)
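For context, a minimal sketch of the usage that triggers the warning. The function example_xfer and the surrounding driver are hypothetical; as the replies below point out, passing a dma_alloc_coherent() buffer to dma_map_single() is itself not allowed, so this shows the problematic pattern rather than a recommended one.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/*
 * Hypothetical driver snippet reproducing the reported problem: on a
 * device with a reserved coherent memory region, dma_alloc_coherent()
 * can hand back a VA that memremap()/ioremap_wt() placed in the vmalloc
 * area; passing that VA to dma_map_single() then hits the vmalloc
 * rejection.
 */
static int example_xfer(struct device *dev, size_t size)
{
	dma_addr_t coherent_handle, map_handle;
	void *va;

	/* va may be a vmalloc-range address when it comes from the
	 * device's reserved coherent pool. */
	va = dma_alloc_coherent(dev, size, &coherent_handle, GFP_KERNEL);
	if (!va)
		return -ENOMEM;

	/* With the current check this warns and returns DMA_MAPPING_ERROR
	 * because is_vmalloc_addr(va) is true. */
	map_handle = dma_map_single(dev, va, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, map_handle)) {
		dma_free_coherent(dev, size, va, coherent_handle);
		return -EINVAL;
	}

	/* ... start the transfer ... */

	dma_unmap_single(dev, map_handle, size, DMA_TO_DEVICE);
	dma_free_coherent(dev, size, va, coherent_handle);
	return 0;
}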
Comments
This patch arose from a real problem where the driver failed to use
dma_map_single(dev, ptr). The ptr is a vmalloc VA which is mapped over the
reserved memory by dma_init_coherent_memory.

On Mon, Nov 27, 2023 at 11:09 AM zhaoyang.huang <zhaoyang.huang@unisoc.com> wrote:
>
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> memremap within dma_init_coherent_memory will map the given phys_addr
> into vmalloc area if the pa is not found during iterating iomem_resources,
> which conflict the rejection of vmalloc area in dma_map_single_attrs.
> IMO, it is find to let all valid virtual address be valid for DMA as the
> user will keep corresponding RAM safe for transfer.
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
>  include/linux/dma-mapping.h | 12 +++++++-----
>  kernel/dma/debug.c          |  4 ----
>  2 files changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index f0ccca16a0ac..7a7b87289d55 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -328,12 +328,14 @@ static inline void dma_free_noncoherent(struct device *dev, size_t size,
>  static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
>                 size_t size, enum dma_data_direction dir, unsigned long attrs)
>  {
> -       /* DMA must never operate on areas that might be remapped. */
> -       if (dev_WARN_ONCE(dev, is_vmalloc_addr(ptr),
> -                         "rejecting DMA map of vmalloc memory\n"))
> -               return DMA_MAPPING_ERROR;
> +       struct page *page;
> +
>         debug_dma_map_single(dev, ptr, size);
> -       return dma_map_page_attrs(dev, virt_to_page(ptr), offset_in_page(ptr),
> +       if (is_vmalloc_addr(ptr))
> +               page = vmalloc_to_page(ptr);
> +       else
> +               page = virt_to_page(ptr);
> +       return dma_map_page_attrs(dev, page, offset_in_page(ptr),
>                         size, dir, attrs);
>  }
>
> diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
> index 06366acd27b0..51e1fe9a70aa 100644
> --- a/kernel/dma/debug.c
> +++ b/kernel/dma/debug.c
> @@ -1198,10 +1198,6 @@ void debug_dma_map_single(struct device *dev, const void *addr,
>         if (!virt_addr_valid(addr))
>                 err_printk(dev, NULL, "device driver maps memory from invalid area [addr=%p] [len=%lu]\n",
>                            addr, len);
> -
> -       if (is_vmalloc_addr(addr))
> -               err_printk(dev, NULL, "device driver maps memory from vmalloc area [addr=%p] [len=%lu]\n",
> -                          addr, len);
>  }
>  EXPORT_SYMBOL(debug_dma_map_single);
>
> --
> 2.25.1
>
On Mon, Nov 27, 2023 at 11:09:30AM +0800, zhaoyang.huang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> memremap within dma_init_coherent_memory will map the given phys_addr
> into vmalloc area if the pa is not found during iterating iomem_resources,
> which conflict the rejection of vmalloc area in dma_map_single_attrs.

I can't parse this sentence.

> IMO, it is find to let all valid virtual address be valid for DMA as the
> user will keep corresponding RAM safe for transfer.

No, vmalloc addresses can't be passed to map_single. You need to pass
the page to dma_map_page, and explicitly manage cache consistency
using the invalidate_kernel_vmap_range and flush_kernel_vmap_range
helpers.
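A rough sketch of the approach described above, for a buffer that really is vmalloc()-backed rather than a coherent-pool VA. The helper names vmalloc_buf_map/vmalloc_buf_unmap are made up for illustration, and the buffer is assumed not to cross a page boundary.

#include <linux/dma-mapping.h>
#include <linux/highmem.h>
#include <linux/vmalloc.h>

/*
 * Look up the backing page with vmalloc_to_page(), map that page with
 * dma_map_page(), and manage the vmap alias explicitly with the
 * kernel_vmap_range helpers.  Single-page case only.
 */
static dma_addr_t vmalloc_buf_map(struct device *dev, void *vaddr,
				  size_t size, enum dma_data_direction dir)
{
	struct page *page = vmalloc_to_page(vaddr);

	/* Write back any dirty CPU lines under the vmap alias before the
	 * device reads the memory. */
	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
		flush_kernel_vmap_range(vaddr, size);

	return dma_map_page(dev, page, offset_in_page(vaddr), size, dir);
}

static void vmalloc_buf_unmap(struct device *dev, void *vaddr,
			      dma_addr_t dma, size_t size,
			      enum dma_data_direction dir)
{
	dma_unmap_page(dev, dma, size, dir);

	/* Discard stale lines under the vmap alias before the CPU reads
	 * data the device has written. */
	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
		invalidate_kernel_vmap_range(vaddr, size);
}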
On Mon, Nov 27, 2023 at 3:14 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Mon, Nov 27, 2023 at 11:09:30AM +0800, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> >
> > memremap within dma_init_coherent_memory will map the given phys_addr
> > into vmalloc area if the pa is not found during iterating iomem_resources,
> > which conflict the rejection of vmalloc area in dma_map_single_attrs.
>
> I can't parse this sentence.
Sorry for the confusion, please find the code below for more information.
    dma_init_coherent_memory
        memremap
            addr = ioremap_wt(offset, size);
What I mean is that addr is a vmalloc address, which is implicitly mapped
by the DMA framework without the driver being aware of it.
>
> > IMO, it is find to let all valid virtual address be valid for DMA as the
> > user will keep corresponding RAM safe for transfer.
>
> No, vmalloc addresses can't be passed to map_single. You need to pass
> the page to dma_map_page, and explicitly manage cache consistency
> using the invalidate_kernel_vmap_range and flush_kernel_vmap_range
> helpers.
Please correct me if I am wrong. According to my understanding, cache
consistency could be solved inside dma_map_page via either
dma_direct_map_page (swiotlb/arch_sync_dma_for_device) or ops->map_page.
The original thought behind rejecting vmalloc is that the PA is not safe,
as the mapping could go away at any time. What I am suggesting is to let
this kind of VA be enrolled.

static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
{
        /* DMA must never operate on areas that might be remapped. */
        if (dev_WARN_ONCE(dev, is_vmalloc_addr(ptr),
                          "rejecting DMA map of vmalloc memory\n"))
                return DMA_MAPPING_ERROR;
>
On Mon, Nov 27, 2023 at 04:56:45PM +0800, Zhaoyang Huang wrote:
> Sorry for the confusion, please find the code below for more information.
>     dma_init_coherent_memory
>         memremap
>             addr = ioremap_wt(offset, size);
> What I mean is that addr is a vmalloc address, which is implicitly mapped
> by the DMA framework without the driver being aware of it.

Yes. And it is only returned from dma_alloc_coherent, which should never
be passed to dma_map_<anything>.

> Please correct me if I am wrong. According to my understanding, cache
> consistency could be solved inside dma_map_page via either
> dma_direct_map_page (swiotlb/arch_sync_dma_for_device) or ops->map_page.
> The original thought behind rejecting vmalloc is that the PA is not safe,
> as the mapping could go away at any time. What I am suggesting is to let
> this kind of VA be enrolled.

But that only works for the direct mapping. It does not work for the
additional aliases created by vmap/ioremap/memremap. Now that only
matters if the cache is virtually indexed, which is rather unusual
these days.
On 2023-11-27 8:56 am, Zhaoyang Huang wrote:
> On Mon, Nov 27, 2023 at 3:14 PM Christoph Hellwig <hch@lst.de> wrote:
>>
>> On Mon, Nov 27, 2023 at 11:09:30AM +0800, zhaoyang.huang wrote:
>>> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>>>
>>> memremap within dma_init_coherent_memory will map the given phys_addr
>>> into vmalloc area if the pa is not found during iterating iomem_resources,
>>> which conflict the rejection of vmalloc area in dma_map_single_attrs.
>>
>> I can't parse this sentence.
> Sorry for the confusion, please find the code below for more information.
>     dma_init_coherent_memory
>         memremap
>             addr = ioremap_wt(offset, size);
> What I mean is that addr is a vmalloc address, which is implicitly mapped
> by the DMA framework without the driver being aware of it.
>>
>>> IMO, it is find to let all valid virtual address be valid for DMA as the
>>> user will keep corresponding RAM safe for transfer.
>>
>> No, vmalloc addresses can't be passed to map_single. You need to pass
>> the page to dma_map_page, and explicitly manage cache consistency
>> using the invalidate_kernel_vmap_range and flush_kernel_vmap_range
>> helpers.
> Please correct me if I am wrong. According to my understanding, cache
> consistency could be solved inside dma_map_page via either
> dma_direct_map_page (swiotlb/arch_sync_dma_for_device) or ops->map_page.
> The original thought behind rejecting vmalloc is that the PA is not safe,
> as the mapping could go away at any time. What I am suggesting is to let
> this kind of VA be enrolled.

No, the point is that dma_map_single() uses virt_to_page(), and
virt_to_page() is definitely not valid for vmalloc addresses. At worst it
may blow up in itself with an out-of-bounds dereference; at best it's
going to return a completely bogus page pointer which may then make
dma_map_page() fall over.

Thanks,
Robin.

>
> static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
>                 size_t size, enum dma_data_direction dir, unsigned long attrs)
> {
>         /* DMA must never operate on areas that might be remapped. */
>         if (dev_WARN_ONCE(dev, is_vmalloc_addr(ptr),
>                           "rejecting DMA map of vmalloc memory\n"))
>                 return DMA_MAPPING_ERROR;
>
>>
>
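To illustrate the point above, a small sketch of the difference between the two lookups. The helper name addr_to_page_checked is invented for illustration, and, per the replies above, obtaining the right struct page still does not address the vmap alias problem.

#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * virt_to_page() is plain arithmetic on linear-map (lowmem) addresses,
 * so handing it a vmalloc address produces a bogus struct page pointer,
 * while vmalloc_to_page() does a page-table walk and is the valid lookup
 * for vmalloc-range addresses.
 */
static struct page *addr_to_page_checked(const void *addr)
{
	if (is_vmalloc_addr(addr))
		return vmalloc_to_page(addr);	/* page-table walk */
	if (virt_addr_valid(addr))
		return virt_to_page(addr);	/* linear-map arithmetic */
	return NULL;				/* not a mappable kernel VA */
}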
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f0ccca16a0ac..7a7b87289d55 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -328,12 +328,14 @@ static inline void dma_free_noncoherent(struct device *dev, size_t size,
 static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-       /* DMA must never operate on areas that might be remapped. */
-       if (dev_WARN_ONCE(dev, is_vmalloc_addr(ptr),
-                         "rejecting DMA map of vmalloc memory\n"))
-               return DMA_MAPPING_ERROR;
+       struct page *page;
+
        debug_dma_map_single(dev, ptr, size);
-       return dma_map_page_attrs(dev, virt_to_page(ptr), offset_in_page(ptr),
+       if (is_vmalloc_addr(ptr))
+               page = vmalloc_to_page(ptr);
+       else
+               page = virt_to_page(ptr);
+       return dma_map_page_attrs(dev, page, offset_in_page(ptr),
                        size, dir, attrs);
 }
 
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 06366acd27b0..51e1fe9a70aa 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -1198,10 +1198,6 @@ void debug_dma_map_single(struct device *dev, const void *addr,
        if (!virt_addr_valid(addr))
                err_printk(dev, NULL, "device driver maps memory from invalid area [addr=%p] [len=%lu]\n",
                           addr, len);
-
-       if (is_vmalloc_addr(addr))
-               err_printk(dev, NULL, "device driver maps memory from vmalloc area [addr=%p] [len=%lu]\n",
-                          addr, len);
 }
 EXPORT_SYMBOL(debug_dma_map_single);