From patchwork Mon Mar 27 12:13:13 2023
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 75391
From: Arnd Bergmann
To: linux-kernel@vger.kernel.org
Cc: Arnd Bergmann, Vineet Gupta, Russell King, Neil Armstrong,
    Linus Walleij, Catalin Marinas, Will Deacon, Guo Ren, Brian Cain,
    Geert Uytterhoeven, Michal Simek, Thomas Bogendoerfer, Dinh Nguyen,
    Stafford Horne, Helge Deller, Michael Ellerman, Christophe Leroy,
    Paul Walmsley, Palmer Dabbelt, Rich Felker, John Paul Adrian Glaubitz,
Miller" , Max Filippov , Christoph Hellwig , Robin Murphy , Lad Prabhakar , Conor Dooley , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-oxnas@groups.io, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org Subject: [PATCH 17/21] ARM: dma-mapping: use arch_sync_dma_for_{device,cpu}() internally Date: Mon, 27 Mar 2023 14:13:13 +0200 Message-Id: <20230327121317.4081816-18-arnd@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230327121317.4081816-1-arnd@kernel.org> References: <20230327121317.4081816-1-arnd@kernel.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761524299978937099?= X-GMAIL-MSGID: =?utf-8?q?1761524299978937099?= From: Arnd Bergmann The arm specific iommu code in dma-mapping.c uses the page+offset based __dma_page_cpu_to_dev()/__dma_page_dev_to_cpu() helpers in place of the phys_addr_t based arch_sync_dma_for_device()/arch_sync_dma_for_cpu() wrappers around the. In order to be able to move the latter part set of functions into common code, change the iommu implementation to use them directly and remove the internal ones as a separate interface. As page+offset and phys_address are equivalent, but are used in different parts of the code here, this allows removing some of the conversion but adds them elsewhere. Signed-off-by: Arnd Bergmann Reviewed-by: Linus Walleij --- arch/arm/mm/dma-mapping.c | 93 ++++++++++++++------------------------- 1 file changed, 33 insertions(+), 60 deletions(-) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 8bc01071474a..ce4b74f34a58 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -622,16 +622,14 @@ static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr, kfree(buf); } -static void dma_cache_maint_page(struct page *page, unsigned long offset, +static void dma_cache_maint(phys_addr_t paddr, size_t size, enum dma_data_direction dir, void (*op)(const void *, size_t, int)) { - unsigned long pfn; + unsigned long pfn = PFN_DOWN(paddr); + unsigned long offset = paddr % PAGE_SIZE; size_t left = size; - pfn = page_to_pfn(page) + offset / PAGE_SIZE; - offset %= PAGE_SIZE; - /* * A single sg entry may refer to multiple physically contiguous * pages. But we still need to process highmem pages individually. @@ -641,8 +639,7 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset, do { size_t len = left; void *vaddr; - - page = pfn_to_page(pfn); + struct page *page = pfn_to_page(pfn); if (PageHighMem(page)) { if (len + offset > PAGE_SIZE) @@ -674,14 +671,11 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset, * Note: Drivers should NOT use this function directly. 
  * Use the driver DMA support - see dma-mapping.h (dma_sync_*)
  */
-static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
-	size_t size, enum dma_data_direction dir)
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+	enum dma_data_direction dir)
 {
-	phys_addr_t paddr;
+	dma_cache_maint(paddr, size, dir, dmac_map_area);
 
-	dma_cache_maint_page(page, off, size, dir, dmac_map_area);
-
-	paddr = page_to_phys(page) + off;
 	if (dir == DMA_FROM_DEVICE) {
 		outer_inv_range(paddr, paddr + size);
 	} else {
@@ -690,34 +684,30 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
 	/* FIXME: non-speculating: flush on bidirectional mappings? */
 }
 
-static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
-	size_t size, enum dma_data_direction dir)
+void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+	enum dma_data_direction dir)
 {
-	phys_addr_t paddr = page_to_phys(page) + off;
-
 	/* FIXME: non-speculating: not required */
 	/* in any case, don't bother invalidating if DMA to device */
 	if (dir != DMA_TO_DEVICE) {
 		outer_inv_range(paddr, paddr + size);
 
-		dma_cache_maint_page(page, off, size, dir, dmac_unmap_area);
+		dma_cache_maint(paddr, size, dir, dmac_unmap_area);
 	}
 
 	/*
 	 * Mark the D-cache clean for these pages to avoid extra flushing.
 	 */
 	if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) {
-		unsigned long pfn;
+		unsigned long pfn = PFN_UP(paddr);
+		unsigned long off = paddr & (PAGE_SIZE - 1);
 		size_t left = size;
 
-		pfn = page_to_pfn(page) + off / PAGE_SIZE;
-		off %= PAGE_SIZE;
-		if (off) {
-			pfn++;
+		if (off)
 			left -= PAGE_SIZE - off;
-		}
+
 		while (left >= PAGE_SIZE) {
-			page = pfn_to_page(pfn++);
+			struct page *page = pfn_to_page(pfn++);
 			set_bit(PG_dcache_clean, &page->flags);
 			left -= PAGE_SIZE;
 		}
@@ -1204,7 +1194,7 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 		unsigned int len = PAGE_ALIGN(s->offset + s->length);
 
 		if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-			__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+			arch_sync_dma_for_device(phys + s->offset, s->length, dir);
 
 		prot = __dma_info_to_prot(dir, attrs);
 
@@ -1306,8 +1296,7 @@ static void arm_iommu_unmap_sg(struct device *dev,
 		__iommu_remove_mapping(dev, sg_dma_address(s), sg_dma_len(s));
 
 		if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-			__dma_page_dev_to_cpu(sg_page(s), s->offset,
-					      s->length, dir);
+			arch_sync_dma_for_cpu(sg_phys(s), s->length, dir);
 	}
 }
 
@@ -1329,7 +1318,7 @@ static void arm_iommu_sync_sg_for_cpu(struct device *dev,
 		return;
 
 	for_each_sg(sg, s, nents, i)
-		__dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
+		arch_sync_dma_for_cpu(sg_phys(s), s->length, dir);
 
 }
 
@@ -1351,7 +1340,8 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sg, s, nents, i)
-		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+		arch_sync_dma_for_device(page_to_phys(sg_page(s)) + s->offset,
+					 s->length, dir);
 }
 
 /**
@@ -1373,7 +1363,8 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
 	int ret, prot, len = PAGE_ALIGN(size + offset);
 
 	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		__dma_page_cpu_to_dev(page, offset, size, dir);
+		arch_sync_dma_for_device(page_to_phys(page) + offset,
+					 size, dir);
 
 	dma_addr = __alloc_iova(mapping, len);
 	if (dma_addr == DMA_MAPPING_ERROR)
@@ -1406,7 +1397,7 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
+	phys_addr_t phys;
 	int offset = handle & ~PAGE_MASK;
 	int len = PAGE_ALIGN(size + offset);
 
@@ -1414,8 +1405,8 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 		return;
 
 	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-		__dma_page_dev_to_cpu(page, offset, size, dir);
+		phys = iommu_iova_to_phys(mapping->domain, handle);
+		arch_sync_dma_for_cpu(phys, size, dir);
 	}
 
 	iommu_unmap(mapping->domain, iova, len);
@@ -1483,30 +1474,26 @@ static void arm_iommu_sync_single_for_cpu(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
-	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
-	unsigned int offset = handle & ~PAGE_MASK;
+	phys_addr_t phys;
 
-	if (dev->dma_coherent || !iova)
+	if (dev->dma_coherent || !(handle & PAGE_MASK))
 		return;
 
-	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-	__dma_page_dev_to_cpu(page, offset, size, dir);
+	phys = iommu_iova_to_phys(mapping->domain, handle);
+	arch_sync_dma_for_cpu(phys, size, dir);
 }
 
 static void arm_iommu_sync_single_for_device(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
-	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
-	unsigned int offset = handle & ~PAGE_MASK;
+	phys_addr_t phys;
 
-	if (dev->dma_coherent || !iova)
+	if (dev->dma_coherent || !(handle & PAGE_MASK))
 		return;
 
-	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-	__dma_page_cpu_to_dev(page, offset, size, dir);
+	phys = iommu_iova_to_phys(mapping->domain, handle);
+	arch_sync_dma_for_device(phys, size, dir);
 }
 
 static const struct dma_map_ops iommu_ops = {
@@ -1789,20 +1776,6 @@ void arch_teardown_dma_ops(struct device *dev)
 	set_dma_ops(dev, NULL);
 }
 
-void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
-{
-	__dma_page_cpu_to_dev(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
-			      size, dir);
-}
-
-void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
-{
-	__dma_page_dev_to_cpu(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
-			      size, dir);
-}
-
 void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 		gfp_t gfp, unsigned long attrs)
 {
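
For readers following the conversion, here is a minimal editorial sketch (not
part of the patch; the helper name example_paddr_roundtrip is made up) of the
page+offset/phys_addr_t equivalence the commit message relies on, using the
same kernel macros that dma_cache_maint() uses above:

#include <linux/mm.h>	/* page_to_pfn(), page_to_phys() */
#include <linux/pfn.h>	/* PFN_DOWN() */
#include <linux/bug.h>	/* WARN_ON() */

/* Hypothetical helper, for illustration only: a (page, offset) pair and the
 * phys_addr_t built from it describe the same location, so the new
 * arch_sync_dma_for_*() entry points can recover the pfn/offset pair that
 * the old __dma_page_*() helpers were given.
 */
static void example_paddr_roundtrip(struct page *page, unsigned long offset)
{
	/* old interface: callers passed (page, offset) */
	phys_addr_t paddr = page_to_phys(page) + offset;

	/* new interface: dma_cache_maint() derives these from paddr */
	unsigned long pfn = PFN_DOWN(paddr);	/* paddr >> PAGE_SHIFT */
	unsigned long off = paddr % PAGE_SIZE;

	/* both views name the same page frame and offset within it */
	WARN_ON(pfn != page_to_pfn(page) + offset / PAGE_SIZE);
	WARN_ON(off != offset % PAGE_SIZE);
}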