From patchwork Sat Jan 28 08:32:51 2023
X-Patchwork-Submitter: Guorui Yu
X-Patchwork-Id: 49822
From: "GuoRui.Yu" <GuoRui.Yu@linux.alibaba.com>
To: linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
    konrad.wilk@oracle.com, linux-coco@lists.linux.dev
Cc: GuoRui.Yu@linux.alibaba.com, robin.murphy@arm.com
Subject: [PATCH 1/4] swiotlb: Split common code from swiotlb.{c,h}
Date: Sat, 28 Jan 2023 16:32:51 +0800
Message-Id: <20230128083254.86012-2-GuoRui.Yu@linux.alibaba.com>
X-Mailer: git-send-email 2.29.2.540.g3cf59784d4
In-Reply-To: <20230128083254.86012-1-GuoRui.Yu@linux.alibaba.com>
References: <20230128083254.86012-1-GuoRui.Yu@linux.alibaba.com>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

Split swiotlb_bounce, swiotlb_release_slots, and is_swiotlb_buffer from
swiotlb.{c,h} to common-swiotlb.c, and prepare for the new swiotlb
implementation.

Signed-off-by: GuoRui.Yu <GuoRui.Yu@linux.alibaba.com>
---
 include/linux/swiotlb.h     | 10 ++---
 kernel/dma/Makefile         |  2 +-
 kernel/dma/common-swiotlb.c | 74 +++++++++++++++++++++++++++++++++
 kernel/dma/swiotlb.c        | 82 ++++---------------------------------
 4 files changed, 88 insertions(+), 80 deletions(-)
 create mode 100644 kernel/dma/common-swiotlb.c

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 35bc4e281c21..c5e74d3f9cbf 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -58,6 +58,9 @@ void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir);
 dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
+void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
+		enum dma_data_direction dir);
+void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr);
 
 #ifdef CONFIG_SWIOTLB
@@ -105,12 +108,7 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
-{
-	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
-
-	return mem && paddr >= mem->start && paddr < mem->end;
-}
+bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr);
 
 static inline bool is_swiotlb_force_bounce(struct device *dev)
 {
diff --git a/kernel/dma/Makefile b/kernel/dma/Makefile
index 21926e46ef4f..fc0ea13bc089 100644
--- a/kernel/dma/Makefile
+++ b/kernel/dma/Makefile
@@ -6,7 +6,7 @@ obj-$(CONFIG_DMA_OPS) += dummy.o
 obj-$(CONFIG_DMA_CMA) += contiguous.o
 obj-$(CONFIG_DMA_DECLARE_COHERENT) += coherent.o
 obj-$(CONFIG_DMA_API_DEBUG) += debug.o
-obj-$(CONFIG_SWIOTLB) += swiotlb.o
+obj-$(CONFIG_SWIOTLB) += swiotlb.o common-swiotlb.o
 obj-$(CONFIG_DMA_COHERENT_POOL) += pool.o
 obj-$(CONFIG_MMU) += remap.o
 obj-$(CONFIG_DMA_MAP_BENCHMARK) += map_benchmark.o
diff --git a/kernel/dma/common-swiotlb.c b/kernel/dma/common-swiotlb.c
new file mode 100644
index 000000000000..d477d5f2a71b
--- /dev/null
+++ b/kernel/dma/common-swiotlb.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include
+#include
+
+#define CREATE_TRACE_POINTS
+#include
+
+/*
+ * Create a swiotlb mapping for the buffer at @paddr, and in case of DMAing
+ * to the device copy the data into it as well.
+ */
+dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+	phys_addr_t swiotlb_addr;
+	dma_addr_t dma_addr;
+
+	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size);
+
+	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, 0, dir,
+			attrs);
+	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
+		return DMA_MAPPING_ERROR;
+
+	/* Ensure that the address returned is DMA'ble */
+	dma_addr = phys_to_dma_unencrypted(dev, swiotlb_addr);
+	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
+		swiotlb_tbl_unmap_single(dev, swiotlb_addr, size, dir,
+			attrs | DMA_ATTR_SKIP_CPU_SYNC);
+		dev_WARN_ONCE(dev, 1,
+			"swiotlb addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
+			&dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
+		return DMA_MAPPING_ERROR;
+	}
+
+	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		arch_sync_dma_for_device(swiotlb_addr, size, dir);
+	return dma_addr;
+}
+
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+		size_t mapping_size, enum dma_data_direction dir,
+		unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_release_slots(dev, tlb_addr);
+}
+
+void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+		size_t size, enum dma_data_direction dir)
+{
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+	else
+		BUG_ON(dir != DMA_FROM_DEVICE);
+}
+
+void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
+		size_t size, enum dma_data_direction dir)
+{
+	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
+		swiotlb_bounce(dev, tlb_addr, size, DMA_FROM_DEVICE);
+	else
+		BUG_ON(dir != DMA_TO_DEVICE);
+}
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a34c38bbe28f..f3ff4de08653 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -48,9 +48,6 @@
 #include
 #endif
 
-#define CREATE_TRACE_POINTS
-#include
-
 #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
 
 /*
@@ -523,7 +520,7 @@ static unsigned int swiotlb_align_offset(struct device *dev, u64 addr)
 /*
  * Bounce: copy the swiotlb buffer from or back to the original dma location
  */
-static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
+void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 		enum dma_data_direction dir)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
@@ -793,7 +790,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
+void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
@@ -840,74 +837,6 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 	spin_unlock_irqrestore(&area->lock, flags);
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
-		size_t mapping_size, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
-	swiotlb_release_slots(dev, tlb_addr);
-}
-
-void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
-		size_t size, enum dma_data_direction dir)
-{
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
-	else
-		BUG_ON(dir != DMA_FROM_DEVICE);
-}
-
-void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
-		size_t size, enum dma_data_direction dir)
-{
-	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
-		swiotlb_bounce(dev, tlb_addr, size, DMA_FROM_DEVICE);
-	else
-		BUG_ON(dir != DMA_TO_DEVICE);
-}
-
-/*
- * Create a swiotlb mapping for the buffer at @paddr, and in case of DMAing
- * to the device copy the data into it as well.
- */
-dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir, unsigned long attrs)
-{
-	phys_addr_t swiotlb_addr;
-	dma_addr_t dma_addr;
-
-	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size);
-
-	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, 0, dir,
-			attrs);
-	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
-		return DMA_MAPPING_ERROR;
-
-	/* Ensure that the address returned is DMA'ble */
-	dma_addr = phys_to_dma_unencrypted(dev, swiotlb_addr);
-	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
-		swiotlb_tbl_unmap_single(dev, swiotlb_addr, size, dir,
-			attrs | DMA_ATTR_SKIP_CPU_SYNC);
-		dev_WARN_ONCE(dev, 1,
-			"swiotlb addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
-			&dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
-		return DMA_MAPPING_ERROR;
-	}
-
-	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		arch_sync_dma_for_device(swiotlb_addr, size, dir);
-	return dma_addr;
-}
-
 size_t swiotlb_max_mapping_size(struct device *dev)
 {
 	int min_align_mask = dma_get_min_align_mask(dev);
@@ -924,6 +853,13 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
 }
 
+bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+
+	return mem && paddr >= mem->start && paddr < mem->end;
+}
+
 bool is_swiotlb_active(struct device *dev)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
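For readers outside the DMA subsystem, the helpers being moved all implement one pattern: map copies the caller's buffer into a bounce slot the device can reach, and unmap copies the device's writes back out. Below is a minimal userspace sketch of that pattern; the names (demo_map, demo_unmap, bounce_slot) are hypothetical and are not kernel API, and real swiotlb additionally allocates slots from a pool and handles cache coherency.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* One-slot stand-in for the swiotlb bounce-buffer pool. */
static unsigned char bounce_slot[4096];

enum demo_dir { DEMO_TO_DEVICE, DEMO_FROM_DEVICE };

/*
 * "Map" a buffer: return the bounce address the device would target.
 * For device-bound data, bounce the payload into the slot first,
 * mirroring what swiotlb_bounce() does on the map side.
 */
static unsigned char *demo_map(const void *orig, size_t size, enum demo_dir d)
{
	if (size > sizeof(bounce_slot))
		return NULL;			/* mapping error */
	if (d == DEMO_TO_DEVICE)
		memcpy(bounce_slot, orig, size);	/* orig -> tlb */
	return bounce_slot;
}

/*
 * "Unmap": for data coming from the device, sync the bounce slot back
 * to the original buffer, as swiotlb_tbl_unmap_single() does before
 * releasing the slot.
 */
static void demo_unmap(void *orig, size_t size, enum demo_dir d)
{
	if (d == DEMO_FROM_DEVICE)
		memcpy(orig, bounce_slot, size);	/* tlb -> orig */
}
```

A device driver analogue would call demo_map before starting a transfer and demo_unmap when it completes; the kernel code in this patch does the same through swiotlb_map and swiotlb_tbl_unmap_single.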