From patchwork Mon Mar 27 12:12:56 2023
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 7248
From: Arnd Bergmann
To: linux-kernel@vger.kernel.org
Cc: Arnd Bergmann, Vineet Gupta, Russell King, Neil Armstrong,
    Linus Walleij, Catalin Marinas, Will Deacon, Guo Ren, Brian Cain,
    Geert Uytterhoeven, Michal Simek, Thomas Bogendoerfer, Dinh Nguyen,
    Stafford Horne, Helge Deller, Michael Ellerman, Christophe Leroy,
    Paul Walmsley, Palmer Dabbelt, Rich Felker,
    John Paul Adrian Glaubitz, "David S. Miller", Max Filippov,
    Christoph Hellwig, Robin Murphy, Lad Prabhakar, Conor Dooley,
    linux-snps-arc@lists.infradead.org,
    linux-arm-kernel@lists.infradead.org, linux-oxnas@groups.io,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
    linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-xtensa@linux-xtensa.org
Subject: [PATCH 00/21] dma-mapping: unify support for cache flushes
Date: Mon, 27 Mar 2023 14:12:56 +0200
Message-Id: <20230327121317.4081816-1-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2

From: Arnd Bergmann

After a long discussion about adding SoC specific semantics for when
to flush caches in drivers/soc/
drivers that we determined to be fundamentally flawed[1], I
volunteered to try to move that logic into architecture-independent
code and make all existing architectures do the same thing.

As we had determined earlier, the behavior is wildly different across
architectures, but most of the differences come down to either bugs
(when required flushes are missing) or extra flushes that are
harmless but might hurt performance.

I finally found the time to come up with an implementation of this,
which starts by replacing every outlier with one of the three common
options:

1. Architectures without speculative prefetching (hexagon, m68k,
   openrisc, sh, sparc, and certain armv4 and xtensa implementations)
   only flush their caches before a DMA, by cleaning write-back
   caches (if any) before a DMA to the device, and by invalidating
   the caches before a DMA from a device.

2. arc, microblaze, mips, nios2, sh and later xtensa now follow the
   normal 32-bit arm model and invalidate their writeback caches
   again after a DMA from the device, to remove stale cache lines
   that got prefetched during the DMA. arc, csky and mips used to
   invalidate buffers also before the bidirectional DMA, but this is
   now skipped whenever we know it gets invalidated again after the
   DMA.

3. parisc, powerpc and riscv already flushed buffers before a
   DMA_FROM_DEVICE, and these get moved to the arm64 behavior that
   does the writeback before and invalidate after both
   DMA_FROM_DEVICE and DMA_BIDIRECTIONAL, in order to avoid the
   problem of accidentally leaking stale data if the DMA does not
   actually happen[2].

The last patch in the series replaces the architecture specific code
with a shared version that implements all three based on architecture
specific parameters that are almost always determined at compile
time.

The difference between cases 1. and 2. is hardware specific, while
between 2. and 3.
we need to decide which semantics we want, but I explicitly avoid
this question in my series and leave it to be decided later.

Another difference that I do not address here is what cache
invalidation does for partial cache lines. On arm32, arm64 and
powerpc, a partial cache line always gets written back before
invalidation in order to ensure that data before or after the buffer
is not discarded. On all other architectures, the assumption is that
cache lines are never shared between a DMA buffer and data that is
accessed by the CPU. If we end up always writing back dirty cache
lines before a DMA (option 3 above), then this point becomes moot;
otherwise we should probably address this in a follow-up series to
document one behavior or the other and implement it consistently.

Please review!

      Arnd

[1] https://lore.kernel.org/all/20221212115505.36770-1-prabhakar.mahadev-lad.rj@bp.renesas.com/
[2] https://lore.kernel.org/all/20220606152150.GA31568@willie-the-truck/

Arnd Bergmann (21):
  openrisc: dma-mapping: flush bidirectional mappings
  xtensa: dma-mapping: use normal cache invalidation rules
  sparc32: flush caches in dma_sync_*for_device
  microblaze: dma-mapping: skip extra DMA flushes
  powerpc: dma-mapping: split out cache operation logic
  powerpc: dma-mapping: minimize for_cpu flushing
  powerpc: dma-mapping: always clean cache in _for_device() op
  riscv: dma-mapping: only invalidate after DMA, not flush
  riscv: dma-mapping: skip invalidation before bidirectional DMA
  csky: dma-mapping: skip invalidating before DMA from device
  mips: dma-mapping: skip invalidating before bidirectional DMA
  mips: dma-mapping: split out cache operation logic
  arc: dma-mapping: skip invalidating before bidirectional DMA
  parisc: dma-mapping: use regular flush/invalidate ops
  ARM: dma-mapping: always invalidate WT caches before DMA
  ARM: dma-mapping: bring back dmac_{clean,inv}_range
  ARM: dma-mapping: use arch_sync_dma_for_{device,cpu}() internally
  ARM: drop SMP support for ARM11MPCore
  ARM: dma-mapping: use generic form of arch_sync_dma_* helpers
  ARM: dma-mapping: split out arch_dma_mark_clean() helper
  dma-mapping: replace custom code with generic implementation

 arch/arc/mm/dma.c                          |  66 ++------
 arch/arm/Kconfig                           |   4 +
 arch/arm/include/asm/cacheflush.h          |  21 +++
 arch/arm/include/asm/glue-cache.h          |   4 +
 arch/arm/mach-oxnas/Kconfig                |   4 -
 arch/arm/mach-oxnas/Makefile               |   1 -
 arch/arm/mach-oxnas/headsmp.S              |  23 ---
 arch/arm/mach-oxnas/platsmp.c              |  96 -----------
 arch/arm/mach-versatile/platsmp-realview.c |   4 -
 arch/arm/mm/Kconfig                        |  19 ---
 arch/arm/mm/cache-fa.S                     |   4 +-
 arch/arm/mm/cache-nop.S                    |   6 +
 arch/arm/mm/cache-v4.S                     |  13 +-
 arch/arm/mm/cache-v4wb.S                   |   4 +-
 arch/arm/mm/cache-v4wt.S                   |  22 ++-
 arch/arm/mm/cache-v6.S                     |  35 +---
 arch/arm/mm/cache-v7.S                     |   6 +-
 arch/arm/mm/cache-v7m.S                    |   4 +-
 arch/arm/mm/dma-mapping-nommu.c            |  36 ++--
 arch/arm/mm/dma-mapping.c                  | 181 ++++++++++-----------
 arch/arm/mm/proc-arm1020.S                 |   4 +-
 arch/arm/mm/proc-arm1020e.S                |   4 +-
 arch/arm/mm/proc-arm1022.S                 |   4 +-
 arch/arm/mm/proc-arm1026.S                 |   4 +-
 arch/arm/mm/proc-arm920.S                  |   4 +-
 arch/arm/mm/proc-arm922.S                  |   4 +-
 arch/arm/mm/proc-arm925.S                  |   4 +-
 arch/arm/mm/proc-arm926.S                  |   4 +-
 arch/arm/mm/proc-arm940.S                  |   4 +-
 arch/arm/mm/proc-arm946.S                  |   4 +-
 arch/arm/mm/proc-feroceon.S                |   8 +-
 arch/arm/mm/proc-macros.S                  |   2 +
 arch/arm/mm/proc-mohawk.S                  |   4 +-
 arch/arm/mm/proc-xsc3.S                    |   4 +-
 arch/arm/mm/proc-xscale.S                  |   6 +-
 arch/arm64/mm/dma-mapping.c                |  28 ++--
 arch/csky/mm/dma-mapping.c                 |  46 +++---
 arch/hexagon/kernel/dma.c                  |  44 ++---
 arch/m68k/kernel/dma.c                     |  43 +++--
 arch/microblaze/kernel/dma.c               |  38 ++---
 arch/mips/mm/dma-noncoherent.c             |  75 +++------
 arch/nios2/mm/dma-mapping.c                |  57 +++----
 arch/openrisc/kernel/dma.c                 |  62 ++++---
 arch/parisc/include/asm/cacheflush.h       |   6 +-
 arch/parisc/kernel/pci-dma.c               |  33 +++-
 arch/powerpc/mm/dma-noncoherent.c          |  76 +++++----
 arch/riscv/mm/dma-noncoherent.c            |  51 +++---
 arch/sh/kernel/dma-coherent.c              |  43 +++--
 arch/sparc/Kconfig                         |   2 +-
 arch/sparc/kernel/ioport.c                 |  38 +++--
 arch/xtensa/Kconfig                        |   1 -
 arch/xtensa/include/asm/cacheflush.h       |   6 +-
 arch/xtensa/kernel/pci-dma.c               |  47 +++---
 include/linux/dma-sync.h                   | 107 ++++++++++++
 54 files changed, 721 insertions(+), 699 deletions(-)
 delete mode 100644 arch/arm/mach-oxnas/headsmp.S
 delete mode 100644 arch/arm/mach-oxnas/platsmp.c
 create mode 100644 include/linux/dma-sync.h