From patchwork Mon Dec  4 14:03:49 2023
X-Patchwork-Submitter: Paul Cercueil
X-Patchwork-Id: 173353
From: Paul Cercueil
To: Vinod Koul
Cc: Lars-Peter Clausen, Nuno Sá, Michael Hennerich,
 dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org, Paul Cercueil
Subject: [PATCH 1/4] dmaengine: axi-dmac: Small code cleanup
Date: Mon, 4 Dec 2023 15:03:49 +0100
Message-ID: <20231204140352.30420-2-paul@crapouillou.net>
In-Reply-To: <20231204140352.30420-1-paul@crapouillou.net>
References: <20231204140352.30420-1-paul@crapouillou.net>

Use a for() loop instead of a while() loop in axi_dmac_fill_linear_sg().
Signed-off-by: Paul Cercueil
---
 drivers/dma/dma-axi-dmac.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index 2457a420c13d..760940b21eab 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -508,16 +508,13 @@ static struct axi_dmac_sg *axi_dmac_fill_linear_sg(struct axi_dmac_chan *chan,
 	segment_size = ((segment_size - 1) | chan->length_align_mask) + 1;
 
 	for (i = 0; i < num_periods; i++) {
-		len = period_len;
-
-		while (len > segment_size) {
+		for (len = period_len; len > segment_size; sg++) {
 			if (direction == DMA_DEV_TO_MEM)
 				sg->dest_addr = addr;
 			else
 				sg->src_addr = addr;
 			sg->x_len = segment_size;
 			sg->y_len = 1;
-			sg++;
 			addr += segment_size;
 			len -= segment_size;
 		}
From patchwork Mon Dec  4 14:03:50 2023
X-Patchwork-Submitter: Paul Cercueil
X-Patchwork-Id: 173354
From: Paul Cercueil
To: Vinod Koul
Cc: Lars-Peter Clausen, Nuno Sá, Michael Hennerich,
 dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org, Paul Cercueil
Subject: [PATCH 2/4] dmaengine: axi-dmac: Allocate hardware descriptors
Date: Mon, 4 Dec 2023 15:03:50 +0100
Message-ID: <20231204140352.30420-3-paul@crapouillou.net>
In-Reply-To: <20231204140352.30420-1-paul@crapouillou.net>
References: <20231204140352.30420-1-paul@crapouillou.net>

Change where and how the DMA transfer metadata is stored, to prepare for
the upcoming introduction of scatter-gather support.

Allocate hardware descriptors in the format that the HDL core expects
when the scatter-gather feature is enabled, and use their fields to store
the data that was previously kept in the axi_dmac_sg structure.
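For illustration, here is a minimal sketch of how one linear DEV_TO_MEM
segment could be written into the new hardware descriptor;
axi_dmac_fill_hw_seg() is a hypothetical helper (not part of this patch)
that only demonstrates the length-minus-one convention described in the
note below:

	/* Hypothetical helper, for illustration only: fill one DEV_TO_MEM
	 * segment into a hardware descriptor (struct axi_dmac_hw_desc). */
	static void axi_dmac_fill_hw_seg(struct axi_dmac_hw_desc *hw,
					 dma_addr_t addr, unsigned int len)
	{
		hw->dest_addr = addr;	/* memory is the destination */
		hw->x_len = len - 1;	/* hardware expects length minus one */
		hw->y_len = 0;		/* one row, also stored minus one */
		hw->flags = 0;
	}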
Note that the 'x_len' and 'y_len' fields now contain the transfer length
minus one, since that's what the hardware will expect in these fields.

Signed-off-by: Paul Cercueil
---
 drivers/dma/dma-axi-dmac.c | 134 ++++++++++++++++++++++++-------------
 1 file changed, 88 insertions(+), 46 deletions(-)

diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index 760940b21eab..185230a769b9 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -97,20 +97,31 @@
 /* The maximum ID allocated by the hardware is 31 */
 #define AXI_DMAC_SG_UNUSED 32U
 
+struct axi_dmac_hw_desc {
+	u32 flags;
+	u32 id;
+	u64 dest_addr;
+	u64 src_addr;
+	u64 __unused;
+	u32 y_len;
+	u32 x_len;
+	u32 src_stride;
+	u32 dst_stride;
+	u64 __pad[2];
+};
+
 struct axi_dmac_sg {
-	dma_addr_t src_addr;
-	dma_addr_t dest_addr;
-	unsigned int x_len;
-	unsigned int y_len;
-	unsigned int dest_stride;
-	unsigned int src_stride;
-	unsigned int id;
 	unsigned int partial_len;
 	bool schedule_when_free;
+
+	struct axi_dmac_hw_desc *hw;
+	dma_addr_t hw_phys;
 };
 
 struct axi_dmac_desc {
 	struct virt_dma_desc vdesc;
+	struct axi_dmac_chan *chan;
+
 	bool cyclic;
 	bool have_partial_xfer;
@@ -229,7 +240,7 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	sg = &desc->sg[desc->num_submitted];
 
 	/* Already queued in cyclic mode. Wait for it to finish */
-	if (sg->id != AXI_DMAC_SG_UNUSED) {
+	if (sg->hw->id != AXI_DMAC_SG_UNUSED) {
 		sg->schedule_when_free = true;
 		return;
 	}
@@ -246,16 +257,16 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 		chan->next_desc = desc;
 	}
 
-	sg->id = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_ID);
+	sg->hw->id = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_ID);
 
 	if (axi_dmac_dest_is_mem(chan)) {
-		axi_dmac_write(dmac, AXI_DMAC_REG_DEST_ADDRESS, sg->dest_addr);
-		axi_dmac_write(dmac, AXI_DMAC_REG_DEST_STRIDE, sg->dest_stride);
+		axi_dmac_write(dmac, AXI_DMAC_REG_DEST_ADDRESS, sg->hw->dest_addr);
+		axi_dmac_write(dmac, AXI_DMAC_REG_DEST_STRIDE, sg->hw->dst_stride);
 	}
 
 	if (axi_dmac_src_is_mem(chan)) {
-		axi_dmac_write(dmac, AXI_DMAC_REG_SRC_ADDRESS, sg->src_addr);
-		axi_dmac_write(dmac, AXI_DMAC_REG_SRC_STRIDE, sg->src_stride);
+		axi_dmac_write(dmac, AXI_DMAC_REG_SRC_ADDRESS, sg->hw->src_addr);
+		axi_dmac_write(dmac, AXI_DMAC_REG_SRC_STRIDE, sg->hw->src_stride);
 	}
 
 	/*
@@ -270,8 +281,8 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	if (chan->hw_partial_xfer)
 		flags |= AXI_DMAC_FLAG_PARTIAL_REPORT;
 
-	axi_dmac_write(dmac, AXI_DMAC_REG_X_LENGTH, sg->x_len - 1);
-	axi_dmac_write(dmac, AXI_DMAC_REG_Y_LENGTH, sg->y_len - 1);
+	axi_dmac_write(dmac, AXI_DMAC_REG_X_LENGTH, sg->hw->x_len);
+	axi_dmac_write(dmac, AXI_DMAC_REG_Y_LENGTH, sg->hw->y_len);
 	axi_dmac_write(dmac, AXI_DMAC_REG_FLAGS, flags);
 	axi_dmac_write(dmac, AXI_DMAC_REG_START_TRANSFER, 1);
 }
@@ -286,9 +297,9 @@ static inline unsigned int axi_dmac_total_sg_bytes(struct axi_dmac_chan *chan,
 	struct axi_dmac_sg *sg)
 {
 	if (chan->hw_2d)
-		return sg->x_len * sg->y_len;
+		return (sg->hw->x_len + 1) * (sg->hw->y_len + 1);
 	else
-		return sg->x_len;
+		return (sg->hw->x_len + 1);
 }
 
 static void axi_dmac_dequeue_partial_xfers(struct axi_dmac_chan *chan)
@@ -307,9 +318,9 @@ static void axi_dmac_dequeue_partial_xfers(struct axi_dmac_chan *chan)
 	list_for_each_entry(desc, &chan->active_descs, vdesc.node) {
 		for (i = 0; i < desc->num_sgs; i++) {
 			sg = &desc->sg[i];
-			if (sg->id == AXI_DMAC_SG_UNUSED)
+			if (sg->hw->id == AXI_DMAC_SG_UNUSED)
 				continue;
-			if (sg->id == id) {
+			if (sg->hw->id == id) {
 				desc->have_partial_xfer = true;
 				sg->partial_len = len;
 				found_sg = true;
@@ -376,12 +387,12 @@ static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
 
 	do {
 		sg = &active->sg[active->num_completed];
-		if (sg->id == AXI_DMAC_SG_UNUSED) /* Not yet submitted */
+		if (sg->hw->id == AXI_DMAC_SG_UNUSED) /* Not yet submitted */
 			break;
-		if (!(BIT(sg->id) & completed_transfers))
+		if (!(BIT(sg->hw->id) & completed_transfers))
 			break;
 		active->num_completed++;
-		sg->id = AXI_DMAC_SG_UNUSED;
+		sg->hw->id = AXI_DMAC_SG_UNUSED;
 		if (sg->schedule_when_free) {
 			sg->schedule_when_free = false;
 			start_next = true;
@@ -476,22 +487,52 @@ static void axi_dmac_issue_pending(struct dma_chan *c)
 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
 }
 
-static struct axi_dmac_desc *axi_dmac_alloc_desc(unsigned int num_sgs)
+static struct axi_dmac_desc *
+axi_dmac_alloc_desc(struct axi_dmac_chan *chan, unsigned int num_sgs)
 {
+	struct axi_dmac *dmac = chan_to_axi_dmac(chan);
+	struct device *dev = dmac->dma_dev.dev;
+	struct axi_dmac_hw_desc *hws;
 	struct axi_dmac_desc *desc;
+	dma_addr_t hw_phys;
 	unsigned int i;
 
 	desc = kzalloc(struct_size(desc, sg, num_sgs), GFP_NOWAIT);
 	if (!desc)
 		return NULL;
 
 	desc->num_sgs = num_sgs;
+	desc->chan = chan;
 
-	for (i = 0; i < num_sgs; i++)
-		desc->sg[i].id = AXI_DMAC_SG_UNUSED;
+	hws = dma_alloc_coherent(dev, PAGE_ALIGN(num_sgs * sizeof(*hws)),
+				 &hw_phys, GFP_ATOMIC);
+	if (!hws) {
+		kfree(desc);
+		return NULL;
+	}
+
+	for (i = 0; i < num_sgs; i++) {
+		desc->sg[i].hw = &hws[i];
+		desc->sg[i].hw_phys = hw_phys + i * sizeof(*hws);
+
+		hws[i].id = AXI_DMAC_SG_UNUSED;
+		hws[i].flags = 0;
+	}
 
 	return desc;
 }
 
+static void axi_dmac_free_desc(struct axi_dmac_desc *desc)
+{
+	struct axi_dmac *dmac = chan_to_axi_dmac(desc->chan);
+	struct device *dev = dmac->dma_dev.dev;
+	struct axi_dmac_hw_desc *hw = desc->sg[0].hw;
+	dma_addr_t hw_phys = desc->sg[0].hw_phys;
+
+	dma_free_coherent(dev, PAGE_ALIGN(desc->num_sgs * sizeof(*hw)),
+			  hw, hw_phys);
+	kfree(desc);
+}
+
 static struct axi_dmac_sg *axi_dmac_fill_linear_sg(struct axi_dmac_chan *chan,
 	enum dma_transfer_direction direction, dma_addr_t addr,
 	unsigned int num_periods, unsigned int period_len,
@@ -510,21 +551,22 @@ static struct axi_dmac_sg *axi_dmac_fill_linear_sg(struct axi_dmac_chan *chan,
 	for (i = 0; i < num_periods; i++) {
 		for (len = period_len; len > segment_size; sg++) {
 			if (direction == DMA_DEV_TO_MEM)
-				sg->dest_addr = addr;
+				sg->hw->dest_addr = addr;
 			else
-				sg->src_addr = addr;
-			sg->x_len = segment_size;
-			sg->y_len = 1;
+				sg->hw->src_addr = addr;
+			sg->hw->x_len = segment_size - 1;
+			sg->hw->y_len = 0;
+			sg->hw->flags = 0;
 			addr += segment_size;
 			len -= segment_size;
 		}
 
 		if (direction == DMA_DEV_TO_MEM)
-			sg->dest_addr = addr;
+			sg->hw->dest_addr = addr;
 		else
-			sg->src_addr = addr;
-		sg->x_len = len;
-		sg->y_len = 1;
+			sg->hw->src_addr = addr;
+		sg->hw->x_len = len - 1;
+		sg->hw->y_len = 0;
 		sg++;
 		addr += len;
 	}
@@ -551,7 +593,7 @@ static struct dma_async_tx_descriptor *axi_dmac_prep_slave_sg(
 	for_each_sg(sgl, sg, sg_len, i)
 		num_sgs += DIV_ROUND_UP(sg_dma_len(sg), chan->max_length);
 
-	desc = axi_dmac_alloc_desc(num_sgs);
+	desc = axi_dmac_alloc_desc(chan, num_sgs);
 	if (!desc)
 		return NULL;
@@ -560,7 +602,7 @@ static struct dma_async_tx_descriptor *axi_dmac_prep_slave_sg(
 	for_each_sg(sgl, sg, sg_len, i) {
 		if (!axi_dmac_check_addr(chan, sg_dma_address(sg)) ||
 		    !axi_dmac_check_len(chan, sg_dma_len(sg))) {
-			kfree(desc);
+			axi_dmac_free_desc(desc);
 			return NULL;
 		}
@@ -595,7 +637,7 @@ static struct dma_async_tx_descriptor *axi_dmac_prep_dma_cyclic(
 	num_periods = buf_len / period_len;
 	num_segments = DIV_ROUND_UP(period_len, chan->max_length);
 
-	desc = axi_dmac_alloc_desc(num_periods * num_segments);
+	desc = axi_dmac_alloc_desc(chan, num_periods * num_segments);
 	if (!desc)
 		return NULL;
@@ -650,26 +692,26 @@ static struct dma_async_tx_descriptor *axi_dmac_prep_interleaved(
 		return NULL;
 	}
 
-	desc = axi_dmac_alloc_desc(1);
+	desc = axi_dmac_alloc_desc(chan, 1);
 	if (!desc)
 		return NULL;
 
 	if (axi_dmac_src_is_mem(chan)) {
-		desc->sg[0].src_addr = xt->src_start;
-		desc->sg[0].src_stride = xt->sgl[0].size + src_icg;
+		desc->sg[0].hw->src_addr = xt->src_start;
+		desc->sg[0].hw->src_stride = xt->sgl[0].size + src_icg;
 	}
 
 	if (axi_dmac_dest_is_mem(chan)) {
-		desc->sg[0].dest_addr = xt->dst_start;
-		desc->sg[0].dest_stride = xt->sgl[0].size + dst_icg;
+		desc->sg[0].hw->dest_addr = xt->dst_start;
+		desc->sg[0].hw->dst_stride = xt->sgl[0].size + dst_icg;
 	}
 
 	if (chan->hw_2d) {
-		desc->sg[0].x_len = xt->sgl[0].size;
-		desc->sg[0].y_len = xt->numf;
+		desc->sg[0].hw->x_len = xt->sgl[0].size - 1;
+		desc->sg[0].hw->y_len = xt->numf - 1;
 	} else {
-		desc->sg[0].x_len = xt->sgl[0].size * xt->numf;
-		desc->sg[0].y_len = 1;
+		desc->sg[0].hw->x_len = xt->sgl[0].size * xt->numf - 1;
+		desc->sg[0].hw->y_len = 0;
 	}
 
 	if (flags & DMA_CYCLIC)
@@ -685,7 +727,7 @@ static void axi_dmac_free_chan_resources(struct dma_chan *c)
 
 static void axi_dmac_desc_free(struct virt_dma_desc *vdesc)
 {
-	kfree(container_of(vdesc, struct axi_dmac_desc, vdesc));
+	axi_dmac_free_desc(to_axi_dmac_desc(vdesc));
 }
 
 static bool axi_dmac_regmap_rdwr(struct device *dev, unsigned int reg)
From patchwork Mon Dec  4 14:03:51 2023
X-Patchwork-Submitter: Paul Cercueil
X-Patchwork-Id: 173356
From: Paul Cercueil
To: Vinod Koul
Cc: Lars-Peter Clausen, Nuno Sá, Michael Hennerich,
 dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org, Paul Cercueil
Subject: [PATCH 3/4] dmaengine: axi-dmac: Add support for scatter-gather transfers
Date: Mon, 4 Dec 2023 15:03:51 +0100
Message-ID: <20231204140352.30420-4-paul@crapouillou.net>
In-Reply-To: <20231204140352.30420-1-paul@crapouillou.net>
References: <20231204140352.30420-1-paul@crapouillou.net>

Implement support for scatter-gather transfers. Build a chain of
hardware descriptors, each one corresponding to a segment of the
transfer, and linked to the next one.
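As a rough illustration of the chaining described above, here is a minimal
sketch of how consecutive hardware descriptors could be linked through
their next_sg_addr field; build_sg_chain() is a hypothetical helper (not
part of this patch) built only on the descriptor fields and flags that
this series introduces:

	/* Hypothetical helper, for illustration only: link the hardware
	 * descriptors of a transfer into a chain the engine walks on its own. */
	static void build_sg_chain(struct axi_dmac_desc *desc)
	{
		unsigned int i;

		for (i = 0; i < desc->num_sgs - 1; i++)
			desc->sg[i].hw->next_sg_addr = desc->sg[i + 1].hw_phys;

		/* The last descriptor ends the chain and raises the interrupt. */
		desc->sg[desc->num_sgs - 1].hw->flags =
			AXI_DMAC_HW_FLAG_LAST | AXI_DMAC_HW_FLAG_IRQ;
	}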
The hardware will transfer the chain and only fire an interrupt once the
whole chain has been transferred.

Support for scatter-gather is automatically enabled when the driver
detects that the hardware supports it, by writing then reading the
AXI_DMAC_REG_SG_ADDRESS register. If the feature is not available, the
driver falls back to standard DMA transfers.

Signed-off-by: Paul Cercueil
---
 drivers/dma/dma-axi-dmac.c | 135 +++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 42 deletions(-)

diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index 185230a769b9..5109530b66de 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -81,9 +81,13 @@
 #define AXI_DMAC_REG_CURRENT_DEST_ADDR	0x438
 #define AXI_DMAC_REG_PARTIAL_XFER_LEN	0x44c
 #define AXI_DMAC_REG_PARTIAL_XFER_ID	0x450
+#define AXI_DMAC_REG_CURRENT_SG_ID	0x454
+#define AXI_DMAC_REG_SG_ADDRESS	0x47c
+#define AXI_DMAC_REG_SG_ADDRESS_HIGH	0x4bc
 
 #define AXI_DMAC_CTRL_ENABLE		BIT(0)
 #define AXI_DMAC_CTRL_PAUSE		BIT(1)
+#define AXI_DMAC_CTRL_ENABLE_SG		BIT(2)
 
 #define AXI_DMAC_IRQ_SOT		BIT(0)
 #define AXI_DMAC_IRQ_EOT		BIT(1)
@@ -97,12 +101,16 @@
 /* The maximum ID allocated by the hardware is 31 */
 #define AXI_DMAC_SG_UNUSED 32U
 
+/* Flags for axi_dmac_hw_desc.flags */
+#define AXI_DMAC_HW_FLAG_LAST	BIT(0)
+#define AXI_DMAC_HW_FLAG_IRQ	BIT(1)
+
 struct axi_dmac_hw_desc {
 	u32 flags;
 	u32 id;
 	u64 dest_addr;
 	u64 src_addr;
-	u64 __unused;
+	u64 next_sg_addr;
 	u32 y_len;
 	u32 x_len;
 	u32 src_stride;
@@ -150,6 +158,7 @@ struct axi_dmac_chan {
 	bool hw_partial_xfer;
 	bool hw_cyclic;
 	bool hw_2d;
+	bool hw_sg;
 };
 
 struct axi_dmac {
@@ -224,9 +233,11 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	unsigned int flags = 0;
 	unsigned int val;
 
-	val = axi_dmac_read(dmac, AXI_DMAC_REG_START_TRANSFER);
-	if (val) /* Queue is full, wait for the next SOT IRQ */
-		return;
+	if (!chan->hw_sg) {
+		val = axi_dmac_read(dmac, AXI_DMAC_REG_START_TRANSFER);
+		if (val) /* Queue is full, wait for the next SOT IRQ */
+			return;
+	}
 
 	desc = chan->next_desc;
@@ -245,9 +256,10 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 		return;
 	}
 
-	desc->num_submitted++;
-	if (desc->num_submitted == desc->num_sgs ||
-	    desc->have_partial_xfer) {
+	if (chan->hw_sg) {
+		chan->next_desc = NULL;
+	} else if (++desc->num_submitted == desc->num_sgs ||
+		   desc->have_partial_xfer) {
 		if (desc->cyclic)
 			desc->num_submitted = 0; /* Start again */
 		else
@@ -259,14 +271,16 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 
 	sg->hw->id = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_ID);
 
-	if (axi_dmac_dest_is_mem(chan)) {
-		axi_dmac_write(dmac, AXI_DMAC_REG_DEST_ADDRESS, sg->hw->dest_addr);
-		axi_dmac_write(dmac, AXI_DMAC_REG_DEST_STRIDE, sg->hw->dst_stride);
-	}
+	if (!chan->hw_sg) {
+		if (axi_dmac_dest_is_mem(chan)) {
+			axi_dmac_write(dmac, AXI_DMAC_REG_DEST_ADDRESS, sg->hw->dest_addr);
+			axi_dmac_write(dmac, AXI_DMAC_REG_DEST_STRIDE, sg->hw->dst_stride);
+		}
 
-	if (axi_dmac_src_is_mem(chan)) {
-		axi_dmac_write(dmac, AXI_DMAC_REG_SRC_ADDRESS, sg->hw->src_addr);
-		axi_dmac_write(dmac, AXI_DMAC_REG_SRC_STRIDE, sg->hw->src_stride);
+		if (axi_dmac_src_is_mem(chan)) {
+			axi_dmac_write(dmac, AXI_DMAC_REG_SRC_ADDRESS, sg->hw->src_addr);
+			axi_dmac_write(dmac, AXI_DMAC_REG_SRC_STRIDE, sg->hw->src_stride);
+		}
 	}
 
 	/*
@@ -281,8 +295,14 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	if (chan->hw_partial_xfer)
 		flags |= AXI_DMAC_FLAG_PARTIAL_REPORT;
 
-	axi_dmac_write(dmac, AXI_DMAC_REG_X_LENGTH, sg->hw->x_len);
-	axi_dmac_write(dmac, AXI_DMAC_REG_Y_LENGTH, sg->hw->y_len);
+	if (chan->hw_sg) {
+		axi_dmac_write(dmac, AXI_DMAC_REG_SG_ADDRESS, (u32)sg->hw_phys);
+		axi_dmac_write(dmac, AXI_DMAC_REG_SG_ADDRESS_HIGH,
+			       (u64)sg->hw_phys >> 32);
+	} else {
+		axi_dmac_write(dmac, AXI_DMAC_REG_X_LENGTH, sg->hw->x_len);
+		axi_dmac_write(dmac, AXI_DMAC_REG_Y_LENGTH, sg->hw->y_len);
+	}
 	axi_dmac_write(dmac, AXI_DMAC_REG_FLAGS, flags);
 	axi_dmac_write(dmac, AXI_DMAC_REG_START_TRANSFER, 1);
 }
@@ -359,6 +379,9 @@ static void axi_dmac_compute_residue(struct axi_dmac_chan *chan,
 	rslt->result = DMA_TRANS_NOERROR;
 	rslt->residue = 0;
 
+	if (chan->hw_sg)
+		return;
+
 	/*
 	 * We get here if the last completed segment is partial, which
 	 * means we can compute the residue from that segment onwards
@@ -385,36 +408,46 @@ static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
 	    (completed_transfers & AXI_DMAC_FLAG_PARTIAL_XFER_DONE))
 		axi_dmac_dequeue_partial_xfers(chan);
 
-	do {
-		sg = &active->sg[active->num_completed];
-		if (sg->hw->id == AXI_DMAC_SG_UNUSED) /* Not yet submitted */
-			break;
-		if (!(BIT(sg->hw->id) & completed_transfers))
-			break;
-		active->num_completed++;
-		sg->hw->id = AXI_DMAC_SG_UNUSED;
-		if (sg->schedule_when_free) {
-			sg->schedule_when_free = false;
-			start_next = true;
+	if (chan->hw_sg) {
+		if (active->cyclic) {
+			vchan_cyclic_callback(&active->vdesc);
+		} else {
+			list_del(&active->vdesc.node);
+			vchan_cookie_complete(&active->vdesc);
+			active = axi_dmac_active_desc(chan);
 		}
+	} else {
+		do {
+			sg = &active->sg[active->num_completed];
+			if (sg->hw->id == AXI_DMAC_SG_UNUSED) /* Not yet submitted */
+				break;
+			if (!(BIT(sg->hw->id) & completed_transfers))
+				break;
+			active->num_completed++;
+			sg->hw->id = AXI_DMAC_SG_UNUSED;
+			if (sg->schedule_when_free) {
+				sg->schedule_when_free = false;
+				start_next = true;
+			}
 
-		if (sg->partial_len)
-			axi_dmac_compute_residue(chan, active);
+			if (sg->partial_len)
+				axi_dmac_compute_residue(chan, active);
 
-		if (active->cyclic)
-			vchan_cyclic_callback(&active->vdesc);
+			if (active->cyclic)
+				vchan_cyclic_callback(&active->vdesc);
 
-		if (active->num_completed == active->num_sgs ||
-		    sg->partial_len) {
-			if (active->cyclic) {
-				active->num_completed = 0; /* wrap around */
-			} else {
-				list_del(&active->vdesc.node);
-				vchan_cookie_complete(&active->vdesc);
-				active = axi_dmac_active_desc(chan);
+			if (active->num_completed == active->num_sgs ||
+			    sg->partial_len) {
+				if (active->cyclic) {
+					active->num_completed = 0; /* wrap around */
+				} else {
+					list_del(&active->vdesc.node);
+					vchan_cookie_complete(&active->vdesc);
+					active = axi_dmac_active_desc(chan);
+				}
 			}
-		}
-	} while (active);
+		} while (active);
+	}
 
 	return start_next;
 }
@@ -478,8 +511,12 @@ static void axi_dmac_issue_pending(struct dma_chan *c)
 	struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
 	struct axi_dmac *dmac = chan_to_axi_dmac(chan);
 	unsigned long flags;
+	u32 ctrl = AXI_DMAC_CTRL_ENABLE;
+
+	if (chan->hw_sg)
+		ctrl |= AXI_DMAC_CTRL_ENABLE_SG;
 
-	axi_dmac_write(dmac, AXI_DMAC_REG_CTRL, AXI_DMAC_CTRL_ENABLE);
+	axi_dmac_write(dmac, AXI_DMAC_REG_CTRL, ctrl);
 
 	spin_lock_irqsave(&chan->vchan.lock, flags);
 	if (vchan_issue_pending(&chan->vchan))
@@ -516,8 +553,14 @@ axi_dmac_alloc_desc(struct axi_dmac_chan *chan, unsigned int num_sgs)
 
 		hws[i].id = AXI_DMAC_SG_UNUSED;
 		hws[i].flags = 0;
+
+		/* Link hardware descriptors */
+		hws[i].next_sg_addr = hw_phys + (i + 1) * sizeof(*hws);
 	}
 
+	/* The last hardware descriptor will trigger an interrupt */
+	desc->sg[num_sgs - 1].hw->flags = AXI_DMAC_HW_FLAG_LAST | AXI_DMAC_HW_FLAG_IRQ;
+
 	return desc;
 }
@@ -753,6 +796,9 @@ static bool axi_dmac_regmap_rdwr(struct device *dev, unsigned int reg)
 	case AXI_DMAC_REG_CURRENT_DEST_ADDR:
 	case AXI_DMAC_REG_PARTIAL_XFER_LEN:
 	case AXI_DMAC_REG_PARTIAL_XFER_ID:
+	case AXI_DMAC_REG_CURRENT_SG_ID:
+	case AXI_DMAC_REG_SG_ADDRESS:
+	case AXI_DMAC_REG_SG_ADDRESS_HIGH:
 		return true;
 	default:
 		return false;
@@ -905,6 +951,10 @@ static int axi_dmac_detect_caps(struct axi_dmac *dmac, unsigned int version)
 	if (axi_dmac_read(dmac, AXI_DMAC_REG_FLAGS) == AXI_DMAC_FLAG_CYCLIC)
 		chan->hw_cyclic = true;
 
+	axi_dmac_write(dmac, AXI_DMAC_REG_SG_ADDRESS, 0xffffffff);
+	if (axi_dmac_read(dmac, AXI_DMAC_REG_SG_ADDRESS))
+		chan->hw_sg = true;
+
 	axi_dmac_write(dmac, AXI_DMAC_REG_Y_LENGTH, 1);
 	if (axi_dmac_read(dmac, AXI_DMAC_REG_Y_LENGTH) == 1)
 		chan->hw_2d = true;
@@ -1005,6 +1055,7 @@ static int axi_dmac_probe(struct platform_device *pdev)
 	dma_dev->dst_addr_widths = BIT(dmac->chan.dest_width);
 	dma_dev->directions = BIT(dmac->chan.direction);
 	dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+	dma_dev->max_sg_burst = 31; /* 31 SGs maximum in one burst */
 	INIT_LIST_HEAD(&dma_dev->channels);
 	dmac->chan.vchan.desc_free = axi_dmac_desc_free;
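For reference, the capability detection described in the commit message
boils down to the following probe sequence in axi_dmac_detect_caps(); the
register only retains a written value when the scatter-gather engine is
present in the HDL core:

	/* Probe for the scatter-gather engine by writing then reading back
	 * AXI_DMAC_REG_SG_ADDRESS; a read of zero means no SG support. */
	axi_dmac_write(dmac, AXI_DMAC_REG_SG_ADDRESS, 0xffffffff);
	if (axi_dmac_read(dmac, AXI_DMAC_REG_SG_ADDRESS))
		chan->hw_sg = true;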
From patchwork Mon Dec  4 14:03:52 2023
X-Patchwork-Submitter: Paul Cercueil
X-Patchwork-Id: 173355
From: Paul Cercueil
To: Vinod Koul
Cc: Lars-Peter Clausen, Nuno Sá, Michael Hennerich,
 dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org, Paul Cercueil
Subject: [PATCH 4/4] dmaengine: axi-dmac: Use only EOT interrupts when doing scatter-gather
Date: Mon, 4 Dec 2023 15:03:52 +0100
Message-ID: <20231204140352.30420-5-paul@crapouillou.net>
In-Reply-To: <20231204140352.30420-1-paul@crapouillou.net>
References: <20231204140352.30420-1-paul@crapouillou.net>

Instead of notifying userspace in the end-of-transfer (EOT) interrupt and
programming the hardware in the start-of-transfer (SOT) interrupt, we can
do both things in the EOT interrupt, which allows us to mask the SOT and
halve the number of interrupts sent by the HDL core.
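As a quick sketch of the resulting interrupt setup at probe time
(mirroring the change below; irq_mask is the local variable this patch
adds):

	u32 irq_mask = 0;

	/* With scatter-gather, everything is handled from the EOT interrupt,
	 * so the start-of-transfer interrupt can stay masked. */
	if (dmac->chan.hw_sg)
		irq_mask |= AXI_DMAC_IRQ_SOT;

	/* Bits set in the mask are disabled; without SG both SOT and EOT
	 * remain enabled, as before. */
	axi_dmac_write(dmac, AXI_DMAC_REG_IRQ_MASK, irq_mask);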
Signed-off-by: Paul Cercueil
---
 drivers/dma/dma-axi-dmac.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index 5109530b66de..beed91a8238c 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -415,6 +415,7 @@ static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
 			list_del(&active->vdesc.node);
 			vchan_cookie_complete(&active->vdesc);
 			active = axi_dmac_active_desc(chan);
+			start_next = !!active;
 		}
 	} else {
 		do {
@@ -1000,6 +1001,7 @@ static int axi_dmac_probe(struct platform_device *pdev)
 	struct axi_dmac *dmac;
 	struct regmap *regmap;
 	unsigned int version;
+	u32 irq_mask = 0;
 	int ret;
 
 	dmac = devm_kzalloc(&pdev->dev, sizeof(*dmac), GFP_KERNEL);
@@ -1067,7 +1069,10 @@ static int axi_dmac_probe(struct platform_device *pdev)
 
 	dma_dev->copy_align = (dmac->chan.address_align_mask + 1);
 
-	axi_dmac_write(dmac, AXI_DMAC_REG_IRQ_MASK, 0x00);
+	if (dmac->chan.hw_sg)
+		irq_mask |= AXI_DMAC_IRQ_SOT;
+
+	axi_dmac_write(dmac, AXI_DMAC_REG_IRQ_MASK, irq_mask);
 
 	if (of_dma_is_coherent(pdev->dev.of_node)) {
 		ret = axi_dmac_read(dmac, AXI_DMAC_REG_COHERENCY_DESC);