Message ID: 20231219175009.65482-4-paul@crapouillou.net
State: New
Headers:
From: Paul Cercueil <paul@crapouillou.net>
To: Jonathan Cameron <jic23@kernel.org>, Lars-Peter Clausen <lars@metafoo.de>, Sumit Semwal <sumit.semwal@linaro.org>, Christian König <christian.koenig@amd.com>, Vinod Koul <vkoul@kernel.org>, Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, linux-iio@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Nuno Sá <noname.nuno@gmail.com>, Michael Hennerich <Michael.Hennerich@analog.com>, Paul Cercueil <paul@crapouillou.net>
Subject: [PATCH v5 3/8] dmaengine: Add API function dmaengine_prep_slave_dma_vec()
Date: Tue, 19 Dec 2023 18:50:04 +0100
Message-ID: <20231219175009.65482-4-paul@crapouillou.net>
In-Reply-To: <20231219175009.65482-1-paul@crapouillou.net>
References: <20231219175009.65482-1-paul@crapouillou.net>
Series: iio: new DMABUF based API, v5
Commit Message
Paul Cercueil
Dec. 19, 2023, 5:50 p.m. UTC
This function can be used to initiate a scatter-gather DMA transfer,
where the address and size of each segment are given by one entry of
the dma_vec array.

The major difference from dmaengine_prep_slave_sg() is that it supports
specifying the length of each DMA transfer, since trying to override the
length of the transfer with dmaengine_prep_slave_sg() is a very tedious
process. The introduction of a new API function is also justified by the
fact that scatterlists are on their way out.

Note that dmaengine_prep_interleaved_dma() is not helpful either in this
case, as it assumes that the address of each segment will be higher than
that of the previous segment, which we simply cannot guarantee for a
scatter-gather transfer.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
---
v3: New patch
v5: Replace with function dmaengine_prep_slave_dma_vec(), and struct
'dma_vec'.
Note that at some point we will need to support cyclic transfers
using dmaengine_prep_slave_dma_vec(). Maybe with a new "flags"
parameter to the function?
---
include/linux/dmaengine.h | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
Comments
On Tue, 19 Dec 2023 18:50:04 +0100, Paul Cercueil <paul@crapouillou.net> wrote:

> This function can be used to initiate a scatter-gather DMA transfer,
> where the address and size of each segment is located in one entry of
> the dma_vec array.
> [...]

This and the next patch look fine to me, as they clearly simplify things for our use cases, but they are really something for the dmaengine maintainers to comment on.

Jonathan
On 19-12-23, 18:50, Paul Cercueil wrote:

> [...]
> +/**
> + * struct dma_vec - DMA vector
> + * @addr: Bus address of the start of the vector
> + * @len: Length in bytes of the DMA vector
> + */
> +struct dma_vec {
> +	dma_addr_t addr;
> +	size_t len;
> +};

So you want to transfer multiple buffers, right? Why not use dmaengine_prep_slave_sg()? Is there a reason for not using that one?

Furthermore, I missed replying to your earlier email on the use of dmaengine_prep_interleaved_dma(), my apologies. That can be made to work for you as well. Please see the notes where icg can be ignored; it does not need the icg value to be set. In fact, the interleaved API can be made to work in most of these cases I can think of...
Hi Vinod,

On Thursday 21 December 2023 at 20:44 +0530, Vinod Koul wrote:

> So you want to transfer multiple buffers, right? Why not use
> dmaengine_prep_slave_sg()? Is there a reason for not using that one?

Well, I think I answered that in the commit message, didn't I?

> Furthermore, I missed replying to your earlier email on the use of
> dmaengine_prep_interleaved_dma(), my apologies. That can be made to
> work for you as well. Please see the notes where icg can be ignored;
> it does not need the icg value to be set.
>
> In fact, the interleaved API can be made to work in most of these
> cases I can think of...

So if I want to transfer 16 bytes from 0x10, then 16 bytes from 0x0, then 16 bytes from 0x20, how should I configure the dma_interleaved_template?

Cheers,
-Paul
Hi Vinod,

On Thursday 21 December 2023 at 20:44 +0530, Vinod Koul wrote:

> [...]

I don't want to be pushy, but I'd like to know how to solve this now; otherwise I'll just send the same patches for my v6.

> So you want to transfer multiple buffers, right? Why not use
> dmaengine_prep_slave_sg()? Is there a reason for not using that one?

The reason is that we want to have the possibility to transfer less than the total size of the scatterlist, and that's currently very hard to do; scatterlists were designed not to be tampered with.

Christian König then suggested introducing a "dma_vec", which had been on his TODO list for a while now.

> Furthermore, I missed replying to your earlier email on the use of
> dmaengine_prep_interleaved_dma(), my apologies. That can be made to
> work for you as well. Please see the notes where icg can be ignored;
> it does not need the icg value to be set.
>
> In fact, the interleaved API can be made to work in most of these
> cases I can think of...

The interleaved API only supports incrementing addresses; I see no way to decrement the address (without using crude hacks, e.g. overflowing size_t). I can't guarantee that my DMABUF's pages are ordered in memory.

Cheers,
-Paul
Hi Paul,

On 08-01-24, 13:20, Paul Cercueil wrote:

> I don't want to be pushy, but I'd like to know how to solve this now;
> otherwise I'll just send the same patches for my v6.
>
> The reason is that we want to have the possibility to transfer less
> than the total size of the scatterlist, and that's currently very hard
> to do; scatterlists were designed not to be tampered with.
>
> Christian König then suggested introducing a "dma_vec", which had been
> on his TODO list for a while now.

Yeah, for this, interleaved seems overkill. Let's go with this API. I would suggest changing the name of the API, though, replacing "slave" with "peripheral".
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 3df70d6131c8..ee5931ddb42f 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -160,6 +160,16 @@ struct dma_interleaved_template {
 	struct data_chunk sgl[];
 };
 
+/**
+ * struct dma_vec - DMA vector
+ * @addr: Bus address of the start of the vector
+ * @len: Length in bytes of the DMA vector
+ */
+struct dma_vec {
+	dma_addr_t addr;
+	size_t len;
+};
+
 /**
  * enum dma_ctrl_flags - DMA flags to augment operation preparation,
  *  control completion, and communicate status.
@@ -910,6 +920,10 @@ struct dma_device {
 	struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)(
 		struct dma_chan *chan, unsigned long flags);
 
+	struct dma_async_tx_descriptor *(*device_prep_slave_dma_vec)(
+		struct dma_chan *chan, const struct dma_vec *vecs,
+		size_t nents, enum dma_transfer_direction direction,
+		unsigned long flags);
 	struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
 		struct dma_chan *chan, struct scatterlist *sgl,
 		unsigned int sg_len, enum dma_transfer_direction direction,
@@ -972,6 +986,17 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
 						  dir, flags, NULL);
 }
 
+static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_dma_vec(
+	struct dma_chan *chan, const struct dma_vec *vecs, size_t nents,
+	enum dma_transfer_direction dir, unsigned long flags)
+{
+	if (!chan || !chan->device || !chan->device->device_prep_slave_dma_vec)
+		return NULL;
+
+	return chan->device->device_prep_slave_dma_vec(chan, vecs, nents,
+						       dir, flags);
+}
+
 static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
 	struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
 	enum dma_transfer_direction dir, unsigned long flags)