Message ID | 20231219175009.65482-8-paul@crapouillou.net |
---|---|
State | New |
Headers |
From: Paul Cercueil <paul@crapouillou.net>
To: Jonathan Cameron <jic23@kernel.org>, Lars-Peter Clausen <lars@metafoo.de>, Sumit Semwal <sumit.semwal@linaro.org>, Christian König <christian.koenig@amd.com>, Vinod Koul <vkoul@kernel.org>, Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, linux-iio@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Nuno Sá <noname.nuno@gmail.com>, Michael Hennerich <Michael.Hennerich@analog.com>, Paul Cercueil <paul@crapouillou.net>
Subject: [PATCH v5 7/8] iio: buffer-dmaengine: Support new DMABUF based userspace API
Date: Tue, 19 Dec 2023 18:50:08 +0100
Message-ID: <20231219175009.65482-8-paul@crapouillou.net>
In-Reply-To: <20231219175009.65482-1-paul@crapouillou.net>
References: <20231219175009.65482-1-paul@crapouillou.net>
Series | iio: new DMABUF based API, v5 |
Commit Message
Paul Cercueil
Dec. 19, 2023, 5:50 p.m. UTC
Use the functions provided by the buffer-dma core to implement the
DMABUF userspace API in the buffer-dmaengine IIO buffer implementation.

Since we want to be able to transfer an arbitrary number of bytes and
not necessarily the full DMABUF, the associated scatterlist is converted
to an array of DMA addresses + lengths, which is then passed to
dmaengine_prep_slave_dma_array().
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
---
v3: Use the new dmaengine_prep_slave_dma_array(), and adapt the code to
work with the new functions introduced in industrialio-buffer-dma.c.
v5: - Use the new dmaengine_prep_slave_dma_vec().
- Restrict to input buffers, since output buffers are not yet
supported by IIO buffers.
---
.../buffer/industrialio-buffer-dmaengine.c | 52 ++++++++++++++++---
1 file changed, 46 insertions(+), 6 deletions(-)
Comments
On Tue, 19 Dec 2023 18:50:08 +0100 Paul Cercueil <paul@crapouillou.net> wrote:

> Use the functions provided by the buffer-dma core to implement the
> DMABUF userspace API in the buffer-dmaengine IIO buffer implementation.
>
> Since we want to be able to transfer an arbitrary number of bytes and
> not necessarily the full DMABUF, the associated scatterlist is converted
> to an array of DMA addresses + lengths, which is then passed to
> dmaengine_prep_slave_dma_array().
>
> Signed-off-by: Paul Cercueil <paul@crapouillou.net>

One question inline. Otherwise looks fine to me.

J

[snip]

> +	if (block->sg_table) {
> +		sgl = block->sg_table->sgl;
> +		nents = sg_nents_for_len(sgl, block->bytes_used);

Are we guaranteed the length in the sglist is enough? If not this
can return an error code.

[remainder of quoted patch snipped]
Hi Jonathan,

On Thursday, December 21, 2023 at 16:12 +0000, Jonathan Cameron wrote:
> > +	if (block->sg_table) {
> > +		sgl = block->sg_table->sgl;
> > +		nents = sg_nents_for_len(sgl, block->bytes_used);
>
> Are we guaranteed the length in the sglist is enough? If not this
> can return an error code.

The length of the sglist will always be enough: the
iio_buffer_enqueue_dmabuf() function already checks that
block->bytes_used is equal to or smaller than the size of the DMABUF.

It is quite a few functions above in the call stack though, so I can
handle the errors of sg_nents_for_len() here if you think it makes
sense.

Cheers,
-Paul

[remainder of quoted patch snipped]
On Thu, 2023-12-21 at 18:30 +0100, Paul Cercueil wrote:
> The length of the sglist will always be enough: the
> iio_buffer_enqueue_dmabuf() function already checks that
> block->bytes_used is equal to or smaller than the size of the DMABUF.
>
> It is quite a few functions above in the call stack though, so I can
> handle the errors of sg_nents_for_len() here if you think it makes
> sense.

Maybe putting something like the above in a comment?

- Nuno Sá
On Fri, 22 Dec 2023 09:58:29 +0100 Nuno Sá <noname.nuno@gmail.com> wrote:
> On Thu, 2023-12-21 at 18:30 +0100, Paul Cercueil wrote:
> > The length of the sglist will always be enough: the
> > iio_buffer_enqueue_dmabuf() function already checks that
> > block->bytes_used is equal to or smaller than the size of the DMABUF.
> >
> > It is quite a few functions above in the call stack though, so I can
> > handle the errors of sg_nents_for_len() here if you think it makes
> > sense.
>
> Maybe putting something like the above in a comment?

Either comment, or an explicit check so we don't need the comment, is
fine as far as I'm concerned.

Jonathan
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 5f85ba38e6f6..825d76a24a67 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -64,15 +64,51 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
 	struct dmaengine_buffer *dmaengine_buffer =
 		iio_buffer_to_dmaengine_buffer(&queue->buffer);
 	struct dma_async_tx_descriptor *desc;
+	unsigned int i, nents;
+	struct scatterlist *sgl;
+	struct dma_vec *vecs;
+	size_t max_size;
 	dma_cookie_t cookie;
+	size_t len_total;
 
-	block->bytes_used = min(block->size, dmaengine_buffer->max_size);
-	block->bytes_used = round_down(block->bytes_used,
-				       dmaengine_buffer->align);
+	if (queue->buffer.direction != IIO_BUFFER_DIRECTION_IN) {
+		/* We do not yet support output buffers. */
+		return -EINVAL;
+	}
 
-	desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
-		block->phys_addr, block->bytes_used, DMA_DEV_TO_MEM,
-		DMA_PREP_INTERRUPT);
+	if (block->sg_table) {
+		sgl = block->sg_table->sgl;
+		nents = sg_nents_for_len(sgl, block->bytes_used);
+
+		vecs = kmalloc_array(nents, sizeof(*vecs), GFP_KERNEL);
+		if (!vecs)
+			return -ENOMEM;
+
+		len_total = block->bytes_used;
+
+		for (i = 0; i < nents; i++) {
+			vecs[i].addr = sg_dma_address(sgl);
+			vecs[i].len = min(sg_dma_len(sgl), len_total);
+			len_total -= vecs[i].len;
+
+			sgl = sg_next(sgl);
+		}
+
+		desc = dmaengine_prep_slave_dma_vec(dmaengine_buffer->chan,
+						    vecs, nents, DMA_DEV_TO_MEM,
+						    DMA_PREP_INTERRUPT);
+		kfree(vecs);
+	} else {
+		max_size = min(block->size, dmaengine_buffer->max_size);
+		max_size = round_down(max_size, dmaengine_buffer->align);
+		block->bytes_used = max_size;
+
+		desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
+						   block->phys_addr,
+						   block->bytes_used,
+						   DMA_DEV_TO_MEM,
+						   DMA_PREP_INTERRUPT);
+	}
 	if (!desc)
 		return -ENOMEM;
 
@@ -120,6 +156,10 @@ static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
 	.data_available = iio_dma_buffer_data_available,
 	.release = iio_dmaengine_buffer_release,
 
+	.enqueue_dmabuf = iio_dma_buffer_enqueue_dmabuf,
+	.attach_dmabuf = iio_dma_buffer_attach_dmabuf,
+	.detach_dmabuf = iio_dma_buffer_detach_dmabuf,
+
 	.modes = INDIO_BUFFER_HARDWARE,
 	.flags = INDIO_BUFFER_FLAG_FIXED_WATERMARK,
 };