[v2,3/3] swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()

Message ID 20240131122543.14791-4-will@kernel.org
State New
Series Fix double allocation in swiotlb_alloc()

Commit Message

Will Deacon Jan. 31, 2024, 12:25 p.m. UTC
  core-api/dma-api-howto.rst states the following properties of
dma_alloc_coherent():

  | The CPU virtual address and the DMA address are both guaranteed to
  | be aligned to the smallest PAGE_SIZE order which is greater than or
  | equal to the requested size.

However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
parameter of swiotlb_find_slots() and so this property is not upheld.
Instead, allocations larger than a page are aligned only to PAGE_SIZE.

Calculate the mask corresponding to the page order suitable for holding
the allocation and pass that to swiotlb_find_slots().

Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Cc: Dexuan Cui <decui@microsoft.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
  

Comments

Robin Murphy Jan. 31, 2024, 4:03 p.m. UTC | #1
On 31/01/2024 12:25 pm, Will Deacon wrote:
> core-api/dma-api-howto.rst states the following properties of
> dma_alloc_coherent():
> 
>    | The CPU virtual address and the DMA address are both guaranteed to
>    | be aligned to the smallest PAGE_SIZE order which is greater than or
>    | equal to the requested size.
> 
> However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
> parameter of swiotlb_find_slots() and so this property is not upheld.
> Instead, allocations larger than a page are aligned only to PAGE_SIZE.
> 
> Calculate the mask corresponding to the page order suitable for holding
> the allocation and pass that to swiotlb_find_slots().

I guess this goes back to at least e81e99bacc9f ("swiotlb: Support 
aligned swiotlb buffers") when the explicit argument was added - not 
sure what we do about 5.15 LTS though (unless the answer is to not care...)

As before, though, how much of patch #1 is needed if this comes first?

Cheers,
Robin.

> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
> Cc: Dexuan Cui <decui@microsoft.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>   kernel/dma/swiotlb.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 4485f216e620..8ec37006ac70 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1632,12 +1632,14 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
>   	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>   	struct io_tlb_pool *pool;
>   	phys_addr_t tlb_addr;
> +	unsigned int align;
>   	int index;
>   
>   	if (!mem)
>   		return NULL;
>   
> -	index = swiotlb_find_slots(dev, 0, size, 0, &pool);
> +	align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
> +	index = swiotlb_find_slots(dev, 0, size, align, &pool);
>   	if (index == -1)
>   		return NULL;
>
  
Will Deacon Feb. 1, 2024, 12:52 p.m. UTC | #2
On Wed, Jan 31, 2024 at 04:03:38PM +0000, Robin Murphy wrote:
> On 31/01/2024 12:25 pm, Will Deacon wrote:
> > core-api/dma-api-howto.rst states the following properties of
> > dma_alloc_coherent():
> > 
> >    | The CPU virtual address and the DMA address are both guaranteed to
> >    | be aligned to the smallest PAGE_SIZE order which is greater than or
> >    | equal to the requested size.
> > 
> > However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
> > parameter of swiotlb_find_slots() and so this property is not upheld.
> > Instead, allocations larger than a page are aligned only to PAGE_SIZE.
> > 
> > Calculate the mask corresponding to the page order suitable for holding
> > the allocation and pass that to swiotlb_find_slots().
> 
> I guess this goes back to at least e81e99bacc9f ("swiotlb: Support aligned
> swiotlb buffers") when the explicit argument was added - not sure what we do
> about 5.15 LTS though (unless the answer is to not care...)

Thanks. I'll add the Fixes: tag but, to be honest, if we backport the first
patch then I'm not hugely fussed about this one in -stable kernels simply
because I spotted it by inspection rather than from a real failure.

> As before, though, how much of patch #1 is needed if this comes first?

See my reply over there, but I think we need all of this.

Will
  

Patch

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 4485f216e620..8ec37006ac70 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1632,12 +1632,14 @@  struct page *swiotlb_alloc(struct device *dev, size_t size)
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	struct io_tlb_pool *pool;
 	phys_addr_t tlb_addr;
+	unsigned int align;
 	int index;
 
 	if (!mem)
 		return NULL;
 
-	index = swiotlb_find_slots(dev, 0, size, 0, &pool);
+	align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
+	index = swiotlb_find_slots(dev, 0, size, align, &pool);
 	if (index == -1)
 		return NULL;