[v2,2/3] swiotlb: Enforce page alignment in swiotlb_alloc()

Message ID 20240131122543.14791-3-will@kernel.org
State New
Series Fix double allocation in swiotlb_alloc()

Commit Message

Will Deacon Jan. 31, 2024, 12:25 p.m. UTC
  When allocating pages from a restricted DMA pool in swiotlb_alloc(),
the buffer address is blindly converted to a 'struct page *' that is
returned to the caller. In the unlikely event of an allocation bug,
page-unaligned addresses are not detected and slots can silently be
double-allocated.

Add a simple check of the buffer alignment in swiotlb_alloc() to make
debugging a little easier if something has gone wonky.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Cc: Dexuan Cui <decui@microsoft.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 6 ++++++
 1 file changed, 6 insertions(+)
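
For context on the failure mode: swiotlb slots are IO_TLB_SIZE (2KiB) each, so slot_addr() only returns a page-aligned address for suitably even indices on a 4KiB-page system. A rough sketch of the arithmetic behind the "silently double-allocated" claim:

	/*
	 * slot_addr() in kernel/dma/swiotlb.c:
	 *
	 *	#define slot_addr(start, idx)	((start) + ((idx) << IO_TLB_SHIFT))
	 *
	 * with IO_TLB_SHIFT == 11 (2KiB slots). On a 4KiB-page system, an odd
	 * index yields e.g. pool->start + 0x800; PFN_DOWN() then rounds that
	 * down into the page containing the *previous* slot, so the returned
	 * struct page covers memory the allocator may still hand out again:
	 * the silent double allocation described above.
	 */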
  

Comments

Robin Murphy Jan. 31, 2024, 3:14 p.m. UTC | #1
On 31/01/2024 12:25 pm, Will Deacon wrote:
> When allocating pages from a restricted DMA pool in swiotlb_alloc(),
> the buffer address is blindly converted to a 'struct page *' that is
> returned to the caller. In the unlikely event of an allocation bug,
> page-unaligned addresses are not detected and slots can silently be
> double-allocated.
> 
> Add a simple check of the buffer alignment in swiotlb_alloc() to make
> debugging a little easier if something has gone wonky.
> 
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
> Cc: Dexuan Cui <decui@microsoft.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>   kernel/dma/swiotlb.c | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 56cc08b1fbd6..4485f216e620 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
>   		return NULL;
>   
>   	tlb_addr = slot_addr(pool->start, index);
> +	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
> +		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
> +			      &tlb_addr);

Nit: if there's cause for another respin, I'd be inclined to use a 
straightforward "if (WARN_ON(...))" here - this condition should 
represent SWIOTLB itself going badly wrong, which isn't really 
attributable to whatever device happened to be involved in the call.

Cheers,
Robin.

> +		swiotlb_release_slots(dev, tlb_addr);
> +		return NULL;
> +	}
>   
>   	return pfn_to_page(PFN_DOWN(tlb_addr));
>   }
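
For illustration, the shape Robin suggests would be roughly the following (a sketch, not a posted patch; it relies on WARN_ON() evaluating to its condition):

	if (WARN_ON(!PAGE_ALIGNED(tlb_addr))) {
		swiotlb_release_slots(dev, tlb_addr);
		return NULL;
	}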
  
Will Deacon Feb. 1, 2024, 12:48 p.m. UTC | #2
On Wed, Jan 31, 2024 at 03:14:18PM +0000, Robin Murphy wrote:
> On 31/01/2024 12:25 pm, Will Deacon wrote:
> > When allocating pages from a restricted DMA pool in swiotlb_alloc(),
> > the buffer address is blindly converted to a 'struct page *' that is
> > returned to the caller. In the unlikely event of an allocation bug,
> > page-unaligned addresses are not detected and slots can silently be
> > double-allocated.
> > 
> > Add a simple check of the buffer alignment in swiotlb_alloc() to make
> > debugging a little easier if something has gone wonky.
> > 
> > Cc: Christoph Hellwig <hch@lst.de>
> > Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> > Cc: Robin Murphy <robin.murphy@arm.com>
> > Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
> > Cc: Dexuan Cui <decui@microsoft.com>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >   kernel/dma/swiotlb.c | 6 ++++++
> >   1 file changed, 6 insertions(+)
> > 
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index 56cc08b1fbd6..4485f216e620 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
> >   		return NULL;
> > 
> >   	tlb_addr = slot_addr(pool->start, index);
> > +	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
> > +		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
> > +			      &tlb_addr);
> 
> Nit: if there's cause for another respin, I'd be inclined to use a
> straightforward "if (WARN_ON(...))" here - this condition should represent
> SWIOTLB itself going badly wrong, which isn't really attributable to
> whatever device happened to be involved in the call.

Well, there'll definitely be a v3 thanks to my idiotic dropping of the
'continue' statement when I reworked the searching loop for v2.

However, given that we're returning NULL, I think printing the device is
helpful as we're likely to cause some horrible error (e.g. probe failure)
in the caller and then it will be obvious why that happened from looking
at the logs. So I'd prefer to keep it unless you insist.

Cheers,

Will
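
To see why the device name helps here, consider a hypothetical consumer (names below are illustrative, not from the series): dma_alloc_coherent() on a device with a restricted DMA pool ends up in swiotlb_alloc(), and a NULL return surfaces as a bare -ENOMEM in the driver.

	/* Hypothetical driver probe; the caller only ever sees -ENOMEM.
	 * The dev_WARN_ONCE() naming the device is what ties this probe
	 * failure back to the misaligned swiotlb slot in the log. */
	static int foo_probe(struct platform_device *pdev)
	{
		dma_addr_t dma;
		void *buf;

		buf = dma_alloc_coherent(&pdev->dev, PAGE_SIZE, &dma,
					 GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* ... */
		return 0;
	}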
  
Robin Murphy Feb. 1, 2024, 1:53 p.m. UTC | #3
On 01/02/2024 12:48 pm, Will Deacon wrote:
> On Wed, Jan 31, 2024 at 03:14:18PM +0000, Robin Murphy wrote:
>> On 31/01/2024 12:25 pm, Will Deacon wrote:
>>> When allocating pages from a restricted DMA pool in swiotlb_alloc(),
>>> the buffer address is blindly converted to a 'struct page *' that is
>>> returned to the caller. In the unlikely event of an allocation bug,
>>> page-unaligned addresses are not detected and slots can silently be
>>> double-allocated.
>>>
>>> Add a simple check of the buffer alignment in swiotlb_alloc() to make
>>> debugging a little easier if something has gone wonky.
>>>
>>> Cc: Christoph Hellwig <hch@lst.de>
>>> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
>>> Cc: Robin Murphy <robin.murphy@arm.com>
>>> Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
>>> Cc: Dexuan Cui <decui@microsoft.com>
>>> Signed-off-by: Will Deacon <will@kernel.org>
>>> ---
>>>    kernel/dma/swiotlb.c | 6 ++++++
>>>    1 file changed, 6 insertions(+)
>>>
>>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>>> index 56cc08b1fbd6..4485f216e620 100644
>>> --- a/kernel/dma/swiotlb.c
>>> +++ b/kernel/dma/swiotlb.c
>>> @@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
>>>    		return NULL;
>>> 
>>>    	tlb_addr = slot_addr(pool->start, index);
>>> +	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
>>> +		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
>>> +			      &tlb_addr);
>>
>> Nit: if there's cause for another respin, I'd be inclined to use a
>> straightforward "if (WARN_ON(...))" here - this condition should represent
>> SWIOTLB itself going badly wrong, which isn't really attributable to
>> whatever device happened to be involved in the call.
> 
> Well, there'll definitely be a v3 thanks to my idiotic dropping of the
> 'continue' statement when I reworked the searching loop for v2.
> 
> However, given that we're returning NULL, I think printing the device is
> helpful as we're likely to cause some horrible error (e.g. probe failure)
> in the caller and then it will be obvious why that happened from looking
> at the logs. So I'd prefer to keep it unless you insist.

No, helping to clarify any knock-on effects sounds like a reasonable 
justification to me - I hadn't really considered that angle. I'd still 
be inclined to condense it down to "if (dev_WARN_ONCE(...))" to minimise 
redundancy, but that's really just a matter of personal preference.

Cheers,
Robin.
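
Robin's condensed variant would read roughly like this (a sketch; like WARN_ON(), dev_WARN_ONCE() evaluates to its condition, so it can sit directly in the if()):

	if (dev_WARN_ONCE(dev, !PAGE_ALIGNED(tlb_addr),
			  "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
			  &tlb_addr)) {
		swiotlb_release_slots(dev, tlb_addr);
		return NULL;
	}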
  

Patch

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 56cc08b1fbd6..4485f216e620 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
 		return NULL;
 
 	tlb_addr = slot_addr(pool->start, index);
+	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
+		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
+			      &tlb_addr);
+		swiotlb_release_slots(dev, tlb_addr);
+		return NULL;
+	}
 
 	return pfn_to_page(PFN_DOWN(tlb_addr));
 }
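
For reference, PAGE_ALIGNED() is the generic helper rather than anything swiotlb-specific; from memory of include/linux/mm.h and include/linux/align.h (exact location may vary by kernel version), the check expands roughly to:

	#define IS_ALIGNED(x, a)	(((x) & ((typeof(x))(a) - 1)) == 0)
	#define PAGE_ALIGNED(addr)	IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)

i.e. the low PAGE_SHIFT bits of tlb_addr must all be zero, which holds only when the allocated slot sits at the start of a page.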