Message ID: <20240131122543.14791-3-will@kernel.org>
State: New
Headers:
From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon <will@kernel.org>, iommu@lists.linux.dev, Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, Robin Murphy <robin.murphy@arm.com>, Petr Tesarik <petr.tesarik1@huawei-partners.com>, Dexuan Cui <decui@microsoft.com>
Subject: [PATCH v2 2/3] swiotlb: Enforce page alignment in swiotlb_alloc()
Date: Wed, 31 Jan 2024 12:25:42 +0000
Message-Id: <20240131122543.14791-3-will@kernel.org>
In-Reply-To: <20240131122543.14791-1-will@kernel.org>
References: <20240131122543.14791-1-will@kernel.org>
Series: Fix double allocation in swiotlb_alloc()
Commit Message
Will Deacon
Jan. 31, 2024, 12:25 p.m. UTC
When allocating pages from a restricted DMA pool in swiotlb_alloc(),
the buffer address is blindly converted to a 'struct page *' that is
returned to the caller. In the unlikely event of an allocation bug,
page-unaligned addresses are not detected and slots can silently be
double-allocated.
Add a simple check of the buffer alignment in swiotlb_alloc() to make
debugging a little easier if something has gone wonky.
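For reference, the check the patch adds boils down to testing whether the low page-offset bits of the slot address are all zero. A minimal userspace sketch of that test (assuming the common 4 KiB page size; the kernel's PAGE_ALIGNED() derives these constants from PAGE_SHIFT, so the values below are illustrative only):

```c
#include <stdint.h>

/* Illustrative values: the kernel derives PAGE_SIZE/PAGE_MASK from
 * PAGE_SHIFT; 4 KiB pages are merely the common configuration. */
#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Stand-in for the kernel's PAGE_ALIGNED(): true iff addr sits exactly
 * on a page boundary, i.e. its low 12 bits are all zero. */
static int page_aligned(uintptr_t addr)
{
	return (addr & ~PAGE_MASK) == 0;
}
```

So a slot address like 0x1800, which lands mid-page, is exactly the kind of value the new check would catch, while 0x1000 passes through unnoticed.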
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Cc: Dexuan Cui <decui@microsoft.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
kernel/dma/swiotlb.c | 6 ++++++
1 file changed, 6 insertions(+)
Comments
On 31/01/2024 12:25 pm, Will Deacon wrote:
> When allocating pages from a restricted DMA pool in swiotlb_alloc(),
> the buffer address is blindly converted to a 'struct page *' that is
> returned to the caller. In the unlikely event of an allocation bug,
> page-unaligned addresses are not detected and slots can silently be
> double-allocated.
>
> Add a simple check of the buffer alignment in swiotlb_alloc() to make
> debugging a little easier if something has gone wonky.
>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Petr Tesarik <petr.tesarik1@huawei-partners.com>
> Cc: Dexuan Cui <decui@microsoft.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  kernel/dma/swiotlb.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 56cc08b1fbd6..4485f216e620 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
>  		return NULL;
>
>  	tlb_addr = slot_addr(pool->start, index);
> +	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
> +		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
> +			      &tlb_addr);

Nit: if there's cause for another respin, I'd be inclined to use a
straightforward "if (WARN_ON(...))" here - this condition should
represent SWIOTLB itself going badly wrong, which isn't really
attributable to whatever device happened to be involved in the call.

Cheers,
Robin.

> +		swiotlb_release_slots(dev, tlb_addr);
> +		return NULL;
> +	}
>
> 	return pfn_to_page(PFN_DOWN(tlb_addr));
> }
On Wed, Jan 31, 2024 at 03:14:18PM +0000, Robin Murphy wrote:
> On 31/01/2024 12:25 pm, Will Deacon wrote:
> > When allocating pages from a restricted DMA pool in swiotlb_alloc(),
> > the buffer address is blindly converted to a 'struct page *' that is
> > returned to the caller. In the unlikely event of an allocation bug,
> > page-unaligned addresses are not detected and slots can silently be
> > double-allocated.
[...]
> > +	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
> > +		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
> > +			      &tlb_addr);
>
> Nit: if there's cause for another respin, I'd be inclined to use a
> straightforward "if (WARN_ON(...))" here - this condition should
> represent SWIOTLB itself going badly wrong, which isn't really
> attributable to whatever device happened to be involved in the call.

Well, there'll definitely be a v3 thanks to my idiotic dropping of the
'continue' statement when I reworked the searching loop for v2.

However, given that we're returning NULL, I think printing the device is
helpful as we're likely to cause some horrible error (e.g. probe failure)
in the caller and then it will be obvious why that happened from looking
at the logs. So I'd prefer to keep it unless you insist.

Cheers,
Will
On 01/02/2024 12:48 pm, Will Deacon wrote:
> On Wed, Jan 31, 2024 at 03:14:18PM +0000, Robin Murphy wrote:
> > On 31/01/2024 12:25 pm, Will Deacon wrote:
> > > When allocating pages from a restricted DMA pool in swiotlb_alloc(),
> > > the buffer address is blindly converted to a 'struct page *' that is
> > > returned to the caller. In the unlikely event of an allocation bug,
> > > page-unaligned addresses are not detected and slots can silently be
> > > double-allocated.
[...]
> > Nit: if there's cause for another respin, I'd be inclined to use a
> > straightforward "if (WARN_ON(...))" here - this condition should
> > represent SWIOTLB itself going badly wrong, which isn't really
> > attributable to whatever device happened to be involved in the call.
>
> Well, there'll definitely be a v3 thanks to my idiotic dropping of the
> 'continue' statement when I reworked the searching loop for v2.
>
> However, given that we're returning NULL, I think printing the device is
> helpful as we're likely to cause some horrible error (e.g. probe failure)
> in the caller and then it will be obvious why that happened from looking
> at the logs. So I'd prefer to keep it unless you insist.

No, helping to clarify any knock-on effects sounds like a reasonable
justification to me - I hadn't really considered that angle. I'd still
be inclined to condense it down to "if (dev_WARN_ONCE(...))" to
minimise redundancy, but that's really just a matter of personal
preference.

Cheers,
Robin.
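As an aside on why Robin's condensed form works at all: the kernel's WARN_ON()/WARN_ONCE()/dev_WARN_ONCE() macros evaluate to the condition they test, so the warning can sit directly inside an if (). A hypothetical userspace sketch of that warn-once-and-return-condition idiom (warn_once and alloc_page_checked are illustrative names, not the kernel macros, and the GNU statement-expression syntax mirrors kernel style):

```c
#include <stdio.h>
#include <stddef.h>

/* Userspace stand-in for the kernel's warn-once idiom: fires the warning
 * at most once, and evaluates to the tested condition so it can be used
 * directly as an if () condition. */
#define warn_once(cond, msg)						\
	({								\
		static int __warned;					\
		int __cond = !!(cond);					\
		if (__cond && !__warned) {				\
			__warned = 1;					\
			fprintf(stderr, "WARNING: %s\n", (msg));	\
		}							\
		__cond;							\
	})

/* Sketch of the condensed shape suggested for swiotlb_alloc(): check,
 * warning, and bail-out collapse into a single statement. */
static const char *alloc_page_checked(unsigned long tlb_addr)
{
	if (warn_once(tlb_addr & 0xfffUL, "non page-aligned swiotlb addr")) {
		/* the real code would also call swiotlb_release_slots() */
		return NULL;
	}
	return "page";
}
```

With that shape, a misaligned address both logs the warning and makes the function return NULL in one expression, which is the redundancy reduction Robin is after.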
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 56cc08b1fbd6..4485f216e620 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
 		return NULL;
 
 	tlb_addr = slot_addr(pool->start, index);
+	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
+		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
+			      &tlb_addr);
+		swiotlb_release_slots(dev, tlb_addr);
+		return NULL;
+	}
 
 	return pfn_to_page(PFN_DOWN(tlb_addr));
 }