Message ID | 20240104185327.177376-1-olekstysh@gmail.com |
---|---|
State | New |
Headers |
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Christian König <christian.koenig@amd.com>, Daniel Vetter <daniel@ffwll.ch>, Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] xen/gntdev: Fix the abuse of underlying struct page in DMA-buf import
Date: Thu, 4 Jan 2024 20:53:27 +0200
Message-Id: <20240104185327.177376-1-olekstysh@gmail.com> |
Series | xen/gntdev: Fix the abuse of underlying struct page in DMA-buf import |
Commit Message
Oleksandr Tyshchenko
Jan. 4, 2024, 6:53 p.m. UTC
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

DO NOT access the underlying struct page of an sg table exported
by DMA-buf in dmabuf_imp_to_refs(), this is not allowed.
Please see drivers/dma-buf/dma-buf.c:mangle_sg_table() for details.

Fortunately, here (for the special Xen device) we can avoid using
pages and calculate gfns directly from the dma addresses provided by
the sg table.

Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
Please note, I didn't manage to test the patch against the latest master
branch on real HW (it was only build tested there). The patch was tested on
Arm64 guests using Linux v5.10.41 from the vendor's BSP, as this is the
environment where this use-case can be run and to which I have access
(Xen PV display with zero-copy and the backend domain as the buffer
provider - be-alloc=1, so the dma-buf import path was involved). A little
bit old, but the dma-buf import code in gntdev-dmabuf.c hasn't changed
much since then; according to my code inspection, almost all context
remains the same.
---
---
 drivers/xen/gntdev-dmabuf.c | 42 +++++++++++++++----------------------
 1 file changed, 17 insertions(+), 25 deletions(-)
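The conversion the commit message describes - deriving grant frame numbers straight from the sg table's DMA addresses, with no struct page involved - can be sketched in plain userspace C. This is only a model: `PAGE_SHIFT` and the identity stubs below stand in for the kernel/Xen helpers `dma_to_phys()`, `bfn_to_pfn()` and `pfn_to_gfn()`, which perform platform-specific translation on real hardware.

```c
#include <stdint.h>

/* Hypothetical userspace sketch of the per-page conversion in the
 * patch: DMA address -> physical address -> frame number -> gfn.
 * The identity stubs model the trivial (direct-mapped) case only. */
#define PAGE_SHIFT 12

static uint64_t model_dma_to_phys(uint64_t dma) { return dma; } /* stub */
static uint64_t model_bfn_to_pfn(uint64_t bfn)  { return bfn; } /* stub */
static uint64_t model_pfn_to_gfn(uint64_t pfn)  { return pfn; } /* stub */

/* Mirrors the body of the for_each_sgtable_dma_page() loop in the
 * patched dmabuf_imp_to_refs(): only the DMA address reported by the
 * sg table is consulted, never a struct page. */
uint64_t dma_addr_to_gfn(uint64_t addr)
{
    uint64_t pfn = model_bfn_to_pfn(model_dma_to_phys(addr) >> PAGE_SHIFT);

    return model_pfn_to_gfn(pfn);
}
```

With the identity stubs, two consecutive 4 KiB pages yield consecutive frame numbers; on real Xen setups the bfn/pfn/gfn mappings are generally not identities.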
Comments
On Thu, 4 Jan 2024, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> DO NOT access the underlying struct page of an sg table exported
> by DMA-buf in dmabuf_imp_to_refs(), this is not allowed.
> Please see drivers/dma-buf/dma-buf.c:mangle_sg_table() for details.
>
> Fortunately, here (for special Xen device) we can avoid using
> pages and calculate gfns directly from dma addresses provided by
> the sg table.
>
> Suggested-by: Daniel Vetter <daniel@ffwll.ch>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> [quoted patch body snipped; it is identical to the diff below]
On 04.01.24 19:53, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> DO NOT access the underlying struct page of an sg table exported
> by DMA-buf in dmabuf_imp_to_refs(), this is not allowed.
> Please see drivers/dma-buf/dma-buf.c:mangle_sg_table() for details.
>
> Fortunately, here (for special Xen device) we can avoid using
> pages and calculate gfns directly from dma addresses provided by
> the sg table.
>
> Suggested-by: Daniel Vetter <daniel@ffwll.ch>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

I can't say that I can judge the full technical background, but that
looks reasonable to me.

Acked-by: Christian König <christian.koenig@amd.com>

> [quoted patch body snipped; it is identical to the diff below]
Hi Oleksandr,

kernel test robot noticed the following build warnings:

[auto build test WARNING on xen-tip/linux-next]
[also build test WARNING on linus/master v6.7-rc8 next-20240105]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Oleksandr-Tyshchenko/xen-gntdev-Fix-the-abuse-of-underlying-struct-page-in-DMA-buf-import/20240105-025516
base:   https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git linux-next
patch link:    https://lore.kernel.org/r/20240104185327.177376-1-olekstysh%40gmail.com
patch subject: [PATCH] xen/gntdev: Fix the abuse of underlying struct page in DMA-buf import
config: x86_64-allmodconfig (https://download.01.org/0day-ci/archive/20240106/202401062122.it6zvLG0-lkp@intel.com/config)
compiler: ClangBuiltLinux clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240106/202401062122.it6zvLG0-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new
version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202401062122.it6zvLG0-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/xen/gntdev-dmabuf.c:623:6: warning: variable 'ret' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
     623 |         if (!gfns)
         |             ^~~~~
   drivers/xen/gntdev-dmabuf.c:660:9: note: uninitialized use occurs here
     660 |         return ret;
         |                ^~~
   drivers/xen/gntdev-dmabuf.c:623:2: note: remove the 'if' if its condition is always false
     623 |         if (!gfns)
         |         ^~~~~~~~~~
     624 |                 goto fail_unmap;
         |                 ~~~~~~~~~~~~~~~
   drivers/xen/gntdev-dmabuf.c:569:43: note: initialize the variable 'ret' to silence this warning
     569 |         struct gntdev_dmabuf *gntdev_dmabuf, *ret;
         |                                                   ^
         |                                                    = NULL
   1 warning generated.


vim +623 drivers/xen/gntdev-dmabuf.c

   564	
   565	static struct gntdev_dmabuf *
   566	dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
   567			   int fd, int count, int domid)
   568	{
   569		struct gntdev_dmabuf *gntdev_dmabuf, *ret;
   570		struct dma_buf *dma_buf;
   571		struct dma_buf_attachment *attach;
   572		struct sg_table *sgt;
   573		struct sg_dma_page_iter sg_iter;
   574		unsigned long *gfns;
   575		int i;
   576	
   577		dma_buf = dma_buf_get(fd);
   578		if (IS_ERR(dma_buf))
   579			return ERR_CAST(dma_buf);
   580	
   581		gntdev_dmabuf = dmabuf_imp_alloc_storage(count);
   582		if (IS_ERR(gntdev_dmabuf)) {
   583			ret = gntdev_dmabuf;
   584			goto fail_put;
   585		}
   586	
   587		gntdev_dmabuf->priv = priv;
   588		gntdev_dmabuf->fd = fd;
   589	
   590		attach = dma_buf_attach(dma_buf, dev);
   591		if (IS_ERR(attach)) {
   592			ret = ERR_CAST(attach);
   593			goto fail_free_obj;
   594		}
   595	
   596		gntdev_dmabuf->u.imp.attach = attach;
   597	
   598		sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
   599		if (IS_ERR(sgt)) {
   600			ret = ERR_CAST(sgt);
   601			goto fail_detach;
   602		}
   603	
   604		/* Check that we have zero offset. */
   605		if (sgt->sgl->offset) {
   606			ret = ERR_PTR(-EINVAL);
   607			pr_debug("DMA buffer has %d bytes offset, user-space expects 0\n",
   608				 sgt->sgl->offset);
   609			goto fail_unmap;
   610		}
   611	
   612		/* Check number of pages that imported buffer has. */
   613		if (attach->dmabuf->size != gntdev_dmabuf->nr_pages << PAGE_SHIFT) {
   614			ret = ERR_PTR(-EINVAL);
   615			pr_debug("DMA buffer has %zu pages, user-space expects %d\n",
   616				 attach->dmabuf->size, gntdev_dmabuf->nr_pages);
   617			goto fail_unmap;
   618		}
   619	
   620		gntdev_dmabuf->u.imp.sgt = sgt;
   621	
   622		gfns = kcalloc(count, sizeof(*gfns), GFP_KERNEL);
 > 623		if (!gfns)
   624			goto fail_unmap;
   625	
   626		/* Now convert sgt to array of gfns without accessing underlying pages. */
   627		i = 0;
   628		for_each_sgtable_dma_page(sgt, &sg_iter, 0) {
   629			dma_addr_t addr = sg_page_iter_dma_address(&sg_iter);
   630			unsigned long pfn = bfn_to_pfn(XEN_PFN_DOWN(dma_to_phys(dev, addr)));
   631	
   632			gfns[i++] = pfn_to_gfn(pfn);
   633		}
   634	
   635		ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gfns,
   636							      gntdev_dmabuf->u.imp.refs,
   637							      count, domid));
   638		kfree(gfns);
   639		if (IS_ERR(ret))
   640			goto fail_end_access;
   641	
   642		pr_debug("Imported DMA buffer with fd %d\n", fd);
   643	
   644		mutex_lock(&priv->lock);
   645		list_add(&gntdev_dmabuf->next, &priv->imp_list);
   646		mutex_unlock(&priv->lock);
   647	
   648		return gntdev_dmabuf;
   649	
   650	fail_end_access:
   651		dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs, count);
   652	fail_unmap:
   653		dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
   654	fail_detach:
   655		dma_buf_detach(dma_buf, attach);
   656	fail_free_obj:
   657		dmabuf_imp_free_storage(gntdev_dmabuf);
   658	fail_put:
   659		dma_buf_put(dma_buf);
   660		return ret;
   661	}
   662	
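The warning boils down to an error path that jumps to the cleanup label without assigning `ret` first. A minimal userspace model of the pattern, with the missing assignment included as the fix, might look like the following; the function name and the cast-an-errno-into-the-pointer encoding are illustrative stand-ins, not the driver's real code.

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Userspace model of the -Wsometimes-uninitialized report above: if the
 * allocation fails and we goto the cleanup label, 'ret' must already
 * hold an error value, or the function returns garbage. */
void *imp_alloc_model(size_t count, int simulate_failure)
{
    void *ret;
    unsigned long *gfns = NULL;

    if (!simulate_failure)
        gfns = calloc(count, sizeof(*gfns));
    if (!gfns) {
        /* Without this assignment, 'ret' is returned uninitialized -
         * exactly what clang flags at line 623/660 above. */
        ret = (void *)(long)-ENOMEM;
        goto fail;
    }

    free(gfns);
    return NULL; /* success */
fail:
    return ret;
}
```

An equivalent fix, also suggested by the compiler note, is initializing `ret` at its declaration; assigning at the failure site keeps the error code specific.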
diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index 4440e626b797..0dde49fca9a5 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -11,6 +11,7 @@
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-direct.h>
 #include <linux/slab.h>
 #include <linux/types.h>
 #include <linux/uaccess.h>
@@ -50,7 +51,7 @@ struct gntdev_dmabuf {
 
 	/* Number of pages this buffer has. */
 	int nr_pages;
-	/* Pages of this buffer. */
+	/* Pages of this buffer (only for dma-buf export). */
 	struct page **pages;
 };
 
@@ -484,7 +485,7 @@ static int dmabuf_exp_from_refs(struct gntdev_priv *priv, int flags,
 /* DMA buffer import support. */
 
 static int
-dmabuf_imp_grant_foreign_access(struct page **pages, u32 *refs,
+dmabuf_imp_grant_foreign_access(unsigned long *gfns, u32 *refs,
 				int count, int domid)
 {
 	grant_ref_t priv_gref_head;
@@ -507,7 +508,7 @@ dmabuf_imp_grant_foreign_access(struct page **pages, u32 *refs,
 		}
 
 		gnttab_grant_foreign_access_ref(cur_ref, domid,
-						xen_page_to_gfn(pages[i]), 0);
+						gfns[i], 0);
 		refs[i] = cur_ref;
 	}
 
@@ -529,7 +530,6 @@ static void dmabuf_imp_end_foreign_access(u32 *refs, int count)
 
 static void dmabuf_imp_free_storage(struct gntdev_dmabuf *gntdev_dmabuf)
 {
-	kfree(gntdev_dmabuf->pages);
 	kfree(gntdev_dmabuf->u.imp.refs);
 	kfree(gntdev_dmabuf);
 }
@@ -549,12 +549,6 @@ static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
 	if (!gntdev_dmabuf->u.imp.refs)
 		goto fail;
 
-	gntdev_dmabuf->pages = kcalloc(count,
-				       sizeof(gntdev_dmabuf->pages[0]),
-				       GFP_KERNEL);
-	if (!gntdev_dmabuf->pages)
-		goto fail;
-
 	gntdev_dmabuf->nr_pages = count;
 
 	for (i = 0; i < count; i++)
@@ -576,7 +570,8 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
 	struct dma_buf *dma_buf;
 	struct dma_buf_attachment *attach;
 	struct sg_table *sgt;
-	struct sg_page_iter sg_iter;
+	struct sg_dma_page_iter sg_iter;
+	unsigned long *gfns;
 	int i;
 
 	dma_buf = dma_buf_get(fd);
@@ -624,26 +619,23 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
 
 	gntdev_dmabuf->u.imp.sgt = sgt;
 
-	/* Now convert sgt to array of pages and check for page validity. */
+	gfns = kcalloc(count, sizeof(*gfns), GFP_KERNEL);
+	if (!gfns)
+		goto fail_unmap;
+
+	/* Now convert sgt to array of gfns without accessing underlying pages. */
 	i = 0;
-	for_each_sgtable_page(sgt, &sg_iter, 0) {
-		struct page *page = sg_page_iter_page(&sg_iter);
-		/*
-		 * Check if page is valid: this can happen if we are given
-		 * a page from VRAM or other resources which are not backed
-		 * by a struct page.
-		 */
-		if (!pfn_valid(page_to_pfn(page))) {
-			ret = ERR_PTR(-EINVAL);
-			goto fail_unmap;
-		}
+	for_each_sgtable_dma_page(sgt, &sg_iter, 0) {
+		dma_addr_t addr = sg_page_iter_dma_address(&sg_iter);
+		unsigned long pfn = bfn_to_pfn(XEN_PFN_DOWN(dma_to_phys(dev, addr)));
 
-		gntdev_dmabuf->pages[i++] = page;
+		gfns[i++] = pfn_to_gfn(pfn);
 	}
 
-	ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gntdev_dmabuf->pages,
+	ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gfns,
 						      gntdev_dmabuf->u.imp.refs,
 						      count, domid));
+	kfree(gfns);
 	if (IS_ERR(ret))
 		goto fail_end_access;
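The error handling in the function above relies on the kernel's pointer-or-error convention, where small negative errno codes are encoded directly into the returned pointer. A self-contained userspace sketch of that convention follows; `MAX_ERRNO` matches the kernel's constant, but the implementations are simplified models of the real `linux/err.h` helpers, not copies of them.

```c
/* Simplified model of the kernel's ERR_PTR/PTR_ERR/IS_ERR helpers:
 * the top MAX_ERRNO addresses of the address space are reserved to
 * encode -errno values, so one return value can carry either a valid
 * pointer or an error, as in dmabuf_imp_to_refs() above. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
    return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
    return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
    /* True only for pointers in the reserved top MAX_ERRNO range. */
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

This is why `ret = ERR_PTR(dmabuf_imp_grant_foreign_access(...))` works: a zero (success) result becomes `NULL`-like non-error storage to be overwritten, while a negative errno becomes a pointer that `IS_ERR()` recognizes and the caller can decode with `PTR_ERR()`.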