Message ID | 20230519065843.10653-1-yan.y.zhao@intel.com |
---|---|
State | New |
Headers |
From: Yan Zhao <yan.y.zhao@intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: alex.williamson@redhat.com, kevin.tian@intel.com, jgg@nvidia.com, Yan Zhao <yan.y.zhao@intel.com>, Sean Christopherson <seanjc@google.com>
Subject: [PATCH v2] vfio/type1: check pfn valid before converting to struct page
Date: Fri, 19 May 2023 14:58:43 +0800
Message-Id: <20230519065843.10653-1-yan.y.zhao@intel.com> |
Series |
[v2] vfio/type1: check pfn valid before converting to struct page
|
|
Commit Message
Yan Zhao
May 19, 2023, 6:58 a.m. UTC
Check that the physical PFN is valid before converting it to a struct
page pointer to be returned to the caller of vfio_pin_pages().

vfio_pin_pages() pins user pages with contiguous IOVAs. If the IOVA of
a user page to be pinned belongs to a vma with vm_flags VM_PFNMAP,
pin_user_pages_remote() will return -EFAULT without returning a struct
page address for this PFN. This is because this kind of PFN (e.g. an
MMIO PFN) usually has no valid struct page associated with it. Upon
this error, vaddr_get_pfns() will obtain the physical PFN directly.

While vfio_pin_pages() previously returned PFN arrays to the caller
directly, after commit 34a255e67615 ("vfio: Replace phys_pfn with pages
for vfio_pin_pages()"), PFNs are converted to "struct page *"
unconditionally, and therefore the returned "struct page *" array may
contain invalid struct page addresses.

Given that current in-tree users of vfio_pin_pages() only expect
"struct page *" to be returned, check PFN validity and return -EINVAL
to make the caller aware that the IOVAs to be pinned contain PFNs that
cannot be returned in the "struct page *" array. This way, the caller
will not consume the returned pointer (e.g. test PageReserved()) and
will avoid errors like "supervisor read access in kernel mode".

Fixes: 34a255e67615 ("vfio: Replace phys_pfn with pages for vfio_pin_pages()")
Cc: Sean Christopherson <seanjc@google.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
v2: update commit message to explain background/problem clearly. (Sean)
---
 drivers/vfio/vfio_iommu_type1.c | 5 +++++
 1 file changed, 5 insertions(+)

base-commit: b3c98052d46948a8d65d2778c7f306ff38366aac
Comments
On Fri, May 19, 2023, Yan Zhao wrote:
> Check physical PFN is valid before converting the PFN to a struct page
> pointer to be returned to caller of vfio_pin_pages().
>
> [...]
>
> Fixes: 34a255e67615 ("vfio: Replace phys_pfn with pages for vfio_pin_pages()")
> Cc: Sean Christopherson <seanjc@google.com>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>

Reviewed-by: Sean Christopherson <seanjc@google.com>
On Fri, 19 May 2023 14:58:43 +0800
Yan Zhao <yan.y.zhao@intel.com> wrote:

> Check physical PFN is valid before converting the PFN to a struct page
> pointer to be returned to caller of vfio_pin_pages().
>
> [...]
>
> @@ -860,6 +860,11 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  		if (ret)
>  			goto pin_unwind;
>
> +		if (!pfn_valid(phys_pfn)) {
> +			ret = -EINVAL;
> +			goto pin_unwind;
> +		}
> +
> [...]

Why wouldn't we use our is_invalid_reserved_pfn() test here?  Doing
so would also make it more consistent why we don't need to call
put_pfn() or rewind accounting for this page.  Thanks,

Alex
On Mon, May 22, 2023 at 01:00:30PM -0600, Alex Williamson wrote:
> On Fri, 19 May 2023 14:58:43 +0800
> Yan Zhao <yan.y.zhao@intel.com> wrote:
>
> > [...]
> > +		if (!pfn_valid(phys_pfn)) {
>
> Why wouldn't we use our is_invalid_reserved_pfn() test here?  Doing
> so would also make it more consistent why we don't need to call
> put_pfn() or rewind accounting for this page.  Thanks,
>
I actually struggled in choosing between is_invalid_reserved_pfn() and
pfn_valid() when writing this patch.

I chose pfn_valid() because an invalid PFN obviously cannot have a
struct page address, and this is a bug fix.

Declining reserved pages, on the other hand, would reduce the IOVA
range supported by vfio_pin_pages() even further. So I don't know if
there's enough justification to do so, given that (1) device zone
memory usually has PG_reserved set, and (2) vm_normal_page() also
returns reserved pages.

Thanks
Yan
On Tue, 23 May 2023 13:48:22 +0800
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Mon, May 22, 2023 at 01:00:30PM -0600, Alex Williamson wrote:
> > [...]
> > Why wouldn't we use our is_invalid_reserved_pfn() test here?  Doing
> > so would also make it more consistent why we don't need to call
> > put_pfn() or rewind accounting for this page.  Thanks,
> >
> I actually struggled in choosing between is_invalid_reserved_pfn() and
> pfn_valid() when writing this patch.
>
> [...]

Based on the exclusion we have in vaddr_get_pfn() where we unpin
zero-page pfns because they hit on the is_invalid_reserved_pfn() test
and break our accounting otherwise, this does seem like the correct
choice.  I can imagine a scenario where the device wants to do a DMA
read from VM memory backed by the zero page.  Ok.  Thanks,

Alex
On Fri, 19 May 2023 14:58:43 +0800
Yan Zhao <yan.y.zhao@intel.com> wrote:

> Check physical PFN is valid before converting the PFN to a struct page
> pointer to be returned to caller of vfio_pin_pages().
>
> [...]
>
> base-commit: b3c98052d46948a8d65d2778c7f306ff38366aac

Applied to vfio for-linus branch for v6.4.  Thanks!

Alex
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 493c31de0edb..0620dbe5cca0 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -860,6 +860,11 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 		if (ret)
 			goto pin_unwind;
 
+		if (!pfn_valid(phys_pfn)) {
+			ret = -EINVAL;
+			goto pin_unwind;
+		}
+
 		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn);
 		if (ret) {
 			if (put_pfn(phys_pfn, dma->prot) && do_accounting)