Message ID | 20230104154202.1152198-2-schnelle@linux.ibm.com |
---|---|
State | New |
Headers | show |
Series | vfio/type1: Fix vfio-pci pass-through of ISM devices |
Commit Message
Niklas Schnelle
Jan. 4, 2023, 3:42 p.m. UTC
Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain
aperture shrinking") the s390 IOMMU driver uses reserved regions for the
system-provided DMA ranges of PCI devices. Previously it reduced the
size of the IOMMU aperture and checked it on each mapping operation.
On current machines the system denies use of DMA addresses below 2^32 for
all PCI devices.
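
For background, the sketch below shows roughly how an IOMMU driver can publish such a firmware-mandated DMA restriction as a reserved region. This is illustrative only, not the actual s390 driver code, and the below-4G range is an assumption matching the machines described above.

```c
#include <linux/iommu.h>
#include <linux/sizes.h>

/*
 * Illustrative sketch (not the literal s390 code): report the DMA
 * range below 4G, assumed forbidden by the machine, as a reserved
 * region so the IOMMU core and its users can avoid it.
 */
static void example_get_resv_regions(struct device *dev,
				     struct list_head *head)
{
	struct iommu_resv_region *region;

	region = iommu_alloc_resv_region(0, SZ_4G, 0, IOMMU_RESV_RESERVED,
					 GFP_KERNEL);
	if (!region)
		return;

	list_add_tail(&region->list, head);
}
```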
Usually, mapping IOVAs in a reserved region is harmless until a DMA
actually tries to utilize the mapping. However, on s390 there is
a virtual PCI device called ISM which is implemented in firmware and
used for cross-LPAR communication. Unlike real PCI devices, this device
does not use the hardware IOMMU but inspects IOMMU translation tables
directly on IOTLB flush (s390 RPCIT instruction). If it detects IOVA
mappings outside the allowed ranges it goes into an error state. This
error state then causes the device to be unavailable to the KVM guest.
Analysing this we found that vfio_test_domain_fgsp() maps 2 pages at DMA
address 0 irrespective of the IOMMU's reserved regions. Even if usually
harmless, this seems wrong in the general case, so instead go through the
freshly updated IOVA list and try to find a range that isn't reserved,
fits 2 pages, and is PAGE_SIZE * 2 aligned. If found, use that range for
testing for fine-grained super pages.
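
For context, the IOVA list mentioned above is a list of entries each describing one usable (non-reserved) IOVA range, roughly as defined in drivers/vfio/vfio_iommu_type1.c:

```c
struct vfio_iova {
	struct list_head	list;	/* link in the per-iommu IOVA list */
	dma_addr_t		start;	/* first usable IOVA in this range */
	dma_addr_t		end;	/* last usable IOVA in this range */
};
```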
Fixes: 6fe1010d6d9c ("vfio/type1: DMA unmap chunking")
Reported-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
v1 -> v2:
- Reworded commit message to hopefully explain things a bit better and
highlight that usually just mapping but not issuing DMAs for IOVAs in
a reserved region is harmless but still breaks things with ISM devices.
- Added a check for PAGE_SIZE * 2 alignment (Jason)
drivers/vfio/vfio_iommu_type1.c | 30 +++++++++++++++++++-----------
1 file changed, 19 insertions(+), 11 deletions(-)
Comments
On 1/4/23 10:42 AM, Niklas Schnelle wrote:
> Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain
> aperture shrinking") the s390 IOMMU driver uses reserved regions for the
> system provided DMA ranges of PCI devices. Previously it reduced the
> size of the IOMMU aperture and checked it on each mapping operation.
> On current machines the system denies use of DMA addresses below 2^32 for
> all PCI devices.
>
> Usually mapping IOVAs in a reserved regions is harmless until a DMA
> actually tries to utilize the mapping. However on s390 there is
> a virtual PCI device called ISM which is implemented in firmware and
> used for cross LPAR communication. Unlike real PCI devices this device
> does not use the hardware IOMMU but inspects IOMMU translation tables
> directly on IOTLB flush (s390 RPCIT instruction). If it detects IOVA
> mappings outside the allowed ranges it goes into an error state. This
> error state then causes the device to be unavailable to the KVM guest.
>
> Analysing this we found that vfio_test_domain_fgsp() maps 2 pages at DMA
> address 0 irrespective of the IOMMUs reserved regions. Even if usually
> harmless this seems wrong in the general case so instead go through the
> freshly updated IOVA list and try to find a range that isn't reserved,
> and fits 2 pages, is PAGE_SIZE * 2 aligned. If found use that for
> testing for fine grained super pages.
>
> Fixes: 6fe1010d6d9c ("vfio/type1: DMA unmap chunking")
> Reported-by: Matthew Rosato <mjrosato@linux.ibm.com>
> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>

Thanks, this fixes the issue I'm seeing with ISM.

Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
On Wed, 4 Jan 2023 16:42:02 +0100
Niklas Schnelle <schnelle@linux.ibm.com> wrote:

> Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain
> aperture shrinking") the s390 IOMMU driver uses reserved regions for the

Are you asking for this in v6.2?  Seems like the above was introduced
in v6.2 and I can't tell if this is sufficiently prevalent that we need
a fix in the same release.

> system provided DMA ranges of PCI devices. Previously it reduced the
> size of the IOMMU aperture and checked it on each mapping operation.
> On current machines the system denies use of DMA addresses below 2^32 for
> all PCI devices.
>
> Usually mapping IOVAs in a reserved regions is harmless until a DMA
> actually tries to utilize the mapping. However on s390 there is
> a virtual PCI device called ISM which is implemented in firmware and
> used for cross LPAR communication. Unlike real PCI devices this device
> does not use the hardware IOMMU but inspects IOMMU translation tables
> directly on IOTLB flush (s390 RPCIT instruction). If it detects IOVA
> mappings outside the allowed ranges it goes into an error state. This
> error state then causes the device to be unavailable to the KVM guest.
>
> Analysing this we found that vfio_test_domain_fgsp() maps 2 pages at DMA
> address 0 irrespective of the IOMMUs reserved regions. Even if usually
> harmless this seems wrong in the general case so instead go through the
> freshly updated IOVA list and try to find a range that isn't reserved,
> and fits 2 pages, is PAGE_SIZE * 2 aligned. If found use that for
> testing for fine grained super pages.
>
> Fixes: 6fe1010d6d9c ("vfio/type1: DMA unmap chunking")

Nit, the above patch pre-dates any notion of reserved regions, so isn't
this actually fixing the implementation of reserved regions in type1 to
include this test?  ie.

Fixes: af029169b8fd ("vfio/type1: Check reserved region conflict and update iovalist")

> Reported-by: Matthew Rosato <mjrosato@linux.ibm.com>
> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
> ---
> v1 -> v2:
> - Reworded commit message to hopefully explain things a bit better and
>   highlight that usually just mapping but not issuing DMAs for IOVAs in
>   a resverved region is harmless but still breaks things with ISM devices.
> - Added a check for PAGE_SIZE * 2 alignment (Jason)
>
> drivers/vfio/vfio_iommu_type1.c | 30 +++++++++++++++++++-----------
> 1 file changed, 19 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 23c24fe98c00..87b27ffb93d0 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -1856,24 +1856,32 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
>   * significantly boosts non-hugetlbfs mappings and doesn't seem to hurt when
>   * hugetlbfs is in use.
>   */
> -static void vfio_test_domain_fgsp(struct vfio_domain *domain)
> +static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *regions)
>  {
> -	struct page *pages;
>  	int ret, order = get_order(PAGE_SIZE * 2);
> +	struct vfio_iova *region;
> +	struct page *pages;
>
>  	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
>  	if (!pages)
>  		return;
>
> -	ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
> -			IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
> -	if (!ret) {
> -		size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
> +	list_for_each_entry(region, regions, list) {
> +		if (region->end - region->start < PAGE_SIZE * 2 ||
> +		    region->start % (PAGE_SIZE*2))

Maybe this falls into the noise, but we don't care if region->start is
aligned to a double page, so long as we can map an aligned double page
within the region.  Maybe something like:

	dma_addr_t start = ALIGN(region->start, PAGE_SIZE * 2);

	if (start >= region->end || (region->end - start < PAGE_SIZE * 2))
		continue;

s/region->// for below if so.  Thanks,

Alex

> +			continue;
>
> -		if (unmapped == PAGE_SIZE)
> -			iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE);
> -		else
> -			domain->fgsp = true;
> +		ret = iommu_map(domain->domain, region->start, page_to_phys(pages), PAGE_SIZE * 2,
> +				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
> +		if (!ret) {
> +			size_t unmapped = iommu_unmap(domain->domain, region->start, PAGE_SIZE);
> +
> +			if (unmapped == PAGE_SIZE)
> +				iommu_unmap(domain->domain, region->start + PAGE_SIZE, PAGE_SIZE);
> +			else
> +				domain->fgsp = true;
> +		}
> +		break;
>  	}
>
>  	__free_pages(pages, order);
> @@ -2326,7 +2334,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  		}
>  	}
>
> -	vfio_test_domain_fgsp(domain);
> +	vfio_test_domain_fgsp(domain, &iova_copy);
>
>  	/* replay mappings on new domains */
>  	ret = vfio_iommu_replay(iommu, domain);
On Fri, Jan 06, 2023 at 10:24:50AM -0700, Alex Williamson wrote:
> > -	ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
> > -			IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
> > -	if (!ret) {
> > -		size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
> > +	list_for_each_entry(region, regions, list) {
> > +		if (region->end - region->start < PAGE_SIZE * 2 ||
> > +		    region->start % (PAGE_SIZE*2))
>
> Maybe this falls into the noise, but we don't care if region->start is
> aligned to a double page, so long as we can map an aligned double page
> within the region.  Maybe something like:
>
> 	dma_addr_t start = ALIGN(region->start, PAGE_SIZE * 2);
>
> 	if (start >= region->end || (region->end - start < PAGE_SIZE * 2))
> 		continue;

Yeah, that is more technically correct

Jason
On Fri, 2023-01-06 at 14:03 -0400, Jason Gunthorpe wrote:
> On Fri, Jan 06, 2023 at 10:24:50AM -0700, Alex Williamson wrote:
> > > -	ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
> > > -			IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
> > > -	if (!ret) {
> > > -		size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
> > > +	list_for_each_entry(region, regions, list) {
> > > +		if (region->end - region->start < PAGE_SIZE * 2 ||
> > > +		    region->start % (PAGE_SIZE*2))
> >
> > Maybe this falls into the noise, but we don't care if region->start is
> > aligned to a double page, so long as we can map an aligned double page
> > within the region.  Maybe something like:
> >
> > 	dma_addr_t start = ALIGN(region->start, PAGE_SIZE * 2);
> >
> > 	if (start >= region->end || (region->end - start < PAGE_SIZE * 2))
> > 		continue;
>
> Yeah, that is more technically correct
>
> Jason

Makes sense, will incorporate this into v3.

Thanks,
Niklas
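
For illustration, a sketch of how the selection loop might look in v3 with Alex's ALIGN() suggestion folded in. This is based on the v2 hunk above, not the actual posted v3; `domain`, `regions`, `region`, `pages`, and `ret` are the locals of vfio_test_domain_fgsp() from the v2 patch.

```c
	list_for_each_entry(region, regions, list) {
		/* First double-page-aligned IOVA inside this range. */
		dma_addr_t start = ALIGN(region->start, PAGE_SIZE * 2);

		/* Skip ranges that cannot hold an aligned double page. */
		if (start >= region->end || region->end - start < PAGE_SIZE * 2)
			continue;

		ret = iommu_map(domain->domain, start, page_to_phys(pages),
				PAGE_SIZE * 2,
				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
		if (!ret) {
			size_t unmapped = iommu_unmap(domain->domain, start,
						      PAGE_SIZE);

			if (unmapped == PAGE_SIZE)
				iommu_unmap(domain->domain, start + PAGE_SIZE,
					    PAGE_SIZE);
			else
				domain->fgsp = true;
		}
		break;
	}
```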
On Fri, 2023-01-06 at 10:24 -0700, Alex Williamson wrote:
> On Wed, 4 Jan 2023 16:42:02 +0100
> Niklas Schnelle <schnelle@linux.ibm.com> wrote:
>
> > Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain
> > aperture shrinking") the s390 IOMMU driver uses reserved regions for the
>
> Are you asking for this in v6.2?  Seems like the above was introduced
> in v6.2 and I can't tell if this is sufficiently prevalent that we need
> a fix in the same release.

If possible yes, I'd hope for this to go into v6.2 so we don't break
ISM pass-through. Support for ISM pass-through has only been available
since v6.0 where Matt added the interpretation support, but it is one
of the most useful pass-through uses at the moment since the ISM device
uses long-lived DMA mappings and as such is pretty much unaffected by
the performance impact of our virtualized IOMMUs. Together with SMC-D
this allows high-performance communication of VMs with other VMs or
LPARs. Now, we don't have this problem in distribution kernels that
have Matt's patches as backports but lack my newer IOMMU changes, and
few if any of our customers run upstream, but still of course I'd
prefer not to have known broken upstream releases.

Thanks,
Niklas
```diff
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 23c24fe98c00..87b27ffb93d0 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1856,24 +1856,32 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
  * significantly boosts non-hugetlbfs mappings and doesn't seem to hurt when
  * hugetlbfs is in use.
  */
-static void vfio_test_domain_fgsp(struct vfio_domain *domain)
+static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *regions)
 {
-	struct page *pages;
 	int ret, order = get_order(PAGE_SIZE * 2);
+	struct vfio_iova *region;
+	struct page *pages;
 
 	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
 	if (!pages)
 		return;
 
-	ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
-			IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
-	if (!ret) {
-		size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
+	list_for_each_entry(region, regions, list) {
+		if (region->end - region->start < PAGE_SIZE * 2 ||
+		    region->start % (PAGE_SIZE*2))
+			continue;
 
-		if (unmapped == PAGE_SIZE)
-			iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE);
-		else
-			domain->fgsp = true;
+		ret = iommu_map(domain->domain, region->start, page_to_phys(pages), PAGE_SIZE * 2,
+				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
+		if (!ret) {
+			size_t unmapped = iommu_unmap(domain->domain, region->start, PAGE_SIZE);
+
+			if (unmapped == PAGE_SIZE)
+				iommu_unmap(domain->domain, region->start + PAGE_SIZE, PAGE_SIZE);
+			else
+				domain->fgsp = true;
+		}
+		break;
 	}
 
 	__free_pages(pages, order);
@@ -2326,7 +2334,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		}
 	}
 
-	vfio_test_domain_fgsp(domain);
+	vfio_test_domain_fgsp(domain, &iova_copy);
 
 	/* replay mappings on new domains */
 	ret = vfio_iommu_replay(iommu, domain);
```