[v8,0/4] cover-letter: Add IO page table replacement support

Message ID cover.1690226015.git.nicolinc@nvidia.com
Series: cover-letter: Add IO page table replacement support

Message

Nicolin Chen July 24, 2023, 7:47 p.m. UTC
  [ This series depends on the VFIO device cdev series ]

Changelog
v8:
 * Rebased on top of Jason's iommufd_hwpt series and then cdev v15 series:
   https://lore.kernel.org/all/0-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com/
   https://lore.kernel.org/kvm/20230718135551.6592-1-yi.l.liu@intel.com/
 * Changed the order of detach() and attach() in replace(), to fix a bug
v7:
 https://lore.kernel.org/all/cover.1683593831.git.nicolinc@nvidia.com/
 * Rebased on top of v6.4-rc1 and cdev v11 candidate
 * Fixed a wrong file in replace() API patch
 * Added Kevin's "Reviewed-by" to replace() API patch
v6:
 https://lore.kernel.org/all/cover.1679939952.git.nicolinc@nvidia.com/
 * Rebased on top of cdev v8 series
   https://lore.kernel.org/kvm/20230327094047.47215-1-yi.l.liu@intel.com/
 * Added "Reviewed-by" from Kevin to PATCH-4
 * Squashed the access->ioas updating lines into iommufd_access_change_pt(),
   and changed the function return type accordingly for simplification.
v5:
 https://lore.kernel.org/all/cover.1679559476.git.nicolinc@nvidia.com/
 * Kept cmd->id in iommufd_test_create_access() so the access can be
   created with an ioas by default. Then, renamed the previous ioctl
   IOMMU_TEST_OP_ACCESS_SET_IOAS to IOMMU_TEST_OP_ACCESS_REPLACE_IOAS, so
   it can be used to replace an access->ioas pointer.
 * Added iommufd_access_replace() API after the introductions of the other
   two APIs iommufd_access_attach() and iommufd_access_detach().
 * Since vdev->iommufd_attached is also set in the emulated pathway, call
   iommufd_access_update(), similar to the physical pathway.
v4:
 https://lore.kernel.org/all/cover.1678284812.git.nicolinc@nvidia.com/
 * Rebased on top of Jason's series adding replace() and hwpt_alloc()
 https://lore.kernel.org/all/0-v2-51b9896e7862+8a8c-iommufd_alloc_jgg@nvidia.com/
 * Rebased on top of cdev series v6
 https://lore.kernel.org/kvm/20230308132903.465159-1-yi.l.liu@intel.com/
 * Dropped the patch that was moved to the cdev series.
 * Added a sanity check on the unmap function pointer before calling it.
 * Added "Reviewed-by" from Kevin and Yi.
 * Added back the VFIO change updating the ATTACH uAPI.
v3:
 https://lore.kernel.org/all/cover.1677288789.git.nicolinc@nvidia.com/
 * Rebased on top of Jason's iommufd_hwpt branch:
 https://lore.kernel.org/all/0-v2-406f7ac07936+6a-iommufd_hwpt_jgg@nvidia.com/
 * Dropped patches from this series accordingly. A couple of VFIO patches
   will be submitted after the VFIO cdev series. Also, renamed the series
   to "emulated".
 * Moved the dma_unmap sanity patch to be first in the series.
 * Moved the dma_unmap sanity check so it covers both the VFIO and IOMMUFD
   pathways.
 * Added Kevin's "Reviewed-by" to two of the patches.
 * Fixed a NULL pointer bug in vfio_iommufd_emulated_bind().
 * Moved unmap() call to the common place in iommufd_access_set_ioas().
v2:
 https://lore.kernel.org/all/cover.1675802050.git.nicolinc@nvidia.com/
 * Rebased on top of vfio_device cdev v2 series.
 * Updated the kdoc and commit message of iommu_group_replace_domain().
 * Dropped the revert-to-core-domain part in iommu_group_replace_domain().
 * Dropped !ops->dma_unmap check in vfio_iommufd_emulated_attach_ioas().
 * Added the missing rc value from the iommufd_access_set_ioas() call in
   vfio_iommufd_emulated_attach_ioas().
 * Added a new patch in vfio_main to deny vfio_pin/unpin_pages() calls if
   vdev->ops->dma_unmap is not implemented.
 * Added a __iommufd_device_detach helper and let the replace routine do
   a partial detach().
 * Added restriction on auto_domains to use the replace feature.
 * Added the patch "iommufd/device: Make hwpt_list list_add/del symmetric"
   from the has_group removal series.
v1:
 https://lore.kernel.org/all/cover.1675320212.git.nicolinc@nvidia.com/

Hi all,

The existing IOMMU APIs provide a pair of functions: iommu_attach_group()
for callers to attach a device from the default_domain (NULL if not
supported) to a given iommu domain, and iommu_detach_group() for callers
to detach a device from a given domain back to the default_domain.
Internally, the detach_dev op is deprecated for newer drivers that have a
default_domain. This means that those drivers can likely switch a device
from one attached domain to another without staging the device at a
blocking or default domain in between (as sketched below), for use cases
such as:
1) vPASID mode, when a guest wants to replace a single pasid (PASID=0)
   table with a larger table (PASID=N)
2) Nesting mode, when switching an attached device from an S2 domain
   to an S1 domain, or when switching between relevant S1 domains.
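
To make the distinction concrete, below is a minimal sketch (not part of
this series' diff) contrasting the two flows. The prototype and argument
order of iommu_group_replace_domain() are assumptions based on Jason's
series:

    #include <linux/iommu.h>

    /*
     * Classic flow: the device transits through the default (or blocking)
     * domain between the two attachments.
     */
    static int switch_domain_classic(struct iommu_group *group,
                                     struct iommu_domain *old_domain,
                                     struct iommu_domain *new_domain)
    {
            iommu_detach_group(old_domain, group);  /* back to default_domain */
            return iommu_attach_group(new_domain, group);
    }

    /*
     * Replace flow: the translation is switched directly, with no window
     * where the device sits at a blocking or default domain (assumed
     * prototype from the iommu_group_replace_domain series).
     */
    static int switch_domain_replace(struct iommu_group *group,
                                     struct iommu_domain *new_domain)
    {
            return iommu_group_replace_domain(group, new_domain);
    }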

This series is rebased on top of Jason Gunthorpe's series that introduces
the iommu_group_replace_domain() API and the replace infrastructure for
IOMMUFD "physical" devices. The IOMMUFD "emulated" devices will need some
extra steps to replace the access->ioas object and its iopt pointer.
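
As a rough illustration of those extra steps from a driver's point of
view, here is a sketch assuming the iommufd_access_replace() prototype
added by PATCH-2 takes the access handle and the new IOAS id (the exact
prototype may differ):

    #include <linux/iommufd.h>

    static int example_mdev_replace_ioas(struct iommufd_access *access,
                                         u32 new_ioas_id)
    {
            int rc;

            /*
             * Instead of:
             *     iommufd_access_detach(access);
             *     iommufd_access_attach(access, new_ioas_id);
             * the replace API keeps the access usable across the switch.
             * Existing pins are flushed through the driver's ->dma_unmap
             * callback, so the driver re-pins from the new IOAS as needed.
             */
            rc = iommufd_access_replace(access, new_ioas_id);
            if (rc)
                    return rc;
            return 0;
    }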

You can also find this series on Github:
https://github.com/nicolinc/iommufd/commits/iommu_group_replace_domain-v8

Thank you
Nicolin Chen

Nicolin Chen (4):
  vfio: Do not allow !ops->dma_unmap in vfio_pin/unpin_pages()
  iommufd: Add iommufd_access_replace() API
  iommufd/selftest: Add IOMMU_TEST_OP_ACCESS_REPLACE_IOAS coverage
  vfio: Support IO page table replacement

 drivers/iommu/iommufd/device.c                | 72 ++++++++++++++-----
 drivers/iommu/iommufd/iommufd_test.h          |  4 ++
 drivers/iommu/iommufd/selftest.c              | 19 +++++
 drivers/vfio/iommufd.c                        | 11 +--
 drivers/vfio/vfio_main.c                      |  4 ++
 include/linux/iommufd.h                       |  1 +
 include/uapi/linux/vfio.h                     |  6 ++
 tools/testing/selftests/iommu/iommufd.c       | 29 +++++++-
 tools/testing/selftests/iommu/iommufd_utils.h | 19 +++++
 9 files changed, 142 insertions(+), 23 deletions(-)
  

Comments

Nicolin Chen July 27, 2023, 7:30 a.m. UTC | #1
On Wed, Jul 26, 2023 at 07:59:17PM -0700, Nicolin Chen wrote:
 
> > > > +	if (new_ioas) {
> > > > +		rc = iopt_add_access(&new_ioas->iopt, access);
> > > > +		if (rc) {
> > > > +			iommufd_put_object(&new_ioas->obj);
> > > > +			access->ioas = cur_ioas;
> > > > +			return rc;
> > > > +		}
> > > > +		iommufd_ref_to_users(&new_ioas->obj);
> > > > +	}
> > > > +
> > > > +	access->ioas = new_ioas;
> > > > +	access->ioas_unpin = new_ioas;
> > > >  	iopt_remove_access(&cur_ioas->iopt, access);
> > > 
> > > There was a bug in my earlier version, having the same flow by
> > > calling iopt_add_access() prior to iopt_remove_access(). But,
> > > doing that would override the access->iopt_access_list_id and
> > > it would then get unset by the following iopt_remove_access().
> > 
> > Ah, I was wondering about that order but didn't check it.
> > 
> > Maybe we just need to pass the ID into iopt_remove_access and keep the
> > right version on the stack.
> > 
> > > So, I came up with this version calling an iopt_remove_access()
> > > prior to iopt_add_access(), which requires an add-back the old
> > > ioas upon an failure at iopt_add_access(new_ioas).
> > 
> > That is also sort of reasonable if the refcounting is organized like
> > this does.
> 
> I just realized that either my v8 or your version calls unmap()
> first at the entire cur_ioas. So, there seems to be no point in
> doing that fallback re-add routine since the cur_ioas isn't the
> same, which I don't feel quite right...
> 
> Perhaps we should pass the ID into iopt_add/remove_access like
> you said above. And then we attach the new_ioas, piror to the
> detach the cur_ioas?

I sent v9 with the iopt_remove_access trick, so we only do the
iopt_remove_access upon success. Let's continue there.

Thanks
Nic
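
For reference, the idea discussed above can be sketched as below. The
helper signatures are assumed (in particular an iopt_remove_access()
variant that takes the saved id), and the unmap of cur_ioas, locking and
refcounting are omitted, so this is not the exact v9 code:

    static int access_change_ioas_sketch(struct iommufd_access *access,
                                         struct iommufd_ioas *new_ioas)
    {
            struct iommufd_ioas *cur_ioas = access->ioas;
            u32 cur_id = access->iopt_access_list_id; /* keep on the stack */
            int rc;

            if (new_ioas) {
                    /* Overwrites access->iopt_access_list_id on success */
                    rc = iopt_add_access(&new_ioas->iopt, access);
                    if (rc)
                            return rc; /* cur_ioas is left fully intact */
            }

            /* Only remove once the add has succeeded, using the saved id */
            if (cur_ioas)
                    iopt_remove_access(&cur_ioas->iopt, access, cur_id);

            access->ioas = new_ioas;
            access->ioas_unpin = new_ioas;
            return 0;
    }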
  
Jason Gunthorpe July 27, 2023, 12:03 p.m. UTC | #2
On Wed, Jul 26, 2023 at 07:59:11PM -0700, Nicolin Chen wrote:

> I just realized that either my v8 or your version calls unmap()
> first at the entire cur_ioas. So, there seems to be no point in
> doing that fallback re-add routine since the cur_ioas isn't the
> same, which I don't feel quite right...

The point is to restore the access back to how it should be on failure
so future use of the access still does the right thing.

We have already built a certain non-atomicity into this for mdevs:
they can see a pin failure during replace if they race an access
during this unmap window. This is similar to real HW IOMMUs
without atomic replace.

> Perhaps we should pass the ID into iopt_add/remove_access like
> you said above. And then we attach the new_ioas, piror to the
> detach the cur_ioas?

If it is simple, this seems like the most robust approach.

Jason
  
Nicolin Chen July 27, 2023, 7:04 p.m. UTC | #3
On Thu, Jul 27, 2023 at 09:03:01AM -0300, Jason Gunthorpe wrote:
> On Wed, Jul 26, 2023 at 07:59:11PM -0700, Nicolin Chen wrote:
> 
> > I just realized that either my v8 or your version calls unmap()
> > first at the entire cur_ioas. So, there seems to be no point in
> > doing that fallback re-add routine since the cur_ioas isn't the
> > same, which I don't feel quite right...
> 
> The point is to restore the access back to how it should be on failure
> so future use of the accesss still does the right thing.
> 
> We already have built into this a certain non-atomicity for mdevs,
> they can see a pin failure during replace if they race an access
> during this unmap window. This is similar to the real HW iommu's
> without atomic replace.

My concern was that, after the replace, the mdev loses all of its
mappings due to the unmap() call, which means the fallback does not
really restore the status quo. Do you mean that they could pin those
lost mappings back?
  
Tian, Kevin July 28, 2023, 3:45 a.m. UTC | #4
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Friday, July 28, 2023 3:04 AM
> 
> On Thu, Jul 27, 2023 at 09:03:01AM -0300, Jason Gunthorpe wrote:
> > On Wed, Jul 26, 2023 at 07:59:11PM -0700, Nicolin Chen wrote:
> >
> > > I just realized that either my v8 or your version calls unmap()
> > > first at the entire cur_ioas. So, there seems to be no point in
> > > doing that fallback re-add routine since the cur_ioas isn't the
> > > same, which I don't feel quite right...
> >
> > The point is to restore the access back to how it should be on failure
> > so future use of the accesss still does the right thing.
> >
> > We already have built into this a certain non-atomicity for mdevs,
> > they can see a pin failure during replace if they race an access
> > during this unmap window. This is similar to the real HW iommu's
> > without atomic replace.
> 
> I was concerned about, after the replace, mdev losing all the
> mappings due to the unmap() call, which means the fallback is
> not really a status quo. Do you mean that they could pin those
> lost mappings back?

None of the mdev drivers does that.

But we need to think about the actual usage. I don't think the user
can request an ioas change w/o actually reconfiguring the mdev
device. Presumably the latter could lead to reconstruction of the
pinned pages.

So at the code level, as Jason said, we just need to ensure the access
is back to a usable state.
  
Nicolin Chen July 28, 2023, 4:43 a.m. UTC | #5
On Fri, Jul 28, 2023 at 03:45:39AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Friday, July 28, 2023 3:04 AM
> >
> > On Thu, Jul 27, 2023 at 09:03:01AM -0300, Jason Gunthorpe wrote:
> > > On Wed, Jul 26, 2023 at 07:59:11PM -0700, Nicolin Chen wrote:
> > >
> > > > I just realized that either my v8 or your version calls unmap()
> > > > first at the entire cur_ioas. So, there seems to be no point in
> > > > doing that fallback re-add routine since the cur_ioas isn't the
> > > > same, which I don't feel quite right...
> > >
> > > The point is to restore the access back to how it should be on failure
> > > so future use of the accesss still does the right thing.
> > >
> > > We already have built into this a certain non-atomicity for mdevs,
> > > they can see a pin failure during replace if they race an access
> > > during this unmap window. This is similar to the real HW iommu's
> > > without atomic replace.
> >
> > I was concerned about, after the replace, mdev losing all the
> > mappings due to the unmap() call, which means the fallback is
> > not really a status quo. Do you mean that they could pin those
> > lost mappings back?
> 
> None of mdev drivers does that.
> 
> but we need think about the actual usage. I don't think the user
> can request ioas change w/o actually reconfiguring the mdev
> device. Presumably the latter could lead to reconstructure of pinned
> pages.

I can understand that the user should reconfigure the IOAS on
success. Yet, should we expect it to reconfigure on a failure
also?

Thanks!
Nic
  
Tian, Kevin July 28, 2023, 6:20 a.m. UTC | #6
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Friday, July 28, 2023 12:43 PM
> 
> On Fri, Jul 28, 2023 at 03:45:39AM +0000, Tian, Kevin wrote:
> > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > Sent: Friday, July 28, 2023 3:04 AM
> > >
> > > On Thu, Jul 27, 2023 at 09:03:01AM -0300, Jason Gunthorpe wrote:
> > > > On Wed, Jul 26, 2023 at 07:59:11PM -0700, Nicolin Chen wrote:
> > > >
> > > > > I just realized that either my v8 or your version calls unmap()
> > > > > first at the entire cur_ioas. So, there seems to be no point in
> > > > > doing that fallback re-add routine since the cur_ioas isn't the
> > > > > same, which I don't feel quite right...
> > > >
> > > > The point is to restore the access back to how it should be on failure
> > > > so future use of the accesss still does the right thing.
> > > >
> > > > We already have built into this a certain non-atomicity for mdevs,
> > > > they can see a pin failure during replace if they race an access
> > > > during this unmap window. This is similar to the real HW iommu's
> > > > without atomic replace.
> > >
> > > I was concerned about, after the replace, mdev losing all the
> > > mappings due to the unmap() call, which means the fallback is
> > > not really a status quo. Do you mean that they could pin those
> > > lost mappings back?
> >
> > None of mdev drivers does that.
> >
> > but we need think about the actual usage. I don't think the user
> > can request ioas change w/o actually reconfiguring the mdev
> > device. Presumably the latter could lead to reconstructure of pinned
> > pages.
> 
> I can understand that the user should reconfigure the IOAS on
> success. Yet, should we expect it to reconfigure on a failure
> also?
> 

I thought the user would likely stop the device before changing the IOAS
and then re-enable device DMA afterwards. If that is the typical
flow, then no matter whether this replace request succeeds or fails, the
re-enabling sequence should lead to the pinned pages being added
back to the current IOAS.

But this does imply inconsistent behavior between success and failure.
Not sure whether it's worth a fix, e.g. introducing another notifier for
mdev drivers to re-pin...
  
Jason Gunthorpe July 28, 2023, 12:27 p.m. UTC | #7
On Thu, Jul 27, 2023 at 12:04:00PM -0700, Nicolin Chen wrote:
> On Thu, Jul 27, 2023 at 09:03:01AM -0300, Jason Gunthorpe wrote:
> > On Wed, Jul 26, 2023 at 07:59:11PM -0700, Nicolin Chen wrote:
> > 
> > > I just realized that either my v8 or your version calls unmap()
> > > first at the entire cur_ioas. So, there seems to be no point in
> > > doing that fallback re-add routine since the cur_ioas isn't the
> > > same, which I don't feel quite right...
> > 
> > The point is to restore the access back to how it should be on failure
> > so future use of the accesss still does the right thing.
> > 
> > We already have built into this a certain non-atomicity for mdevs,
> > they can see a pin failure during replace if they race an access
> > during this unmap window. This is similar to the real HW iommu's
> > without atomic replace.
> 
> I was concerned about, after the replace, mdev losing all the
> mappings due to the unmap() call, which means the fallback is
> not really a status quo. Do you mean that they could pin those
> lost mappings back?

At this point there shouldn't be mappings in any path with a chance of
success; as I said, it is racy already. Not sure we need to fuss about
it further.

Jason
  
Jason Gunthorpe July 28, 2023, 12:28 p.m. UTC | #8
On Fri, Jul 28, 2023 at 06:20:56AM +0000, Tian, Kevin wrote:

> But this does imply inconsistent behavior between success and failure.
> Not sure whether it's worth a fix e.g. introducing another notifier for
> mdev drivers to re-pin...

After unmap, drivers should re-establish their DMA mappings when they
are next required. It is an mdev driver bug if they don't do this.

Jason