[3/8] iommufd: Support attach/replace hwpt per pasid

Message ID 20231127063428.127436-4-yi.l.liu@intel.com
State New
Series iommufd support pasid attach/replace

Commit Message

Yi Liu Nov. 27, 2023, 6:34 a.m. UTC
  From: Kevin Tian <kevin.tian@intel.com>

This introduces three APIs for device drivers to manage pasid attach/
replace/detach.

    int iommufd_device_pasid_attach(struct iommufd_device *idev,
				    u32 pasid, u32 *pt_id);
    int iommufd_device_pasid_replace(struct iommufd_device *idev,
				     u32 pasid, u32 *pt_id);
    void iommufd_device_pasid_detach(struct iommufd_device *idev,
				     u32 pasid);
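
For illustration, a typical calling sequence from a driver might look
like the below sketch (idev is a device already bound with
iommufd_device_bind(); ioas_id, new_pt_id and pasid are placeholder
values, not part of this series):

    u32 pt_id = ioas_id;	/* IOAS or HWPT object ID to attach to */
    int rc;

    rc = iommufd_device_pasid_attach(idev, pasid, &pt_id);
    if (rc)
    	return rc;
    /* pt_id now holds the HWPT ID; DMA with this pasid is translated */

    /* later: switch the pasid to another page table without detaching */
    rc = iommufd_device_pasid_replace(idev, pasid, &new_pt_id);

    /* finally: undo the attachment */
    iommufd_device_pasid_detach(idev, pasid);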

pasid operations have different implications compared to device
operations:

 - No connection to iommufd_group since pasid is a device capability
   and can be enabled only in a singleton group;

 - no reserved regions per pasid, otherwise the SVA architecture would
   already be broken (the CPU address space doesn't carve out device
   reserved regions);

 - accordingly no sw_msi trick;

 - immediate_attach is not supported, on the expectation that the
   arm-smmu driver will have removed that requirement before supporting
   this pasid operation. This avoids an unnecessary change in
   iommufd_hw_pagetable_alloc() to carry the pasid from device.c.

Given the above differences, all pasid-related logic is placed in a new
pasid.c file.

Cache coherency enforcement still applies to pasid operations since it
concerns memory accesses after the page table walk (whether the walk is
per RID or per PASID).

Since the attach is per PASID, this introduces a pasid_hwpts xarray to
track the per-pasid attach data.
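
In outline the tracking is the plain xarray pattern keyed by pasid,
distilled from the patch below (locking of these lookups is discussed
in the review comments):

    /* at iommufd_device_bind() time */
    xa_init(&idev->pasid_hwpts);

    /* attach: record which hwpt this pasid is attached to */
    xa_store(&idev->pasid_hwpts, pasid, hwpt, GFP_KERNEL);

    /* detach: look up and drop the per-pasid attach data */
    hwpt = xa_load(&idev->pasid_hwpts, pasid);
    xa_erase(&idev->pasid_hwpts, pasid);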

Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/iommu/iommufd/Makefile          |   1 +
 drivers/iommu/iommufd/device.c          |  17 ++-
 drivers/iommu/iommufd/iommufd_private.h |  15 +++
 drivers/iommu/iommufd/pasid.c           | 138 ++++++++++++++++++++++++
 include/linux/iommufd.h                 |   6 ++
 5 files changed, 176 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/iommufd/pasid.c
  

Comments

Jason Gunthorpe Jan. 15, 2024, 5:24 p.m. UTC | #1
On Sun, Nov 26, 2023 at 10:34:23PM -0800, Yi Liu wrote:
> @@ -534,7 +537,17 @@ iommufd_device_do_replace(struct iommufd_device *idev,
>  static struct iommufd_hw_pagetable *do_attach(struct iommufd_device *idev,
>  		struct iommufd_hw_pagetable *hwpt, struct attach_data *data)
>  {
> -	return data->attach_fn(idev, hwpt);
> +	if (data->pasid == IOMMU_PASID_INVALID) {
> +		BUG_ON((data->attach_fn != iommufd_device_do_attach) &&
> +		       (data->attach_fn != iommufd_device_do_replace));
> +		return data->attach_fn(idev, hwpt);
> +	} else {
> +		BUG_ON((data->pasid_attach_fn !=
> +			iommufd_device_pasid_do_attach) &&
> +		       (data->pasid_attach_fn !=
> +			iommufd_device_pasid_do_replace));
> +		return data->pasid_attach_fn(idev, data->pasid, hwpt);
> +	}

Seems like the BUG_ON's are pointless
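
Without them the dispatch reduces to something like this sketch:

static struct iommufd_hw_pagetable *do_attach(struct iommufd_device *idev,
		struct iommufd_hw_pagetable *hwpt, struct attach_data *data)
{
	if (data->pasid == IOMMU_PASID_INVALID)
		return data->attach_fn(idev, hwpt);
	return data->pasid_attach_fn(idev, data->pasid, hwpt);
}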

> +/**
> + * iommufd_device_pasid_detach - Disconnect a {device, pasid} from an iommu_domain
> + * @idev: device to detach
> + * @pasid: pasid to detach
> + *
> + * Undo iommufd_device_pasid_attach(). This disconnects the idev/pasid from
> + * the previously attached pt_id.
> + */
> +void iommufd_device_pasid_detach(struct iommufd_device *idev, u32 pasid)
> +{
> +	struct iommufd_hw_pagetable *hwpt;
> +
> +	hwpt = xa_load(&idev->pasid_hwpts, pasid);
> +	if (!hwpt)
> +		return;
> +	xa_erase(&idev->pasid_hwpts, pasid);
> +	iommu_detach_device_pasid(hwpt->domain, idev->dev, pasid);
> +	iommufd_hw_pagetable_put(idev->ictx, hwpt);
> +}

None of this xarray stuff looks locked properly

Jason
  
Tian, Kevin Jan. 16, 2024, 1:18 a.m. UTC | #2
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Tuesday, January 16, 2024 1:25 AM
> 
> On Sun, Nov 26, 2023 at 10:34:23PM -0800, Yi Liu wrote:
> > +/**
> > + * iommufd_device_pasid_detach - Disconnect a {device, pasid} from an
> iommu_domain
> > + * @idev: device to detach
> > + * @pasid: pasid to detach
> > + *
> > + * Undo iommufd_device_pasid_attach(). This disconnects the idev/pasid
> from
> > + * the previously attached pt_id.
> > + */
> > +void iommufd_device_pasid_detach(struct iommufd_device *idev, u32
> pasid)
> > +{
> > +	struct iommufd_hw_pagetable *hwpt;
> > +
> > +	hwpt = xa_load(&idev->pasid_hwpts, pasid);
> > +	if (!hwpt)
> > +		return;
> > +	xa_erase(&idev->pasid_hwpts, pasid);
> > +	iommu_detach_device_pasid(hwpt->domain, idev->dev, pasid);
> > +	iommufd_hw_pagetable_put(idev->ictx, hwpt);
> > +}
> 
> None of this xarray stuff looks locked properly
> 

I had an impression from past discussions that the caller should not
race attach/detach/replace on same device or pasid, otherwise it is
already a problem in a higher level.

and the original intention of the group lock was to ensure all devices
in the group have a same view. Not exactly to guard concurrent
attach/detach.

If this understanding is incorrect we can add a lock for sure. 😊
  
Jason Gunthorpe Jan. 16, 2024, 12:57 p.m. UTC | #3
On Tue, Jan 16, 2024 at 01:18:12AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Tuesday, January 16, 2024 1:25 AM
> > 
> > On Sun, Nov 26, 2023 at 10:34:23PM -0800, Yi Liu wrote:
> > > +/**
> > > + * iommufd_device_pasid_detach - Disconnect a {device, pasid} from an
> > iommu_domain
> > > + * @idev: device to detach
> > > + * @pasid: pasid to detach
> > > + *
> > > + * Undo iommufd_device_pasid_attach(). This disconnects the idev/pasid
> > from
> > > + * the previously attached pt_id.
> > > + */
> > > +void iommufd_device_pasid_detach(struct iommufd_device *idev, u32
> > pasid)
> > > +{
> > > +	struct iommufd_hw_pagetable *hwpt;
> > > +
> > > +	hwpt = xa_load(&idev->pasid_hwpts, pasid);
> > > +	if (!hwpt)
> > > +		return;
> > > +	xa_erase(&idev->pasid_hwpts, pasid);
> > > +	iommu_detach_device_pasid(hwpt->domain, idev->dev, pasid);
> > > +	iommufd_hw_pagetable_put(idev->ictx, hwpt);
> > > +}
> > 
> > None of this xarray stuff looks locked properly
> > 
> 
> I had an impression from past discussions that the caller should not
> race attach/detach/replace on same device or pasid, otherwise it is
> already a problem in a higher level.

I thought that was just at the iommu layer? We want VFIO to do the
same? Then why do we need the dual xarrays?

Still, it looks really wrong to have code like this, we don't need to
- it can be locked properly, it isn't a performance path..

> and the original intention of the group lock was to ensure all devices
> in the group have a same view. Not exactly to guard concurrent
> attach/detach.

We don't have a group lock here, this is in iommufd.

Use the xarray lock..

eg 

hwpt = xa_erase(&idev->pasid_hwpts, pasid);
if (WARN_ON(!hwpt))
   return

xa_erase is atomic.
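
Spelled out against the detach path above, that suggestion might look
like the following sketch:

void iommufd_device_pasid_detach(struct iommufd_device *idev, u32 pasid)
{
	struct iommufd_hw_pagetable *hwpt;

	/* xa_erase() returns the erased entry: lookup+removal in one atomic step */
	hwpt = xa_erase(&idev->pasid_hwpts, pasid);
	if (WARN_ON(!hwpt))
		return;
	iommu_detach_device_pasid(hwpt->domain, idev->dev, pasid);
	iommufd_hw_pagetable_put(idev->ictx, hwpt);
}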

Jason
  
Tian, Kevin Jan. 17, 2024, 4:17 a.m. UTC | #4
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Tuesday, January 16, 2024 8:58 PM
> 
> On Tue, Jan 16, 2024 at 01:18:12AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Tuesday, January 16, 2024 1:25 AM
> > >
> > > On Sun, Nov 26, 2023 at 10:34:23PM -0800, Yi Liu wrote:
> > > > +/**
> > > > + * iommufd_device_pasid_detach - Disconnect a {device, pasid} from an
> > > iommu_domain
> > > > + * @idev: device to detach
> > > > + * @pasid: pasid to detach
> > > > + *
> > > > + * Undo iommufd_device_pasid_attach(). This disconnects the
> idev/pasid
> > > from
> > > > + * the previously attached pt_id.
> > > > + */
> > > > +void iommufd_device_pasid_detach(struct iommufd_device *idev, u32
> > > pasid)
> > > > +{
> > > > +	struct iommufd_hw_pagetable *hwpt;
> > > > +
> > > > +	hwpt = xa_load(&idev->pasid_hwpts, pasid);
> > > > +	if (!hwpt)
> > > > +		return;
> > > > +	xa_erase(&idev->pasid_hwpts, pasid);
> > > > +	iommu_detach_device_pasid(hwpt->domain, idev->dev, pasid);
> > > > +	iommufd_hw_pagetable_put(idev->ictx, hwpt);
> > > > +}
> > >
> > > None of this xarray stuff looks locked properly
> > >
> >
> > I had an impression from past discussions that the caller should not
> > race attach/detach/replace on same device or pasid, otherwise it is
> > already a problem in a higher level.
> 
> I thought that was just at the iommu layer? We want VFIO to do the
> same? Then why do we need the dual xarrays?
> 
> Still, it looks really wrong to have code like this, we don't need to
> - it can be locked properly, it isn't a performance path..

OK, let's add a lock for this.

> 
> > and the original intention of the group lock was to ensure all devices
> > in the group have a same view. Not exactly to guard concurrent
> > attach/detach.
> 
> We don't have a group lock here, this is in iommufd.

I meant the lock in iommufd_group.

> 
> Use the xarray lock..
> 
> eg
> 
> hwpt = xa_erase(&idev->pasid_hwpts, pasid);
> if (WARN_ON(!hwpt))
>    return
> 
> xa_erase is atomic.
> 

yes, that's better.
  
Yi Liu Jan. 17, 2024, 8:24 a.m. UTC | #5
On 2024/1/17 12:17, Tian, Kevin wrote:
>> From: Jason Gunthorpe <jgg@nvidia.com>
>> Sent: Tuesday, January 16, 2024 8:58 PM
>>
>> On Tue, Jan 16, 2024 at 01:18:12AM +0000, Tian, Kevin wrote:
>>>> From: Jason Gunthorpe <jgg@nvidia.com>
>>>> Sent: Tuesday, January 16, 2024 1:25 AM
>>>>
>>>> On Sun, Nov 26, 2023 at 10:34:23PM -0800, Yi Liu wrote:
>>>>> +/**
>>>>> + * iommufd_device_pasid_detach - Disconnect a {device, pasid} from an
>>>> iommu_domain
>>>>> + * @idev: device to detach
>>>>> + * @pasid: pasid to detach
>>>>> + *
>>>>> + * Undo iommufd_device_pasid_attach(). This disconnects the
>> idev/pasid
>>>> from
>>>>> + * the previously attached pt_id.
>>>>> + */
>>>>> +void iommufd_device_pasid_detach(struct iommufd_device *idev, u32
>>>> pasid)
>>>>> +{
>>>>> +	struct iommufd_hw_pagetable *hwpt;
>>>>> +
>>>>> +	hwpt = xa_load(&idev->pasid_hwpts, pasid);
>>>>> +	if (!hwpt)
>>>>> +		return;
>>>>> +	xa_erase(&idev->pasid_hwpts, pasid);
>>>>> +	iommu_detach_device_pasid(hwpt->domain, idev->dev, pasid);
>>>>> +	iommufd_hw_pagetable_put(idev->ictx, hwpt);
>>>>> +}
>>>>
>>>> None of this xarray stuff looks locked properly
>>>>
>>>
>>> I had an impression from past discussions that the caller should not
>>> race attach/detach/replace on same device or pasid, otherwise it is
>>> already a problem in a higher level.
>>
>> I thought that was just at the iommu layer? We want VFIO to do the
>> same? Then why do we need the dual xarrays?
>>
>> Still, it looks really wrong to have code like this, we don't need to
>> - it can be locked properly, it isn't a performance path..
> 
> OK, let's add a lock for this.
> 
>>
>>> and the original intention of the group lock was to ensure all devices
>>> in the group have a same view. Not exactly to guard concurrent
>>> attach/detach.
>>
>> We don't have a group lock here, this is in iommufd.
> 
> I meant the lock in iommufd_group.
> 
>>
>> Use the xarray lock..
>>
>> eg
>>
>> hwpt = xa_erase(&idev->pasid_hwpts, pasid);
>> if (WARN_ON(!hwpt))
>>     return
>>
>> xa_erase is atomic.
>>
> 
> yes, that's better.

Above indeed makes more sense if there can be concurrent attach/replace/detach
on a single pasid. Just have one doubt should we add lock to protect the
whole attach/replace/detach paths. In the attach/replace path[1] [2], the
xarray entry is verified firstly, and then updated after returning from
iommu attach/replace API. It is uneasy to protect the xarray operations only
with xa_lock as a detach path can acquire xa_lock right after attach/replace
path checks the xarray. To avoid it, may need a mutex to protect the whole
attach/replace/detach path to avoid race. Or maybe the attach/replace path
should mark the corresponding entry as a special state that can block the
other path like detach until the attach/replace path update the final hwpt to
the xarray. Is there such state in xarray?

[1] iommufd_device_pasid_attach() -> iommufd_device_pasid_do_attach() -> 
__iommufd_device_pasid_do_attach()
[2] iommufd_device_pasid_replace -> iommufd_device_pasid_do_replace -> 
__iommufd_device_pasid_do_attach

Regards,
Yi Liu
  
Jason Gunthorpe Jan. 17, 2024, 12:56 p.m. UTC | #6
On Wed, Jan 17, 2024 at 04:24:24PM +0800, Yi Liu wrote:
> Above indeed makes more sense if there can be concurrent attach/replace/detach
> on a single pasid. Just have one doubt should we add lock to protect the
> whole attach/replace/detach paths. In the attach/replace path[1] [2], the
> xarray entry is verified firstly, and then updated after returning from
> iommu attach/replace API. It is uneasy to protect the xarray operations only
> with xa_lock as a detach path can acquire xa_lock right after attach/replace
> path checks the xarray. To avoid it, may need a mutex to protect the whole
> attach/replace/detach path to avoid race. Or maybe the attach/replace path
> should mark the corresponding entry as a special state that can block the
> other path like detach until the attach/replace path update the final hwpt to
> the xarray. Is there such state in xarray?

If the caller is not allowed to make concurrent attaches/detaches to
the same pasid then you can document that in a comment, but it is
still better to use xarray in a self-consistent way.

Jason
  
Yi Liu Jan. 18, 2024, 9:28 a.m. UTC | #7
On 2024/1/17 20:56, Jason Gunthorpe wrote:
> On Wed, Jan 17, 2024 at 04:24:24PM +0800, Yi Liu wrote:
>> Above indeed makes more sense if there can be concurrent attach/replace/detach
>> on a single pasid. Just have one doubt should we add lock to protect the
>> whole attach/replace/detach paths. In the attach/replace path[1] [2], the
>> xarray entry is verified firstly, and then updated after returning from
>> iommu attach/replace API. It is uneasy to protect the xarray operations only
>> with xa_lock as a detach path can acquire xa_lock right after attach/replace
>> path checks the xarray. To avoid it, may need a mutex to protect the whole
>> attach/replace/detach path to avoid race. Or maybe the attach/replace path
>> should mark the corresponding entry as a special state that can block the
>> other path like detach until the attach/replace path update the final hwpt to
>> the xarray. Is there such state in xarray?
> 
> If the caller is not allowed to make concurrent attaches/detaches to
> the same pasid then you can document that in a comment,

yes. I can document it. Otherwise, we may need a mutex for pasid to allow
concurrent attaches/detaches.

> but it is
> still better to use xarray in a self-consistent way.

sure. I'll try. At least in the detach path, xarray should be what you've
suggested in prior email. Currently in the attach path, the logic is as
below. Perhaps I can skip the check on old_hwpt since
iommu_attach_device_pasid() should fail if there is an existing domain
attached on the pasid. Then the xarray should be more consistent. what
about your opinion?

	old_hwpt = xa_load(&idev->pasid_hwpts, pasid);
	if (old_hwpt) {
		/* Attach does not allow overwrite */
		if (old_hwpt == hwpt)
			return NULL;
		else
			return ERR_PTR(-EINVAL);
	}

	rc = iommu_attach_device_pasid(hwpt->domain, idev->dev, pasid);
	if (rc)
		return ERR_PTR(rc);

	refcount_inc(&hwpt->obj.users);
	xa_store(&idev->pasid_hwpts, pasid, hwpt, GFP_KERNEL);
  
Jason Gunthorpe Jan. 18, 2024, 1:38 p.m. UTC | #8
On Thu, Jan 18, 2024 at 05:28:01PM +0800, Yi Liu wrote:
> On 2024/1/17 20:56, Jason Gunthorpe wrote:
> > On Wed, Jan 17, 2024 at 04:24:24PM +0800, Yi Liu wrote:
> > > Above indeed makes more sense if there can be concurrent attach/replace/detach
> > > on a single pasid. Just have one doubt should we add lock to protect the
> > > whole attach/replace/detach paths. In the attach/replace path[1] [2], the
> > > xarray entry is verified firstly, and then updated after returning from
> > > iommu attach/replace API. It is uneasy to protect the xarray operations only
> > > with xa_lock as a detach path can acquire xa_lock right after attach/replace
> > > path checks the xarray. To avoid it, may need a mutex to protect the whole
> > > attach/replace/detach path to avoid race. Or maybe the attach/replace path
> > > should mark the corresponding entry as a special state that can block the
> > > other path like detach until the attach/replace path update the final hwpt to
> > > the xarray. Is there such state in xarray?
> > 
> > If the caller is not allowed to make concurrent attaches/detaches to
> > the same pasid then you can document that in a comment,
> 
> yes. I can document it. Otherwise, we may need a mutex for pasid to allow
> concurrent attaches/detaches.
> 
> > but it is
> > still better to use xarray in a self-consistent way.
> 
> sure. I'll try. At least in the detach path, xarray should be what you've
> suggested in prior email. Currently in the attach path, the logic is as
> below. Perhaps I can skip the check on old_hwpt since
> iommu_attach_device_pasid() should fail if there is an existing domain
> attached on the pasid. Then the xarray should be more consistent. what
> about your opinion?
> 
> 	old_hwpt = xa_load(&idev->pasid_hwpts, pasid);
> 	if (old_hwpt) {
> 		/* Attach does not allow overwrite */
> 		if (old_hwpt == hwpt)
> 			return NULL;
> 		else
> 			return ERR_PTR(-EINVAL);
> 	}
> 
> 	rc = iommu_attach_device_pasid(hwpt->domain, idev->dev, pasid);
> 	if (rc)
> 		return ERR_PTR(rc);
> 
> 	refcount_inc(&hwpt->obj.users);
> 	xa_store(&idev->pasid_hwpts, pasid, hwpt, GFP_KERNEL);

Use xa_cmpxchg()
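
xa_cmpxchg() stores the new entry only if the slot still holds the
expected old value, which closes the check-then-store window. A sketch
of the attach path on top of it (refcount ordering kept as in the
posted code; error unwinding simplified):

	/* Atomically claim the slot; fails if a hwpt is already attached */
	old_hwpt = xa_cmpxchg(&idev->pasid_hwpts, pasid, NULL, hwpt, GFP_KERNEL);
	if (xa_is_err(old_hwpt))
		return ERR_PTR(xa_err(old_hwpt));
	if (old_hwpt) /* Attach does not allow overwrite */
		return old_hwpt == hwpt ? NULL : ERR_PTR(-EINVAL);

	rc = iommu_attach_device_pasid(hwpt->domain, idev->dev, pasid);
	if (rc) {
		xa_erase(&idev->pasid_hwpts, pasid);
		return ERR_PTR(rc);
	}
	refcount_inc(&hwpt->obj.users);
	return NULL;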

Jason
  
Yi Liu Jan. 19, 2024, 10:15 a.m. UTC | #9
On 2024/1/18 21:38, Jason Gunthorpe wrote:
> On Thu, Jan 18, 2024 at 05:28:01PM +0800, Yi Liu wrote:
>> On 2024/1/17 20:56, Jason Gunthorpe wrote:
>>> On Wed, Jan 17, 2024 at 04:24:24PM +0800, Yi Liu wrote:
>>>> Above indeed makes more sense if there can be concurrent attach/replace/detach
>>>> on a single pasid. Just have one doubt should we add lock to protect the
>>>> whole attach/replace/detach paths. In the attach/replace path[1] [2], the
>>>> xarray entry is verified firstly, and then updated after returning from
>>>> iommu attach/replace API. It is uneasy to protect the xarray operations only
>>>> with xa_lock as a detach path can acquire xa_lock right after attach/replace
>>>> path checks the xarray. To avoid it, may need a mutex to protect the whole
>>>> attach/replace/detach path to avoid race. Or maybe the attach/replace path
>>>> should mark the corresponding entry as a special state that can block the
>>>> other path like detach until the attach/replace path update the final hwpt to
>>>> the xarray. Is there such state in xarray?
>>>
>>> If the caller is not allowed to make concurrent attaches/detaches to
>>> the same pasid then you can document that in a comment,
>>
>> yes. I can document it. Otherwise, we may need a mutex for pasid to allow
>> concurrent attaches/detaches.
>>
>>> but it is
>>> still better to use xarray in a self-consistent way.
>>
>> sure. I'll try. At least in the detach path, xarray should be what you've
>> suggested in prior email. Currently in the attach path, the logic is as
>> below. Perhaps I can skip the check on old_hwpt since
>> iommu_attach_device_pasid() should fail if there is an existing domain
>> attached on the pasid. Then the xarray should be more consistent. what
>> about your opinion?
>>
>> 	old_hwpt = xa_load(&idev->pasid_hwpts, pasid);
>> 	if (old_hwpt) {
>> 		/* Attach does not allow overwrite */
>> 		if (old_hwpt == hwpt)
>> 			return NULL;
>> 		else
>> 			return ERR_PTR(-EINVAL);
>> 	}
>>
>> 	rc = iommu_attach_device_pasid(hwpt->domain, idev->dev, pasid);
>> 	if (rc)
>> 		return ERR_PTR(rc);
>>
>> 	refcount_inc(&hwpt->obj.users);
>> 	xa_store(&idev->pasid_hwpts, pasid, hwpt, GFP_KERNEL);
> 
> Use xa_cmpxchg()

sure.
  

Patch

diff --git a/drivers/iommu/iommufd/Makefile b/drivers/iommu/iommufd/Makefile
index 34b446146961..4b4d516b025c 100644
--- a/drivers/iommu/iommufd/Makefile
+++ b/drivers/iommu/iommufd/Makefile
@@ -6,6 +6,7 @@  iommufd-y := \
 	ioas.o \
 	main.o \
 	pages.o \
+	pasid.o \
 	vfio_compat.o
 
 iommufd-$(CONFIG_IOMMUFD_TEST) += selftest.o
diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
index 0992d9d46af9..a7574d4d5ffa 100644
--- a/drivers/iommu/iommufd/device.c
+++ b/drivers/iommu/iommufd/device.c
@@ -136,6 +136,7 @@  void iommufd_device_destroy(struct iommufd_object *obj)
 	struct iommufd_device *idev =
 		container_of(obj, struct iommufd_device, obj);
 
+	WARN_ON(!xa_empty(&idev->pasid_hwpts));
 	iommu_device_release_dma_owner(idev->dev);
 	iommufd_put_group(idev->igroup);
 	if (!iommufd_selftest_is_mock_dev(idev->dev))
@@ -216,6 +217,8 @@  struct iommufd_device *iommufd_device_bind(struct iommufd_ctx *ictx,
 	/* igroup refcount moves into iommufd_device */
 	idev->igroup = igroup;
 
+	xa_init(&idev->pasid_hwpts);
+
 	/*
 	 * If the caller fails after this success it must call
 	 * iommufd_unbind_device() which is safe since we hold this refcount.
@@ -534,7 +537,17 @@  iommufd_device_do_replace(struct iommufd_device *idev,
 static struct iommufd_hw_pagetable *do_attach(struct iommufd_device *idev,
 		struct iommufd_hw_pagetable *hwpt, struct attach_data *data)
 {
-	return data->attach_fn(idev, hwpt);
+	if (data->pasid == IOMMU_PASID_INVALID) {
+		BUG_ON((data->attach_fn != iommufd_device_do_attach) &&
+		       (data->attach_fn != iommufd_device_do_replace));
+		return data->attach_fn(idev, hwpt);
+	} else {
+		BUG_ON((data->pasid_attach_fn !=
+			iommufd_device_pasid_do_attach) &&
+		       (data->pasid_attach_fn !=
+			iommufd_device_pasid_do_replace));
+		return data->pasid_attach_fn(idev, data->pasid, hwpt);
+	}
 }
 
 /*
@@ -684,6 +697,7 @@  int iommufd_device_attach(struct iommufd_device *idev, u32 *pt_id)
 	int rc;
 	struct attach_data data = {
 		.attach_fn = &iommufd_device_do_attach,
+		.pasid = IOMMU_PASID_INVALID,
 	};
 
 	rc = iommufd_device_change_pt(idev, pt_id, &data);
@@ -718,6 +732,7 @@  int iommufd_device_replace(struct iommufd_device *idev, u32 *pt_id)
 {
 	struct attach_data data = {
 		.attach_fn = &iommufd_device_do_replace,
+		.pasid = IOMMU_PASID_INVALID,
 	};
 
 	return iommufd_device_change_pt(idev, pt_id, &data);
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index 24fee2c37ce8..d37b7d0bfffe 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -349,6 +349,7 @@  struct iommufd_device {
 	struct list_head group_item;
 	/* always the physical device */
 	struct device *dev;
+	struct xarray pasid_hwpts;
 	bool enforce_cache_coherency;
 };
 
@@ -368,9 +369,23 @@  struct attach_data {
 		struct iommufd_hw_pagetable *(*attach_fn)(
 				struct iommufd_device *idev,
 				struct iommufd_hw_pagetable *hwpt);
+		struct iommufd_hw_pagetable *(*pasid_attach_fn)(
+				struct iommufd_device *idev, u32 pasid,
+				struct iommufd_hw_pagetable *hwpt);
 	};
+	u32 pasid;
 };
 
+int iommufd_device_change_pt(struct iommufd_device *idev, u32 *pt_id,
+			     struct attach_data *data);
+
+struct iommufd_hw_pagetable *
+iommufd_device_pasid_do_attach(struct iommufd_device *idev, u32 pasid,
+			       struct iommufd_hw_pagetable *hwpt);
+struct iommufd_hw_pagetable *
+iommufd_device_pasid_do_replace(struct iommufd_device *idev, u32 pasid,
+				struct iommufd_hw_pagetable *hwpt);
+
 struct iommufd_access {
 	struct iommufd_object obj;
 	struct iommufd_ctx *ictx;
diff --git a/drivers/iommu/iommufd/pasid.c b/drivers/iommu/iommufd/pasid.c
new file mode 100644
index 000000000000..75499a1d92a1
--- /dev/null
+++ b/drivers/iommu/iommufd/pasid.c
@@ -0,0 +1,138 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2023, Intel Corporation
+ */
+#include <linux/iommufd.h>
+#include <linux/iommu.h>
+#include "../iommu-priv.h"
+
+#include "iommufd_private.h"
+
+static int __iommufd_device_pasid_do_attach(struct iommufd_device *idev,
+					    u32 pasid,
+					    struct iommufd_hw_pagetable *hwpt,
+					    bool replace)
+{
+	int rc;
+
+	if (!replace)
+		rc = iommu_attach_device_pasid(hwpt->domain, idev->dev, pasid);
+	else
+		rc = iommu_replace_device_pasid(hwpt->domain, idev->dev, pasid);
+	if (rc)
+		return rc;
+
+	refcount_inc(&hwpt->obj.users);
+	xa_store(&idev->pasid_hwpts, pasid, hwpt, GFP_KERNEL);
+	return 0;
+}
+
+struct iommufd_hw_pagetable *
+iommufd_device_pasid_do_attach(struct iommufd_device *idev, u32 pasid,
+			       struct iommufd_hw_pagetable *hwpt)
+{
+	struct iommufd_hw_pagetable *old_hwpt;
+	int rc;
+
+	old_hwpt = xa_load(&idev->pasid_hwpts, pasid);
+	if (old_hwpt) {
+		/* Attach does not allow overwrite */
+		if (old_hwpt == hwpt)
+			return NULL;
+		else
+			return ERR_PTR(-EINVAL);
+	}
+
+	rc = __iommufd_device_pasid_do_attach(idev, pasid, hwpt, false);
+	return rc ? ERR_PTR(rc) : NULL;
+}
+
+struct iommufd_hw_pagetable *
+iommufd_device_pasid_do_replace(struct iommufd_device *idev, u32 pasid,
+				struct iommufd_hw_pagetable *hwpt)
+{
+	struct iommufd_hw_pagetable *old_hwpt;
+	int rc;
+
+	old_hwpt = xa_load(&idev->pasid_hwpts, pasid);
+	if (!old_hwpt)
+		return ERR_PTR(-EINVAL);
+
+	if (hwpt == old_hwpt)
+		return NULL;
+
+	rc = __iommufd_device_pasid_do_attach(idev, pasid, hwpt, true);
+	/* Caller must destroy old_hwpt */
+	return rc ? ERR_PTR(rc) : old_hwpt;
+}
+
+/**
+ * iommufd_device_pasid_attach - Connect a {device, pasid} to an iommu_domain
+ * @idev: device to attach
+ * @pasid: pasid to attach
+ * @pt_id: Input a IOMMUFD_OBJ_IOAS, or IOMMUFD_OBJ_HW_PAGETABLE
+ *         Output the IOMMUFD_OBJ_HW_PAGETABLE ID
+ *
+ * This connects a pasid of the device to an iommu_domain. Once this
+ * completes the device could do DMA with the pasid.
+ *
+ * This function is undone by calling iommufd_device_pasid_detach().
+ */
+int iommufd_device_pasid_attach(struct iommufd_device *idev,
+				u32 pasid, u32 *pt_id)
+{
+	struct attach_data data = {
+		.pasid_attach_fn = &iommufd_device_pasid_do_attach,
+		.pasid = pasid,
+	};
+
+	return iommufd_device_change_pt(idev, pt_id, &data);
+}
+EXPORT_SYMBOL_NS_GPL(iommufd_device_pasid_attach, IOMMUFD);
+
+/**
+ * iommufd_device_pasid_replace - Change the {device, pasid}'s iommu_domain
+ * @idev: device to change
+ * @pasid: pasid to change
+ * @pt_id: Input a IOMMUFD_OBJ_IOAS, or IOMMUFD_OBJ_HW_PAGETABLE
+ *         Output the IOMMUFD_OBJ_HW_PAGETABLE ID
+ *
+ * This is the same as
+ *   iommufd_device_pasid_detach();
+ *   iommufd_device_pasid_attach();
+ *
+ * If it fails then no change is made to the attachment. The iommu driver may
+ * implement this so there is no disruption in translation. This can only be
+ * called if iommufd_device_pasid_attach() has already succeeded.
+ */
+int iommufd_device_pasid_replace(struct iommufd_device *idev,
+				 u32 pasid, u32 *pt_id)
+{
+	struct attach_data data = {
+		.pasid_attach_fn = &iommufd_device_pasid_do_replace,
+		.pasid = pasid,
+	};
+
+	return iommufd_device_change_pt(idev, pt_id, &data);
+}
+EXPORT_SYMBOL_NS_GPL(iommufd_device_pasid_replace, IOMMUFD);
+
+/**
+ * iommufd_device_pasid_detach - Disconnect a {device, pasid} from an iommu_domain
+ * @idev: device to detach
+ * @pasid: pasid to detach
+ *
+ * Undo iommufd_device_pasid_attach(). This disconnects the idev/pasid from
+ * the previously attached pt_id.
+ */
+void iommufd_device_pasid_detach(struct iommufd_device *idev, u32 pasid)
+{
+	struct iommufd_hw_pagetable *hwpt;
+
+	hwpt = xa_load(&idev->pasid_hwpts, pasid);
+	if (!hwpt)
+		return;
+	xa_erase(&idev->pasid_hwpts, pasid);
+	iommu_detach_device_pasid(hwpt->domain, idev->dev, pasid);
+	iommufd_hw_pagetable_put(idev->ictx, hwpt);
+}
+EXPORT_SYMBOL_NS_GPL(iommufd_device_pasid_detach, IOMMUFD);
diff --git a/include/linux/iommufd.h b/include/linux/iommufd.h
index ffc3a949f837..0b007c376306 100644
--- a/include/linux/iommufd.h
+++ b/include/linux/iommufd.h
@@ -26,6 +26,12 @@  int iommufd_device_attach(struct iommufd_device *idev, u32 *pt_id);
 int iommufd_device_replace(struct iommufd_device *idev, u32 *pt_id);
 void iommufd_device_detach(struct iommufd_device *idev);
 
+int iommufd_device_pasid_attach(struct iommufd_device *idev,
+				u32 pasid, u32 *pt_id);
+int iommufd_device_pasid_replace(struct iommufd_device *idev,
+				 u32 pasid, u32 *pt_id);
+void iommufd_device_pasid_detach(struct iommufd_device *idev, u32 pasid);
+
 struct iommufd_ctx *iommufd_device_to_ictx(struct iommufd_device *idev);
 u32 iommufd_device_to_id(struct iommufd_device *idev);