Message ID | 20230511143844.22693-2-yi.l.liu@intel.com |
---|---|
State | New |
Headers |
From: Yi Liu <yi.l.liu@intel.com>
To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com
Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com
Subject: [PATCH v2 01/11] iommu: Add new iommu op to create domains owned by userspace
Date: Thu, 11 May 2023 07:38:34 -0700
Message-Id: <20230511143844.22693-2-yi.l.liu@intel.com>
In-Reply-To: <20230511143844.22693-1-yi.l.liu@intel.com>
References: <20230511143844.22693-1-yi.l.liu@intel.com> |
Series |
iommufd: Add nesting infrastructure
|
Commit Message
Yi Liu
May 11, 2023, 2:38 p.m. UTC
From: Lu Baolu <baolu.lu@linux.intel.com>

Introduce a new iommu_domain op to create domains owned by userspace, e.g. through iommufd. These domains have a few different properties compared to kernel-owned domains:

- They may be UNMANAGED domains, but created with special parameters. For instance aperture size changes/number of levels, different IOPTE formats, or other things necessary to make a vIOMMU work

- We have to track all the memory allocations with GFP_KERNEL_ACCOUNT to make the cgroup sandbox stronger

- Device-specialty domains, such as NESTED domains, can be created by iommufd

The new op clearly says the domain is being created by IOMMUFD, that the domain is intended for userspace use, and it provides a way to pass a driver-specific uAPI structure to customize the created domain to exactly what the vIOMMU userspace driver requires. iommu drivers that cannot support VFIO/IOMMUFD should not support this op. This includes any driver that cannot provide a fully functional UNMANAGED domain.

This op chooses to make the special parameters opaque to the core. This suits the current usage model, where accessing any of the IOMMU device special parameters does require a userspace driver that matches the kernel driver. If a need for common parameters, implemented similarly by several drivers, arises, then there is room in the design to grow a generic parameter set as well.

For now this new op is only supposed to be used by iommufd, hence there is no wrapper for it; iommufd would call the callback directly. As for domain free, iommufd would use iommu_domain_free().

Also, add an op to return the length of the supported user data structures, which must be added to the include/uapi/linux/iommufd.h file. This helps the iommufd core to sanitize the input data before it forwards the data to an iommu driver.
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Co-developed-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 include/linux/iommu.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
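The allocation flow described above (the core sanitizing user data against a driver-reported length before calling the driver's allocation callback directly) can be sketched in self-contained userspace C. Everything below other than the two op names is hypothetical scaffolding invented for illustration, not the actual iommufd code; in particular the mock driver, the 64-byte union size, and core_alloc_hwpt() are assumptions made so the sketch compiles and runs on its own.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for the kernel types; names are illustrative only. */
struct iommu_domain { int type; };
union iommu_domain_user_data { unsigned char raw[64]; };

struct mock_ops {
	int (*domain_alloc_user_data_len)(unsigned int hwpt_type);
	struct iommu_domain *(*domain_alloc_user)(const union iommu_domain_user_data *data);
};

/* A mock driver supporting only hwpt_type 1, whose driver data is 16 bytes. */
static int mock_data_len(unsigned int hwpt_type)
{
	return hwpt_type == 1 ? 16 : -EOPNOTSUPP;
}

static struct iommu_domain mock_domain;
static struct iommu_domain *mock_alloc(const union iommu_domain_user_data *data)
{
	(void)data;
	return &mock_domain;
}

static const struct mock_ops ops = {
	.domain_alloc_user_data_len = mock_data_len,
	.domain_alloc_user = mock_alloc,
};

/*
 * Sketch of the core-side flow: ask the driver how much user data the
 * requested type needs, validate the user's buffer against it, then
 * forward the data to the driver's allocation callback.
 */
static int core_alloc_hwpt(unsigned int hwpt_type, const void *udata,
			   size_t udata_len, struct iommu_domain **out)
{
	union iommu_domain_user_data data;
	int len = ops.domain_alloc_user_data_len(hwpt_type);

	if (len < 0)
		return len;		/* e.g. -EOPNOTSUPP for unknown types */
	if (udata_len < (size_t)len)
		return -EINVAL;		/* user buffer too small for this type */

	memset(&data, 0, sizeof(data));
	memcpy(&data, udata, (size_t)len);
	*out = ops.domain_alloc_user(&data);
	return *out ? 0 : -ENOMEM;
}
```

This mirrors the division of labor the commit message argues for: the driver data stays opaque to the core, which only needs its length to sanitize the copy from userspace.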
Comments
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Thursday, May 11, 2023 10:39 PM
>
> @@ -229,6 +238,15 @@ struct iommu_iotlb_gather {
>  *   after use. Return the data buffer if success, or ERR_PTR on
>  *   failure.
>  * @domain_alloc: allocate iommu domain
> + * @domain_alloc_user: allocate user iommu domain
> + * @domain_alloc_user_data_len: return the required length of the user data
> + *                              to allocate a specific type user iommu domain.
> + *                              @hwpt_type is defined as enum iommu_hwpt_type
> + *                              in include/uapi/linux/iommufd.h. The returned
> + *                              length is the corresponding sizeof driver data
> + *                              structures in include/uapi/linux/iommufd.h.
> + *                              -EOPNOTSUPP would be returned if the input
> + *                              @hwpt_type is not supported by the driver.

Can this be merged with earlier @hw_info callback? That will already
report a list of supported hwpt types. is there a problem to further
describe the data length for each type in that interface?
On Fri, May 19, 2023 at 08:47:45AM +0000, Tian, Kevin wrote:
> > From: Liu, Yi L <yi.l.liu@intel.com>
> > Sent: Thursday, May 11, 2023 10:39 PM
> >
> > @@ -229,6 +238,15 @@ struct iommu_iotlb_gather {
> >  *   after use. Return the data buffer if success, or ERR_PTR on
> >  *   failure.
> >  * @domain_alloc: allocate iommu domain
> > + * @domain_alloc_user: allocate user iommu domain
> > + * @domain_alloc_user_data_len: return the required length of the user data
> > + *                              to allocate a specific type user iommu domain.
> > + *                              @hwpt_type is defined as enum iommu_hwpt_type
> > + *                              in include/uapi/linux/iommufd.h. The returned
> > + *                              length is the corresponding sizeof driver data
> > + *                              structures in include/uapi/linux/iommufd.h.
> > + *                              -EOPNOTSUPP would be returned if the input
> > + *                              @hwpt_type is not supported by the driver.
>
> Can this be merged with earlier @hw_info callback? That will already
> report a list of supported hwpt types. is there a problem to further
> describe the data length for each type in that interface?

Yi and I had a last minute talk before he sent this version
actually... This version of hw_info no longer reports a list
of supported hwpt types. We previously did that in a bitmap,
but we found that a bitmap will not be sufficient eventually
if there are more than 64 hwpt_types.

And this domain_alloc_user_data_len might not be necessary,
because in this version the IOMMUFD core doesn't really care
about the actual data_len since it copies the data into the
ucmd_buffer, i.e. we would probably only need a bool op like
"hwpt_type_is_supported".

Thanks
Nic
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Saturday, May 20, 2023 2:45 AM
>
> On Fri, May 19, 2023 at 08:47:45AM +0000, Tian, Kevin wrote:
> > > From: Liu, Yi L <yi.l.liu@intel.com>
> > > Sent: Thursday, May 11, 2023 10:39 PM
> > >
> > > @@ -229,6 +238,15 @@ struct iommu_iotlb_gather {
> > >  *   after use. Return the data buffer if success, or ERR_PTR on
> > >  *   failure.
> > >  * @domain_alloc: allocate iommu domain
> > > + * @domain_alloc_user: allocate user iommu domain
> > > + * @domain_alloc_user_data_len: return the required length of the user data
> > > + *                              to allocate a specific type user iommu domain.
> > > + *                              @hwpt_type is defined as enum iommu_hwpt_type
> > > + *                              in include/uapi/linux/iommufd.h. The returned
> > > + *                              length is the corresponding sizeof driver data
> > > + *                              structures in include/uapi/linux/iommufd.h.
> > > + *                              -EOPNOTSUPP would be returned if the input
> > > + *                              @hwpt_type is not supported by the driver.
> >
> > Can this be merged with earlier @hw_info callback? That will already
> > report a list of supported hwpt types. is there a problem to further
> > describe the data length for each type in that interface?
>
> Yi and I had a last minute talk before he sent this version
> actually... This version of hw_info no longer reports a list
> of supported hwpt types. We previously did that in a bitmap,
> but we found that a bitmap will not be sufficient eventually
> if there are more than 64 hwpt_types.
>
> And this domain_alloc_user_data_len might not be necessary,
> because in this version the IOMMUFD core doesn't really care
> about the actual data_len since it copies the data into the
> ucmd_buffer, i.e. we would probably only need a bool op like
> "hwpt_type_is_supported".

Or just pass to the @domain_alloc_user ops which should fail
if the type is not supported?
On Wed, May 24, 2023 at 05:02:19AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Saturday, May 20, 2023 2:45 AM
> >
> > On Fri, May 19, 2023 at 08:47:45AM +0000, Tian, Kevin wrote:
> > > > From: Liu, Yi L <yi.l.liu@intel.com>
> > > > Sent: Thursday, May 11, 2023 10:39 PM
> > > >
> > > > @@ -229,6 +238,15 @@ struct iommu_iotlb_gather {
> > > >  *   after use. Return the data buffer if success, or ERR_PTR on
> > > >  *   failure.
> > > >  * @domain_alloc: allocate iommu domain
> > > > + * @domain_alloc_user: allocate user iommu domain
> > > > + * @domain_alloc_user_data_len: return the required length of the user data
> > > > + *                              to allocate a specific type user iommu domain.
> > > > + *                              @hwpt_type is defined as enum iommu_hwpt_type
> > > > + *                              in include/uapi/linux/iommufd.h. The returned
> > > > + *                              length is the corresponding sizeof driver data
> > > > + *                              structures in include/uapi/linux/iommufd.h.
> > > > + *                              -EOPNOTSUPP would be returned if the input
> > > > + *                              @hwpt_type is not supported by the driver.
> > >
> > > Can this be merged with earlier @hw_info callback? That will already
> > > report a list of supported hwpt types. is there a problem to further
> > > describe the data length for each type in that interface?
> >
> > Yi and I had a last minute talk before he sent this version
> > actually... This version of hw_info no longer reports a list
> > of supported hwpt types. We previously did that in a bitmap,
> > but we found that a bitmap will not be sufficient eventually
> > if there are more than 64 hwpt_types.
> >
> > And this domain_alloc_user_data_len might not be necessary,
> > because in this version the IOMMUFD core doesn't really care
> > about the actual data_len since it copies the data into the
> > ucmd_buffer, i.e. we would probably only need a bool op like
> > "hwpt_type_is_supported".
>
> Or just pass to the @domain_alloc_user ops which should fail
> if the type is not supported?

The domain_alloc_user returns NULL, which then would be turned
into an ENOMEM error code. It might be confusing from the user
space perspective. Having an op at least allows the user space
to realize that something is wrong with the input structure?

Thanks
Nic
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Wednesday, May 24, 2023 1:24 PM
>
> On Wed, May 24, 2023 at 05:02:19AM +0000, Tian, Kevin wrote:
> > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > Sent: Saturday, May 20, 2023 2:45 AM
> > >
> > > On Fri, May 19, 2023 at 08:47:45AM +0000, Tian, Kevin wrote:
> > > > > From: Liu, Yi L <yi.l.liu@intel.com>
> > > > > Sent: Thursday, May 11, 2023 10:39 PM
> > > > >
> > > > > @@ -229,6 +238,15 @@ struct iommu_iotlb_gather {
> > > > >  *   after use. Return the data buffer if success, or ERR_PTR on
> > > > >  *   failure.
> > > > >  * @domain_alloc: allocate iommu domain
> > > > > + * @domain_alloc_user: allocate user iommu domain
> > > > > + * @domain_alloc_user_data_len: return the required length of the user data
> > > > > + *                              to allocate a specific type user iommu domain.
> > > > > + *                              @hwpt_type is defined as enum iommu_hwpt_type
> > > > > + *                              in include/uapi/linux/iommufd.h. The returned
> > > > > + *                              length is the corresponding sizeof driver data
> > > > > + *                              structures in include/uapi/linux/iommufd.h.
> > > > > + *                              -EOPNOTSUPP would be returned if the input
> > > > > + *                              @hwpt_type is not supported by the driver.
> > > >
> > > > Can this be merged with earlier @hw_info callback? That will already
> > > > report a list of supported hwpt types. is there a problem to further
> > > > describe the data length for each type in that interface?
> > >
> > > Yi and I had a last minute talk before he sent this version
> > > actually... This version of hw_info no longer reports a list
> > > of supported hwpt types. We previously did that in a bitmap,
> > > but we found that a bitmap will not be sufficient eventually
> > > if there are more than 64 hwpt_types.
> > >
> > > And this domain_alloc_user_data_len might not be necessary,
> > > because in this version the IOMMUFD core doesn't really care
> > > about the actual data_len since it copies the data into the
> > > ucmd_buffer, i.e. we would probably only need a bool op like
> > > "hwpt_type_is_supported".
> >
> > Or just pass to the @domain_alloc_user ops which should fail
> > if the type is not supported?
>
> The domain_alloc_user returns NULL, which then would be turned
> into an ENOMEM error code. It might be confusing from the user
> space perspective. Having an op at least allows the user space
> to realize that something is wrong with the input structure?

this is a new callback. any reason why it cannot be defined to
allow returning ERR_PTR?
On Wed, May 24, 2023 at 07:48:46AM +0000, Tian, Kevin wrote:
> > > > > >  *   after use. Return the data buffer if success, or ERR_PTR on
> > > > > >  *   failure.
> > > > > >  * @domain_alloc: allocate iommu domain
> > > > > > + * @domain_alloc_user: allocate user iommu domain
> > > > > > + * @domain_alloc_user_data_len: return the required length of the user data
> > > > > > + *                              to allocate a specific type user iommu domain.
> > > > > > + *                              @hwpt_type is defined as enum iommu_hwpt_type
> > > > > > + *                              in include/uapi/linux/iommufd.h. The returned
> > > > > > + *                              length is the corresponding sizeof driver data
> > > > > > + *                              structures in include/uapi/linux/iommufd.h.
> > > > > > + *                              -EOPNOTSUPP would be returned if the input
> > > > > > + *                              @hwpt_type is not supported by the driver.
> > > > >
> > > > > Can this be merged with earlier @hw_info callback? That will already
> > > > > report a list of supported hwpt types. is there a problem to further
> > > > > describe the data length for each type in that interface?
> > > >
> > > > Yi and I had a last minute talk before he sent this version
> > > > actually... This version of hw_info no longer reports a list
> > > > of supported hwpt types. We previously did that in a bitmap,
> > > > but we found that a bitmap will not be sufficient eventually
> > > > if there are more than 64 hwpt_types.
> > > >
> > > > And this domain_alloc_user_data_len might not be necessary,
> > > > because in this version the IOMMUFD core doesn't really care
> > > > about the actual data_len since it copies the data into the
> > > > ucmd_buffer, i.e. we would probably only need a bool op like
> > > > "hwpt_type_is_supported".
> > >
> > > Or just pass to the @domain_alloc_user ops which should fail
> > > if the type is not supported?
> >
> > The domain_alloc_user returns NULL, which then would be turned
> > into an ENOMEM error code. It might be confusing from the user
> > space perspective. Having an op at least allows the user space
> > to realize that something is wrong with the input structure?
>
> this is a new callback. any reason why it cannot be defined to
> allow returning ERR_PTR?

Upon a quick check, I think we could. Though it'd be slightly
mismatched with the domain_alloc op, it should be fine since
iommufd is likely to be the only caller.

So, I think we can just take the approach letting user space
try a hwpt_type and see if the ioctl would fail with -EINVAL.

Thanks
Nic
On Wed, May 24, 2023 at 06:41:41PM -0700, Nicolin Chen wrote:
> Upon a quick check, I think we could. Though it'd be slightly
> mismatched with the domain_alloc op, it should be fine since
> iommufd is likely to be the only caller.

Ideally the main op would return ERR_PTR too

Jason
On Tue, Jun 06, 2023 at 11:08:44AM -0300, Jason Gunthorpe wrote:
> On Wed, May 24, 2023 at 06:41:41PM -0700, Nicolin Chen wrote:
>
> > Upon a quick check, I think we could. Though it'd be slightly
> > mismatched with the domain_alloc op, it should be fine since
> > iommufd is likely to be the only caller.
>
> Ideally the main op would return ERR_PTR too

Yea. It just seems to be a bit painful to change it for that.

Worth a big series?

Thanks
Nic
On Tue, Jun 06, 2023 at 12:43:44PM -0700, Nicolin Chen wrote:
> On Tue, Jun 06, 2023 at 11:08:44AM -0300, Jason Gunthorpe wrote:
> > On Wed, May 24, 2023 at 06:41:41PM -0700, Nicolin Chen wrote:
> >
> > > Upon a quick check, I think we could. Though it'd be slightly
> > > mismatched with the domain_alloc op, it should be fine since
> > > iommufd is likely to be the only caller.
> >
> > Ideally the main op would return ERR_PTR too
>
> Yea. It just seems to be a bit painful to change it for that.
>
> Worth a big series?

Probably not..

Jason
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index a748d60206e7..7f2046fa53a3 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -220,6 +220,15 @@ struct iommu_iotlb_gather {
 	bool			queued;
 };
 
+/*
+ * The user data to allocate a specific type user iommu domain
+ *
+ * This includes the corresponding driver data structures in
+ * include/uapi/linux/iommufd.h.
+ */
+union iommu_domain_user_data {
+};
+
 /**
  * struct iommu_ops - iommu ops and capabilities
  * @capable: check capability
@@ -229,6 +238,15 @@ struct iommu_iotlb_gather {
  *	after use. Return the data buffer if success, or ERR_PTR on
  *	failure.
  * @domain_alloc: allocate iommu domain
+ * @domain_alloc_user: allocate user iommu domain
+ * @domain_alloc_user_data_len: return the required length of the user data
+ *                              to allocate a specific type user iommu domain.
+ *                              @hwpt_type is defined as enum iommu_hwpt_type
+ *                              in include/uapi/linux/iommufd.h. The returned
+ *                              length is the corresponding sizeof driver data
+ *                              structures in include/uapi/linux/iommufd.h.
+ *                              -EOPNOTSUPP would be returned if the input
+ *                              @hwpt_type is not supported by the driver.
  * @probe_device: Add device to iommu driver handling
  * @release_device: Remove device from iommu driver handling
  * @probe_finalize: Do final setup work after the device is added to an IOMMU
@@ -269,6 +287,10 @@ struct iommu_ops {
 	/* Domain allocation and freeing by the iommu driver */
 	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
+	struct iommu_domain *(*domain_alloc_user)(struct device *dev,
+						  struct iommu_domain *parent,
+						  const union iommu_domain_user_data *user_data);
+	int (*domain_alloc_user_data_len)(u32 hwpt_type);
 	struct iommu_device *(*probe_device)(struct device *dev);
 	void (*release_device)(struct device *dev);
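To complement the diff, here is a hypothetical driver-side sketch of the two new callbacks as self-contained userspace C. All names other than the op concepts (the hwpt type constants, mock_alloc_data, and both mock_ functions) are invented, and the signature is simplified by dropping the struct device argument; this only illustrates the pattern the commit message describes: the driver reports the expected data length per type, rejects unsupported types, and builds either a plain domain or a nested child when a parent is supplied.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative stand-ins; these are not the kernel definitions. */
enum { MOCK_HWPT_TYPE_DEFAULT, MOCK_HWPT_TYPE_NESTED };

struct mock_alloc_data { unsigned long s1_ptr; };	/* invented driver data */
union iommu_domain_user_data { struct mock_alloc_data mock; };

struct iommu_domain {
	int nested;
	struct iommu_domain *parent;
};

/* Report how much user data each hwpt type requires, or -EOPNOTSUPP. */
static int mock_domain_alloc_user_data_len(unsigned int hwpt_type)
{
	switch (hwpt_type) {
	case MOCK_HWPT_TYPE_DEFAULT:
		return 0;				/* no driver data needed */
	case MOCK_HWPT_TYPE_NESTED:
		return sizeof(struct mock_alloc_data);
	default:
		return -EOPNOTSUPP;
	}
}

/* Allocate a userspace-owned domain; a parent makes it a nested child. */
static struct iommu_domain *
mock_domain_alloc_user(struct iommu_domain *parent,
		       const union iommu_domain_user_data *user_data)
{
	struct iommu_domain *d;

	/* A nested domain requires the stage-1 user data to configure it. */
	if (parent && !user_data)
		return NULL;

	d = calloc(1, sizeof(*d));
	if (!d)
		return NULL;
	d->nested = parent != NULL;
	d->parent = parent;
	return d;
}
```

The core-side sanitization made possible by the length op is what keeps the user data opaque: the core never interprets mock_alloc_data, it only sizes and copies it before handing it to the driver.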