Message ID: 20230511143844.22693-1-yi.l.liu@intel.com
Headers:
  From: Yi Liu <yi.l.liu@intel.com>
  To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com
  Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com
  Subject: [PATCH v2 00/11] iommufd: Add nesting infrastructure
  Date: Thu, 11 May 2023 07:38:33 -0700
  Message-Id: <20230511143844.22693-1-yi.l.liu@intel.com>
Series: iommufd: Add nesting infrastructure
Message
Yi Liu
May 11, 2023, 2:38 p.m. UTC
Nested translation is a hardware feature supported by many modern IOMMUs.
It uses two stages (stage-1, stage-2) of address translation to reach the
physical address. The stage-1 translation table is owned by userspace
(e.g. by a guest OS), while stage-2 is owned by the kernel. Changes to the
stage-1 translation table must be followed by an IOTLB invalidation.

Take Intel VT-d as an example: the stage-1 translation table is the I/O
page table. As the diagram below shows, the guest I/O page table pointer
in GPA (guest physical address) is passed to the host and used to perform
the stage-1 address translation. Accordingly, modifications to present
mappings in the guest I/O page table must be followed by an IOTLB
invalidation.

        .-------------.  .---------------------------.
        |   vIOMMU    |  | Guest I/O page table      |
        |             |  '---------------------------'
        .----------------/
        | PASID Entry |--- PASID cache flush --+
        '-------------'                        |
        |             |                        V
        |             |           I/O page table pointer in GPA
        '-------------'
    Guest
    ------| Shadow |--------------------------|--------
          v        v                          v
    Host
        .-------------.  .------------------------.
        |   pIOMMU    |  |  FS for GIOVA->GPA     |
        |             |  '------------------------'
        .----------------/  |
        | PASID Entry |     V (Nested xlate)
        '----------------\.----------------------------------.
        |             |   | SS for GPA->HPA, unmanaged domain|
        |             |   '----------------------------------'
        '-------------'
    Where:
     - FS = First stage page tables
     - SS = Second stage page tables

              <Intel VT-d Nested translation>

In IOMMUFD, all translation tables are tracked by hw_pagetable (hwpt), and
each has an iommu_domain allocated from the iommu driver. So in this
series hw_pagetable and iommu_domain mean the same thing unless noted
otherwise.

IOMMUFD already supports allocating a hw_pagetable that is linked with an
IOAS. However, nesting requires IOMMUFD to allow allocating a hw_pagetable
with driver-specific parameters, plus an interface to sync the stage-1
IOTLB since the user owns the stage-1 translation table.

This series is based on the iommu hw info reporting series [1]. It first
introduces a new iommu op for allocating domains with user data and an op
for syncing the stage-1 IOTLB, then extends the IOMMUFD internal
infrastructure to accept user_data and a parent hwpt and relay them to the
iommu core to allocate the iommu_domain. After that, it extends the
IOMMU_HWPT_ALLOC ioctl to accept user data and a stage-2 hwpt ID for hwpt
allocation. Along with it, the IOMMU_HWPT_INVALIDATE ioctl is added to
invalidate the stage-1 IOTLB, which is needed for user-managed hwpts.
Selftests are added as well to cover the new ioctls.

Complete code can be found in [2]; QEMU code can be found in [3].

Finally, this is joint work with Nicolin Chen and Lu Baolu. Thanks to them
for the help. ^_^

Looking forward to your feedback.

base-commit: cf905391237ded2331388e75adb5afbabeddc852

[1] https://lore.kernel.org/linux-iommu/20230511143024.19542-1-yi.l.liu@intel.com/
[2] https://github.com/yiliu1765/iommufd/tree/iommufd_nesting
[3] https://github.com/yiliu1765/qemu/tree/wip/iommufd_rfcv4.mig.reset.v4_var3%2Bnesting

Change log:

v2:
 - Add union iommu_domain_user_data to include all user data structures to
   avoid passing void * in kernel APIs
 - Add iommu op to return user data length for user domain allocation
 - Rename struct iommu_hwpt_alloc::data_type to be hwpt_type
 - Store the invalidation data length in
   iommu_domain_ops::cache_invalidate_user_data_len
 - Convert cache_invalidate_user op to be int instead of void
 - Remove @data_type in struct iommu_hwpt_invalidate
 - Remove out_hwpt_type_bitmap in struct iommu_hw_info hence drop patch 08
   of v1

v1: https://lore.kernel.org/linux-iommu/20230309080910.607396-1-yi.l.liu@intel.com/

Thanks,
	Yi Liu

Lu Baolu (2):
  iommu: Add new iommu op to create domains owned by userspace
  iommu: Add nested domain support

Nicolin Chen (5):
  iommufd/hw_pagetable: Do not populate user-managed hw_pagetables
  iommufd/selftest: Add domain_alloc_user() support in iommu mock
  iommufd/selftest: Add coverage for IOMMU_HWPT_ALLOC with user data
  iommufd/selftest: Add IOMMU_TEST_OP_MD_CHECK_IOTLB test op
  iommufd/selftest: Add coverage for IOMMU_HWPT_INVALIDATE ioctl

Yi Liu (4):
  iommufd/hw_pagetable: Use domain_alloc_user op for domain allocation
  iommufd: Pass parent hwpt and user_data to iommufd_hw_pagetable_alloc()
  iommufd: IOMMU_HWPT_ALLOC allocation with user data
  iommufd: Add IOMMU_HWPT_INVALIDATE

 drivers/iommu/iommufd/device.c                |   2 +-
 drivers/iommu/iommufd/hw_pagetable.c          | 191 +++++++++++++++++-
 drivers/iommu/iommufd/iommufd_private.h       |  16 +-
 drivers/iommu/iommufd/iommufd_test.h          |  30 +++
 drivers/iommu/iommufd/main.c                  |   5 +-
 drivers/iommu/iommufd/selftest.c              | 119 ++++++++++-
 include/linux/iommu.h                         |  36 ++++
 include/uapi/linux/iommufd.h                  |  58 +++++-
 tools/testing/selftests/iommu/iommufd.c       | 126 +++++++++++-
 tools/testing/selftests/iommu/iommufd_utils.h |  70 +++++++
 10 files changed, 629 insertions(+), 24 deletions(-)
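To make the flow above concrete, here is a rough userspace-side sketch of
how a VMM would drive the two new ioctls. The struct layouts, field names
and the invalidation format are illustrative assumptions distilled from
this cover letter and changelog, not the actual uAPI of the series (see
include/uapi/linux/iommufd.h in [2] for the real definitions):

#include <stdint.h>

/* Stand-in for struct iommu_hwpt_alloc as described above (assumption). */
struct hwpt_alloc_sketch {
	uint32_t size;          /* sizeof(this struct), for extensibility */
	uint32_t dev_id;        /* iommufd device the hwpt is for */
	uint32_t pt_id;         /* IOAS id, or stage-2 hwpt id when nesting */
	uint32_t hwpt_type;     /* vendor stage-1 type, e.g. a VT-d S1 type */
	uint32_t data_len;      /* length of the vendor-specific data */
	uint32_t out_hwpt_id;   /* returned hwpt handle */
	uint64_t data_uptr;     /* vendor-specific stage-1 configuration */
};

/* Stand-in for struct iommu_hwpt_invalidate (assumption). */
struct hwpt_invalidate_sketch {
	uint32_t size;
	uint32_t hwpt_id;       /* user-managed stage-1 hwpt to invalidate */
	uint32_t data_len;      /* length of the invalidation descriptor */
	uint32_t __reserved;
	uint64_t data_uptr;     /* vendor-specific invalidation request */
};

/*
 * Conceptual sequence for a VMM:
 *  1. Allocate a kernel-managed stage-2 hwpt from an IOAS (GPA -> HPA).
 *  2. Allocate a user-managed stage-1 hwpt with vendor data, passing the
 *     stage-2 hwpt id in pt_id so the two are nested.
 *  3. Trap guest IOTLB flushes and forward them via IOMMU_HWPT_INVALIDATE.
 */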
Comments
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Thursday, May 11, 2023 10:39 PM
>
> Lu Baolu (2):
>   iommu: Add new iommu op to create domains owned by userspace
>   iommu: Add nested domain support
>
> Nicolin Chen (5):
>   iommufd/hw_pagetable: Do not populate user-managed hw_pagetables
>   iommufd/selftest: Add domain_alloc_user() support in iommu mock
>   iommufd/selftest: Add coverage for IOMMU_HWPT_ALLOC with user data
>   iommufd/selftest: Add IOMMU_TEST_OP_MD_CHECK_IOTLB test op
>   iommufd/selftest: Add coverage for IOMMU_HWPT_INVALIDATE ioctl
>
> Yi Liu (4):
>   iommufd/hw_pagetable: Use domain_alloc_user op for domain allocation
>   iommufd: Pass parent hwpt and user_data to iommufd_hw_pagetable_alloc()
>   iommufd: IOMMU_HWPT_ALLOC allocation with user data
>   iommufd: Add IOMMU_HWPT_INVALIDATE

I didn't see any change in iommufd_hw_pagetable_attach() to handle
stage-1 hwpt differently.

In concept whatever reserved regions exist on a device should be directly
reflected on the hwpt which the device is attached to.

So with nesting, presumably the reserved regions of the device have been
reported to userspace and it's the user's responsibility to avoid
allocating IOVA from those reserved regions in the stage-1 hwpt.

It's not necessary to add reserved regions to the IOAS of the parent hwpt
since the device doesn't access that address space after it's attached to
stage-1. The parent is used only for address translation on the iommu
side.

This series kind of ignores this fact, which is probably the reason why
you store an ioas pointer even in the stage-1 hwpt.

Thanks
Kevin
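The behaviour being questioned here can be summarised with a small
pseudo-C sketch. All types and helpers below are stand-ins invented for
illustration, not the actual iommufd code:

#include <stdbool.h>

struct ioas_sketch { int unused; };
struct hwpt_sketch {
	bool user_managed;              /* true for a nested stage-1 hwpt */
	struct ioas_sketch *ioas;       /* IOAS backing the hwpt, if any */
	struct hwpt_sketch *parent;     /* stage-2 parent of a nested hwpt */
};

/* Hypothetical helper: carve a device's reserved regions out of an IOAS. */
static int reserve_dev_regions(struct ioas_sketch *ioas) { (void)ioas; return 0; }

static int attach_resv_regions(struct hwpt_sketch *hwpt)
{
	if (hwpt->user_managed)
		/* The open question: reserve in the parent stage-2's IOAS
		 * anyway, or skip because the device no longer does DMA in
		 * that IOVA space once attached to stage-1? */
		return reserve_dev_regions(hwpt->parent->ioas);

	return reserve_dev_regions(hwpt->ioas);  /* normal kernel-managed hwpt */
}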
On Fri, May 19, 2023 at 09:56:04AM +0000, Tian, Kevin wrote:
> > From: Liu, Yi L <yi.l.liu@intel.com>
> > Sent: Thursday, May 11, 2023 10:39 PM
> >
> > Lu Baolu (2):
> >   iommu: Add new iommu op to create domains owned by userspace
> >   iommu: Add nested domain support
> >
> > Nicolin Chen (5):
> >   iommufd/hw_pagetable: Do not populate user-managed hw_pagetables
> >   iommufd/selftest: Add domain_alloc_user() support in iommu mock
> >   iommufd/selftest: Add coverage for IOMMU_HWPT_ALLOC with user data
> >   iommufd/selftest: Add IOMMU_TEST_OP_MD_CHECK_IOTLB test op
> >   iommufd/selftest: Add coverage for IOMMU_HWPT_INVALIDATE ioctl
> >
> > Yi Liu (4):
> >   iommufd/hw_pagetable: Use domain_alloc_user op for domain allocation
> >   iommufd: Pass parent hwpt and user_data to iommufd_hw_pagetable_alloc()
> >   iommufd: IOMMU_HWPT_ALLOC allocation with user data
> >   iommufd: Add IOMMU_HWPT_INVALIDATE
>
> I didn't see any change in iommufd_hw_pagetable_attach() to handle
> stage-1 hwpt differently.
>
> In concept whatever reserved regions existing on a device should be
> directly reflected on the hwpt which the device is attached to.
>
> So with nesting presumably the reserved regions of the device have
> been reported to the userspace and it's user's responsibility to avoid
> allocating IOVA from those reserved regions in stage-1 hwpt.

Presumably

> It's not necessarily to add reserved regions to the IOAS of the parent
> hwpt since the device doesn't access that address space after it's
> attached to stage-1. The parent is used only for address translation
> in the iommu side.

But if we don't put them in the IOAS of the parent there is no way for
userspace to learn what they are to forward to the VM ?

Since we expect the parent IOAS to be usable in an identity mode I
think they should be added, at least I can't see a reason not to add
them.

Which is definitely complicating some parts of this..

Jason
> From: Jason Gunthorpe <jgg@nvidia.com> > Sent: Friday, May 19, 2023 7:50 PM > > On Fri, May 19, 2023 at 09:56:04AM +0000, Tian, Kevin wrote: > > > From: Liu, Yi L <yi.l.liu@intel.com> > > > Sent: Thursday, May 11, 2023 10:39 PM > > > > > > Lu Baolu (2): > > > iommu: Add new iommu op to create domains owned by userspace > > > iommu: Add nested domain support > > > > > > Nicolin Chen (5): > > > iommufd/hw_pagetable: Do not populate user-managed hw_pagetables > > > iommufd/selftest: Add domain_alloc_user() support in iommu mock > > > iommufd/selftest: Add coverage for IOMMU_HWPT_ALLOC with user > data > > > iommufd/selftest: Add IOMMU_TEST_OP_MD_CHECK_IOTLB test op > > > iommufd/selftest: Add coverage for IOMMU_HWPT_INVALIDATE ioctl > > > > > > Yi Liu (4): > > > iommufd/hw_pagetable: Use domain_alloc_user op for domain > allocation > > > iommufd: Pass parent hwpt and user_data to > > > iommufd_hw_pagetable_alloc() > > > iommufd: IOMMU_HWPT_ALLOC allocation with user data > > > iommufd: Add IOMMU_HWPT_INVALIDATE > > > > > > > I didn't see any change in iommufd_hw_pagetable_attach() to handle > > stage-1 hwpt differently. > > > > In concept whatever reserved regions existing on a device should be > > directly reflected on the hwpt which the device is attached to. > > > > So with nesting presumably the reserved regions of the device have > > been reported to the userspace and it's user's responsibility to avoid > > allocating IOVA from those reserved regions in stage-1 hwpt. > > Presumably > > > It's not necessarily to add reserved regions to the IOAS of the parent > > hwpt since the device doesn't access that address space after it's > > attached to stage-1. The parent is used only for address translation > > in the iommu side. > > But if we don't put them in the IOAS of the parent there is no way for > userspace to learn what they are to forward to the VM ? emmm I wonder whether that is the right interface to report per-device reserved regions. e.g. does it imply that all devices will be reported to the guest with the exact same set of reserved regions merged in the parent IOAS? it works but looks unclear in concept. By definition the list of reserved regions on a device should be static/fixed instead of being dynamic upon which IOAS this device is attached to and how many other devices are sharing the same IOAS... IOAS_IOVA_RANGES kind of follows what vfio type1 provides today IMHO probably we should have DEVICE_IOVA_RANGES in the first place instead of doing it via IOAS_IOVA_RANGES which is then described as being dynamic upon the list of currently attached devices. > > Since we expect the parent IOAS to be usable in an identity mode I > think they should be added, at least I can't see a reason not to add > them. this is a good point. for SMMU this sounds a must-have as identity mode is configured in CD with nested translation always enabled. It is out of the host awareness hence reserved regions must be added to the parent IOAS. for VT-d identity must be configured explicitly and the hardware doesn't support stage-1 identity in nested mode. It essentially means not using nested translation and the user just explicitly attaches the associated RID or {RID, PASID} to the parent IOAS then get reserved regions covered already. With that it makes more sense to make it a vendor specific choice. Probably can have a flag bit when creating nested hwpt to mark that identity mode might be used in this nested configuration then iommufd should add device reserved regions to the parent IOAS? 
> > Which is definately complicating some parts of this.. > > Jason
On Wed, May 24, 2023 at 03:48:43AM +0000, Tian, Kevin wrote: > > From: Jason Gunthorpe <jgg@nvidia.com> > > Sent: Friday, May 19, 2023 7:50 PM > > > > On Fri, May 19, 2023 at 09:56:04AM +0000, Tian, Kevin wrote: > > > > From: Liu, Yi L <yi.l.liu@intel.com> > > > > Sent: Thursday, May 11, 2023 10:39 PM > > > > > > > > Lu Baolu (2): > > > > iommu: Add new iommu op to create domains owned by userspace > > > > iommu: Add nested domain support > > > > > > > > Nicolin Chen (5): > > > > iommufd/hw_pagetable: Do not populate user-managed hw_pagetables > > > > iommufd/selftest: Add domain_alloc_user() support in iommu mock > > > > iommufd/selftest: Add coverage for IOMMU_HWPT_ALLOC with user > > data > > > > iommufd/selftest: Add IOMMU_TEST_OP_MD_CHECK_IOTLB test op > > > > iommufd/selftest: Add coverage for IOMMU_HWPT_INVALIDATE ioctl > > > > > > > > Yi Liu (4): > > > > iommufd/hw_pagetable: Use domain_alloc_user op for domain > > allocation > > > > iommufd: Pass parent hwpt and user_data to > > > > iommufd_hw_pagetable_alloc() > > > > iommufd: IOMMU_HWPT_ALLOC allocation with user data > > > > iommufd: Add IOMMU_HWPT_INVALIDATE > > > > > > > > > > I didn't see any change in iommufd_hw_pagetable_attach() to handle > > > stage-1 hwpt differently. > > > > > > In concept whatever reserved regions existing on a device should be > > > directly reflected on the hwpt which the device is attached to. > > > > > > So with nesting presumably the reserved regions of the device have > > > been reported to the userspace and it's user's responsibility to avoid > > > allocating IOVA from those reserved regions in stage-1 hwpt. > > > > Presumably > > > > > It's not necessarily to add reserved regions to the IOAS of the parent > > > hwpt since the device doesn't access that address space after it's > > > attached to stage-1. The parent is used only for address translation > > > in the iommu side. > > > > But if we don't put them in the IOAS of the parent there is no way for > > userspace to learn what they are to forward to the VM ? > > emmm I wonder whether that is the right interface to report > per-device reserved regions. The iommu driver needs to report different reserved regions for the S1 and S2 iommu_domains, and the IOAS should only get the reserved regions for the S2. Currently the API has no way to report per-domain reserved regions and that is possibly OK for now. The S2 really doesn't have reserved regions beyond the domain aperture. So an ioctl to directly query the reserved regions for a dev_id makes sense. > > Since we expect the parent IOAS to be usable in an identity mode I > > think they should be added, at least I can't see a reason not to add > > them. > > this is a good point. But it mixes things The S2 doesn't have reserved ranges restrictions, we always have some model of a S1, even for identity mode, that would carry the reserved ranges. > With that it makes more sense to make it a vendor specific choice. It isn't vendor specific, the ranges come from the domain that is attached to the IOAS, and we simply don't import ranges for a S2 domain. Jason
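A possible shape for the per-dev_id query suggested above, purely as a
sketch; none of these names exist in the series or in
include/uapi/linux/iommufd.h at this point:

#include <stdint.h>

struct dev_resv_region_sketch {
	uint64_t start;            /* first IOVA of the reserved range */
	uint64_t last;             /* last IOVA of the reserved range */
	uint32_t flags;            /* kind of region, e.g. direct-map or SW MSI */
	uint32_t __reserved;
};

struct dev_get_resv_regions_sketch {
	uint32_t size;             /* sizeof(this struct), for extensibility */
	uint32_t dev_id;           /* iommufd device handle to query */
	uint32_t num_regions;      /* in: array capacity, out: regions found */
	uint32_t __reserved;
	uint64_t regions_uptr;     /* array of struct dev_resv_region_sketch */
};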
> From: Jason Gunthorpe <jgg@nvidia.com> > Sent: Tuesday, June 6, 2023 10:18 PM > > > > > It's not necessarily to add reserved regions to the IOAS of the parent > > > > hwpt since the device doesn't access that address space after it's > > > > attached to stage-1. The parent is used only for address translation > > > > in the iommu side. > > > > > > But if we don't put them in the IOAS of the parent there is no way for > > > userspace to learn what they are to forward to the VM ? > > > > emmm I wonder whether that is the right interface to report > > per-device reserved regions. > > The iommu driver needs to report different reserved regions for the S1 > and S2 iommu_domains, I can see the difference between RID and RID+PASID, but not sure whether it's a actual requirement regarding to attached domain. e.g. if only talking about RID then the same set of reserved regions should be reported for both S1 attach and S2 attach. > and the IOAS should only get the reserved regions for the S2. > > Currently the API has no way to report per-domain reserved regions and > that is possibly OK for now. The S2 really doesn't have reserved > regions beyond the domain aperture. > > So an ioctl to directly query the reserved regions for a dev_id makes > sense. Or more specifically query the reserved regions for RID-based access. Ideally for PASID there is no reserved region otherwise SVA won't work. 😊 > > > > Since we expect the parent IOAS to be usable in an identity mode I > > > think they should be added, at least I can't see a reason not to add > > > them. > > > > this is a good point. > > But it mixes things > > The S2 doesn't have reserved ranges restrictions, we always have some > model of a S1, even for identity mode, that would carry the reserved > ranges. > > > With that it makes more sense to make it a vendor specific choice. > > It isn't vendor specific, the ranges come from the domain that is > attached to the IOAS, and we simply don't import ranges for a S2 > domain. > With above I think the ranges are static per device. When talking about RID-based nesting alone, ARM needs to add reserved regions to the parent IOAS as identity is a valid S1 mode in nesting. But for Intel RID nesting excludes identity (which becomes a direct attach to S2) so the reserved regions apply to S1 instead of the parent IOAS.
On Fri, Jun 16, 2023 at 02:43:13AM +0000, Tian, Kevin wrote: > > From: Jason Gunthorpe <jgg@nvidia.com> > > Sent: Tuesday, June 6, 2023 10:18 PM > > > > > > > It's not necessarily to add reserved regions to the IOAS of the parent > > > > > hwpt since the device doesn't access that address space after it's > > > > > attached to stage-1. The parent is used only for address translation > > > > > in the iommu side. > > > > > > > > But if we don't put them in the IOAS of the parent there is no way for > > > > userspace to learn what they are to forward to the VM ? > > > > > > emmm I wonder whether that is the right interface to report > > > per-device reserved regions. > > > > The iommu driver needs to report different reserved regions for the S1 > > and S2 iommu_domains, > > I can see the difference between RID and RID+PASID, but not sure whether > it's a actual requirement regarding to attached domain. No, it isn't RID or RID+PASID here The S2 has a different set of reserved regsions than the S1 because the S2's IOVA does not appear on the bus. So the S2's reserved regions are entirely an artifact of how the IOMMU HW itself works when nesting. We can probably get by with some documented slightly messy rules that the reserved_regions only applies to directly RID attached domains. S2 and PASID attachments always have no reserved spaces. > When talking about RID-based nesting alone, ARM needs to add reserved > regions to the parent IOAS as identity is a valid S1 mode in nesting. No, definately not. The S2 has no reserved regions because it is an internal IOVA, and we should not abuse that. Reflecting the requirements for an identity map is something all iommu HW needs to handle, we should figure out how to do that properly. > But for Intel RID nesting excludes identity (which becomes a direct > attach to S2) so the reserved regions apply to S1 instead of the parent IOAS. IIRC all the HW models will assign their S2's as a RID attached "S1" during boot time to emulate "no translation"? They all need to learn what the allowed identiy mapping is so that the VMM can construct a compatible guest address space, independently of any IOAS restrictions. Jason
> From: Jason Gunthorpe <jgg@nvidia.com> > Sent: Monday, June 19, 2023 8:37 PM > > On Fri, Jun 16, 2023 at 02:43:13AM +0000, Tian, Kevin wrote: > > > From: Jason Gunthorpe <jgg@nvidia.com> > > > Sent: Tuesday, June 6, 2023 10:18 PM > > > > > > > > > It's not necessarily to add reserved regions to the IOAS of the parent > > > > > > hwpt since the device doesn't access that address space after it's > > > > > > attached to stage-1. The parent is used only for address translation > > > > > > in the iommu side. > > > > > > > > > > But if we don't put them in the IOAS of the parent there is no way for > > > > > userspace to learn what they are to forward to the VM ? > > > > > > > > emmm I wonder whether that is the right interface to report > > > > per-device reserved regions. > > > > > > The iommu driver needs to report different reserved regions for the S1 > > > and S2 iommu_domains, > > > > I can see the difference between RID and RID+PASID, but not sure whether > > it's a actual requirement regarding to attached domain. > > No, it isn't RID or RID+PASID here > > The S2 has a different set of reserved regsions than the S1 because > the S2's IOVA does not appear on the bus. > > So the S2's reserved regions are entirely an artifact of how the IOMMU > HW itself works when nesting. > > We can probably get by with some documented slightly messy rules that > the reserved_regions only applies to directly RID attached domains. S2 > and PASID attachments always have no reserved spaces. > > > When talking about RID-based nesting alone, ARM needs to add reserved > > regions to the parent IOAS as identity is a valid S1 mode in nesting. > > No, definately not. The S2 has no reserved regions because it is an > internal IOVA, and we should not abuse that. > > Reflecting the requirements for an identity map is something all iommu > HW needs to handle, we should figure out how to do that properly. I wonder whether we have argued passed each other. This series adds reserved regions to S2. I challenged the necessity as S2 is not directly accessed by the device. Then you replied that doing so still made sense to support identity S1. But now looks you also agree that reserved regions should not be added to S2 except supporting identity S1 needs more thought? > > > But for Intel RID nesting excludes identity (which becomes a direct > > attach to S2) so the reserved regions apply to S1 instead of the parent IOAS. > > IIRC all the HW models will assign their S2's as a RID attached "S1" > during boot time to emulate "no translation"? I'm not sure what it means... > > They all need to learn what the allowed identiy mapping is so that the > VMM can construct a compatible guest address space, independently of > any IOAS restrictions. > Intel VT-d supports 4 configurations: - passthrough (i.e. identity mapped) - S1 only - S2 only - nested 'S2 only' is used when vIOMMU is configured in passthrough. 'nested' is used when vIOMMU is configured in 'S1 only'. So in any case 'identity' is not a business of nesting in the VT-d context. My understanding of ARM SMMU is that from host p.o.v. the CD is the S1 in the nested configuration. 'identity' is one configuration in the CD then it's in the business of nesting. My preference was that ALLOC_HWPT allows vIOMMU to opt whether reserved regions of dev_id should be added to the IOAS of the parent S2 hwpt.
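The four VT-d configurations listed above map onto the host-side attach
roughly as below; the enum names are invented for this sketch and carry no
uAPI meaning:

/* Guest-visible vIOMMU mode vs. the host-side configuration it implies. */
enum viommu_mode_sketch { VIOMMU_PASSTHROUGH, VIOMMU_S1_ONLY };
enum host_cfg_sketch    { HOST_S2_ONLY, HOST_NESTED };

static enum host_cfg_sketch pick_host_cfg(enum viommu_mode_sketch mode)
{
	/* vIOMMU passthrough  -> attach the RID directly to the S2 hwpt;
	 * vIOMMU stage-1 only -> nested: guest S1 on top of the host S2. */
	return mode == VIOMMU_PASSTHROUGH ? HOST_S2_ONLY : HOST_NESTED;
}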
On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote: > I wonder whether we have argued passed each other. > > This series adds reserved regions to S2. I challenged the necessity as > S2 is not directly accessed by the device. > > Then you replied that doing so still made sense to support identity > S1. I think I said/ment if we attach the "s2" iommu domain as a direct attach for identity - eg at boot time, then the IOAS must gain the reserved regions. This is our normal protocol. But when we use the "s2" iommu domain as an actual nested S2 then we don't gain reserved regions. > Intel VT-d supports 4 configurations: > - passthrough (i.e. identity mapped) > - S1 only > - S2 only > - nested > > 'S2 only' is used when vIOMMU is configured in passthrough. S2 only is modeled as attaching an S2 format iommu domain to the RID, and when this is done the IOAS should gain the reserved regions because it is no different behavior than attaching any other iommu domain to a RID. When the S2 is replaced with a S1 nest then the IOAS should loose those reserved regions since it is no longer attached to a RID. > My understanding of ARM SMMU is that from host p.o.v. the CD is the > S1 in the nested configuration. 'identity' is one configuration in the CD > then it's in the business of nesting. I think it is the same. A CD doesn't come into the picture until the guest installs a CD pointing STE. Until that time the S2 is being used as identity. It sounds like the same basic flow. > My preference was that ALLOC_HWPT allows vIOMMU to opt whether > reserved regions of dev_id should be added to the IOAS of the parent > S2 hwpt. Having an API to explicitly load reserved regions of a specific device to an IOAS makes some sense to me. Jason
> From: Jason Gunthorpe <jgg@nvidia.com> > Sent: Tuesday, June 20, 2023 8:47 PM > > On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote: > > I wonder whether we have argued passed each other. > > > > This series adds reserved regions to S2. I challenged the necessity as > > S2 is not directly accessed by the device. > > > > Then you replied that doing so still made sense to support identity > > S1. > > I think I said/ment if we attach the "s2" iommu domain as a direct > attach for identity - eg at boot time, then the IOAS must gain the > reserved regions. This is our normal protocol. > > But when we use the "s2" iommu domain as an actual nested S2 then we > don't gain reserved regions. Then we're aligned. Yi/Nicolin, please update this series to not automatically add reserved regions to S2 in the nesting configuration. It also implies that the user cannot rely on IOAS_IOVA_RANGES to learn reserved regions for arranging addresses in S1. Then we also need a new ioctl to report reserved regions per dev_id. > > > Intel VT-d supports 4 configurations: > > - passthrough (i.e. identity mapped) > > - S1 only > > - S2 only > > - nested > > > > 'S2 only' is used when vIOMMU is configured in passthrough. > > S2 only is modeled as attaching an S2 format iommu domain to the RID, > and when this is done the IOAS should gain the reserved regions > because it is no different behavior than attaching any other iommu > domain to a RID. > > When the S2 is replaced with a S1 nest then the IOAS should loose > those reserved regions since it is no longer attached to a RID. yes > > > My understanding of ARM SMMU is that from host p.o.v. the CD is the > > S1 in the nested configuration. 'identity' is one configuration in the CD > > then it's in the business of nesting. > > I think it is the same. A CD doesn't come into the picture until the > guest installs a CD pointing STE. Until that time the S2 is being used > as identity. > > It sounds like the same basic flow. After a CD table is installed in a STE I assume the SMMU still allows to configure an individual CD entry as identity? e.g. while vSVA is enabled on a device the guest can continue to keep CD#0 as identity when the default domain of the device is set as 'passthrough'. In this case the IOAS still needs to gain reserved regions even though S2 is not directly attached from host p.o.v. > > > My preference was that ALLOC_HWPT allows vIOMMU to opt whether > > reserved regions of dev_id should be added to the IOAS of the parent > > S2 hwpt. > > Having an API to explicitly load reserved regions of a specific device > to an IOAS makes some sense to me. > > Jason
> From: Tian, Kevin <kevin.tian@intel.com> > Sent: Wednesday, June 21, 2023 2:02 PM > > > From: Jason Gunthorpe <jgg@nvidia.com> > > Sent: Tuesday, June 20, 2023 8:47 PM > > > > On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote: > > > I wonder whether we have argued passed each other. > > > > > > This series adds reserved regions to S2. I challenged the necessity as > > > S2 is not directly accessed by the device. > > > > > > Then you replied that doing so still made sense to support identity > > > S1. > > > > I think I said/ment if we attach the "s2" iommu domain as a direct > > attach for identity - eg at boot time, then the IOAS must gain the > > reserved regions. This is our normal protocol. > > > > But when we use the "s2" iommu domain as an actual nested S2 then we > > don't gain reserved regions. > > Then we're aligned. > > Yi/Nicolin, please update this series to not automatically add reserved > regions to S2 in the nesting configuration. Got it. > It also implies that the user cannot rely on IOAS_IOVA_RANGES to > learn reserved regions for arranging addresses in S1. > > Then we also need a new ioctl to report reserved regions per dev_id. Shall we add it now? I suppose yes. > > > > > Intel VT-d supports 4 configurations: > > > - passthrough (i.e. identity mapped) > > > - S1 only > > > - S2 only > > > - nested > > > > > > 'S2 only' is used when vIOMMU is configured in passthrough. > > > > S2 only is modeled as attaching an S2 format iommu domain to the RID, > > and when this is done the IOAS should gain the reserved regions > > because it is no different behavior than attaching any other iommu > > domain to a RID. > > > > When the S2 is replaced with a S1 nest then the IOAS should loose > > those reserved regions since it is no longer attached to a RID. > > yes Makes sense. Regards, Yi Liu > > > > > > My understanding of ARM SMMU is that from host p.o.v. the CD is the > > > S1 in the nested configuration. 'identity' is one configuration in the CD > > > then it's in the business of nesting. > > > > I think it is the same. A CD doesn't come into the picture until the > > guest installs a CD pointing STE. Until that time the S2 is being used > > as identity. > > > > It sounds like the same basic flow. > > After a CD table is installed in a STE I assume the SMMU still allows to > configure an individual CD entry as identity? e.g. while vSVA is enabled > on a device the guest can continue to keep CD#0 as identity when the > default domain of the device is set as 'passthrough'. In this case the > IOAS still needs to gain reserved regions even though S2 is not directly > attached from host p.o.v. > > > > > > My preference was that ALLOC_HWPT allows vIOMMU to opt whether > > > reserved regions of dev_id should be added to the IOAS of the parent > > > S2 hwpt. > > > > Having an API to explicitly load reserved regions of a specific device > > to an IOAS makes some sense to me. > > > > Jason
>-----Original Message-----
>From: Jason Gunthorpe <jgg@nvidia.com>
>Sent: Tuesday, June 20, 2023 8:47 PM
>Subject: Re: [PATCH v2 00/11] iommufd: Add nesting infrastructure
>
>On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote:
>> I wonder whether we have argued passed each other.
>>
>> This series adds reserved regions to S2. I challenged the necessity as
>> S2 is not directly accessed by the device.
>>
>> Then you replied that doing so still made sense to support identity
>> S1.
>
>I think I said/ment if we attach the "s2" iommu domain as a direct attach for
>identity - eg at boot time, then the IOAS must gain the reserved regions. This is
>our normal protocol.

There is code to fail the attach for a device with RMRR in the intel iommu
driver. Do we plan to remove the below check for IOMMUFD sooner or later?

static int intel_iommu_attach_device(struct iommu_domain *domain,
				     struct device *dev)
{
	struct device_domain_info *info = dev_iommu_priv_get(dev);
	int ret;

	if (domain->type == IOMMU_DOMAIN_UNMANAGED &&
	    device_is_rmrr_locked(dev)) {
		dev_warn(dev, "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.\n");
		return -EPERM;
	}

Thanks
Zhenzhong
On Wed, Jun 21, 2023 at 06:02:21AM +0000, Tian, Kevin wrote:
> > > My understanding of ARM SMMU is that from host p.o.v. the CD is the
> > > S1 in the nested configuration. 'identity' is one configuration in the CD
> > > then it's in the business of nesting.
> >
> > I think it is the same. A CD doesn't come into the picture until the
> > guest installs a CD pointing STE. Until that time the S2 is being used
> > as identity.
> >
> > It sounds like the same basic flow.
>
> After a CD table is installed in a STE I assume the SMMU still allows to
> configure an individual CD entry as identity? e.g. while vSVA is enabled
> on a device the guest can continue to keep CD#0 as identity when the
> default domain of the device is set as 'passthrough'. In this case the
> IOAS still needs to gain reserved regions even though S2 is not directly
> attached from host p.o.v.

In any nesting configuration the hypervisor cannot directly restrict what
IOVA the guest will use. The VM could make a normal nest and try to use
unusable IOVA. Identity is not really special.

The VMM should construct the guest memory map so that an identity
iommu_domain can meet the reserved requirements - it needs to do this
anyhow for the initial boot part. It should try to forward the reserved
regions to the guest via ACPI/etc.

Being able to explicitly load reserved regions into an IOAS seems like a
useful way to help construct this.

Jason
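The "explicitly load reserved regions into an IOAS" idea could look
something like the following; the struct is invented here purely for
illustration and is not part of this series:

#include <stdint.h>

struct ioas_load_dev_resv_sketch {
	uint32_t size;         /* sizeof(this struct), for extensibility */
	uint32_t ioas_id;      /* IOAS that should gain the reserved ranges */
	uint32_t dev_id;       /* device whose reserved regions are imported */
	uint32_t __reserved;
};

/* After such a call, IOAS_IOVA_RANGES on ioas_id would again reflect the
 * device's reserved regions even while the device itself is attached to a
 * nested stage-1 hwpt. */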
On Wed, Jun 21, 2023 at 08:29:09AM +0000, Duan, Zhenzhong wrote: > >-----Original Message----- > >From: Jason Gunthorpe <jgg@nvidia.com> > >Sent: Tuesday, June 20, 2023 8:47 PM > >Subject: Re: [PATCH v2 00/11] iommufd: Add nesting infrastructure > > > >On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote: > >> I wonder whether we have argued passed each other. > >> > >> This series adds reserved regions to S2. I challenged the necessity as > >> S2 is not directly accessed by the device. > >> > >> Then you replied that doing so still made sense to support identity > >> S1. > > > >I think I said/ment if we attach the "s2" iommu domain as a direct attach for > >identity - eg at boot time, then the IOAS must gain the reserved regions. This is > >our normal protocol. > There is code to fail the attaching for device with RMRR in intel iommu driver, > do we plan to remove below check for IOMMUFD soon or later? > > static int intel_iommu_attach_device(struct iommu_domain *domain, > struct device *dev) > { > struct device_domain_info *info = dev_iommu_priv_get(dev); > int ret; > > if (domain->type == IOMMU_DOMAIN_UNMANAGED && > device_is_rmrr_locked(dev)) { > dev_warn(dev, "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.\n"); > return -EPERM; > } Not really, systems with RMRR cannot support VFIO at all. Baolu sent a series lifting this restriction up higher in the stack: https://lore.kernel.org/all/20230607035145.343698-1-baolu.lu@linux.intel.com/ Jason
On Wed, Jun 21, 2023 at 06:02:21AM +0000, Tian, Kevin wrote: > > On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote: > > > I wonder whether we have argued passed each other. > > > > > > This series adds reserved regions to S2. I challenged the necessity as > > > S2 is not directly accessed by the device. > > > > > > Then you replied that doing so still made sense to support identity > > > S1. > > > > I think I said/ment if we attach the "s2" iommu domain as a direct > > attach for identity - eg at boot time, then the IOAS must gain the > > reserved regions. This is our normal protocol. > > > > But when we use the "s2" iommu domain as an actual nested S2 then we > > don't gain reserved regions. > > Then we're aligned. > > Yi/Nicolin, please update this series to not automatically add reserved > regions to S2 in the nesting configuration. I'm a bit late for the conversation here. Yet, how about the IOMMU_RESV_SW_MSI on ARM in the nesting configuration? We'd still call iommufd_group_setup_msi() on the S2 HWPT, despite attaching the device to a nested S1 HWPT right? > It also implies that the user cannot rely on IOAS_IOVA_RANGES to > learn reserved regions for arranging addresses in S1. > > Then we also need a new ioctl to report reserved regions per dev_id. So, in a nesting configuration, QEMU would poll a device's S2 MSI region (i.e. IOMMU_RESV_SW_MSI) to prevent conflict? Thanks Nic
> From: Jason Gunthorpe <jgg@nvidia.com> > Sent: Wednesday, June 21, 2023 8:05 PM > > On Wed, Jun 21, 2023 at 06:02:21AM +0000, Tian, Kevin wrote: > > > > My understanding of ARM SMMU is that from host p.o.v. the CD is the > > > > S1 in the nested configuration. 'identity' is one configuration in the CD > > > > then it's in the business of nesting. > > > > > > I think it is the same. A CD doesn't come into the picture until the > > > guest installs a CD pointing STE. Until that time the S2 is being used > > > as identity. > > > > > > It sounds like the same basic flow. > > > > After a CD table is installed in a STE I assume the SMMU still allows to > > configure an individual CD entry as identity? e.g. while vSVA is enabled > > on a device the guest can continue to keep CD#0 as identity when the > > default domain of the device is set as 'passthrough'. In this case the > > IOAS still needs to gain reserved regions even though S2 is not directly > > attached from host p.o.v. > > In any nesting configuration the hypervisor cannot directly restrict > what IOVA the guest will use. The VM could make a normal nest and try > to use unusable IOVA. Identity is not really special. Sure. What I talked is the end result e.g. after the user explicitly requests to load reserved regions into an IOAS. > > The VMM should construct the guest memory map so that an identity > iommu_domain can meet the reserved requirements - it needs to do this > anyhow for the initial boot part. It shouuld try to forward the > reserved regions to the guest via ACPI/etc. Yes. > > Being able to explicitly load reserved regions into an IOAS seems like > a useful way to help construct this. > And it's correct in concept because the IOAS is 'implicitly' accessed by the device when the guest domain is identity in this case.
> From: Nicolin Chen <nicolinc@nvidia.com> > Sent: Thursday, June 22, 2023 1:13 AM > > On Wed, Jun 21, 2023 at 06:02:21AM +0000, Tian, Kevin wrote: > > > > On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote: > > > > I wonder whether we have argued passed each other. > > > > > > > > This series adds reserved regions to S2. I challenged the necessity as > > > > S2 is not directly accessed by the device. > > > > > > > > Then you replied that doing so still made sense to support identity > > > > S1. > > > > > > I think I said/ment if we attach the "s2" iommu domain as a direct > > > attach for identity - eg at boot time, then the IOAS must gain the > > > reserved regions. This is our normal protocol. > > > > > > But when we use the "s2" iommu domain as an actual nested S2 then we > > > don't gain reserved regions. > > > > Then we're aligned. > > > > Yi/Nicolin, please update this series to not automatically add reserved > > regions to S2 in the nesting configuration. > > I'm a bit late for the conversation here. Yet, how about the > IOMMU_RESV_SW_MSI on ARM in the nesting configuration? We'd > still call iommufd_group_setup_msi() on the S2 HWPT, despite > attaching the device to a nested S1 HWPT right? Yes, based on current design of ARM nesting. But please special case it instead of pretending that all reserved regions are added to IOAS which is wrong in concept based on the discussion. > > > It also implies that the user cannot rely on IOAS_IOVA_RANGES to > > learn reserved regions for arranging addresses in S1. > > > > Then we also need a new ioctl to report reserved regions per dev_id. > > So, in a nesting configuration, QEMU would poll a device's S2 > MSI region (i.e. IOMMU_RESV_SW_MSI) to prevent conflict? > Qemu needs to know all the reserved regions of the device and skip them when arranging S1 layout. I'm not sure whether the MSI region needs a special MSI type or just a general RESV_DIRECT type for 1:1 mapping, though.
On Mon, Jun 26, 2023 at 06:42:58AM +0000, Tian, Kevin wrote:
> I'm not sure whether the MSI region needs a special MSI type or
> just a general RESV_DIRECT type for 1:1 mapping, though.

It probably always needs a special type :(

Jason
On Mon, Jun 26, 2023 at 06:42:58AM +0000, Tian, Kevin wrote: > > > Yi/Nicolin, please update this series to not automatically add reserved > > > regions to S2 in the nesting configuration. > > > > I'm a bit late for the conversation here. Yet, how about the > > IOMMU_RESV_SW_MSI on ARM in the nesting configuration? We'd > > still call iommufd_group_setup_msi() on the S2 HWPT, despite > > attaching the device to a nested S1 HWPT right? > > Yes, based on current design of ARM nesting. > > But please special case it instead of pretending that all reserved regions > are added to IOAS which is wrong in concept based on the discussion. Ack. Yi made a version of change dropping it completely along with the iommufd_group_setup_msi() call for a nested S1 HWPT. So I thought there was a misalignment. I made another version preserving the pathway for MSI on ARM, and perhaps we should go with this one: https://github.com/nicolinc/iommufd/commit/c63829a12d35f2d7a390f42821a079f8a294cff8 > > > It also implies that the user cannot rely on IOAS_IOVA_RANGES to > > > learn reserved regions for arranging addresses in S1. > > > > > > Then we also need a new ioctl to report reserved regions per dev_id. > > > > So, in a nesting configuration, QEMU would poll a device's S2 > > MSI region (i.e. IOMMU_RESV_SW_MSI) to prevent conflict? > > > > Qemu needs to know all the reserved regions of the device and skip > them when arranging S1 layout. OK. > I'm not sure whether the MSI region needs a special MSI type or > just a general RESV_DIRECT type for 1:1 mapping, though. I don't quite get this part. Isn't MSI having IOMMU_RESV_MSI and IOMMU_RESV_SW_MSI? Or does it juset mean we should report the iommu_resv_type along with reserved regions in new ioctl? Thanks Nic
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Tuesday, June 27, 2023 1:29 AM
>
> > I'm not sure whether the MSI region needs a special MSI type or
> > just a general RESV_DIRECT type for 1:1 mapping, though.
>
> I don't quite get this part. Isn't MSI having IOMMU_RESV_MSI
> and IOMMU_RESV_SW_MSI? Or does it juset mean we should report
> the iommu_resv_type along with reserved regions in new ioctl?

Currently those are iommu internal types. When defining the new ioctl we
need to think about what is necessary to present to the user.

Probably just a list of reserved regions plus a flag to mark which one is
SW_MSI? Except for SW_MSI, all other reserved region types just need the
user to reserve them w/o knowing more detail.
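For reference, the kernel-internal types being discussed are the
iommu_resv_type values from include/linux/iommu.h (comments paraphrased
here); per the proposal above, only IOMMU_RESV_SW_MSI would need to be
called out specially to userspace:

enum iommu_resv_type {
	IOMMU_RESV_DIRECT,		/* must be mapped 1:1 at all times */
	IOMMU_RESV_DIRECT_RELAXABLE,	/* advertised as 1:1 but commonly
					 * relaxable, e.g. in some device
					 * assignment cases */
	IOMMU_RESV_RESERVED,		/* never map or hand to a device */
	IOMMU_RESV_MSI,			/* hardware MSI region (untranslated) */
	IOMMU_RESV_SW_MSI,		/* software-managed MSI translation window */
};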
On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Tuesday, June 27, 2023 1:29 AM
> >
> > > I'm not sure whether the MSI region needs a special MSI type or
> > > just a general RESV_DIRECT type for 1:1 mapping, though.
> >
> > I don't quite get this part. Isn't MSI having IOMMU_RESV_MSI
> > and IOMMU_RESV_SW_MSI? Or does it juset mean we should report
> > the iommu_resv_type along with reserved regions in new ioctl?
>
> Currently those are iommu internal types. When defining the new
> ioctl we need think about what are necessary presenting to the user.
>
> Probably just a list of reserved regions plus a flag to mark which
> one is SW_MSI? Except SW_MSI all other reserved region types
> just need the user to reserve them w/o knowing more detail.

I think I prefer the idea that we just import the reserved regions from a
devid and do not expose any of this detail to userspace.

The kernel can make only the SW_MSI a mandatory cut-out when the S2 is
attached.

Jason
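The rule described here can be sketched as below; the types and helper are
stand-ins for illustration, not the real iommufd code:

#include <stdbool.h>

enum resv_kind_sketch { RESV_OTHER, RESV_SW_MSI };

struct resv_region_sketch {
	unsigned long start, last;
	enum resv_kind_sketch kind;
};

/* Decide whether a device reserved region must be carved out of the IOAS
 * backing the hwpt being attached. */
static bool region_enforced_on_attach(bool nested_s1_attach,
				      const struct resv_region_sketch *r)
{
	if (!nested_s1_attach)
		return true;               /* normal attach: reserve everything */
	return r->kind == RESV_SW_MSI;     /* nested: only the SW_MSI cut-out
					    * on the parent stage-2's IOAS */
}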
> From: Jason Gunthorpe <jgg@nvidia.com> > Sent: Wednesday, June 28, 2023 12:01 AM > > On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote: > > > From: Nicolin Chen <nicolinc@nvidia.com> > > > Sent: Tuesday, June 27, 2023 1:29 AM > > > > > > > I'm not sure whether the MSI region needs a special MSI type or > > > > just a general RESV_DIRECT type for 1:1 mapping, though. > > > > > > I don't quite get this part. Isn't MSI having IOMMU_RESV_MSI > > > and IOMMU_RESV_SW_MSI? Or does it juset mean we should report > > > the iommu_resv_type along with reserved regions in new ioctl? > > > > > > > Currently those are iommu internal types. When defining the new > > ioctl we need think about what are necessary presenting to the user. > > > > Probably just a list of reserved regions plus a flag to mark which > > one is SW_MSI? Except SW_MSI all other reserved region types > > just need the user to reserve them w/o knowing more detail. > > I think I prefer the idea we just import the reserved regions from a > devid and do not expose any of this detail to userspace. > > Kernel can make only the SW_MSI a mandatory cut out when the S2 is > attached. > I'm confused. The VMM needs to know reserved regions per dev_id and report them to the guest. And we have aligned on that reserved regions (except SW_MSI) should not be automatically added to S2 in nesting case. Then the VMM cannot rely on IOAS_IOVA_RANGES to identify the reserved regions. So there needs a new interface for the user to discover reserved regions per dev_id, within which the SW_MSI region should be marked out so identity mapping can be installed properly for it in S1. Did I misunderstand your point in previous discussion?
On Wed, Jun 28, 2023 at 02:47:02AM +0000, Tian, Kevin wrote: > > From: Jason Gunthorpe <jgg@nvidia.com> > > Sent: Wednesday, June 28, 2023 12:01 AM > > > > On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote: > > > > From: Nicolin Chen <nicolinc@nvidia.com> > > > > Sent: Tuesday, June 27, 2023 1:29 AM > > > > > > > > > I'm not sure whether the MSI region needs a special MSI type or > > > > > just a general RESV_DIRECT type for 1:1 mapping, though. > > > > > > > > I don't quite get this part. Isn't MSI having IOMMU_RESV_MSI > > > > and IOMMU_RESV_SW_MSI? Or does it juset mean we should report > > > > the iommu_resv_type along with reserved regions in new ioctl? > > > > > > > > > > Currently those are iommu internal types. When defining the new > > > ioctl we need think about what are necessary presenting to the user. > > > > > > Probably just a list of reserved regions plus a flag to mark which > > > one is SW_MSI? Except SW_MSI all other reserved region types > > > just need the user to reserve them w/o knowing more detail. > > > > I think I prefer the idea we just import the reserved regions from a > > devid and do not expose any of this detail to userspace. > > > > Kernel can make only the SW_MSI a mandatory cut out when the S2 is > > attached. > > > > I'm confused. > > The VMM needs to know reserved regions per dev_id and report them > to the guest. > > And we have aligned on that reserved regions (except SW_MSI) should > not be automatically added to S2 in nesting case. Then the VMM cannot > rely on IOAS_IOVA_RANGES to identify the reserved regions. We also said we need a way to load the reserved regions to create an identity compatible version of the HWPT So we have a model where the VMM will want to load in regions beyond the currently attached device needs > So there needs a new interface for the user to discover reserved regions > per dev_id, within which the SW_MSI region should be marked out so > identity mapping can be installed properly for it in S1. > > Did I misunderstand your point in previous discussion? This is another discussion, if the vmm needs this then we probably need a new API to get it. Jason
> From: Jason Gunthorpe <jgg@nvidia.com> > Sent: Wednesday, June 28, 2023 8:36 PM > > On Wed, Jun 28, 2023 at 02:47:02AM +0000, Tian, Kevin wrote: > > > From: Jason Gunthorpe <jgg@nvidia.com> > > > Sent: Wednesday, June 28, 2023 12:01 AM > > > > > > On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote: > > > > > From: Nicolin Chen <nicolinc@nvidia.com> > > > > > Sent: Tuesday, June 27, 2023 1:29 AM > > > > > > > > > > > I'm not sure whether the MSI region needs a special MSI type or > > > > > > just a general RESV_DIRECT type for 1:1 mapping, though. > > > > > > > > > > I don't quite get this part. Isn't MSI having IOMMU_RESV_MSI > > > > > and IOMMU_RESV_SW_MSI? Or does it juset mean we should report > > > > > the iommu_resv_type along with reserved regions in new ioctl? > > > > > > > > > > > > > Currently those are iommu internal types. When defining the new > > > > ioctl we need think about what are necessary presenting to the user. > > > > > > > > Probably just a list of reserved regions plus a flag to mark which > > > > one is SW_MSI? Except SW_MSI all other reserved region types > > > > just need the user to reserve them w/o knowing more detail. > > > > > > I think I prefer the idea we just import the reserved regions from a > > > devid and do not expose any of this detail to userspace. > > > > > > Kernel can make only the SW_MSI a mandatory cut out when the S2 is > > > attached. > > > > > > > I'm confused. > > > > The VMM needs to know reserved regions per dev_id and report them > > to the guest. > > > > And we have aligned on that reserved regions (except SW_MSI) should > > not be automatically added to S2 in nesting case. Then the VMM cannot > > rely on IOAS_IOVA_RANGES to identify the reserved regions. > > We also said we need a way to load the reserved regions to create an > identity compatible version of the HWPT > > So we have a model where the VMM will want to load in regions beyond > the currently attached device needs No question on this. > > > So there needs a new interface for the user to discover reserved regions > > per dev_id, within which the SW_MSI region should be marked out so > > identity mapping can be installed properly for it in S1. > > > > Did I misunderstand your point in previous discussion? > > This is another discussion, if the vmm needs this then we probably > need a new API to get it. > Then it's clear. 😊