Message ID: 20230110143137.54517-4-suravee.suthikulpanit@amd.com
State: New
From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev
Cc: joro@8bytes.org, robin.murphy@arm.com, ashish.kalra@amd.com, thomas.lendacky@amd.com, vasant.hegde@amd.com, jon.grimm@amd.com, Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Subject: [PATCH 3/4] iommu: Introduce IOMMU call-back for processing struct KVM assigned to VFIO
Date: Tue, 10 Jan 2023 08:31:36 -0600
In-Reply-To: <20230110143137.54517-1-suravee.suthikulpanit@amd.com>
References: <20230110143137.54517-1-suravee.suthikulpanit@amd.com>
Series: iommu/amd: Force SNP-enabled VFIO domain to 4K page size
Commit Message
Suravee Suthikulpanit
Jan. 10, 2023, 2:31 p.m. UTC
Currently, VFIO provides the kvm_vfio_file_set_kvm() interface for assigning
a KVM structure to a VFIO group. The information in struct kvm is also
useful to IOMMU drivers when setting up a VFIO domain.

Introduce a struct iommu_domain_ops.set_kvm callback to allow IOMMU
drivers to process the assigned struct kvm.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
drivers/iommu/iommu.c | 10 ++++++++++
drivers/vfio/vfio_main.c | 1 +
include/linux/iommu.h | 4 ++++
3 files changed, 15 insertions(+)
Comments
On 2023-01-10 14:31, Suravee Suthikulpanit wrote:
> Currently, VFIO provide an kvm_vfio_file_set_kvm() interface for assigning
> a KVM structure to a VFIO group. The information in struct KVM is also
> useful for IOMMU drivers when setting up VFIO domain.
>
> Introduce struct iommu_domain_ops.set_kvm call-back function to allow
> IOMMU drivers to provide call-back to process the struct KVM assigned.

Hmm, it sounds like this has quite some overlap of intent with the
existing "enable_nesting" op, and my gut feeling is that it's not great
to have two completely different "this is a VFIO domain" mechanisms... :/

Robin.

> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> ---
>  drivers/iommu/iommu.c    | 10 ++++++++++
>  drivers/vfio/vfio_main.c |  1 +
>  include/linux/iommu.h    |  4 ++++
>  3 files changed, 15 insertions(+)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 65a3b3d886dc..5116d5fe35f2 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -3231,3 +3231,13 @@ bool iommu_group_dma_owner_claimed(struct iommu_group *group)
>  	return user;
>  }
>  EXPORT_SYMBOL_GPL(iommu_group_dma_owner_claimed);
> +
> +void iommu_set_kvm(struct iommu_group *group, struct kvm *kvm)
> +{
> +	if (!group || !group->domain || !group->domain->ops)
> +		return;
> +
> +	if (group->domain->ops->set_kvm)
> +		group->domain->ops->set_kvm(group->domain, kvm);
> +}
> +EXPORT_SYMBOL_GPL(iommu_set_kvm);
> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> index 2d168793d4e1..7641e3a0c986 100644
> --- a/drivers/vfio/vfio_main.c
> +++ b/drivers/vfio/vfio_main.c
> @@ -1652,6 +1652,7 @@ void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
>
>  	mutex_lock(&group->group_lock);
>  	group->kvm = kvm;
> +	iommu_set_kvm(group->iommu_group, kvm);
>  	mutex_unlock(&group->group_lock);
>  }
>  EXPORT_SYMBOL_GPL(vfio_file_set_kvm);
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 3c9da1f8979e..43000231d3d7 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -42,6 +42,7 @@ struct notifier_block;
>  struct iommu_sva;
>  struct iommu_fault_event;
>  struct iommu_dma_cookie;
> +struct kvm;
>
>  /* iommu fault flags */
>  #define IOMMU_FAULT_READ	0x0
> @@ -314,6 +315,8 @@ struct iommu_domain_ops {
>  				  unsigned long quirks);
>
>  	void (*free)(struct iommu_domain *domain);
> +
> +	void (*set_kvm)(struct iommu_domain *domain, struct kvm *kvm);
>  };
>
>  /**
> @@ -391,6 +394,7 @@ void iommu_device_sysfs_remove(struct iommu_device *iommu);
>  int iommu_device_link(struct iommu_device *iommu, struct device *link);
>  void iommu_device_unlink(struct iommu_device *iommu, struct device *link);
>  int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain);
> +void iommu_set_kvm(struct iommu_group *group, struct kvm *kvm);
>
>  static inline struct iommu_device *dev_to_iommu_device(struct device *dev)
>  {
On Tue, Jan 10, 2023 at 08:31:36AM -0600, Suravee Suthikulpanit wrote:
> Currently, VFIO provide an kvm_vfio_file_set_kvm() interface for assigning
> a KVM structure to a VFIO group. The information in struct KVM is also
> useful for IOMMU drivers when setting up VFIO domain.
>
> Introduce struct iommu_domain_ops.set_kvm call-back function to allow
> IOMMU drivers to provide call-back to process the struct KVM
> assigned.

Also NAK

Connecting the iommu driver to KVM has to be properly architected
through iommufd.

> @@ -1652,6 +1652,7 @@ void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
>
>  	mutex_lock(&group->group_lock);
>  	group->kvm = kvm;
> +	iommu_set_kvm(group->iommu_group, kvm);
>  	mutex_unlock(&group->group_lock);
>  }

This also has obvious lifetime bugs

Jason
Hi Robin,

On 1/10/2023 10:11 PM, Robin Murphy wrote:
> On 2023-01-10 14:31, Suravee Suthikulpanit wrote:
>> Currently, VFIO provide an kvm_vfio_file_set_kvm() interface for
>> assigning
>> a KVM structure to a VFIO group. The information in struct KVM is also
>> useful for IOMMU drivers when setting up VFIO domain.
>>
>> Introduce struct iommu_domain_ops.set_kvm call-back function to allow
>> IOMMU drivers to provide call-back to process the struct KVM assigned.
>
> Hmm, it sounds like this has quite some overlap of intent with the
> existing "enable_nesting" op, and my gut feeling is that it's not great
> to have two completely different "this is a VFIO domain" mechanisms... :/
>
> Robin.

Actually, the intention is to communicate KVM information, which is
already available to VFIO, down to the AMD IOMMU driver layer. I am not
sure if enable_nesting() has enough information or the same intention,
since that only communicates VFIO domain information.

Thanks,
Suravee
Hi Jason,

On 1/13/2023 10:35 PM, Jason Gunthorpe wrote:
> On Tue, Jan 10, 2023 at 08:31:36AM -0600, Suravee Suthikulpanit wrote:
>> Currently, VFIO provide an kvm_vfio_file_set_kvm() interface for assigning
>> a KVM structure to a VFIO group. The information in struct KVM is also
>> useful for IOMMU drivers when setting up VFIO domain.
>>
>> Introduce struct iommu_domain_ops.set_kvm call-back function to allow
>> IOMMU drivers to provide call-back to process the struct KVM
>> assigned.
>
> Also NAK
>
> Connecting the iommu driver to KVM has to be properly architected
> though iommufd.

My understanding is that kvm_vfio_file_set_kvm() is called from the
following call-paths:

* kvm_vfio_group_add()
* kvm_vfio_group_del()
* kvm_vfio_destroy()

to attach/detach KVM to/from a particular VFIO domain. This is an
existing interface from kvm_vfio_set_group(). Here is the call-path:

  kvm_vfio_file_set_kvm()
    vfio_file_set_kvm()
      iommu_set_kvm()        <-- New interface
        amd_iommu_set_kvm()

Could you please elaborate what you have in mind for a properly
architected interface via iommufd?

>> @@ -1652,6 +1652,7 @@ void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
>>
>>  	mutex_lock(&group->group_lock);
>>  	group->kvm = kvm;
>> +	iommu_set_kvm(group->iommu_group, kvm);
>>  	mutex_unlock(&group->group_lock);
>>  }
>
> This also has obvious lifetime bugs

Could you please also elaborate on this part? For the detaching case, KVM
is NULL, and the same information is passed to the IOMMU driver to handle
the detaching case. Am I missing anything?

Thanks,
Suravee
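The dispatch chain described above can be modelled in a small self-contained userspace sketch. The struct layouts are reduced to stubs, and `demo_set_kvm()` is a hypothetical stand-in for a driver callback such as the `amd_iommu_set_kvm()` named in the thread; only the null-safe dispatch logic mirrors the actual patch.

```c
#include <assert.h>
#include <stddef.h>

/* Reduced stand-ins for the kernel types involved in the call chain. */
struct kvm { int id; };

struct iommu_domain;

struct iommu_domain_ops {
	void (*set_kvm)(struct iommu_domain *domain, struct kvm *kvm);
};

struct iommu_domain {
	const struct iommu_domain_ops *ops;
	struct kvm *kvm;	/* driver-private state in a real driver */
};

struct iommu_group {
	struct iommu_domain *domain;
};

/* Mirrors the null-safe dispatch in the proposed iommu_set_kvm(). */
static void iommu_set_kvm(struct iommu_group *group, struct kvm *kvm)
{
	if (!group || !group->domain || !group->domain->ops)
		return;

	if (group->domain->ops->set_kvm)
		group->domain->ops->set_kvm(group->domain, kvm);
}

/* Hypothetical driver callback: kvm != NULL attaches, NULL detaches. */
static void demo_set_kvm(struct iommu_domain *domain, struct kvm *kvm)
{
	domain->kvm = kvm;
}

static const struct iommu_domain_ops demo_ops = { .set_kvm = demo_set_kvm };
struct iommu_domain demo_domain = { .ops = &demo_ops, .kvm = NULL };
struct iommu_group demo_group = { .domain = &demo_domain };
struct kvm demo_kvm = { .id = 1 };
```

Calling `iommu_set_kvm(&demo_group, &demo_kvm)` routes the pointer into the domain; passing `NULL` for either the group or the kvm exercises the detach and guard paths.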
On 2023-01-17 04:20, Suthikulpanit, Suravee wrote:
> Hi Robin,
>
> On 1/10/2023 10:11 PM, Robin Murphy wrote:
>> On 2023-01-10 14:31, Suravee Suthikulpanit wrote:
>>> Currently, VFIO provide an kvm_vfio_file_set_kvm() interface for
>>> assigning
>>> a KVM structure to a VFIO group. The information in struct KVM is also
>>> useful for IOMMU drivers when setting up VFIO domain.
>>>
>>> Introduce struct iommu_domain_ops.set_kvm call-back function to allow
>>> IOMMU drivers to provide call-back to process the struct KVM assigned.
>>
>> Hmm, it sounds like this has quite some overlap of intent with the
>> existing "enable_nesting" op, and my gut feeling is that it's not
>> great to have two completely different "this is a VFIO domain"
>> mechanisms... :/
>>
>> Robin.
>
> Actually, the intention is to communicate KVM information, which is
> already available to the VFIO down to the AMD IOMMU driver layer. I am
> not sure if the enable_nesting() has enough information or the same
> intention since that only communicates VFIO domain information.

Sure, but from the high-level view, we have on the one hand an API for
"I want to use this domain in a VM (with nested paging)" and on the
other an API for "I want to use nested paging in this domain (for a
VM)", so clearly it would be most sensible to converge on a single API
for what is ultimately one single overarching use-case.

I'm not claiming that the existing enable_nesting op is anywhere near
the right design either; in fact I'm pretty sure it isn't, if the Arm
SMMU drivers ever want to contemplate sharing stage 2 pagetables with
KVM also.

Cheers,
Robin.
On Tue, Jan 17, 2023 at 12:31:07PM +0700, Suthikulpanit, Suravee wrote:
> Hi Jason,
>
> On 1/13/2023 10:35 PM, Jason Gunthorpe wrote:
> > On Tue, Jan 10, 2023 at 08:31:36AM -0600, Suravee Suthikulpanit wrote:
> > > Currently, VFIO provide an kvm_vfio_file_set_kvm() interface for assigning
> > > a KVM structure to a VFIO group. The information in struct KVM is also
> > > useful for IOMMU drivers when setting up VFIO domain.
> > >
> > > Introduce struct iommu_domain_ops.set_kvm call-back function to allow
> > > IOMMU drivers to provide call-back to process the struct KVM
> > > assigned.
> >
> > Also NAK
> >
> > Connecting the iommu driver to KVM has to be properly architected
> > though iommufd.
>
> My understanding is the kvm_vfio_file_set_kvm() from the following
> call-path:
>
> * kvm_vfio_group_add()
> * kvm_vfio_group_del()
> * kvm_vfio_destroy()
>
> to attach/detach KVM to/from a particular VFIO domain.

No, it has nothing to do with a VFIO domain. It is intended to connect
the KVM to a VFIO device for use in architecture-specific ways
(primarily s390), and to support broken-by-design code in GVT's mdev.

We currently have no connection between KVM and the iommu domain at all.

> Could you please elaborate what you have in mind for a properly architected
> interface via iommufd?

You'd need to explain what this is trying to do.

As I said, I want to see a comprehensive VFIO solution for CC from the
people interested in it that supports all three major architectures
currently available. I really don't want to see three almost-the-same
but unmaintainable versions of this.

Frankly, I'm really not clear what role the IOMMU driver should be
playing in CC at all, certainly not with details about what AMD's design
requires. AFAIK ARM expects the IOMMU will be controlled by the realm
manager. How can AMD be different from this and still be secure? The
translation of IOVA for DMA is a security-critical operation.

I would expect the KVM page table and the IOMMU S2 to be hardwired
together. So if you need hypervisor involvement you need to start there
by defining what exactly your architecture needs for iommu programming
and create a special iommu_domain that encapsulates whatever that is.

> > > @@ -1652,6 +1652,7 @@ void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
> > >  	mutex_lock(&group->group_lock);
> > >  	group->kvm = kvm;
> > > +	iommu_set_kvm(group->iommu_group, kvm);
> > >  	mutex_unlock(&group->group_lock);
> > >  }
> >
> > This also has obvious lifetime bugs
>
> Could you please also elaborate on this part? For detaching case, KVM is
> NULL, and the same information is passed to the IOMMU driver to handle the
> detaching case. Am I missing anything?

The kvm pointer is only valid so long as the group_lock is held.

Jason
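Jason's closing point is that `group->kvm` is only stable while `group_lock` is held, so a driver that stashes the pointer past the callback would need to take its own reference. A self-contained userspace mock of that refcounting pattern follows; `kvm_get_kvm()`/`kvm_put_kvm()` are the real in-kernel helpers, but here they are stubbed with a plain counter, and `demo_set_kvm()` is a hypothetical driver callback, not code from this series.

```c
#include <assert.h>
#include <stddef.h>

/* Stubbed kvm object: in-kernel this is refcounted via kvm_get_kvm()
 * and kvm_put_kvm(); a bare counter is enough to show the pattern. */
struct kvm { int refcount; };

static void kvm_get_kvm(struct kvm *kvm) { kvm->refcount++; }
static void kvm_put_kvm(struct kvm *kvm) { kvm->refcount--; }

struct iommu_domain { struct kvm *kvm; };

/* Hypothetical driver callback that pins the kvm it keeps, so the
 * stashed pointer stays valid after group_lock is dropped. */
static void demo_set_kvm(struct iommu_domain *domain, struct kvm *kvm)
{
	if (domain->kvm)
		kvm_put_kvm(domain->kvm);	/* drop ref on detach/replace */
	if (kvm)
		kvm_get_kvm(kvm);		/* pin while still under the lock */
	domain->kvm = kvm;
}

struct iommu_domain demo_domain = { .kvm = NULL };
struct kvm demo_kvm = { .refcount = 1 };
```

Attaching bumps the count to 2 and detaching (passing `NULL`) drops it back to 1, so the domain never holds an unpinned pointer.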
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 65a3b3d886dc..5116d5fe35f2 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3231,3 +3231,13 @@ bool iommu_group_dma_owner_claimed(struct iommu_group *group)
 	return user;
 }
 EXPORT_SYMBOL_GPL(iommu_group_dma_owner_claimed);
+
+void iommu_set_kvm(struct iommu_group *group, struct kvm *kvm)
+{
+	if (!group || !group->domain || !group->domain->ops)
+		return;
+
+	if (group->domain->ops->set_kvm)
+		group->domain->ops->set_kvm(group->domain, kvm);
+}
+EXPORT_SYMBOL_GPL(iommu_set_kvm);
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 2d168793d4e1..7641e3a0c986 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -1652,6 +1652,7 @@ void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
 
 	mutex_lock(&group->group_lock);
 	group->kvm = kvm;
+	iommu_set_kvm(group->iommu_group, kvm);
 	mutex_unlock(&group->group_lock);
 }
 EXPORT_SYMBOL_GPL(vfio_file_set_kvm);
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 3c9da1f8979e..43000231d3d7 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -42,6 +42,7 @@ struct notifier_block;
 struct iommu_sva;
 struct iommu_fault_event;
 struct iommu_dma_cookie;
+struct kvm;
 
 /* iommu fault flags */
 #define IOMMU_FAULT_READ	0x0
@@ -314,6 +315,8 @@ struct iommu_domain_ops {
 				  unsigned long quirks);
 
 	void (*free)(struct iommu_domain *domain);
+
+	void (*set_kvm)(struct iommu_domain *domain, struct kvm *kvm);
 };
 
 /**
@@ -391,6 +394,7 @@ void iommu_device_sysfs_remove(struct iommu_device *iommu);
 int iommu_device_link(struct iommu_device *iommu, struct device *link);
 void iommu_device_unlink(struct iommu_device *iommu, struct device *link);
 int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain);
+void iommu_set_kvm(struct iommu_group *group, struct kvm *kvm);
 
 static inline struct iommu_device *dev_to_iommu_device(struct device *dev)
 {