Message ID | 20230914191406.54656-1-shannon.nelson@amd.com |
---|---|
State | New |
Headers |
Series | [vfio] vfio/pci: remove msi domain on msi disable |
Commit Message
Nelson, Shannon
Sept. 14, 2023, 7:14 p.m. UTC
The new MSI dynamic allocation machinery is great for making the IRQ
management more flexible. It includes caching information about the
MSI domain, which gets reused on each new open of a VFIO fd. However,
this causes an issue when the underlying hardware has flexible MSI-X
configurations, as a changed configuration isn't seen between new
opens and is only refreshed across PCI unbind/bind cycles.
In our device we can change the per-VF MSI-X resource allocation
without a reboot or function reset. For example:
1. Initial power up and kernel boot:
   # lspci -s 2e:00.1 -vv | grep MSI-X
	Capabilities: [a0] MSI-X: Enable+ Count=8 Masked-
2. Device VF configuration change happens with no reset
3. New MSI-X count value seen:
   # lspci -s 2e:00.1 -vv | grep MSI-X
	Capabilities: [a0] MSI-X: Enable- Count=64 Masked-
This allows for per-VF dynamic reconfiguration of interrupt resources
for the VMs using the VFIO devices supported by our hardware.
The problem arises because the dynamic IRQ management creates the MSI
domain when the VFIO device sets up its first IRQ for the first ioctl()
VFIO_DEVICE_SET_IRQS request. The current MSI-X count (hwsize) is read
when setting up the IRQ vectors under pci_alloc_irq_vectors_affinity(),
and the MSI domain information, including that hwsize, is set up from it.
When the VFIO fd is closed, the IRQs are removed, but the MSI domain
information is kept for later use since we're only closing the current
VFIO fd, not unbinding the PCI device connection. When a new VFIO fd
open happens and a new VFIO_DEVICE_SET_IRQS request comes down, the cycle
starts again, reusing the existing MSI domain with the previous hwsize.
This is fine until a new QEMU instance reads the new, larger MSI-X
count from PCI config space (QEMU: vfio_msi_enable()) and tries to create
more IRQs than were available before. This fails in msi_insert_desc()
because the MSI domain is still set up for the earlier hwsize and has no
room for the n+1 IRQ.
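To illustrate, a simplified sketch of that bounds check (paraphrasing
kernel/irq/msi.c for illustration, not the exact upstream code) looks like:

  /* Simplified sketch of the check in msi_insert_desc() (kernel/irq/msi.c);
   * paraphrased for illustration, not the exact upstream code.
   */
  static int msi_insert_desc(struct device *dev, struct msi_desc *desc,
			     unsigned int domid, unsigned int index)
  {
	unsigned int hwsize = msi_domain_get_hwsize(dev, domid);

	/*
	 * hwsize was captured when the MSI domain was created on the first
	 * VFIO_DEVICE_SET_IRQS request; a larger MSI-X count read later from
	 * config space does not update it, so the extra descriptor is
	 * rejected here.
	 */
	if (index >= hwsize)
		return -ERANGE;

	/* ... allocate and insert the descriptor at 'index' ... */
	return 0;
  }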
This can be fixed by adding a call to msi_remove_device_irq_domain() in
vfio_msi_disable(), which is called when the VFIO IRQs are removed,
either by an ioctl() call from QEMU or when the VFIO fd is closed. This
forces the MSI domain to be recreated with the new MSI-X count on the
next VFIO_DEVICE_SET_IRQS request.
Link: https://lore.kernel.org/all/cover.1683740667.git.reinette.chatre@intel.com/
Link: https://lore.kernel.org/r/20221124232325.798556374@linutronix.de
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
---
drivers/vfio/pci/vfio_pci_intrs.c | 1 +
1 file changed, 1 insertion(+)
Comments
On Thu, Sep 14, 2023 at 12:14:06PM -0700, Shannon Nelson wrote:
> The new MSI dynamic allocation machinery is great for making the irq management more flexible. It includes caching information about the MSI domain which gets reused on each new open of a VFIO fd. However, this causes an issue when the underlying hardware has flexible MSI-x configurations, as a changed configuration doesn't get seen between new opens, and is only refreshed between PCI unbind/bind cycles.
>
> In our device we can change the per-VF MSI-x resource allocation without the need for rebooting or function reset. For example,
>
> 1. Initial power up and kernel boot:
>    # lspci -s 2e:00.1 -vv | grep MSI-X
>        Capabilities: [a0] MSI-X: Enable+ Count=8 Masked-
>
> 2. Device VF configuration change happens with no reset

Is this an out of tree driver problem?

The intree way to alter the MSI configuration is via sriov_set_msix_vec_count, and there is only one in-tree driver that uses it right now.

If something is going wrong here it should be fixed in the sriov_set_msix_vec_count() machinery, possibly in the pci core to synchronize the msi_domain view of the world.

Jason
On 9/18/2023 7:17 AM, Jason Gunthorpe wrote:
>
> On Thu, Sep 14, 2023 at 12:14:06PM -0700, Shannon Nelson wrote:
>> The new MSI dynamic allocation machinery is great for making the irq management more flexible. It includes caching information about the MSI domain which gets reused on each new open of a VFIO fd. However, this causes an issue when the underlying hardware has flexible MSI-x configurations, as a changed configuration doesn't get seen between new opens, and is only refreshed between PCI unbind/bind cycles.
>>
>> In our device we can change the per-VF MSI-x resource allocation without the need for rebooting or function reset. For example,
>>
>> 1. Initial power up and kernel boot:
>>    # lspci -s 2e:00.1 -vv | grep MSI-X
>>        Capabilities: [a0] MSI-X: Enable+ Count=8 Masked-
>>
>> 2. Device VF configuration change happens with no reset
>
> Is this an out of tree driver problem?

No, not an out-of-tree driver, this is the vfio pci core.

> The intree way to alter the MSI configuration is via sriov_set_msix_vec_count, and there is only one in-tree driver that uses it right now.
>
> If something is going wrong here it should be fixed in the sriov_set_msix_vec_count() machinery, possibly in the pci core to synchronize the msi_domain view of the world.
>
> Jason

The sriov_set_msix_vec_count method assumes (a) the unbind/bind cycle on the VF, and (b) VF MSIx count change configured from the host neither of which are the case in our situation.

In our case, the VF device's msix count value found in PCI config space is changed by device configuration management outside of the baremetal host and read by the QEMU instance when it starts up, and then read by the vfio PCI core when QEMU requests the first IRQ.

The core code enables the msix range on first IRQ request, and disables it when the vfio fd is closed. It creates the msi_domain on the first call if it doesn't exist, but it does not remove the msi_domain when the irqs are disabled.

The IRQ request call trace looks like:

  QEMU: vfio_msix_vector_do_use -> ioctl()
  (driver.vfio_device_ops.ioctl = vfio_pci_core_ioctl)
  vfio_pci_core_ioctl
    vfio_pci_ioctl_set_irqs
      vfio_pci_set_irqs_ioctl
        vfio_pci_set_msi_trigger
          vfio_msi_enable
            pci_alloc_irq_vectors
              pci_alloc_irq_vectors_affinity
                __pci_enable_msix_range
                  pci_setup_msix_device_domain
                    return if msi_domain exists
                    pci_create_device_domain
                      msi_create_device_irq_domain
                        __msi_create_irq_domain - sets info->hwsize
                  msi_capability_init
                    msi_setup_msi_desc
                      msi_insert_msi_desc
                        msi_domain_insert_msi_desc
                          msi_insert_desc
                            fail if index >= hwsize

On close of the vfio fd, the trace is:

  QEMU: close()
  driver.vfio_device_ops.close
  vfio_pci_core_close_device
    vfio_pci_core_disable
      vfio_pci_set_irqs_ioctl(vdev, VFIO_IRQ_SET_DATA_NONE |
                              VFIO_IRQ_SET_ACTION_TRIGGER,
                              vdev->irq_type, 0, 0, NULL);
        vfio_pci_set_msi_trigger
          vfio_msi_disable
            pci_free_irq_vectors

The msix vectors are freed, but the msi_domain is not, and the msi_domain holds the MSIx count that it read when it was created. If the device's MSIx count is increased, the next QEMU session will see the new number in PCI config space and try to use that new larger number, but the msi_domain is still using the smaller hwsize and the QEMU IRQ setup fails in msi_insert_desc().

This patch adds a msi_remove_device_irq_domain() call when the irqs are disabled in order to force a new read on the next IRQ allocation cycle. This is limited to only the vfio use of the msi_domain.

I suppose we could add this to the trailing end of callbacks in our own driver, but this looks more like a generic vfio/msi issue than a driver specific thing.

The other possibility is to force the user to always do a bind cycle between QEMU sessions using the VF. This seems to be unnecessary overhead and was not necessary when using the v6.1 kernel. To the user, this looks like a regression - this is how it was reported to me.

sln
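(For illustration only: the driver-local alternative mentioned above would presumably mean something like the following in a vfio-pci variant driver's close path. The example_ names are hypothetical; this is a sketch of the idea, not proposed code.)

  /* Hypothetical sketch: a vfio-pci variant driver tearing down the MSI
   * domain itself in its close_device hook instead of relying on the
   * vfio-pci core. Names prefixed example_ are made up.
   */
  #include <linux/msi.h>
  #include <linux/vfio_pci_core.h>

  static void example_vfio_close_device(struct vfio_device *vdev)
  {
	struct vfio_pci_core_device *cdev =
		container_of(vdev, struct vfio_pci_core_device, vdev);

	vfio_pci_core_close_device(vdev);

	/* force the MSI domain (and its cached hwsize) to be rebuilt on
	 * the next open/VFIO_DEVICE_SET_IRQS cycle
	 */
	msi_remove_device_irq_domain(&cdev->pdev->dev, MSI_DEFAULT_DOMAIN);
  }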
On Mon, Sep 18 2023 at 11:17, Jason Gunthorpe wrote:
> On Thu, Sep 14, 2023 at 12:14:06PM -0700, Shannon Nelson wrote:
>> The new MSI dynamic allocation machinery is great for making the irq management more flexible. It includes caching information about the MSI domain which gets reused on each new open of a VFIO fd. However, this causes an issue when the underlying hardware has flexible MSI-x configurations, as a changed configuration doesn't get seen between new opens, and is only refreshed between PCI unbind/bind cycles.
>>
>> In our device we can change the per-VF MSI-x resource allocation without the need for rebooting or function reset. For example,
>>
>> 1. Initial power up and kernel boot:
>>    # lspci -s 2e:00.1 -vv | grep MSI-X
>>        Capabilities: [a0] MSI-X: Enable+ Count=8 Masked-
>>
>> 2. Device VF configuration change happens with no reset
>
> Is this an out of tree driver problem?
>
> The intree way to alter the MSI configuration is via sriov_set_msix_vec_count, and there is only one in-tree driver that uses it right now.

Right, but that only addresses the driver specific issues.

> If something is going wrong here it should be fixed in the sriov_set_msix_vec_count() machinery, possibly in the pci core to synchronize the msi_domain view of the world.

Right, we should definitely not do that on a per driver basis.

Thanks,

        tglx
On Mon, Sep 18, 2023 at 10:48:54AM -0700, Nelson, Shannon wrote:

> In our case, the VF device's msix count value found in PCI config space is changed by device configuration management outside of the baremetal host and read by the QEMU instance when it starts up, and then read by the vfio PCI core when QEMU requests the first IRQ.

Oh, you definitely can't do that!

PCI config space is not allowed to change outside the OS's view and we added sriov_set_msix_vec_count() specifically as a way to provide the necessary synchronization between all the parts.

Randomly changing, what should be immutable, parts of the config space from under a running OS is just non-compliant PCI behavior.

> The msix vectors are freed, but the msi_domain is not, and the msi_domain holds the MSIx count that it read when it was created. If the device's MSIx count is increased, the next QEMU session will see the new number in PCI config space and try to use that new larger number, but the msi_domain is still using the smaller hwsize and the QEMU IRQ setup fails in msi_insert_desc().

Correct, devices are not allowed to change these parameters autonomously, so there is no reason to accommodate this.

> This patch adds a msi_remove_device_irq_domain() call when the irqs are disabled in order to force a new read on the next IRQ allocation cycle. This is limited to only the vfio use of the msi_domain.

Definitely no.

> I suppose we could add this to the trailing end of callbacks in our own driver, but this looks more like a generic vfio/msi issue than a driver specific thing.

Certainly not.

> The other possibility is to force the user to always do a bind cycle between QEMU sessions using the VF. This seems to be unnecessary overhead and was not necessary when using the v6.1 kernel. To the user, this looks like a regression - this is how it was reported to me.

You need to use sriov_set_msix_vec_count() and only sriov_set_msix_vec_count() to change this parameter or I expect you will constantly experience problems.

Jason
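(For context: the in-tree mechanism referred to here is the per-VF sriov_vf_msix_count sysfs attribute, which calls back into the PF driver while no driver is bound to the VF. A minimal sketch of the driver side, with hypothetical example_ names, might look like this; example_fw_set_vf_msix_count() stands in for whatever device/firmware call actually resizes the VF's MSI-X table.)

  /* Minimal sketch of a PF driver wiring up the in-tree MSI-X resize hook.
   * example_fw_set_vf_msix_count() is a hypothetical device/firmware call.
   */
  #include <linux/pci.h>

  static int example_sriov_set_msix_vec_count(struct pci_dev *vf, int count)
  {
	/* The PCI core only invokes this while no driver is bound to the VF. */
	return example_fw_set_vf_msix_count(vf, count);
  }

  static struct pci_driver example_pf_driver = {
	.name				= "example_pf",
	/* .probe / .remove / .sriov_configure omitted */
	.sriov_set_msix_vec_count	= example_sriov_set_msix_vec_count,
  };

(Userspace would then change the count with roughly "echo 64 > /sys/bus/pci/devices/<VF>/sriov_vf_msix_count" while the VF is unbound.)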
On Mon, Sep 18, 2023 at 08:43:21PM +0200, Thomas Gleixner wrote:
> On Mon, Sep 18 2023 at 11:17, Jason Gunthorpe wrote:
> > On Thu, Sep 14, 2023 at 12:14:06PM -0700, Shannon Nelson wrote:
> >> The new MSI dynamic allocation machinery is great for making the irq management more flexible. It includes caching information about the MSI domain which gets reused on each new open of a VFIO fd. However, this causes an issue when the underlying hardware has flexible MSI-x configurations, as a changed configuration doesn't get seen between new opens, and is only refreshed between PCI unbind/bind cycles.
> >>
> >> In our device we can change the per-VF MSI-x resource allocation without the need for rebooting or function reset. For example,
> >>
> >> 1. Initial power up and kernel boot:
> >>    # lspci -s 2e:00.1 -vv | grep MSI-X
> >>        Capabilities: [a0] MSI-X: Enable+ Count=8 Masked-
> >>
> >> 2. Device VF configuration change happens with no reset
> >
> > Is this an out of tree driver problem?
> >
> > The intree way to alter the MSI configuration is via sriov_set_msix_vec_count, and there is only one in-tree driver that uses it right now.
>
> Right, but that only addresses the driver specific issues.

Sort of.. sriov_vf_msix_count_store() is intended to be the entry point for this and if the kernel grows places that cache the value or something then this function should flush those caches too.

I suppose flushing happens implicitly because Shannon reports that things work fine if the driver is rebound. Since sriov_vf_msix_count_store() ensures there is no driver bound before proceeding it probe/unprobe must be flushing out everything?

Jason
On Mon, Sep 18 2023 at 20:37, Jason Gunthorpe wrote:
> On Mon, Sep 18, 2023 at 08:43:21PM +0200, Thomas Gleixner wrote:
>> On Mon, Sep 18 2023 at 11:17, Jason Gunthorpe wrote:
>> > On Thu, Sep 14, 2023 at 12:14:06PM -0700, Shannon Nelson wrote:
>> >> The new MSI dynamic allocation machinery is great for making the irq management more flexible. It includes caching information about the MSI domain which gets reused on each new open of a VFIO fd. However, this causes an issue when the underlying hardware has flexible MSI-x configurations, as a changed configuration doesn't get seen between new opens, and is only refreshed between PCI unbind/bind cycles.
>> >>
>> >> In our device we can change the per-VF MSI-x resource allocation without the need for rebooting or function reset. For example,
>> >>
>> >> 1. Initial power up and kernel boot:
>> >>    # lspci -s 2e:00.1 -vv | grep MSI-X
>> >>        Capabilities: [a0] MSI-X: Enable+ Count=8 Masked-
>> >>
>> >> 2. Device VF configuration change happens with no reset
>> >
>> > Is this an out of tree driver problem?
>> >
>> > The intree way to alter the MSI configuration is via sriov_set_msix_vec_count, and there is only one in-tree driver that uses it right now.
>>
>> Right, but that only addresses the driver specific issues.
>
> Sort of.. sriov_vf_msix_count_store() is intended to be the entry point for this and if the kernel grows places that cache the value or something then this function should flush those caches too.

Sorry. What I wanted to say is that the driver callback is not the right place to reload the MSI domains after the change.

> I suppose flushing happens implicitly because Shannon reports that things work fine if the driver is rebound. Since sriov_vf_msix_count_store() ensures there is no driver bound before proceeding it probe/unprobe must be flushing out everything?

Correct. So sriov_set_msix_vec_count() could just do:

    ret = pdev->driver->sriov_set_msix_vec_count(vf_dev, val);
    if (!ret)
        teardown_msi_domain(pdev);

Right?

Thanks,

        tglx
On Tue, Sep 19, 2023 at 01:47:37AM +0200, Thomas Gleixner wrote:
> On Mon, Sep 18 2023 at 20:37, Jason Gunthorpe wrote:
> > On Mon, Sep 18, 2023 at 08:43:21PM +0200, Thomas Gleixner wrote:
> >> On Mon, Sep 18 2023 at 11:17, Jason Gunthorpe wrote:
> >> > On Thu, Sep 14, 2023 at 12:14:06PM -0700, Shannon Nelson wrote:
> >> >> The new MSI dynamic allocation machinery is great for making the irq management more flexible. It includes caching information about the MSI domain which gets reused on each new open of a VFIO fd. However, this causes an issue when the underlying hardware has flexible MSI-x configurations, as a changed configuration doesn't get seen between new opens, and is only refreshed between PCI unbind/bind cycles.
> >> >>
> >> >> In our device we can change the per-VF MSI-x resource allocation without the need for rebooting or function reset. For example,
> >> >>
> >> >> 1. Initial power up and kernel boot:
> >> >>    # lspci -s 2e:00.1 -vv | grep MSI-X
> >> >>        Capabilities: [a0] MSI-X: Enable+ Count=8 Masked-
> >> >>
> >> >> 2. Device VF configuration change happens with no reset
> >> >
> >> > Is this an out of tree driver problem?
> >> >
> >> > The intree way to alter the MSI configuration is via sriov_set_msix_vec_count, and there is only one in-tree driver that uses it right now.
> >>
> >> Right, but that only addresses the driver specific issues.
> >
> > Sort of.. sriov_vf_msix_count_store() is intended to be the entry point for this and if the kernel grows places that cache the value or something then this function should flush those caches too.
>
> Sorry. What I wanted to say is that the driver callback is not the right place to reload the MSI domains after the change.

Oh, that isn't even what Shannon's patch does, it patched VFIO's main PCI driver - not a sriov_set_msix_vec_count() callback :( Shannon's scenario doesn't even use sriov_vf_msix_count_store() at all - the AMD device just randomly changes its MSI count whenever it likes.

> > I suppose flushing happens implicitly because Shannon reports that things work fine if the driver is rebound. Since sriov_vf_msix_count_store() ensures there is no driver bound before proceeding it probe/unprobe must be flushing out everything?
>
> Correct. So sriov_set_msix_vec_count() could just do:
>
>     ret = pdev->driver->sriov_set_msix_vec_count(vf_dev, val);
>     if (!ret)
>         teardown_msi_domain(pdev);
>
> Right?

It subtly isn't needed, sriov_vf_msix_count_store() already requires no driver is associated with the device and this:

    int msi_setup_device_data(struct device *dev)
    {
    	struct msi_device_data *md;
    	int ret, i;

    	if (dev->msi.data)
    		return 0;

    	md = devres_alloc(msi_device_data_release, sizeof(*md), GFP_KERNEL);
    	if (!md)
    		return -ENOMEM;

Already ensured that msi_remove_device_irq_domain() was called via msi_device_data_release() triggering as part of the devm shutdown of the bound driver.

So, the intree mechanism to change the MSI vector size works. The crazy mechanism where the device just changes its value without synchronizing to the OS does not.

I don't think we need to try and fix that..

Jason
On 9/18/2023 4:32 PM, Jason Gunthorpe wrote:
>
> On Mon, Sep 18, 2023 at 10:48:54AM -0700, Nelson, Shannon wrote:
>
>> In our case, the VF device's msix count value found in PCI config space is changed by device configuration management outside of the baremetal host and read by the QEMU instance when it starts up, and then read by the vfio PCI core when QEMU requests the first IRQ.
>
> Oh, you definitely can't do that!
>
> PCI config space is not allowed to change outside the OS's view and we added sriov_set_msix_vec_count() specifically as a way to provide the necessary synchronization between all the parts.
>
> Randomly changing, what should be immutable, parts of the config space from under a running OS is just non-compliant PCI behavior.

Hmmm... I guess I need to have a little chat with my friendly HW/FW folks.

Thanks,
sln
On Mon, Sep 18 2023 at 21:02, Jason Gunthorpe wrote:
> On Tue, Sep 19, 2023 at 01:47:37AM +0200, Thomas Gleixner wrote:
>> >> > The intree way to alter the MSI configuration is via sriov_set_msix_vec_count, and there is only one in-tree driver that uses it right now.
>> >>
>> >> Right, but that only addresses the driver specific issues.
>> >
>> > Sort of.. sriov_vf_msix_count_store() is intended to be the entry point for this and if the kernel grows places that cache the value or something then this function should flush those caches too.
>>
>> Sorry. What I wanted to say is that the driver callback is not the right place to reload the MSI domains after the change.
>
> Oh, that isn't even what Shannon's patch does, it patched VFIO's main PCI driver - not a sriov_set_msix_vec_count() callback :( Shannon's scenario doesn't even use sriov_vf_msix_count_store() at all - the AMD device just randomly changes its MSI count whenever it likes.

Ooops. When real hardware changes things behind the kernels back we consider it a hardware bug. The same applies to virtualization muck.

So all we should do is add some code which yells when the "hardware" plays silly buggers.

>> > I suppose flushing happens implicitly because Shannon reports that things work fine if the driver is rebound. Since sriov_vf_msix_count_store() ensures there is no driver bound before proceeding it probe/unprobe must be flushing out everything?
>>
>> Correct. So sriov_set_msix_vec_count() could just do:
>>
>>     ret = pdev->driver->sriov_set_msix_vec_count(vf_dev, val);
>>     if (!ret)
>>         teardown_msi_domain(pdev);
>>
>> Right?
>
> It subtly isn't needed, sriov_vf_msix_count_store() already requires no driver is associated with the device and this:
>
>     int msi_setup_device_data(struct device *dev)
>     {
>     	struct msi_device_data *md;
>     	int ret, i;
>
>     	if (dev->msi.data)
>     		return 0;
>
>     	md = devres_alloc(msi_device_data_release, sizeof(*md), GFP_KERNEL);
>     	if (!md)
>     		return -ENOMEM;
>
> Already ensured that msi_remove_device_irq_domain() was called via msi_device_data_release() triggering as part of the devm shutdown of the bound driver.

Indeed.

> So, the intree mechanism to change the MSI vector size works. The crazy mechanism where the device just changes its value without synchronizing to the OS does not.
>
> I don't think we need to try and fix that..

We might want to detect it and yell about it, right?

Thanks,

        tglx
On Tue, Sep 19, 2023 at 02:25:32AM +0200, Thomas Gleixner wrote:
> > I don't think we need to try and fix that..
>
> We might want to detect it and yell about it, right?

It strikes me as a good idea, yes. If it doesn't cost anything.

Jason
On Mon, Sep 18 2023 at 21:32, Jason Gunthorpe wrote:
> On Tue, Sep 19, 2023 at 02:25:32AM +0200, Thomas Gleixner wrote:
>
>> > I don't think we need to try and fix that..
>>
>> We might want to detect it and yell about it, right?
>
> It strikes me as a good idea, yes. If it doesn't cost anything.

It should not be expensive in the interrupt allocation/deallocation path, where hardware needs to be accessed anyway. So one extra read is not the end of the world.

Thanks,

        tglx
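(A rough illustration of the kind of check being discussed: at allocation time, re-read the MSI-X table size from config space and warn if it no longer matches the cached domain size. This is a hypothetical sketch, not an actual patch; the example_ name is made up.)

  /* Hypothetical sketch of a "yell if the hardware changed its MSI-X size
   * behind the kernel's back" check; not an actual upstream patch.
   */
  #include <linux/pci.h>

  static void example_check_msix_size(struct pci_dev *pdev,
				      unsigned int cached_hwsize)
  {
	int hw_count = pci_msix_vec_count(pdev);	/* re-reads config space */

	if (hw_count >= 0 && hw_count != cached_hwsize)
		pci_warn(pdev, "MSI-X table size changed under us: cached %u, hardware %d\n",
			 cached_hwsize, hw_count);
  }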
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index cbb4bcbfbf83..f66d5e7e078b 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -538,6 +538,7 @@ static void vfio_msi_disable(struct vfio_pci_core_device *vdev, bool msix)
 	cmd = vfio_pci_memory_lock_and_enable(vdev);
 	pci_free_irq_vectors(pdev);
 	vfio_pci_memory_unlock_and_restore(vdev, cmd);
+	msi_remove_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN);
 
 	/*
 	 * Both disable paths above use pci_intx_for_msi() to clear DisINTx