Message ID | 20240211174705.31992-1-ankita@nvidia.com
---|---
Headers | From: <ankita@nvidia.com>; Subject: [PATCH v7 0/4] kvm: arm64: allow the VM to select DEVICE_* and NORMAL_NC for IO memory; Date: Sun, 11 Feb 2024 23:17:01 +0530; Lists: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Series | kvm: arm64: allow the VM to select DEVICE_* and NORMAL_NC for IO memory
Message
Ankit Agrawal
Feb. 11, 2024, 5:47 p.m. UTC
From: Ankit Agrawal <ankita@nvidia.com>

Currently, KVM for ARM64 maps at stage 2 memory that is considered device
with DEVICE_nGnRE memory attributes; this setting overrides (per
ARM architecture [1]) any device MMIO mapping present at stage 1,
resulting in a set-up whereby a guest operating system cannot
determine device MMIO mapping memory attributes on its own but
is always overridden by the KVM stage 2 default.

This set-up does not allow guest operating systems to select device
memory attributes independently from KVM stage-2 mappings
(refer to [1], "Combining stage 1 and stage 2 memory type attributes"),
which turns out to be an issue in that guest operating systems
(e.g. Linux) may request to map device MMIO regions with memory
attributes that guarantee better performance (e.g. the gathering
attribute, which for some devices can generate larger PCIe memory
write TLPs) and specific operations (e.g. unaligned transactions),
such as the Normal-NC memory type.

The default device stage 2 mapping was chosen in KVM for ARM64 since
it was considered safer (i.e. it would not allow guests to trigger
uncontained failures ultimately crashing the machine) but such
failures turned out to be asynchronous (SError) anyway, defeating
the purpose.

For these reasons, relax the KVM stage 2 device memory attributes
from DEVICE_nGnRE to Normal-NC.

Generalizing to other devices may be problematic, however. E.g. the
GICv2 VCPU interface, which is effectively a shared peripheral, can
allow a guest to affect another guest's interrupt distribution. Hence
limit the change to VFIO PCI as a precaution. This is achieved by
making the VFIO PCI core module set a flag that is tested by KVM
to activate the code. This could be extended to other devices in
the future once that is deemed safe.

[1] section D8.5 - DDI0487J_a_a-profile_architecture_reference_manual.pdf

Applied over v6.8-rc2.

History
=======
v6 -> v7
- Changed VM_VFIO_ALLOW_WC to VM_ALLOW_ANY_UNCACHED based on a
  suggestion from Alex Williamson.
- Refactored stage2_set_prot_attr() based on Will's suggestion to
  reorganize the switch cases. Also updated the case to return -EINVAL
  when both KVM_PGTABLE_PROT_DEVICE and KVM_PGTABLE_PROT_NORMAL_NC are
  set.
- Fixed nits pointed out by Oliver and Catalin.
v5 -> v6
- Rebased to v6.8-rc2.
v4 -> v5
- Moved the cover letter description text to patch 1/4.
- Cleaned up stage2_set_prot_attr() based on Marc Zyngier's suggestions.
- Moved the mm header file changes to a separate patch.
- Rebased to v6.7-rc3.
v3 -> v4
- Moved the vfio-pci change to use VM_VFIO_ALLOW_WC into a separate
  patch.
- Added a check to warn on the case where NORMAL_NC and DEVICE are set
  simultaneously.
- Fixed miscellaneous nitpicks suggested in v3.
v2 -> v3
- Added a new patch (and converted to a patch series) suggested by
  Catalin Marinas to ensure the code changes are restricted to VFIO PCI
  devices.
- Introduced the VM_VFIO_ALLOW_WC flag for VFIO PCI to communicate
  with the VMM.
- Reverted the GIC mapping to DEVICE.
v1 -> v2
- Updated the commit log to the one posted by Lorenzo Pieralisi
  <lpieralisi@kernel.org> (Thanks!)
- Added a new flag to represent the NORMAL_NC setting. Updated
  stage2_set_prot_attr() to handle the new flag.
v6 Link: https://lore.kernel.org/all/20240207204652.22954-1-ankita@nvidia.com/

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ankit Agrawal <ankita@nvidia.com>

Ankit Agrawal (4):
  kvm: arm64: introduce new flag for non-cacheable IO memory
  mm: introduce new flag to indicate wc safe
  kvm: arm64: set io memory s2 pte as normalnc for vfio pci device
  vfio: convey kvm that the vfio-pci device is wc safe

 arch/arm64/include/asm/kvm_pgtable.h |  2 ++
 arch/arm64/include/asm/memory.h      |  2 ++
 arch/arm64/kvm/hyp/pgtable.c         | 24 +++++++++++++++++++-----
 arch/arm64/kvm/mmu.c                 | 14 ++++++++++----
 drivers/vfio/pci/vfio_pci_core.c     |  6 +++++-
 include/linux/mm.h                   | 14 ++++++++++++++
 6 files changed, 52 insertions(+), 10 deletions(-)
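Condensed, the plumbing the series describes is roughly the following. This is a sketch assembled from the changelog and patch titles above, not a verbatim excerpt; the flag's bit position and the exact call sites are assumptions:

/* include/linux/mm.h (sketch): the VMA covers IO memory that remains
 * functional under relaxed, uncached attributes such as Normal-NC. */
#ifdef CONFIG_64BIT
#define VM_ALLOW_ANY_UNCACHED_BIT	39	/* bit position illustrative */
#define VM_ALLOW_ANY_UNCACHED		BIT(VM_ALLOW_ANY_UNCACHED_BIT)
#else
#define VM_ALLOW_ANY_UNCACHED		VM_NONE
#endif

/* drivers/vfio/pci/vfio_pci_core.c (sketch): vfio-pci asserts the flag
 * when mmap()ing a PCI BAR, relying on PCI termination properties. */
vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_ALLOW_ANY_UNCACHED);

/* arch/arm64/kvm/mmu.c::user_mem_abort() (sketch): for device memory,
 * KVM tests the flag on the backing VMA to pick the stage-2 type. */
if (vma->vm_flags & VM_ALLOW_ANY_UNCACHED)
	prot |= KVM_PGTABLE_PROT_NORMAL_NC;	/* guest stage-1 gets the final say */
else
	prot |= KVM_PGTABLE_PROT_DEVICE;	/* legacy DEVICE_nGnRE default */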
Comments
On 11.02.24 18:47, ankita@nvidia.com wrote:
> From: Ankit Agrawal <ankita@nvidia.com>

Hi,

> Currently, KVM for ARM64 maps at stage 2 memory that is considered device
> with DEVICE_nGnRE memory attributes; this setting overrides (per
> ARM architecture [1]) any device MMIO mapping present at stage 1,
> resulting in a set-up whereby a guest operating system cannot
> determine device MMIO mapping memory attributes on its own but
> is always overridden by the KVM stage 2 default.
>
> This set-up does not allow guest operating systems to select device
> memory attributes independently from KVM stage-2 mappings
> (refer to [1], "Combining stage 1 and stage 2 memory type attributes"),
> which turns out to be an issue in that guest operating systems
> (e.g. Linux) may request to map device MMIO regions with memory
> attributes that guarantee better performance (e.g. the gathering
> attribute, which for some devices can generate larger PCIe memory
> write TLPs) and specific operations (e.g. unaligned transactions),
> such as the Normal-NC memory type.
>
> The default device stage 2 mapping was chosen in KVM for ARM64 since
> it was considered safer (i.e. it would not allow guests to trigger
> uncontained failures ultimately crashing the machine) but such
> failures turned out to be asynchronous (SError) anyway, defeating
> the purpose.
>
> For these reasons, relax the KVM stage 2 device memory attributes
> from DEVICE_nGnRE to Normal-NC.
>
> Generalizing to other devices may be problematic, however. E.g. the
> GICv2 VCPU interface, which is effectively a shared peripheral, can
> allow a guest to affect another guest's interrupt distribution. Hence
> limit the change to VFIO PCI as a precaution. This is achieved by
> making the VFIO PCI core module set a flag that is tested by KVM
> to activate the code. This could be extended to other devices in
> the future once that is deemed safe.

I still have to digest some of the stuff I learned about this issue,
please bear with me :)

(1) PCI BARs might contain mixtures of RAM and MMIO, the exact
locations/semantics within a BAR are only really known to the actual
device driver.

We must not unconditionally map PFNs "the wrong way", because it can
have undesired side effects. Side effects might include
read-speculation, which can be very problematic with MMIO regions.

The safe way (for the host) is DEVICE_nGnRE. But that is actually
problematic for performance (where we want WC?) and unaligned accesses
(where we want NC?).

We can trigger both cases right now inside VMs, where we want the
device driver to actually make the decision.

(2) For a VM, that device driver lives inside the VM; for DPDK and
friends, it lives in user space. They have this information.

We only focus here on optimizing (fixing?) the mapping for VMs, DPDK
is out of the picture.

So we want to allow the VM to achieve a WC/NC mapping by using a
relaxed (NC) mapping in stage-1. Whatever is set in stage-2 wins.

(3) vfio knows whether using WC (and NC?) could be problematic, and
must forbid it, if that is the case. There are cases where we could
otherwise cause harm (bring down the host?). We must keep mapping the
memory as DEVICE_nGnRE when in doubt.

Now, what the new mmap() flag does is tell the world "using the wrong
mapping type cannot bring down the host", and KVM uses that to use a
different mapping type (NC) in stage-1 as setup by vfio in the user
space page tables.

I was trying to find ways of avoiding a mmap() flag and was hoping
that we could just use a PTE bit that does not have semantics in
VM_PFNMAP mappings. Unfortunately, arm64 does not support uffd-wp,
which I had in mind, so it's not that easy.

Further, I was wondering if there would be a way to let DPDK similarly
benefit, because it looks like we are happily ignoring that (I was
told they apply some hacks to work around that).

In essence, user space knows how it will consume that memory: QEMU
wants to mmap() it only to get it into stage-1 and not access it via
the user page tables. DPDK wants to mmap() it to actually access it
from user space.

So I am curious, is the following problematic, and why:

(a) User space tells VFIO which parts of a BAR it would like to have
mapped differently. For QEMU, this would mean requesting a NC mapping
for the whole BAR. For DPDK, it could mean requesting different types
for parts of a BAR.

(b) VFIO decides if it is safe to use a relaxed mapping. If in doubt,
it falls back to existing (legacy) handling -- DEVICE_nGnRE.

(c) KVM simply uses the existing mapping type instead of diverging
from the one in the user space mapping.

That would mean that we would map NC already in QEMU. I wonder if
that could be a problem with read speculation, even if QEMU never
really accesses that mmap'ed region.

Something like that would of course require user space changes.
Handling it without such changes (ignoring DPDK of course) would
require some information exchange between KVM and vfio, like the mmap
flag proposed.
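For the two memory types in play, the stage-1/stage-2 combining rules (per the Arm ARM section cited as [1] above, without FEAT_S2FWB) reduce to "the more restrictive type wins"; a toy model, purely illustrative:

/* Toy model of S1/S2 memory-type combining for the two types under
 * discussion: Device-nGnRE is more restrictive than Normal-NC, so a
 * Normal-NC stage-2 lets the guest's stage-1 choose either type, while
 * a Device-nGnRE stage-2 forces Device no matter what stage-1 asks for. */
enum memtype { DEVICE_nGnRE, NORMAL_NC };	/* ordered restrictive -> relaxed */

static enum memtype combine_s1_s2(enum memtype s1, enum memtype s2)
{
	return s1 < s2 ? s1 : s2;	/* the more restrictive type wins */
}

/* combine_s1_s2(DEVICE_nGnRE, NORMAL_NC)    == DEVICE_nGnRE  (relaxed S2, guest picks Device)
 * combine_s1_s2(NORMAL_NC,    NORMAL_NC)    == NORMAL_NC     (relaxed S2, guest picks NC)
 * combine_s1_s2(NORMAL_NC,    DEVICE_nGnRE) == DEVICE_nGnRE  (old KVM default overrides) */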
On Mon, Feb 12, 2024 at 11:26:12AM +0100, David Hildenbrand wrote:
> I still have to digest some of the stuff I learned about this issue,
> please bear with me :)
>
> (1) PCI BARs might contain mixtures of RAM and MMIO, the exact
> locations/semantics within a BAR are only really known to the actual
> device driver.

Nit: not RAM and MMIO, but different kinds of MMIO that have different
access patterns. The conclusion is correct.

> We must not unconditionally map PFNs "the wrong way", because it can
> have undesired side effects. Side effects might include
> read-speculation, which can be very problematic with MMIO regions.

It is worse than some hand-wavy "side effect". If you map memory with
NORMAL_NC (ie for write combining) then writel() doesn't work
correctly at all.

The memory must be mapped according to which kernel APIs the actual
driver in the VM will use: writel() vs __iowrite64_copy().

> We can trigger both cases right now inside VMs, where we want the
> device driver to actually make the decision.

Yes

> (2) For a VM, that device driver lives inside the VM; for DPDK and
> friends, it lives in user space. They have this information.

Yes

> We only focus here on optimizing (fixing?) the mapping for VMs, DPDK
> is out of the picture.

DPDK will be solved through some VFIO ioctl; we know how to do it,
just nobody has cared enough to do it.

> So we want to allow the VM to achieve a WC/NC mapping by using a
> relaxed (NC) mapping in stage-1. Whatever is set in stage-2 wins.

Yes

> (3) vfio knows whether using WC (and NC?) could be problematic, and
> must forbid it, if that is the case. There are cases where we could
> otherwise cause harm (bring down the host?). We must keep mapping the
> memory as DEVICE_nGnRE when in doubt.

Yes, there is an unspecific fear that on ARM platforms using NORMAL_NC
in the wrong way can trigger a catastrophic error and kill the host.
There is no way to know if the platform has this bug, so the agreement
was to be conservative and only allow it for vfio-pci, based on some
specific details of how PCI has to be implemented and ARM guidance on
PCI integration.

> Now, what the new mmap() flag does is tell the world "using the wrong
> mapping type cannot bring down the host", and KVM uses that to use a
> different mapping type (NC) in stage-1 as setup by vfio in the user
> space page tables.

The inverse meaning: we assume VMAs without the flag can bring down
the host, but yes.

> I was trying to find ways of avoiding a mmap() flag and was hoping
> that we could just use a PTE bit that does not have semantics in
> VM_PFNMAP mappings. Unfortunately, arm64 does not support uffd-wp,
> which I had in mind, so it's not that easy.

Seems like a waste of a valuable PTE bit to me.

> Further, I was wondering if there would be a way to let DPDK similarly
> benefit, because it looks like we are happily ignoring that (I was
> told they apply some hacks to work around that).

dpdk doesn't need the VMA bit; we know how to solve it with vfio
ioctls, it is very straightforward. dpdk just does an ioctl & mmap and
VFIO will create a vma with pgprot_writecombine(). Completely trivial,
the only nasty bit is fitting this into the VFIO uAPI.

> (a) User space tells VFIO which parts of a BAR it would like to have
> mapped differently. For QEMU, this would mean requesting a NC mapping
> for the whole BAR. For DPDK, it could mean requesting different types
> for parts of a BAR.

We don't want to have the memory mapped as NC in qemu. As I said
above, if it is mapped NC then writel() doesn't work. We can't have
conflicting mappings that go toward NC when the right answer is
DEVICE.

writel() on NC will malfunction.

__iowrite64_copy() on DEVICE will be functionally correct but slower.

The S2 mapping that KVM creates is special because it doesn't actually
map it once the VM kernel gets started. The VM kernel always supplies
a S1 table that sets the correct type.

So if qemu has DEVICE, the S2 has NC and the VM's S1 has DEVICE, then
the mapping is reliably made to be DEVICE. The hidden S2 doesn't cause
a problem.

> That would mean that we would map NC already in QEMU. I wonder if
> that could be a problem with read speculation, even if QEMU never
> really accesses that mmap'ed region.

Also correct.

Further, qemu may need to do emulation for MMIO in various cases and
the qemu logic for this requires a DEVICE mapping or the emulation
will malfunction.

Using NC in qemu is off the table.

Jason
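A driver-side sketch of the accessor/mapping contract described above; the hypothetical two-region device and its split are made up, but writel(), __iowrite64_copy(), ioremap() and ioremap_wc() are the actual kernel APIs in question:

#include <linux/io.h>

/* Hypothetical device with a control-register region (must stay
 * Device-nGnRE) and a write-combinable queue region (safe as Normal-NC). */
static void mmio_contract_sketch(phys_addr_t regs_phys, phys_addr_t queue_phys,
				 size_t len, const u64 *payload)
{
	void __iomem *regs  = ioremap(regs_phys, len);	   /* Device-nGnRE */
	void __iomem *queue = ioremap_wc(queue_phys, len); /* Normal-NC (WC) */

	if (!regs || !queue)
		goto unmap;

	/* writel() assumes Device semantics (no merging, no speculation);
	 * issued against an NC mapping it malfunctions, as noted above. */
	writel(0x1, regs);

	/* __iowrite64_copy() is the WC-safe copy: on Normal-NC the stores
	 * may merge into larger PCIe write TLPs; on Device it is still
	 * functionally correct, just slower. Count is in 64-bit words. */
	__iowrite64_copy(queue, payload, len / sizeof(u64));

unmap:
	if (regs)
		iounmap(regs);
	if (queue)
		iounmap(queue);
}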
Hi Jason,

Thanks for all the details (some might be valuable to document in more
detail, but I'm not that experienced with all of the mapping types on
arm64, so it might "just be me").

> It is worse than some hand-wavy "side effect". If you map memory with
> NORMAL_NC (ie for write combining) then writel() doesn't work
> correctly at all.
>
> The memory must be mapped according to which kernel APIs the actual
> driver in the VM will use: writel() vs __iowrite64_copy().
>
>> We can trigger both cases right now inside VMs, where we want the
>> device driver to actually make the decision.
>
> Yes
>
>> (2) For a VM, that device driver lives inside the VM; for DPDK and
>> friends, it lives in user space. They have this information.
>
> Yes
>
>> We only focus here on optimizing (fixing?) the mapping for VMs, DPDK
>> is out of the picture.
>
> DPDK will be solved through some VFIO ioctl; we know how to do it,
> just nobody has cared enough to do it.

Good!

>> So we want to allow the VM to achieve a WC/NC mapping by using a
>> relaxed (NC) mapping in stage-1. Whatever is set in stage-2 wins.
>
> Yes
>
>> (3) vfio knows whether using WC (and NC?) could be problematic, and
>> must forbid it, if that is the case. There are cases where we could
>> otherwise cause harm (bring down the host?). We must keep mapping the
>> memory as DEVICE_nGnRE when in doubt.
>
> Yes, there is an unspecific fear that on ARM platforms using NORMAL_NC
> in the wrong way can trigger a catastrophic error and kill the host.
> There is no way to know if the platform has this bug, so the agreement
> was to be conservative and only allow it for vfio-pci, based on some
> specific details of how PCI has to be implemented and ARM guidance on
> PCI integration.
>
>> Now, what the new mmap() flag does is tell the world "using the wrong
>> mapping type cannot bring down the host", and KVM uses that to use a
>> different mapping type (NC) in stage-1 as setup by vfio in the user
>> space page tables.
>
> The inverse meaning: we assume VMAs without the flag can bring down
> the host, but yes.

Got it, will have a closer look at the patch soon.

>> I was trying to find ways of avoiding a mmap() flag and was hoping
>> that we could just use a PTE bit that does not have semantics in
>> VM_PFNMAP mappings. Unfortunately, arm64 does not support uffd-wp,
>> which I had in mind, so it's not that easy.
>
> Seems like a waste of a valuable PTE bit to me.

It would rather have been "it's already unused there, so let's reuse
it". But there was no such low-hanging fruit.

>> Further, I was wondering if there would be a way to let DPDK similarly
>> benefit, because it looks like we are happily ignoring that (I was
>> told they apply some hacks to work around that).
>
> dpdk doesn't need the VMA bit; we know how to solve it with vfio
> ioctls, it is very straightforward. dpdk just does an ioctl & mmap and
> VFIO will create a vma with pgprot_writecombine(). Completely trivial,
> the only nasty bit is fitting this into the VFIO uAPI.

That's what I thought.

>> (a) User space tells VFIO which parts of a BAR it would like to have
>> mapped differently. For QEMU, this would mean requesting a NC mapping
>> for the whole BAR. For DPDK, it could mean requesting different types
>> for parts of a BAR.
>
> We don't want to have the memory mapped as NC in qemu. As I said
> above, if it is mapped NC then writel() doesn't work. We can't have
> conflicting mappings that go toward NC when the right answer is
> DEVICE.

I was wondering who would trigger that, but as I read below it could
be MMIO emulation.

> writel() on NC will malfunction.
>
> __iowrite64_copy() on DEVICE will be functionally correct but slower.
>
> The S2 mapping that KVM creates is special because it doesn't actually
> map it once the VM kernel gets started. The VM kernel always supplies
> a S1 table that sets the correct type.
>
> So if qemu has DEVICE, the S2 has NC and the VM's S1 has DEVICE, then
> the mapping is reliably made to be DEVICE. The hidden S2 doesn't cause
> a problem.
>
>> That would mean that we would map NC already in QEMU. I wonder if
>> that could be a problem with read speculation, even if QEMU never
>> really accesses that mmap'ed region.
>
> Also correct.
>
> Further, qemu may need to do emulation for MMIO in various cases and
> the qemu logic for this requires a DEVICE mapping or the emulation
> will malfunction.
>
> Using NC in qemu is off the table.

Good, thanks for the details, all makes sense to me.
On Sun, Feb 11, 2024 at 11:17:01PM +0530, ankita@nvidia.com wrote:
> From: Ankit Agrawal <ankita@nvidia.com>
>
> Currently, KVM for ARM64 maps at stage 2 memory that is considered device
> with DEVICE_nGnRE memory attributes; this setting overrides (per
> ARM architecture [1]) any device MMIO mapping present at stage 1,
> resulting in a set-up whereby a guest operating system cannot
> determine device MMIO mapping memory attributes on its own but
> is always overridden by the KVM stage 2 default.
>
> This set-up does not allow guest operating systems to select device
> memory attributes independently from KVM stage-2 mappings
> (refer to [1], "Combining stage 1 and stage 2 memory type attributes"),
> which turns out to be an issue in that guest operating systems
> (e.g. Linux) may request to map device MMIO regions with memory
> attributes that guarantee better performance (e.g. the gathering
> attribute, which for some devices can generate larger PCIe memory
> write TLPs) and specific operations (e.g. unaligned transactions),
> such as the Normal-NC memory type.
>
> The default device stage 2 mapping was chosen in KVM for ARM64 since
> it was considered safer (i.e. it would not allow guests to trigger
> uncontained failures ultimately crashing the machine) but such
> failures turned out to be asynchronous (SError) anyway, defeating
> the purpose.
>
> For these reasons, relax the KVM stage 2 device memory attributes
> from DEVICE_nGnRE to Normal-NC.

Hi Ankit,

Thanks for being responsive in respinning the series according to the
feedback. I think we're pretty close here, but it'd be good to address
the comment / changelog feedback as well.

Can you respin this once more? Hopefully we can get this stuff soaking
in -next thereafter.
>> The default device stage 2 mapping was chosen in KVM for ARM64 since
>> it was considered safer (i.e. it would not allow guests to trigger
>> uncontained failures ultimately crashing the machine) but such
>> failures turned out to be asynchronous (SError) anyway, defeating
>> the purpose.
>>
>> For these reasons, relax the KVM stage 2 device memory attributes
>> from DEVICE_nGnRE to Normal-NC.
>
> Hi Ankit,
>
> Thanks for being responsive in respinning the series according to the
> feedback. I think we're pretty close here, but it'd be good to address
> the comment / changelog feedback as well.
>
> Can you respin this once more? Hopefully we can get this stuff soaking
> in -next thereafter.

Hi Oliver, yes, I am planning to refresh it in the next few days after
incorporating the comments.