Message ID | 20231230161954.569267-5-michael.roth@amd.com |
---|---|
State | New |
Headers | show |
Series | Add AMD Secure Nested Paging (SEV-SNP) Initialization Support |
Commit Message
Michael Roth
Dec. 30, 2023, 4:19 p.m. UTC
From: Brijesh Singh <brijesh.singh@amd.com>

The memory integrity guarantees of SEV-SNP are enforced through a new
structure called the Reverse Map Table (RMP). The RMP is a single data
structure shared across the system that contains one entry for every 4K
page of DRAM that may be used by SEV-SNP VMs. APM2 section 15.36 details
a number of steps needed to detect/enable SEV-SNP and RMP table support
on the host:

- Detect SEV-SNP support based on CPUID bit
- Initialize the RMP table memory reported by the RMP base/end MSR
  registers and configure IOMMU to be compatible with RMP access
  restrictions
- Set the MtrrFixDramModEn bit in SYSCFG MSR
- Set the SecureNestedPagingEn and VMPLEn bits in the SYSCFG MSR
- Configure IOMMU

RMP table entry format is non-architectural and it can vary by
processor. It is defined by the PPR. Restrict SNP support to CPU
models/families which are compatible with the current RMP table entry
format to guard against any undefined behavior when running on other
system types. Future models/support will handle this through an
architectural mechanism to allow for broader compatibility.

SNP host code depends on CONFIG_KVM_AMD_SEV config flag, which may be
enabled even when CONFIG_AMD_MEM_ENCRYPT isn't set, so update the
SNP-specific IOMMU helpers used here to rely on CONFIG_KVM_AMD_SEV
instead of CONFIG_AMD_MEM_ENCRYPT.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Co-developed-by: Ashish Kalra <ashish.kalra@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Co-developed-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Co-developed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
---
 arch/x86/Kbuild                  |   2 +
 arch/x86/include/asm/msr-index.h |  11 +-
 arch/x86/include/asm/sev.h       |   6 +
 arch/x86/kernel/cpu/amd.c        |  15 +++
 arch/x86/virt/svm/Makefile       |   3 +
 arch/x86/virt/svm/sev.c          | 219 +++++++++++++++++++++++++++++++
 6 files changed, 255 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/virt/svm/Makefile
 create mode 100644 arch/x86/virt/svm/sev.c
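To make the sizing rule enforced by snp_probe_rmptable_info() in the patch
concrete: every 4K page of DRAM needs one 16-byte RMP entry, plus a fixed 16KB
processor bookkeeping prefix at RMP_BASE. A minimal stand-alone sketch of that
arithmetic (the constants mirror the patch; rmp_table_min_size() is an
illustrative name, not a kernel function):

#include <stdint.h>
#include <stdio.h>

#define RMP_ENTRY_SIZE              16      /* sizeof(struct rmpentry): two u64s */
#define RMPTABLE_CPU_BOOKKEEPING_SZ 0x4000  /* 16KB reserved by the processor */

/* Illustrative only: minimum RMP reservation needed to cover max_pfn pages. */
static uint64_t rmp_table_min_size(uint64_t max_pfn)
{
	/* (max_pfn << 4) in the patch is max_pfn * 16 bytes per entry */
	return max_pfn * RMP_ENTRY_SIZE + RMPTABLE_CPU_BOOKKEEPING_SZ;
}

int main(void)
{
	/* e.g. 1TB of RAM -> 256M pages -> 4GB of RMP entries + 16KB bookkeeping */
	printf("%llu bytes\n", (unsigned long long)rmp_table_min_size(1ULL << 28));
	return 0;
}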
Comments
On 30/12/2023 17:19, Michael Roth wrote:
> From: Brijesh Singh <brijesh.singh@amd.com>
>
> The memory integrity guarantees of SEV-SNP are enforced through a new
> structure called the Reverse Map Table (RMP). [...]

[...]

> +	if (cpu_has(c, X86_FEATURE_SEV_SNP)) {
> +		/*
> +		 * RMP table entry format is not architectural and it can vary by processor
> +		 * and is defined by the per-processor PPR. Restrict SNP support on the
> +		 * known CPU model and family for which the RMP table entry format is
> +		 * currently defined for.
> +		 */
> +		if (!(c->x86 == 0x19 && c->x86_model <= 0xaf) &&
> +		    !(c->x86 == 0x1a && c->x86_model <= 0xf))
> +			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
> +		else if (!snp_probe_rmptable_info())
> +			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);

Is there a really good reason to perform the snp_probe_rmptable_info() check
at this point (instead of in snp_rmptable_init())? snp_rmptable_init() will
also clear the cap on failure, and bsp_init_amd() runs too early to allow for
the kernel to allocate the rmptable itself. I pointed out in the previous
review that kernel allocation of the rmptable is necessary in SNP-host capable
VMs in Azure.

> +	}

[...]

> +static int __init snp_rmptable_init(void)
> +{
> +	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
> +		return 0;
> +
> +	if (!amd_iommu_snp_en)
> +		return 0;

Looks better - do you think it'll be OK to add a X86_FEATURE_HYPERVISOR check
at this point later to account for SNP-host capable VMs with no access to an
IOMMU?

Jeremi
On Sat, Dec 30, 2023 at 10:19:32AM -0600, Michael Roth wrote:
> From: Brijesh Singh <brijesh.singh@amd.com>
>
> The memory integrity guarantees of SEV-SNP are enforced through a new
> structure called the Reverse Map Table (RMP). [...]

Small fixups to the commit message:

The memory integrity guarantees of SEV-SNP are enforced through a new
structure called the Reverse Map Table (RMP). The RMP is a single data
structure shared across the system that contains one entry for every 4K
page of DRAM that may be used by SEV-SNP VMs. The APM v2 section on
Secure Nested Paging (SEV-SNP) details a number of steps needed to
detect/enable SEV-SNP and RMP table support on the host:

- Detect SEV-SNP support based on CPUID bit
- Initialize the RMP table memory reported by the RMP base/end MSR
  registers and configure IOMMU to be compatible with RMP access
  restrictions
- Set the MtrrFixDramModEn bit in SYSCFG MSR
- Set the SecureNestedPagingEn and VMPLEn bits in the SYSCFG MSR
- Configure IOMMU

The RMP table entry format is non-architectural and it can vary by
processor. It is defined by the PPR document for each respective CPU
family. Restrict SNP support to CPU models/families which are compatible
with the current RMP table entry format to guard against any undefined
behavior when running on other system types. Future models/support will
handle this through an architectural mechanism to allow for broader
compatibility.

The SNP host code depends on CONFIG_KVM_AMD_SEV config flag which may be
enabled even when CONFIG_AMD_MEM_ENCRYPT isn't set, so update the
SNP-specific IOMMU helpers used here to rely on CONFIG_KVM_AMD_SEV
instead of CONFIG_AMD_MEM_ENCRYPT.

> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index f1bd7b91b3c6..15ce1269f270 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> [...]

Fix the vertical alignment:

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 15ce1269f270..f482bc6a5ae7 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -710,14 +710,14 @@
 #define MSR_K8_TOP_MEM1			0xc001001a
 #define MSR_K8_TOP_MEM2			0xc001001d
 #define MSR_AMD64_SYSCFG		0xc0010010
-#define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23
-#define MSR_AMD64_SYSCFG_MEM_ENCRYPT	BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT)
-#define MSR_AMD64_SYSCFG_SNP_EN_BIT	24
+#define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT	23
+#define MSR_AMD64_SYSCFG_MEM_ENCRYPT		BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT)
+#define MSR_AMD64_SYSCFG_SNP_EN_BIT		24
 #define MSR_AMD64_SYSCFG_SNP_EN		BIT_ULL(MSR_AMD64_SYSCFG_SNP_EN_BIT)
-#define MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT 25
-#define MSR_AMD64_SYSCFG_SNP_VMPL_EN	BIT_ULL(MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT)
-#define MSR_AMD64_SYSCFG_MFDM_BIT	19
-#define MSR_AMD64_SYSCFG_MFDM		BIT_ULL(MSR_AMD64_SYSCFG_MFDM_BIT)
+#define MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT	25
+#define MSR_AMD64_SYSCFG_SNP_VMPL_EN		BIT_ULL(MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT)
+#define MSR_AMD64_SYSCFG_MFDM_BIT		19
+#define MSR_AMD64_SYSCFG_MFDM			BIT_ULL(MSR_AMD64_SYSCFG_MFDM_BIT)
 #define MSR_K8_INT_PENDING_MSG		0xc0010055
 /* C1E active bits in int pending message */
On Sat, Dec 30, 2023 at 10:19:32AM -0600, Michael Roth wrote:
> +	if (cpu_has(c, X86_FEATURE_SEV_SNP)) {
> +		/*
> +		 * RMP table entry format is not architectural and it can vary by processor
> +		 * and is defined by the per-processor PPR. Restrict SNP support on the
> +		 * known CPU model and family for which the RMP table entry format is
> +		 * currently defined for.
> +		 */
> +		if (!(c->x86 == 0x19 && c->x86_model <= 0xaf) &&
> +		    !(c->x86 == 0x1a && c->x86_model <= 0xf))
> +			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
> +		else if (!snp_probe_rmptable_info())
> +			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
> +	}

IOW, this below. Lemme send the ZEN5 thing as a separate patch.

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 9492dcad560d..0fa702673e73 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -81,10 +81,8 @@
 #define X86_FEATURE_K6_MTRR		( 3*32+ 1) /* AMD K6 nonstandard MTRRs */
 #define X86_FEATURE_CYRIX_ARR		( 3*32+ 2) /* Cyrix ARRs (= MTRRs) */
 #define X86_FEATURE_CENTAUR_MCR	( 3*32+ 3) /* Centaur MCRs (= MTRRs) */
-
-/* CPU types for specific tunings: */
 #define X86_FEATURE_K8			( 3*32+ 4) /* "" Opteron, Athlon64 */
-/* FREE, was #define X86_FEATURE_K7 ( 3*32+ 5) "" Athlon */
+#define X86_FEATURE_ZEN5		( 3*32+ 5) /* "" CPU based on Zen5 microarchitecture */
 #define X86_FEATURE_P3			( 3*32+ 6) /* "" P3 */
 #define X86_FEATURE_P4			( 3*32+ 7) /* "" P4 */
 #define X86_FEATURE_CONSTANT_TSC	( 3*32+ 8) /* TSC ticks at a constant rate */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 0f0d425f0440..46335c2df083 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -539,7 +539,7 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 
 	/* Figure out Zen generations: */
 	switch (c->x86) {
-	case 0x17: {
+	case 0x17:
 		switch (c->x86_model) {
 		case 0x00 ... 0x2f:
 		case 0x50 ... 0x5f:
@@ -555,8 +555,8 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 			goto warn;
 		}
 		break;
-	}
-	case 0x19: {
+
+	case 0x19:
 		switch (c->x86_model) {
 		case 0x00 ... 0x0f:
 		case 0x20 ... 0x5f:
@@ -570,20 +570,31 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 			goto warn;
 		}
 		break;
-	}
+
+	case 0x1a:
+		switch (c->x86_model) {
+		case 0x00 ... 0x0f:
+			setup_force_cpu_cap(X86_FEATURE_ZEN5);
+			break;
+		default:
+			goto warn;
+		}
+		break;
+
 	default:
 		break;
 	}
 
 	if (cpu_has(c, X86_FEATURE_SEV_SNP)) {
 		/*
-		 * RMP table entry format is not architectural and it can vary by processor
+		 * RMP table entry format is not architectural, can vary by processor
 		 * and is defined by the per-processor PPR. Restrict SNP support on the
 		 * known CPU model and family for which the RMP table entry format is
 		 * currently defined for.
 		 */
-		if (!(c->x86 == 0x19 && c->x86_model <= 0xaf) &&
-		    !(c->x86 == 0x1a && c->x86_model <= 0xf))
+		if (!boot_cpu_has(X86_FEATURE_ZEN3) &&
+		    !boot_cpu_has(X86_FEATURE_ZEN4) &&
+		    !boot_cpu_has(X86_FEATURE_ZEN5))
 			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
 		else if (!snp_probe_rmptable_info())
 			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
@@ -1055,6 +1066,11 @@ static void init_amd_zen4(struct cpuinfo_x86 *c)
 		msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT);
 }
 
+static void init_amd_zen5(struct cpuinfo_x86 *c)
+{
+	init_amd_zen_common();
+}
+
 static void init_amd(struct cpuinfo_x86 *c)
 {
 	u64 vm_cr;
@@ -1100,6 +1116,8 @@ static void init_amd(struct cpuinfo_x86 *c)
 		init_amd_zen3(c);
 	else if (boot_cpu_has(X86_FEATURE_ZEN4))
 		init_amd_zen4(c);
+	else if (boot_cpu_has(X86_FEATURE_ZEN5))
+		init_amd_zen5(c);
 
 	/*
 	 * Enable workaround for FXSAVE leak on CPUs
On Thu, Jan 04, 2024 at 12:05:27PM +0100, Jeremi Piotrowski wrote:
> Is there a really good reason to perform the snp_probe_rmptable_info() check
> at this point (instead of in snp_rmptable_init)? snp_rmptable_init will also
> clear the cap on failure, and bsp_init_amd() runs too early to allow for the
> kernel to allocate the rmptable itself. I pointed out in the previous review
> that kernel allocation of the rmptable is necessary in SNP-host capable VMs
> in Azure.

What does that even mean?

That function is doing some calculations after reading two MSRs. What can
possibly go wrong?!
On Fri, Jan 05, 2024 at 05:09:16PM +0100, Borislav Petkov wrote:
> On Thu, Jan 04, 2024 at 12:05:27PM +0100, Jeremi Piotrowski wrote:
> > Is there a really good reason to perform the snp_probe_rmptable_info()
> > check at this point (instead of in snp_rmptable_init)? [...]
>
> What does that even mean?
>
> That function is doing some calculations after reading two MSRs. What
> can possibly go wrong?!

That could be one reason perhaps:

"It needs to be called early enough to allow for AutoIBRS to not be disabled
just because SNP is supported. By calling it where it is currently called,
the SNP feature can be cleared if, even though supported, SNP can't be used,
allowing AutoIBRS to be used as a more performant Spectre mitigation."

https://lore.kernel.org/r/8ec38db1-5ccf-4684-bc0d-d48579ebf0d0@amd.com
On Sat, Dec 30, 2023 at 10:19:32AM -0600, Michael Roth wrote:
> +static int __init __snp_rmptable_init(void)
> +{
> +	u64 rmptable_size;
> +	void *rmptable_start;
> +	u64 val;

...

Ontop:

diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index ce7ede9065ed..566bb6f39665 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -150,6 +150,11 @@ bool snp_probe_rmptable_info(void)
 	return true;
 }
 
+/*
+ * Do the necessary preparations which are verified by the firmware as
+ * described in the SNP_INIT_EX firmware command description in the SNP
+ * firmware ABI spec.
+ */
 static int __init __snp_rmptable_init(void)
 {
 	u64 rmptable_size;
On Sat, Dec 30, 2023 at 10:19:32AM -0600, Michael Roth wrote:
> +static int __init __snp_rmptable_init(void)

I already asked a year ago:

https://lore.kernel.org/all/Y9ubi0i4Z750gdMm@zn.tnic/

why is the __ version - __snp_rmptable_init - carved out but crickets. It
simply gets ignored. :-\

So let me do it myself, diff below.

Please add to the next version:

Co-developed-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>

after incorporating all the changes.

Thx.

---
diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index 566bb6f39665..feed65f80776 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -155,19 +155,25 @@ bool snp_probe_rmptable_info(void)
  * described in the SNP_INIT_EX firmware command description in the SNP
  * firmware ABI spec.
  */
-static int __init __snp_rmptable_init(void)
+static int __init snp_rmptable_init(void)
 {
-	u64 rmptable_size;
 	void *rmptable_start;
+	u64 rmptable_size;
 	u64 val;
 
+	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
+		return 0;
+
+	if (!amd_iommu_snp_en)
+		return 0;
+
 	if (!probed_rmp_size)
-		return 1;
+		goto nosnp;
 
 	rmptable_start = memremap(probed_rmp_base, probed_rmp_size, MEMREMAP_WB);
 	if (!rmptable_start) {
 		pr_err("Failed to map RMP table\n");
-		return 1;
+		goto nosnp;
 	}
 
 	/*
@@ -195,20 +201,6 @@ static int __init __snp_rmptable_init(void)
 	rmptable = (struct rmpentry *)rmptable_start;
 	rmptable_max_pfn = rmptable_size / sizeof(struct rmpentry) - 1;
 
-	return 0;
-}
-
-static int __init snp_rmptable_init(void)
-{
-	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
-		return 0;
-
-	if (!amd_iommu_snp_en)
-		return 0;
-
-	if (__snp_rmptable_init())
-		goto nosnp;
-
 	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/rmptable_init:online", __snp_enable, NULL);
 
 	return 0;
On 05/01/2024 17:21, Borislav Petkov wrote:
> On Fri, Jan 05, 2024 at 05:09:16PM +0100, Borislav Petkov wrote:
>> On Thu, Jan 04, 2024 at 12:05:27PM +0100, Jeremi Piotrowski wrote:
>>> Is there a really good reason to perform the snp_probe_rmptable_info()
>>> check at this point (instead of in snp_rmptable_init)? snp_rmptable_init
>>> will also clear the cap on failure, and bsp_init_amd() runs too early to
>>> allow for the kernel to allocate the rmptable itself. I pointed out in
>>> the previous review that kernel allocation of the rmptable is necessary
>>> in SNP-host capable VMs in Azure.
>>
>> What does that even mean?
>>
>> That function is doing some calculations after reading two MSRs. What
>> can possibly go wrong?!

What I wrote: "allow for the kernel to allocate the rmptable".

Until the kernel allocates a rmptable, the two MSRs are not initialized in a
VM. This is specific to SNP-host VMs because they don't have access to the
system-wide rmptable (or a virtualized version of it), and the rmptable is
only useful for kernel-internal tracking in this case. So we don't strictly
need one and could save the overhead, but not having one would complicate the
KVM SNP code, so I'd rather allocate one for now.

It makes most sense to perform the rmptable allocation later in kernel init,
after platform detection and e820 setup. It isn't really used until
device_initcall.

https://lore.kernel.org/lkml/20230213103402.1189285-2-jpiotrowski@linux.microsoft.com/

(I'll be posting updated patches soon).

> That could be one reason perhaps:
>
> "It needs to be called early enough to allow for AutoIBRS to not be disabled
> just because SNP is supported. By calling it where it is currently called,
> the SNP feature can be cleared if, even though supported, SNP can't be used,
> allowing AutoIBRS to be used as a more performant Spectre mitigation."
>
> https://lore.kernel.org/r/8ec38db1-5ccf-4684-bc0d-d48579ebf0d0@amd.com

This logic seems twisted. Why use firmware rmptable allocation as a proxy for
SEV-SNP enablement if BIOS provides an explicit flag to enable/disable SEV-SNP
support? That would be a better signal to use to control AutoIBRS enablement.
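For reference, a rough sketch of what such a kernel-side allocation could look
like once memblock and max_pfn are available. snp_alloc_rmptable() is
hypothetical (not part of the posted patches), and the details it glosses over
(alignment, per-CPU MSR programming, failure policy) are exactly the open
questions discussed below:

#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/sizes.h>
#include <asm/msr.h>
#include <asm/msr-index.h>

#define RMPTABLE_CPU_BOOKKEEPING_SZ	0x4000	/* 16KB bookkeeping prefix */

/*
 * Hypothetical sketch: allocate an RMP covering all of RAM and point
 * RMP_BASE/RMP_END at it. In an SNP-host VM the table is purely kernel
 * bookkeeping; on bare metal the MSRs would additionally have to be
 * programmed identically on every core before SNP is enabled globally
 * (APM 15.36.4), and the writes verified.
 */
static int __init snp_alloc_rmptable(void)
{
	u64 size = (max_pfn << 4) + RMPTABLE_CPU_BOOKKEEPING_SZ;
	phys_addr_t base;

	/* RMP_ADDR_MASK covers bits 51:13; use a comfortably large alignment. */
	base = memblock_phys_alloc(PAGE_ALIGN(size), SZ_1M);
	if (!base)
		return -ENOMEM;

	wrmsrl(MSR_AMD64_RMP_BASE, base);
	wrmsrl(MSR_AMD64_RMP_END, base + size - 1);

	return 0;
}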
On Mon, Jan 08, 2024 at 05:49:01PM +0100, Jeremi Piotrowski wrote:
> What I wrote: "allow for the kernel to allocate the rmptable".
What?!
"15.36.5 Hypervisor RMP Management
...
Because the RMP is initialized by the AMD-SP to prevent direct access to
the RMP, the hypervisor must use the RMPUPDATE instruction to alter the
entries of the RMP. RMPUPDATE allows the hypervisor to alter the
Guest_Physical_Address, Assigned, Page_Size, Immutable, and ASID fields
of an RMP entry."
What you want is something that you should keep far and away from the
upstream kernel.
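To ground the quoted paragraph: RMPUPDATE takes the system physical address of
the target page in rAX and a pointer to the desired RMP state in rCX. A sketch
of how a hypervisor issues it from C - the opcode bytes and the rmp_state
layout here are assumptions drawn from the PPR/firmware ABI documents, not
from this patch:

#include <linux/types.h>
#include <asm/page.h>

/* Assumed layout of the RMPUPDATE operand; check the PPR/ABI spec. */
struct rmp_state {
	u64 gpa;
	u8 assigned;
	u8 pagesize;
	u8 immutable;
	u8 rsvd;
	u32 asid;
} __packed;

/* Sketch: issue RMPUPDATE for one PFN; returns the instruction's status. */
static int rmpupdate(u64 pfn, struct rmp_state *state)
{
	unsigned long paddr = pfn << PAGE_SHIFT;
	int ret;

	/* RMPUPDATE = F2 0F 01 FE; older binutils lack the mnemonic. */
	asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
		     : "=a" (ret)
		     : "a" (paddr), "c" ((unsigned long)state)
		     : "memory", "cc");

	return ret;
}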
On 08/01/2024 18:04, Borislav Petkov wrote:
> On Mon, Jan 08, 2024 at 05:49:01PM +0100, Jeremi Piotrowski wrote:
>> What I wrote: "allow for the kernel to allocate the rmptable".
>
> What?!
>
> "15.36.5 Hypervisor RMP Management
>
> ...
>
> Because the RMP is initialized by the AMD-SP to prevent direct access to
> the RMP, the hypervisor must use the RMPUPDATE instruction to alter the
> entries of the RMP. RMPUPDATE allows the hypervisor to alter the
> Guest_Physical_Address, Assigned, Page_Size, Immutable, and ASID fields
> of an RMP entry."
>
> What you want is something that you should keep far and away from the
> upstream kernel.

Can we please not assume I am acting in bad faith. I am explicitly trying to
integrate nicely with AMD's KVM SNP host patches to cover an additional
usecase and get something upstreamable.

Let's separate RMP allocation from who (and how) maintains the entries.

"""
15.36.4 Initializing the RMP
..
Software must program RMP_BASE and RMP_END identically for each core in the
system and before enabling SEV-SNP globally.
"""

KVM expects UEFI to do this, Hyper-V does the allocation itself (on
bare-metal). Both are valid. Afaik it is the SNP_INIT command that hands over
control of the RMP from software to the AMD-SP.

When it comes to "who and how maintains the RMP" - that is of course the
AMD-SP, and the hypervisor issues RMPUPDATE instructions. The paragraph you
cite talks about the physical RMP and the AMD-SP - not virtualized SNP (aka
"SNP-host VM"/nested SNP). AMD specified an MSR-based RMPUPDATE for us for
that usecase (15.36.19 SEV-SNP Instruction Virtualization). The RMP inside
the SNP-host VM is not related to the physical RMP and is an entirely
software-based construct.

The RMP in nested SNP is only used for kernel bookkeeping and so its
allocation is optional. KVM could do without reading the RMP directly
altogether (by tracking the assigned bit somewhere) but that would be a
design change and I'd rather see the KVM SNP host patches merged in their
current shape. Which is why the patch I linked allocates a (shadow) RMP from
the kernel.

I would very much appreciate if we would not prevent that usecase from
working - that's why I've been reviewing and testing multiple revisions of
these patches and providing feedback all along.
On Tue, Jan 09, 2024 at 12:56:17PM +0100, Jeremi Piotrowski wrote:
> Can we please not assume I am acting in bad faith.

No, you're not acting in bad faith.

What you're doing, in my experience so far, is: you come with some weird HV +
guest models which have been invented somewhere behind closed doors, then you
come with some desire that the upstream kernel should support it, and you're
not even documenting it properly, and I'm left with asking questions all the
time - what is this, what's the use case, blabla.

Don't take this personally - I guess this is all due to NDAs, development
schedules, and whatever else and yes, I've heard it all.

But just because you want this, we're not going to jump on it and support it
unconditionally. It needs to integrate properly with the rest of the kernel
and if it doesn't, it is not going upstream. That simple.

> I am explicitly trying to integrate nicely with AMD's KVM SNP host
> patches to cover an additional usecase and get something upstreamable.

And yet I still have no clue what your use case is. I always have to go ask
behind the scenes and get some half-answers about *maybe* this is what they
support.

Looking at the patch you pointed at, I see there a proper explanation of your
nested SNP stuff. Finally!

From now on, please make sure your use case is properly explained before you
come with patches.

> The RMP in nested SNP is only used for kernel bookkeeping and so its
> allocation is optional. KVM could do without reading the RMP directly
> altogether (by tracking the assigned bit somewhere) but that would be
> a design change and I'd rather see the KVM SNP host patches merged in
> their current shape. Which is why the patch I linked allocates
> a (shadow) RMP from the kernel.

At least three issues I see with that:

- the allocation can fail so it is a lot more convenient when the
  firmware prepares it

- the RMP_BASE and RMP_END writes need to be verified they actually did
  set up the RMP range because if they haven't, you might as well
  throw SNP security out of the window. In general, letting the kernel
  do the RMP allocation needs to be verified very very thoroughly.

- a future feature might make this more complicated

> I would very much appreciate if we would not prevent that usecase from
> working - that's why I've been reviewing and testing multiple
> revisions of these patches and providing feedback all along.

I very much appreciate the help but we need to get the main SNP host stuff in
first and then we can talk about modifications.
On Tue, Jan 09, 2024 at 01:29:06PM +0100, Borislav Petkov wrote:
> At least three issues I see with that:
>
> - the allocation can fail so it is a lot more convenient when the
>   firmware prepares it
>
> - the RMP_BASE and RMP_END writes need to be verified they actually did
>   set up the RMP range because if they haven't, you might as well
>   throw SNP security out of the window. In general, letting the kernel
>   do the RMP allocation needs to be verified very very thoroughly.
>
> - a future feature might make this more complicated

- What do you do if you boot on a system which has the RMP already
  allocated in the BIOS?

- How do you detect that it is the L1 kernel that must allocate the RMP?

- Why can't you use the BIOS allocated RMP in your scenario too instead
  of the L1 kernel allocating it?

- ...

I might think of more.
On 09/01/2024 13:44, Borislav Petkov wrote:
> - What do you do if you boot on a system which has the RMP already
>   allocated in the BIOS?
>
> - How do you detect that it is the L1 kernel that must allocate the RMP?
>
> - Why can't you use the BIOS allocated RMP in your scenario too instead
>   of the L1 kernel allocating it?
>
> - ...
>
> I might think of more.

Sorry for not replying back sooner. I agree, let's get the base SNP stuff in
and then talk about extensions. I want to sync up with Michael to make sure
he's onboard with what I'm proposing.

I'll add more design/documentation/usecase descriptions with the next
submission and will make sure to address all the issues you brought up.

Jeremi
diff --git a/arch/x86/Kbuild b/arch/x86/Kbuild
index 5a83da703e87..6a1f36df6a18 100644
--- a/arch/x86/Kbuild
+++ b/arch/x86/Kbuild
@@ -28,5 +28,7 @@ obj-y += net/
 
 obj-$(CONFIG_KEXEC_FILE) += purgatory/
 
+obj-y += virt/svm/
+
 # for cleaning
 subdir- += boot tools
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index f1bd7b91b3c6..15ce1269f270 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -599,6 +599,8 @@
 #define MSR_AMD64_SEV_ENABLED		BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT)
 #define MSR_AMD64_SEV_ES_ENABLED	BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT)
 #define MSR_AMD64_SEV_SNP_ENABLED	BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT)
+#define MSR_AMD64_RMP_BASE		0xc0010132
+#define MSR_AMD64_RMP_END		0xc0010133
 
 /* SNP feature bits enabled by the hypervisor */
 #define MSR_AMD64_SNP_VTOM		BIT_ULL(3)
@@ -709,7 +711,14 @@
 #define MSR_K8_TOP_MEM2			0xc001001d
 #define MSR_AMD64_SYSCFG		0xc0010010
 #define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23
-#define MSR_AMD64_SYSCFG_MEM_ENCRYPT	BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT)
+#define MSR_AMD64_SYSCFG_MEM_ENCRYPT	BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT)
+#define MSR_AMD64_SYSCFG_SNP_EN_BIT	24
+#define MSR_AMD64_SYSCFG_SNP_EN		BIT_ULL(MSR_AMD64_SYSCFG_SNP_EN_BIT)
+#define MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT 25
+#define MSR_AMD64_SYSCFG_SNP_VMPL_EN	BIT_ULL(MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT)
+#define MSR_AMD64_SYSCFG_MFDM_BIT	19
+#define MSR_AMD64_SYSCFG_MFDM		BIT_ULL(MSR_AMD64_SYSCFG_MFDM_BIT)
+
 #define MSR_K8_INT_PENDING_MSG		0xc0010055
 /* C1E active bits in int pending message */
 #define K8_INTP_C1E_ACTIVE_MASK		0x18000000
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 5b4a1ce3d368..1f59d8ba9776 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -243,4 +243,10 @@ static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
 static inline u64 sev_get_status(void) { return 0; }
 #endif
 
+#ifdef CONFIG_KVM_AMD_SEV
+bool snp_probe_rmptable_info(void);
+#else
+static inline bool snp_probe_rmptable_info(void) { return false; }
+#endif
+
 #endif
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 9a17165dfe84..0f0d425f0440 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -20,6 +20,7 @@
 #include <asm/delay.h>
 #include <asm/debugreg.h>
 #include <asm/resctrl.h>
+#include <asm/sev.h>
 
 #ifdef CONFIG_X86_64
 # include <asm/mmconfig.h>
@@ -574,6 +575,20 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 		break;
 	}
 
+	if (cpu_has(c, X86_FEATURE_SEV_SNP)) {
+		/*
+		 * RMP table entry format is not architectural and it can vary by processor
+		 * and is defined by the per-processor PPR. Restrict SNP support on the
+		 * known CPU model and family for which the RMP table entry format is
+		 * currently defined for.
+		 */
+		if (!(c->x86 == 0x19 && c->x86_model <= 0xaf) &&
+		    !(c->x86 == 0x1a && c->x86_model <= 0xf))
+			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
+		else if (!snp_probe_rmptable_info())
+			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
+	}
+
 	return;
 
 warn:
diff --git a/arch/x86/virt/svm/Makefile b/arch/x86/virt/svm/Makefile
new file mode 100644
index 000000000000..ef2a31bdcc70
--- /dev/null
+++ b/arch/x86/virt/svm/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_KVM_AMD_SEV) += sev.o
diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
new file mode 100644
index 000000000000..ce7ede9065ed
--- /dev/null
+++ b/arch/x86/virt/svm/sev.c
@@ -0,0 +1,219 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD SVM-SEV Host Support.
+ *
+ * Copyright (C) 2023 Advanced Micro Devices, Inc.
+ *
+ * Author: Ashish Kalra <ashish.kalra@amd.com>
+ *
+ */
+
+#include <linux/cc_platform.h>
+#include <linux/printk.h>
+#include <linux/mm_types.h>
+#include <linux/set_memory.h>
+#include <linux/memblock.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/cpumask.h>
+#include <linux/iommu.h>
+#include <linux/amd-iommu.h>
+
+#include <asm/sev.h>
+#include <asm/processor.h>
+#include <asm/setup.h>
+#include <asm/svm.h>
+#include <asm/smp.h>
+#include <asm/cpu.h>
+#include <asm/apic.h>
+#include <asm/cpuid.h>
+#include <asm/cmdline.h>
+#include <asm/iommu.h>
+
+/*
+ * The RMP entry format is not architectural. The format is defined in PPR
+ * Family 19h Model 01h, Rev B1 processor.
+ */
+struct rmpentry {
+	u64 assigned	: 1,
+	    pagesize	: 1,
+	    immutable	: 1,
+	    rsvd1	: 9,
+	    gpa		: 39,
+	    asid	: 10,
+	    vmsa	: 1,
+	    validated	: 1,
+	    rsvd2	: 1;
+	u64 rsvd3;
+} __packed;
+
+/*
+ * The first 16KB from the RMP_BASE is used by the processor for the
+ * bookkeeping, the range needs to be added during the RMP entry lookup.
+ */
+#define RMPTABLE_CPU_BOOKKEEPING_SZ	0x4000
+
+static u64 probed_rmp_base, probed_rmp_size;
+static struct rmpentry *rmptable __ro_after_init;
+static u64 rmptable_max_pfn __ro_after_init;
+
+#undef pr_fmt
+#define pr_fmt(fmt)	"SEV-SNP: " fmt
+
+static int __mfd_enable(unsigned int cpu)
+{
+	u64 val;
+
+	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
+		return 0;
+
+	rdmsrl(MSR_AMD64_SYSCFG, val);
+
+	val |= MSR_AMD64_SYSCFG_MFDM;
+
+	wrmsrl(MSR_AMD64_SYSCFG, val);
+
+	return 0;
+}
+
+static __init void mfd_enable(void *arg)
+{
+	__mfd_enable(smp_processor_id());
+}
+
+static int __snp_enable(unsigned int cpu)
+{
+	u64 val;
+
+	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
+		return 0;
+
+	rdmsrl(MSR_AMD64_SYSCFG, val);
+
+	val |= MSR_AMD64_SYSCFG_SNP_EN;
+	val |= MSR_AMD64_SYSCFG_SNP_VMPL_EN;
+
+	wrmsrl(MSR_AMD64_SYSCFG, val);
+
+	return 0;
+}
+
+static __init void snp_enable(void *arg)
+{
+	__snp_enable(smp_processor_id());
+}
+
+#define RMP_ADDR_MASK GENMASK_ULL(51, 13)
+
+bool snp_probe_rmptable_info(void)
+{
+	u64 max_rmp_pfn, calc_rmp_sz, rmp_sz, rmp_base, rmp_end;
+
+	rdmsrl(MSR_AMD64_RMP_BASE, rmp_base);
+	rdmsrl(MSR_AMD64_RMP_END, rmp_end);
+
+	if (!(rmp_base & RMP_ADDR_MASK) || !(rmp_end & RMP_ADDR_MASK)) {
+		pr_err("Memory for the RMP table has not been reserved by BIOS\n");
+		return false;
+	}
+
+	if (rmp_base > rmp_end) {
+		pr_err("RMP configuration not valid: base=%#llx, end=%#llx\n", rmp_base, rmp_end);
+		return false;
+	}
+
+	rmp_sz = rmp_end - rmp_base + 1;
+
+	/*
+	 * Calculate the amount the memory that must be reserved by the BIOS to
+	 * address the whole RAM, including the bookkeeping area. The RMP itself
+	 * must also be covered.
+	 */
+	max_rmp_pfn = max_pfn;
+	if (PHYS_PFN(rmp_end) > max_pfn)
+		max_rmp_pfn = PHYS_PFN(rmp_end);
+
+	calc_rmp_sz = (max_rmp_pfn << 4) + RMPTABLE_CPU_BOOKKEEPING_SZ;
+
+	if (calc_rmp_sz > rmp_sz) {
+		pr_err("Memory reserved for the RMP table does not cover full system RAM (expected 0x%llx got 0x%llx)\n",
+		       calc_rmp_sz, rmp_sz);
+		return false;
+	}
+
+	probed_rmp_base = rmp_base;
+	probed_rmp_size = rmp_sz;
+
+	pr_info("RMP table physical range [0x%016llx - 0x%016llx]\n",
+		probed_rmp_base, probed_rmp_base + probed_rmp_size - 1);
+
+	return true;
+}
+
+static int __init __snp_rmptable_init(void)
+{
+	u64 rmptable_size;
+	void *rmptable_start;
+	u64 val;
+
+	if (!probed_rmp_size)
+		return 1;
+
+	rmptable_start = memremap(probed_rmp_base, probed_rmp_size, MEMREMAP_WB);
+	if (!rmptable_start) {
+		pr_err("Failed to map RMP table\n");
+		return 1;
+	}
+
+	/*
+	 * Check if SEV-SNP is already enabled, this can happen in case of
+	 * kexec boot.
+	 */
+	rdmsrl(MSR_AMD64_SYSCFG, val);
+	if (val & MSR_AMD64_SYSCFG_SNP_EN)
+		goto skip_enable;
+
+	memset(rmptable_start, 0, probed_rmp_size);
+
+	/* Flush the caches to ensure that data is written before SNP is enabled. */
+	wbinvd_on_all_cpus();
+
+	/* MtrrFixDramModEn must be enabled on all the CPUs prior to enabling SNP. */
+	on_each_cpu(mfd_enable, NULL, 1);
+
+	on_each_cpu(snp_enable, NULL, 1);
+
+skip_enable:
+	rmptable_start += RMPTABLE_CPU_BOOKKEEPING_SZ;
+	rmptable_size = probed_rmp_size - RMPTABLE_CPU_BOOKKEEPING_SZ;
+
+	rmptable = (struct rmpentry *)rmptable_start;
+	rmptable_max_pfn = rmptable_size / sizeof(struct rmpentry) - 1;
+
+	return 0;
+}
+
+static int __init snp_rmptable_init(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
+		return 0;
+
+	if (!amd_iommu_snp_en)
+		return 0;
+
+	if (__snp_rmptable_init())
+		goto nosnp;
+
+	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/rmptable_init:online", __snp_enable, NULL);
+
+	return 0;
+
+nosnp:
+	setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
+	return -ENOSYS;
+}
+
+/*
+ * This must be called after the IOMMU has been initialized.
+ */
+device_initcall(snp_rmptable_init);
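With rmptable and rmptable_max_pfn initialized as above, resolving a PFN to
its RMP entry is plain array indexing past the 16KB bookkeeping prefix. This
patch only sets the table up; the accessor below is a hypothetical
illustration of the intended use, not code from the series:

/*
 * Hypothetical illustration: resolve a PFN to its raw RMP entry, or NULL if
 * SNP is disabled or the PFN lies beyond what the RMP table covers.
 */
static struct rmpentry *get_raw_rmpentry(u64 pfn)
{
	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
		return NULL;

	if (unlikely(pfn > rmptable_max_pfn))
		return NULL;

	return &rmptable[pfn];
}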