From patchwork Thu Dec 8 15:29:09 2022
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 31417
X-Mailing-List: linux-kernel@vger.kernel.org
From: Tom Lendacky
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
    Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v6 1/5] x86/sev: Fix calculation of end address based on number of pages
Date: Thu, 8 Dec 2022 09:29:09 -0600
References: <20221207014933.8435-1-kirill.shutemov@linux.intel.com>

When calculating an end address based on an unsigned int number of pages, any
value greater than or equal to 0x100000 that is shifted left by PAGE_SHIFT
bits wraps to 0 in the 32-bit calculation, resulting in an invalid end
address.

Change the number of pages variable in the various routines from an unsigned
int to an unsigned long so that the end address is calculated correctly.

Fixes: 5e5ccff60a29 ("x86/sev: Add helper for validating pages in early enc attribute changes")
Fixes: dc3f3d2474b8 ("x86/mm: Validate memory when changing the C-bit")
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/sev.h | 16 ++++++++--------
 arch/x86/kernel/sev.c      | 14 +++++++-------
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index ebc271bb6d8e..a0a58c4122ec 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -187,12 +187,12 @@ static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
 }
 void setup_ghcb(void);
 void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
-					 unsigned int npages);
+					 unsigned long npages);
 void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
-					unsigned int npages);
+					unsigned long npages);
 void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op);
-void snp_set_memory_shared(unsigned long vaddr, unsigned int npages);
-void snp_set_memory_private(unsigned long vaddr, unsigned int npages);
+void snp_set_memory_shared(unsigned long vaddr, unsigned long npages);
+void snp_set_memory_private(unsigned long vaddr, unsigned long npages);
 void snp_set_wakeup_secondary_cpu(void);
 bool snp_init(struct boot_params *bp);
 void __init __noreturn snp_abort(void);
@@ -207,12 +207,12 @@ static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
 static inline int rmpadjust(unsigned long vaddr, bool rmp_psize, unsigned long attrs) { return 0; }
 static inline void setup_ghcb(void) { }
 static inline void __init
-early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
+early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
 static inline void __init
-early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
+early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
 static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) { }
-static inline void snp_set_memory_shared(unsigned long vaddr, unsigned int npages) { }
-static inline void snp_set_memory_private(unsigned long vaddr, unsigned int npages) { }
+static inline void snp_set_memory_shared(unsigned long vaddr, unsigned long npages) { }
+static inline void snp_set_memory_private(unsigned long vaddr, unsigned long npages) { }
 static inline void snp_set_wakeup_secondary_cpu(void) { }
 static inline bool snp_init(struct boot_params *bp) { return false; }
 static inline void snp_abort(void) { }
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a428c62330d3..6b823f913c97 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -643,7 +643,7 @@ static u64 __init get_jump_table_addr(void)
 	return ret;
 }
 
-static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
+static void pvalidate_pages(unsigned long vaddr, unsigned long npages, bool validate)
 {
 	unsigned long vaddr_end;
 	int rc;
@@ -660,7 +660,7 @@ static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool valid
 	}
 }
 
-static void __init early_set_pages_state(unsigned long paddr, unsigned int npages, enum psc_op op)
+static void __init early_set_pages_state(unsigned long paddr, unsigned long npages, enum psc_op op)
 {
 	unsigned long paddr_end;
 	u64 val;
@@ -699,7 +699,7 @@ static void __init early_set_pages_state(unsigned long paddr, unsigned int npage
 }
 
 void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
-					 unsigned int npages)
+					 unsigned long npages)
 {
 	/*
 	 * This can be invoked in early boot while running identity mapped, so
@@ -721,7 +721,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
 }
 
 void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
-					unsigned int npages)
+					unsigned long npages)
 {
 	/*
 	 * This can be invoked in early boot while running identity mapped, so
@@ -877,7 +877,7 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
 }
 
-static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
+static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
 {
 	unsigned long vaddr_end, next_vaddr;
 	struct snp_psc_desc *desc;
@@ -902,7 +902,7 @@ static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
 	kfree(desc);
 }
 
-void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
+void snp_set_memory_shared(unsigned long vaddr, unsigned long npages)
 {
 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 		return;
@@ -912,7 +912,7 @@ void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED);
 }
 
-void snp_set_memory_private(unsigned long vaddr, unsigned int npages)
+void snp_set_memory_private(unsigned long vaddr, unsigned long npages)
 {
 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 		return;
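
For reference, the truncation being fixed can be reproduced with a minimal
standalone C sketch (not kernel code; it assumes an LP64 target where
unsigned int is 32 bits and unsigned long is 64 bits, and uses an arbitrary
example base address):

#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
        unsigned long vaddr = 0xffff888000000000UL; /* arbitrary example base */
        unsigned int  npages_int  = 0x100000;       /* 1M pages == 4GB */
        unsigned long npages_long = 0x100000;

        /* The shift is evaluated in 32-bit arithmetic and wraps to 0 ... */
        printf("unsigned int:  vaddr_end = %#lx\n", vaddr + (npages_int << PAGE_SHIFT));

        /* ... while the 64-bit shift produces the intended end address. */
        printf("unsigned long: vaddr_end = %#lx\n", vaddr + (npages_long << PAGE_SHIFT));

        return 0;
}

On such a target the first line prints the unchanged base address (the 4GB
worth of pages wrapped to 0), while the second prints base + 4GB, which is
the end address the fixed code computes.
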
From patchwork Thu Dec 8 15:29:10 2022
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 31418
X-Mailing-List: linux-kernel@vger.kernel.org
From: Tom Lendacky
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
    Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v6 2/5] x86/sev: Put PSC struct on the stack in prep for unaccepted memory support
Date: Thu, 8 Dec 2022 09:29:10 -0600
References: <20221207014933.8435-1-kirill.shutemov@linux.intel.com>

In advance of providing support for unaccepted memory, switch from using
kmalloc() for allocating the Page State Change (PSC) structure to using a
local variable that lives on the stack. This is needed to avoid a possible
recursive call into set_pages_state() if the kmalloc() call requires (more)
memory to be accepted, which would result in a hang.

The current size of the PSC struct is 2,032 bytes. To make the struct more
stack friendly, reduce the number of PSC entries from 253 down to 64,
resulting in a size of 520 bytes. This is a nice compromise on struct size
and total PSC requests while still allowing parallel PSC operations across
vCPUs.

If the reduction in PSC entries results in any kind of performance issue
(that is not seen at the moment), use of a larger static PSC struct, with
fallback to the smaller stack version, can be investigated.

For more background info on this decision, see the subthread in the Link:
tag below.

Signed-off-by: Tom Lendacky
Link: https://lore.kernel.org/lkml/658c455c40e8950cb046dd885dd19dc1c52d060a.1659103274.git.thomas.lendacky@amd.com
---
 arch/x86/include/asm/sev-common.h |  9 +++++++--
 arch/x86/kernel/sev.c             | 10 ++--------
 2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index b8357d6ecd47..8ddfdbe521d4 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -106,8 +106,13 @@ enum psc_op {
 #define GHCB_HV_FT_SNP			BIT_ULL(0)
 #define GHCB_HV_FT_SNP_AP_CREATION	BIT_ULL(1)
 
-/* SNP Page State Change NAE event */
-#define VMGEXIT_PSC_MAX_ENTRY		253
+/*
+ * SNP Page State Change NAE event
+ *   The VMGEXIT_PSC_MAX_ENTRY determines the size of the PSC structure, which
+ *   is a local stack variable in set_pages_state(). Do not increase this value
+ *   without evaluating the impact to stack usage.
+ */
+#define VMGEXIT_PSC_MAX_ENTRY		64
 
 struct psc_hdr {
 	u16 cur_entry;
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 6b823f913c97..f60733674731 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -880,11 +880,7 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
 {
 	unsigned long vaddr_end, next_vaddr;
-	struct snp_psc_desc *desc;
-
-	desc = kmalloc(sizeof(*desc), GFP_KERNEL_ACCOUNT);
-	if (!desc)
-		panic("SNP: failed to allocate memory for PSC descriptor\n");
+	struct snp_psc_desc desc;
 
 	vaddr = vaddr & PAGE_MASK;
 	vaddr_end = vaddr + (npages << PAGE_SHIFT);
@@ -894,12 +890,10 @@ static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
 		next_vaddr = min_t(unsigned long, vaddr_end,
 				   (VMGEXIT_PSC_MAX_ENTRY * PAGE_SIZE) + vaddr);
 
-		__set_pages_state(desc, vaddr, next_vaddr, op);
+		__set_pages_state(&desc, vaddr, next_vaddr, op);
 
 		vaddr = next_vaddr;
 	}
-
-	kfree(desc);
 }
 
 void snp_set_memory_shared(unsigned long vaddr, unsigned long npages)
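
The 2,032-byte and 520-byte figures follow directly from the PSC layout: an
8-byte header plus 8 bytes per entry. A standalone sketch of the arithmetic
(the structs mirror psc_hdr/psc_entry from arch/x86/include/asm/sev-common.h,
rewritten with stdint types; treat it as illustrative, not authoritative):

#include <stdint.h>
#include <stdio.h>

struct psc_hdr {                        /* 8 bytes */
        uint16_t cur_entry;
        uint16_t end_entry;
        uint32_t reserved;
};

struct psc_entry {                      /* 8 bytes (one 64-bit bitfield) */
        uint64_t cur_page  : 12,
                 gfn       : 40,
                 operation : 4,
                 pagesize  : 1,
                 reserved  : 7;
};

#define OLD_MAX_ENTRY 253
#define NEW_MAX_ENTRY 64

struct psc_desc_old { struct psc_hdr hdr; struct psc_entry entries[OLD_MAX_ENTRY]; };
struct psc_desc_new { struct psc_hdr hdr; struct psc_entry entries[NEW_MAX_ENTRY]; };

int main(void)
{
        printf("253 entries: %zu bytes\n", sizeof(struct psc_desc_old)); /* 8 + 253 * 8 = 2032 */
        printf(" 64 entries: %zu bytes\n", sizeof(struct psc_desc_new)); /* 8 +  64 * 8 =  520 */
        return 0;
}

Compiled with a typical x86-64 toolchain this prints 2032 and 520, matching
the numbers quoted in the commit message.
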
From patchwork Thu Dec 8 15:29:11 2022
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 31419
X-Mailing-List: linux-kernel@vger.kernel.org
From: Tom Lendacky
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
    Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v6 3/5] x86/sev: Allow for use of the early boot GHCB for PSC requests
Date: Thu, 8 Dec 2022 09:29:11 -0600
Message-ID: <27bb9ed1a64712ea85acc94450c1095f1a67b9a2.1670513353.git.thomas.lendacky@amd.com>
References: <20221207014933.8435-1-kirill.shutemov@linux.intel.com>

Using a GHCB for a page state change (as opposed to the MSR protocol) allows
multiple pages to be processed in a single request.

In preparation for early PSC requests in support of unaccepted memory, update
the invocation of vmgexit_psc() to be able to use the early boot GHCB and not
just the per-CPU GHCB structure.

In order to use the proper GHCB (early boot vs. per-CPU), set a flag that
indicates when the per-CPU GHCBs are available and registered. For APs, the
per-CPU GHCBs are created before they are started and registered upon
startup, so this flag can be used globally for the BSP and APs instead of
creating a per-CPU flag. This will allow for a significant reduction in the
number of MSR protocol page state change requests when accepting memory.

Signed-off-by: Tom Lendacky
---
 arch/x86/kernel/sev.c | 61 +++++++++++++++++++++++++++----------------
 1 file changed, 38 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index f60733674731..8f40f9377602 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -117,7 +117,19 @@ static DEFINE_PER_CPU(struct sev_es_save_area *, sev_vmsa);
 
 struct sev_config {
 	__u64 debug		: 1,
-	      __reserved	: 63;
+
+	      /*
+	       * A flag used by __set_pages_state() that indicates when the
+	       * per-CPU GHCB has been created and registered and thus can be
+	       * used by the BSP instead of the early boot GHCB.
+	       *
+	       * For APs, the per-CPU GHCB is created before they are started
+	       * and registered upon startup, so this flag can be used globally
+	       * for the BSP and APs.
+	       */
+	      ghcbs_initialized	: 1,
+
+	      __reserved	: 62;
 };
 
 static struct sev_config sev_cfg __read_mostly;
@@ -660,7 +672,7 @@ static void pvalidate_pages(unsigned long vaddr, unsigned long npages, bool vali
 	}
 }
 
-static void __init early_set_pages_state(unsigned long paddr, unsigned long npages, enum psc_op op)
+static void early_set_pages_state(unsigned long paddr, unsigned long npages, enum psc_op op)
 {
 	unsigned long paddr_end;
 	u64 val;
@@ -754,26 +766,13 @@ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op
 		WARN(1, "invalid memory op %d\n", op);
 }
 
-static int vmgexit_psc(struct snp_psc_desc *desc)
+static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
 {
 	int cur_entry, end_entry, ret = 0;
 	struct snp_psc_desc *data;
-	struct ghcb_state state;
 	struct es_em_ctxt ctxt;
-	unsigned long flags;
-	struct ghcb *ghcb;
 
-	/*
-	 * __sev_get_ghcb() needs to run with IRQs disabled because it is using
-	 * a per-CPU GHCB.
-	 */
-	local_irq_save(flags);
-
-	ghcb = __sev_get_ghcb(&state);
-	if (!ghcb) {
-		ret = 1;
-		goto out_unlock;
-	}
+	vc_ghcb_invalidate(ghcb);
 
 	/* Copy the input desc into GHCB shared buffer */
 	data = (struct snp_psc_desc *)ghcb->shared_buffer;
@@ -830,20 +829,18 @@ static int vmgexit_psc(struct snp_psc_desc *desc)
 	}
 
 out:
-	__sev_put_ghcb(&state);
-
-out_unlock:
-	local_irq_restore(flags);
-
 	return ret;
 }
 
 static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 			      unsigned long vaddr_end, int op)
 {
+	struct ghcb_state state;
 	struct psc_hdr *hdr;
 	struct psc_entry *e;
+	unsigned long flags;
 	unsigned long pfn;
+	struct ghcb *ghcb;
 	int i;
 
 	hdr = &data->hdr;
@@ -873,8 +870,20 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 		i++;
 	}
 
-	if (vmgexit_psc(data))
+	local_irq_save(flags);
+
+	if (sev_cfg.ghcbs_initialized)
+		ghcb = __sev_get_ghcb(&state);
+	else
+		ghcb = boot_ghcb;
+
+	if (!ghcb || vmgexit_psc(ghcb, data))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
+	if (sev_cfg.ghcbs_initialized)
+		__sev_put_ghcb(&state);
+
+	local_irq_restore(flags);
 }
 
 static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
@@ -882,6 +891,10 @@ static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
 	unsigned long vaddr_end, next_vaddr;
 	struct snp_psc_desc desc;
 
+	/* Use the MSR protocol when a GHCB is not available. */
+	if (!boot_ghcb)
+		return early_set_pages_state(__pa(vaddr), npages, op);
+
 	vaddr = vaddr & PAGE_MASK;
 	vaddr_end = vaddr + (npages << PAGE_SHIFT);
 
@@ -1259,6 +1272,8 @@ void setup_ghcb(void)
 		if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 			snp_register_per_cpu_ghcb();
 
+		sev_cfg.ghcbs_initialized = true;
+
 		return;
 	}
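
For readers skimming the diff, the GHCB selection order this patch puts in
place can be condensed into a small stubbed sketch (plain C, not kernel code;
the two booleans stand in for the kernel's boot_ghcb pointer and the new
sev_cfg.ghcbs_initialized flag):

#include <stdbool.h>
#include <stdio.h>

static bool boot_ghcb;          /* stands in for the boot_ghcb pointer */
static bool ghcbs_initialized;  /* stands in for sev_cfg.ghcbs_initialized */

static const char *pick_psc_path(void)
{
        if (!boot_ghcb)
                return "GHCB MSR protocol (one 4K page per exit)";
        if (ghcbs_initialized)
                return "per-CPU GHCB (__sev_get_ghcb/__sev_put_ghcb)";
        return "early boot GHCB";
}

int main(void)
{
        printf("early boot:          %s\n", pick_psc_path());
        boot_ghcb = true;
        printf("boot GHCB set up:    %s\n", pick_psc_path());
        ghcbs_initialized = true;
        printf("per-CPU GHCBs ready: %s\n", pick_psc_path());
        return 0;
}

Once setup_ghcb() sets the flag, all further PSC requests go through the
per-CPU GHCB path.
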
From patchwork Thu Dec 8 15:29:12 2022
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 31420
X-Mailing-List: linux-kernel@vger.kernel.org
From: Tom Lendacky
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
    Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v6 4/5] x86/sev: Use large PSC requests if applicable
Date: Thu, 8 Dec 2022 09:29:12 -0600
Message-ID: <926e256b9159293f162d9068c8fd327e4819b76f.1670513353.git.thomas.lendacky@amd.com>
References: <20221207014933.8435-1-kirill.shutemov@linux.intel.com>

In advance of providing support for unaccepted memory, issue 2M Page State
Change (PSC) requests when the address range allows for it. By using a 2M
page size, more PSC operations can be handled in a single request to the
hypervisor. The hypervisor will determine if it can accommodate the larger
request by checking the mapping in the nested page table. If mapped as a
large page, then the 2M page request can be performed, otherwise the 2M page
request will be broken down into 512 4K page requests. This is still more
efficient than having the guest perform multiple PSC requests in order to
process the 512 4K pages.

In conjunction with the 2M PSC requests, attempt to perform the associated
PVALIDATE instruction on the page using the 2M page size. If PVALIDATE fails
with a size mismatch, then fall back to validating 512 4K pages.

To do this, page validation is modified to work with the PSC structure and
not just a virtual address range.
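
As a rough, illustrative sense of scale (simple arithmetic, not taken from
the patch itself): each PSC request carries VMGEXIT_PSC_MAX_ENTRY (64)
entries, so the memory covered per VMGEXIT grows from 256KB with 4K entries
to 128MB with 2M entries, and a failed 2M PVALIDATE is retried as 512 4K
validations:

#include <stdio.h>

#define VMGEXIT_PSC_MAX_ENTRY 64
#define SZ_4K (4UL * 1024)
#define SZ_2M (2UL * 1024 * 1024)

int main(void)
{
        printf("4K entries: one PSC request covers %lu KB\n",
               VMGEXIT_PSC_MAX_ENTRY * SZ_4K / 1024);           /* 256 KB */
        printf("2M entries: one PSC request covers %lu MB\n",
               VMGEXIT_PSC_MAX_ENTRY * SZ_2M / (1024 * 1024));  /* 128 MB */
        printf("2M -> 4K fallback: %lu PVALIDATE calls\n",
               SZ_2M / SZ_4K);                                  /* 512 */
        return 0;
}
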
Signed-off-by: Tom Lendacky --- arch/x86/include/asm/sev.h | 4 ++ arch/x86/kernel/sev.c | 125 ++++++++++++++++++++++++------------- 2 files changed, 84 insertions(+), 45 deletions(-) diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h index a0a58c4122ec..91b4f712ef18 100644 --- a/arch/x86/include/asm/sev.h +++ b/arch/x86/include/asm/sev.h @@ -78,11 +78,15 @@ extern void vc_no_ghcb(void); extern void vc_boot_ghcb(void); extern bool handle_vc_boot_ghcb(struct pt_regs *regs); +/* PVALIDATE return codes */ +#define PVALIDATE_FAIL_SIZEMISMATCH 6 + /* Software defined (when rFlags.CF = 1) */ #define PVALIDATE_FAIL_NOUPDATE 255 /* RMP page size */ #define RMP_PG_SIZE_4K 0 +#define RMP_PG_SIZE_2M 1 #define RMPADJUST_VMSA_PAGE_BIT BIT(16) diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 8f40f9377602..a5b0a75d9e56 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -655,32 +655,58 @@ static u64 __init get_jump_table_addr(void) return ret; } -static void pvalidate_pages(unsigned long vaddr, unsigned long npages, bool validate) +static void pvalidate_pages(struct snp_psc_desc *desc) { - unsigned long vaddr_end; + struct psc_entry *e; + unsigned long vaddr; + unsigned int size; + unsigned int i; + bool validate; int rc; - vaddr = vaddr & PAGE_MASK; - vaddr_end = vaddr + (npages << PAGE_SHIFT); + for (i = 0; i <= desc->hdr.end_entry; i++) { + e = &desc->entries[i]; + + vaddr = (unsigned long)pfn_to_kaddr(e->gfn); + size = e->pagesize ? RMP_PG_SIZE_2M : RMP_PG_SIZE_4K; + validate = (e->operation == SNP_PAGE_STATE_PRIVATE) ? true : false; + + rc = pvalidate(vaddr, size, validate); + if (rc == PVALIDATE_FAIL_SIZEMISMATCH && size == RMP_PG_SIZE_2M) { + unsigned long vaddr_end = vaddr + PMD_SIZE; + + for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) { + rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate); + if (rc) + break; + } + } - while (vaddr < vaddr_end) { - rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate); if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc)) sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE); - - vaddr = vaddr + PAGE_SIZE; } } -static void early_set_pages_state(unsigned long paddr, unsigned long npages, enum psc_op op) +static void early_set_pages_state(unsigned long vaddr, unsigned long paddr, + unsigned long npages, enum psc_op op) { unsigned long paddr_end; u64 val; + int ret; + + vaddr = vaddr & PAGE_MASK; paddr = paddr & PAGE_MASK; paddr_end = paddr + (npages << PAGE_SHIFT); while (paddr < paddr_end) { + if (op == SNP_PAGE_STATE_SHARED) { + /* Page validation must be rescinded before changing to shared */ + ret = pvalidate(vaddr, RMP_PG_SIZE_4K, false); + if (WARN(ret, "Failed to validate address 0x%lx ret %d", paddr, ret)) + goto e_term; + } + /* * Use the MSR protocol because this function can be called before * the GHCB is established. @@ -701,7 +727,15 @@ static void early_set_pages_state(unsigned long paddr, unsigned long npages, enu paddr, GHCB_MSR_PSC_RESP_VAL(val))) goto e_term; - paddr = paddr + PAGE_SIZE; + if (op == SNP_PAGE_STATE_PRIVATE) { + /* Page validation must be performed after changing to private */ + ret = pvalidate(vaddr, RMP_PG_SIZE_4K, true); + if (WARN(ret, "Failed to validate address 0x%lx ret %d", paddr, ret)) + goto e_term; + } + + vaddr += PAGE_SIZE; + paddr += PAGE_SIZE; } return; @@ -726,10 +760,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd * Ask the hypervisor to mark the memory pages as private in the RMP * table. 
 	 */
-	early_set_pages_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
-
-	/* Validate the memory pages after they've been added in the RMP table. */
-	pvalidate_pages(vaddr, npages, true);
+	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_PRIVATE);
 }
 
 void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
@@ -744,11 +775,8 @@ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr
 	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
 		return;
 
-	/* Invalidate the memory pages before they are marked shared in the RMP table. */
-	pvalidate_pages(vaddr, npages, false);
-
-	/* Ask hypervisor to mark the memory pages shared in the RMP table. */
-	early_set_pages_state(paddr, npages, SNP_PAGE_STATE_SHARED);
+	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_SHARED);
 }
 
 void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op)
@@ -832,10 +860,11 @@ static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
 	return ret;
 }
 
-static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
-			      unsigned long vaddr_end, int op)
+static unsigned long __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
+				       unsigned long vaddr_end, int op)
 {
 	struct ghcb_state state;
+	bool use_large_entry;
 	struct psc_hdr *hdr;
 	struct psc_entry *e;
 	unsigned long flags;
@@ -849,27 +878,37 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 	memset(data, 0, sizeof(*data));
 	i = 0;
 
-	while (vaddr < vaddr_end) {
-		if (is_vmalloc_addr((void *)vaddr))
+	while (vaddr < vaddr_end && i < ARRAY_SIZE(data->entries)) {
+		hdr->end_entry = i;
+
+		if (is_vmalloc_addr((void *)vaddr)) {
 			pfn = vmalloc_to_pfn((void *)vaddr);
-		else
+			use_large_entry = false;
+		} else {
 			pfn = __pa(vaddr) >> PAGE_SHIFT;
+			use_large_entry = true;
+		}
 
 		e->gfn = pfn;
 		e->operation = op;
-		hdr->end_entry = i;
 
-		/*
-		 * Current SNP implementation doesn't keep track of the RMP page
-		 * size so use 4K for simplicity.
-		 */
-		e->pagesize = RMP_PG_SIZE_4K;
+		if (use_large_entry && IS_ALIGNED(vaddr, PMD_SIZE) &&
+		    (vaddr_end - vaddr) >= PMD_SIZE) {
+			e->pagesize = RMP_PG_SIZE_2M;
+			vaddr += PMD_SIZE;
+		} else {
+			e->pagesize = RMP_PG_SIZE_4K;
+			vaddr += PAGE_SIZE;
+		}
 
-		vaddr = vaddr + PAGE_SIZE;
 		e++;
 		i++;
 	}
 
+	/* Page validation must be rescinded before changing to shared */
+	if (op == SNP_PAGE_STATE_SHARED)
+		pvalidate_pages(data);
+
 	local_irq_save(flags);
 
 	if (sev_cfg.ghcbs_initialized)
@@ -877,6 +916,7 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 	else
 		ghcb = boot_ghcb;
 
+	/* Invoke the hypervisor to perform the page state changes */
 	if (!ghcb || vmgexit_psc(ghcb, data))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
 
@@ -884,29 +924,28 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 	__sev_put_ghcb(&state);
 
 	local_irq_restore(flags);
+
+	/* Page validation must be performed after changing to private */
+	if (op == SNP_PAGE_STATE_PRIVATE)
+		pvalidate_pages(data);
+
+	return vaddr;
 }
 
 static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
 {
-	unsigned long vaddr_end, next_vaddr;
 	struct snp_psc_desc desc;
+	unsigned long vaddr_end;
 
 	/* Use the MSR protocol when a GHCB is not available.
 	 */
 	if (!boot_ghcb)
-		return early_set_pages_state(__pa(vaddr), npages, op);
+		return early_set_pages_state(vaddr, __pa(vaddr), npages, op);
 
 	vaddr = vaddr & PAGE_MASK;
 	vaddr_end = vaddr + (npages << PAGE_SHIFT);
 
-	while (vaddr < vaddr_end) {
-		/* Calculate the last vaddr that fits in one struct snp_psc_desc. */
-		next_vaddr = min_t(unsigned long, vaddr_end,
-				   (VMGEXIT_PSC_MAX_ENTRY * PAGE_SIZE) + vaddr);
-
-		__set_pages_state(&desc, vaddr, next_vaddr, op);
-
-		vaddr = next_vaddr;
-	}
+	while (vaddr < vaddr_end)
+		vaddr = __set_pages_state(&desc, vaddr, vaddr_end, op);
 }
 
 void snp_set_memory_shared(unsigned long vaddr, unsigned long npages)
@@ -914,8 +953,6 @@ void snp_set_memory_shared(unsigned long vaddr, unsigned long npages)
 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 		return;
 
-	pvalidate_pages(vaddr, npages, false);
-
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED);
 }
 
@@ -925,8 +962,6 @@ void snp_set_memory_private(unsigned long vaddr, unsigned long npages)
 		return;
 
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
-
-	pvalidate_pages(vaddr, npages, true);
 }
 
 static int snp_set_vmsa(void *va, bool vmsa)

From patchwork Thu Dec 8 15:29:13 2022
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 31422
From: Tom Lendacky
CC: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
 Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v6 5/5] x86/sev: Add SNP-specific unaccepted memory support
Date: Thu, 8 Dec 2022 09:29:13 -0600
X-Mailer: git-send-email 2.38.1
References: <20221207014933.8435-1-kirill.shutemov@linux.intel.com>
Add SNP-specific hooks to the unaccepted memory support in the boot path (__accept_memory()) and the core kernel (accept_memory()) in order to support booting SNP guests when unaccepted memory is present. Without this support, SNP guests will fail to boot and/or panic() when unaccepted memory is present in the EFI memory map.

The process of accepting memory under SNP involves invoking the hypervisor to perform a page state change to convert the page to private memory and then issuing a PVALIDATE instruction to accept the page.

Since the boot path and the core kernel paths perform similar operations, move the pvalidate_pages() and vmgexit_psc() functions into sev-shared.c to avoid code duplication.

Create the new header file arch/x86/boot/compressed/sev.h because adding the function declaration to any of the existing SEV-related header files pulls in too many other header files, causing the build to fail.

Signed-off-by: Tom Lendacky
---
 arch/x86/Kconfig                |   1 +
 arch/x86/boot/compressed/mem.c  |   3 +
 arch/x86/boot/compressed/sev.c  |  54 ++++++++++++++-
 arch/x86/boot/compressed/sev.h  |  23 +++++++
 arch/x86/include/asm/sev.h      |   3 +
 arch/x86/kernel/sev-shared.c    | 103 +++++++++++++++++++++++++++++
 arch/x86/kernel/sev.c           | 112 ++++----------------------------
 arch/x86/mm/unaccepted_memory.c |   4 ++
 8 files changed, 204 insertions(+), 99 deletions(-)
 create mode 100644 arch/x86/boot/compressed/sev.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d88f61940aa7..0704d4795919 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1549,6 +1549,7 @@ config AMD_MEM_ENCRYPT
 	select INSTRUCTION_DECODER
 	select ARCH_HAS_CC_PLATFORM
 	select X86_MEM_ENCRYPT
+	select UNACCEPTED_MEMORY
 	help
 	  Say yes to enable support for the encryption of system memory.
 	  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c
index 23a84c46aa9b..585916b34a9c 100644
--- a/arch/x86/boot/compressed/mem.c
+++ b/arch/x86/boot/compressed/mem.c
@@ -6,6 +6,7 @@
 #include "find.h"
 #include "math.h"
 #include "tdx.h"
+#include "sev.h"
 #include
 
 #define PMD_SHIFT	21
@@ -45,6 +46,8 @@ static inline void __accept_memory(phys_addr_t start, phys_addr_t end)
 	/* Platform-specific memory-acceptance call goes here */
 	if (early_is_tdx_guest())
 		tdx_accept_memory(start, end);
+	else if (sev_snp_enabled())
+		snp_accept_memory(start, end);
 	else
 		error("Cannot accept memory: unknown platform\n");
 }
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index e4d3a2da2eb9..8af1ccca4ad3 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -115,7 +115,7 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 /* Include code for early handlers */
 #include "../../kernel/sev-shared.c"
 
-static inline bool sev_snp_enabled(void)
+bool sev_snp_enabled(void)
 {
 	return sev_status & MSR_AMD64_SEV_SNP_ENABLED;
 }
@@ -181,6 +181,58 @@ static bool early_setup_ghcb(void)
 	return true;
 }
 
+static phys_addr_t __snp_accept_memory(struct snp_psc_desc *desc,
+				       phys_addr_t pa, phys_addr_t pa_end)
+{
+	struct psc_hdr *hdr;
+	struct psc_entry *e;
+	unsigned int i;
+
+	hdr = &desc->hdr;
+	memset(hdr, 0, sizeof(*hdr));
+
+	e = desc->entries;
+
+	i = 0;
+	while (pa < pa_end && i < VMGEXIT_PSC_MAX_ENTRY) {
+		hdr->end_entry = i;
+
+		e->gfn = pa >> PAGE_SHIFT;
+		e->operation = SNP_PAGE_STATE_PRIVATE;
+		if (IS_ALIGNED(pa, PMD_SIZE) && (pa_end - pa) >= PMD_SIZE) {
+			e->pagesize = RMP_PG_SIZE_2M;
+			pa += PMD_SIZE;
+		} else {
+			e->pagesize = RMP_PG_SIZE_4K;
+			pa += PAGE_SIZE;
+		}
+
+		e++;
+		i++;
+	}
+
+	if (vmgexit_psc(boot_ghcb, desc))
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
+	pvalidate_pages(desc);
+
+	return pa;
+}
+
+void snp_accept_memory(phys_addr_t start, phys_addr_t end)
+{
+	struct snp_psc_desc desc = {};
+	unsigned int i;
+	phys_addr_t pa;
+
+	if (!boot_ghcb && !early_setup_ghcb())
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
+	pa = start;
+	while (pa < end)
+		pa = __snp_accept_memory(&desc, pa, end);
+}
+
 void sev_es_shutdown_ghcb(void)
 {
 	if (!boot_ghcb)
diff --git a/arch/x86/boot/compressed/sev.h b/arch/x86/boot/compressed/sev.h
new file mode 100644
index 000000000000..fc725a981b09
--- /dev/null
+++ b/arch/x86/boot/compressed/sev.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AMD SEV header for early boot related functions.
+ *
+ * Author: Tom Lendacky
+ */
+
+#ifndef BOOT_COMPRESSED_SEV_H
+#define BOOT_COMPRESSED_SEV_H
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
+bool sev_snp_enabled(void);
+void snp_accept_memory(phys_addr_t start, phys_addr_t end);
+
+#else
+
+static inline bool sev_snp_enabled(void) { return false; }
+static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
+
+#endif
+
+#endif
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 91b4f712ef18..67e81141a873 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -201,6 +201,7 @@ void snp_set_wakeup_secondary_cpu(void);
 bool snp_init(struct boot_params *bp);
 void __init __noreturn snp_abort(void);
 int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, unsigned long *fw_err);
+void snp_accept_memory(phys_addr_t start, phys_addr_t end);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -225,6 +226,8 @@ static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *in
 {
 	return -ENOTTY;
 }
+
+static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
 #endif
 
 #endif
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 3a5b0c9c4fcc..be312db48a49 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -12,6 +12,9 @@
 #ifndef __BOOT_COMPRESSED
 #define error(v)	pr_err(v)
 #define has_cpuflag(f)	boot_cpu_has(f)
+#else
+#undef WARN
+#define WARN(condition, format...) (!!(condition))
 #endif
 
 /* I/O parameters for CPUID-related helpers */
@@ -991,3 +994,103 @@ static void __init setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
 			cpuid_ext_range_max = fn->eax;
 	}
 }
+
+static void pvalidate_pages(struct snp_psc_desc *desc)
+{
+	struct psc_entry *e;
+	unsigned long vaddr;
+	unsigned int size;
+	unsigned int i;
+	bool validate;
+	int rc;
+
+	for (i = 0; i <= desc->hdr.end_entry; i++) {
+		e = &desc->entries[i];
+
+		vaddr = (unsigned long)pfn_to_kaddr(e->gfn);
+		size = e->pagesize ? RMP_PG_SIZE_2M : RMP_PG_SIZE_4K;
+		validate = (e->operation == SNP_PAGE_STATE_PRIVATE) ? true : false;
+
+		rc = pvalidate(vaddr, size, validate);
+		if (rc == PVALIDATE_FAIL_SIZEMISMATCH && size == RMP_PG_SIZE_2M) {
+			unsigned long vaddr_end = vaddr + PMD_SIZE;
+
+			for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) {
+				rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
+				if (rc)
+					break;
+			}
+		}
+
+		if (rc) {
+			WARN(1, "Failed to validate address 0x%lx ret %d", vaddr, rc);
+			sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
+		}
+	}
+}
+
+static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
+{
+	int cur_entry, end_entry, ret = 0;
+	struct snp_psc_desc *data;
+	struct es_em_ctxt ctxt;
+
+	vc_ghcb_invalidate(ghcb);
+
+	/* Copy the input desc into GHCB shared buffer */
+	data = (struct snp_psc_desc *)ghcb->shared_buffer;
+	memcpy(ghcb->shared_buffer, desc, min_t(int, GHCB_SHARED_BUF_SIZE, sizeof(*desc)));
+
+	/*
+	 * As per the GHCB specification, the hypervisor can resume the guest
+	 * before processing all the entries. Check whether all the entries
+	 * are processed. If not, then keep retrying. Note, the hypervisor
+	 * will update the data memory directly to indicate the status, so
+	 * reference the data->hdr everywhere.
+	 *
+	 * The strategy here is to wait for the hypervisor to change the page
+	 * state in the RMP table before guest accesses the memory pages. If the
If the + * page state change was not successful, then later memory access will + * result in a crash. + */ + cur_entry = data->hdr.cur_entry; + end_entry = data->hdr.end_entry; + + while (data->hdr.cur_entry <= data->hdr.end_entry) { + ghcb_set_sw_scratch(ghcb, (u64)__pa(data)); + + /* This will advance the shared buffer data points to. */ + ret = sev_es_ghcb_hv_call(ghcb, &ctxt, SVM_VMGEXIT_PSC, 0, 0); + + /* + * Page State Change VMGEXIT can pass error code through + * exit_info_2. + */ + if (WARN(ret || ghcb->save.sw_exit_info_2, + "SNP: PSC failed ret=%d exit_info_2=%llx\n", + ret, ghcb->save.sw_exit_info_2)) { + ret = 1; + goto out; + } + + /* Verify that reserved bit is not set */ + if (WARN(data->hdr.reserved, "Reserved bit is set in the PSC header\n")) { + ret = 1; + goto out; + } + + /* + * Sanity check that entry processing is not going backwards. + * This will happen only if hypervisor is tricking us. + */ + if (WARN(data->hdr.end_entry > end_entry || cur_entry > data->hdr.cur_entry, +"SNP: PSC processing going backward, end_entry %d (got %d) cur_entry %d (got %d)\n", + end_entry, data->hdr.end_entry, cur_entry, data->hdr.cur_entry)) { + ret = 1; + goto out; + } + } + +out: + return ret; +} diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index a5b0a75d9e56..aa1032695355 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -655,38 +655,6 @@ static u64 __init get_jump_table_addr(void) return ret; } -static void pvalidate_pages(struct snp_psc_desc *desc) -{ - struct psc_entry *e; - unsigned long vaddr; - unsigned int size; - unsigned int i; - bool validate; - int rc; - - for (i = 0; i <= desc->hdr.end_entry; i++) { - e = &desc->entries[i]; - - vaddr = (unsigned long)pfn_to_kaddr(e->gfn); - size = e->pagesize ? RMP_PG_SIZE_2M : RMP_PG_SIZE_4K; - validate = (e->operation == SNP_PAGE_STATE_PRIVATE) ? true : false; - - rc = pvalidate(vaddr, size, validate); - if (rc == PVALIDATE_FAIL_SIZEMISMATCH && size == RMP_PG_SIZE_2M) { - unsigned long vaddr_end = vaddr + PMD_SIZE; - - for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) { - rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate); - if (rc) - break; - } - } - - if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc)) - sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE); - } -} - static void early_set_pages_state(unsigned long vaddr, unsigned long paddr, unsigned long npages, enum psc_op op) { @@ -794,72 +762,6 @@ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op WARN(1, "invalid memory op %d\n", op); } -static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc) -{ - int cur_entry, end_entry, ret = 0; - struct snp_psc_desc *data; - struct es_em_ctxt ctxt; - - vc_ghcb_invalidate(ghcb); - - /* Copy the input desc into GHCB shared buffer */ - data = (struct snp_psc_desc *)ghcb->shared_buffer; - memcpy(ghcb->shared_buffer, desc, min_t(int, GHCB_SHARED_BUF_SIZE, sizeof(*desc))); - - /* - * As per the GHCB specification, the hypervisor can resume the guest - * before processing all the entries. Check whether all the entries - * are processed. If not, then keep retrying. Note, the hypervisor - * will update the data memory directly to indicate the status, so - * reference the data->hdr everywhere. - * - * The strategy here is to wait for the hypervisor to change the page - * state in the RMP table before guest accesses the memory pages. If the - * page state change was not successful, then later memory access will - * result in a crash. 
-	 */
-	cur_entry = data->hdr.cur_entry;
-	end_entry = data->hdr.end_entry;
-
-	while (data->hdr.cur_entry <= data->hdr.end_entry) {
-		ghcb_set_sw_scratch(ghcb, (u64)__pa(data));
-
-		/* This will advance the shared buffer data points to. */
-		ret = sev_es_ghcb_hv_call(ghcb, &ctxt, SVM_VMGEXIT_PSC, 0, 0);
-
-		/*
-		 * Page State Change VMGEXIT can pass error code through
-		 * exit_info_2.
-		 */
-		if (WARN(ret || ghcb->save.sw_exit_info_2,
-			 "SNP: PSC failed ret=%d exit_info_2=%llx\n",
-			 ret, ghcb->save.sw_exit_info_2)) {
-			ret = 1;
-			goto out;
-		}
-
-		/* Verify that reserved bit is not set */
-		if (WARN(data->hdr.reserved, "Reserved bit is set in the PSC header\n")) {
-			ret = 1;
-			goto out;
-		}
-
-		/*
-		 * Sanity check that entry processing is not going backwards.
-		 * This will happen only if hypervisor is tricking us.
-		 */
-		if (WARN(data->hdr.end_entry > end_entry || cur_entry > data->hdr.cur_entry,
-"SNP: PSC processing going backward, end_entry %d (got %d) cur_entry %d (got %d)\n",
-			 end_entry, data->hdr.end_entry, cur_entry, data->hdr.cur_entry)) {
-			ret = 1;
-			goto out;
-		}
-	}
-
-out:
-	return ret;
-}
-
 static unsigned long __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 				       unsigned long vaddr_end, int op)
 {
@@ -964,6 +866,20 @@ void snp_set_memory_private(unsigned long vaddr, unsigned long npages)
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
 }
 
+void snp_accept_memory(phys_addr_t start, phys_addr_t end)
+{
+	unsigned long vaddr;
+	unsigned int npages;
+
+	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+		return;
+
+	vaddr = (unsigned long)__va(start);
+	npages = (end - start) >> PAGE_SHIFT;
+
+	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
+}
+
 static int snp_set_vmsa(void *va, bool vmsa)
 {
 	u64 attrs;
diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c
index 45706c684ca5..dc5cd152c43b 100644
--- a/arch/x86/mm/unaccepted_memory.c
+++ b/arch/x86/mm/unaccepted_memory.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 
 /* Protects unaccepted memory bitmap */
 static DEFINE_SPINLOCK(unaccepted_memory_lock);
@@ -66,6 +67,9 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 		if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
 			tdx_accept_memory(range_start * PMD_SIZE,
 					  range_end * PMD_SIZE);
+		} else if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
+			snp_accept_memory(range_start * PMD_SIZE,
+					  range_end * PMD_SIZE);
 		} else {
 			panic("Cannot accept memory: unknown platform\n");
 		}
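As a closing illustration, here is a simplified sketch (kernel-style C, not part of the patch) of how runtime acceptance reaches the new SNP hook. It assumes, as in the accept_memory() hunk above, that the unaccepted-memory bitmap tracks PMD_SIZE (2M) units; the helper name accept_bitmap_range() is made up for this sketch, while the platform checks and hooks are the ones the patch uses.

/*
 * Illustrative sketch only (not part of the patch): a set range in the
 * unaccepted-memory bitmap is converted back to a physical address range
 * and handed to the platform-specific acceptance hook.
 */
static void accept_bitmap_range(unsigned long range_start, unsigned long range_end)
{
	phys_addr_t start = (phys_addr_t)range_start * PMD_SIZE;
	phys_addr_t end   = (phys_addr_t)range_end * PMD_SIZE;

	if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
		tdx_accept_memory(start, end);
	else if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
		snp_accept_memory(start, end);
	else
		panic("Cannot accept memory: unknown platform\n");
}

For SNP, snp_accept_memory() then converts the physical range to a virtual range and calls set_pages_state(..., SNP_PAGE_STATE_PRIVATE), which performs the page state change to private followed by PVALIDATE, as shown in the patch above.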