Message ID | 20231030063652.68675-10-nikunj@amd.com |
---|---|
State | New |
Headers | From: Nikunj A Dadhania <nikunj@amd.com>
To: linux-kernel@vger.kernel.org, thomas.lendacky@amd.com, x86@kernel.org, kvm@vger.kernel.org
Cc: bp@alien8.de, mingo@redhat.com, tglx@linutronix.de, dave.hansen@linux.intel.com, dionnaglaze@google.com, pgonda@google.com, seanjc@google.com, pbonzini@redhat.com, nikunj@amd.com
Subject: [PATCH v5 09/14] x86/sev: Add Secure TSC support for SNP guests
Date: Mon, 30 Oct 2023 12:06:47 +0530
Message-ID: <20231030063652.68675-10-nikunj@amd.com>
In-Reply-To: <20231030063652.68675-1-nikunj@amd.com>
References: <20231030063652.68675-1-nikunj@amd.com>
(full delivery, ARC/DKIM and antispam headers omitted) |
Series | Add Secure TSC support for SNP guests |
Commit Message
Nikunj A. Dadhania
Oct. 30, 2023, 6:36 a.m. UTC
Add support for Secure TSC in SNP-enabled guests. Secure TSC allows
guests to securely use the RDTSC/RDTSCP instructions, because the
parameters used cannot be changed by the hypervisor once the guest is
launched.

During boot-up of the secondary CPUs, SecureTSC-enabled guests need to
query TSC info from the AMD Security Processor. This communication
channel is encrypted between the AMD Security Processor and the guest;
the hypervisor is only the conduit that delivers the guest messages to
the AMD Security Processor. Each message is protected with an AEAD
(AES-256 GCM). Use the minimal AES GCM library to encrypt/decrypt SNP
guest messages when communicating with the PSP.
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/coco/core.c | 3 ++
arch/x86/include/asm/sev-guest.h | 18 +++++++
arch/x86/include/asm/sev.h | 2 +
arch/x86/include/asm/svm.h | 6 ++-
arch/x86/kernel/sev.c | 82 ++++++++++++++++++++++++++++++++
arch/x86/mm/mem_encrypt_amd.c | 6 +++
include/linux/cc_platform.h | 8 ++++
7 files changed, 123 insertions(+), 2 deletions(-)
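For readers skimming the thread, the core of the change is a single boot-time TSC_INFO exchange with the PSP. Below is a condensed sketch of that flow, using only the structures and helpers this patch introduces (error handling, locking and buffer scrubbing are elided; see the full diff further down for the actual code):

static struct snp_guest_dev tsc_snp_dev __initdata;

static int __init snp_get_tsc_info(void)
{
	/* Intermediate buffer must also have room for the AES-GCM authtag. */
	static u8 buf[SNP_TSC_INFO_REQ_SZ + AUTHTAG_LEN];
	struct snp_guest_request_ioctl rio = {};
	struct snp_tsc_info_resp tsc_resp = {};
	struct snp_tsc_info_req tsc_req = {};	/* must stay zero filled */
	struct snp_guest_req req = {};
	int rc;

	/* Bind to VMPCK0 and set up the encrypted PSP message channel. */
	if (!snp_assign_vmpck(&tsc_snp_dev, 0) ||
	    snp_setup_psp_messaging(&tsc_snp_dev))
		return -EINVAL;

	req.msg_version = MSG_HDR_VER;
	req.msg_type    = SNP_MSG_TSC_INFO_REQ;
	req.vmpck_id    = tsc_snp_dev.vmpck_id;
	req.req_buf     = &tsc_req;
	req.req_sz      = sizeof(tsc_req);
	req.resp_buf    = buf;
	req.resp_sz     = sizeof(tsc_resp) + AUTHTAG_LEN;
	req.exit_code   = SVM_VMGEXIT_GUEST_REQUEST;

	/* AES-256 GCM protected round trip to the AMD Security Processor. */
	rc = snp_send_guest_request(&tsc_snp_dev, &req, &rio);
	if (!rc)
		memcpy(&tsc_resp, buf, sizeof(tsc_resp));

	return rc;
}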
Comments
On Sun, Oct 29, 2023 at 11:38 PM Nikunj A Dadhania <nikunj@amd.com> wrote: > > Add support for Secure TSC in SNP enabled guests. Secure TSC allows > guest to securely use RDTSC/RDTSCP instructions as the parameters > being used cannot be changed by hypervisor once the guest is launched. > > During the boot-up of the secondary cpus, SecureTSC enabled guests > need to query TSC info from AMD Security Processor. This communication > channel is encrypted between the AMD Security Processor and the guest, > the hypervisor is just the conduit to deliver the guest messages to > the AMD Security Processor. Each message is protected with an > AEAD (AES-256 GCM). Use minimal AES GCM library to encrypt/decrypt SNP > Guest messages to communicate with the PSP. > > Signed-off-by: Nikunj A Dadhania <nikunj@amd.com> > --- > arch/x86/coco/core.c | 3 ++ > arch/x86/include/asm/sev-guest.h | 18 +++++++ > arch/x86/include/asm/sev.h | 2 + > arch/x86/include/asm/svm.h | 6 ++- > arch/x86/kernel/sev.c | 82 ++++++++++++++++++++++++++++++++ > arch/x86/mm/mem_encrypt_amd.c | 6 +++ > include/linux/cc_platform.h | 8 ++++ > 7 files changed, 123 insertions(+), 2 deletions(-) > > diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c > index eeec9986570e..5d5d4d03c543 100644 > --- a/arch/x86/coco/core.c > +++ b/arch/x86/coco/core.c > @@ -89,6 +89,9 @@ static bool noinstr amd_cc_platform_has(enum cc_attr attr) > case CC_ATTR_GUEST_SEV_SNP: > return sev_status & MSR_AMD64_SEV_SNP_ENABLED; > > + case CC_ATTR_GUEST_SECURE_TSC: > + return sev_status & MSR_AMD64_SNP_SECURE_TSC; > + > default: > return false; > } > diff --git a/arch/x86/include/asm/sev-guest.h b/arch/x86/include/asm/sev-guest.h > index e6f94208173d..58739173eba9 100644 > --- a/arch/x86/include/asm/sev-guest.h > +++ b/arch/x86/include/asm/sev-guest.h > @@ -39,6 +39,8 @@ enum msg_type { > SNP_MSG_ABSORB_RSP, > SNP_MSG_VMRK_REQ, > SNP_MSG_VMRK_RSP, > + SNP_MSG_TSC_INFO_REQ = 17, > + SNP_MSG_TSC_INFO_RSP, > > SNP_MSG_TYPE_MAX > }; > @@ -111,6 +113,22 @@ struct snp_guest_req { > u8 msg_type; > }; > > +struct snp_tsc_info_req { > +#define SNP_TSC_INFO_REQ_SZ 128 > + /* Must be zero filled */ > + u8 rsvd[SNP_TSC_INFO_REQ_SZ]; > +} __packed; > + > +struct snp_tsc_info_resp { > + /* Status of TSC_INFO message */ > + u32 status; > + u32 rsvd1; > + u64 tsc_scale; > + u64 tsc_offset; > + u32 tsc_factor; > + u8 rsvd2[100]; > +} __packed; > + > int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev); > int snp_send_guest_request(struct snp_guest_dev *dev, struct snp_guest_req *req, > struct snp_guest_request_ioctl *rio); > diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h > index 783150458864..038a5a15d937 100644 > --- a/arch/x86/include/asm/sev.h > +++ b/arch/x86/include/asm/sev.h > @@ -200,6 +200,7 @@ void __init __noreturn snp_abort(void); > void snp_accept_memory(phys_addr_t start, phys_addr_t end); > u64 snp_get_unsupported_features(u64 status); > u64 sev_get_status(void); > +void __init snp_secure_tsc_prepare(void); > #else > static inline void sev_es_ist_enter(struct pt_regs *regs) { } > static inline void sev_es_ist_exit(void) { } > @@ -223,6 +224,7 @@ static inline void snp_abort(void) { } > static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { } > static inline u64 snp_get_unsupported_features(u64 status) { return 0; } > static inline u64 sev_get_status(void) { return 0; } > +static inline void __init snp_secure_tsc_prepare(void) { } > #endif > > #endif > diff --git a/arch/x86/include/asm/svm.h 
b/arch/x86/include/asm/svm.h > index 3ac0ffc4f3e2..ee35c0488f56 100644 > --- a/arch/x86/include/asm/svm.h > +++ b/arch/x86/include/asm/svm.h > @@ -414,7 +414,9 @@ struct sev_es_save_area { > u8 reserved_0x298[80]; > u32 pkru; > u32 tsc_aux; > - u8 reserved_0x2f0[24]; > + u64 tsc_scale; > + u64 tsc_offset; > + u8 reserved_0x300[8]; > u64 rcx; > u64 rdx; > u64 rbx; > @@ -546,7 +548,7 @@ static inline void __unused_size_checks(void) > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x1c0); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x248); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x298); > - BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x2f0); > + BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x300); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x320); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x380); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x3f0); > diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c > index fb3b1feb1b84..9468809d02c7 100644 > --- a/arch/x86/kernel/sev.c > +++ b/arch/x86/kernel/sev.c > @@ -76,6 +76,10 @@ static u64 sev_hv_features __ro_after_init; > /* Secrets page physical address from the CC blob */ > static u64 secrets_pa __ro_after_init; > > +/* Secure TSC values read using TSC_INFO SNP Guest request */ > +static u64 guest_tsc_scale __ro_after_init; > +static u64 guest_tsc_offset __ro_after_init; > + > /* #VC handler runtime per-CPU data */ > struct sev_es_runtime_data { > struct ghcb ghcb_page; > @@ -1393,6 +1397,78 @@ bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id) > } > EXPORT_SYMBOL_GPL(snp_assign_vmpck); > > +static struct snp_guest_dev tsc_snp_dev __initdata; > + > +static int __init snp_get_tsc_info(void) > +{ > + static u8 buf[SNP_TSC_INFO_REQ_SZ + AUTHTAG_LEN]; > + struct snp_guest_request_ioctl rio; > + struct snp_tsc_info_resp tsc_resp; > + struct snp_tsc_info_req tsc_req; > + struct snp_guest_req req; > + int rc, resp_len; > + > + /* > + * The intermediate response buffer is used while decrypting the > + * response payload. Make sure that it has enough space to cover the > + * authtag. > + */ > + resp_len = sizeof(tsc_resp) + AUTHTAG_LEN; > + if (sizeof(buf) < resp_len) > + return -EINVAL; > + > + memset(&tsc_req, 0, sizeof(tsc_req)); > + memset(&req, 0, sizeof(req)); > + memset(&rio, 0, sizeof(rio)); > + memset(buf, 0, sizeof(buf)); > + > + if (!snp_assign_vmpck(&tsc_snp_dev, 0)) > + return -EINVAL; > + I don't see a requirement for VMPL0 in the API docs. I just see "When a guest creates its own VMSA, it must query the PSP for information with the TSC_INFO message to determine the correct values to write into GUEST_TSC_SCALE and GUEST_TSC_OFFSET". In that case, I don't see a particular use for this request in Linux. I would expect it either in the UEFI or in SVSM. Is this code path explicitly for direct boot to Linux? If so, did I miss that documentation in this patch series? 
> + /* Initialize the PSP channel to send snp messages */ > + if (snp_setup_psp_messaging(&tsc_snp_dev)) > + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); > + > + req.msg_version = MSG_HDR_VER; > + req.msg_type = SNP_MSG_TSC_INFO_REQ; > + req.vmpck_id = tsc_snp_dev.vmpck_id; > + req.req_buf = &tsc_req; > + req.req_sz = sizeof(tsc_req); > + req.resp_buf = buf; > + req.resp_sz = resp_len; > + req.exit_code = SVM_VMGEXIT_GUEST_REQUEST; > + rc = snp_send_guest_request(&tsc_snp_dev, &req, &rio); > + if (rc) > + goto err_req; > + > + memcpy(&tsc_resp, buf, sizeof(tsc_resp)); > + pr_debug("%s: Valid response status %x scale %llx offset %llx factor %x\n", > + __func__, tsc_resp.status, tsc_resp.tsc_scale, tsc_resp.tsc_offset, > + tsc_resp.tsc_factor); > + > + guest_tsc_scale = tsc_resp.tsc_scale; > + guest_tsc_offset = tsc_resp.tsc_offset; > + > +err_req: > + /* The response buffer contains the sensitive data, explicitly clear it. */ > + memzero_explicit(buf, sizeof(buf)); > + memzero_explicit(&tsc_resp, sizeof(tsc_resp)); > + memzero_explicit(&req, sizeof(req)); > + > + return rc; > +} > + > +void __init snp_secure_tsc_prepare(void) > +{ > + if (!cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) > + return; > + > + if (snp_get_tsc_info()) > + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); > + > + pr_debug("SecureTSC enabled\n"); > +} > + > static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) > { > struct sev_es_save_area *cur_vmsa, *vmsa; > @@ -1493,6 +1569,12 @@ static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) > vmsa->vmpl = 0; > vmsa->sev_features = sev_status >> 2; > > + /* Setting Secure TSC parameters */ > + if (cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) { > + vmsa->tsc_scale = guest_tsc_scale; > + vmsa->tsc_offset = guest_tsc_offset; > + } > + > /* Switch the page over to a VMSA page now that it is initialized */ > ret = snp_set_vmsa(vmsa, true); > if (ret) { > diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c > index 6faea41e99b6..9935fc506e99 100644 > --- a/arch/x86/mm/mem_encrypt_amd.c > +++ b/arch/x86/mm/mem_encrypt_amd.c > @@ -215,6 +215,11 @@ void __init sme_map_bootdata(char *real_mode_data) > __sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true); > } > > +void __init amd_enc_init(void) > +{ > + snp_secure_tsc_prepare(); > +} > + > void __init sev_setup_arch(void) > { > phys_addr_t total_mem = memblock_phys_mem_size(); > @@ -502,6 +507,7 @@ void __init sme_early_init(void) > x86_platform.guest.enc_status_change_finish = amd_enc_status_change_finish; > x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required; > x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required; > + x86_platform.guest.enc_init = amd_enc_init; > > /* > * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the > diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h > index cb0d6cd1c12f..e081ca4d5da2 100644 > --- a/include/linux/cc_platform.h > +++ b/include/linux/cc_platform.h > @@ -90,6 +90,14 @@ enum cc_attr { > * Examples include TDX Guest. > */ > CC_ATTR_HOTPLUG_DISABLED, > + > + /** > + * @CC_ATTR_GUEST_SECURE_TSC: Secure TSC is active. > + * > + * The platform/OS is running as a guest/virtual machine and actively > + * using AMD SEV-SNP Secure TSC feature. > + */ > + CC_ATTR_GUEST_SECURE_TSC, > }; > > #ifdef CONFIG_ARCH_HAS_CC_PLATFORM > -- > 2.34.1 >
On 10/30/23 01:36, Nikunj A Dadhania wrote: > Add support for Secure TSC in SNP enabled guests. Secure TSC allows > guest to securely use RDTSC/RDTSCP instructions as the parameters > being used cannot be changed by hypervisor once the guest is launched. > > During the boot-up of the secondary cpus, SecureTSC enabled guests > need to query TSC info from AMD Security Processor. This communication > channel is encrypted between the AMD Security Processor and the guest, > the hypervisor is just the conduit to deliver the guest messages to > the AMD Security Processor. Each message is protected with an > AEAD (AES-256 GCM). Use minimal AES GCM library to encrypt/decrypt SNP > Guest messages to communicate with the PSP. Add to this commit message that you're using the enc_init hook to perform some Secure TSC initialization and why you have to do that. > > Signed-off-by: Nikunj A Dadhania <nikunj@amd.com> > --- > arch/x86/coco/core.c | 3 ++ > arch/x86/include/asm/sev-guest.h | 18 +++++++ > arch/x86/include/asm/sev.h | 2 + > arch/x86/include/asm/svm.h | 6 ++- > arch/x86/kernel/sev.c | 82 ++++++++++++++++++++++++++++++++ > arch/x86/mm/mem_encrypt_amd.c | 6 +++ > include/linux/cc_platform.h | 8 ++++ > 7 files changed, 123 insertions(+), 2 deletions(-) > > diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c > index eeec9986570e..5d5d4d03c543 100644 > --- a/arch/x86/coco/core.c > +++ b/arch/x86/coco/core.c > @@ -89,6 +89,9 @@ static bool noinstr amd_cc_platform_has(enum cc_attr attr) > case CC_ATTR_GUEST_SEV_SNP: > return sev_status & MSR_AMD64_SEV_SNP_ENABLED; > > + case CC_ATTR_GUEST_SECURE_TSC: > + return sev_status & MSR_AMD64_SNP_SECURE_TSC; > + > default: > return false; > } > diff --git a/arch/x86/include/asm/sev-guest.h b/arch/x86/include/asm/sev-guest.h > index e6f94208173d..58739173eba9 100644 > --- a/arch/x86/include/asm/sev-guest.h > +++ b/arch/x86/include/asm/sev-guest.h > @@ -39,6 +39,8 @@ enum msg_type { > SNP_MSG_ABSORB_RSP, > SNP_MSG_VMRK_REQ, > SNP_MSG_VMRK_RSP, > + SNP_MSG_TSC_INFO_REQ = 17, > + SNP_MSG_TSC_INFO_RSP, > > SNP_MSG_TYPE_MAX > }; > @@ -111,6 +113,22 @@ struct snp_guest_req { > u8 msg_type; > }; > > +struct snp_tsc_info_req { > +#define SNP_TSC_INFO_REQ_SZ 128 Please move this to before the struct definition. 
> + /* Must be zero filled */ > + u8 rsvd[SNP_TSC_INFO_REQ_SZ]; > +} __packed; > + > +struct snp_tsc_info_resp { > + /* Status of TSC_INFO message */ > + u32 status; > + u32 rsvd1; > + u64 tsc_scale; > + u64 tsc_offset; > + u32 tsc_factor; > + u8 rsvd2[100]; > +} __packed; > + > int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev); > int snp_send_guest_request(struct snp_guest_dev *dev, struct snp_guest_req *req, > struct snp_guest_request_ioctl *rio); > diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h > index 783150458864..038a5a15d937 100644 > --- a/arch/x86/include/asm/sev.h > +++ b/arch/x86/include/asm/sev.h > @@ -200,6 +200,7 @@ void __init __noreturn snp_abort(void); > void snp_accept_memory(phys_addr_t start, phys_addr_t end); > u64 snp_get_unsupported_features(u64 status); > u64 sev_get_status(void); > +void __init snp_secure_tsc_prepare(void); > #else > static inline void sev_es_ist_enter(struct pt_regs *regs) { } > static inline void sev_es_ist_exit(void) { } > @@ -223,6 +224,7 @@ static inline void snp_abort(void) { } > static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { } > static inline u64 snp_get_unsupported_features(u64 status) { return 0; } > static inline u64 sev_get_status(void) { return 0; } > +static inline void __init snp_secure_tsc_prepare(void) { } > #endif > > #endif > diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h > index 3ac0ffc4f3e2..ee35c0488f56 100644 > --- a/arch/x86/include/asm/svm.h > +++ b/arch/x86/include/asm/svm.h > @@ -414,7 +414,9 @@ struct sev_es_save_area { > u8 reserved_0x298[80]; > u32 pkru; > u32 tsc_aux; > - u8 reserved_0x2f0[24]; > + u64 tsc_scale; > + u64 tsc_offset; > + u8 reserved_0x300[8]; > u64 rcx; > u64 rdx; > u64 rbx; > @@ -546,7 +548,7 @@ static inline void __unused_size_checks(void) > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x1c0); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x248); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x298); > - BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x2f0); > + BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x300); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x320); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x380); > BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x3f0); > diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c > index fb3b1feb1b84..9468809d02c7 100644 > --- a/arch/x86/kernel/sev.c > +++ b/arch/x86/kernel/sev.c > @@ -76,6 +76,10 @@ static u64 sev_hv_features __ro_after_init; > /* Secrets page physical address from the CC blob */ > static u64 secrets_pa __ro_after_init; > > +/* Secure TSC values read using TSC_INFO SNP Guest request */ > +static u64 guest_tsc_scale __ro_after_init; > +static u64 guest_tsc_offset __ro_after_init; s/guest_/snp_/ > + > /* #VC handler runtime per-CPU data */ > struct sev_es_runtime_data { > struct ghcb ghcb_page; > @@ -1393,6 +1397,78 @@ bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id) > } > EXPORT_SYMBOL_GPL(snp_assign_vmpck); > > +static struct snp_guest_dev tsc_snp_dev __initdata; > + > +static int __init snp_get_tsc_info(void) > +{ > + static u8 buf[SNP_TSC_INFO_REQ_SZ + AUTHTAG_LEN]; > + struct snp_guest_request_ioctl rio; > + struct snp_tsc_info_resp tsc_resp; > + struct snp_tsc_info_req tsc_req; > + struct snp_guest_req req; > + int rc, resp_len; > + > + /* > + * The intermediate response buffer is used while decrypting the > + * response payload. Make sure that it has enough space to cover the > + * authtag. 
> + */ > + resp_len = sizeof(tsc_resp) + AUTHTAG_LEN; > + if (sizeof(buf) < resp_len) > + return -EINVAL; > + > + memset(&tsc_req, 0, sizeof(tsc_req)); > + memset(&req, 0, sizeof(req)); > + memset(&rio, 0, sizeof(rio)); > + memset(buf, 0, sizeof(buf)); > + > + if (!snp_assign_vmpck(&tsc_snp_dev, 0)) > + return -EINVAL; > + > + /* Initialize the PSP channel to send snp messages */ > + if (snp_setup_psp_messaging(&tsc_snp_dev)) > + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); This should just return the non-zero return code from snp_setup_psp_messaging(), no? rc = snp_setup_psp_messaging(&tsc_snp_dev); if (rc) return rc; > + > + req.msg_version = MSG_HDR_VER; > + req.msg_type = SNP_MSG_TSC_INFO_REQ; > + req.vmpck_id = tsc_snp_dev.vmpck_id; > + req.req_buf = &tsc_req; > + req.req_sz = sizeof(tsc_req); > + req.resp_buf = buf; > + req.resp_sz = resp_len; > + req.exit_code = SVM_VMGEXIT_GUEST_REQUEST; > + rc = snp_send_guest_request(&tsc_snp_dev, &req, &rio); Aren't you supposed to hold a mutex before calling this since it will eventually call the message sequence number functions? > + if (rc) > + goto err_req; > + > + memcpy(&tsc_resp, buf, sizeof(tsc_resp)); > + pr_debug("%s: Valid response status %x scale %llx offset %llx factor %x\n", > + __func__, tsc_resp.status, tsc_resp.tsc_scale, tsc_resp.tsc_offset, > + tsc_resp.tsc_factor); > + > + guest_tsc_scale = tsc_resp.tsc_scale; > + guest_tsc_offset = tsc_resp.tsc_offset; > + > +err_req: > + /* The response buffer contains the sensitive data, explicitly clear it. */ > + memzero_explicit(buf, sizeof(buf)); > + memzero_explicit(&tsc_resp, sizeof(tsc_resp)); > + memzero_explicit(&req, sizeof(req)); > + > + return rc; > +} > + > +void __init snp_secure_tsc_prepare(void) > +{ > + if (!cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) > + return; > + > + if (snp_get_tsc_info()) > + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); How about using SEV_TERM_SET_LINUX and a new GHCB_TERM_SECURE_TSC_INFO. 
> + > + pr_debug("SecureTSC enabled\n"); > +} > + > static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) > { > struct sev_es_save_area *cur_vmsa, *vmsa; > @@ -1493,6 +1569,12 @@ static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) > vmsa->vmpl = 0; > vmsa->sev_features = sev_status >> 2; > > + /* Setting Secure TSC parameters */ > + if (cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) { > + vmsa->tsc_scale = guest_tsc_scale; > + vmsa->tsc_offset = guest_tsc_offset; > + } > + > /* Switch the page over to a VMSA page now that it is initialized */ > ret = snp_set_vmsa(vmsa, true); > if (ret) { > diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c > index 6faea41e99b6..9935fc506e99 100644 > --- a/arch/x86/mm/mem_encrypt_amd.c > +++ b/arch/x86/mm/mem_encrypt_amd.c > @@ -215,6 +215,11 @@ void __init sme_map_bootdata(char *real_mode_data) > __sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true); > } > > +void __init amd_enc_init(void) > +{ > + snp_secure_tsc_prepare(); > +} > + > void __init sev_setup_arch(void) > { > phys_addr_t total_mem = memblock_phys_mem_size(); > @@ -502,6 +507,7 @@ void __init sme_early_init(void) > x86_platform.guest.enc_status_change_finish = amd_enc_status_change_finish; > x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required; > x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required; > + x86_platform.guest.enc_init = amd_enc_init; > > /* > * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the > diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h > index cb0d6cd1c12f..e081ca4d5da2 100644 > --- a/include/linux/cc_platform.h > +++ b/include/linux/cc_platform.h > @@ -90,6 +90,14 @@ enum cc_attr { > * Examples include TDX Guest. > */ > CC_ATTR_HOTPLUG_DISABLED, > + > + /** > + * @CC_ATTR_GUEST_SECURE_TSC: Secure TSC is active. > + * > + * The platform/OS is running as a guest/virtual machine and actively > + * using AMD SEV-SNP Secure TSC feature. I think TDX also has a secure TSC like feature, so can this be generic? Thanks, Tom > + */ > + CC_ATTR_GUEST_SECURE_TSC, > }; > > #ifdef CONFIG_ARCH_HAS_CC_PLATFORM
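A rough sketch of how those review comments might land in a next revision (this is an assumption, not code from the posted patch; snp_cmd_mutex stands in for whatever lock already guards the message sequence number):

/* s/guest_/snp_/ rename of the cached TSC_INFO values */
static u64 snp_tsc_scale __ro_after_init;
static u64 snp_tsc_offset __ro_after_init;

	/* Propagate the error instead of terminating inside the helper ... */
	rc = snp_setup_psp_messaging(&tsc_snp_dev);
	if (rc)
		return rc;

	/* ... and serialize the request around the sequence number update. */
	mutex_lock(&snp_cmd_mutex);
	rc = snp_send_guest_request(&tsc_snp_dev, &req, &rio);
	mutex_unlock(&snp_cmd_mutex);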
On 10/30/2023 10:16 PM, Dionna Amalie Glaze wrote: > On Sun, Oct 29, 2023 at 11:38 PM Nikunj A Dadhania <nikunj@amd.com> wrote: >> >> @@ -1393,6 +1397,78 @@ bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id) >> } >> EXPORT_SYMBOL_GPL(snp_assign_vmpck); >> >> +static struct snp_guest_dev tsc_snp_dev __initdata; >> + >> +static int __init snp_get_tsc_info(void) >> +{ >> + static u8 buf[SNP_TSC_INFO_REQ_SZ + AUTHTAG_LEN]; >> + struct snp_guest_request_ioctl rio; >> + struct snp_tsc_info_resp tsc_resp; >> + struct snp_tsc_info_req tsc_req; >> + struct snp_guest_req req; >> + int rc, resp_len; >> + >> + /* >> + * The intermediate response buffer is used while decrypting the >> + * response payload. Make sure that it has enough space to cover the >> + * authtag. >> + */ >> + resp_len = sizeof(tsc_resp) + AUTHTAG_LEN; >> + if (sizeof(buf) < resp_len) >> + return -EINVAL; >> + >> + memset(&tsc_req, 0, sizeof(tsc_req)); >> + memset(&req, 0, sizeof(req)); >> + memset(&rio, 0, sizeof(rio)); >> + memset(buf, 0, sizeof(buf)); >> + >> + if (!snp_assign_vmpck(&tsc_snp_dev, 0)) >> + return -EINVAL; >> + > > I don't see a requirement for VMPL0 in the API docs. I just see "When > a guest creates its own VMSA, it must query the PSP for information > with the TSC_INFO message to determine the correct values to write > into GUEST_TSC_SCALE and GUEST_TSC_OFFSET". The request should work irrespective of the VMPL level. > In that case, I don't see > a particular use for this request in Linux. I would expect it either > in the UEFI or in SVSM. Is this code path explicitly for direct boot > to Linux? If so, did I miss that documentation in this patch series? This works with UEFI boot. I havent tried this with SVSM yet. Thanks Nikunj
On 10/31/2023 1:56 AM, Tom Lendacky wrote: > On 10/30/23 01:36, Nikunj A Dadhania wrote: >> Add support for Secure TSC in SNP enabled guests. Secure TSC allows >> guest to securely use RDTSC/RDTSCP instructions as the parameters >> being used cannot be changed by hypervisor once the guest is launched. >> >> During the boot-up of the secondary cpus, SecureTSC enabled guests >> need to query TSC info from AMD Security Processor. This communication >> channel is encrypted between the AMD Security Processor and the guest, >> the hypervisor is just the conduit to deliver the guest messages to >> the AMD Security Processor. Each message is protected with an >> AEAD (AES-256 GCM). Use minimal AES GCM library to encrypt/decrypt SNP >> Guest messages to communicate with the PSP. > > Add to this commit message that you're using the enc_init hook to perform some Secure TSC initialization and why you have to do that. Sure, will add. >> >> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com> >> --- >> arch/x86/coco/core.c | 3 ++ >> arch/x86/include/asm/sev-guest.h | 18 +++++++ >> arch/x86/include/asm/sev.h | 2 + >> arch/x86/include/asm/svm.h | 6 ++- >> arch/x86/kernel/sev.c | 82 ++++++++++++++++++++++++++++++++ >> arch/x86/mm/mem_encrypt_amd.c | 6 +++ >> include/linux/cc_platform.h | 8 ++++ >> 7 files changed, 123 insertions(+), 2 deletions(-) >> >> diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c >> index eeec9986570e..5d5d4d03c543 100644 >> --- a/arch/x86/coco/core.c >> +++ b/arch/x86/coco/core.c >> @@ -89,6 +89,9 @@ static bool noinstr amd_cc_platform_has(enum cc_attr attr) >> case CC_ATTR_GUEST_SEV_SNP: >> return sev_status & MSR_AMD64_SEV_SNP_ENABLED; >> + case CC_ATTR_GUEST_SECURE_TSC: >> + return sev_status & MSR_AMD64_SNP_SECURE_TSC; >> + >> default: >> return false; >> } >> diff --git a/arch/x86/include/asm/sev-guest.h b/arch/x86/include/asm/sev-guest.h >> index e6f94208173d..58739173eba9 100644 >> --- a/arch/x86/include/asm/sev-guest.h >> +++ b/arch/x86/include/asm/sev-guest.h >> @@ -39,6 +39,8 @@ enum msg_type { >> SNP_MSG_ABSORB_RSP, >> SNP_MSG_VMRK_REQ, >> SNP_MSG_VMRK_RSP, >> + SNP_MSG_TSC_INFO_REQ = 17, >> + SNP_MSG_TSC_INFO_RSP, >> SNP_MSG_TYPE_MAX >> }; >> @@ -111,6 +113,22 @@ struct snp_guest_req { >> u8 msg_type; >> }; >> +struct snp_tsc_info_req { >> +#define SNP_TSC_INFO_REQ_SZ 128 > > Please move this to before the struct definition. 
> >> + /* Must be zero filled */ >> + u8 rsvd[SNP_TSC_INFO_REQ_SZ]; >> +} __packed; >> + >> +struct snp_tsc_info_resp { >> + /* Status of TSC_INFO message */ >> + u32 status; >> + u32 rsvd1; >> + u64 tsc_scale; >> + u64 tsc_offset; >> + u32 tsc_factor; >> + u8 rsvd2[100]; >> +} __packed; >> + >> int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev); >> int snp_send_guest_request(struct snp_guest_dev *dev, struct snp_guest_req *req, >> struct snp_guest_request_ioctl *rio); >> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h >> index 783150458864..038a5a15d937 100644 >> --- a/arch/x86/include/asm/sev.h >> +++ b/arch/x86/include/asm/sev.h >> @@ -200,6 +200,7 @@ void __init __noreturn snp_abort(void); >> void snp_accept_memory(phys_addr_t start, phys_addr_t end); >> u64 snp_get_unsupported_features(u64 status); >> u64 sev_get_status(void); >> +void __init snp_secure_tsc_prepare(void); >> #else >> static inline void sev_es_ist_enter(struct pt_regs *regs) { } >> static inline void sev_es_ist_exit(void) { } >> @@ -223,6 +224,7 @@ static inline void snp_abort(void) { } >> static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { } >> static inline u64 snp_get_unsupported_features(u64 status) { return 0; } >> static inline u64 sev_get_status(void) { return 0; } >> +static inline void __init snp_secure_tsc_prepare(void) { } >> #endif >> #endif >> diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h >> index 3ac0ffc4f3e2..ee35c0488f56 100644 >> --- a/arch/x86/include/asm/svm.h >> +++ b/arch/x86/include/asm/svm.h >> @@ -414,7 +414,9 @@ struct sev_es_save_area { >> u8 reserved_0x298[80]; >> u32 pkru; >> u32 tsc_aux; >> - u8 reserved_0x2f0[24]; >> + u64 tsc_scale; >> + u64 tsc_offset; >> + u8 reserved_0x300[8]; >> u64 rcx; >> u64 rdx; >> u64 rbx; >> @@ -546,7 +548,7 @@ static inline void __unused_size_checks(void) >> BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x1c0); >> BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x248); >> BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x298); >> - BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x2f0); >> + BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x300); >> BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x320); >> BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x380); >> BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x3f0); >> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c >> index fb3b1feb1b84..9468809d02c7 100644 >> --- a/arch/x86/kernel/sev.c >> +++ b/arch/x86/kernel/sev.c >> @@ -76,6 +76,10 @@ static u64 sev_hv_features __ro_after_init; >> /* Secrets page physical address from the CC blob */ >> static u64 secrets_pa __ro_after_init; >> +/* Secure TSC values read using TSC_INFO SNP Guest request */ >> +static u64 guest_tsc_scale __ro_after_init; >> +static u64 guest_tsc_offset __ro_after_init; > > s/guest_/snp_/ > >> + >> /* #VC handler runtime per-CPU data */ >> struct sev_es_runtime_data { >> struct ghcb ghcb_page; >> @@ -1393,6 +1397,78 @@ bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id) >> } >> EXPORT_SYMBOL_GPL(snp_assign_vmpck); >> +static struct snp_guest_dev tsc_snp_dev __initdata; >> + >> +static int __init snp_get_tsc_info(void) >> +{ >> + static u8 buf[SNP_TSC_INFO_REQ_SZ + AUTHTAG_LEN]; >> + struct snp_guest_request_ioctl rio; >> + struct snp_tsc_info_resp tsc_resp; >> + struct snp_tsc_info_req tsc_req; >> + struct snp_guest_req req; >> + int rc, resp_len; >> + >> + /* >> + * The intermediate response buffer is used while decrypting the >> + * response payload. 
Make sure that it has enough space to cover the >> + * authtag. >> + */ >> + resp_len = sizeof(tsc_resp) + AUTHTAG_LEN; >> + if (sizeof(buf) < resp_len) >> + return -EINVAL; >> + >> + memset(&tsc_req, 0, sizeof(tsc_req)); >> + memset(&req, 0, sizeof(req)); >> + memset(&rio, 0, sizeof(rio)); >> + memset(buf, 0, sizeof(buf)); >> + >> + if (!snp_assign_vmpck(&tsc_snp_dev, 0)) >> + return -EINVAL; >> + >> + /* Initialize the PSP channel to send snp messages */ >> + if (snp_setup_psp_messaging(&tsc_snp_dev)) >> + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); > > This should just return the non-zero return code from snp_setup_psp_messaging(), no? > > rc = snp_setup_psp_messaging(&tsc_snp_dev); > if (rc) > return rc; Yes, that will also have the same behaviour, snp_get_tsc_info() will send the termination request. >> + >> + req.msg_version = MSG_HDR_VER; >> + req.msg_type = SNP_MSG_TSC_INFO_REQ; >> + req.vmpck_id = tsc_snp_dev.vmpck_id; >> + req.req_buf = &tsc_req; >> + req.req_sz = sizeof(tsc_req); >> + req.resp_buf = buf; >> + req.resp_sz = resp_len; >> + req.exit_code = SVM_VMGEXIT_GUEST_REQUEST; >> + rc = snp_send_guest_request(&tsc_snp_dev, &req, &rio); > > Aren't you supposed to hold a mutex before calling this since it will eventually call the message sequence number functions? Yes, I will need to otherwise lockdep will complain. This is being called from boot processor, so there is no parallel execution. >> + if (rc) >> + goto err_req; >> + >> + memcpy(&tsc_resp, buf, sizeof(tsc_resp)); >> + pr_debug("%s: Valid response status %x scale %llx offset %llx factor %x\n", >> + __func__, tsc_resp.status, tsc_resp.tsc_scale, tsc_resp.tsc_offset, >> + tsc_resp.tsc_factor); >> + >> + guest_tsc_scale = tsc_resp.tsc_scale; >> + guest_tsc_offset = tsc_resp.tsc_offset; >> + >> +err_req: >> + /* The response buffer contains the sensitive data, explicitly clear it. */ >> + memzero_explicit(buf, sizeof(buf)); >> + memzero_explicit(&tsc_resp, sizeof(tsc_resp)); >> + memzero_explicit(&req, sizeof(req)); >> + >> + return rc; >> +} >> + >> +void __init snp_secure_tsc_prepare(void) >> +{ >> + if (!cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) >> + return; >> + >> + if (snp_get_tsc_info()) >> + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); > > How about using SEV_TERM_SET_LINUX and a new GHCB_TERM_SECURE_TSC_INFO. Yes, we can do that, I remember you had said this will required GHCB spec change and then thought of sticking with the current return code. 
> >> + >> + pr_debug("SecureTSC enabled\n"); >> +} >> + >> static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) >> { >> struct sev_es_save_area *cur_vmsa, *vmsa; >> @@ -1493,6 +1569,12 @@ static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) >> vmsa->vmpl = 0; >> vmsa->sev_features = sev_status >> 2; >> + /* Setting Secure TSC parameters */ >> + if (cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) { >> + vmsa->tsc_scale = guest_tsc_scale; >> + vmsa->tsc_offset = guest_tsc_offset; >> + } >> + >> /* Switch the page over to a VMSA page now that it is initialized */ >> ret = snp_set_vmsa(vmsa, true); >> if (ret) { >> diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c >> index 6faea41e99b6..9935fc506e99 100644 >> --- a/arch/x86/mm/mem_encrypt_amd.c >> +++ b/arch/x86/mm/mem_encrypt_amd.c >> @@ -215,6 +215,11 @@ void __init sme_map_bootdata(char *real_mode_data) >> __sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true); >> } >> +void __init amd_enc_init(void) >> +{ >> + snp_secure_tsc_prepare(); >> +} >> + >> void __init sev_setup_arch(void) >> { >> phys_addr_t total_mem = memblock_phys_mem_size(); >> @@ -502,6 +507,7 @@ void __init sme_early_init(void) >> x86_platform.guest.enc_status_change_finish = amd_enc_status_change_finish; >> x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required; >> x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required; >> + x86_platform.guest.enc_init = amd_enc_init; >> /* >> * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the Regards Nikunj
On 10/31/2023 1:56 AM, Tom Lendacky wrote: >> diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h >> index cb0d6cd1c12f..e081ca4d5da2 100644 >> --- a/include/linux/cc_platform.h >> +++ b/include/linux/cc_platform.h >> @@ -90,6 +90,14 @@ enum cc_attr { >> * Examples include TDX Guest. >> */ >> CC_ATTR_HOTPLUG_DISABLED, >> + >> + /** >> + * @CC_ATTR_GUEST_SECURE_TSC: Secure TSC is active. >> + * >> + * The platform/OS is running as a guest/virtual machine and actively >> + * using AMD SEV-SNP Secure TSC feature. > > I think TDX also has a secure TSC like feature, so can this be generic? Yes, we can do that. In SNP case SecureTSC is an optional feature, not sure if that is the case for TDX as well. Kirill any inputs ? > > Thanks, > Tom > >> + */ >> + CC_ATTR_GUEST_SECURE_TSC, >> }; >> #ifdef CONFIG_ARCH_HAS_CC_PLATFORM Regards Nikunj
On Thu, Nov 02, 2023 at 11:11:52AM +0530, Nikunj A. Dadhania wrote: > On 10/31/2023 1:56 AM, Tom Lendacky wrote: > >> diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h > >> index cb0d6cd1c12f..e081ca4d5da2 100644 > >> --- a/include/linux/cc_platform.h > >> +++ b/include/linux/cc_platform.h > >> @@ -90,6 +90,14 @@ enum cc_attr { > >> * Examples include TDX Guest. > >> */ > >> CC_ATTR_HOTPLUG_DISABLED, > >> + > >> + /** > >> + * @CC_ATTR_GUEST_SECURE_TSC: Secure TSC is active. > >> + * > >> + * The platform/OS is running as a guest/virtual machine and actively > >> + * using AMD SEV-SNP Secure TSC feature. > > > > I think TDX also has a secure TSC like feature, so can this be generic? > > Yes, we can do that. In SNP case SecureTSC is an optional feature, not sure if that is the case for TDX as well. > > Kirill any inputs ? We have several X86_FEATURE_ flags to indicate quality of TSC. Do we really need a CC_ATTR on top of that? Maybe SEV code could just set X86_FEATURE_ according to what its TSC can do?
On 11/2/23 00:36, Nikunj A. Dadhania wrote: > On 10/31/2023 1:56 AM, Tom Lendacky wrote: >> On 10/30/23 01:36, Nikunj A Dadhania wrote: >>> Add support for Secure TSC in SNP enabled guests. Secure TSC allows >>> guest to securely use RDTSC/RDTSCP instructions as the parameters >>> being used cannot be changed by hypervisor once the guest is launched. >>> >>> During the boot-up of the secondary cpus, SecureTSC enabled guests >>> need to query TSC info from AMD Security Processor. This communication >>> channel is encrypted between the AMD Security Processor and the guest, >>> the hypervisor is just the conduit to deliver the guest messages to >>> the AMD Security Processor. Each message is protected with an >>> AEAD (AES-256 GCM). Use minimal AES GCM library to encrypt/decrypt SNP >>> Guest messages to communicate with the PSP. >> >> Add to this commit message that you're using the enc_init hook to perform some Secure TSC initialization and why you have to do that. > > Sure, will add. > >>> >>> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com> >>> --- >>> arch/x86/coco/core.c | 3 ++ >>> arch/x86/include/asm/sev-guest.h | 18 +++++++ >>> arch/x86/include/asm/sev.h | 2 + >>> arch/x86/include/asm/svm.h | 6 ++- >>> arch/x86/kernel/sev.c | 82 ++++++++++++++++++++++++++++++++ >>> arch/x86/mm/mem_encrypt_amd.c | 6 +++ >>> include/linux/cc_platform.h | 8 ++++ >>> 7 files changed, 123 insertions(+), 2 deletions(-) >>> >>> +void __init snp_secure_tsc_prepare(void) >>> +{ >>> + if (!cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) >>> + return; >>> + >>> + if (snp_get_tsc_info()) >>> + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); >> >> How about using SEV_TERM_SET_LINUX and a new GHCB_TERM_SECURE_TSC_INFO. > > Yes, we can do that, I remember you had said this will required GHCB spec change and then thought of sticking with the current return code. No spec change needed. The base SNP support is already using it, so not an issue to add a new error code. Thanks, Tom
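In code, the suggested termination path would look roughly like this; GHCB_TERM_SECURE_TSC_INFO is a hypothetical new Linux-specific reason code (its value below is only a placeholder), not something defined by the posted patch or the GHCB spec:

/* Hypothetical Linux-specific termination reason for a failed TSC_INFO request */
#define GHCB_TERM_SECURE_TSC_INFO	8	/* placeholder value */

	if (snp_get_tsc_info())
		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SECURE_TSC_INFO);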
On 11/2/2023 4:06 PM, Kirill A. Shutemov wrote: > On Thu, Nov 02, 2023 at 11:11:52AM +0530, Nikunj A. Dadhania wrote: >> On 10/31/2023 1:56 AM, Tom Lendacky wrote: >>>> diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h >>>> index cb0d6cd1c12f..e081ca4d5da2 100644 >>>> --- a/include/linux/cc_platform.h >>>> +++ b/include/linux/cc_platform.h >>>> @@ -90,6 +90,14 @@ enum cc_attr { >>>> * Examples include TDX Guest. >>>> */ >>>> CC_ATTR_HOTPLUG_DISABLED, >>>> + >>>> + /** >>>> + * @CC_ATTR_GUEST_SECURE_TSC: Secure TSC is active. >>>> + * >>>> + * The platform/OS is running as a guest/virtual machine and actively >>>> + * using AMD SEV-SNP Secure TSC feature. >>> >>> I think TDX also has a secure TSC like feature, so can this be generic? >> >> Yes, we can do that. In SNP case SecureTSC is an optional feature, not sure if that is the case for TDX as well. >> >> Kirill any inputs ? > > We have several X86_FEATURE_ flags to indicate quality of TSC. Do we > really need a CC_ATTR on top of that? Maybe SEV code could just set > X86_FEATURE_ according to what its TSC can do? For SEV-SNP, SEV_STATUS MSR has the information of various features that have been enabled by the hypervisor. We will need a CC_ATTR for these optional features. Regards Nikunj
On Mon, Nov 06, 2023 at 04:15:59PM +0530, Nikunj A. Dadhania wrote: > On 11/2/2023 4:06 PM, Kirill A. Shutemov wrote: > > On Thu, Nov 02, 2023 at 11:11:52AM +0530, Nikunj A. Dadhania wrote: > >> On 10/31/2023 1:56 AM, Tom Lendacky wrote: > >>>> diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h > >>>> index cb0d6cd1c12f..e081ca4d5da2 100644 > >>>> --- a/include/linux/cc_platform.h > >>>> +++ b/include/linux/cc_platform.h > >>>> @@ -90,6 +90,14 @@ enum cc_attr { > >>>> * Examples include TDX Guest. > >>>> */ > >>>> CC_ATTR_HOTPLUG_DISABLED, > >>>> + > >>>> + /** > >>>> + * @CC_ATTR_GUEST_SECURE_TSC: Secure TSC is active. > >>>> + * > >>>> + * The platform/OS is running as a guest/virtual machine and actively > >>>> + * using AMD SEV-SNP Secure TSC feature. > >>> > >>> I think TDX also has a secure TSC like feature, so can this be generic? > >> > >> Yes, we can do that. In SNP case SecureTSC is an optional feature, not sure if that is the case for TDX as well. > >> > >> Kirill any inputs ? > > > > We have several X86_FEATURE_ flags to indicate quality of TSC. Do we > > really need a CC_ATTR on top of that? Maybe SEV code could just set > > X86_FEATURE_ according to what its TSC can do? > > For SEV-SNP, SEV_STATUS MSR has the information of various features > that have been enabled by the hypervisor. We will need a CC_ATTR for > these optional features. If all users of the attribute is withing x86, I would rather add synthetic X86_FEATURE_ flags than CC_ATTR_. We have better instrumentation around features.
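As an illustration of that suggestion (the flag name is hypothetical and would need a new entry in cpufeatures.h), the SEV setup code could derive a synthetic CPU feature from SEV_STATUS and let callers test it with cpu_feature_enabled() instead of a new cc_attr:

	/* Hypothetical synthetic flag, forced on from SEV_STATUS during SNP init */
	if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
		setup_force_cpu_cap(X86_FEATURE_SNP_SECURE_TSC);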
diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c index eeec9986570e..5d5d4d03c543 100644 --- a/arch/x86/coco/core.c +++ b/arch/x86/coco/core.c @@ -89,6 +89,9 @@ static bool noinstr amd_cc_platform_has(enum cc_attr attr) case CC_ATTR_GUEST_SEV_SNP: return sev_status & MSR_AMD64_SEV_SNP_ENABLED; + case CC_ATTR_GUEST_SECURE_TSC: + return sev_status & MSR_AMD64_SNP_SECURE_TSC; + default: return false; } diff --git a/arch/x86/include/asm/sev-guest.h b/arch/x86/include/asm/sev-guest.h index e6f94208173d..58739173eba9 100644 --- a/arch/x86/include/asm/sev-guest.h +++ b/arch/x86/include/asm/sev-guest.h @@ -39,6 +39,8 @@ enum msg_type { SNP_MSG_ABSORB_RSP, SNP_MSG_VMRK_REQ, SNP_MSG_VMRK_RSP, + SNP_MSG_TSC_INFO_REQ = 17, + SNP_MSG_TSC_INFO_RSP, SNP_MSG_TYPE_MAX }; @@ -111,6 +113,22 @@ struct snp_guest_req { u8 msg_type; }; +struct snp_tsc_info_req { +#define SNP_TSC_INFO_REQ_SZ 128 + /* Must be zero filled */ + u8 rsvd[SNP_TSC_INFO_REQ_SZ]; +} __packed; + +struct snp_tsc_info_resp { + /* Status of TSC_INFO message */ + u32 status; + u32 rsvd1; + u64 tsc_scale; + u64 tsc_offset; + u32 tsc_factor; + u8 rsvd2[100]; +} __packed; + int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev); int snp_send_guest_request(struct snp_guest_dev *dev, struct snp_guest_req *req, struct snp_guest_request_ioctl *rio); diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h index 783150458864..038a5a15d937 100644 --- a/arch/x86/include/asm/sev.h +++ b/arch/x86/include/asm/sev.h @@ -200,6 +200,7 @@ void __init __noreturn snp_abort(void); void snp_accept_memory(phys_addr_t start, phys_addr_t end); u64 snp_get_unsupported_features(u64 status); u64 sev_get_status(void); +void __init snp_secure_tsc_prepare(void); #else static inline void sev_es_ist_enter(struct pt_regs *regs) { } static inline void sev_es_ist_exit(void) { } @@ -223,6 +224,7 @@ static inline void snp_abort(void) { } static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { } static inline u64 snp_get_unsupported_features(u64 status) { return 0; } static inline u64 sev_get_status(void) { return 0; } +static inline void __init snp_secure_tsc_prepare(void) { } #endif #endif diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h index 3ac0ffc4f3e2..ee35c0488f56 100644 --- a/arch/x86/include/asm/svm.h +++ b/arch/x86/include/asm/svm.h @@ -414,7 +414,9 @@ struct sev_es_save_area { u8 reserved_0x298[80]; u32 pkru; u32 tsc_aux; - u8 reserved_0x2f0[24]; + u64 tsc_scale; + u64 tsc_offset; + u8 reserved_0x300[8]; u64 rcx; u64 rdx; u64 rbx; @@ -546,7 +548,7 @@ static inline void __unused_size_checks(void) BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x1c0); BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x248); BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x298); - BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x2f0); + BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x300); BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x320); BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x380); BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x3f0); diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index fb3b1feb1b84..9468809d02c7 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -76,6 +76,10 @@ static u64 sev_hv_features __ro_after_init; /* Secrets page physical address from the CC blob */ static u64 secrets_pa __ro_after_init; +/* Secure TSC values read using TSC_INFO SNP Guest request */ +static u64 guest_tsc_scale __ro_after_init; +static u64 guest_tsc_offset __ro_after_init; + /* #VC handler runtime per-CPU data */ 
struct sev_es_runtime_data { struct ghcb ghcb_page; @@ -1393,6 +1397,78 @@ bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id) } EXPORT_SYMBOL_GPL(snp_assign_vmpck); +static struct snp_guest_dev tsc_snp_dev __initdata; + +static int __init snp_get_tsc_info(void) +{ + static u8 buf[SNP_TSC_INFO_REQ_SZ + AUTHTAG_LEN]; + struct snp_guest_request_ioctl rio; + struct snp_tsc_info_resp tsc_resp; + struct snp_tsc_info_req tsc_req; + struct snp_guest_req req; + int rc, resp_len; + + /* + * The intermediate response buffer is used while decrypting the + * response payload. Make sure that it has enough space to cover the + * authtag. + */ + resp_len = sizeof(tsc_resp) + AUTHTAG_LEN; + if (sizeof(buf) < resp_len) + return -EINVAL; + + memset(&tsc_req, 0, sizeof(tsc_req)); + memset(&req, 0, sizeof(req)); + memset(&rio, 0, sizeof(rio)); + memset(buf, 0, sizeof(buf)); + + if (!snp_assign_vmpck(&tsc_snp_dev, 0)) + return -EINVAL; + + /* Initialize the PSP channel to send snp messages */ + if (snp_setup_psp_messaging(&tsc_snp_dev)) + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); + + req.msg_version = MSG_HDR_VER; + req.msg_type = SNP_MSG_TSC_INFO_REQ; + req.vmpck_id = tsc_snp_dev.vmpck_id; + req.req_buf = &tsc_req; + req.req_sz = sizeof(tsc_req); + req.resp_buf = buf; + req.resp_sz = resp_len; + req.exit_code = SVM_VMGEXIT_GUEST_REQUEST; + rc = snp_send_guest_request(&tsc_snp_dev, &req, &rio); + if (rc) + goto err_req; + + memcpy(&tsc_resp, buf, sizeof(tsc_resp)); + pr_debug("%s: Valid response status %x scale %llx offset %llx factor %x\n", + __func__, tsc_resp.status, tsc_resp.tsc_scale, tsc_resp.tsc_offset, + tsc_resp.tsc_factor); + + guest_tsc_scale = tsc_resp.tsc_scale; + guest_tsc_offset = tsc_resp.tsc_offset; + +err_req: + /* The response buffer contains the sensitive data, explicitly clear it. 
*/ + memzero_explicit(buf, sizeof(buf)); + memzero_explicit(&tsc_resp, sizeof(tsc_resp)); + memzero_explicit(&req, sizeof(req)); + + return rc; +} + +void __init snp_secure_tsc_prepare(void) +{ + if (!cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) + return; + + if (snp_get_tsc_info()) + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); + + pr_debug("SecureTSC enabled\n"); +} + static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) { struct sev_es_save_area *cur_vmsa, *vmsa; @@ -1493,6 +1569,12 @@ static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) vmsa->vmpl = 0; vmsa->sev_features = sev_status >> 2; + /* Setting Secure TSC parameters */ + if (cc_platform_has(CC_ATTR_GUEST_SECURE_TSC)) { + vmsa->tsc_scale = guest_tsc_scale; + vmsa->tsc_offset = guest_tsc_offset; + } + /* Switch the page over to a VMSA page now that it is initialized */ ret = snp_set_vmsa(vmsa, true); if (ret) { diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c index 6faea41e99b6..9935fc506e99 100644 --- a/arch/x86/mm/mem_encrypt_amd.c +++ b/arch/x86/mm/mem_encrypt_amd.c @@ -215,6 +215,11 @@ void __init sme_map_bootdata(char *real_mode_data) __sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true); } +void __init amd_enc_init(void) +{ + snp_secure_tsc_prepare(); +} + void __init sev_setup_arch(void) { phys_addr_t total_mem = memblock_phys_mem_size(); @@ -502,6 +507,7 @@ void __init sme_early_init(void) x86_platform.guest.enc_status_change_finish = amd_enc_status_change_finish; x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required; x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required; + x86_platform.guest.enc_init = amd_enc_init; /* * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h index cb0d6cd1c12f..e081ca4d5da2 100644 --- a/include/linux/cc_platform.h +++ b/include/linux/cc_platform.h @@ -90,6 +90,14 @@ enum cc_attr { * Examples include TDX Guest. */ CC_ATTR_HOTPLUG_DISABLED, + + /** + * @CC_ATTR_GUEST_SECURE_TSC: Secure TSC is active. + * + * The platform/OS is running as a guest/virtual machine and actively + * using AMD SEV-SNP Secure TSC feature. + */ + CC_ATTR_GUEST_SECURE_TSC, }; #ifdef CONFIG_ARCH_HAS_CC_PLATFORM
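One arithmetic detail worth spelling out from the svm.h hunk above: the old 24-byte reserved_0x2f0 region covered offsets 0x2f0-0x307 of struct sev_es_save_area, and the two new u64 fields consume the first 16 of those bytes, so only an 8-byte reserved field remains and the reserved-offset build check moves from 0x2f0 to 0x300:

	u64 tsc_scale;			/* offset 0x2f0 */
	u64 tsc_offset;			/* offset 0x2f8 */
	u8 reserved_0x300[8];		/* 0x300 - 0x307; rcx still starts at 0x308 */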