From patchwork Fri Jan 27 11:29:05 2023
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 01/28] arm64: RME: Handle Granule Protection Faults (GPFs)
Date: Fri, 27 Jan 2023 11:29:05 +0000
Message-Id: <20230127112932.38045-2-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>

If the host attempts to access granules that have been delegated for use
in a realm, these accesses will be caught and will trigger a Granule
Protection Fault (GPF). A GPF during a page walk signals a bug in the
kernel and is handled by oopsing the kernel. A non-page-walk fault could
be caused by user space having access to a page which has been delegated
to the kernel; it triggers a SIGBUS so that the reason user space is
touching a delegated page can be debugged.
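The new handlers are selected by indexing the 64-entry fault_info[] table with
the fault status code (FSC) held in the low bits of ESR_ELx, so the change
below claims the previously "unknown" FSC slots 36-40 (0x24-0x28). The
following standalone sketch illustrates that dispatch pattern; it is not
kernel code, and the mask value and handler names are simplified stand-ins.

/*
 * Standalone illustration (not kernel code) of the dispatch pattern used
 * below: a 64-entry handler table indexed by the ESR fault status code.
 * The mask value and handler bodies are simplified assumptions.
 */
#include <stdio.h>

#define ESR_FSC_MASK 0x3fUL	/* assumed: FSC lives in ESR_ELx bits [5:0] */

struct fault_entry {
	const char *name;
	int (*fn)(unsigned long esr);
};

static int handle_unknown(unsigned long esr)
{
	printf("unhandled fault, esr=0x%lx\n", esr);
	return 0;
}

static int handle_gpf_ptw(unsigned long esr)
{
	/* GPF during a page-table walk: a kernel bug, the real code oopses */
	printf("GPF on table walk, esr=0x%lx -> die()\n", esr);
	return 0;
}

static int handle_gpf(unsigned long esr)
{
	/* GPF not on a table walk: the real code raises SIGBUS for debugging */
	printf("GPF on access, esr=0x%lx -> SIGBUS\n", esr);
	return 0;
}

int main(void)
{
	struct fault_entry table[64];
	unsigned long esr = 0x96000025UL;	/* made-up ESR with FSC 0x25 */
	int i;

	for (i = 0; i < 64; i++)
		table[i] = (struct fault_entry){ "unknown", handle_unknown };
	for (i = 0x24; i <= 0x27; i++)		/* GPF at walk level 0..3 */
		table[i] = (struct fault_entry){ "gpf table walk", handle_gpf_ptw };
	table[0x28] = (struct fault_entry){ "gpf", handle_gpf };	/* not on a walk */

	return table[esr & ESR_FSC_MASK].fn(esr);
}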
Signed-off-by: Steven Price --- arch/arm64/mm/fault.c | 29 ++++++++++++++++++++++++----- 1 file changed, 24 insertions(+), 5 deletions(-) diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 596f46dabe4e..fd84be115657 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -756,6 +756,25 @@ static int do_tag_check_fault(unsigned long far, unsigned long esr, return 0; } +static int do_gpf_ptw(unsigned long far, unsigned long esr, struct pt_regs *regs) +{ + const struct fault_info *inf = esr_to_fault_info(esr); + + die_kernel_fault(inf->name, far, esr, regs); + return 0; +} + +static int do_gpf(unsigned long far, unsigned long esr, struct pt_regs *regs) +{ + const struct fault_info *inf = esr_to_fault_info(esr); + + if (!is_el1_instruction_abort(esr) && fixup_exception(regs)) + return 0; + + arm64_notify_die(inf->name, regs, inf->sig, inf->code, far, esr); + return 0; +} + static const struct fault_info fault_info[] = { { do_bad, SIGKILL, SI_KERNEL, "ttbr address size fault" }, { do_bad, SIGKILL, SI_KERNEL, "level 1 address size fault" }, @@ -793,11 +812,11 @@ static const struct fault_info fault_info[] = { { do_alignment_fault, SIGBUS, BUS_ADRALN, "alignment fault" }, { do_bad, SIGKILL, SI_KERNEL, "unknown 34" }, { do_bad, SIGKILL, SI_KERNEL, "unknown 35" }, - { do_bad, SIGKILL, SI_KERNEL, "unknown 36" }, - { do_bad, SIGKILL, SI_KERNEL, "unknown 37" }, - { do_bad, SIGKILL, SI_KERNEL, "unknown 38" }, - { do_bad, SIGKILL, SI_KERNEL, "unknown 39" }, - { do_bad, SIGKILL, SI_KERNEL, "unknown 40" }, + { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level 0" }, + { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level 1" }, + { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level 2" }, + { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level 3" }, + { do_gpf, SIGBUS, SI_KERNEL, "Granule Protection Fault not on table walk" }, { do_bad, SIGKILL, SI_KERNEL, "unknown 41" }, { do_bad, SIGKILL, SI_KERNEL, "unknown 42" }, { do_bad, SIGKILL, SI_KERNEL, "unknown 43" }, From patchwork Fri Jan 27 11:29:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49253 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp782999wrn; Fri, 27 Jan 2023 03:32:45 -0800 (PST) X-Google-Smtp-Source: AK7set8JrepCkLoNNYGNULd14FgbWkwGXFymG7JN/5SM+yxYvOYuA4HsZNZkyKFFV7nGzHtk/7LI X-Received: by 2002:a17:902:f54f:b0:196:44d4:2464 with SMTP id h15-20020a170902f54f00b0019644d42464mr5575139plf.28.1674819164693; Fri, 27 Jan 2023 03:32:44 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819164; cv=none; d=google.com; s=arc-20160816; b=GGKxERgb5HKz81FjZ4YZYNRkPNmv6h5VNgNed43TRTi7lY8tGJf3xh5NSmh+jam3lr zitDVTcySFPa33V48mPcviZtf6Py1pn6IC5FI+N5pdUnQi4ygnJbmUbv4KCTDXTUlmG3 BAJHs/UFMcGd1Nx5GE0Nll4e1Mek8RSm1wLgikPmGmu5xXqTBoK+DcMho7CZejyqeoTD f+ucIwj62+5cb9CU9k4C31JHP+xlFqgHQGM2M1BnqpU4HRhuV1jhzTfnScQ8ys9G53cy fjHh5UeniT3WxLVdzkNMqKlb5i6ufPMXn/MMCYduk3d5+lRh4NQ4RYuNsuyY03ysQ1AR 0UFA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=sNujmiKdeh5yB9PS/ObVxfzsbwNqQ03so74LQD59f+8=; b=v1r7SOHa6PhDq4KmHGHr1YN6UQdloRv9ePTXMCgaaK6bN3UaIU8TYXCM5Xr70P611r 1kIEYGtb73bZX30klv73+EQLLSDQhMPvvugSh/Vos7bFwYRxcrCkQN4uE3ihzyQj9zFd 
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 02/28] arm64: RME: Add SMC definitions for calling the RMM
Date: Fri, 27 Jan 2023 11:29:06 +0000
Message-Id: <20230127112932.38045-3-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>

The RMM (Realm Management Monitor) provides functionality that can be
accessed by SMC calls from the host. The SMC definitions are based on
DEN0137[1] version A-bet0.
[1] https://developer.arm.com/documentation/den0137/latest Signed-off-by: Steven Price --- arch/arm64/include/asm/rmi_smc.h | 235 +++++++++++++++++++++++++++++++ 1 file changed, 235 insertions(+) create mode 100644 arch/arm64/include/asm/rmi_smc.h diff --git a/arch/arm64/include/asm/rmi_smc.h b/arch/arm64/include/asm/rmi_smc.h new file mode 100644 index 000000000000..16ff65090f3a --- /dev/null +++ b/arch/arm64/include/asm/rmi_smc.h @@ -0,0 +1,235 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2023 ARM Ltd. + */ + +#ifndef __ASM_RME_SMC_H +#define __ASM_RME_SMC_H + +#include + +#define SMC_RxI_CALL(func) \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_64, \ + ARM_SMCCC_OWNER_STANDARD, \ + (func)) + +/* FID numbers from alp10 specification */ + +#define SMC_RMI_DATA_CREATE SMC_RxI_CALL(0x0153) +#define SMC_RMI_DATA_CREATE_UNKNOWN SMC_RxI_CALL(0x0154) +#define SMC_RMI_DATA_DESTROY SMC_RxI_CALL(0x0155) +#define SMC_RMI_FEATURES SMC_RxI_CALL(0x0165) +#define SMC_RMI_GRANULE_DELEGATE SMC_RxI_CALL(0x0151) +#define SMC_RMI_GRANULE_UNDELEGATE SMC_RxI_CALL(0x0152) +#define SMC_RMI_PSCI_COMPLETE SMC_RxI_CALL(0x0164) +#define SMC_RMI_REALM_ACTIVATE SMC_RxI_CALL(0x0157) +#define SMC_RMI_REALM_CREATE SMC_RxI_CALL(0x0158) +#define SMC_RMI_REALM_DESTROY SMC_RxI_CALL(0x0159) +#define SMC_RMI_REC_AUX_COUNT SMC_RxI_CALL(0x0167) +#define SMC_RMI_REC_CREATE SMC_RxI_CALL(0x015a) +#define SMC_RMI_REC_DESTROY SMC_RxI_CALL(0x015b) +#define SMC_RMI_REC_ENTER SMC_RxI_CALL(0x015c) +#define SMC_RMI_RTT_CREATE SMC_RxI_CALL(0x015d) +#define SMC_RMI_RTT_DESTROY SMC_RxI_CALL(0x015e) +#define SMC_RMI_RTT_FOLD SMC_RxI_CALL(0x0166) +#define SMC_RMI_RTT_INIT_RIPAS SMC_RxI_CALL(0x0168) +#define SMC_RMI_RTT_MAP_UNPROTECTED SMC_RxI_CALL(0x015f) +#define SMC_RMI_RTT_READ_ENTRY SMC_RxI_CALL(0x0161) +#define SMC_RMI_RTT_SET_RIPAS SMC_RxI_CALL(0x0169) +#define SMC_RMI_RTT_UNMAP_UNPROTECTED SMC_RxI_CALL(0x0162) +#define SMC_RMI_VERSION SMC_RxI_CALL(0x0150) + +#define RMI_ABI_MAJOR_VERSION 1 +#define RMI_ABI_MINOR_VERSION 0 + +#define RMI_UNASSIGNED 0 +#define RMI_DESTROYED 1 +#define RMI_ASSIGNED 2 +#define RMI_TABLE 3 +#define RMI_VALID_NS 4 + +#define RMI_ABI_VERSION_GET_MAJOR(version) ((version) >> 16) +#define RMI_ABI_VERSION_GET_MINOR(version) ((version) & 0xFFFF) + +#define RMI_RETURN_STATUS(ret) ((ret) & 0xFF) +#define RMI_RETURN_INDEX(ret) (((ret) >> 8) & 0xFF) + +#define RMI_SUCCESS 0 +#define RMI_ERROR_INPUT 1 +#define RMI_ERROR_REALM 2 +#define RMI_ERROR_REC 3 +#define RMI_ERROR_RTT 4 +#define RMI_ERROR_IN_USE 5 + +#define RMI_EMPTY 0 +#define RMI_RAM 1 + +#define RMI_NO_MEASURE_CONTENT 0 +#define RMI_MEASURE_CONTENT 1 + +#define RMI_FEATURE_REGISTER_0_S2SZ GENMASK(7, 0) +#define RMI_FEATURE_REGISTER_0_LPA2 BIT(8) +#define RMI_FEATURE_REGISTER_0_SVE_EN BIT(9) +#define RMI_FEATURE_REGISTER_0_SVE_VL GENMASK(13, 10) +#define RMI_FEATURE_REGISTER_0_NUM_BPS GENMASK(17, 14) +#define RMI_FEATURE_REGISTER_0_NUM_WPS GENMASK(21, 18) +#define RMI_FEATURE_REGISTER_0_PMU_EN BIT(22) +#define RMI_FEATURE_REGISTER_0_PMU_NUM_CTRS GENMASK(27, 23) +#define RMI_FEATURE_REGISTER_0_HASH_SHA_256 BIT(28) +#define RMI_FEATURE_REGISTER_0_HASH_SHA_512 BIT(29) + +struct realm_params { + union { + u64 features_0; + u8 padding_1[0x100]; + }; + union { + u8 measurement_algo; + u8 padding_2[0x300]; + }; + union { + u8 rpv[64]; + u8 padding_3[0x400]; + }; + union { + struct { + u16 vmid; + u8 padding_4[6]; + u64 rtt_base; + u64 rtt_level_start; + u32 rtt_num_start; + }; + u8 padding_5[0x800]; + }; +}; + +/* + * The number 
of GPRs (starting from X0) that are + * configured by the host when a REC is created. + */ +#define REC_CREATE_NR_GPRS 8 + +#define REC_PARAMS_FLAG_RUNNABLE BIT_ULL(0) + +#define REC_PARAMS_AUX_GRANULES 16 + +struct rec_params { + union { + u64 flags; + u8 padding1[0x100]; + }; + union { + u64 mpidr; + u8 padding2[0x100]; + }; + union { + u64 pc; + u8 padding3[0x100]; + }; + union { + u64 gprs[REC_CREATE_NR_GPRS]; + u8 padding4[0x500]; + }; + u64 num_rec_aux; + u64 aux[REC_PARAMS_AUX_GRANULES]; +}; + +#define RMI_EMULATED_MMIO BIT(0) +#define RMI_INJECT_SEA BIT(1) +#define RMI_TRAP_WFI BIT(2) +#define RMI_TRAP_WFE BIT(3) + +#define REC_RUN_GPRS 31 +#define REC_GIC_NUM_LRS 16 + +struct rec_entry { + union { /* 0x000 */ + u64 flags; + u8 padding0[0x200]; + }; + union { /* 0x200 */ + u64 gprs[REC_RUN_GPRS]; + u8 padding2[0x100]; + }; + union { /* 0x300 */ + struct { + u64 gicv3_hcr; + u64 gicv3_lrs[REC_GIC_NUM_LRS]; + }; + u8 padding3[0x100]; + }; + u8 padding4[0x400]; +}; + +struct rec_exit { + union { /* 0x000 */ + u8 exit_reason; + u8 padding0[0x100]; + }; + union { /* 0x100 */ + struct { + u64 esr; + u64 far; + u64 hpfar; + }; + u8 padding1[0x100]; + }; + union { /* 0x200 */ + u64 gprs[REC_RUN_GPRS]; + u8 padding2[0x100]; + }; + union { /* 0x300 */ + struct { + u64 gicv3_hcr; + u64 gicv3_lrs[REC_GIC_NUM_LRS]; + u64 gicv3_misr; + u64 gicv3_vmcr; + }; + u8 padding3[0x100]; + }; + union { /* 0x400 */ + struct { + u64 cntp_ctl; + u64 cntp_cval; + u64 cntv_ctl; + u64 cntv_cval; + }; + u8 padding4[0x100]; + }; + union { /* 0x500 */ + struct { + u64 ripas_base; + u64 ripas_size; + u64 ripas_value; /* Only lowest bit */ + }; + u8 padding5[0x100]; + }; + union { /* 0x600 */ + u16 imm; + u8 padding6[0x100]; + }; + union { /* 0x700 */ + struct { + u64 pmu_ovf; + u64 pmu_intr_en; + u64 pmu_cntr_en; + }; + u8 padding7[0x100]; + }; +}; + +struct rec_run { + struct rec_entry entry; + struct rec_exit exit; +}; + +#define RMI_EXIT_SYNC 0x00 +#define RMI_EXIT_IRQ 0x01 +#define RMI_EXIT_FIQ 0x02 +#define RMI_EXIT_PSCI 0x03 +#define RMI_EXIT_RIPAS_CHANGE 0x04 +#define RMI_EXIT_HOST_CALL 0x05 +#define RMI_EXIT_SERROR 0x06 + +#endif From patchwork Fri Jan 27 11:29:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49251 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp782957wrn; Fri, 27 Jan 2023 03:32:40 -0800 (PST) X-Google-Smtp-Source: AMrXdXthqyWfKpel8JVju7l9wVnfTDQu/XFpKzcBBSQ2o1cq50ajKGhqzqaXT31JfMa8LG5UJ1Q5 X-Received: by 2002:a17:90a:7347:b0:226:b52e:f1b8 with SMTP id j7-20020a17090a734700b00226b52ef1b8mr41429604pjs.24.1674819160252; Fri, 27 Jan 2023 03:32:40 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819160; cv=none; d=google.com; s=arc-20160816; b=iDgK44PBMwv6Hi0u82o7bHL+Nk4uS43m9Gehf5A/P/KC78cixh3qm/6rH4ThEafq6W PNseedzCQDWOh4vI3dfo+izNuouMQkBxzPBsdpwbW3B+Iz8K8GhGuMxKiQW6pHWa75ZR 0Ega+4NSFPWcGV+UYPrXfqik4qEfv4xwPkDryV/C+x1l5O2xQIv9VN7lgH7dc0D8z6W/ Xra1IVKod83OwOYhmwhRUSA0cCQh+PRSCUmKgM2WUmQHD7i+ZTXdvjIQxJKdhX+r+VJC XUF9uiCVMMM8p6d2Y5ejvhlgOeboEcMSO8R00eCHwgd5V8HHSRmVTyWZmT6dMgKhr0FH 2YtQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=eIUb1QmquQpZ00KKSF2zQZo1pxt1W+J8qwnYEaI8i3g=; b=cQcuYVDkz/yPvKcT0RLDcg2Ui3sktTbNrrnt1e4AALHnu/qZv808jX3Pvrw6hY/oaJ 
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 03/28] arm64: RME: Add wrappers for RMI calls
Date: Fri, 27 Jan 2023 11:29:07 +0000
Message-Id: <20230127112932.38045-4-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>

The wrappers make the call sites easier to read and deal with the
boilerplate of handling the error codes from the RMM.
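Every wrapper returns the raw X0 value from the RMM, which packs a status code
in the low byte and an index in the next byte (see RMI_RETURN_STATUS and
RMI_RETURN_INDEX in the previous patch). The standalone sketch below shows how
a caller might unpack such a value; the macros are copied from rmi_smc.h, the
example value is made up, and the meaning given to the index (e.g. the failing
RTT level for RMI_ERROR_RTT) is an assumption based on DEN0137.

/*
 * Standalone sketch of unpacking an RMI return value. The packing macros
 * and error codes are copied from rmi_smc.h (previous patch); the example
 * value below is made up.
 */
#include <stdio.h>

#define RMI_RETURN_STATUS(ret)	((ret) & 0xFF)
#define RMI_RETURN_INDEX(ret)	(((ret) >> 8) & 0xFF)

#define RMI_SUCCESS		0
#define RMI_ERROR_INPUT		1
#define RMI_ERROR_REALM		2
#define RMI_ERROR_REC		3
#define RMI_ERROR_RTT		4
#define RMI_ERROR_IN_USE	5

static const char *rmi_status_name(unsigned long ret)
{
	switch (RMI_RETURN_STATUS(ret)) {
	case RMI_SUCCESS:	return "RMI_SUCCESS";
	case RMI_ERROR_INPUT:	return "RMI_ERROR_INPUT";
	case RMI_ERROR_REALM:	return "RMI_ERROR_REALM";
	case RMI_ERROR_REC:	return "RMI_ERROR_REC";
	case RMI_ERROR_RTT:	return "RMI_ERROR_RTT";
	case RMI_ERROR_IN_USE:	return "RMI_ERROR_IN_USE";
	default:		return "unknown status";
	}
}

int main(void)
{
	/* e.g. an RTT command failing with index 2 (assumed: the RTT level) */
	unsigned long ret = (2UL << 8) | RMI_ERROR_RTT;

	printf("status=%s index=%lu\n",
	       rmi_status_name(ret), RMI_RETURN_INDEX(ret));
	return 0;
}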
Signed-off-by: Steven Price --- arch/arm64/include/asm/rmi_cmds.h | 259 ++++++++++++++++++++++++++++++ 1 file changed, 259 insertions(+) create mode 100644 arch/arm64/include/asm/rmi_cmds.h diff --git a/arch/arm64/include/asm/rmi_cmds.h b/arch/arm64/include/asm/rmi_cmds.h new file mode 100644 index 000000000000..d5468ee46f35 --- /dev/null +++ b/arch/arm64/include/asm/rmi_cmds.h @@ -0,0 +1,259 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2023 ARM Ltd. + */ + +#ifndef __ASM_RMI_CMDS_H +#define __ASM_RMI_CMDS_H + +#include + +#include + +struct rtt_entry { + unsigned long walk_level; + unsigned long desc; + int state; + bool ripas; +}; + +static inline int rmi_data_create(unsigned long data, unsigned long rd, + unsigned long map_addr, unsigned long src, + unsigned long flags) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_DATA_CREATE, data, rd, map_addr, src, + flags, &res); + + return res.a0; +} + +static inline int rmi_data_create_unknown(unsigned long data, + unsigned long rd, + unsigned long map_addr) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_DATA_CREATE_UNKNOWN, data, rd, map_addr, + &res); + + return res.a0; +} + +static inline int rmi_data_destroy(unsigned long rd, unsigned long map_addr) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_DATA_DESTROY, rd, map_addr, &res); + + return res.a0; +} + +static inline int rmi_features(unsigned long index, unsigned long *out) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_FEATURES, index, &res); + + *out = res.a1; + return res.a0; +} + +static inline int rmi_granule_delegate(unsigned long phys) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_GRANULE_DELEGATE, phys, &res); + + return res.a0; +} + +static inline int rmi_granule_undelegate(unsigned long phys) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_GRANULE_UNDELEGATE, phys, &res); + + return res.a0; +} + +static inline int rmi_psci_complete(unsigned long calling_rec, + unsigned long target_rec) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_PSCI_COMPLETE, calling_rec, target_rec, + &res); + + return res.a0; +} + +static inline int rmi_realm_activate(unsigned long rd) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_REALM_ACTIVATE, rd, &res); + + return res.a0; +} + +static inline int rmi_realm_create(unsigned long rd, unsigned long params_ptr) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_REALM_CREATE, rd, params_ptr, &res); + + return res.a0; +} + +static inline int rmi_realm_destroy(unsigned long rd) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_REALM_DESTROY, rd, &res); + + return res.a0; +} + +static inline int rmi_rec_aux_count(unsigned long rd, unsigned long *aux_count) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_REC_AUX_COUNT, rd, &res); + + *aux_count = res.a1; + return res.a0; +} + +static inline int rmi_rec_create(unsigned long rec, unsigned long rd, + unsigned long params_ptr) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_REC_CREATE, rec, rd, params_ptr, &res); + + return res.a0; +} + +static inline int rmi_rec_destroy(unsigned long rec) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_REC_DESTROY, rec, &res); + + return res.a0; +} + +static inline int rmi_rec_enter(unsigned long rec, unsigned long run_ptr) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_REC_ENTER, rec, run_ptr, &res); + + return res.a0; +} + +static inline 
int rmi_rtt_create(unsigned long rtt, unsigned long rd, + unsigned long map_addr, unsigned long level) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_RTT_CREATE, rtt, rd, map_addr, level, + &res); + + return res.a0; +} + +static inline int rmi_rtt_destroy(unsigned long rtt, unsigned long rd, + unsigned long map_addr, unsigned long level) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_RTT_DESTROY, rtt, rd, map_addr, level, + &res); + + return res.a0; +} + +static inline int rmi_rtt_fold(unsigned long rtt, unsigned long rd, + unsigned long map_addr, unsigned long level) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_RTT_FOLD, rtt, rd, map_addr, level, &res); + + return res.a0; +} + +static inline int rmi_rtt_init_ripas(unsigned long rd, unsigned long map_addr, + unsigned long level) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_RTT_INIT_RIPAS, rd, map_addr, level, &res); + + return res.a0; +} + +static inline int rmi_rtt_map_unprotected(unsigned long rd, + unsigned long map_addr, + unsigned long level, + unsigned long desc) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_RTT_MAP_UNPROTECTED, rd, map_addr, level, + desc, &res); + + return res.a0; +} + +static inline int rmi_rtt_read_entry(unsigned long rd, unsigned long map_addr, + unsigned long level, struct rtt_entry *rtt) +{ + struct arm_smccc_1_2_regs regs = { + SMC_RMI_RTT_READ_ENTRY, + rd, map_addr, level + }; + + arm_smccc_1_2_smc(®s, ®s); + + rtt->walk_level = regs.a1; + rtt->state = regs.a2 & 0xFF; + rtt->desc = regs.a3; + rtt->ripas = regs.a4 & 1; + + return regs.a0; +} + +static inline int rmi_rtt_set_ripas(unsigned long rd, unsigned long rec, + unsigned long map_addr, unsigned long level, + unsigned long ripas) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_RTT_SET_RIPAS, rd, rec, map_addr, level, + ripas, &res); + + return res.a0; +} + +static inline int rmi_rtt_unmap_unprotected(unsigned long rd, + unsigned long map_addr, + unsigned long level) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RMI_RTT_UNMAP_UNPROTECTED, rd, map_addr, + level, &res); + + return res.a0; +} + +static inline phys_addr_t rmi_rtt_get_phys(struct rtt_entry *rtt) +{ + return rtt->desc & GENMASK(47, 12); +} + +#endif From patchwork Fri Jan 27 11:29:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49254 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783047wrn; Fri, 27 Jan 2023 03:32:52 -0800 (PST) X-Google-Smtp-Source: AK7set+KVMjDJe5eo9mmSiuWGCSxAkA3N+QLHnOhu//ngo7185duIfXuQQOBiThzj9z3q3PvgB7W X-Received: by 2002:a17:90b:1d8a:b0:22c:792:d342 with SMTP id pf10-20020a17090b1d8a00b0022c0792d342mr10151886pjb.26.1674819172395; Fri, 27 Jan 2023 03:32:52 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819172; cv=none; d=google.com; s=arc-20160816; b=bfwuGJswa7+riugx1drLlVpS+a/MYKkq+ZYRyCS2I9JEj5jo6IcQiY8vABElB9+u9r Tu64x+++hBpXkENaMy1p1g6U77LE2gphZdg6eHLbkzyRaKH6oqPTXtiH+6+pxqjmrWn6 tO8md7VxC59zCcto6pGIXkEvkGudrhdbAfG/+5i4HSqgWGfRkdJhGPSh9bpSXg3aKO5P cB8j38ttFuaHxC3Ez6RWY7tQu9W14Xr+UazuTeJK4jYMnejSHVR63ECq6iyNuzldLE7R v0mZ1evq8uUQTyrmlMh1gFxYUxi4vK0jG09GQ+9fNqt5bqkVG++hjTZTkY/QT1z0MgY5 SxVg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version 
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 04/28] arm64: RME: Check for RME support at KVM init
Date: Fri, 27 Jan 2023 11:29:08 +0000
Message-Id: <20230127112932.38045-5-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>

Query the RMI version number and check if it is a compatible version.
A static key is also provided to signal that a supported RMM is available. Functions are provided to query if a VM or VCPU is a realm (or rec) which currently will always return false. Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_emulate.h | 17 ++++++++++ arch/arm64/include/asm/kvm_host.h | 4 +++ arch/arm64/include/asm/kvm_rme.h | 22 +++++++++++++ arch/arm64/include/asm/virt.h | 1 + arch/arm64/kvm/Makefile | 3 +- arch/arm64/kvm/arm.c | 8 +++++ arch/arm64/kvm/rme.c | 49 ++++++++++++++++++++++++++++ 7 files changed, 103 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/include/asm/kvm_rme.h create mode 100644 arch/arm64/kvm/rme.c diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 9bdba47f7e14..5a2b7229e83f 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -490,4 +490,21 @@ static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature) return test_bit(feature, vcpu->arch.features); } +static inline bool kvm_is_realm(struct kvm *kvm) +{ + if (static_branch_unlikely(&kvm_rme_is_available)) + return kvm->arch.is_realm; + return false; +} + +static inline enum realm_state kvm_realm_state(struct kvm *kvm) +{ + return READ_ONCE(kvm->arch.realm.state); +} + +static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu) +{ + return false; +} + #endif /* __ARM64_KVM_EMULATE_H__ */ diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 35a159d131b5..04347c3a8c6b 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -26,6 +26,7 @@ #include #include #include +#include #define __KVM_HAVE_ARCH_INTC_INITIALIZED @@ -240,6 +241,9 @@ struct kvm_arch { * the associated pKVM instance in the hypervisor. */ struct kvm_protected_vm pkvm; + + bool is_realm; + struct realm realm; }; struct kvm_vcpu_fault_info { diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h new file mode 100644 index 000000000000..c26bc2c6770d --- /dev/null +++ b/arch/arm64/include/asm/kvm_rme.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2023 ARM Ltd. 
+ */ + +#ifndef __ASM_KVM_RME_H +#define __ASM_KVM_RME_H + +enum realm_state { + REALM_STATE_NONE, + REALM_STATE_NEW, + REALM_STATE_ACTIVE, + REALM_STATE_DYING +}; + +struct realm { + enum realm_state state; +}; + +int kvm_init_rme(void); + +#endif diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h index 4eb601e7de50..be1383e26626 100644 --- a/arch/arm64/include/asm/virt.h +++ b/arch/arm64/include/asm/virt.h @@ -80,6 +80,7 @@ void __hyp_set_vectors(phys_addr_t phys_vector_base); void __hyp_reset_vectors(void); DECLARE_STATIC_KEY_FALSE(kvm_protected_mode_initialized); +DECLARE_STATIC_KEY_FALSE(kvm_rme_is_available); /* Reports the availability of HYP mode */ static inline bool is_hyp_mode_available(void) diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 5e33c2d4645a..d2f0400c50da 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -20,7 +20,8 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \ vgic/vgic-v3.o vgic/vgic-v4.o \ vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \ vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \ - vgic/vgic-its.o vgic/vgic-debug.o + vgic/vgic-its.o vgic/vgic-debug.o \ + rme.o kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 9c5573bc4614..d97b39d042ab 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -38,6 +38,7 @@ #include #include #include +#include #include #include @@ -47,6 +48,7 @@ static enum kvm_mode kvm_mode = KVM_MODE_DEFAULT; DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized); +DEFINE_STATIC_KEY_FALSE(kvm_rme_is_available); DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector); @@ -2213,6 +2215,12 @@ int kvm_arch_init(void *opaque) in_hyp_mode = is_kernel_in_hyp_mode(); + if (in_hyp_mode) { + err = kvm_init_rme(); + if (err) + return err; + } + if (cpus_have_final_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) || cpus_have_final_cap(ARM64_WORKAROUND_1508412)) kvm_info("Guests without required CPU erratum workarounds can deadlock system!\n" \ diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c new file mode 100644 index 000000000000..f6b587bc116e --- /dev/null +++ b/arch/arm64/kvm/rme.c @@ -0,0 +1,49 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2023 ARM Ltd. 
+ */ + +#include + +#include +#include + +static int rmi_check_version(void) +{ + struct arm_smccc_res res; + int version_major, version_minor; + + arm_smccc_1_1_invoke(SMC_RMI_VERSION, &res); + + if (res.a0 == SMCCC_RET_NOT_SUPPORTED) + return -ENXIO; + + version_major = RMI_ABI_VERSION_GET_MAJOR(res.a0); + version_minor = RMI_ABI_VERSION_GET_MINOR(res.a0); + + if (version_major != RMI_ABI_MAJOR_VERSION) { + kvm_err("Unsupported RMI ABI (version %d.%d) we support %d\n", + version_major, version_minor, + RMI_ABI_MAJOR_VERSION); + return -ENXIO; + } + + kvm_info("RMI ABI version %d.%d\n", version_major, version_minor); + + return 0; +} + +int kvm_init_rme(void) +{ + if (PAGE_SIZE != SZ_4K) + /* Only 4k page size on the host is supported */ + return 0; + + if (rmi_check_version()) + /* Continue without realm support */ + return 0; + + /* Future patch will enable static branch kvm_rme_is_available */ + + return 0; +} From patchwork Fri Jan 27 11:29:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49255 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783118wrn; Fri, 27 Jan 2023 03:33:01 -0800 (PST) X-Google-Smtp-Source: AK7set81UTgSZBakheBt+gNByVBqvFQUCCHJljgk0qbjHtcZmXrxrENxjvzsX48R9q0MnZFfD9bR X-Received: by 2002:a17:902:f550:b0:196:1682:6fe with SMTP id h16-20020a170902f55000b00196168206femr16890403plf.64.1674819181434; Fri, 27 Jan 2023 03:33:01 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819181; cv=none; d=google.com; s=arc-20160816; b=sUz3K2RYWWV9c5WBq4BdZT0jM3zSC93NhySe3MI5peS33/N/RQE0nT7O1PP6NCrWWv Ha6O5Sf3BaWZ53OjYyPmK0zOa4Hn9HI1rCYNIjyeW43VOzem/XD0pHBOpR//voUIeYBD d34GOyktNVid1bhxEsgYMBjPNbYM419yynuqpK/4vqIhPHDVqt3pmmopV6MSxtrdO0sZ ZGpSEh6gXzQaHDXuH1ZX+12h0UxuWe4WmHV8Bu4w56fEzlz39nChnJgWmiLENp4FrgzO bRb5Z4+/jq8TGehVAloV6dhKqYy7RQJknFBZ4SsrDa9Z9QWwlMmeettF3PFfeA+gRLzE DmOA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=lAYefdcE1lEEe71L7njJGKRJ3Lud4QfHf368xnUjN/4=; b=zPO32K9RWm1Sqg+pnn0yHZPf7MEweamAujgUNhstdbmxgJRDgNOarldDtAjmjlvlvF Hu4189obKwdBEF9PWJ5zKGQb/MJYLRac+x27iQT01JFY3Q9Ie6bIe+xrDtvFvLEfyjAt 1BTqFKSYU0nLAH6CosfepvjgZC4WN71kZcNP+YfXALEAeYMvXlCbUCzBeg8sLsZFj5XD Qbs80mAD4Cl+zMpOEyjKC0g+im3gf1regtjGq93Jc8AzoyTAy9wDfolxOpPvO5GkG7IL O6aBFH4dv3bbMcKV5L6Vp4t+Kcrah5ZL9UCCY10abCRO++5hLT0P8+4tvaLFqP16ePNk uU9A== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id s6-20020a170902b18600b001873a81f2d1si4011456plr.87.2023.01.27.03.32.49; Fri, 27 Jan 2023 03:33:01 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233313AbjA0Lc1 (ORCPT + 99 others); Fri, 27 Jan 2023 06:32:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41942 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233574AbjA0LcK (ORCPT ); Fri, 27 Jan 2023 06:32:10 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 427DF80038; Fri, 27 Jan 2023 03:30:37 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DDEF815BF; Fri, 27 Jan 2023 03:30:37 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D62683F64C; Fri, 27 Jan 2023 03:29:53 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 05/28] arm64: RME: Define the user ABI Date: Fri, 27 Jan 2023 11:29:09 +0000 Message-Id: <20230127112932.38045-6-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175197792232594?= X-GMAIL-MSGID: =?utf-8?q?1756175197792232594?= There is one (multiplexed) CAP which can be used to create, populate and then activate the realm. Signed-off-by: Steven Price --- Documentation/virt/kvm/api.rst | 1 + arch/arm64/include/uapi/asm/kvm.h | 63 +++++++++++++++++++++++++++++++ include/uapi/linux/kvm.h | 2 + 3 files changed, 66 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 0dd5d8733dd5..f1a59d6fb7fc 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -4965,6 +4965,7 @@ Recognised values for feature: ===== =========================================== arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE) + arm64 KVM_ARM_VCPU_REC (requires KVM_CAP_ARM_RME) ===== =========================================== Finalizes the configuration of the specified vcpu feature. 
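Ahead of the uapi header additions below, here is a rough sketch of how a VMM
might drive the multiplexed KVM_CAP_ARM_RME capability from user space:
configure the realm, then create the Realm Descriptor before any VCPUs are
created. The constants and the trimmed structure mirror this patch, and the
ordering follows the kvm_realm_enable_cap() handler added later in the series;
the helper itself is illustrative only, and the init-IPA, populate and
activate stages are omitted.

/*
 * Illustrative VMM-side flow only; not part of this patch set. The numbers
 * and structure layout are copied from this patch (the config structure is
 * trimmed to the single member used here).
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define KVM_CAP_ARM_RME				300
#define KVM_CAP_ARM_RME_CONFIG_REALM		0
#define KVM_CAP_ARM_RME_CREATE_RD		1
#define KVM_CAP_ARM_RME_CFG_HASH_ALGO		1
#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256	0

struct kvm_cap_arm_rme_config_item {
	__u32 cfg;
	union {
		struct {
			__u32 hash_algo;	/* cfg == KVM_CAP_ARM_RME_CFG_HASH_ALGO */
		};
		__u8 reserved[256];		/* fix the size of the union */
	};
};

int realm_configure_and_create(int vm_fd)
{
	struct kvm_cap_arm_rme_config_item cfg = {
		.cfg = KVM_CAP_ARM_RME_CFG_HASH_ALGO,
		.hash_algo = KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256,
	};
	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_RME };

	/* Configure the realm; may be repeated for the other config items. */
	cap.args[0] = KVM_CAP_ARM_RME_CONFIG_REALM;
	cap.args[1] = (__u64)(unsigned long)&cfg;
	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		return -1;

	/* Create the Realm Descriptor; must happen before any VCPU exists. */
	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_ARM_RME;
	cap.args[0] = KVM_CAP_ARM_RME_CREATE_RD;
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}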
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index a7a857f1784d..fcc0b8dce29b 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -109,6 +109,7 @@ struct kvm_regs { #define KVM_ARM_VCPU_SVE 4 /* enable SVE for this CPU */ #define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */ #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ +#define KVM_ARM_VCPU_REC 7 /* VCPU REC state as part of Realm */ struct kvm_vcpu_init { __u32 target; @@ -401,6 +402,68 @@ enum { #define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3 #define KVM_DEV_ARM_ITS_CTRL_RESET 4 +/* KVM_CAP_ARM_RME on VM fd */ +#define KVM_CAP_ARM_RME_CONFIG_REALM 0 +#define KVM_CAP_ARM_RME_CREATE_RD 1 +#define KVM_CAP_ARM_RME_INIT_IPA_REALM 2 +#define KVM_CAP_ARM_RME_POPULATE_REALM 3 +#define KVM_CAP_ARM_RME_ACTIVATE_REALM 4 + +#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256 0 +#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA512 1 + +#define KVM_CAP_ARM_RME_RPV_SIZE 64 + +/* List of configuration items accepted for KVM_CAP_ARM_RME_CONFIG_REALM */ +#define KVM_CAP_ARM_RME_CFG_RPV 0 +#define KVM_CAP_ARM_RME_CFG_HASH_ALGO 1 +#define KVM_CAP_ARM_RME_CFG_SVE 2 +#define KVM_CAP_ARM_RME_CFG_DBG 3 +#define KVM_CAP_ARM_RME_CFG_PMU 4 + +struct kvm_cap_arm_rme_config_item { + __u32 cfg; + union { + /* cfg == KVM_CAP_ARM_RME_CFG_RPV */ + struct { + __u8 rpv[KVM_CAP_ARM_RME_RPV_SIZE]; + }; + + /* cfg == KVM_CAP_ARM_RME_CFG_HASH_ALGO */ + struct { + __u32 hash_algo; + }; + + /* cfg == KVM_CAP_ARM_RME_CFG_SVE */ + struct { + __u32 sve_vq; + }; + + /* cfg == KVM_CAP_ARM_RME_CFG_DBG */ + struct { + __u32 num_brps; + __u32 num_wrps; + }; + + /* cfg == KVM_CAP_ARM_RME_CFG_PMU */ + struct { + __u32 num_pmu_cntrs; + }; + /* Fix the size of the union */ + __u8 reserved[256]; + }; +}; + +struct kvm_cap_arm_rme_populate_realm_args { + __u64 populate_ipa_base; + __u64 populate_ipa_size; +}; + +struct kvm_cap_arm_rme_init_ipa_args { + __u64 init_ipa_base; + __u64 init_ipa_size; +}; + /* Device Control API on vcpu fd */ #define KVM_ARM_VCPU_PMU_V3_CTRL 0 #define KVM_ARM_VCPU_PMU_V3_IRQ 0 diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 20522d4ba1e0..fec1909e8b73 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1176,6 +1176,8 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224 #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225 +#define KVM_CAP_ARM_RME 300 // FIXME: Large number to prevent conflicts + #ifdef KVM_CAP_IRQ_ROUTING struct kvm_irq_routing_irqchip { From patchwork Fri Jan 27 11:29:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49256 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783152wrn; Fri, 27 Jan 2023 03:33:05 -0800 (PST) X-Google-Smtp-Source: AK7set9rrJe9diZBzVaVtGJfa7HfI33OO2YfGZjxfWd4DcrllR20JH/w/btRAxa/bcDP6Div9Ptn X-Received: by 2002:a17:902:db08:b0:196:58ac:c80b with SMTP id m8-20020a170902db0800b0019658acc80bmr1390151plx.29.1674819185266; Fri, 27 Jan 2023 03:33:05 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819185; cv=none; d=google.com; s=arc-20160816; b=g+RQbIr3FIv31XBukuGpGsO1YpH9RcoDJgcAnhoN9UI+6A95AKQnkx8QUlsh3LTdMy hNkyFp7+KZ8IpRctD8zRTi/MT9jK3+48ie8ZCNzAAGEpzNn8nKHYv+PRpoeAcYWUIMVj 7b674Agig60/U5tvAv9+YHipw6lFAXapwpsDlaqoTqwMi4TIF7xgTxrUC6awySvqMiEF 
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 06/28] arm64: RME: ioctls to create and configure realms
Date: Fri, 27 Jan 2023 11:29:10 +0000
Message-Id: <20230127112932.38045-7-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175202120937891?= X-GMAIL-MSGID: =?utf-8?q?1756175202120937891?= Add the KVM_CAP_ARM_RME_CREATE_FD ioctl to create a realm. This involves delegating pages to the RMM to hold the Realm Descriptor (RD) and for the base level of the Realm Translation Tables (RTT). A VMID also need to be picked, since the RMM has a separate VMID address space a dedicated allocator is added for this purpose. KVM_CAP_ARM_RME_CONFIG_REALM is provided to allow configuring the realm before it is created. Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_rme.h | 14 ++ arch/arm64/kvm/arm.c | 19 ++ arch/arm64/kvm/mmu.c | 6 + arch/arm64/kvm/reset.c | 33 +++ arch/arm64/kvm/rme.c | 357 +++++++++++++++++++++++++++++++ 5 files changed, 429 insertions(+) diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h index c26bc2c6770d..055a22accc08 100644 --- a/arch/arm64/include/asm/kvm_rme.h +++ b/arch/arm64/include/asm/kvm_rme.h @@ -6,6 +6,8 @@ #ifndef __ASM_KVM_RME_H #define __ASM_KVM_RME_H +#include + enum realm_state { REALM_STATE_NONE, REALM_STATE_NEW, @@ -15,8 +17,20 @@ enum realm_state { struct realm { enum realm_state state; + + void *rd; + struct realm_params *params; + + unsigned long num_aux; + unsigned int vmid; + unsigned int ia_bits; }; int kvm_init_rme(void); +u32 kvm_realm_ipa_limit(void); + +int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap); +int kvm_init_realm_vm(struct kvm *kvm); +void kvm_destroy_realm(struct kvm *kvm); #endif diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index d97b39d042ab..50f54a63732a 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -103,6 +103,13 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, r = 0; set_bit(KVM_ARCH_FLAG_SYSTEM_SUSPEND_ENABLED, &kvm->arch.flags); break; + case KVM_CAP_ARM_RME: + if (!static_branch_unlikely(&kvm_rme_is_available)) + return -EINVAL; + mutex_lock(&kvm->lock); + r = kvm_realm_enable_cap(kvm, cap); + mutex_unlock(&kvm->lock); + break; default: r = -EINVAL; break; @@ -172,6 +179,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) */ kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit(); + /* Initialise the realm bits after the generic bits are enabled */ + if (kvm_is_realm(kvm)) { + ret = kvm_init_realm_vm(kvm); + if (ret) + goto err_free_cpumask; + } + return 0; err_free_cpumask: @@ -204,6 +218,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm) kvm_destroy_vcpus(kvm); kvm_unshare_hyp(kvm, kvm + 1); + + kvm_destroy_realm(kvm); } int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) @@ -300,6 +316,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_PTRAUTH_GENERIC: r = system_has_full_ptr_auth(); break; + case KVM_CAP_ARM_RME: + r = static_key_enabled(&kvm_rme_is_available); + break; default: r = 0; } diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 31d7fa4c7c14..d0f707767d05 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -840,6 +840,12 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu) struct kvm_pgtable *pgt = NULL; write_lock(&kvm->mmu_lock); + if (kvm_is_realm(kvm) && + kvm_realm_state(kvm) != REALM_STATE_DYING) { + /* TODO: teardown rtts */ + write_unlock(&kvm->mmu_lock); + return; + } pgt = mmu->pgt; if (pgt) { mmu->pgd_phys = 0; diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index e0267f672b8a..c165df174737 100644 --- a/arch/arm64/kvm/reset.c +++ 
b/arch/arm64/kvm/reset.c @@ -395,3 +395,36 @@ int kvm_set_ipa_limit(void) return 0; } + +int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type) +{ + u64 mmfr0, mmfr1; + u32 phys_shift; + u32 ipa_limit = kvm_ipa_limit; + + if (kvm_is_realm(kvm)) + ipa_limit = kvm_realm_ipa_limit(); + + if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK) + return -EINVAL; + + phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type); + if (phys_shift) { + if (phys_shift > ipa_limit || + phys_shift < ARM64_MIN_PARANGE_BITS) + return -EINVAL; + } else { + phys_shift = KVM_PHYS_SHIFT; + if (phys_shift > ipa_limit) { + pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n", + current->comm); + return -EINVAL; + } + } + + mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); + mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); + kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift); + + return 0; +} diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index f6b587bc116e..9f8c5a91b8fc 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -5,9 +5,49 @@ #include +#include +#include #include #include +/************ FIXME: Copied from kvm/hyp/pgtable.c **********/ +#include + +struct kvm_pgtable_walk_data { + struct kvm_pgtable *pgt; + struct kvm_pgtable_walker *walker; + + u64 addr; + u64 end; +}; + +static u32 __kvm_pgd_page_idx(struct kvm_pgtable *pgt, u64 addr) +{ + u64 shift = kvm_granule_shift(pgt->start_level - 1); /* May underflow */ + u64 mask = BIT(pgt->ia_bits) - 1; + + return (addr & mask) >> shift; +} + +static u32 kvm_pgd_pages(u32 ia_bits, u32 start_level) +{ + struct kvm_pgtable pgt = { + .ia_bits = ia_bits, + .start_level = start_level, + }; + + return __kvm_pgd_page_idx(&pgt, -1ULL) + 1; +} + +/******************/ + +static unsigned long rmm_feat_reg0; + +static bool rme_supports(unsigned long feature) +{ + return !!u64_get_bits(rmm_feat_reg0, feature); +} + static int rmi_check_version(void) { struct arm_smccc_res res; @@ -33,8 +73,319 @@ static int rmi_check_version(void) return 0; } +static unsigned long create_realm_feat_reg0(struct kvm *kvm) +{ + unsigned long ia_bits = VTCR_EL2_IPA(kvm->arch.vtcr); + u64 feat_reg0 = 0; + + int num_bps = u64_get_bits(rmm_feat_reg0, + RMI_FEATURE_REGISTER_0_NUM_BPS); + int num_wps = u64_get_bits(rmm_feat_reg0, + RMI_FEATURE_REGISTER_0_NUM_WPS); + + feat_reg0 |= u64_encode_bits(ia_bits, RMI_FEATURE_REGISTER_0_S2SZ); + feat_reg0 |= u64_encode_bits(num_bps, RMI_FEATURE_REGISTER_0_NUM_BPS); + feat_reg0 |= u64_encode_bits(num_wps, RMI_FEATURE_REGISTER_0_NUM_WPS); + + return feat_reg0; +} + +u32 kvm_realm_ipa_limit(void) +{ + return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_S2SZ); +} + +static u32 get_start_level(struct kvm *kvm) +{ + long sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, kvm->arch.vtcr); + + return VTCR_EL2_TGRAN_SL0_BASE - sl0; +} + +static int realm_create_rd(struct kvm *kvm) +{ + struct realm *realm = &kvm->arch.realm; + struct realm_params *params = realm->params; + void *rd = NULL; + phys_addr_t rd_phys, params_phys; + struct kvm_pgtable *pgt = kvm->arch.mmu.pgt; + unsigned int pgd_sz; + int i, r; + + if (WARN_ON(realm->rd) || WARN_ON(!realm->params)) + return -EEXIST; + + rd = (void *)__get_free_page(GFP_KERNEL); + if (!rd) + return -ENOMEM; + + rd_phys = virt_to_phys(rd); + if (rmi_granule_delegate(rd_phys)) { + r = -ENXIO; + goto out; + } + + pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level); + for (i = 0; i < pgd_sz; i++) { + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i * PAGE_SIZE; + + if 
(rmi_granule_delegate(pgd_phys)) { + r = -ENXIO; + goto out_undelegate_tables; + } + } + + params->rtt_level_start = get_start_level(kvm); + params->rtt_num_start = pgd_sz; + params->rtt_base = kvm->arch.mmu.pgd_phys; + params->vmid = realm->vmid; + + params_phys = virt_to_phys(params); + + if (rmi_realm_create(rd_phys, params_phys)) { + r = -ENXIO; + goto out_undelegate_tables; + } + + realm->rd = rd; + realm->ia_bits = VTCR_EL2_IPA(kvm->arch.vtcr); + + if (WARN_ON(rmi_rec_aux_count(rd_phys, &realm->num_aux))) { + WARN_ON(rmi_realm_destroy(rd_phys)); + goto out_undelegate_tables; + } + + return 0; + +out_undelegate_tables: + while (--i >= 0) { + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i * PAGE_SIZE; + + WARN_ON(rmi_granule_undelegate(pgd_phys)); + } + WARN_ON(rmi_granule_undelegate(rd_phys)); +out: + free_page((unsigned long)rd); + return r; +} + +/* Protects access to rme_vmid_bitmap */ +static DEFINE_SPINLOCK(rme_vmid_lock); +static unsigned long *rme_vmid_bitmap; + +static int rme_vmid_init(void) +{ + unsigned int vmid_count = 1 << kvm_get_vmid_bits(); + + rme_vmid_bitmap = bitmap_zalloc(vmid_count, GFP_KERNEL); + if (!rme_vmid_bitmap) { + kvm_err("%s: Couldn't allocate rme vmid bitmap\n", __func__); + return -ENOMEM; + } + + return 0; +} + +static int rme_vmid_reserve(void) +{ + int ret; + unsigned int vmid_count = 1 << kvm_get_vmid_bits(); + + spin_lock(&rme_vmid_lock); + ret = bitmap_find_free_region(rme_vmid_bitmap, vmid_count, 0); + spin_unlock(&rme_vmid_lock); + + return ret; +} + +static void rme_vmid_release(unsigned int vmid) +{ + spin_lock(&rme_vmid_lock); + bitmap_release_region(rme_vmid_bitmap, vmid, 0); + spin_unlock(&rme_vmid_lock); +} + +static int kvm_create_realm(struct kvm *kvm) +{ + struct realm *realm = &kvm->arch.realm; + int ret; + + if (!kvm_is_realm(kvm) || kvm_realm_state(kvm) != REALM_STATE_NONE) + return -EEXIST; + + ret = rme_vmid_reserve(); + if (ret < 0) + return ret; + realm->vmid = ret; + + ret = realm_create_rd(kvm); + if (ret) { + rme_vmid_release(realm->vmid); + return ret; + } + + WRITE_ONCE(realm->state, REALM_STATE_NEW); + + /* The realm is up, free the parameters. 
*/ + free_page((unsigned long)realm->params); + realm->params = NULL; + + return 0; +} + +static int config_realm_hash_algo(struct realm *realm, + struct kvm_cap_arm_rme_config_item *cfg) +{ + switch (cfg->hash_algo) { + case KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256: + if (!rme_supports(RMI_FEATURE_REGISTER_0_HASH_SHA_256)) + return -EINVAL; + break; + case KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA512: + if (!rme_supports(RMI_FEATURE_REGISTER_0_HASH_SHA_512)) + return -EINVAL; + break; + default: + return -EINVAL; + } + realm->params->measurement_algo = cfg->hash_algo; + return 0; +} + +static int config_realm_sve(struct realm *realm, + struct kvm_cap_arm_rme_config_item *cfg) +{ + u64 features_0 = realm->params->features_0; + int max_sve_vq = u64_get_bits(rmm_feat_reg0, + RMI_FEATURE_REGISTER_0_SVE_VL); + + if (!rme_supports(RMI_FEATURE_REGISTER_0_SVE_EN)) + return -EINVAL; + + if (cfg->sve_vq > max_sve_vq) + return -EINVAL; + + features_0 &= ~(RMI_FEATURE_REGISTER_0_SVE_EN | + RMI_FEATURE_REGISTER_0_SVE_VL); + features_0 |= u64_encode_bits(1, RMI_FEATURE_REGISTER_0_SVE_EN); + features_0 |= u64_encode_bits(cfg->sve_vq, + RMI_FEATURE_REGISTER_0_SVE_VL); + + realm->params->features_0 = features_0; + return 0; +} + +static int kvm_rme_config_realm(struct kvm *kvm, struct kvm_enable_cap *cap) +{ + struct kvm_cap_arm_rme_config_item cfg; + struct realm *realm = &kvm->arch.realm; + int r = 0; + + if (kvm_realm_state(kvm) != REALM_STATE_NONE) + return -EBUSY; + + if (copy_from_user(&cfg, (void __user *)cap->args[1], sizeof(cfg))) + return -EFAULT; + + switch (cfg.cfg) { + case KVM_CAP_ARM_RME_CFG_RPV: + memcpy(&realm->params->rpv, &cfg.rpv, sizeof(cfg.rpv)); + break; + case KVM_CAP_ARM_RME_CFG_HASH_ALGO: + r = config_realm_hash_algo(realm, &cfg); + break; + case KVM_CAP_ARM_RME_CFG_SVE: + r = config_realm_sve(realm, &cfg); + break; + default: + r = -EINVAL; + } + + return r; +} + +int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap) +{ + int r = 0; + + switch (cap->args[0]) { + case KVM_CAP_ARM_RME_CONFIG_REALM: + r = kvm_rme_config_realm(kvm, cap); + break; + case KVM_CAP_ARM_RME_CREATE_RD: + if (kvm->created_vcpus) { + r = -EBUSY; + break; + } + + r = kvm_create_realm(kvm); + break; + default: + r = -EINVAL; + break; + } + + return r; +} + +void kvm_destroy_realm(struct kvm *kvm) +{ + struct realm *realm = &kvm->arch.realm; + struct kvm_pgtable *pgt = kvm->arch.mmu.pgt; + unsigned int pgd_sz; + int i; + + if (realm->params) { + free_page((unsigned long)realm->params); + realm->params = NULL; + } + + if (kvm_realm_state(kvm) == REALM_STATE_NONE) + return; + + WRITE_ONCE(realm->state, REALM_STATE_DYING); + + rme_vmid_release(realm->vmid); + + if (realm->rd) { + phys_addr_t rd_phys = virt_to_phys(realm->rd); + + if (WARN_ON(rmi_realm_destroy(rd_phys))) + return; + if (WARN_ON(rmi_granule_undelegate(rd_phys))) + return; + free_page((unsigned long)realm->rd); + realm->rd = NULL; + } + + pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level); + for (i = 0; i < pgd_sz; i++) { + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i * PAGE_SIZE; + + if (WARN_ON(rmi_granule_undelegate(pgd_phys))) + return; + } + + kvm_free_stage2_pgd(&kvm->arch.mmu); +} + +int kvm_init_realm_vm(struct kvm *kvm) +{ + struct realm_params *params; + + params = (struct realm_params *)get_zeroed_page(GFP_KERNEL); + if (!params) + return -ENOMEM; + + params->features_0 = create_realm_feat_reg0(kvm); + kvm->arch.realm.params = params; + return 0; +} + int kvm_init_rme(void) { + int ret; + if (PAGE_SIZE != SZ_4K) /* Only 
4k page size on the host is supported */ return 0; @@ -43,6 +394,12 @@ int kvm_init_rme(void) /* Continue without realm support */ return 0; + ret = rme_vmid_init(); + if (ret) + return ret; + + WARN_ON(rmi_features(0, &rmm_feat_reg0)); + /* Future patch will enable static branch kvm_rme_is_available */ return 0; From patchwork Fri Jan 27 11:29:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49258 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783207wrn; Fri, 27 Jan 2023 03:33:13 -0800 (PST) X-Google-Smtp-Source: AK7set8mubbrIZI/d9nQqJZdjz03o/2j5K25TtJI+UDerPiwcKv3mkYUxOC76GVPKc4BLOHPzNRF X-Received: by 2002:a05:6a21:e38d:b0:b8:65b8:a37 with SMTP id cc13-20020a056a21e38d00b000b865b80a37mr6060438pzc.53.1674819193187; Fri, 27 Jan 2023 03:33:13 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819193; cv=none; d=google.com; s=arc-20160816; b=PNY3ux4ocC3zBMqr1VaZjcvwUZ4gd4HLpIPaigVOI4KPzzuKIPIZ5zVQbtmMmiHbCk pkmIQwMcYlLvl/s8KyK1i442swQe8WkhWNjib79G9FELK4deeD8dOIqQYxHIY/qp8B8Q jIHeB8/YxkiW6LCX/+poroUgLWaLmf5j/Y5K26FxosWSlJqLwtQd4r72MmnpIBanD3F8 bA10cMiHa5bZb60fx+XGMQeUhGRbW9Madx1NzuESJAJAkJdCp9cgOnVURmX7P87vwGL7 DOA3BJRzQ/Xamg7Q9z7an3e24DNw3P/Ebzy/rPJnV+WoK6h1KRZ/jyXwCrdsD2gZOtJ/ 5w6w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=Q8tEUtKwpLtOp+Esayq/leVKCO7Pl2AtdQWfyPLKwOk=; b=xT6NtCipm3sxccJWVeiefpqWpVPu3mKtnyC9GhFIN+ATQ5e34ykLCtDEHwl6y9Z5T2 YKcj+KCBjYnYnSYCbrIYc5dW0xMvBlkCw5zFyloV+5NtZmfU+YUuOnsRwv6LmkqBltuq etOufQh42thM5MQba1mwZjm+3ifS/stXRrsvgoxxurx5rUtzqyrrlIkuJ2LkJ+mumfN4 yfr/ET9eHO36mw3nLv2ds0vK+kpuz78TsxCFOyMdrcmlc42Ay7wn1yP27nx8VPIFmtK9 VCW6xOYKSG2DkCyKFaoyfkY6i9JPS05UWxlU159lOCro6Lf0f0G9nHO2vDY3qQJifYhw ctjw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id a8-20020aa79708000000b00587be58a022si4329537pfg.1.2023.01.27.03.33.00; Fri, 27 Jan 2023 03:33:13 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233572AbjA0Lcj (ORCPT + 99 others); Fri, 27 Jan 2023 06:32:39 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44796 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233356AbjA0LcU (ORCPT ); Fri, 27 Jan 2023 06:32:20 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id BA15F7C330; Fri, 27 Jan 2023 03:30:51 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 415511650; Fri, 27 Jan 2023 03:30:43 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 125F63F64C; Fri, 27 Jan 2023 03:29:58 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 07/28] arm64: kvm: Allow passing machine type in KVM creation Date: Fri, 27 Jan 2023 11:29:11 +0000 Message-Id: <20230127112932.38045-8-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175209984268299?= X-GMAIL-MSGID: =?utf-8?q?1756175209984268299?= Previously machine type was used purely for specifying the physical address size of the guest. Reserve the higher bits to specify an ARM specific machine type and declare a new type 'KVM_VM_TYPE_ARM_REALM' used to create a realm guest. 
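[Editorial note: an illustrative VMM-side sketch, not part of the patch, showing how the new machine type might be combined with the KVM_CAP_ARM_RME controls introduced earlier in this series. KVM_VM_TYPE_ARM_REALM comes from the uapi hunk below; KVM_CAP_ARM_RME and KVM_CAP_ARM_RME_CREATE_RD come from the series' other uapi additions; the 40-bit IPA size is only an example value.]

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int create_realm_vm(int kvm_fd)
{
	/* Bits[11:8] select the machine type, bits[7:0] the IPA size shift. */
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_VM_TYPE_ARM_REALM | 40);
	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_RME };

	if (vm_fd < 0)
		return vm_fd;

	/* KVM_CAP_ARM_RME_CONFIG_REALM calls (RPV, hash algo, SVE) go here. */

	/* Create the Realm Descriptor before any vCPUs are created. */
	cap.args[0] = KVM_CAP_ARM_RME_CREATE_RD;
	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		return -1;

	return vm_fd;
}

Ordering matters here: the kvm_realm_enable_cap() hunk in the previous patch rejects KVM_CAP_ARM_RME_CREATE_RD with -EBUSY once any vCPU exists.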
Signed-off-by: Steven Price --- arch/arm64/kvm/arm.c | 13 +++++++++++++ arch/arm64/kvm/mmu.c | 3 --- arch/arm64/kvm/reset.c | 3 --- include/uapi/linux/kvm.h | 19 +++++++++++++++---- 4 files changed, 28 insertions(+), 10 deletions(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 50f54a63732a..badd775547b8 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -147,6 +147,19 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) { int ret; + if (type & ~(KVM_VM_TYPE_ARM_MASK | KVM_VM_TYPE_ARM_IPA_SIZE_MASK)) + return -EINVAL; + + switch (type & KVM_VM_TYPE_ARM_MASK) { + case KVM_VM_TYPE_ARM_NORMAL: + break; + case KVM_VM_TYPE_ARM_REALM: + kvm->arch.is_realm = true; + break; + default: + return -EINVAL; + } + ret = kvm_share_hyp(kvm, kvm + 1); if (ret) return ret; diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index d0f707767d05..22c00274884a 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -709,9 +709,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t u64 mmfr0, mmfr1; u32 phys_shift; - if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK) - return -EINVAL; - phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type); if (is_protected_kvm_enabled()) { phys_shift = kvm_ipa_limit; diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index c165df174737..9e71d69e051f 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -405,9 +405,6 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type) if (kvm_is_realm(kvm)) ipa_limit = kvm_realm_ipa_limit(); - if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK) - return -EINVAL; - phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type); if (phys_shift) { if (phys_shift > ipa_limit || diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index fec1909e8b73..bcfc4d58dc19 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -898,14 +898,25 @@ struct kvm_ppc_resize_hpt { #define KVM_S390_SIE_PAGE_OFFSET 1 /* - * On arm64, machine type can be used to request the physical - * address size for the VM. Bits[7-0] are reserved for the guest - * PA size shift (i.e, log2(PA_Size)). For backward compatibility, - * value 0 implies the default IPA size, 40bits. + * On arm64, machine type can be used to request both the machine type and + * the physical address size for the VM. + * + * Bits[11-8] are reserved for the ARM specific machine type. + * + * Bits[7-0] are reserved for the guest PA size shift (i.e, log2(PA_Size)). + * For backward compatibility, value 0 implies the default IPA size, 40bits. 
*/ +#define KVM_VM_TYPE_ARM_SHIFT 8 +#define KVM_VM_TYPE_ARM_MASK (0xfULL << KVM_VM_TYPE_ARM_SHIFT) +#define KVM_VM_TYPE_ARM(_type) \ + (((_type) << KVM_VM_TYPE_ARM_SHIFT) & KVM_VM_TYPE_ARM_MASK) +#define KVM_VM_TYPE_ARM_NORMAL KVM_VM_TYPE_ARM(0) +#define KVM_VM_TYPE_ARM_REALM KVM_VM_TYPE_ARM(1) + #define KVM_VM_TYPE_ARM_IPA_SIZE_MASK 0xffULL #define KVM_VM_TYPE_ARM_IPA_SIZE(x) \ ((x) & KVM_VM_TYPE_ARM_IPA_SIZE_MASK) + /* * ioctls for /dev/kvm fds: */ From patchwork Fri Jan 27 11:29:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49260 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783341wrn; Fri, 27 Jan 2023 03:33:32 -0800 (PST) X-Google-Smtp-Source: AK7set8E7M4tz0dxYq2SaayI0sNxq8Fd7U8FMmxjGGRrnW4QQ3n7b1+gvmTCkadxOp/IzTYv845p X-Received: by 2002:a17:903:18c:b0:196:4624:1691 with SMTP id z12-20020a170903018c00b0019646241691mr4971381plg.54.1674819212591; Fri, 27 Jan 2023 03:33:32 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819212; cv=none; d=google.com; s=arc-20160816; b=TwrFKno4bPX0GsqWPzMh7KaOiU9gg+9DMIkb4iJYPExm0zPPnxeT/DwChyjA0eVBNw iyLq8auKjxtEBfC3tzDVlV48r29FKZh2amTrGcsZigMQwxyhB0mX93hS4bk35jpTPYkx itjcj+WwFOIY8nRb27eqN427b8i3+8+sjAgUPOGuhxdWSjlLbGXW2/BD2rtknRLnV2Gv OaLkuvwpgkweP7GX9oF2MISI0mZLerd+ATC6BiL3DaqYUzycVYTL4nAhbNTYNGKXoz2Z /WBLXPkHoRTlsWpYFeVmWfS5HlJ1fuVfgDV3CoNZeh3WfPyiJWz4I3VgBGbdSphX6grx Jwjg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=2I2n08TofPGijBSH9wVqYCWX98/c9uubddafT2lBccI=; b=T0X0TeZ/Zj8RREOf/7dT9fxa4V3nqTWWAihvVMpMcCxbxWP87LgJgwHWiIOCLa2Z74 P+pMU3M9Nro4UwPYoQHprcoLZP60iDTKrHQVwBCmdOTr1vf4/cIOOI2asx5uZdlt7V6y lPEM21NoPztHrDxz4ITENVBk8XmMCvsM52fFf0+S46+6sjH0lI3UnvzZ3k2H43qUk91u kTqjgD/E54gZBYCxQ94R+th3wanS+datfJsTIhKd7ZJB2jK1peH0KGdRlVw86A9E+SZr aYDol6/F18Adlu0AvxWUZkHglVXvgWx4xZ/oQsMEfj4UJKOVisUK/itcfXVergalWwFM 7SSA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id w11-20020a170902d3cb00b0019488aa379asi4494527plb.181.2023.01.27.03.33.19; Fri, 27 Jan 2023 03:33:32 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233604AbjA0Lcv (ORCPT + 99 others); Fri, 27 Jan 2023 06:32:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43374 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231928AbjA0LcZ (ORCPT ); Fri, 27 Jan 2023 06:32:25 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 90DB17D28F; Fri, 27 Jan 2023 03:30:56 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C1A3E165C; Fri, 27 Jan 2023 03:30:45 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 912C53F64C; Fri, 27 Jan 2023 03:30:01 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 08/28] arm64: RME: Keep a spare page delegated to the RMM Date: Fri, 27 Jan 2023 11:29:12 +0000 Message-Id: <20230127112932.38045-9-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175230358472282?= X-GMAIL-MSGID: =?utf-8?q?1756175230358472282?= Pages can only be populated/destroyed on the RMM at the 4KB granule, this requires creating the full depth of RTTs. However if the pages are going to be combined into a 4MB huge page the last RTT is only temporarily needed. Similarly when freeing memory the huge page must be temporarily split requiring temporary usage of the full depth oF RTTs. To avoid needing to perform a temporary allocation and delegation of a page for this purpose we keep a spare delegated page around. In particular this avoids the need for memory allocation while destroying the realm guest. 
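[Editorial note: for illustration only, a minimal sketch of how such a spare page could be allocated and delegated on demand, using the rmi_granule_delegate() helper already used in this series. The actual population of spare_page happens in a later patch of the series and may differ; this helper is hypothetical.]

/* Hypothetical helper, not from the patch: ensure a delegated spare granule. */
static int realm_ensure_spare_page(struct realm *realm)
{
	struct page *page;
	phys_addr_t phys;

	if (realm->spare_page != PHYS_ADDR_MAX)
		return 0;	/* Already populated. */

	page = alloc_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	phys = page_to_phys(page);
	if (rmi_granule_delegate(phys)) {
		__free_page(page);
		return -ENXIO;
	}

	realm->spare_page = phys;
	return 0;
}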
Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_rme.h | 3 +++ arch/arm64/kvm/rme.c | 6 ++++++ 2 files changed, 9 insertions(+) diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h index 055a22accc08..a6318af3ed11 100644 --- a/arch/arm64/include/asm/kvm_rme.h +++ b/arch/arm64/include/asm/kvm_rme.h @@ -21,6 +21,9 @@ struct realm { void *rd; struct realm_params *params; + /* A spare already delegated page */ + phys_addr_t spare_page; + unsigned long num_aux; unsigned int vmid; unsigned int ia_bits; diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index 9f8c5a91b8fc..0c9d70e4d9e6 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -148,6 +148,7 @@ static int realm_create_rd(struct kvm *kvm) } realm->rd = rd; + realm->spare_page = PHYS_ADDR_MAX; realm->ia_bits = VTCR_EL2_IPA(kvm->arch.vtcr); if (WARN_ON(rmi_rec_aux_count(rd_phys, &realm->num_aux))) { @@ -357,6 +358,11 @@ void kvm_destroy_realm(struct kvm *kvm) free_page((unsigned long)realm->rd); realm->rd = NULL; } + if (realm->spare_page != PHYS_ADDR_MAX) { + if (!WARN_ON(rmi_granule_undelegate(realm->spare_page))) + free_page((unsigned long)phys_to_virt(realm->spare_page)); + realm->spare_page = PHYS_ADDR_MAX; + } pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level); for (i = 0; i < pgd_sz; i++) { From patchwork Fri Jan 27 11:29:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49356 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp810089wrn; Fri, 27 Jan 2023 04:38:25 -0800 (PST) X-Google-Smtp-Source: AK7set9+oW92JJsQ4INhFaayaf4SC4FS4s0gABwMRM0fcVNYJWUfgBgdIWhbH+gs+KW4O0MZzFKl X-Received: by 2002:a17:906:3fca:b0:87a:dadd:c5aa with SMTP id k10-20020a1709063fca00b0087adaddc5aamr2418839ejj.2.1674823104949; Fri, 27 Jan 2023 04:38:24 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674823104; cv=none; d=google.com; s=arc-20160816; b=LUMI+SkwNyDBA3St6SWuJY50gJrPkX+pl2NXP+auizp4V7nzhZuAx4AwYRXMUxGTyL s53Q3h9/r8EzEkLwrvXo98pDHmYyzzxyRD38XR5Rv07J6TTVePD1dGcCZ+Yo6z3reRNT 5hwJ9k8t3rtbyQq7p+1FAotv3qbhpOyyt8FkG4XWOAR4hkmBWzJ9iR0zbtNsecherq3Q cHwTrnc+sxdFNFSBem1ZrKRL+sUmWPm5VsfG5os9nK4AkNl28V8O3PTyzNjG1h5yFqe+ IejzEy66O4NNPv5ixqEvwvMBYLvMcoG4V5vsxYvKfeWx4hDRvCzH+4WonNoBAJ65iEH4 guvQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=uGEahXI7b4tXuPfkkrUOJqRyBFa9NTm1icx0C+4AieY=; b=BrxBIVOFkDqCYYg1Ue1bigWBbvxIavV+Q8Ex69tNoZDol5Nb3MItLbM7SmTROHm9rt tImGFQDasnp/gkZm7pazc65zev7HoPazSnrej/RSWUiCUf5e+HWu6qododugV98pLoNX kZd4zUSwRypDaJr/3Mq9ENOBA8YCnBB/cGecEaKVvFFJGjPFye9BcFEOSXzBEH612JYD usvQe81SQ0oHDe34cEXz2lJtXieCQtk0OORAT98XqDA7JEqGkBDIIn4/iMydZY6qfx5f m416c0lGD725cMc2aaMcvBGBWOJqTmkYsDAxdF55PRzjdbiQBC7FwsHY+vWLB1Q//rSU pOGA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id wl18-20020a170907311200b007c177c92bcfsi4632884ejb.977.2023.01.27.04.38.01; Fri, 27 Jan 2023 04:38:24 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234112AbjA0M1f (ORCPT + 99 others); Fri, 27 Jan 2023 07:27:35 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59644 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233793AbjA0M1W (ORCPT ); Fri, 27 Jan 2023 07:27:22 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E1E257DBC9; Fri, 27 Jan 2023 04:26:02 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5F65B1682; Fri, 27 Jan 2023 03:30:48 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2170A3F64C; Fri, 27 Jan 2023 03:30:04 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 09/28] arm64: RME: RTT handling Date: Fri, 27 Jan 2023 11:29:13 +0000 Message-Id: <20230127112932.38045-10-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756179312003288835?= X-GMAIL-MSGID: =?utf-8?q?1756179312003288835?= The RMM owns the stage 2 page tables for a realm, and KVM must request that the RMM creates/destroys entries as necessary. The physical pages to store the page tables are delegated to the realm as required, and can be undelegated when no longer used. 
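[Editorial note: as a quick check on the geometry defined by the RME_RTT_LEVEL_SHIFT() macro added below, with RME_PAGE_SHIFT = 12: RME_RTT_LEVEL_SHIFT(l) = (12 - 3) * (4 - l) + 3, so a level 3 entry covers 2^12 = 4KB, a level 2 entry (RME_RTT_BLOCK_LEVEL) covers 2^21 = 2MB, and a level 1 entry covers 2^30 = 1GB, matching the usual 4K-granule stage 2 layout.]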
Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_rme.h | 19 +++++ arch/arm64/kvm/mmu.c | 7 +- arch/arm64/kvm/rme.c | 139 +++++++++++++++++++++++++++++++ 3 files changed, 162 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h index a6318af3ed11..eea5118dfa8a 100644 --- a/arch/arm64/include/asm/kvm_rme.h +++ b/arch/arm64/include/asm/kvm_rme.h @@ -35,5 +35,24 @@ u32 kvm_realm_ipa_limit(void); int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap); int kvm_init_realm_vm(struct kvm *kvm); void kvm_destroy_realm(struct kvm *kvm); +void kvm_realm_destroy_rtts(struct realm *realm, u32 ia_bits, u32 start_level); + +#define RME_RTT_BLOCK_LEVEL 2 +#define RME_RTT_MAX_LEVEL 3 + +#define RME_PAGE_SHIFT 12 +#define RME_PAGE_SIZE BIT(RME_PAGE_SHIFT) +/* See ARM64_HW_PGTABLE_LEVEL_SHIFT() */ +#define RME_RTT_LEVEL_SHIFT(l) \ + ((RME_PAGE_SHIFT - 3) * (4 - (l)) + 3) +#define RME_L2_BLOCK_SIZE BIT(RME_RTT_LEVEL_SHIFT(2)) + +static inline unsigned long rme_rtt_level_mapsize(int level) +{ + if (WARN_ON(level > RME_RTT_MAX_LEVEL)) + return RME_PAGE_SIZE; + + return (1UL << RME_RTT_LEVEL_SHIFT(level)); +} #endif diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 22c00274884a..f29558c5dcbc 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -834,16 +834,17 @@ void stage2_unmap_vm(struct kvm *kvm) void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu) { struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu); - struct kvm_pgtable *pgt = NULL; + struct kvm_pgtable *pgt; write_lock(&kvm->mmu_lock); + pgt = mmu->pgt; if (kvm_is_realm(kvm) && kvm_realm_state(kvm) != REALM_STATE_DYING) { - /* TODO: teardown rtts */ write_unlock(&kvm->mmu_lock); + kvm_realm_destroy_rtts(&kvm->arch.realm, pgt->ia_bits, + pgt->start_level); return; } - pgt = mmu->pgt; if (pgt) { mmu->pgd_phys = 0; mmu->pgt = NULL; diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index 0c9d70e4d9e6..f7b0e5a779f8 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -73,6 +73,28 @@ static int rmi_check_version(void) return 0; } +static void realm_destroy_undelegate_range(struct realm *realm, + unsigned long ipa, + unsigned long addr, + ssize_t size) +{ + unsigned long rd = virt_to_phys(realm->rd); + int ret; + + while (size > 0) { + ret = rmi_data_destroy(rd, ipa); + WARN_ON(ret); + ret = rmi_granule_undelegate(addr); + + if (ret) + get_page(phys_to_page(addr)); + + addr += PAGE_SIZE; + ipa += PAGE_SIZE; + size -= PAGE_SIZE; + } +} + static unsigned long create_realm_feat_reg0(struct kvm *kvm) { unsigned long ia_bits = VTCR_EL2_IPA(kvm->arch.vtcr); @@ -170,6 +192,123 @@ static int realm_create_rd(struct kvm *kvm) return r; } +static int realm_rtt_destroy(struct realm *realm, unsigned long addr, + int level, phys_addr_t rtt_granule) +{ + addr = ALIGN_DOWN(addr, rme_rtt_level_mapsize(level - 1)); + return rmi_rtt_destroy(rtt_granule, virt_to_phys(realm->rd), addr, + level); +} + +static int realm_destroy_free_rtt(struct realm *realm, unsigned long addr, + int level, phys_addr_t rtt_granule) +{ + if (realm_rtt_destroy(realm, addr, level, rtt_granule)) + return -ENXIO; + if (!WARN_ON(rmi_granule_undelegate(rtt_granule))) + put_page(phys_to_page(rtt_granule)); + + return 0; +} + +static int realm_rtt_create(struct realm *realm, + unsigned long addr, + int level, + phys_addr_t phys) +{ + addr = ALIGN_DOWN(addr, rme_rtt_level_mapsize(level - 1)); + return rmi_rtt_create(phys, virt_to_phys(realm->rd), addr, level); +} + +static int 
realm_tear_down_rtt_range(struct realm *realm, int level, + unsigned long start, unsigned long end) +{ + phys_addr_t rd = virt_to_phys(realm->rd); + ssize_t map_size = rme_rtt_level_mapsize(level); + unsigned long addr, next_addr; + bool failed = false; + + for (addr = start; addr < end; addr = next_addr) { + phys_addr_t rtt_addr, tmp_rtt; + struct rtt_entry rtt; + unsigned long end_addr; + + next_addr = ALIGN(addr + 1, map_size); + + end_addr = min(next_addr, end); + + if (rmi_rtt_read_entry(rd, ALIGN_DOWN(addr, map_size), + level, &rtt)) { + failed = true; + continue; + } + + rtt_addr = rmi_rtt_get_phys(&rtt); + WARN_ON(level != rtt.walk_level); + + switch (rtt.state) { + case RMI_UNASSIGNED: + case RMI_DESTROYED: + break; + case RMI_TABLE: + if (realm_tear_down_rtt_range(realm, level + 1, + addr, end_addr)) { + failed = true; + break; + } + if (IS_ALIGNED(addr, map_size) && + next_addr <= end && + realm_destroy_free_rtt(realm, addr, level + 1, + rtt_addr)) + failed = true; + break; + case RMI_ASSIGNED: + WARN_ON(!rtt_addr); + /* + * If there is a block mapping, break it now, using the + * spare_page. We are sure to have a valid delegated + * page at spare_page before we enter here, otherwise + * WARN once, which will be followed by further + * warnings. + */ + tmp_rtt = realm->spare_page; + if (level == 2 && + !WARN_ON_ONCE(tmp_rtt == PHYS_ADDR_MAX) && + realm_rtt_create(realm, addr, + RME_RTT_MAX_LEVEL, tmp_rtt)) { + WARN_ON(1); + failed = true; + break; + } + realm_destroy_undelegate_range(realm, addr, + rtt_addr, map_size); + /* + * Collapse the last level table and make the spare page + * reusable again. + */ + if (level == 2 && + realm_rtt_destroy(realm, addr, RME_RTT_MAX_LEVEL, + tmp_rtt)) + failed = true; + break; + case RMI_VALID_NS: + WARN_ON(rmi_rtt_unmap_unprotected(rd, addr, level)); + break; + default: + WARN_ON(1); + failed = true; + break; + } + } + + return failed ? 
-EINVAL : 0; +} + +void kvm_realm_destroy_rtts(struct realm *realm, u32 ia_bits, u32 start_level) +{ + realm_tear_down_rtt_range(realm, start_level, 0, (1UL << ia_bits)); +} + /* Protects access to rme_vmid_bitmap */ static DEFINE_SPINLOCK(rme_vmid_lock); static unsigned long *rme_vmid_bitmap; From patchwork Fri Jan 27 11:29:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49357 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp810098wrn; Fri, 27 Jan 2023 04:38:26 -0800 (PST) X-Google-Smtp-Source: AMrXdXt5xwpUgRFWGEipARyoZKIh4qlZJSYi8ASDyQtCQrG9KRPPfCWfmJBJrHn5v0vkwcKr29v7 X-Received: by 2002:a05:6402:28c7:b0:47e:f535:e9a0 with SMTP id ef7-20020a05640228c700b0047ef535e9a0mr43660788edb.24.1674823106616; Fri, 27 Jan 2023 04:38:26 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674823106; cv=none; d=google.com; s=arc-20160816; b=mVzuIVZTwinAHdMj46BjRQaPZGrM7DIhv7wL4PheoUC3aHY7253Ql7FnMmfOYSkS5i 6wcA0wr/E33CbbyOb2F1f4G+pgELgRVeco5W256UMBypx9Ow5fn8Ti2rGgyPQL1i9zpj rgQOOSOVi2CBDMX01nYC//f9jVR3bD5iDqzPA84MbwWV/uaixLhJ842bAwlelDHfLsQe TMi3LGA7QDsv6GEwmfo00H57vAOp1NAwjT4DcDl+b2kZzl+4we2SEs9Gzn8zoDPI5LI3 TybHz6Ccd0vxYK5+5SjPlz/ugR9FX8M6k+QwV7XzKxyYe7vLg7Cvdnr3S9NGpIRA8GcQ owLg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=1XDiXaLJXI0dlTLA7LILNEofUdtJjmpK0QQJrT91lNk=; b=wSueevMLj5P+3NlKvHu9Mb5xSSvFG17ccHlfZsS/b2a4VPzjz7FlZpdakZayGVl7yq MTuw2T/qZeksqBvpo4JNHwtNAL6sJwRkCObh7QFiKYa+aoRhYoF9591DVbMhNWYXFflO Qhk3ea5oqEUXBUeXI8gdJ8f6sKX5cqzi+7aKymnihTpF9Kq7F7K2eAWDFOcz/eaNyrpI fs4BoZEOh2GXvI8v7uTWNe2oGzlMRboOlSsSTmchDNXmssCD9Kupy/e0hazMf2upN6UD JGHZ/zFYYg1PhzVw4/ws+KiY0uryfc0lRGkB+52S7St25B8g63/FtvMBzOvxOFAtb2iQ 7YOg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id fi6-20020a056402550600b0049d4fe939b9si4837220edb.434.2023.01.27.04.38.03; Fri, 27 Jan 2023 04:38:26 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233936AbjA0M1h (ORCPT + 99 others); Fri, 27 Jan 2023 07:27:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59642 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233804AbjA0M1W (ORCPT ); Fri, 27 Jan 2023 07:27:22 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id D9AE47DBC3; Fri, 27 Jan 2023 04:26:02 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D1BB72F; Fri, 27 Jan 2023 03:30:50 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id AE3BF3F64C; Fri, 27 Jan 2023 03:30:06 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 10/28] arm64: RME: Allocate/free RECs to match vCPUs Date: Fri, 27 Jan 2023 11:29:14 +0000 Message-Id: <20230127112932.38045-11-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756179314154510558?= X-GMAIL-MSGID: =?utf-8?q?1756179314154510558?= The RMM maintains a data structure known as the Realm Execution Context (or REC). It is similar to struct kvm_vcpu and tracks the state of the virtual CPUs. KVM must delegate memory and request the structures are created when vCPUs are created, and suitably tear down on destruction. 
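[Editorial note: illustrative VMM-side usage, not part of the patch. Once the realm has been created and the vCPU initialised with PSCI 0.2 enabled (as required below), the REC is created by finalizing the KVM_ARM_VCPU_REC feature, which lands in kvm_create_rec(). KVM_ARM_VCPU_REC itself is defined in the series' uapi changes, not in this hunk.]

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int finalize_realm_vcpu(int vcpu_fd)
{
	int feature = KVM_ARM_VCPU_REC;

	/* Fails (ENOENT) unless the realm is still in the NEW state. */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
}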
Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_emulate.h | 2 + arch/arm64/include/asm/kvm_host.h | 3 + arch/arm64/include/asm/kvm_rme.h | 10 ++ arch/arm64/kvm/arm.c | 1 + arch/arm64/kvm/reset.c | 11 ++ arch/arm64/kvm/rme.c | 144 +++++++++++++++++++++++++++ 6 files changed, 171 insertions(+) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 5a2b7229e83f..285e62914ca4 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -504,6 +504,8 @@ static inline enum realm_state kvm_realm_state(struct kvm *kvm) static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu) { + if (static_branch_unlikely(&kvm_rme_is_available)) + return vcpu->arch.rec.mpidr != INVALID_HWID; return false; } diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 04347c3a8c6b..ef497b718cdb 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -505,6 +505,9 @@ struct kvm_vcpu_arch { u64 last_steal; gpa_t base; } steal; + + /* Realm meta data */ + struct rec rec; }; /* diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h index eea5118dfa8a..4b219ebe1400 100644 --- a/arch/arm64/include/asm/kvm_rme.h +++ b/arch/arm64/include/asm/kvm_rme.h @@ -6,6 +6,7 @@ #ifndef __ASM_KVM_RME_H #define __ASM_KVM_RME_H +#include #include enum realm_state { @@ -29,6 +30,13 @@ struct realm { unsigned int ia_bits; }; +struct rec { + unsigned long mpidr; + void *rec_page; + struct page *aux_pages[REC_PARAMS_AUX_GRANULES]; + struct rec_run *run; +}; + int kvm_init_rme(void); u32 kvm_realm_ipa_limit(void); @@ -36,6 +44,8 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap); int kvm_init_realm_vm(struct kvm *kvm); void kvm_destroy_realm(struct kvm *kvm); void kvm_realm_destroy_rtts(struct realm *realm, u32 ia_bits, u32 start_level); +int kvm_create_rec(struct kvm_vcpu *vcpu); +void kvm_destroy_rec(struct kvm_vcpu *vcpu); #define RME_RTT_BLOCK_LEVEL 2 #define RME_RTT_MAX_LEVEL 3 diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index badd775547b8..52affed2f3cf 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -373,6 +373,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* Force users to call KVM_ARM_VCPU_INIT */ vcpu->arch.target = -1; bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); + vcpu->arch.rec.mpidr = INVALID_HWID; vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 9e71d69e051f..0c84392a4bf2 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -135,6 +135,11 @@ int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) return -EPERM; return kvm_vcpu_finalize_sve(vcpu); + case KVM_ARM_VCPU_REC: + if (!kvm_is_realm(vcpu->kvm)) + return -EINVAL; + + return kvm_create_rec(vcpu); } return -EINVAL; @@ -145,6 +150,11 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu)) return false; + if (kvm_is_realm(vcpu->kvm) && + !(vcpu_is_rec(vcpu) && + READ_ONCE(vcpu->kvm->arch.realm.state) == REALM_STATE_ACTIVE)) + return false; + return true; } @@ -157,6 +167,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) if (sve_state) kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); kfree(sve_state); + kvm_destroy_rec(vcpu); } static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index f7b0e5a779f8..d79ed889ca4d 
100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -514,6 +514,150 @@ void kvm_destroy_realm(struct kvm *kvm) kvm_free_stage2_pgd(&kvm->arch.mmu); } +static void free_rec_aux(struct page **aux_pages, + unsigned int num_aux) +{ + unsigned int i; + + for (i = 0; i < num_aux; i++) { + phys_addr_t aux_page_phys = page_to_phys(aux_pages[i]); + + if (WARN_ON(rmi_granule_undelegate(aux_page_phys))) + continue; + + __free_page(aux_pages[i]); + } +} + +static int alloc_rec_aux(struct page **aux_pages, + u64 *aux_phys_pages, + unsigned int num_aux) +{ + int ret; + unsigned int i; + + for (i = 0; i < num_aux; i++) { + struct page *aux_page; + phys_addr_t aux_page_phys; + + aux_page = alloc_page(GFP_KERNEL); + if (!aux_page) { + ret = -ENOMEM; + goto out_err; + } + aux_page_phys = page_to_phys(aux_page); + if (rmi_granule_delegate(aux_page_phys)) { + __free_page(aux_page); + ret = -ENXIO; + goto out_err; + } + aux_pages[i] = aux_page; + aux_phys_pages[i] = aux_page_phys; + } + + return 0; +out_err: + free_rec_aux(aux_pages, i); + return ret; +} + +int kvm_create_rec(struct kvm_vcpu *vcpu) +{ + struct user_pt_regs *vcpu_regs = vcpu_gp_regs(vcpu); + unsigned long mpidr = kvm_vcpu_get_mpidr_aff(vcpu); + struct realm *realm = &vcpu->kvm->arch.realm; + struct rec *rec = &vcpu->arch.rec; + unsigned long rec_page_phys; + struct rec_params *params; + int r, i; + + if (kvm_realm_state(vcpu->kvm) != REALM_STATE_NEW) + return -ENOENT; + + /* + * The RMM will report PSCI v1.0 to Realms and the KVM_ARM_VCPU_PSCI_0_2 + * flag covers v0.2 and onwards. + */ + if (!test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) + return -EINVAL; + + BUILD_BUG_ON(sizeof(*params) > PAGE_SIZE); + BUILD_BUG_ON(sizeof(*rec->run) > PAGE_SIZE); + + params = (struct rec_params *)get_zeroed_page(GFP_KERNEL); + rec->rec_page = (void *)__get_free_page(GFP_KERNEL); + rec->run = (void *)get_zeroed_page(GFP_KERNEL); + if (!params || !rec->rec_page || !rec->run) { + r = -ENOMEM; + goto out_free_pages; + } + + for (i = 0; i < ARRAY_SIZE(params->gprs); i++) + params->gprs[i] = vcpu_regs->regs[i]; + + params->pc = vcpu_regs->pc; + + if (vcpu->vcpu_id == 0) + params->flags |= REC_PARAMS_FLAG_RUNNABLE; + + rec_page_phys = virt_to_phys(rec->rec_page); + + if (rmi_granule_delegate(rec_page_phys)) { + r = -ENXIO; + goto out_free_pages; + } + + r = alloc_rec_aux(rec->aux_pages, params->aux, realm->num_aux); + if (r) + goto out_undelegate_rmm_rec; + + params->num_rec_aux = realm->num_aux; + params->mpidr = mpidr; + + if (rmi_rec_create(rec_page_phys, + virt_to_phys(realm->rd), + virt_to_phys(params))) { + r = -ENXIO; + goto out_free_rec_aux; + } + + rec->mpidr = mpidr; + + free_page((unsigned long)params); + return 0; + +out_free_rec_aux: + free_rec_aux(rec->aux_pages, realm->num_aux); +out_undelegate_rmm_rec: + if (WARN_ON(rmi_granule_undelegate(rec_page_phys))) + rec->rec_page = NULL; +out_free_pages: + free_page((unsigned long)rec->run); + free_page((unsigned long)rec->rec_page); + free_page((unsigned long)params); + return r; +} + +void kvm_destroy_rec(struct kvm_vcpu *vcpu) +{ + struct realm *realm = &vcpu->kvm->arch.realm; + struct rec *rec = &vcpu->arch.rec; + unsigned long rec_page_phys; + + if (!vcpu_is_rec(vcpu)) + return; + + rec_page_phys = virt_to_phys(rec->rec_page); + + if (WARN_ON(rmi_rec_destroy(rec_page_phys))) + return; + if (WARN_ON(rmi_granule_undelegate(rec_page_phys))) + return; + + free_rec_aux(rec->aux_pages, realm->num_aux); + free_page((unsigned long)rec->rec_page); +} + int kvm_init_realm_vm(struct kvm *kvm) { 
struct realm_params *params; From patchwork Fri Jan 27 11:29:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49261 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783378wrn; Fri, 27 Jan 2023 03:33:37 -0800 (PST) X-Google-Smtp-Source: AMrXdXuy/OxK0p+zBuobkyKVxbhvVx+8ioQZY9uTARsXxQP38XWewsblDXGvZpbhdHQ/XJT36YJn X-Received: by 2002:a17:90b:4d8e:b0:229:ade0:d0cf with SMTP id oj14-20020a17090b4d8e00b00229ade0d0cfmr39071735pjb.3.1674819217023; Fri, 27 Jan 2023 03:33:37 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819217; cv=none; d=google.com; s=arc-20160816; b=Sf7wyZB4KZSkOLnX0BtNj3c904tvAEf4WeAMWLeM/wCeP2SyBp7cfCSgpNZt4ItaF2 5IMDBOH+YJ4vD7WlnNWV+IIfdxV0mrXRa3rPSjd+CGUGuZHDtvG6/9FRZor7OTf8PC2b xkjtA5XkCPE7FjEGDwAknNevez8QHAW1URTLmpHaanQgrjZnkHxzYHS06mtq0n+J+qp5 Re7jOe2AQA56mlzQAtWUjAoyGFJiRoJg8hB4VxzfZI+LhE0hDPpzyvEsaWmu2JW6ZUo1 x9OKds3fT0DgVQYhUAxGnx5++BRcyC8s/lPvzEfgcwk7pcmkgaXd0k1UQHrIRwnfLMzu AFsw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=wEEdGUUdMb+fc1gEhnN3IuWD4xDaDhL+KRmmKmll3Tc=; b=BsUDAh7fh0+3FvTqHmu/Ja9Gge0Odema5+bVrIVtUy+pIDLV0X46jM4l8Te0/0Toju mF4tx88gcvq2qZaXuRzWj7NazSsdlxOIGATQryzh0BLl7ut/KamGj19Ac6aJszVMu9+h 1hj+yYpbd1G3YgGIYF7siUrq3kiNTrbTpMvUptGI3IUGqkIewGBLyy4kFfxlqdDVVsya PhZDGsjiCdoRGFRBWkXOEQn8G8XC4NUif7vuje7sLSUnxSlQvBzygjVEgRuzGFJhibqf uSeKsEkD1N7YuU+6kdvgPLlgw04x2uYm59W/dcyXsuhGz00C5CV4ZKxmJr9ul/JdTfPH 5UEw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id v18-20020a17090a899200b0022c4d84acdasi1493553pjn.23.2023.01.27.03.33.24; Fri, 27 Jan 2023 03:33:36 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233608AbjA0Lcz (ORCPT + 99 others); Fri, 27 Jan 2023 06:32:55 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42014 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233574AbjA0Lc3 (ORCPT ); Fri, 27 Jan 2023 06:32:29 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 113E17D2A7; Fri, 27 Jan 2023 03:31:02 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C30A61684; Fri, 27 Jan 2023 03:30:53 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2C3FB3F64C; Fri, 27 Jan 2023 03:30:09 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 11/28] arm64: RME: Support for the VGIC in realms Date: Fri, 27 Jan 2023 11:29:15 +0000 Message-Id: <20230127112932.38045-12-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175235397226153?= X-GMAIL-MSGID: =?utf-8?q?1756175235397226153?= The RMM provides emulation of a VGIC to the realm guest but delegates much of the handling to the host. Implement support in KVM for saving/restoring state to/from the REC structure. 
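[Editorial note: the layout of the shared REC "run" structure is defined by the RMM ABI headers elsewhere in the series. The sketch below is only an assumption-laden approximation of the fields this patch touches; the array size and types are guesses, only the field names and the host-writes-entry / RMM-writes-exit split match the code.]

/* Illustrative only: approximate shape of struct rec_run as used here. */
struct rec_run_sketch {
	struct {
		/* Written by the host before entering the realm. */
		unsigned long gicv3_lrs[16];	/* list registers to inject */
	} entry;
	struct {
		/* Written by the RMM on exit from the realm. */
		unsigned long gicv3_lrs[16];	/* LR state after running */
		unsigned long gicv3_vmcr;	/* VMCR snapshot */
	} exit;
};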
Signed-off-by: Steven Price --- arch/arm64/kvm/arm.c | 15 +++++++++++--- arch/arm64/kvm/vgic/vgic-v3.c | 9 +++++++-- arch/arm64/kvm/vgic/vgic.c | 37 +++++++++++++++++++++++++++++++++-- 3 files changed, 54 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 52affed2f3cf..1b2547516f62 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -475,17 +475,22 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { + kvm_timer_vcpu_put(vcpu); + kvm_vgic_put(vcpu); + + vcpu->cpu = -1; + + if (vcpu_is_rec(vcpu)) + return; + kvm_arch_vcpu_put_debug_state_flags(vcpu); kvm_arch_vcpu_put_fp(vcpu); if (has_vhe()) kvm_vcpu_put_sysregs_vhe(vcpu); - kvm_timer_vcpu_put(vcpu); - kvm_vgic_put(vcpu); kvm_vcpu_pmu_restore_host(vcpu); kvm_arm_vmid_clear_active(); vcpu_clear_on_unsupported_cpu(vcpu); - vcpu->cpu = -1; } void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu) @@ -623,6 +628,10 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) } if (!irqchip_in_kernel(kvm)) { + /* Userspace irqchip not yet supported with Realms */ + if (kvm_is_realm(vcpu->kvm)) + return -EOPNOTSUPP; + /* * Tell the rest of the code that there are userspace irqchip * VMs in the wild. diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c index 826ff6f2a4e7..121c7a68c397 100644 --- a/arch/arm64/kvm/vgic/vgic-v3.c +++ b/arch/arm64/kvm/vgic/vgic-v3.c @@ -6,9 +6,11 @@ #include #include #include +#include #include #include #include +#include #include "vgic.h" @@ -669,7 +671,8 @@ int vgic_v3_probe(const struct gic_kvm_info *info) (unsigned long long)info->vcpu.start); } else if (kvm_get_mode() != KVM_MODE_PROTECTED) { kvm_vgic_global_state.vcpu_base = info->vcpu.start; - kvm_vgic_global_state.can_emulate_gicv2 = true; + if (!static_branch_unlikely(&kvm_rme_is_available)) + kvm_vgic_global_state.can_emulate_gicv2 = true; ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2); if (ret) { kvm_err("Cannot register GICv2 KVM device.\n"); @@ -744,7 +747,9 @@ void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; - if (likely(cpu_if->vgic_sre)) + if (vcpu_is_rec(vcpu)) + cpu_if->vgic_vmcr = vcpu->arch.rec.run->exit.gicv3_vmcr; + else if (likely(cpu_if->vgic_sre)) cpu_if->vgic_vmcr = kvm_call_hyp_ret(__vgic_v3_read_vmcr); } diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c index d97e6080b421..bc77660f7051 100644 --- a/arch/arm64/kvm/vgic/vgic.c +++ b/arch/arm64/kvm/vgic/vgic.c @@ -10,7 +10,9 @@ #include #include +#include #include +#include #include "vgic.h" @@ -848,10 +850,23 @@ static inline bool can_access_vgic_from_kernel(void) return !static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif) || has_vhe(); } +static inline void vgic_rmm_save_state(struct kvm_vcpu *vcpu) +{ + struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; + int i; + + for (i = 0; i < kvm_vgic_global_state.nr_lr; i++) { + cpu_if->vgic_lr[i] = vcpu->arch.rec.run->exit.gicv3_lrs[i]; + vcpu->arch.rec.run->entry.gicv3_lrs[i] = 0; + } +} + static inline void vgic_save_state(struct kvm_vcpu *vcpu) { if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) vgic_v2_save_state(vcpu); + else if (vcpu_is_rec(vcpu)) + vgic_rmm_save_state(vcpu); else __vgic_v3_save_state(&vcpu->arch.vgic_cpu.vgic_v3); } @@ -878,10 +893,28 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu) vgic_prune_ap_list(vcpu); } +static inline void vgic_rmm_restore_state(struct kvm_vcpu *vcpu) +{ + 
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; + int i; + + for (i = 0; i < kvm_vgic_global_state.nr_lr; i++) { + vcpu->arch.rec.run->entry.gicv3_lrs[i] = cpu_if->vgic_lr[i]; + /* + * Also populate the rec.run->exit copies so that a late + * decision to back out from entering the realm doesn't cause + * the state to be lost + */ + vcpu->arch.rec.run->exit.gicv3_lrs[i] = cpu_if->vgic_lr[i]; + } +} + static inline void vgic_restore_state(struct kvm_vcpu *vcpu) { if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) vgic_v2_restore_state(vcpu); + else if (vcpu_is_rec(vcpu)) + vgic_rmm_restore_state(vcpu); else __vgic_v3_restore_state(&vcpu->arch.vgic_cpu.vgic_v3); } @@ -922,7 +955,7 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu) void kvm_vgic_load(struct kvm_vcpu *vcpu) { - if (unlikely(!vgic_initialized(vcpu->kvm))) + if (unlikely(!vgic_initialized(vcpu->kvm)) || vcpu_is_rec(vcpu)) return; if (kvm_vgic_global_state.type == VGIC_V2) @@ -933,7 +966,7 @@ void kvm_vgic_load(struct kvm_vcpu *vcpu) void kvm_vgic_put(struct kvm_vcpu *vcpu) { - if (unlikely(!vgic_initialized(vcpu->kvm))) + if (unlikely(!vgic_initialized(vcpu->kvm)) || vcpu_is_rec(vcpu)) return; if (kvm_vgic_global_state.type == VGIC_V2) From patchwork Fri Jan 27 11:29:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49257 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783171wrn; Fri, 27 Jan 2023 03:33:08 -0800 (PST) X-Google-Smtp-Source: AK7set9dmjwhCdaZqqTW23+KM7tI4/5U+23MtT/6gV28OxkYy2xHLqX8EIZMt795N0TR7mu7RWdi X-Received: by 2002:a17:90a:1954:b0:22b:a73d:8a8a with SMTP id 20-20020a17090a195400b0022ba73d8a8amr6202607pjh.33.1674819188331; Fri, 27 Jan 2023 03:33:08 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819188; cv=none; d=google.com; s=arc-20160816; b=q6iEZAXLWTPNsIUwskKAVH5P9b2Ri4dDFfu7tz72Bt6ktQY+KqyDEJj3yW9n2DKT4f N9pTtxdbY68nWDPYyANPnHLibqPBGgV6twEGJtpBD9G4NHs4eZWw2h/U8hags9KL8oZU xM6xbhD8n/MLB6NlefhUKdcQjj5cL95jsjnlZ8HPTcyDqV3CMKnRjZJAnOG0FzFb7POu iWRNqjR20JSccxgB2ze+H8lZCi/aeZeiZF5qMXiwfOTiMMUwSdhgLOJiB8fXctCGASEO oYupxU4BXFJk9ZjfXDCuivvHALH0orRYjQ6bl86o5Pkwvd8j1adZM+vycCEZBYMXqar7 d9gQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=zOBJbfbBLY/2MCNscBymsdiOqw33Gz8DHfulhes0Aug=; b=DWqp5mNY6f+8Ec992TovHFGGOs3heJCvDTEPfzvFGKsgJU+N0oT7PNwMr7x7b5x61D lEHnN62tc4IMavF5JL87NcdqbTO1EOVFnB49xdE2d7OI44wMkm9U6tpPeQj6b3zfaEYA 6vIk8CTz+4wxdMM+kZRTdxjzHU35HVPAHpAKnSfYv9e+9DWJqT1XiUj2FiOMMvgpmzzC n5nSK9KB3UMjkhV0BDqPjPFD4FuBM/7h8MeUs0k4StK3nz50uYeio5XIcitFHxdSEm8V eRWo5OtQFmXsJ/a1FgMk4OcbeLfSd4sKU7lXNIOJmnil/+SjDnCmRJyJghaDUoY6+W0+ 0iEQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id b1-20020a17090a6ac100b00219648ff3a5si4375551pjm.171.2023.01.27.03.32.55; Fri, 27 Jan 2023 03:33:08 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233463AbjA0Lcg (ORCPT + 99 others); Fri, 27 Jan 2023 06:32:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44766 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233483AbjA0LcS (ORCPT ); Fri, 27 Jan 2023 06:32:18 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id C0F798736A; Fri, 27 Jan 2023 03:30:46 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 292B31688; Fri, 27 Jan 2023 03:30:56 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1DC773F64C; Fri, 27 Jan 2023 03:30:12 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 12/28] KVM: arm64: Support timers in realm RECs Date: Fri, 27 Jan 2023 11:29:16 +0000 Message-Id: <20230127112932.38045-13-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175205543263872?= X-GMAIL-MSGID: =?utf-8?q?1756175205543263872?= The RMM keeps track of the timer while the realm REC is running, but on exit to the normal world KVM is responsible for handling the timers. 
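[Editorial note: for clarity, the per-timer condition evaluated by the new kvm_realm_timers_update() helper below, written out as a standalone predicate. This is illustrative only and simply restates the logic in the patch.]

static bool realm_timer_line_high(struct arch_timer_context *timer)
{
	/* CTL is the value reported by the RMM; ISTATUS flags an expired timer. */
	bool istatus = timer_get_ctl(timer) & ARCH_TIMER_CTRL_IT_STAT;

	/* The line is high only if the timer is enabled, unmasked and expired. */
	return kvm_timer_irq_can_fire(timer) && istatus;
}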
Signed-off-by: Steven Price --- arch/arm64/kvm/arch_timer.c | 53 ++++++++++++++++++++++++++++++++---- include/kvm/arm_arch_timer.h | 2 ++ 2 files changed, 49 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c index bb24a76b4224..d4af9ee58550 100644 --- a/arch/arm64/kvm/arch_timer.c +++ b/arch/arm64/kvm/arch_timer.c @@ -130,6 +130,11 @@ static void timer_set_offset(struct arch_timer_context *ctxt, u64 offset) { struct kvm_vcpu *vcpu = ctxt->vcpu; + if (kvm_is_realm(vcpu->kvm)) { + WARN_ON(offset); + return; + } + switch(arch_timer_ctx_index(ctxt)) { case TIMER_VTIMER: __vcpu_sys_reg(vcpu, CNTVOFF_EL2) = offset; @@ -411,6 +416,21 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level, } } +void kvm_realm_timers_update(struct kvm_vcpu *vcpu) +{ + struct arch_timer_cpu *arch_timer = &vcpu->arch.timer_cpu; + int i; + + for (i = 0; i < NR_KVM_TIMERS; i++) { + struct arch_timer_context *timer = &arch_timer->timers[i]; + bool status = timer_get_ctl(timer) & ARCH_TIMER_CTRL_IT_STAT; + bool level = kvm_timer_irq_can_fire(timer) && status; + + if (level != timer->irq.level) + kvm_timer_update_irq(vcpu, level, timer); + } +} + /* Only called for a fully emulated timer */ static void timer_emulate(struct arch_timer_context *ctx) { @@ -621,6 +641,11 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu) if (unlikely(!timer->enabled)) return; + kvm_timer_unblocking(vcpu); + + if (vcpu_is_rec(vcpu)) + return; + get_timer_map(vcpu, &map); if (static_branch_likely(&has_gic_active_state)) { @@ -633,8 +658,6 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu) set_cntvoff(timer_get_offset(map.direct_vtimer)); - kvm_timer_unblocking(vcpu); - timer_restore_state(map.direct_vtimer); if (map.direct_ptimer) timer_restore_state(map.direct_ptimer); @@ -668,6 +691,9 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu) if (unlikely(!timer->enabled)) return; + if (vcpu_is_rec(vcpu)) + goto out; + get_timer_map(vcpu, &map); timer_save_state(map.direct_vtimer); @@ -686,9 +712,6 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu) if (map.emul_ptimer) soft_timer_cancel(&map.emul_ptimer->hrtimer); - if (kvm_vcpu_is_blocking(vcpu)) - kvm_timer_blocking(vcpu); - /* * The kernel may decide to run userspace after calling vcpu_put, so * we reset cntvoff to 0 to ensure a consistent read between user @@ -697,6 +720,11 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu) * virtual offset of zero, so no need to zero CNTVOFF_EL2 register. */ set_cntvoff(0); + +out: + if (kvm_vcpu_is_blocking(vcpu)) + kvm_timer_blocking(vcpu); + } /* @@ -785,12 +813,18 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu) struct arch_timer_cpu *timer = vcpu_timer(vcpu); struct arch_timer_context *vtimer = vcpu_vtimer(vcpu); struct arch_timer_context *ptimer = vcpu_ptimer(vcpu); + u64 cntvoff; vtimer->vcpu = vcpu; ptimer->vcpu = vcpu; + if (kvm_is_realm(vcpu->kvm)) + cntvoff = 0; + else + cntvoff = kvm_phys_timer_read(); + /* Synchronize cntvoff across all vtimers of a VM. */ - update_vtimer_cntvoff(vcpu, kvm_phys_timer_read()); + update_vtimer_cntvoff(vcpu, cntvoff); timer_set_offset(ptimer, 0); hrtimer_init(&timer->bg_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD); @@ -1265,6 +1299,13 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu) return -EINVAL; } + /* + * We don't use mapped IRQs for Realms because the RMI doesn't allow + * us setting the LR.HW bit in the VGIC. 
+	 */
+	if (vcpu_is_rec(vcpu))
+		return 0;
+
 	get_timer_map(vcpu, &map);

 	ret = kvm_vgic_map_phys_irq(vcpu,
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index cd6d8f260eab..158280e15a33 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -76,6 +76,8 @@ int kvm_arm_timer_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
 int kvm_arm_timer_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
 int kvm_arm_timer_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);

+void kvm_realm_timers_update(struct kvm_vcpu *vcpu);
+
 u64 kvm_phys_timer_read(void);

 void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu);

From patchwork Fri Jan 27 11:29:17 2023
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 49294
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
 Oliver Upton, Suzuki K Poulose, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev
Subject: [RFC PATCH 13/28] arm64: RME: Allow VMM to set RIPAS
Date: Fri, 27 Jan 2023 11:29:17 +0000
Message-Id: <20230127112932.38045-14-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>
References: <20230127112248.136810-1-suzuki.poulose@arm.com>
 <20230127112932.38045-1-steven.price@arm.com>

Each page within the protected region of the realm guest can be marked
as either RAM or EMPTY. Allow the VMM to control this before the guest
has started and provide the equivalent functions to change this (with
the guest's approval) at runtime.
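From the VMM side this pre-boot control is driven through KVM_ENABLE_CAP
on the VM file descriptor. A hypothetical userspace sketch (the
KVM_CAP_ARM_RME* constants come from the uapi additions earlier in this
series, and the struct below is only a stand-in mirroring the fields
kvm_init_ipa_range_realm() reads; the series' real header should be used
instead):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Stand-in for the series' uapi struct; fields as consumed by the handler. */
struct kvm_cap_arm_rme_init_ipa_args {
	__u64 init_ipa_base;
	__u64 init_ipa_size;
};

static int realm_init_ipa_range(int vm_fd, __u64 base, __u64 size)
{
	struct kvm_cap_arm_rme_init_ipa_args args = {
		.init_ipa_base = base,
		.init_ipa_size = size,
	};
	struct kvm_enable_cap cap;

	memset(&cap, 0, sizeof(cap));
	cap.cap     = KVM_CAP_ARM_RME;			/* series' uapi */
	cap.args[0] = KVM_CAP_ARM_RME_INIT_IPA_REALM;	/* sub-command */
	cap.args[1] = (__u64)(unsigned long)&args;

	/* Rejected with -EBUSY once the realm has left the NEW state. */
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

Ranges that are not initialised to RAM this way remain EMPTY and can only
be changed later through the runtime RIPAS change flow, which requires
the guest's involvement.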
Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_rme.h | 4 + arch/arm64/kvm/rme.c | 288 +++++++++++++++++++++++++++++++ 2 files changed, 292 insertions(+) diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h index 4b219ebe1400..3e75cedaad18 100644 --- a/arch/arm64/include/asm/kvm_rme.h +++ b/arch/arm64/include/asm/kvm_rme.h @@ -47,6 +47,10 @@ void kvm_realm_destroy_rtts(struct realm *realm, u32 ia_bits, u32 start_level); int kvm_create_rec(struct kvm_vcpu *vcpu); void kvm_destroy_rec(struct kvm_vcpu *vcpu); +int realm_set_ipa_state(struct kvm_vcpu *vcpu, + unsigned long addr, unsigned long end, + unsigned long ripas); + #define RME_RTT_BLOCK_LEVEL 2 #define RME_RTT_MAX_LEVEL 3 diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index d79ed889ca4d..b3ea79189839 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -73,6 +73,58 @@ static int rmi_check_version(void) return 0; } +static phys_addr_t __alloc_delegated_page(struct realm *realm, + struct kvm_mmu_memory_cache *mc, gfp_t flags) +{ + phys_addr_t phys = PHYS_ADDR_MAX; + void *virt; + + if (realm->spare_page != PHYS_ADDR_MAX) { + swap(realm->spare_page, phys); + goto out; + } + + if (mc) + virt = kvm_mmu_memory_cache_alloc(mc); + else + virt = (void *)__get_free_page(flags); + + if (!virt) + goto out; + + phys = virt_to_phys(virt); + + if (rmi_granule_delegate(phys)) { + free_page((unsigned long)virt); + + phys = PHYS_ADDR_MAX; + } + +out: + return phys; +} + +static phys_addr_t alloc_delegated_page(struct realm *realm, + struct kvm_mmu_memory_cache *mc) +{ + return __alloc_delegated_page(realm, mc, GFP_KERNEL); +} + +static void free_delegated_page(struct realm *realm, phys_addr_t phys) +{ + if (realm->spare_page == PHYS_ADDR_MAX) { + realm->spare_page = phys; + return; + } + + if (WARN_ON(rmi_granule_undelegate(phys))) { + /* Undelegate failed: leak the page */ + return; + } + + free_page((unsigned long)phys_to_virt(phys)); +} + static void realm_destroy_undelegate_range(struct realm *realm, unsigned long ipa, unsigned long addr, @@ -220,6 +272,30 @@ static int realm_rtt_create(struct realm *realm, return rmi_rtt_create(phys, virt_to_phys(realm->rd), addr, level); } +static int realm_create_rtt_levels(struct realm *realm, + unsigned long ipa, + int level, + int max_level, + struct kvm_mmu_memory_cache *mc) +{ + if (WARN_ON(level == max_level)) + return 0; + + while (level++ < max_level) { + phys_addr_t rtt = alloc_delegated_page(realm, mc); + + if (rtt == PHYS_ADDR_MAX) + return -ENOMEM; + + if (realm_rtt_create(realm, ipa, level, rtt)) { + free_delegated_page(realm, rtt); + return -ENXIO; + } + } + + return 0; +} + static int realm_tear_down_rtt_range(struct realm *realm, int level, unsigned long start, unsigned long end) { @@ -309,6 +385,206 @@ void kvm_realm_destroy_rtts(struct realm *realm, u32 ia_bits, u32 start_level) realm_tear_down_rtt_range(realm, start_level, 0, (1UL << ia_bits)); } +void kvm_realm_unmap_range(struct kvm *kvm, unsigned long ipa, u64 size) +{ + u32 ia_bits = kvm->arch.mmu.pgt->ia_bits; + u32 start_level = kvm->arch.mmu.pgt->start_level; + unsigned long end = ipa + size; + struct realm *realm = &kvm->arch.realm; + phys_addr_t tmp_rtt = PHYS_ADDR_MAX; + + if (end > (1UL << ia_bits)) + end = 1UL << ia_bits; + /* + * Make sure we have a spare delegated page for tearing down the + * block mappings. We must use Atomic allocations as we are called + * with kvm->mmu_lock held. 
+ */ + if (realm->spare_page == PHYS_ADDR_MAX) { + tmp_rtt = __alloc_delegated_page(realm, NULL, GFP_ATOMIC); + /* + * We don't have to check the status here, as we may not + * have a block level mapping. Delay any error to the point + * where we need it. + */ + realm->spare_page = tmp_rtt; + } + + realm_tear_down_rtt_range(&kvm->arch.realm, start_level, ipa, end); + + /* Free up the atomic page, if there were any */ + if (tmp_rtt != PHYS_ADDR_MAX) { + free_delegated_page(realm, tmp_rtt); + /* + * Update the spare_page after we have freed the + * above page to make sure it doesn't get cached + * in spare_page. + * We should re-write this part and always have + * a dedicated page for handling block mappings. + */ + realm->spare_page = PHYS_ADDR_MAX; + } +} + +static int set_ipa_state(struct kvm_vcpu *vcpu, + unsigned long ipa, + unsigned long end, + int level, + unsigned long ripas) +{ + struct kvm *kvm = vcpu->kvm; + struct realm *realm = &kvm->arch.realm; + struct rec *rec = &vcpu->arch.rec; + phys_addr_t rd_phys = virt_to_phys(realm->rd); + phys_addr_t rec_phys = virt_to_phys(rec->rec_page); + unsigned long map_size = rme_rtt_level_mapsize(level); + int ret; + + while (ipa < end) { + ret = rmi_rtt_set_ripas(rd_phys, rec_phys, ipa, level, ripas); + + if (!ret) { + if (!ripas) + kvm_realm_unmap_range(kvm, ipa, map_size); + } else if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) { + int walk_level = RMI_RETURN_INDEX(ret); + + if (walk_level < level) { + ret = realm_create_rtt_levels(realm, ipa, + walk_level, + level, NULL); + if (ret) + return ret; + continue; + } + + if (WARN_ON(level >= RME_RTT_MAX_LEVEL)) + return -EINVAL; + + /* Recurse one level lower */ + ret = set_ipa_state(vcpu, ipa, ipa + map_size, + level + 1, ripas); + if (ret) + return ret; + } else { + WARN(1, "Unexpected error in %s: %#x\n", __func__, + ret); + return -EINVAL; + } + ipa += map_size; + } + + return 0; +} + +static int realm_init_ipa_state(struct realm *realm, + unsigned long ipa, + unsigned long end, + int level) +{ + unsigned long map_size = rme_rtt_level_mapsize(level); + phys_addr_t rd_phys = virt_to_phys(realm->rd); + int ret; + + while (ipa < end) { + ret = rmi_rtt_init_ripas(rd_phys, ipa, level); + + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) { + int cur_level = RMI_RETURN_INDEX(ret); + + if (cur_level < level) { + ret = realm_create_rtt_levels(realm, ipa, + cur_level, + level, NULL); + if (ret) + return ret; + /* Retry with the RTT levels in place */ + continue; + } + + /* There's an entry at a lower level, recurse */ + if (WARN_ON(level >= RME_RTT_MAX_LEVEL)) + return -EINVAL; + + realm_init_ipa_state(realm, ipa, ipa + map_size, + level + 1); + } else if (WARN_ON(ret)) { + return -ENXIO; + } + + ipa += map_size; + } + + return 0; +} + +static int find_map_level(struct kvm *kvm, unsigned long start, unsigned long end) +{ + int level = RME_RTT_MAX_LEVEL; + + while (level > get_start_level(kvm) + 1) { + unsigned long map_size = rme_rtt_level_mapsize(level - 1); + + if (!IS_ALIGNED(start, map_size) || + (start + map_size) > end) + break; + + level--; + } + + return level; +} + +int realm_set_ipa_state(struct kvm_vcpu *vcpu, + unsigned long addr, unsigned long end, + unsigned long ripas) +{ + int ret = 0; + + while (addr < end) { + int level = find_map_level(vcpu->kvm, addr, end); + unsigned long map_size = rme_rtt_level_mapsize(level); + + ret = set_ipa_state(vcpu, addr, addr + map_size, level, ripas); + if (ret) + break; + + addr += map_size; + } + + return ret; +} + +static int 
kvm_init_ipa_range_realm(struct kvm *kvm,
+			 struct kvm_cap_arm_rme_init_ipa_args *args)
+{
+	int ret = 0;
+	gpa_t addr, end;
+	struct realm *realm = &kvm->arch.realm;
+
+	addr = args->init_ipa_base;
+	end = addr + args->init_ipa_size;
+
+	if (end < addr)
+		return -EINVAL;
+
+	if (kvm_realm_state(kvm) != REALM_STATE_NEW)
+		return -EBUSY;
+
+	while (addr < end) {
+		int level = find_map_level(kvm, addr, end);
+		unsigned long map_size = rme_rtt_level_mapsize(level);
+
+		ret = realm_init_ipa_state(realm, addr, addr + map_size, level);
+		if (ret)
+			break;
+
+		addr += map_size;
+	}
+
+	return ret;
+}
+
 /* Protects access to rme_vmid_bitmap */
 static DEFINE_SPINLOCK(rme_vmid_lock);
 static unsigned long *rme_vmid_bitmap;
@@ -460,6 +736,18 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
 		r = kvm_create_realm(kvm);
 		break;
+	case KVM_CAP_ARM_RME_INIT_IPA_REALM: {
+		struct kvm_cap_arm_rme_init_ipa_args args;
+		void __user *argp = u64_to_user_ptr(cap->args[1]);
+
+		if (copy_from_user(&args, argp, sizeof(args))) {
+			r = -EFAULT;
+			break;
+		}
+
+		r = kvm_init_ipa_range_realm(kvm, &args);
+		break;
+	}
 	default:
 		r = -EINVAL;
 		break;

From patchwork Fri Jan 27 11:29:18 2023
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 49291
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
 Oliver Upton, Suzuki K Poulose, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev
Subject: [RFC PATCH 14/28] arm64: RME: Handle realm enter/exit
Date: Fri, 27 Jan 2023 11:29:18 +0000
Message-Id: <20230127112932.38045-15-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>
References: <20230127112248.136810-1-suzuki.poulose@arm.com>
 <20230127112932.38045-1-steven.price@arm.com>

Entering a realm is done using a SMC call to the RMM. On exit the
exit-codes need to be handled slightly differently to the normal KVM
path so define our own functions for realm enter/exit and hook them
in if the guest is a realm guest.
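Synchronous exits are dispatched through a function-pointer table indexed
by the ESR exception class, with a catch-all default, in the same style
as KVM's existing exit handlers. A standalone illustration of that
pattern (stub handlers; the EC numbers are the architectural values and
the range designator is the same GNU C extension the kernel relies on):

#include <stdio.h>

#define ESR_EC_MAX	0x3f

typedef int (*exit_handler_fn)(unsigned int ec);

static int rec_exit_notimpl(unsigned int ec)
{
	/* Same role as rec_exit_reason_notimpl(): report and give up. */
	printf("Unhandled exit reason, EC %#x\n", ec);
	return -1;
}

static int rec_exit_sys64(unsigned int ec) { return 1; }	/* stub */
static int rec_exit_dabt(unsigned int ec)  { return 1; }	/* stub */

/* Default every EC to "not implemented", then override the supported ones. */
static exit_handler_fn handlers[ESR_EC_MAX + 1] = {
	[0 ... ESR_EC_MAX]	= rec_exit_notimpl,
	[0x18]			= rec_exit_sys64,	/* ESR_ELx_EC_SYS64 */
	[0x24]			= rec_exit_dabt,	/* ESR_ELx_EC_DABT_LOW */
};

int main(void)
{
	/* A data abort from the realm is routed to its handler... */
	int ret = handlers[0x24](0x24);

	/* ...anything unexpected lands in rec_exit_notimpl(). */
	handlers[0x01](0x01);

	return ret > 0 ? 0 : 1;
}

handle_rme_exit() keeps the same return convention as handle_exit():
greater than zero means re-enter the realm, zero means a completed exit
to userspace with vcpu->run->exit_reason set, and negative values are
errors.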
Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_rme.h | 11 ++ arch/arm64/kvm/Makefile | 2 +- arch/arm64/kvm/arm.c | 19 +++- arch/arm64/kvm/rme-exit.c | 168 +++++++++++++++++++++++++++++++ arch/arm64/kvm/rme.c | 11 ++ 5 files changed, 205 insertions(+), 6 deletions(-) create mode 100644 arch/arm64/kvm/rme-exit.c diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h index 3e75cedaad18..9d1583c44a99 100644 --- a/arch/arm64/include/asm/kvm_rme.h +++ b/arch/arm64/include/asm/kvm_rme.h @@ -47,6 +47,9 @@ void kvm_realm_destroy_rtts(struct realm *realm, u32 ia_bits, u32 start_level); int kvm_create_rec(struct kvm_vcpu *vcpu); void kvm_destroy_rec(struct kvm_vcpu *vcpu); +int kvm_rec_enter(struct kvm_vcpu *vcpu); +int handle_rme_exit(struct kvm_vcpu *vcpu, int rec_run_status); + int realm_set_ipa_state(struct kvm_vcpu *vcpu, unsigned long addr, unsigned long end, unsigned long ripas); @@ -69,4 +72,12 @@ static inline unsigned long rme_rtt_level_mapsize(int level) return (1UL << RME_RTT_LEVEL_SHIFT(level)); } +static inline bool realm_is_addr_protected(struct realm *realm, + unsigned long addr) +{ + unsigned int ia_bits = realm->ia_bits; + + return !(addr & ~(BIT(ia_bits - 1) - 1)); +} + #endif diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index d2f0400c50da..884c7c44439f 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -21,7 +21,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \ vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \ vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \ vgic/vgic-its.o vgic/vgic-debug.o \ - rme.o + rme.o rme-exit.o kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 1b2547516f62..fd9e28f48903 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -985,7 +985,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) trace_kvm_entry(*vcpu_pc(vcpu)); guest_timing_enter_irqoff(); - ret = kvm_arm_vcpu_enter_exit(vcpu); + if (vcpu_is_rec(vcpu)) + ret = kvm_rec_enter(vcpu); + else + ret = kvm_arm_vcpu_enter_exit(vcpu); vcpu->mode = OUTSIDE_GUEST_MODE; vcpu->stat.exits++; @@ -1039,10 +1042,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) local_irq_enable(); - trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu)); - /* Exit types that need handling before we can be preempted */ - handle_exit_early(vcpu, ret); + if (!vcpu_is_rec(vcpu)) { + trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), + *vcpu_pc(vcpu)); + + handle_exit_early(vcpu, ret); + } preempt_enable(); @@ -1065,7 +1071,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) ret = ARM_EXCEPTION_IL; } - ret = handle_exit(vcpu, ret); + if (vcpu_is_rec(vcpu)) + ret = handle_rme_exit(vcpu, ret); + else + ret = handle_exit(vcpu, ret); } /* Tell userspace about in-kernel device output levels */ diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c new file mode 100644 index 000000000000..15a4ff3517db --- /dev/null +++ b/arch/arm64/kvm/rme-exit.c @@ -0,0 +1,168 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 ARM Ltd. 
+ */ + +#include +#include + +#include +#include +#include +#include + +typedef int (*exit_handler_fn)(struct kvm_vcpu *vcpu); + +static int rec_exit_reason_notimpl(struct kvm_vcpu *vcpu) +{ + struct rec *rec = &vcpu->arch.rec; + + pr_err("[vcpu %d] Unhandled exit reason from realm (ESR: %#llx)\n", + vcpu->vcpu_id, rec->run->exit.esr); + return -ENXIO; +} + +static int rec_exit_sync_dabt(struct kvm_vcpu *vcpu) +{ + struct rec *rec = &vcpu->arch.rec; + + if (kvm_vcpu_dabt_iswrite(vcpu) && kvm_vcpu_dabt_isvalid(vcpu)) + vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu), + rec->run->exit.gprs[0]); + + return kvm_handle_guest_abort(vcpu); +} + +static int rec_exit_sync_iabt(struct kvm_vcpu *vcpu) +{ + struct rec *rec = &vcpu->arch.rec; + + pr_err("[vcpu %d] Unhandled instruction abort (ESR: %#llx).\n", + vcpu->vcpu_id, rec->run->exit.esr); + return -ENXIO; +} + +static int rec_exit_sys_reg(struct kvm_vcpu *vcpu) +{ + struct rec *rec = &vcpu->arch.rec; + unsigned long esr = kvm_vcpu_get_esr(vcpu); + int rt = kvm_vcpu_sys_get_rt(vcpu); + bool is_write = !(esr & 1); + int ret; + + if (is_write) + vcpu_set_reg(vcpu, rt, rec->run->exit.gprs[0]); + + ret = kvm_handle_sys_reg(vcpu); + + if (ret >= 0 && !is_write) + rec->run->entry.gprs[0] = vcpu_get_reg(vcpu, rt); + + return ret; +} + +static exit_handler_fn rec_exit_handlers[] = { + [0 ... ESR_ELx_EC_MAX] = rec_exit_reason_notimpl, + [ESR_ELx_EC_SYS64] = rec_exit_sys_reg, + [ESR_ELx_EC_DABT_LOW] = rec_exit_sync_dabt, + [ESR_ELx_EC_IABT_LOW] = rec_exit_sync_iabt +}; + +static int rec_exit_psci(struct kvm_vcpu *vcpu) +{ + struct rec *rec = &vcpu->arch.rec; + int i; + + for (i = 0; i < REC_RUN_GPRS; i++) + vcpu_set_reg(vcpu, i, rec->run->exit.gprs[i]); + + return kvm_psci_call(vcpu); +} + +static int rec_exit_ripas_change(struct kvm_vcpu *vcpu) +{ + struct realm *realm = &vcpu->kvm->arch.realm; + struct rec *rec = &vcpu->arch.rec; + unsigned long base = rec->run->exit.ripas_base; + unsigned long size = rec->run->exit.ripas_size; + unsigned long ripas = rec->run->exit.ripas_value & 1; + int ret = -EINVAL; + + if (realm_is_addr_protected(realm, base) && + realm_is_addr_protected(realm, base + size)) + ret = realm_set_ipa_state(vcpu, base, base + size, ripas); + + WARN(ret, "Unable to satisfy SET_IPAS for %#lx - %#lx, ripas: %#lx\n", + base, base + size, ripas); + + return 1; +} + +static void update_arch_timer_irq_lines(struct kvm_vcpu *vcpu) +{ + struct rec *rec = &vcpu->arch.rec; + + __vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = rec->run->exit.cntv_ctl; + __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = rec->run->exit.cntv_cval; + __vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = rec->run->exit.cntp_ctl; + __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = rec->run->exit.cntp_cval; + + kvm_realm_timers_update(vcpu); +} + +/* + * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on + * proper exit to userspace. + */ +int handle_rme_exit(struct kvm_vcpu *vcpu, int rec_run_ret) +{ + struct rec *rec = &vcpu->arch.rec; + u8 esr_ec = ESR_ELx_EC(rec->run->exit.esr); + unsigned long status, index; + + status = RMI_RETURN_STATUS(rec_run_ret); + index = RMI_RETURN_INDEX(rec_run_ret); + + /* + * If a PSCI_SYSTEM_OFF request raced with a vcpu executing, we might + * see the following status code and index indicating an attempt to run + * a REC when the RD state is SYSTEM_OFF. 
In this case, we just need to + * return to user space which can deal with the system event or will try + * to run the KVM VCPU again, at which point we will no longer attempt + * to enter the Realm because we will have a sleep request pending on + * the VCPU as a result of KVM's PSCI handling. + */ + if (status == RMI_ERROR_REALM && index == 1) { + vcpu->run->exit_reason = KVM_EXIT_UNKNOWN; + return 0; + } + + if (rec_run_ret) + return -ENXIO; + + vcpu->arch.fault.esr_el2 = rec->run->exit.esr; + vcpu->arch.fault.far_el2 = rec->run->exit.far; + vcpu->arch.fault.hpfar_el2 = rec->run->exit.hpfar; + + update_arch_timer_irq_lines(vcpu); + + /* Reset the emulation flags for the next run of the REC */ + rec->run->entry.flags = 0; + + switch (rec->run->exit.exit_reason) { + case RMI_EXIT_SYNC: + return rec_exit_handlers[esr_ec](vcpu); + case RMI_EXIT_IRQ: + case RMI_EXIT_FIQ: + return 1; + case RMI_EXIT_PSCI: + return rec_exit_psci(vcpu); + case RMI_EXIT_RIPAS_CHANGE: + return rec_exit_ripas_change(vcpu); + } + + kvm_pr_unimpl("Unsupported exit reason: %u\n", + rec->run->exit.exit_reason); + vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; + return 0; +} diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index b3ea79189839..16e0bfea98b1 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -802,6 +802,17 @@ void kvm_destroy_realm(struct kvm *kvm) kvm_free_stage2_pgd(&kvm->arch.mmu); } +int kvm_rec_enter(struct kvm_vcpu *vcpu) +{ + struct rec *rec = &vcpu->arch.rec; + + if (kvm_realm_state(vcpu->kvm) != REALM_STATE_ACTIVE) + return -EINVAL; + + return rmi_rec_enter(virt_to_phys(rec->rec_page), + virt_to_phys(rec->run)); +} + static void free_rec_aux(struct page **aux_pages, unsigned int num_aux) { From patchwork Fri Jan 27 11:29:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49259 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783301wrn; Fri, 27 Jan 2023 03:33:28 -0800 (PST) X-Google-Smtp-Source: AK7set8v3OoTOgWD3gtB4qUjenbAhNqEfd211OOcKXtD8wotoDUNBDa55PpNbuj3KtnChOAxuj+K X-Received: by 2002:a17:90b:1b0f:b0:228:f21b:a3ff with SMTP id nu15-20020a17090b1b0f00b00228f21ba3ffmr6133773pjb.42.1674819207945; Fri, 27 Jan 2023 03:33:27 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819207; cv=none; d=google.com; s=arc-20160816; b=dupd6pNoHQ6qQFrfsAsRqdoxIraRwVf+s+/ldisPPYB0jIFBBADMhgYOnaEuPUYy1v V4+KdADkyfui16B09Sj9IwCJTjGS7/c3O7KGdlZalCYK0UvxUdiyEd1xLPPjYS/HcHBm Tln2HuyMb+LbmPseGk/jB1jj9OC/uAno+h0KCOLhtfxMxAxpMA9nNO6t8XNlSCCMg5bl pXKK59mwEtTzzwveB3yCNq3F1K1/SqzaK6S5yr5SlEUabaYNeTHYKMpq4nCKdHXtxqNh VHoZ9bV6y3p3blWl+On1VM7zTSQZIpqDH70pVWEEORq66IO3ereptioPHSBW2cH5ue6T BV+g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=Y9FNcCprYy/jdopUetMwcjSObdzGeE+b/o893YquEJc=; b=tGqbob4Qxl7fwlY8fY/sgzfBfmW6SArPu78l0NMm6HuDjK6RruBV/yV/71NXf3n6h7 +aErjBv0ss4cMe05Q79rJzffCQVx2AetagA+ku1Tn8AWd1mlcyb/G5tYLZv55yJj8Hld TVNihe/EMV971wYR21FOBgex5D14OIQrNlq6/Tm2e3KEy+O5G20EFKGbI4Xjogm2F3oZ N25nIGkEI+mNdPnS7dec/KFs4Wb5J0Efy2BxfKlr8ikD+GO1y02n/VkE1ufA8u3U94uP r19ynA0tF9Z/QGf+qSst/OyYWb2HfzqhQD2trvbOj7RczzGtMDF92F0XTJZcnNzZjFXe JQ1A== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
 Oliver Upton, Suzuki K Poulose, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev
Subject: [RFC PATCH 15/28] KVM: arm64: Handle realm MMIO emulation
Date: Fri, 27 Jan 2023 11:29:19 +0000
Message-Id: <20230127112932.38045-16-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>
References: <20230127112248.136810-1-suzuki.poulose@arm.com>
 <20230127112932.38045-1-steven.price@arm.com>

MMIO emulation for a realm cannot be done directly with the VM's
registers as they are protected from the host. However the RMM interface
provides a structure member for providing the read/written value and
we can transfer this to the appropriate VCPU's register entry and then
depend on the generic MMIO handling code in KVM.
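From the VMM's point of view nothing changes: a realm MMIO access still
surfaces as an ordinary KVM_EXIT_MMIO. A hypothetical userspace handler
using only the standard KVM uapi (the device and its constant read value
are invented for the example):

#include <stdint.h>
#include <string.h>
#include <linux/kvm.h>

static void handle_mmio_exit(struct kvm_run *run)
{
	if (run->exit_reason != KVM_EXIT_MMIO)
		return;

	if (run->mmio.is_write) {
		/* The value the guest wrote, forwarded from the REC exit GPRs. */
		/* ... consume run->mmio.data[0..len-1] here ... */
	} else {
		/* Hypothetical device: return a constant for any read. */
		uint64_t val = 0xfeedf00d;

		memcpy(run->mmio.data, &val, run->mmio.len);
	}
}

On the kernel side the only realm-specific steps are copying the
completed read value into rec.run->entry.gprs[0] and setting the
RMI_EMULATED_MMIO entry flag before re-entering the realm, as the diff
below does.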
Signed-off-by: Steven Price
---
 arch/arm64/kvm/mmio.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
index 3dd38a151d2a..c4879fa3a8d3 100644
--- a/arch/arm64/kvm/mmio.c
+++ b/arch/arm64/kvm/mmio.c
@@ -6,6 +6,7 @@
 #include
 #include
+#include
 #include
 #include "trace.h"
@@ -109,6 +110,9 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu)
 			       &data);
 		data = vcpu_data_host_to_guest(vcpu, data, len);
 		vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu), data);
+
+		if (vcpu_is_rec(vcpu))
+			vcpu->arch.rec.run->entry.gprs[0] = data;
 	}

 	/*
@@ -179,6 +183,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 	run->mmio.len = len;
 	vcpu->mmio_needed = 1;

+	if (vcpu_is_rec(vcpu))
+		vcpu->arch.rec.run->entry.flags |= RMI_EMULATED_MMIO;
+
 	if (!ret) {
 		/* We handled the access successfully in the kernel. */
 		if (!is_write)

From patchwork Fri Jan 27 11:29:20 2023
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 49269
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
 Oliver Upton, Suzuki K Poulose, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev
Subject: [RFC PATCH 16/28] arm64: RME: Allow populating initial contents
Date: Fri, 27 Jan 2023 11:29:20 +0000
Message-Id: <20230127112932.38045-17-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>
References: <20230127112248.136810-1-suzuki.poulose@arm.com>
 <20230127112932.38045-1-steven.price@arm.com>

The VMM needs to populate the realm with some data before starting (e.g.
a kernel and initrd). This is measured by the RMM and used as part of the
attestation later on.
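A hypothetical VMM-side sketch of the expected ordering - stage the
payload in the memslot's normal-world backing memory first, then ask KVM
to populate (and therefore measure) that IPA range. The KVM_CAP_ARM_RME*
constants come from the series' uapi additions, the struct is a stand-in
mirroring the fields kvm_populate_realm() reads, and 4K pages are
assumed:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Stand-in for the series' uapi struct. */
struct kvm_cap_arm_rme_populate_realm_args {
	__u64 populate_ipa_base;
	__u64 populate_ipa_size;
};

/* guest_ram is the mmap()ed address registered for the memslot at ipa_base. */
static int realm_populate(int vm_fd, void *guest_ram, __u64 ipa_base,
			  const void *image, size_t image_size)
{
	struct kvm_cap_arm_rme_populate_realm_args args = {
		.populate_ipa_base = ipa_base,
		/* Round up to the (assumed 4K) page size the handler requires. */
		.populate_ipa_size = (image_size + 4095) & ~4095ULL,
	};
	struct kvm_enable_cap cap;

	/* The RMM copies and measures whatever is staged here. */
	memcpy(guest_ram, image, image_size);

	memset(&cap, 0, sizeof(cap));
	cap.cap     = KVM_CAP_ARM_RME;			/* series' uapi */
	cap.args[0] = KVM_CAP_ARM_RME_POPULATE_REALM;	/* sub-command */
	cap.args[1] = (__u64)(unsigned long)&args;

	/* Only valid while the realm is still NEW, i.e. before activation. */
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

Changing the staged payload changes the resulting measurement, which is
what the attestation later exposes.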
Signed-off-by: Steven Price --- arch/arm64/kvm/rme.c | 366 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 366 insertions(+) diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index 16e0bfea98b1..3405b43e1421 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -4,6 +4,7 @@ */ #include +#include #include #include @@ -426,6 +427,359 @@ void kvm_realm_unmap_range(struct kvm *kvm, unsigned long ipa, u64 size) } } +static int realm_create_protected_data_page(struct realm *realm, + unsigned long ipa, + struct page *dst_page, + struct page *tmp_page) +{ + phys_addr_t dst_phys, tmp_phys; + int ret; + + copy_page(page_address(tmp_page), page_address(dst_page)); + + dst_phys = page_to_phys(dst_page); + tmp_phys = page_to_phys(tmp_page); + + if (rmi_granule_delegate(dst_phys)) + return -ENXIO; + + ret = rmi_data_create(dst_phys, virt_to_phys(realm->rd), ipa, tmp_phys, + RMI_MEASURE_CONTENT); + + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) { + /* Create missing RTTs and retry */ + int level = RMI_RETURN_INDEX(ret); + + ret = realm_create_rtt_levels(realm, ipa, level, + RME_RTT_MAX_LEVEL, NULL); + if (ret) + goto err; + + ret = rmi_data_create(dst_phys, virt_to_phys(realm->rd), ipa, + tmp_phys, RMI_MEASURE_CONTENT); + } + + if (ret) + goto err; + + return 0; + +err: + if (WARN_ON(rmi_granule_undelegate(dst_phys))) { + /* Page can't be returned to NS world so is lost */ + get_page(dst_page); + } + return -ENXIO; +} + +static int fold_rtt(phys_addr_t rd, unsigned long addr, int level, + struct realm *realm) +{ + struct rtt_entry rtt; + phys_addr_t rtt_addr; + + if (rmi_rtt_read_entry(rd, addr, level, &rtt)) + return -ENXIO; + + if (rtt.state != RMI_TABLE) + return -EINVAL; + + rtt_addr = rmi_rtt_get_phys(&rtt); + if (rmi_rtt_fold(rtt_addr, rd, addr, level + 1)) + return -ENXIO; + + free_delegated_page(realm, rtt_addr); + + return 0; +} + +int realm_map_protected(struct realm *realm, + unsigned long hva, + unsigned long base_ipa, + struct page *dst_page, + unsigned long map_size, + struct kvm_mmu_memory_cache *memcache) +{ + phys_addr_t dst_phys = page_to_phys(dst_page); + phys_addr_t rd = virt_to_phys(realm->rd); + unsigned long phys = dst_phys; + unsigned long ipa = base_ipa; + unsigned long size; + int map_level; + int ret = 0; + + if (WARN_ON(!IS_ALIGNED(ipa, map_size))) + return -EINVAL; + + switch (map_size) { + case PAGE_SIZE: + map_level = 3; + break; + case RME_L2_BLOCK_SIZE: + map_level = 2; + break; + default: + return -EINVAL; + } + + if (map_level < RME_RTT_MAX_LEVEL) { + /* + * A temporary RTT is needed during the map, precreate it, + * however if there is an error (e.g. missing parent tables) + * this will be handled below. + */ + realm_create_rtt_levels(realm, ipa, map_level, + RME_RTT_MAX_LEVEL, memcache); + } + + for (size = 0; size < map_size; size += PAGE_SIZE) { + if (rmi_granule_delegate(phys)) { + struct rtt_entry rtt; + + /* + * It's possible we raced with another VCPU on the same + * fault. If the entry exists and matches then exit + * early and assume the other VCPU will handle the + * mapping. + */ + if (rmi_rtt_read_entry(rd, ipa, RME_RTT_MAX_LEVEL, &rtt)) + goto err; + + // FIXME: For a block mapping this could race at level + // 2 or 3... 
+ if (WARN_ON((rtt.walk_level != RME_RTT_MAX_LEVEL || + rtt.state != RMI_ASSIGNED || + rtt.desc != phys))) { + goto err; + } + + return 0; + } + + ret = rmi_data_create_unknown(phys, rd, ipa); + + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) { + /* Create missing RTTs and retry */ + int level = RMI_RETURN_INDEX(ret); + + ret = realm_create_rtt_levels(realm, ipa, level, + RME_RTT_MAX_LEVEL, + memcache); + WARN_ON(ret); + if (ret) + goto err_undelegate; + + ret = rmi_data_create_unknown(phys, rd, ipa); + } + WARN_ON(ret); + + if (ret) + goto err_undelegate; + + phys += PAGE_SIZE; + ipa += PAGE_SIZE; + } + + if (map_size == RME_L2_BLOCK_SIZE) + ret = fold_rtt(rd, base_ipa, map_level, realm); + if (WARN_ON(ret)) + goto err; + + return 0; + +err_undelegate: + if (WARN_ON(rmi_granule_undelegate(phys))) { + /* Page can't be returned to NS world so is lost */ + get_page(phys_to_page(phys)); + } +err: + while (size > 0) { + phys -= PAGE_SIZE; + size -= PAGE_SIZE; + ipa -= PAGE_SIZE; + + rmi_data_destroy(rd, ipa); + + if (WARN_ON(rmi_granule_undelegate(phys))) { + /* Page can't be returned to NS world so is lost */ + get_page(phys_to_page(phys)); + } + } + return -ENXIO; +} + +static int populate_par_region(struct kvm *kvm, + phys_addr_t ipa_base, + phys_addr_t ipa_end) +{ + struct realm *realm = &kvm->arch.realm; + struct kvm_memory_slot *memslot; + gfn_t base_gfn, end_gfn; + int idx; + phys_addr_t ipa; + int ret = 0; + struct page *tmp_page; + phys_addr_t rd = virt_to_phys(realm->rd); + + base_gfn = gpa_to_gfn(ipa_base); + end_gfn = gpa_to_gfn(ipa_end); + + idx = srcu_read_lock(&kvm->srcu); + memslot = gfn_to_memslot(kvm, base_gfn); + if (!memslot) { + ret = -EFAULT; + goto out; + } + + /* We require the region to be contained within a single memslot */ + if (memslot->base_gfn + memslot->npages < end_gfn) { + ret = -EINVAL; + goto out; + } + + tmp_page = alloc_page(GFP_KERNEL); + if (!tmp_page) { + ret = -ENOMEM; + goto out; + } + + mmap_read_lock(current->mm); + + ipa = ipa_base; + + while (ipa < ipa_end) { + struct vm_area_struct *vma; + unsigned long map_size; + unsigned int vma_shift; + unsigned long offset; + unsigned long hva; + struct page *page; + kvm_pfn_t pfn; + int level; + + hva = gfn_to_hva_memslot(memslot, gpa_to_gfn(ipa)); + vma = vma_lookup(current->mm, hva); + if (!vma) { + ret = -EFAULT; + break; + } + + if (is_vm_hugetlb_page(vma)) + vma_shift = huge_page_shift(hstate_vma(vma)); + else + vma_shift = PAGE_SHIFT; + + map_size = 1 << vma_shift; + + /* + * FIXME: This causes over mapping, but there's no good + * solution here with the ABI as it stands + */ + ipa = ALIGN_DOWN(ipa, map_size); + + switch (map_size) { + case RME_L2_BLOCK_SIZE: + level = 2; + break; + case PAGE_SIZE: + level = 3; + break; + default: + WARN_ONCE(1, "Unsupport vma_shift %d", vma_shift); + ret = -EFAULT; + break; + } + + pfn = gfn_to_pfn_memslot(memslot, gpa_to_gfn(ipa)); + + if (is_error_pfn(pfn)) { + ret = -EFAULT; + break; + } + + ret = rmi_rtt_init_ripas(rd, ipa, level); + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) { + ret = realm_create_rtt_levels(realm, ipa, + RMI_RETURN_INDEX(ret), + level, NULL); + if (ret) + break; + ret = rmi_rtt_init_ripas(rd, ipa, level); + if (ret) { + ret = -ENXIO; + break; + } + } + + if (level < RME_RTT_MAX_LEVEL) { + /* + * A temporary RTT is needed during the map, precreate + * it, however if there is an error (e.g. missing + * parent tables) this will be handled in the + * realm_create_protected_data_page() call. 
+ */ + realm_create_rtt_levels(realm, ipa, level, + RME_RTT_MAX_LEVEL, NULL); + } + + page = pfn_to_page(pfn); + + for (offset = 0; offset < map_size && !ret; + offset += PAGE_SIZE, page++) { + phys_addr_t page_ipa = ipa + offset; + + ret = realm_create_protected_data_page(realm, page_ipa, + page, tmp_page); + } + if (ret) + goto err_release_pfn; + + if (level == 2) { + ret = fold_rtt(rd, ipa, level, realm); + if (ret) + goto err_release_pfn; + } + + ipa += map_size; + kvm_set_pfn_accessed(pfn); + kvm_set_pfn_dirty(pfn); + kvm_release_pfn_dirty(pfn); +err_release_pfn: + if (ret) { + kvm_release_pfn_clean(pfn); + break; + } + } + + mmap_read_unlock(current->mm); + __free_page(tmp_page); + +out: + srcu_read_unlock(&kvm->srcu, idx); + return ret; +} + +static int kvm_populate_realm(struct kvm *kvm, + struct kvm_cap_arm_rme_populate_realm_args *args) +{ + phys_addr_t ipa_base, ipa_end; + + if (kvm_realm_state(kvm) != REALM_STATE_NEW) + return -EBUSY; + + if (!IS_ALIGNED(args->populate_ipa_base, PAGE_SIZE) || + !IS_ALIGNED(args->populate_ipa_size, PAGE_SIZE)) + return -EINVAL; + + ipa_base = args->populate_ipa_base; + ipa_end = ipa_base + args->populate_ipa_size; + + if (ipa_end < ipa_base) + return -EINVAL; + + return populate_par_region(kvm, ipa_base, ipa_end); +} + static int set_ipa_state(struct kvm_vcpu *vcpu, unsigned long ipa, unsigned long end, @@ -748,6 +1102,18 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap) r = kvm_init_ipa_range_realm(kvm, &args); break; } + case KVM_CAP_ARM_RME_POPULATE_REALM: { + struct kvm_cap_arm_rme_populate_realm_args args; + void __user *argp = u64_to_user_ptr(cap->args[1]); + + if (copy_from_user(&args, argp, sizeof(args))) { + r = -EFAULT; + break; + } + + r = kvm_populate_realm(kvm, &args); + break; + } default: r = -EINVAL; break; From patchwork Fri Jan 27 11:29:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49262 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783396wrn; Fri, 27 Jan 2023 03:33:39 -0800 (PST) X-Google-Smtp-Source: AMrXdXuENhc3PbtEiz+ouzDGRBxS0mPKgseAY1dyF0Vod486jiVAxlDt12ryQOLfMkm9ikOYqYlY X-Received: by 2002:a17:902:e846:b0:194:df3e:51b3 with SMTP id t6-20020a170902e84600b00194df3e51b3mr32283891plg.26.1674819219548; Fri, 27 Jan 2023 03:33:39 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819219; cv=none; d=google.com; s=arc-20160816; b=dRvPp/Ho5/RI/2RpFaDwbTjG8PJ45fj9o6r9S2MCK9rvDa63EIHoFn9DwLrEwN501V HnJatrLpeoZxrMziIpOeRYnQSKHBD2weB0qLfPOPGw7bmzJyQJKhIbPiUnHLoMAaqmJd s841VHXNn64jUFFKpoP6W1Ntm0d2vXaAOWxNdiqvATQR1q42sgAee/jP8xOhOysXSB5m QnEWvovG8M+X1RQaOieL3m3N5YJO2mQOa0E3bDansAavAeMgqUBQOWaiiKyObNpzjLqE /haN50Zxt2+HTDesGj+YHAvYGoLe3bBjqHaVSh5YKb5a2cf07/26jxnOZ8tdV1TB1e0T c+gw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=gQREJThgEYWKR5xO69Xn8FGoEUpYeZslnKB3kWcslwg=; b=HuEoC5c36iyZJpGN+znCYxMbTq8uYJaDCThanfqlfBW4nhwHCwTsjTsgemKab/TbHx xMA3XBFw2NnFxqVy/SrGmahOleYDQhKPl24ZRVERo23AVC142aV0FZ4ns3zNKnpxviiT 2OCRLELTpkDjLU1xnwAzX65ndH8WBVXZ60wVbs7THMH0p7+HsxSxvHDecwGSZBSb2f7l D8q9SG1w2YJY3owBbyBNp0OtKeYHQ9YcLvd3GMayQkoffBMo3eeduJ0div8/BUJ7KY5T UaxSepWxT2nsC5G+y776s2MxpRZzQh8XCzG4bqDZyA6TtJTCEZXHNW7N4ikIEQxSt3gx FWNQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
 Oliver Upton, Suzuki K Poulose, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev
Subject: [RFC PATCH 17/28] arm64: RME: Runtime faulting of memory
Date: Fri, 27 Jan 2023 11:29:21 +0000
Message-Id: <20230127112932.38045-18-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>
References: <20230127112248.136810-1-suzuki.poulose@arm.com>
 <20230127112932.38045-1-steven.price@arm.com>

At runtime if the realm guest accesses memory which hasn't yet been
mapped then KVM needs to either populate the region or fault the guest.
For memory in the lower (protected) region of IPA a fresh page is
provided to the RMM which will zero the contents. For memory in the
upper (shared) region of IPA, the memory from the memslot is mapped
into the realm VM non secure.
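Concretely the split "steals" the top bit of the realm's IPA space:
addresses with that bit clear are protected, the alias with the bit set
is the shared/unprotected view, and both resolve to the same memslot gfn
once the bit is masked off. A standalone worked example (the 40-bit IPA
size and 4K pages are assumptions for the illustration, mirroring
kvm_gpa_stolen_bits() and realm_is_addr_protected()):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t gpa_stolen_mask(unsigned int ia_bits)
{
	/* Top bit of the guest's IPA space marks the shared alias. */
	return 1ULL << (ia_bits - 1);
}

static bool ipa_is_protected(uint64_t ipa, unsigned int ia_bits)
{
	/* Protected addresses have no bits set at or above the stolen bit. */
	return !(ipa & ~(gpa_stolen_mask(ia_bits) - 1));
}

int main(void)
{
	unsigned int ia_bits = 40;
	uint64_t prot_ipa = 0x8000000;			/* 128MiB, lower half */
	uint64_t shared_ipa = prot_ipa | gpa_stolen_mask(ia_bits);
	uint64_t gfn = (shared_ipa & ~gpa_stolen_mask(ia_bits)) >> 12;

	printf("protected? %d / %d, common gfn %#llx\n",
	       ipa_is_protected(prot_ipa, ia_bits),
	       ipa_is_protected(shared_ipa, ia_bits),
	       (unsigned long long)gfn);
	return 0;
}

This is why both user_mem_abort() and kvm_handle_guest_abort() below mask
the stolen bits off the faulting IPA before looking up the gfn/memslot,
while the protected check decides whether the page is delegated to the
RMM or mapped non-secure.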
Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_emulate.h | 10 +++++ arch/arm64/include/asm/kvm_rme.h | 12 ++++++ arch/arm64/kvm/mmu.c | 64 +++++++++++++++++++++++++--- arch/arm64/kvm/rme.c | 48 +++++++++++++++++++++ 4 files changed, 128 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 285e62914ca4..3a71b3d2e10a 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -502,6 +502,16 @@ static inline enum realm_state kvm_realm_state(struct kvm *kvm) return READ_ONCE(kvm->arch.realm.state); } +static inline gpa_t kvm_gpa_stolen_bits(struct kvm *kvm) +{ + if (kvm_is_realm(kvm)) { + struct realm *realm = &kvm->arch.realm; + + return BIT(realm->ia_bits - 1); + } + return 0; +} + static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu) { if (static_branch_unlikely(&kvm_rme_is_available)) diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h index 9d1583c44a99..303e4a5e5704 100644 --- a/arch/arm64/include/asm/kvm_rme.h +++ b/arch/arm64/include/asm/kvm_rme.h @@ -50,6 +50,18 @@ void kvm_destroy_rec(struct kvm_vcpu *vcpu); int kvm_rec_enter(struct kvm_vcpu *vcpu); int handle_rme_exit(struct kvm_vcpu *vcpu, int rec_run_status); +void kvm_realm_unmap_range(struct kvm *kvm, unsigned long ipa, u64 size); +int realm_map_protected(struct realm *realm, + unsigned long hva, + unsigned long base_ipa, + struct page *dst_page, + unsigned long map_size, + struct kvm_mmu_memory_cache *memcache); +int realm_map_non_secure(struct realm *realm, + unsigned long ipa, + struct page *page, + unsigned long map_size, + struct kvm_mmu_memory_cache *memcache); int realm_set_ipa_state(struct kvm_vcpu *vcpu, unsigned long addr, unsigned long end, unsigned long ripas); diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index f29558c5dcbc..5417c273861b 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -235,8 +235,13 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 lockdep_assert_held_write(&kvm->mmu_lock); WARN_ON(size & ~PAGE_MASK); - WARN_ON(stage2_apply_range(kvm, start, end, kvm_pgtable_stage2_unmap, - may_block)); + + if (kvm_is_realm(kvm)) + kvm_realm_unmap_range(kvm, start, size); + else + WARN_ON(stage2_apply_range(kvm, start, end, + kvm_pgtable_stage2_unmap, + may_block)); } static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size) @@ -250,7 +255,11 @@ static void stage2_flush_memslot(struct kvm *kvm, phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT; phys_addr_t end = addr + PAGE_SIZE * memslot->npages; - stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_flush); + if (kvm_is_realm(kvm)) + kvm_realm_unmap_range(kvm, addr, end - addr); + else + stage2_apply_range_resched(kvm, addr, end, + kvm_pgtable_stage2_flush); } /** @@ -818,6 +827,10 @@ void stage2_unmap_vm(struct kvm *kvm) struct kvm_memory_slot *memslot; int idx, bkt; + /* For realms this is handled by the RMM so nothing to do here */ + if (kvm_is_realm(kvm)) + return; + idx = srcu_read_lock(&kvm->srcu); mmap_read_lock(current->mm); write_lock(&kvm->mmu_lock); @@ -840,6 +853,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu) pgt = mmu->pgt; if (kvm_is_realm(kvm) && kvm_realm_state(kvm) != REALM_STATE_DYING) { + unmap_stage2_range(mmu, 0, (~0ULL) & PAGE_MASK); write_unlock(&kvm->mmu_lock); kvm_realm_destroy_rtts(&kvm->arch.realm, pgt->ia_bits, pgt->start_level); @@ -1190,6 +1204,24 @@ static bool 
kvm_vma_mte_allowed(struct vm_area_struct *vma) return vma->vm_flags & VM_MTE_ALLOWED; } +static int realm_map_ipa(struct kvm *kvm, phys_addr_t ipa, unsigned long hva, + kvm_pfn_t pfn, unsigned long map_size, + enum kvm_pgtable_prot prot, + struct kvm_mmu_memory_cache *memcache) +{ + struct realm *realm = &kvm->arch.realm; + struct page *page = pfn_to_page(pfn); + + if (WARN_ON(!(prot & KVM_PGTABLE_PROT_W))) + return -EFAULT; + + if (!realm_is_addr_protected(realm, ipa)) + return realm_map_non_secure(realm, ipa, page, map_size, + memcache); + + return realm_map_protected(realm, hva, ipa, page, map_size, memcache); +} + static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, struct kvm_memory_slot *memslot, unsigned long hva, unsigned long fault_status) @@ -1210,9 +1242,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, unsigned long vma_pagesize, fault_granule; enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R; struct kvm_pgtable *pgt; + gpa_t gpa_stolen_mask = kvm_gpa_stolen_bits(vcpu->kvm); fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level); write_fault = kvm_is_write_fault(vcpu); + + /* Realms cannot map read-only */ + if (vcpu_is_rec(vcpu)) + write_fault = true; + exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu); VM_BUG_ON(write_fault && exec_fault); @@ -1272,7 +1310,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) fault_ipa &= ~(vma_pagesize - 1); - gfn = fault_ipa >> PAGE_SHIFT; + gfn = (fault_ipa & ~gpa_stolen_mask) >> PAGE_SHIFT; mmap_read_unlock(current->mm); /* @@ -1345,7 +1383,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, * If we are not forced to use page mapping, check if we are * backed by a THP and thus use block mapping if possible. */ - if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) { + /* FIXME: We shouldn't need to disable this for realms */ + if (vma_pagesize == PAGE_SIZE && !(force_pte || device || kvm_is_realm(kvm))) { if (fault_status == FSC_PERM && fault_granule > PAGE_SIZE) vma_pagesize = fault_granule; else @@ -1382,6 +1421,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, */ if (fault_status == FSC_PERM && vma_pagesize == fault_granule) ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot); + else if (kvm_is_realm(kvm)) + ret = realm_map_ipa(kvm, fault_ipa, hva, pfn, vma_pagesize, + prot, memcache); else ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize, __pfn_to_phys(pfn), prot, @@ -1437,6 +1479,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu) struct kvm_memory_slot *memslot; unsigned long hva; bool is_iabt, write_fault, writable; + gpa_t gpa_stolen_mask = kvm_gpa_stolen_bits(vcpu->kvm); gfn_t gfn; int ret, idx; @@ -1491,7 +1534,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu) idx = srcu_read_lock(&vcpu->kvm->srcu); - gfn = fault_ipa >> PAGE_SHIFT; + gfn = (fault_ipa & ~gpa_stolen_mask) >> PAGE_SHIFT; memslot = gfn_to_memslot(vcpu->kvm, gfn); hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable); write_fault = kvm_is_write_fault(vcpu); @@ -1536,6 +1579,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu) * of the page size. 
*/ fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1); + fault_ipa &= ~gpa_stolen_mask; ret = io_mem_abort(vcpu, fault_ipa); goto out_unlock; } @@ -1617,6 +1661,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (!kvm->arch.mmu.pgt) return false; + /* We don't support aging for Realms */ + if (kvm_is_realm(kvm)) + return true; + WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE); kpte = kvm_pgtable_stage2_mkold(kvm->arch.mmu.pgt, @@ -1630,6 +1678,10 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (!kvm->arch.mmu.pgt) return false; + /* We don't support aging for Realms */ + if (kvm_is_realm(kvm)) + return true; + return kvm_pgtable_stage2_is_young(kvm->arch.mmu.pgt, range->start << PAGE_SHIFT); } diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index 3405b43e1421..3d46191798e5 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -608,6 +608,54 @@ int realm_map_protected(struct realm *realm, return -ENXIO; } +int realm_map_non_secure(struct realm *realm, + unsigned long ipa, + struct page *page, + unsigned long map_size, + struct kvm_mmu_memory_cache *memcache) +{ + phys_addr_t rd = virt_to_phys(realm->rd); + int map_level; + int ret = 0; + unsigned long desc = page_to_phys(page) | + PTE_S2_MEMATTR(MT_S2_FWB_NORMAL) | + /* FIXME: Read+Write permissions for now */ + (3 << 6) | + PTE_SHARED; + + if (WARN_ON(!IS_ALIGNED(ipa, map_size))) + return -EINVAL; + + switch (map_size) { + case PAGE_SIZE: + map_level = 3; + break; + case RME_L2_BLOCK_SIZE: + map_level = 2; + break; + default: + return -EINVAL; + } + + ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc); + + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) { + /* Create missing RTTs and retry */ + int level = RMI_RETURN_INDEX(ret); + + ret = realm_create_rtt_levels(realm, ipa, level, map_level, + memcache); + if (WARN_ON(ret)) + return -ENXIO; + + ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc); + } + if (WARN_ON(ret)) + return -ENXIO; + + return 0; +} + static int populate_par_region(struct kvm *kvm, phys_addr_t ipa_base, phys_addr_t ipa_end) From patchwork Fri Jan 27 11:29:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49263 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783431wrn; Fri, 27 Jan 2023 03:33:46 -0800 (PST) X-Google-Smtp-Source: AMrXdXuNN7vQbSNlINp9qhx5+pEl2K09Vn9KdgxQPN79jKqMRYMlRY99zc/W8oAi2oMMQai/9FNH X-Received: by 2002:a05:6a20:d2c6:b0:b8:5ab4:279c with SMTP id ir6-20020a056a20d2c600b000b85ab4279cmr41903596pzb.30.1674819225962; Fri, 27 Jan 2023 03:33:45 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819225; cv=none; d=google.com; s=arc-20160816; b=wPn13lxZr3L9trDUsk5eDgEBONoLvnrmOQUUUzjLar/zahGh7ZFQ45cyzostqNRpjs i/2lcRkpfHsxdqS7DqmS4Z7RUjZ90AozsIa14w9s+frYl/OTdMaFjSqAh6YgMk8weL3K BN1b9y6MHXsPEbnFTkH91WaZMFang7ct49LETQGH2Pt9XTvgJYsyaHADM1r65WJ3kMA1 TJMklRcR1/iUwZGSo96hVaBCRLlKUxKZ0koT01seidYUNOaelJR4mhtjO8hjQKNvHH5o tNA4lza5pWwD4ga3Cl+j5XWgYSRIdRcVg2lnK4Vhww11U26x+MMjF1ARnR/Ue3Q0PFWj fzyg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=OFOs0fxW6egAE3hhk4y//teQyeNTPZdogvEqaRcZbTg=; b=o9GpFO1jfzVeT3P8sNTkNeLangXYvx4rXL2/FsIDS4QzmOQCzFb+iKoAsJ2SS1QwwD 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 18/28] KVM: arm64: Handle realm VCPU load
Date: Fri, 27 Jan 2023 11:29:22 +0000
Message-Id: <20230127112932.38045-19-steven.price@arm.com>

When loading a realm VCPU much of the work is handled by the RMM so only
some of the actions are required. Rearrange kvm_arch_vcpu_load() slightly
so we can bail out early for a realm guest.
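As an illustration of the resulting ordering (all example_* names are
stand-ins, not kernel APIs): the state KVM still owns for a realm VCPU is
loaded first, and the function returns before touching state the RMM
manages:

#include <stdbool.h>

struct example_vcpu { bool is_rec; };

static void example_load_traps_vgic_timer(struct example_vcpu *v) { (void)v; }
static void example_request_steal_update(struct example_vcpu *v) { (void)v; }
static void example_load_sysregs_fp_pmu(struct example_vcpu *v)  { (void)v; }

static void example_vcpu_load(struct example_vcpu *v)
{
        example_load_traps_vgic_timer(v);  /* still needed for a realm */
        example_request_steal_update(v);   /* still needed for a realm */

        if (v->is_rec)
                return;                    /* the RMM owns the rest */

        example_load_sysregs_fp_pmu(v);    /* host-managed VMs only */
}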
Signed-off-by: Steven Price --- arch/arm64/kvm/arm.c | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index fd9e28f48903..46c152a9a150 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -451,19 +451,25 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) vcpu->cpu = cpu; + if (single_task_running()) + vcpu_clear_wfx_traps(vcpu); + else + vcpu_set_wfx_traps(vcpu); + kvm_vgic_load(vcpu); kvm_timer_vcpu_load(vcpu); + + if (kvm_arm_is_pvtime_enabled(&vcpu->arch)) + kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu); + + /* No additional state needs to be loaded on Realmed VMs */ + if (vcpu_is_rec(vcpu)) + return; + if (has_vhe()) kvm_vcpu_load_sysregs_vhe(vcpu); kvm_arch_vcpu_load_fp(vcpu); kvm_vcpu_pmu_restore_guest(vcpu); - if (kvm_arm_is_pvtime_enabled(&vcpu->arch)) - kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu); - - if (single_task_running()) - vcpu_clear_wfx_traps(vcpu); - else - vcpu_set_wfx_traps(vcpu); if (vcpu_has_ptrauth(vcpu)) vcpu_ptrauth_disable(vcpu); From patchwork Fri Jan 27 11:29:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49265 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783686wrn; Fri, 27 Jan 2023 03:34:19 -0800 (PST) X-Google-Smtp-Source: AK7set+t7DAOleNcaQWq0i7s9rJANwijQOqpJFlCMytOV793qyf9qgegn8EUYek0UfHUu1w000GO X-Received: by 2002:a17:90a:1d5:b0:22b:b78f:ba04 with SMTP id 21-20020a17090a01d500b0022bb78fba04mr6454351pjd.41.1674819259350; Fri, 27 Jan 2023 03:34:19 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819259; cv=none; d=google.com; s=arc-20160816; b=cGGanvxyGIGfrvyvw0XGMJdVFZgpaBKHFXPxCRB267rU7PuXaiVJBjdXsdBxgTsXbl K9E9XmwudyCAN+9ymkOTn//lGrwWTZVyzTnMG0OkjY0DiZFh1JNLBlzMu91qAcgTvUeB UiQmHbOnDzi14WkQdzTHfQJGueMBqCCxPPxE7wwWpTvRzid4OjYVW4ROiuOBDQgjduZ3 AjBG5SlIutV+kBEZBzBYONLC7PUe76Rh78KxxMhr7iR8cEQMu4jwPFTaCyCjxL7BHkq/ kGX/0rTiLf3Y27RC3UaaWc7z+tyINXxkLlMCG3p/FsKW7hPL0Qvhr7q3k8zM93mHm2ha Bftw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=hx4lGEnxyRw1X8co5vAAFFXfXxfDw6RoHbyfvpUGgE4=; b=nhJ4B5Wtd5X+dYOaUQrrk45RzKl8k9INjKuoa4bO0mgT72iArch7y9mCvM1chliye+ W75HBK2VCojyGv5swm5gqlf7VPhckU9rAgMWSG+FaTJhQhJZkoMmbWpgtBUn5CDHufDk y49rXUFZIGo6iVcqIeS/1F4ynz23C/C7hB935H4dCuHyQkKxelSkY5Qr7BOZEoQ02leE WG98rbH2ClbUfhbPaD2j54unJ3PeTEr725qHOCuAFSecZj9Tz6LW1R7WE9KrZ4WbwBt3 McjGmnfDFUwsQTo8FBsa3SOCpggoYpIenkGJ5nj0qw2qgZlOqbTb+9rZHAro6Z0CHnZZ nwmA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id bb24-20020a17090b009800b0021918bc9a47si4236502pjb.174.2023.01.27.03.33.37; Fri, 27 Jan 2023 03:34:19 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233670AbjA0LdS (ORCPT + 99 others); Fri, 27 Jan 2023 06:33:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45152 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233697AbjA0Lck (ORCPT ); Fri, 27 Jan 2023 06:32:40 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id D17506BBCA; Fri, 27 Jan 2023 03:31:14 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 578EC169E; Fri, 27 Jan 2023 03:31:13 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 55A0F3F64C; Fri, 27 Jan 2023 03:30:29 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 19/28] KVM: arm64: Validate register access for a Realm VM Date: Fri, 27 Jan 2023 11:29:23 +0000 Message-Id: <20230127112932.38045-20-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175279662422326?= X-GMAIL-MSGID: =?utf-8?q?1756175279662422326?= The RMM only allows setting the lower GPRS (x0-x7) and PC for a realm guest. Check this in kvm_arm_set_reg() so that the VMM can receive a suitable error return if other registers are accessed. Signed-off-by: Steven Price --- arch/arm64/kvm/guest.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 5626ddb540ce..93468bbfb50e 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -768,12 +768,38 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return kvm_arm_sys_reg_get_reg(vcpu, reg); } +/* + * The RMI ABI only enables setting the lower GPRs (x0-x7) and PC. 
+ * All other registers are reset to architectural or otherwise defined reset + * values by the RMM + */ +static bool validate_realm_set_reg(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + u64 off = core_reg_offset_from_id(reg->id); + + if ((reg->id & KVM_REG_ARM_COPROC_MASK) != KVM_REG_ARM_CORE) + return false; + + switch (off) { + case KVM_REG_ARM_CORE_REG(regs.regs[0]) ... + KVM_REG_ARM_CORE_REG(regs.regs[7]): + case KVM_REG_ARM_CORE_REG(regs.pc): + return true; + } + + return false; +} + int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { /* We currently use nothing arch-specific in upper 32 bits */ if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32) return -EINVAL; + if (kvm_is_realm(vcpu->kvm) && !validate_realm_set_reg(vcpu, reg)) + return -EINVAL; + switch (reg->id & KVM_REG_ARM_COPROC_MASK) { case KVM_REG_ARM_CORE: return set_core_reg(vcpu, reg); case KVM_REG_ARM_FW: From patchwork Fri Jan 27 11:29:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49299 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp788141wrn; Fri, 27 Jan 2023 03:46:12 -0800 (PST) X-Google-Smtp-Source: AMrXdXtRSGfIOuwJjveBrsrF6R1PZoSgehnQtSUXEAIHSNqqxlFACWyv5xa1nrZ85XBMnR5KvnCg X-Received: by 2002:a17:906:eb8e:b0:871:6b9d:dbc with SMTP id mh14-20020a170906eb8e00b008716b9d0dbcmr39753850ejb.21.1674819972324; Fri, 27 Jan 2023 03:46:12 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819972; cv=none; d=google.com; s=arc-20160816; b=aeCa/ZTLu2kyc6oLdWF6Kvo9UTMcKAm3aNkMDeCPwsvBkXS+pyf5W5zsjKTCYPBpAY d/or86T1FdS40s/45fcI2kW+Pz6BBf8irtkM0HmQzgBk3Q5HDew1WBeuejgoSPdoLIQv OiNV3/VRh7ZQpYlvwjsg1PZSC9rI/enNZ9X+QsFv+U4xEHpQCE9x3B11LhdR7e7niZ1d JyXBH6DpRBNtEP/nONsBjWj24X/tl0eyIKSBJ4i10yp6YLJ2Xl0LgNieTzpVW48EAl3b AnLfDwf7B3SygpcErVx0nvv9vXsLXjebey+8EztWi7BfPcafjHaPaysry2HbXZdB7RSU UE7w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=9980iIMZ5Z7kLzyRzssdFLgJOIY90SHDOCFWoTx/F/w=; b=PcRD2dG1NY4RzcP7IyckUJ6y0FCY2j/MV4aDUxxPJ4oLUD+sOQ4aZJIJyp63PgOTVh ZP/rKovWCWNjCBmPCXYlmp+aQ8wvm9slenZYHaEj6P8A+JJ2iJUnJ51kGSO+iaYpqIxL qN7EAFH/KqPSDan1XdYX/HgVyls0k5wpkZq7IK12obflQ2KjJHHrBGtol6F4RTFssRFm vyq/ywOAEbU6NbMD92nOrNM9PXtOibVkm3ZtzKMaiuxWe5DoaU9obsXGF7i0W1D93lJq ecUaEyjKi7aANjEvP2MInsokmf0+O9Qh+NgW47dwOX7qzAg1bTBY8Jz47dZCspxyEM13 AMkA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 20/28] KVM: arm64: Handle Realm PSCI requests
Date: Fri, 27 Jan 2023 11:29:24 +0000
Message-Id: <20230127112932.38045-21-steven.price@arm.com>

The RMM needs to be informed of the target REC when a PSCI call is made
with an MPIDR argument.
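For context, the PSCI calls that carry an MPIDR are CPU_ON and
AFFINITY_INFO. A rough sketch of the resolution step, using illustrative
example_* types and helpers rather than the kernel code:

#include <stddef.h>
#include <stdint.h>

struct example_vcpu { uint64_t mpidr; };

/* Stand-in for kvm_mpidr_to_vcpu(): find the VCPU with a matching MPIDR. */
static struct example_vcpu *example_mpidr_to_vcpu(struct example_vcpu *vcpus,
                                                  size_t nr, uint64_t mpidr)
{
        for (size_t i = 0; i < nr; i++)
                if (vcpus[i].mpidr == mpidr)
                        return &vcpus[i];
        return NULL;
}

static int example_affinity_info(struct example_vcpu *vcpus, size_t nr,
                                 uint64_t target_affinity,
                                 uint32_t lowest_affinity_level)
{
        struct example_vcpu *target;

        if (lowest_affinity_level != 0)   /* the RMM supports level 0 only */
                return -1;                /* PSCI_RET_INVALID_PARAMS */

        target = example_mpidr_to_vcpu(vcpus, nr, target_affinity);
        if (!target)
                return -1;

        /*
         * At this point the real code hands both the calling and the
         * target REC to the RMM (realm_psci_complete(), which calls
         * rmi_psci_complete()) so the RMM can finish the guest's request.
         */
        return 0;                         /* PSCI_RET_SUCCESS */
}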
Signed-off-by: Steven Price --- arch/arm64/include/asm/kvm_rme.h | 1 + arch/arm64/kvm/psci.c | 23 +++++++++++++++++++++++ arch/arm64/kvm/rme.c | 13 +++++++++++++ 3 files changed, 37 insertions(+) diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h index 303e4a5e5704..2254e28c855e 100644 --- a/arch/arm64/include/asm/kvm_rme.h +++ b/arch/arm64/include/asm/kvm_rme.h @@ -65,6 +65,7 @@ int realm_map_non_secure(struct realm *realm, int realm_set_ipa_state(struct kvm_vcpu *vcpu, unsigned long addr, unsigned long end, unsigned long ripas); +int realm_psci_complete(struct kvm_vcpu *calling, struct kvm_vcpu *target); #define RME_RTT_BLOCK_LEVEL 2 #define RME_RTT_MAX_LEVEL 3 diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c index 7fbc4c1b9df0..e2061cab9b26 100644 --- a/arch/arm64/kvm/psci.c +++ b/arch/arm64/kvm/psci.c @@ -76,6 +76,10 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu) */ if (!vcpu) return PSCI_RET_INVALID_PARAMS; + + if (vcpu_is_rec(vcpu)) + realm_psci_complete(source_vcpu, vcpu); + if (!kvm_arm_vcpu_stopped(vcpu)) { if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1) return PSCI_RET_ALREADY_ON; @@ -135,6 +139,25 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu) /* Ignore other bits of target affinity */ target_affinity &= target_affinity_mask; + if (vcpu_is_rec(vcpu)) { + struct kvm_vcpu *target_vcpu; + + /* RMM supports only zero affinity level */ + if (lowest_affinity_level != 0) + return PSCI_RET_INVALID_PARAMS; + + target_vcpu = kvm_mpidr_to_vcpu(kvm, target_affinity); + if (!target_vcpu) + return PSCI_RET_INVALID_PARAMS; + + /* + * Provide the references of running and target RECs to the RMM + * so that the RMM can complete the PSCI request. + */ + realm_psci_complete(vcpu, target_vcpu); + return PSCI_RET_SUCCESS; + } + /* * If one or more VCPU matching target affinity are running * then ON else OFF diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index 3d46191798e5..6ac50481a138 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -126,6 +126,19 @@ static void free_delegated_page(struct realm *realm, phys_addr_t phys) free_page((unsigned long)phys_to_virt(phys)); } +int realm_psci_complete(struct kvm_vcpu *calling, struct kvm_vcpu *target) +{ + int ret; + + ret = rmi_psci_complete(virt_to_phys(calling->arch.rec.rec_page), + virt_to_phys(target->arch.rec.rec_page)); + + if (ret) + return -EINVAL; + + return 0; +} + static void realm_destroy_undelegate_range(struct realm *realm, unsigned long ipa, unsigned long addr, From patchwork Fri Jan 27 11:29:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49264 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp783537wrn; Fri, 27 Jan 2023 03:34:01 -0800 (PST) X-Google-Smtp-Source: AK7set/nItxn9KHV28CZqi+RdUyvrHK7nNSczTKalCFhOMX/V1K/ao+iIWeEkzePrWaLXcI8Qqkg X-Received: by 2002:a17:902:d492:b0:196:3feb:1f1e with SMTP id c18-20020a170902d49200b001963feb1f1emr6522925plg.47.1674819241561; Fri, 27 Jan 2023 03:34:01 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819241; cv=none; d=google.com; s=arc-20160816; b=ndcPjdOvQJiNr3hVBMhWadoDgHI5KnZWxaJnwWhU5Ma6+wi+Pldl+mnDVBa0PoWqLL ObPskZRXBmrHiCc5HKanjxNTMzL4SFsFy3ipugQQKixSzMMgUBoLeiTsWDrPe3hhiWyV wR75G/mgsV0g5koP6Ucry9/DWczgyeIDrg/vkSg10DtB0w8AeiqfiWb9rI6oIQKcWrow 0m59uibUhnVDNATlI84g8if4L4tJel63i35djQ/FLpVtisqEjbdI7fFgOHhEl+jXbqSR 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 21/28] KVM: arm64: WARN on injected undef exceptions
Date: Fri, 27 Jan 2023 11:29:25 +0000
Message-Id: <20230127112932.38045-22-steven.price@arm.com>
X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175261171671403?= X-GMAIL-MSGID: =?utf-8?q?1756175261171671403?= The RMM doesn't allow injection of a undefined exception into a realm guest. Add a WARN to catch if this ever happens. Signed-off-by: Steven Price --- arch/arm64/kvm/inject_fault.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c index f32f4a2a347f..29966a3e5a71 100644 --- a/arch/arm64/kvm/inject_fault.c +++ b/arch/arm64/kvm/inject_fault.c @@ -175,6 +175,8 @@ void kvm_inject_size_fault(struct kvm_vcpu *vcpu) */ void kvm_inject_undefined(struct kvm_vcpu *vcpu) { + if (vcpu_is_rec(vcpu)) + WARN(1, "Cannot inject undefined exception into REC. Continuing with unknown behaviour"); if (vcpu_el1_is_32bit(vcpu)) inject_undef32(vcpu); else From patchwork Fri Jan 27 11:29:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49270 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp786796wrn; Fri, 27 Jan 2023 03:42:31 -0800 (PST) X-Google-Smtp-Source: AMrXdXuJR3Y1Lk4fZvePrDpq71ljMdpmuOHYfTni/lIVdc1VOztfc+xCwOqTmmLGyBBlnH820r1P X-Received: by 2002:a17:902:c948:b0:194:6414:12e1 with SMTP id i8-20020a170902c94800b00194641412e1mr53148712pla.25.1674819750803; Fri, 27 Jan 2023 03:42:30 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819750; cv=none; d=google.com; s=arc-20160816; b=Ub0X0ZaeiF29ctXP1Vt+ioQsHejnwWJE01ViNu1NuzvnRc+3Z8fvo1IoLFG8+C9oer LfbxwUV9oIObQKVHl70Pi+ktn8hDtsnEOaUoXJcDGk6+fnAcbBgvn/UXfJZnpvCzJn4A DuRabhMbLHQb3Vei4f6tBxvM6AVqiA6WVwbBqTEU5PWhwwP9nwCiKAUh1qmpKXdaRnes DOf6JOILbqiGysxNAvOoTNAKgl8pKfFuFHE5+JFquEaM08yRP4IsSvSdYwIsF8cylaMZ IDkyXZ6M99+mk5hcl//2A1D3ICdiGWbUXID0oxaH7R8VbiINHPJcSCfMnhvqJwv6aWNM ijXw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=q6aSkeHfa+QgDmcoSj2Xs/vTgTFVlKeTmNRgxwBzwS8=; b=ZmM9jNUzRbBuL4VFX34sib/Q6RbESlEIK3PowakySp+7NFSa5fRpAm6sW/xAtW5Eca FXAWA0UYGQWesbBFK/b2QUNGHPMvnHg+UstlxNtKroYVqNE9Mfzmxjm2hAd2i63RlCxf nOFCFPC23CfBS/NcremUpkDs6Nn6+Xv6ggm9zpd16fACF+o5ofVTI+pbVj3crBhnf2zj OpHHTZLPrB828R4IygGoJIDbKUoyT36HevYbokW/Rmlbv90e/eHFO1B65u7SFlAWHIJ0 aqD000reCsYu+e4Y928gi2ZqKjteXcA5staWk9y4/3grRNWHGzEmKWWYHu6m02cN9yyE SdSg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 22/28] arm64: Don't expose stolen time for realm guests
Date: Fri, 27 Jan 2023 11:29:26 +0000
Message-Id: <20230127112932.38045-23-steven.price@arm.com>

Exposing stolen time to a realm guest doesn't make much sense, and with
the ABI as it stands it is a footgun for the VMM: it makes fatal Granule
Protection Faults easy to trigger.
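For illustration, a VMM that probes the capability per VM (rather than on
/dev/kvm) now sees stolen time reported as unavailable for a realm. A
minimal sketch, assuming the VM fd supports KVM_CHECK_EXTENSION as it
does on arm64:

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static bool example_vm_has_steal_time(int vm_fd)
{
        /* After this change the per-VM answer is 0 for a realm. */
        return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_STEAL_TIME) > 0;
}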
Signed-off-by: Steven Price --- arch/arm64/kvm/arm.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 46c152a9a150..645df5968e1e 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -302,7 +302,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) r = system_supports_mte(); break; case KVM_CAP_STEAL_TIME: - r = kvm_arm_pvtime_supported(); + if (kvm && kvm_is_realm(kvm)) + r = 0; + else + r = kvm_arm_pvtime_supported(); break; case KVM_CAP_ARM_EL1_32BIT: r = cpus_have_const_cap(ARM64_HAS_32BIT_EL1); From patchwork Fri Jan 27 11:29:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49309 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp788811wrn; Fri, 27 Jan 2023 03:47:47 -0800 (PST) X-Google-Smtp-Source: AMrXdXuu43MsyK3FH93OofzgITjtEo5Ebq4PZoZq1uhEFvXv97OrWilLW61X20eoxFOR8I4R5sgL X-Received: by 2002:a17:907:d684:b0:870:4986:2ce with SMTP id wf4-20020a170907d68400b00870498602cemr48371487ejc.58.1674820066974; Fri, 27 Jan 2023 03:47:46 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674820066; cv=none; d=google.com; s=arc-20160816; b=nUFaZVzw+h0gaQ0mRgaW2+8PazYtmeqh8l4FkqoLJwqLdywnItTw+MduCgQ6YumQ/U P9BRBjwLmPGz/483lZdvr1TXYjeOc/n5hE1/M4cYtSSntQ3VD1siNQVdtfiJbfRrOd76 Dp4nGf9dUsJD1oiVSQB2A8aFCSwTAtjJFXFnPrwWSY+gfTztXbWZsC5shhvBlEySEwoF peTr0sStGKaGuNvZJtlhsrOVRQYQg8ggfxGnixe0eUeNSgR3ll4gP4NR7aZ7Iq+uLUwO 0nZuAMn/UUY6chqvNC8Eskj9nRnG6dV9AU8uzL5ZaCGuck2ek4jlvJ7wGyxe8TW+nv0M S0QQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=hJSLkRhXrr/7kgcbSZHuw7AJYo9s/Io1J/Zgt7kQ7kY=; b=ySKsA/kghsXJU0/et5qyTc0eSA0YUvZah/yp2Z2aMDmVTzoMG9o13eQKBtjcVmXxhl ldTBQkZdkJZiGSPlaalPZLH0zNz+HlGKJ9pnf7dbrCxCTkwbf36ye1I67zVbxZg0Jag0 gHSUVm0Rw/5/4gbfaat4GPMee2Obw1zRsDgLxmoxKw/zijpj7rwlYxrlUDb+ph+pN7Y4 mVZ6khrQEzyHSR0KxZnjn5i8JXpGxAK4hteaqGzDqmTowWmsXMz+y4ml8pTk/K2pulAo UeKss3hrL7m54nP9zykMJLpI7nU4XuUtS4tYeMbncBImvGE/3VzvcUtvU/boWsHOYqb/ P39g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 23/28] KVM: arm64: Allow activating realms
Date: Fri, 27 Jan 2023 11:29:27 +0000
Message-Id: <20230127112932.38045-24-steven.price@arm.com>

Add the ioctl to activate a realm and set the static branch to enable
access to the realm functionality if the RMM is detected.
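A rough sketch of the corresponding VMM-side call, assuming the
KVM_CAP_ARM_RME enable-cap plumbing introduced earlier in this series
(the constants come from this RFC's uapi, not from upstream):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int example_activate_realm(int vm_fd)
{
        struct kvm_enable_cap cap;

        memset(&cap, 0, sizeof(cap));
        cap.cap = KVM_CAP_ARM_RME;                    /* added earlier in the series */
        cap.args[0] = KVM_CAP_ARM_RME_ACTIVATE_REALM; /* sub-command handled here */

        return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}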
Signed-off-by: Steven Price --- arch/arm64/kvm/rme.c | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c index 6ac50481a138..543e8d10f532 100644 --- a/arch/arm64/kvm/rme.c +++ b/arch/arm64/kvm/rme.c @@ -1000,6 +1000,20 @@ static int kvm_init_ipa_range_realm(struct kvm *kvm, return ret; } +static int kvm_activate_realm(struct kvm *kvm) +{ + struct realm *realm = &kvm->arch.realm; + + if (kvm_realm_state(kvm) != REALM_STATE_NEW) + return -EBUSY; + + if (rmi_realm_activate(virt_to_phys(realm->rd))) + return -ENXIO; + + WRITE_ONCE(realm->state, REALM_STATE_ACTIVE); + return 0; +} + /* Protects access to rme_vmid_bitmap */ static DEFINE_SPINLOCK(rme_vmid_lock); static unsigned long *rme_vmid_bitmap; @@ -1175,6 +1189,9 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap) r = kvm_populate_realm(kvm, &args); break; } + case KVM_CAP_ARM_RME_ACTIVATE_REALM: + r = kvm_activate_realm(kvm); + break; default: r = -EINVAL; break; @@ -1415,7 +1432,7 @@ int kvm_init_rme(void) WARN_ON(rmi_features(0, &rmm_feat_reg0)); - /* Future patch will enable static branch kvm_rme_is_available */ + static_branch_enable(&kvm_rme_is_available); return 0; } From patchwork Fri Jan 27 11:29:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49285 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp787605wrn; Fri, 27 Jan 2023 03:44:58 -0800 (PST) X-Google-Smtp-Source: AMrXdXssRt/tz3fF+LeCYJJ3yHjd+IE9Mm9EdjLfob7MpDsllxYiTMx6VUoOix1yKxthrClAEIgb X-Received: by 2002:a05:6a20:1bdc:b0:b6:3e6e:af94 with SMTP id cv28-20020a056a201bdc00b000b63e6eaf94mr35764903pzb.32.1674819898652; Fri, 27 Jan 2023 03:44:58 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819898; cv=none; d=google.com; s=arc-20160816; b=bY9tKzfNG7ASeF0VzyA0JvnUg/cLNLJYFgHM3NHHKTmKoNSuk6AorinQmXpF7z8erA menSoOoJ4BToS/DDgAdd5N0c6GkODyEwzfG381DDmy15xYX9st9rZKtBNSMXCFgfrRhd w1JCCYNgHZ2XD8+UfQUYl1g+illt4PmMmOEYIVkvi82VaFiaLR/qRY7Qkek6jEebR7cU l1HjhJQ4aodD/sA6qlAqJX6N0DfnD6ZGWUcNz70Fz/s02qz7OUlfMv+qbemUPo47Yxjb dQ0r2sdQVp1XecStpbORE2HRkUtthWFI3Bm/Z3V6eTmiQH9ZWcyVfaMx54KrD0EuS06Q BdhA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=lK6X2Smn4kQOFlX3SZprbm5a8P+9PDeKn3BlQl+LXLc=; b=S6X/SWx4RcpX57jP4WoQ9MzpWPYfKx+SNU2Mgb/b+EDcdQGUArXy+O3A1C1ExT9n1M abNpiw1Upe+KX9jGEJj2YEzLoZXmoxKlwrtMaAv8HYSLt+drFX5zWHi//2iWH37FIerf x4/9Zb33ItHxIaGzkR2k07M5U2tu8NPrsUojDgB83Cd3TLHZ/tOfzDYEiJ9XU5dAFGQl VqcSSiCO8ryNAxAg37mNKboSpXHqm/BPXTfkzdp0umBe7vFkCzD1GGRPDnudrxWf+IsE 9SFbf0eALuZ+IoqgTMcyJefeRpWApHiPVzcXabn0t7MYnb0j1wtSaoUMsmdxYpj6apM2 SDRA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id j5-20020a633c05000000b004ae2ceb2d39si4125879pga.805.2023.01.27.03.44.46; Fri, 27 Jan 2023 03:44:58 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232001AbjA0LmQ (ORCPT + 99 others); Fri, 27 Jan 2023 06:42:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60896 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233762AbjA0Llh (ORCPT ); Fri, 27 Jan 2023 06:41:37 -0500 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E60697A490; Fri, 27 Jan 2023 03:41:07 -0800 (PST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B09E016F3; Fri, 27 Jan 2023 03:31:25 -0800 (PST) Received: from e122027.cambridge.arm.com (e122027.cambridge.arm.com [10.1.35.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A98B13F64C; Fri, 27 Jan 2023 03:30:41 -0800 (PST) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev Subject: [RFC PATCH 24/28] arm64: rme: allow userspace to inject aborts Date: Fri, 27 Jan 2023 11:29:28 +0000 Message-Id: <20230127112932.38045-25-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230127112932.38045-1-steven.price@arm.com> References: <20230127112248.136810-1-suzuki.poulose@arm.com> <20230127112932.38045-1-steven.price@arm.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756175950248612897?= X-GMAIL-MSGID: =?utf-8?q?1756175950248612897?= From: Joey Gouly Extend KVM_SET_VCPU_EVENTS to support realms, where KVM cannot set the system registers, and the RMM must perform it on next REC entry. Signed-off-by: Joey Gouly Signed-off-by: Steven Price --- Documentation/virt/kvm/api.rst | 2 ++ arch/arm64/kvm/guest.c | 24 ++++++++++++++++++++++++ 2 files changed, 26 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index f1a59d6fb7fc..18a8ddaf31d8 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -1238,6 +1238,8 @@ User space may need to inject several types of events to the guest. Set the pending SError exception state for this VCPU. It is not possible to 'cancel' an Serror that has been made pending. +User space cannot inject SErrors into Realms. 
+ If the guest performed an access to I/O memory which could not be handled by userspace, for example because of missing instruction syndrome decode information or because there is no device mapped at the accessed IPA, then diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 93468bbfb50e..6e53e0ef2fba 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -851,6 +851,30 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, bool has_esr = events->exception.serror_has_esr; bool ext_dabt_pending = events->exception.ext_dabt_pending; + if (vcpu_is_rec(vcpu)) { + /* Cannot inject SError into a Realm. */ + if (serror_pending) + return -EINVAL; + + /* + * If a data abort is pending, set the flag and let the RMM + * inject an SEA when the REC is scheduled to be run. + */ + if (ext_dabt_pending) { + /* + * Can only inject SEA into a Realm if the previous exit + * was due to a data abort of an Unprotected IPA. + */ + if (!(vcpu->arch.rec.run->entry.flags & RMI_EMULATED_MMIO)) + return -EINVAL; + + vcpu->arch.rec.run->entry.flags &= ~RMI_EMULATED_MMIO; + vcpu->arch.rec.run->entry.flags |= RMI_INJECT_SEA; + } + + return 0; + } + if (serror_pending && has_esr) { if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) return -EINVAL; From patchwork Fri Jan 27 11:29:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49295 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp787797wrn; Fri, 27 Jan 2023 03:45:23 -0800 (PST) X-Google-Smtp-Source: AK7set94yTwBCtUGsbXiyInLw39SItvYXTabm+57n2x3DZ0/grOv30KMzGMSnA+VbFhdw6LdEXyO X-Received: by 2002:aa7:9e51:0:b0:593:40ba:3ee8 with SMTP id z17-20020aa79e51000000b0059340ba3ee8mr206869pfq.12.1674819923419; Fri, 27 Jan 2023 03:45:23 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819923; cv=none; d=google.com; s=arc-20160816; b=E/km518NHMLAd2yped5jt/mWPB2ghBnm7x6eFumvCJozFQZoy3VdqThzqTolkiZSOC iI8B76Rob98CPZgmFWxIT4rfPuT3+s3Xca5NhwXmIvsawkj5+6fIG+DuNPeFtjrBvaDD 9rqC4JpgnuoylGRx+f5cIJagfYPodTYAY2oyZRO+Y7lThjdXPdraBu2VuOIwKq1lZ16h bpt/77TQP4p5z6pp3uoRj8FKqO6aauEXitDmt9GmURFv8zKda5/liPPy+1qW8TrKbisc 8RoRH9GByug4ad2SgvVEchgNxdvcSt88v2nZ0+EPor+oaptyTOhWBl8xKLNABzIaKa51 EhGA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=9Tl5yvJQsa5iuhfm2mfX7J+VsrVhLfl0rBPSM0/PMOU=; b=G2p7fu9Ixo0nCTzXQyy2BZVoaM6ypZk9FUtFR5T4qoxARHkeVpMbm8YY05kMmFHAZt DM7YndKsv3SJeJw3P7yMce1H8KWtRJlq7z8fRtabW1L8KcrmtqAmRNZEnOQZWPsY0Oxs 7OwMhArWVpiNlM2uwNbS6vxRMKJr6vQPwhDyWbLgpSfVlSGrdTq7igo/KQQqJa49nx8h p94ynFpmptenIY7omp/uel1GEkWNUELJbVssY7LZsVY7Bd5k1xRD5s7PzFjpvwTiEeF0 +JR3lFUCT8tqw2YCrvailK26Hq6GlDZ9MKfxpPTg/fJLy1U1NvFxvmQuhdwsUu/nl4zK Jgdw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 25/28] arm64: rme: support RSI_HOST_CALL
Date: Fri, 27 Jan 2023 11:29:29 +0000
Message-Id: <20230127112932.38045-26-steven.price@arm.com>

From: Joey Gouly

Forward RSI_HOST_CALLs to KVM's HVC handler.
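The shape of the forwarding, as an illustration only (the example_* types
and EXAMPLE_NR_GPRS are stand-ins, not the kernel structures): the
guest's GPRs from the REC exit are fed through the normal hypercall
handler and the results are placed back for the next REC entry:

#define EXAMPLE_NR_GPRS 7   /* number of forwarded GPRs; illustrative value */

struct example_rec_run {
        unsigned long exit_gprs[EXAMPLE_NR_GPRS];   /* written by the RMM on exit  */
        unsigned long entry_gprs[EXAMPLE_NR_GPRS];  /* read by the RMM on next run */
};

static void example_forward_host_call(struct example_rec_run *run,
                                      unsigned long regs[EXAMPLE_NR_GPRS],
                                      int (*hvc_handler)(unsigned long *gprs))
{
        int i;

        for (i = 0; i < EXAMPLE_NR_GPRS; i++)
                regs[i] = run->exit_gprs[i];        /* expose the call's arguments */

        if (hvc_handler(regs) < 0)
                regs[0] = ~0UL;                     /* SMCCC "not supported" */

        for (i = 0; i < EXAMPLE_NR_GPRS; i++)
                run->entry_gprs[i] = regs[i];       /* results for the next entry */
}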
Signed-off-by: Joey Gouly Signed-off-by: Steven Price --- arch/arm64/kvm/rme-exit.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c index 15a4ff3517db..fcdc87e8f6bc 100644 --- a/arch/arm64/kvm/rme-exit.c +++ b/arch/arm64/kvm/rme-exit.c @@ -4,6 +4,7 @@ */ #include +#include #include #include @@ -98,6 +99,29 @@ static int rec_exit_ripas_change(struct kvm_vcpu *vcpu) return 1; } +static int rec_exit_host_call(struct kvm_vcpu *vcpu) +{ + int ret, i; + struct rec *rec = &vcpu->arch.rec; + + vcpu->stat.hvc_exit_stat++; + + for (i = 0; i < REC_RUN_GPRS; i++) + vcpu_set_reg(vcpu, i, rec->run->exit.gprs[i]); + + ret = kvm_hvc_call_handler(vcpu); + + if (ret < 0) { + vcpu_set_reg(vcpu, 0, ~0UL); + ret = 1; + } + + for (i = 0; i < REC_RUN_GPRS; i++) + rec->run->entry.gprs[i] = vcpu_get_reg(vcpu, i); + + return ret; +} + static void update_arch_timer_irq_lines(struct kvm_vcpu *vcpu) { struct rec *rec = &vcpu->arch.rec; @@ -159,6 +183,8 @@ int handle_rme_exit(struct kvm_vcpu *vcpu, int rec_run_ret) return rec_exit_psci(vcpu); case RMI_EXIT_RIPAS_CHANGE: return rec_exit_ripas_change(vcpu); + case RMI_EXIT_HOST_CALL: + return rec_exit_host_call(vcpu); } kvm_pr_unimpl("Unsupported exit reason: %u\n", From patchwork Fri Jan 27 11:29:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 49272 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp787044wrn; Fri, 27 Jan 2023 03:43:16 -0800 (PST) X-Google-Smtp-Source: AMrXdXuI8nGa87lbUlY/0ckS+DiLM2USQ9VUPv01UzguPzXHYyMw+tS4uEY8deORNE5iW+3JJQkx X-Received: by 2002:a05:6a00:2986:b0:58d:a7a7:580f with SMTP id cj6-20020a056a00298600b0058da7a7580fmr40123551pfb.19.1674819796060; Fri, 27 Jan 2023 03:43:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674819796; cv=none; d=google.com; s=arc-20160816; b=qFxbdcdLKaSMPpfoWr2OjYu2z6IUW/UKmevCodTf0tXJkNhZJhQpJboljGzEM0ivsO stlr0+p/azwKdJTxA+xLrd0XOpeRbfu3sO06+YK9s4dX3MbQ0DHIYvscHpco+2pv8kcK rbh07x5RfZOOc6HvbZHNpTOC0O0IG/Bf7KZKlVfEzoDRXWdTTaeSetyTaNQUELHj+jSI MFSbdJNgbbjsz543RkVn7X70ecZjLOJsk5U5Thff0k1Nb1+2MywXSp+pYN+nPk3PbjiY B4vbk+0z0Jc4sKcbn341x+KKGvzJmagEXZg1hQ2ArcLnHD9KOkZLpKW6N79K9CUt14Ua 9Dig== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=Q3tuNIrHgJSG2AM15HwAdFmeQAndp5O7eeX42xHiGeY=; b=0W+5WOcKfhky2MBoUNaQEOpU7GmPvke1EkciWJVHmbS7hN404XJyju4MtPmrrwnqWZ nizMQkUNXx2p73TzKE43XblOllAUNyQ9G6E+tivltHYzMQhHeojGEwQgQpFecxlCYn6l WWYmc+pBSEuVZY5vuwymGuHBUpcgSArY8XQWgGlm9XRA/rMbMU62a8tuB0RGInJwhyMM 3TG0+WF5rHnFrnZOMevkPWNTHh1KpIp97TbGkD7a3D2xBmKFoc/gKLlGOTYBNRdwVk6v HIi6CWC8LhEQSJgJyfOJkXysbGTL5q9AIia+dzjXJRnGIOXJKz8BCKyiP7nwOH++8s4R U5tw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RFC PATCH 26/28] arm64: rme: Allow checking SVE on VM instance
Date: Fri, 27 Jan 2023 11:29:30 +0000
Message-Id: <20230127112932.38045-27-steven.price@arm.com>

From: Suzuki K Poulose

Given that different types of VMs are now supported, check SVE support
for the specific VM instance so that the status is reported accurately.
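Illustration of the user-visible effect: the system-wide and per-VM
answers to KVM_CHECK_EXTENSION can now differ, so a VMM configuring a
realm should prefer the VM fd. A minimal sketch:

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static bool example_vm_supports_sve(int vm_fd)
{
        /*
         * For a realm this reflects what the RMM offers, which can
         * differ from the host-wide answer queried on /dev/kvm.
         */
        return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
}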
Signed-off-by: Suzuki K Poulose
Signed-off-by: Steven Price
---
 arch/arm64/include/asm/kvm_rme.h | 2 ++
 arch/arm64/kvm/arm.c             | 5 ++++-
 arch/arm64/kvm/rme.c             | 7 ++++++-
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 2254e28c855e..68e99e5107bc 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -40,6 +40,8 @@ struct rec {
 int kvm_init_rme(void);
 u32 kvm_realm_ipa_limit(void);

+bool kvm_rme_supports_sve(void);
+
 int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
 int kvm_init_realm_vm(struct kvm *kvm);
 void kvm_destroy_realm(struct kvm *kvm);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 645df5968e1e..1d0b8ac7314f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -326,7 +326,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = get_kvm_ipa_limit();
 		break;
 	case KVM_CAP_ARM_SVE:
-		r = system_supports_sve();
+		if (kvm && kvm_is_realm(kvm))
+			r = kvm_rme_supports_sve();
+		else
+			r = system_supports_sve();
 		break;
 	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
 	case KVM_CAP_ARM_PTRAUTH_GENERIC:
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 543e8d10f532..6ae7871aa6ed 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -49,6 +49,11 @@ static bool rme_supports(unsigned long feature)
 	return !!u64_get_bits(rmm_feat_reg0, feature);
 }

+bool kvm_rme_supports_sve(void)
+{
+	return rme_supports(RMI_FEATURE_REGISTER_0_SVE_EN);
+}
+
 static int rmi_check_version(void)
 {
 	struct arm_smccc_res res;
@@ -1104,7 +1109,7 @@ static int config_realm_sve(struct realm *realm,
 	int max_sve_vq = u64_get_bits(rmm_feat_reg0,
 				      RMI_FEATURE_REGISTER_0_SVE_VL);

-	if (!rme_supports(RMI_FEATURE_REGISTER_0_SVE_EN))
+	if (!kvm_rme_supports_sve())
 		return -EINVAL;

 	if (cfg->sve_vq > max_sve_vq)
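Because the capability is now VM-scoped, a VMM should query
KVM_CAP_ARM_SVE on the VM file descriptor rather than only on the global
/dev/kvm fd: a realm VM may report 0 even when the host system supports
SVE. A minimal userspace sketch, assuming vm_fd came from KVM_CREATE_VM
and with error handling omitted:

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Non-zero if this particular VM (e.g. a realm) can use SVE. */
  static int vm_supports_sve(int vm_fd)
  {
  	/* KVM_CHECK_EXTENSION on a VM fd returns the per-VM answer. */
  	return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
  }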
From patchwork Fri Jan 27 11:29:31 2023
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 49355
From: Steven Price
Subject: [RFC PATCH 27/28] arm64: RME: Always use 4k pages for realms
Date: Fri, 27 Jan 2023 11:29:31 +0000
Message-Id: <20230127112932.38045-28-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>

Always split huge pages down to 4k to avoid the problems of managing
huge mappings for realms. There are currently two issues:

1. The uABI for the VMM allows populating memory on 4k boundaries even
   if the underlying allocator (e.g. hugetlbfs) is using a larger page
   size. Using a memfd for private allocations will push this issue onto
   the VMM, as it will need to respect the granularity of the allocator.

2. The guest is able to request arbitrary ranges to be remapped as
   shared.
   Again, with a memfd approach it will be up to the VMM to deal with
   the complexity and either overmap (keep the huge mapping and add an
   additional, overlapping shared mapping) or reject the request as
   invalid because a huge page allocator is in use.

For now, just break everything down to 4k pages in the RMM-controlled
stage 2.

Signed-off-by: Steven Price
---
 arch/arm64/kvm/mmu.c | 4 ++++
 arch/arm64/kvm/rme.c | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 5417c273861b..b5fc8d8f7049 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1278,6 +1278,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
+	} else if (kvm_is_realm(kvm)) {
+		// Force PTE level mappings for realms
+		force_pte = true;
+		vma_shift = PAGE_SHIFT;
 	} else {
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 6ae7871aa6ed..1eb76cbee267 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -730,7 +730,9 @@ static int populate_par_region(struct kvm *kvm,
 			break;
 		}

-		if (is_vm_hugetlb_page(vma))
+		// FIXME: To avoid the overmapping issue (see below comment)
+		// force the use of 4k pages
+		if (is_vm_hugetlb_page(vma) && 0)
 			vma_shift = huge_page_shift(hstate_vma(vma));
 		else
 			vma_shift = PAGE_SHIFT;
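For reference on issue 1, the constraint that blocks re-enabling huge
mappings is the usual alignment rule for block mappings. The helper
below is hypothetical (not part of this patch, and the name is made up);
it only sketches, using kernel macros such as IS_ALIGNED and PAGE_SHIFT,
the kind of check that would have to pass before huge_page_shift() could
safely be used again in populate_par_region():

  /*
   * Hypothetical sketch: a block mapping of size (1 << huge_shift) is
   * only safe if the IPA, the userspace address and the remaining length
   * are all suitably aligned/large; otherwise fall back to PAGE_SHIFT,
   * which is what the patch does unconditionally for now.
   */
  static unsigned int par_mapping_shift(unsigned long ipa, unsigned long hva,
  				      unsigned long remaining,
  				      unsigned int huge_shift)
  {
  	unsigned long block = 1UL << huge_shift;

  	if (IS_ALIGNED(ipa, block) && IS_ALIGNED(hva, block) &&
  	    remaining >= block)
  		return huge_shift;

  	return PAGE_SHIFT;
  }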
From patchwork Fri Jan 27 11:29:32 2023
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 49271
From: Steven Price
Subject: [RFC PATCH 28/28] HACK: Accept prototype RMI versions
Date: Fri, 27 Jan 2023 11:29:32 +0000
Message-Id: <20230127112932.38045-29-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>

The upstream RMM currently advertises the major version of an internal
prototype (v56.0) rather than the expected version from the RMM
architecture specification (v1.0). Add a config option to enable support
for the prototype RMI v56.0.
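In effect the version check becomes "accept the specification version,
and additionally the prototype version when the new Kconfig option is
enabled". A sketch of that logic factored into a helper; the helper name
is illustrative and not taken from the patch, which open-codes the check
in rmi_check_version():

  /* Illustrative helper only: the patch itself open-codes this check. */
  static bool rmi_version_accepted(unsigned long version_major)
  {
  	if (version_major == RMI_ABI_MAJOR_VERSION)
  		return true;

  #ifdef PROTOTYPE_RMI_ABI_MAJOR_VERSION
  	/* CONFIG_RME_USE_PROTOTYPE_HACKS: tolerate the v56.x prototype */
  	if (version_major == PROTOTYPE_RMI_ABI_MAJOR_VERSION)
  		return true;
  #endif

  	return false;
  }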
Signed-off-by: Steven Price
---
 arch/arm64/include/asm/rmi_smc.h | 7 +++++++
 arch/arm64/kvm/Kconfig           | 8 ++++++++
 arch/arm64/kvm/rme.c             | 8 ++++++++
 3 files changed, 23 insertions(+)

diff --git a/arch/arm64/include/asm/rmi_smc.h b/arch/arm64/include/asm/rmi_smc.h
index 16ff65090f3a..d6bbd7d92b8f 100644
--- a/arch/arm64/include/asm/rmi_smc.h
+++ b/arch/arm64/include/asm/rmi_smc.h
@@ -6,6 +6,13 @@
 #ifndef __ASM_RME_SMC_H
 #define __ASM_RME_SMC_H

+#ifdef CONFIG_RME_USE_PROTOTYPE_HACKS
+
+// Allow the prototype RMI version
+#define PROTOTYPE_RMI_ABI_MAJOR_VERSION 56
+
+#endif /* CONFIG_RME_USE_PROTOTYPE_HACKS */
+
 #include

 #define SMC_RxI_CALL(func)				\
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 05da3c8f7e88..13858a5047fd 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -58,6 +58,14 @@ config NVHE_EL2_DEBUG

 	  If unsure, say N.

+config RME_USE_PROTOTYPE_HACKS
+	bool "Allow RMM prototype version numbers"
+	default y
+	help
+	  For compatibility with the current RMM code allow version
+	  numbers from a prototype implementation as well as the expected
+	  version number from the RMM specification.
+
 config PROTECTED_NVHE_STACKTRACE
 	bool "Protected KVM hypervisor stacktraces"
 	depends on NVHE_EL2_DEBUG
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 1eb76cbee267..894060635226 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -67,6 +67,14 @@ static int rmi_check_version(void)
 	version_major = RMI_ABI_VERSION_GET_MAJOR(res.a0);
 	version_minor = RMI_ABI_VERSION_GET_MINOR(res.a0);

+#ifdef PROTOTYPE_RMI_ABI_MAJOR_VERSION
+	// Support the prototype
+	if (version_major == PROTOTYPE_RMI_ABI_MAJOR_VERSION) {
+		kvm_err("Using prototype RMM support (version %d.%d)\n",
+			version_major, version_minor);
+		return 0;
+	}
+#endif
 	if (version_major != RMI_ABI_MAJOR_VERSION) {
 		kvm_err("Unsupported RMI ABI (version %d.%d) we support %d\n",
 			version_major, version_minor,