From patchwork Fri Sep 15 12:48:38 2023
X-Patchwork-Submitter: Kristina Martsenko
X-Patchwork-Id: 140431
From: Kristina Martsenko <kristina.martsenko@arm.com>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
    Zenghui Yu, Catalin Marinas, Will Deacon, Vladimir Murzin,
    Colton Lewis, linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] KVM: arm64: Configure HCRX_EL2 dynamically
Date: Fri, 15 Sep 2023 13:48:38 +0100
Message-Id: <20230915124840.474888-2-kristina.martsenko@arm.com>
In-Reply-To: <20230915124840.474888-1-kristina.martsenko@arm.com>
References: <20230915124840.474888-1-kristina.martsenko@arm.com>

At the moment the HCRX_EL2 system register is always initialized to
HCRX_GUEST_FLAGS when running a guest. Instead, choose the configuration
at vcpu reset time and save it in the vcpu struct, similarly to how
HCR_EL2 is set up. This will be needed in a subsequent change to
configure the register based on CPU features detected at runtime.
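In short, the resulting flow looks roughly like this (a condensed sketch
rather than the literal hunks below; activate_traps stands in for
__activate_traps_common and the nested-virt adjustments are omitted):

	/* Reset time: pick the per-vcpu value once. */
	static inline void vcpu_reset_hcrx(struct kvm_vcpu *vcpu)
	{
		vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
	}

	/* Guest entry: program the saved value, as already done for hcr_el2. */
	static void activate_traps(struct kvm_vcpu *vcpu)
	{
		if (cpus_have_final_cap(ARM64_HAS_HCX))
			write_sysreg_s(vcpu->arch.hcrx_el2, SYS_HCRX_EL2);
	}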
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---
 arch/arm64/include/asm/kvm_emulate.h    | 5 +++++
 arch/arm64/include/asm/kvm_host.h       | 1 +
 arch/arm64/kvm/arm.c                    | 1 +
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 1 +
 arch/arm64/kvm/hyp/nvhe/pkvm.c          | 1 +
 6 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 3d6725ff0bf6..64ea27e6deb1 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -134,6 +134,11 @@ static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
 	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
 }
 
+static inline void vcpu_reset_hcrx(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
+}
+
 static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.vsesr_el2;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index af06ccb7ee34..2764748756a7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -487,6 +487,7 @@ struct kvm_vcpu_arch {
 
 	/* Values of trap registers for the guest. */
 	u64 hcr_el2;
+	u64 hcrx_el2;
 	u64 mdcr_el2;
 	u64 cptr_el2;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4866b3f7b4ea..fa359fd6fca9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1318,6 +1318,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	}
 
 	vcpu_reset_hcr(vcpu);
+	vcpu_reset_hcrx(vcpu);
 	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
 
 	/*
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 9cfe6bd1dbe4..119b75344505 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -198,7 +198,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 
 	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
-		u64 hcrx = HCRX_GUEST_FLAGS;
+		u64 hcrx = vcpu->arch.hcrx_el2;
 
 		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
 			u64 clr = 0, set = 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 857d9bc04fd4..26692083b80b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -35,6 +35,7 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	hyp_vcpu->vcpu.arch.hw_mmu	= host_vcpu->arch.hw_mmu;
 
 	hyp_vcpu->vcpu.arch.hcr_el2	= host_vcpu->arch.hcr_el2;
+	hyp_vcpu->vcpu.arch.hcrx_el2	= host_vcpu->arch.hcrx_el2;
 	hyp_vcpu->vcpu.arch.mdcr_el2	= host_vcpu->arch.mdcr_el2;
 	hyp_vcpu->vcpu.arch.cptr_el2	= host_vcpu->arch.cptr_el2;
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 8033ef353a5d..8c8b37525dce 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -188,6 +188,7 @@ static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
 	/* Clear res0 and set res1 bits to trap potential new features. */
 	vcpu->arch.hcr_el2 &= ~(HCR_RES0);
+	vcpu->arch.hcrx_el2 &= ~(HCRX_EL2_RES0);
 	vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
 
 	if (!has_hvhe()) {
 		vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;

From patchwork Fri Sep 15 12:48:39 2023
X-Patchwork-Submitter: Kristina Martsenko
X-Patchwork-Id: 140414
From: Kristina Martsenko <kristina.martsenko@arm.com>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
    Zenghui Yu, Catalin Marinas, Will Deacon, Vladimir Murzin,
    Colton Lewis, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] KVM: arm64: Add handler for MOPS exceptions
Date: Fri, 15 Sep 2023 13:48:39 +0100
Message-Id: <20230915124840.474888-3-kristina.martsenko@arm.com>
In-Reply-To: <20230915124840.474888-1-kristina.martsenko@arm.com>
References: <20230915124840.474888-1-kristina.martsenko@arm.com>

An Armv8.8 FEAT_MOPS main or epilogue instruction will take an exception
if executed on a CPU with a different MOPS implementation option (A or B)
than the CPU where the preceding prologue instruction ran. In this case
the OS exception handler is expected to reset the registers and restart
execution from the prologue instruction. A KVM guest may use the
instructions at EL1 at times when the guest is not able to handle the
exception, expecting that the instructions will only run on one CPU
(e.g. when running UEFI boot services in the guest).

As KVM may reschedule the guest between different types of CPUs at any
time (on an asymmetric system), it needs to also handle the resulting
exception itself in case the guest is not able to. A similar situation
will also occur in the future when live migrating a guest from one type
of CPU to another.

Add handling for the MOPS exception to KVM. The handling can be shared
with the EL0 exception handler, as the logic and register layouts are
the same. The exception can be handled right after exiting a guest,
which avoids the cost of returning to the host exit handler.

Similarly to the EL0 exception handler, in case the main or epilogue
instruction is being single stepped, it makes sense to finish the step
before executing the prologue instruction, so advance the single step
state machine.
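For context, a FEAT_MOPS copy is a three-instruction sequence along the
following lines (illustrative sketch only, not code from this series; it
assumes an assembler with MOPS support):

	static void mops_memcpy(void *dst, const void *src, unsigned long len)
	{
		/* prologue / main / epilogue; the CPU picks option A or B at CPYP */
		asm volatile("cpyp	[%0]!, [%1]!, %2!\n"
			     "cpym	[%0]!, [%1]!, %2!\n"
			     "cpye	[%0]!, [%1]!, %2!"
			     : "+r" (dst), "+r" (src), "+r" (len) : : "memory");
	}

If CPYM or CPYE then runs on a CPU implementing the other option, it takes
the exception handled here: the handler converts the registers back to the
prologue format and rewinds the PC so the sequence restarts at CPYP.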
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---
 arch/arm64/include/asm/traps.h          | 54 ++++++++++++++++++++++++-
 arch/arm64/kernel/traps.c               | 48 +--------------------
 arch/arm64/kvm/hyp/include/hyp/switch.h | 17 ++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +
 arch/arm64/kvm/hyp/vhe/switch.c         |  1 +
 5 files changed, 73 insertions(+), 49 deletions(-)

diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
index d66dfb3a72dd..eefe766d6161 100644
--- a/arch/arm64/include/asm/traps.h
+++ b/arch/arm64/include/asm/traps.h
@@ -9,10 +9,9 @@
 #include <linux/list.h>
 #include <asm/esr.h>
+#include <asm/ptrace.h>
 #include <asm/sections.h>
 
-struct pt_regs;
-
 #ifdef CONFIG_ARMV8_DEPRECATED
 bool try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn);
 #else
@@ -101,4 +100,55 @@ static inline unsigned long arm64_ras_serror_get_severity(unsigned long esr)
 
 bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned long esr);
 void __noreturn arm64_serror_panic(struct pt_regs *regs, unsigned long esr);
+
+static inline void arm64_mops_reset_regs(struct user_pt_regs *regs, unsigned long esr)
+{
+	bool wrong_option = esr & ESR_ELx_MOPS_ISS_WRONG_OPTION;
+	bool option_a = esr & ESR_ELx_MOPS_ISS_OPTION_A;
+	int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr);
+	int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr);
+	int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr);
+	unsigned long dst, src, size;
+
+	dst = regs->regs[dstreg];
+	src = regs->regs[srcreg];
+	size = regs->regs[sizereg];
+
+	/*
+	 * Put the registers back in the original format suitable for a
+	 * prologue instruction, using the generic return routine from the
+	 * Arm ARM (DDI 0487I.a) rules CNTMJ and MWFQH.
+	 */
+	if (esr & ESR_ELx_MOPS_ISS_MEM_INST) {
+		/* SET* instruction */
+		if (option_a ^ wrong_option) {
+			/* Format is from Option A; forward set */
+			regs->regs[dstreg] = dst + size;
+			regs->regs[sizereg] = -size;
+		}
+	} else {
+		/* CPY* instruction */
+		if (!(option_a ^ wrong_option)) {
+			/* Format is from Option B */
+			if (regs->pstate & PSR_N_BIT) {
+				/* Backward copy */
+				regs->regs[dstreg] = dst - size;
+				regs->regs[srcreg] = src - size;
+			}
+		} else {
+			/* Format is from Option A */
+			if (size & BIT(63)) {
+				/* Forward copy */
+				regs->regs[dstreg] = dst + size;
+				regs->regs[srcreg] = src + size;
+				regs->regs[sizereg] = -size;
+			}
+		}
+	}
+
+	if (esr & ESR_ELx_MOPS_ISS_FROM_EPILOGUE)
+		regs->pc -= 8;
+	else
+		regs->pc -= 4;
+}
 #endif
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 8b70759cdbb9..ede65a20e7dc 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -516,53 +516,7 @@ void do_el1_fpac(struct pt_regs *regs, unsigned long esr)
 
 void do_el0_mops(struct pt_regs *regs, unsigned long esr)
 {
-	bool wrong_option = esr & ESR_ELx_MOPS_ISS_WRONG_OPTION;
-	bool option_a = esr & ESR_ELx_MOPS_ISS_OPTION_A;
-	int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr);
-	int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr);
-	int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr);
-	unsigned long dst, src, size;
-
-	dst = pt_regs_read_reg(regs, dstreg);
-	src = pt_regs_read_reg(regs, srcreg);
-	size = pt_regs_read_reg(regs, sizereg);
-
-	/*
-	 * Put the registers back in the original format suitable for a
-	 * prologue instruction, using the generic return routine from the
-	 * Arm ARM (DDI 0487I.a) rules CNTMJ and MWFQH.
-	 */
-	if (esr & ESR_ELx_MOPS_ISS_MEM_INST) {
-		/* SET* instruction */
-		if (option_a ^ wrong_option) {
-			/* Format is from Option A; forward set */
-			pt_regs_write_reg(regs, dstreg, dst + size);
-			pt_regs_write_reg(regs, sizereg, -size);
-		}
-	} else {
-		/* CPY* instruction */
-		if (!(option_a ^ wrong_option)) {
-			/* Format is from Option B */
-			if (regs->pstate & PSR_N_BIT) {
-				/* Backward copy */
-				pt_regs_write_reg(regs, dstreg, dst - size);
-				pt_regs_write_reg(regs, srcreg, src - size);
-			}
-		} else {
-			/* Format is from Option A */
-			if (size & BIT(63)) {
-				/* Forward copy */
-				pt_regs_write_reg(regs, dstreg, dst + size);
-				pt_regs_write_reg(regs, srcreg, src + size);
-				pt_regs_write_reg(regs, sizereg, -size);
-			}
-		}
-	}
-
-	if (esr & ESR_ELx_MOPS_ISS_FROM_EPILOGUE)
-		regs->pc -= 8;
-	else
-		regs->pc -= 4;
+	arm64_mops_reset_regs(&regs->user_regs, esr);
 
 	/*
 	 * If single stepping then finish the step before executing the
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 119b75344505..c1db9ce0db07 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -30,6 +30,7 @@
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>
+#include <asm/traps.h>
 
 struct kvm_exception_table_entry {
 	int insn, fixup;
@@ -265,6 +266,22 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
 	return __get_fault_info(vcpu->arch.fault.esr_el2, &vcpu->arch.fault);
 }
 
+static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
+	arm64_mops_reset_regs(vcpu_gp_regs(vcpu), vcpu->arch.fault.esr_el2);
+	write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
+
+	/*
+	 * Finish potential single step before executing the prologue
+	 * instruction.
+	 */
+	*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
+	write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
+
+	return true;
+}
+
 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index c353a06ee7e6..c50f8459e4fc 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -192,6 +192,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
 	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
 	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
+	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
 };
 
 static const exit_handler_fn pvm_exit_handlers[] = {
@@ -203,6 +204,7 @@ static const exit_handler_fn pvm_exit_handlers[] = {
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
 	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
 	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
+	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
 };
 
 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 6537f58b1a8c..796202f2e08f 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -126,6 +126,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
 	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
 	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
+	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
 };
 
 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)

From patchwork Fri Sep 15 12:48:40 2023
X-Patchwork-Submitter: Kristina Martsenko
X-Patchwork-Id: 140536
From: Kristina Martsenko <kristina.martsenko@arm.com>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
    Zenghui Yu, Catalin Marinas, Will Deacon, Vladimir Murzin,
    Colton Lewis, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] KVM: arm64: Expose MOPS instructions to guests
Date: Fri, 15 Sep 2023 13:48:40 +0100
Message-Id: <20230915124840.474888-4-kristina.martsenko@arm.com>
In-Reply-To: <20230915124840.474888-1-kristina.martsenko@arm.com>
References: <20230915124840.474888-1-kristina.martsenko@arm.com>

Expose the Armv8.8 FEAT_MOPS feature to guests in the ID register and
allow the MOPS instructions to be run in a guest. Only expose MOPS if
the whole system supports it. Note, it is expected that guests do not
use these instructions on MMIO, similarly to other instructions where
ESR_EL2.ISV==0 such as LDP/STP.
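With the ID register field no longer masked, a guest can detect the
feature in the usual way. A rough guest-side sketch (not part of this
series; FIELD_GET() and the generated ID_AA64ISAR2_EL1_MOPS_MASK
definition are assumed):

	static bool guest_has_mops(void)
	{
		u64 isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);

		/* MOPS is ID_AA64ISAR2_EL1 bits [19:16]; nonzero means present */
		return FIELD_GET(ID_AA64ISAR2_EL1_MOPS_MASK, isar2) != 0;
	}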
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---
 arch/arm64/include/asm/kvm_emulate.h           | 5 +++++
 arch/arm64/kvm/hyp/include/nvhe/fixed_config.h | 3 ++-
 arch/arm64/kvm/sys_regs.c                      | 1 -
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 64ea27e6deb1..ba46004be0e8 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -137,6 +137,11 @@ static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
 static inline void vcpu_reset_hcrx(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
+
+	if (cpus_have_final_cap(ARM64_HAS_MOPS)) {
+		vcpu->arch.hcrx_el2 |= HCRX_EL2_MSCEn;
+		vcpu->arch.hcrx_el2 |= HCRX_EL2_MCE2;
+	}
 }
 
 static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
index 37440e1dda93..e91922daa8ca 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
@@ -197,7 +197,8 @@
 #define PVM_ID_AA64ISAR2_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) \
+	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | \
+	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) \
 	)
 
 u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e92ec810d449..153baf2f72cb 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1338,7 +1338,6 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 				  ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3));
 		if (!cpus_have_final_cap(ARM64_HAS_WFXT))
 			val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS);
 		break;
 	case SYS_ID_AA64MMFR2_EL1:
 		val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;