From patchwork Thu Oct 19 16:55:03 2023
X-Patchwork-Submitter: James Clark
X-Patchwork-Id: 155655
From: James Clark
To: coresight@lists.linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, maz@kernel.org, suzuki.poulose@arm.com
Cc: broonie@kernel.org, James Clark, Oliver Upton, James Morse,
	Zenghui Yu, Catalin Marinas, Will Deacon, Mike Leach, Leo Yan,
	Alexander Shishkin, Anshuman Khandual, Rob Herring, Jintack Lim,
	Fuad Tabba, Akihiko Odaki, Joey Gouly, linux-kernel@vger.kernel.org
Subject: [PATCH v3 5/6] arm64: KVM: Write TRFCR value on guest switch with nVHE
Date: Thu, 19 Oct 2023 17:55:03 +0100
Message-Id: <20231019165510.1966367-6-james.clark@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231019165510.1966367-1-james.clark@arm.com>
References: <20231019165510.1966367-1-james.clark@arm.com>

The guest value for TRFCR requested by the Coresight driver is saved in
sysregs[TRFCR_EL1]. On guest switch this value needs to be written to
the register. Currently TRFCR is only modified when we want to disable
trace completely in guests due to an issue with TRBE.
Expand the __debug_save_trace() function to always write to the register
if a different value for guests is required, but also keep the existing
TRBE disable behavior if that's required.

The TRFCR restore function remains functionally the same, except a value
of 0 doesn't mean "don't restore" anymore. Now that we save both guest
and host values, the register is restored any time the guest and host
values differ.

Signed-off-by: James Clark
Reviewed-by: Suzuki K Poulose
---
 arch/arm64/include/asm/kvm_hyp.h   |  6 ++-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c | 68 ++++++++++++++++++------------
 arch/arm64/kvm/hyp/nvhe/switch.c   |  4 +-
 3 files changed, 48 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 52ac90d419e7..6286e580696e 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -103,8 +103,10 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
 void __debug_switch_to_host(struct kvm_vcpu *vcpu);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
-void __debug_save_host_buffers_nvhe(struct kvm_cpu_context *host_ctxt);
-void __debug_restore_host_buffers_nvhe(struct kvm_cpu_context *host_ctxt);
+void __debug_save_host_buffers_nvhe(struct kvm_cpu_context *host_ctxt,
+				    struct kvm_cpu_context *guest_ctxt);
+void __debug_restore_host_buffers_nvhe(struct kvm_cpu_context *host_ctxt,
+				       struct kvm_cpu_context *guest_ctxt);
 #endif
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index f389ee59788c..6174f710948e 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -51,42 +51,57 @@ static void __debug_restore_spe(struct kvm_cpu_context *host_ctxt)
 	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMSCR_EL1), SYS_PMSCR_EL1);
 }
 
-static void __debug_save_trace(struct kvm_cpu_context *host_ctxt)
+/*
+ * Save TRFCR and disable trace completely if TRBE is being used, otherwise
+ * apply required guest TRFCR value.
+ */
+static void __debug_save_trace(struct kvm_cpu_context *host_ctxt,
+			       struct kvm_cpu_context *guest_ctxt)
 {
-	ctxt_sys_reg(host_ctxt, TRFCR_EL1) = 0;
+	ctxt_sys_reg(host_ctxt, TRFCR_EL1) = read_sysreg_s(SYS_TRFCR_EL1);
 
 	/* Check if the TRBE is enabled */
-	if (!(read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_EL1_E))
-		return;
-	/*
-	 * Prohibit trace generation while we are in guest.
-	 * Since access to TRFCR_EL1 is trapped, the guest can't
-	 * modify the filtering set by the host.
-	 */
-	ctxt_sys_reg(host_ctxt, TRFCR_EL1) = read_sysreg_s(SYS_TRFCR_EL1);
-	write_sysreg_s(0, SYS_TRFCR_EL1);
-	isb();
-	/* Drain the trace buffer to memory */
-	tsb_csync();
+	if (vcpu_get_flag(host_ctxt->__hyp_running_vcpu, DEBUG_STATE_SAVE_TRBE) &&
+	    (read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_EL1_E)) {
+		/*
+		 * Prohibit trace generation while we are in guest. Since access
+		 * to TRFCR_EL1 is trapped, the guest can't modify the filtering
+		 * set by the host.
+		 */
+		ctxt_sys_reg(guest_ctxt, TRFCR_EL1) = 0;
+		write_sysreg_s(0, SYS_TRFCR_EL1);
+		isb();
+		/* Drain the trace buffer to memory */
+		tsb_csync();
+	} else {
+		/*
+		 * Not using TRBE, so guest trace works. Apply the guest filters
+		 * provided by the Coresight driver, if different.
+		 */
+		if (ctxt_sys_reg(host_ctxt, TRFCR_EL1) !=
+		    ctxt_sys_reg(guest_ctxt, TRFCR_EL1))
+			write_sysreg_s(ctxt_sys_reg(guest_ctxt, TRFCR_EL1),
+				       SYS_TRFCR_EL1);
+	}
 }
 
-static void __debug_restore_trace(struct kvm_cpu_context *host_ctxt)
+static void __debug_restore_trace(struct kvm_cpu_context *host_ctxt,
+				  struct kvm_cpu_context *guest_ctxt)
 {
-	if (!ctxt_sys_reg(host_ctxt, TRFCR_EL1))
-		return;
-
 	/* Restore trace filter controls */
-	write_sysreg_s(ctxt_sys_reg(host_ctxt, TRFCR_EL1), SYS_TRFCR_EL1);
+	if (ctxt_sys_reg(host_ctxt, TRFCR_EL1) != ctxt_sys_reg(guest_ctxt, TRFCR_EL1))
+		write_sysreg_s(ctxt_sys_reg(host_ctxt, TRFCR_EL1), SYS_TRFCR_EL1);
 }
 
-void __debug_save_host_buffers_nvhe(struct kvm_cpu_context *host_ctxt)
+void __debug_save_host_buffers_nvhe(struct kvm_cpu_context *host_ctxt,
+				    struct kvm_cpu_context *guest_ctxt)
 {
 	/* Disable and flush SPE data generation */
 	if (vcpu_get_flag(host_ctxt->__hyp_running_vcpu, DEBUG_STATE_SAVE_SPE))
 		__debug_save_spe(host_ctxt);
-	/* Disable and flush Self-Hosted Trace generation */
-	if (vcpu_get_flag(host_ctxt->__hyp_running_vcpu, DEBUG_STATE_SAVE_TRBE))
-		__debug_save_trace(host_ctxt);
+
+	if (vcpu_get_flag(host_ctxt->__hyp_running_vcpu, DEBUG_STATE_SAVE_TRFCR))
+		__debug_save_trace(host_ctxt, guest_ctxt);
 }
 
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
@@ -94,12 +109,13 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 	__debug_switch_to_guest_common(vcpu);
 }
 
-void __debug_restore_host_buffers_nvhe(struct kvm_cpu_context *host_ctxt)
+void __debug_restore_host_buffers_nvhe(struct kvm_cpu_context *host_ctxt,
+				       struct kvm_cpu_context *guest_ctxt)
 {
 	if (vcpu_get_flag(host_ctxt->__hyp_running_vcpu, DEBUG_STATE_SAVE_SPE))
 		__debug_restore_spe(host_ctxt);
-	if (vcpu_get_flag(host_ctxt->__hyp_running_vcpu, DEBUG_STATE_SAVE_TRBE))
-		__debug_restore_trace(host_ctxt);
+	if (vcpu_get_flag(host_ctxt->__hyp_running_vcpu, DEBUG_STATE_SAVE_TRFCR))
+		__debug_restore_trace(host_ctxt, guest_ctxt);
 }
 
 void __debug_switch_to_host(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6b4b24ae077f..c7bea5cf672d 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -278,7 +278,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	 * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
 	 * before we load guest Stage1.
 	 */
-	__debug_save_host_buffers_nvhe(host_ctxt);
+	__debug_save_host_buffers_nvhe(host_ctxt, guest_ctxt);
 
 	/*
 	 * We're about to restore some new MMU state. Make sure
@@ -345,7 +345,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	 * This must come after restoring the host sysregs, since a non-VHE
 	 * system may enable SPE here and make use of the TTBRs.
 	 */
-	__debug_restore_host_buffers_nvhe(host_ctxt);
+	__debug_restore_host_buffers_nvhe(host_ctxt, guest_ctxt);
 
 	if (pmu_switch_needed)
 		__pmu_switch_to_host(vcpu);
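
For reference, the decision tree the patch implements on guest entry and exit
can be modelled in a few lines of plain C. The sketch below is only an
illustration, not kernel code: struct cpu_context, hw_trfcr, save_trace() and
restore_trace() are simplified stand-ins for kvm_cpu_context, the physical
TRFCR_EL1 register and the __debug_save_trace()/__debug_restore_trace()
helpers, and the TRBE state is reduced to a boolean.

/*
 * Standalone model of the TRFCR switch logic above. Everything here is a
 * simplified stand-in; it only mirrors the "write the register when the
 * host and guest values differ" behaviour.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cpu_context {
	uint64_t trfcr_el1;	/* saved/requested TRFCR value */
};

static uint64_t hw_trfcr;	/* models the physical TRFCR_EL1 register */

/* Guest entry: save the host value, then choose the guest view. */
static void save_trace(struct cpu_context *host, struct cpu_context *guest,
		       bool trbe_enabled)
{
	host->trfcr_el1 = hw_trfcr;

	if (trbe_enabled) {
		/* TRBE owns the buffer: trace must be disabled in the guest. */
		guest->trfcr_el1 = 0;
		hw_trfcr = 0;
	} else if (guest->trfcr_el1 != host->trfcr_el1) {
		/* Apply the guest filters requested by the driver. */
		hw_trfcr = guest->trfcr_el1;
	}
}

/* Guest exit: restore the host value only if it actually changed. */
static void restore_trace(struct cpu_context *host, struct cpu_context *guest)
{
	if (host->trfcr_el1 != guest->trfcr_el1)
		hw_trfcr = host->trfcr_el1;
}

int main(void)
{
	struct cpu_context host = { 0 }, guest = { .trfcr_el1 = 0x3 };

	hw_trfcr = 0x1;				/* host tracing enabled */
	save_trace(&host, &guest, false);	/* no TRBE: guest filters applied */
	printf("in guest:  TRFCR=%#llx\n", (unsigned long long)hw_trfcr);
	restore_trace(&host, &guest);
	printf("back host: TRFCR=%#llx\n", (unsigned long long)hw_trfcr);
	return 0;
}

Run as-is, the register takes the guest value 0x3 while "in the guest" and
reverts to the host value 0x1 on exit; if the two requested values already
match, neither path writes the register at all, which is the point of the
differ-then-write checks in the patch.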