From patchwork Fri Nov 25 04:05:59 2022
X-Patchwork-Submitter: "Yang, Weijiang"
X-Patchwork-Id: 25841
From: Yang Weijiang
To: seanjc@google.com, pbonzini@redhat.com, jmattson@google.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: like.xu.linux@gmail.com, kan.liang@linux.intel.com, wei.w.wang@intel.com, weijiang.yang@intel.com, Like Xu
Subject: [PATCH v2 10/15] KVM: x86/vmx: Check Arch LBR config when returning perf capabilities
Date: Thu, 24 Nov 2022 23:05:59 -0500
Message-Id: <20221125040604.5051-11-weijiang.yang@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20221125040604.5051-1-weijiang.yang@intel.com>
References: <20221125040604.5051-1-weijiang.yang@intel.com>

Two new bit fields, VM_EXIT_CLEAR_IA32_LBR_CTL and VM_ENTRY_LOAD_IA32_LBR_CTL,
are added to support guest Arch LBR. Both bits must be set to keep Arch LBR
working in both the guest and the host. Since Arch LBR is not supported in
nested guests, clear the two bits before running an L2 VM. In addition, clear
the LBR format bits in the reported perf capabilities when the host has Arch
LBR but the VMCS LBR_CTL controls are unavailable.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Yang Weijiang
---
 arch/x86/include/asm/vmx.h      |  2 ++
 arch/x86/kvm/vmx/capabilities.h |  5 +++++
 arch/x86/kvm/vmx/nested.c       |  8 ++++++++
 arch/x86/kvm/vmx/vmx.c          | 11 +++++++++++
 arch/x86/kvm/vmx/vmx.h          |  6 ++++--
 5 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 8502c068202c..59f7c6baffd2 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -102,6 +102,7 @@
 #define VM_EXIT_CLEAR_BNDCFGS			0x00800000
 #define VM_EXIT_PT_CONCEAL_PIP			0x01000000
 #define VM_EXIT_CLEAR_IA32_RTIT_CTL		0x02000000
+#define VM_EXIT_CLEAR_IA32_LBR_CTL		0x04000000
 
 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR	0x00036dff
 
@@ -115,6 +116,7 @@
 #define VM_ENTRY_LOAD_BNDCFGS			0x00010000
 #define VM_ENTRY_PT_CONCEAL_PIP			0x00020000
 #define VM_ENTRY_LOAD_IA32_RTIT_CTL		0x00040000
+#define VM_ENTRY_LOAD_IA32_LBR_CTL		0x00200000
 
 #define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR	0x000011ff
 
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index cd2ac9536c99..5affd2b5123e 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -395,6 +395,11 @@ static inline bool vmx_pebs_supported(void)
 	return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept;
 }
 
+static inline bool cpu_has_vmx_arch_lbr(void)
+{
+	return vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_LBR_CTL;
+}
+
 static inline bool cpu_has_notify_vmexit(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b28be793de29..59bdd9873fb5 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2360,6 +2360,10 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 		if (guest_efer != host_efer)
 			exec_control |= VM_ENTRY_LOAD_IA32_EFER;
 	}
+
+	if (cpu_has_vmx_arch_lbr())
+		exec_control &= ~VM_ENTRY_LOAD_IA32_LBR_CTL;
+
 	vm_entry_controls_set(vmx, exec_control);
 
 	/*
@@ -2374,6 +2378,10 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 		exec_control |= VM_EXIT_LOAD_IA32_EFER;
 	else
 		exec_control &= ~VM_EXIT_LOAD_IA32_EFER;
+
+	if (cpu_has_vmx_arch_lbr())
+		exec_control &= ~VM_EXIT_CLEAR_IA32_LBR_CTL;
+
 	vm_exit_controls_set(vmx, exec_control);
 
 	/*
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2ab4c33b5008..359da38a19a1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2599,6 +2599,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 		{ VM_ENTRY_LOAD_IA32_EFER,		VM_EXIT_LOAD_IA32_EFER },
 		{ VM_ENTRY_LOAD_BNDCFGS,		VM_EXIT_CLEAR_BNDCFGS },
 		{ VM_ENTRY_LOAD_IA32_RTIT_CTL,		VM_EXIT_CLEAR_IA32_RTIT_CTL },
+		{ VM_ENTRY_LOAD_IA32_LBR_CTL,		VM_EXIT_CLEAR_IA32_LBR_CTL },
 	};
 
 	memset(vmcs_conf, 0, sizeof(*vmcs_conf));
@@ -4794,6 +4795,9 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vpid_sync_context(vmx->vpid);
 
 	vmx_update_fb_clear_dis(vcpu, vmx);
+
+	if (!init_event && cpu_has_vmx_arch_lbr())
+		vmcs_write64(GUEST_IA32_LBR_CTL, 0);
 }
 
 static void vmx_enable_irq_window(struct kvm_vcpu *vcpu)
@@ -6191,6 +6195,10 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 	    vmentry_ctl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
 		pr_err("PerfGlobCtl = 0x%016llx\n",
 		       vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL));
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
+	    vmentry_ctl & VM_ENTRY_LOAD_IA32_LBR_CTL)
+		pr_err("ArchLBRCtl = 0x%016llx\n",
+		       vmcs_read64(GUEST_IA32_LBR_CTL));
 	if (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS)
 		pr_err("BndCfgS = 0x%016llx\n", vmcs_read64(GUEST_BNDCFGS));
 	pr_err("Interruptibility = %08x ActivityState = %08x\n",
@@ -7700,6 +7708,9 @@ static u64 vmx_get_perf_capabilities(void)
 			perf_cap &= ~PERF_CAP_PEBS_BASELINE;
 	}
 
+	if (boot_cpu_has(X86_FEATURE_ARCH_LBR) && !cpu_has_vmx_arch_lbr())
+		perf_cap &= ~PMU_CAP_LBR_FMT;
+
 	return perf_cap;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index a3da84f4ea45..f68c8a53a248 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -493,7 +493,8 @@ static inline u8 vmx_get_rvi(void)
 	 VM_ENTRY_LOAD_IA32_EFER |					\
 	 VM_ENTRY_LOAD_BNDCFGS |					\
 	 VM_ENTRY_PT_CONCEAL_PIP |					\
-	 VM_ENTRY_LOAD_IA32_RTIT_CTL)
+	 VM_ENTRY_LOAD_IA32_RTIT_CTL |					\
+	 VM_ENTRY_LOAD_IA32_LBR_CTL)
 
 #define __KVM_REQUIRED_VMX_VM_EXIT_CONTROLS				\
 	(VM_EXIT_SAVE_DEBUG_CONTROLS |					\
@@ -515,7 +516,8 @@ static inline u8 vmx_get_rvi(void)
 	 VM_EXIT_LOAD_IA32_EFER |					\
 	 VM_EXIT_CLEAR_BNDCFGS |					\
 	 VM_EXIT_PT_CONCEAL_PIP |					\
-	 VM_EXIT_CLEAR_IA32_RTIT_CTL)
+	 VM_EXIT_CLEAR_IA32_RTIT_CTL |					\
+	 VM_EXIT_CLEAR_IA32_LBR_CTL)
 
 #define KVM_REQUIRED_VMX_PIN_BASED_VM_EXEC_CONTROL		\
 	(PIN_BASED_EXT_INTR_MASK |				\
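
Editor's note (not part of the patch): below is a minimal, self-contained sketch
of the masking done in vmx_get_perf_capabilities() above, for illustration only.
It assumes PMU_CAP_LBR_FMT is the LBR-format field (bits 5:0) of
IA32_PERF_CAPABILITIES as defined in capabilities.h, and it models
boot_cpu_has(X86_FEATURE_ARCH_LBR) and cpu_has_vmx_arch_lbr() as plain boolean
parameters instead of reading CPUID/vmcs_config.

/*
 * Illustration only -- not kernel code. Shows how the guest-visible
 * IA32_PERF_CAPABILITIES value loses its LBR-format field when the host CPU
 * has Arch LBR but the VM_ENTRY_LOAD_IA32_LBR_CTL/VM_EXIT_CLEAR_IA32_LBR_CTL
 * VMCS controls are not available.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed from arch/x86/kvm/vmx/capabilities.h: LBR-format field, bits 5:0. */
#define PMU_CAP_LBR_FMT		0x3fULL

/* Stand-ins for boot_cpu_has(X86_FEATURE_ARCH_LBR) and cpu_has_vmx_arch_lbr(). */
static uint64_t guest_perf_capabilities(uint64_t host_perf_cap,
					bool cpu_has_arch_lbr,
					bool vmx_has_lbr_ctl_controls)
{
	uint64_t perf_cap = host_perf_cap;

	/*
	 * Without the LBR_CTL entry/exit controls, KVM cannot context-switch
	 * IA32_LBR_CTL across VM transitions, so hide the LBR format entirely.
	 */
	if (cpu_has_arch_lbr && !vmx_has_lbr_ctl_controls)
		perf_cap &= ~PMU_CAP_LBR_FMT;

	return perf_cap;
}

int main(void)
{
	uint64_t host_cap = 0x3f;	/* hypothetical host value: LBR-format field all-ones */

	printf("with LBR_CTL controls:    0x%llx\n",
	       (unsigned long long)guest_perf_capabilities(host_cap, true, true));
	printf("without LBR_CTL controls: 0x%llx\n",
	       (unsigned long long)guest_perf_capabilities(host_cap, true, false));
	return 0;
}

With the LBR_CTL controls present the host value passes through unchanged;
without them the LBR-format field reads as zero, so the guest PMU sees no
usable LBR configuration.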