Message ID: 20230120004051.2043777-1-seanjc@google.com
State: New
Headers:
From: Sean Christopherson <seanjc@google.com>
To: Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@redhat.com>, Arnaldo Carvalho de Melo <acme@kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org
Cc: Mark Rutland <mark.rutland@arm.com>, Alexander Shishkin <alexander.shishkin@linux.intel.com>, Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Jianfeng Gao <jianfeng.gao@intel.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>, Kan Liang <kan.liang@linux.intel.com>, Andi Kleen <ak@linux.intel.com>, Sean Christopherson <seanjc@google.com>
Subject: [PATCH] perf/x86: KVM: Disable vPMU support on hybrid CPUs (host PMUs)
Date: Fri, 20 Jan 2023 00:40:51 +0000
Message-ID: <20230120004051.2043777-1-seanjc@google.com>
Reply-To: Sean Christopherson <seanjc@google.com>
Series: perf/x86: KVM: Disable vPMU support on hybrid CPUs (host PMUs)
Commit Message
Sean Christopherson
Jan. 20, 2023, 12:40 a.m. UTC
Disable KVM support for virtualizing PMUs on hosts with hybrid PMUs until
KVM gains a sane way to enumerate the hybrid vPMU to userspace and/or
gains a mechanism to let userspace opt in to the dangers of exposing a
hybrid vPMU to KVM guests.
Virtualizing a hybrid PMU, or at least part of a hybrid PMU, is possible,
but it requires userspace to pin vCPUs to pCPUs to prevent migrating a
vCPU between a big core and a little core, requires the VMM to accurately
enumerate the topology to the guest (if exposing a hybrid CPU to the
guest), and also requires the VMM to accurately enumerate the vPMU
capabilities to the guest.
The last point is especially problematic, as KVM doesn't control which
pCPU it runs on when enumerating KVM's vPMU capabilities to userspace.
For now, simply disable vPMU support on hybrid CPUs to avoid inducing
seemingly random #GPs in guests.
Reported-by: Jianfeng Gao <jianfeng.gao@intel.com>
Cc: stable@vger.kernel.org
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: https://lore.kernel.org/all/20220818181530.2355034-1-kan.liang@linux.intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
Lightly tested as I don't have hybrid hardware. For the record, I'm not
against supporting hybrid vPMUs in KVM, but it needs to be a dedicated
effort and not implicitly rely on userspace to do the right thing (or get
lucky).
arch/x86/events/core.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
base-commit: de60733246ff4545a0483140c1f21426b8d7cb7f
Comments
On 2023-01-19 7:40 p.m., Sean Christopherson wrote:
> Virtualizing a hybrid PMU, or at least part of a hybrid PMU, is possible,
> but it requires userspace to pin vCPUs to pCPUs to prevent migrating a
> vCPU between a big core and a little core, requires the VMM to accurately
> enumerate the topology to the guest (if exposing a hybrid CPU to the
> guest), and also requires the VMM to accurately enumerate the vPMU
> capabilities to the guest.

The current kernel only returns the common counters to KVM, i.e. those
available on both e-cores and p-cores. In theory, there should be no
problem with migration between cores, so vCPUs don't have to be pinned.
The only limitation is that the guest can probably use only the
architectural events. There is nothing wrong with the information provided
by the kernel. I think this is a KVM issue (my guess is the CPUID
enumeration) that we should fix, rather than simply disabling the PMU for
entire hybrid machines.

Thanks,
Kan
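Kan's point that the kernel already reports only the common counters can be sketched as taking the minimum of each hybrid PMU's capabilities. This is an illustrative model only; the field names mirror `struct x86_pmu_capability`, but the merge logic and values here are hypothetical, not the kernel's actual code:

```python
# Hypothetical sketch of "report only what every core type supports".
# Field names mirror struct x86_pmu_capability; values are made up.

def common_pmu_capability(pmus):
    """Merge per-core-type PMU capabilities into the lowest common set."""
    return {
        "version": min(p["version"] for p in pmus),
        "num_counters_gp": min(p["num_counters_gp"] for p in pmus),
        "num_counters_fixed": min(p["num_counters_fixed"] for p in pmus),
    }

# Illustrative per-core-type capabilities for a hybrid part.
p_core = {"version": 5, "num_counters_gp": 8, "num_counters_fixed": 4}
e_core = {"version": 5, "num_counters_gp": 6, "num_counters_fixed": 3}

cap = common_pmu_capability([p_core, e_core])
```

As the thread goes on to show, exporting this lowest-common set still leaves the guest free to program per-core-type features the common set doesn't advertise.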
On Fri, Jan 20, 2023, Liang, Kan wrote:
> Current kernel only return the common counters to KVM, which is
> available on both e-core and p-core. In theory, there should be no
> problem with the migration between cores. You don't have to pin vCPU.
> The only problem is that you probably can only use the architecture events.

And how exactly is KVM supposed to tell the guest that it can only use
architectural events? I see CPUID bits that enumerate which architectural
events are supported, but I'm not seeing anything that says _only_
architectural events are supported.

> There is nothing wrong for the information provided by the kernel. I
> think it should be a KVM issue (my guess is the CPUID enumeration.) we
> should fix rather than simply disable the PMU for entire hybrid machines.

I'm not arguing this isn't KVM's problem, and I'm all for proper enabling
in KVM, but I'm not seeing any patches being posted. In the meantime,
we've got bug reports coming in about KVM guests having PMU problems on
hybrid hosts, and a pile of evidence that strongly suggests this isn't
going to be fixed by a one-line patch.

Again, I'm not against enabling vPMU on hybrid CPUs, but AFAICT the
enabling is non-trivial and may require new uAPI to provide the necessary
information to userspace. As a short-term fix, and something that can be
backported to stable trees, I don't see a better alternative than
disabling vPMU support.
On 2023-01-20 12:32 p.m., Sean Christopherson wrote:
> And how exactly is KVM supposed to tell the guest that it can only use
> architectural events? I see CPUID bits that enumerate which architectural
> events are supported, but I'm not seeing anything that says _only_
> architectural events are supported.

I think we have to use a white list in KVM. For an unsupported event, KVM
will not create the event.

> Again, I'm not against enabling vPMU on hybrid CPUs, but AFAICT the
> enabling is non-trivial and may require new uAPI to provide the necessary
> information to userspace. As a short-term fix, and something that can be
> backported to stable trees, I don't see a better alternative than
> disabling vPMU support.

I just did some tests with the latest kernel on an RPL machine, and
observed the below error in the guest.

    [    0.118214] unchecked MSR access error: WRMSR to 0x38f (tried to write 0x00011000f0000003f) at rIP: 0xffffffff83082124 (native_write_msr+0x4/0x30)
    [    0.118949] Call Trace:
    [    0.119092]  <TASK>
    [    0.119215]  ? __intel_pmu_enable_all.constprop.0+0x88/0xe0
    [    0.119533]  intel_pmu_enable_all+0x15/0x20
    [    0.119778]  x86_pmu_enable+0x17c/0x320

The error is caused by an access to an unsupported bit (bit 48). That bit
enables the Perf Metrics feature, which is a p-core-only feature.

KVM doesn't support the feature, so the corresponding bit of the
PERF_CAPABILITIES MSR is not exposed to the guest. On a non-hybrid
platform, the guest checks the bit and everything works well.

However, the current perf kernel implementation for ADL and RPL doesn't
check the bit, because the bit is not reliable on ADL and RPL. Perf
assumes that the p-core hardware always has the feature enabled. That's
not a problem on bare metal, but it seems to bring trouble on KVM.

There is no such issue on later platforms, e.g., MTL, since we enhanced
the PMU feature enumeration for hybrid platforms. Please find the
enhancement in Chapter 10, NEXT GENERATION PERFORMANCE MONITORING UNIT
(PMU):
https://cdrdv2-public.intel.com/671368/architecture-instruction-set-extensions-programming-reference.pdf

I think, as a short-term fix, we should fix the issue in the perf kernel
for ADL and RPL, rather than disable the entire vPMU on hybrid platforms.

How about the below patch?

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index dfd2c124cdf8..d667e8b79286 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -6459,7 +6459,13 @@ __init int intel_pmu_init(void)
 			__EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
 					   0, pmu->num_counters, 0, 0);
 		pmu->intel_cap.capabilities = x86_pmu.intel_cap.capabilities;
-		pmu->intel_cap.perf_metrics = 1;
+		/*
+		 * The perf metrics bit is not reliable on ADL and RPL. For bare
+		 * metal, it's safe to assume that the feature is always enabled
+		 * on p-core, but we cannot make the same assumption for KVM.
+		 */
+		if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
+			pmu->intel_cap.perf_metrics = 1;
 		pmu->intel_cap.pebs_output_pt_available = 0;

 		memcpy(pmu->hw_cache_event_ids, spr_hw_cache_event_ids,
 		       sizeof(pmu->hw_cache_event_ids));

Thanks,
Kan
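The faulting WRMSR value in the log above can be decoded to confirm that the Perf Metrics enable bit is indeed among the bits the guest tried to set. A quick sketch (the bit-position interpretation follows the discussion above; the helper is purely illustrative):

```python
# Decode the faulting IA32_PERF_GLOBAL_CTRL write from the guest log.
# Per the discussion, bit 48 is the Perf Metrics enable, which KVM does
# not expose here, hence the unchecked MSR access error.

WRMSR_VALUE = 0x00011000f0000003f  # value verbatim from the oops

def set_bits(value):
    """Return the set bit positions of an integer, lowest first."""
    return [i for i in range(value.bit_length()) if (value >> i) & 1]

bits = set_bits(WRMSR_VALUE)
# General-purpose counter enables occupy the low bits, fixed-counter
# enables start at bit 32, and bit 48 is the unsupported enable that
# triggers the fault.
assert 48 in bits
```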
On Fri, Jan 20, 2023, Liang, Kan wrote:
> The error is caused by the access to an unsupported bit (bit 48).
> The bit is to enable the Perf Metrics feature, which is a p-core only
> feature.
>
> However, the current implementation in perf kernel for ADL and RPL
> doesn't check the bit. Because the bit is not reliable on ADL and RPL.
> Perf assumes that the p-core hardware always has the feature enabled.
> There is no problem for the bare metal, but seems bring troubles on KVM.
>
> I think, for a short term fix, we should fix the issue in the perf
> kernel for ADL and RPL, rather than disable the entire vPMU on a hybrid
> platform.
>
> How about the below patch?

No, fudging around this in the guest isn't a viable fix, even as a short
term fix. Linux isn't the only guest supported by KVM, the VMM isn't
strictly required to set HYPERVISOR in guest CPUID, and it doesn't fix
the problems with trying to use microarchitectural events.
On 2023-01-20 3:34 p.m., Sean Christopherson wrote:
> No, fudging around this in the guest isn't a viable fix, even as a short
> term fix. Linux isn't the only guest supported by KVM, the VMM isn't
> strictly required to set HYPERVISOR in guest CPUID,

I once thought it was a KVM issue, but after debugging I found I was
wrong. It's the Linux guest that doesn't behave properly; KVM's response
is correct. KVM doesn't expose the Perf Metrics feature to the guest, but
the guest tries to enable the feature anyway, so the MSR access error is
expected. I think we should fix the wrong behavior of the Linux guest,
rather than disable innocent KVM. If the HYPERVISOR bit is not reliable,
is there another way to check whether we are running as a guest?

> and it doesn't fix the problems with trying to use
> microarchitectural events.

I think that's a different problem. Even on a non-hybrid machine, the
guest can try any event (supported or not); you cannot stop it. It's a
long-term issue. If I understand correctly, the workaround in KVM is to
add a white/black list to filter the events. I think we can do the same
thing for hybrid machines for now.
https://lore.kernel.org/lkml/CAOyeoRUUK+T_71J=+zcToyL93LkpARpsuWSfZS7jbJq=wd1rQg@mail.gmail.com/

Thanks,
Kan
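The allow-list workaround Kan suggests could look roughly like the following sketch. The (event select, umask) pairs are Intel's architectural performance events as defined in the SDM; the filter function itself and its name are hypothetical, not KVM's actual implementation:

```python
# Hypothetical allow-list filter for guest PMU events. The pairs below
# are the first seven Intel architectural events (SDM-defined); the
# filtering logic is an illustrative sketch, not KVM code.

ARCH_EVENTS = {
    (0x3C, 0x00),  # Unhalted core cycles
    (0xC0, 0x00),  # Instructions retired
    (0x3C, 0x01),  # Unhalted reference cycles
    (0x2E, 0x4F),  # LLC references
    (0x2E, 0x41),  # LLC misses
    (0xC4, 0x00),  # Branch instructions retired
    (0xC5, 0x00),  # Branch mispredicts retired
}

def allow_guest_event(eventsel_msr):
    """Return True if the IA32_PERFEVTSELx-encoded event is architectural."""
    event_select = eventsel_msr & 0xFF         # bits 0-7
    umask = (eventsel_msr >> 8) & 0xFF         # bits 8-15
    return (event_select, umask) in ARCH_EVENTS

assert allow_guest_event(0x004300C0)       # INST_RETIRED with USR+OS+EN set
assert not allow_guest_event(0x004301A1)   # a non-architectural event/umask
```

As Andi notes below, a filter like this keeps the guest safe at the cost of hiding every microarchitectural event, which is why it only makes sense as a stopgap.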
> If I understand correct, the workaround in KVM is to add a white/black
> list to filter the events. I think we can do the same thing for the
> hybrid machine for now.
> https://lore.kernel.org/lkml/CAOyeoRUUK+T_71J=+zcToyL93LkpARpsuWSfZS7jbJq=wd1rQg@mail.gmail.com/

This will make everyone who actually wants to use the PMU sad. It's
reasonable if the vCPUs are not bound, but if they are bound it would be
better to expose it with a suitable CPUID for the types.

-Andi
On 2023-01-23 8:04 p.m., Andi Kleen wrote:
> This will make everyone who actually wants to use the PMU sad.

Yes, but all the architectural events still work. I think that should be
good enough as a short-term solution while hybrid is not completely
supported in KVM.

> It's reasonable if the vCPUs are not bound, but if they are bound it
> would be better to expose it with a suitable CPUID for the types.

Yes, and also CPUID leaf 0x23 support to enumerate the PMU features of
each core type.

Thanks,
Kan
On Tue, Jan 24, 2023 at 10:31:00AM -0500, Liang, Kan wrote:
> Yes, and also CPUID leaf 0x23 support to enumerate the PMU features
> of each core type.

Note that this is not enough or even useful. There is nothing that stops
a vCPU from migrating between types at every instruction. There simply is
no relation between a vCPU and a type, so knowing what a type does or
does not support is useless information.
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 85a63a41c471..a67667c41cc8 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2974,17 +2974,18 @@ unsigned long perf_misc_flags(struct pt_regs *regs)

 void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
 {
-	if (!x86_pmu_initialized()) {
+	/*
+	 * Hybrid PMUs don't play nice with virtualization unless userspace
+	 * pins vCPUs _and_ can enumerate accurate information to the guest.
+	 * Disable vPMU support for hybrid PMUs until KVM gains a way to let
+	 * userspace opt into the dangers of hybrid vPMUs.
+	 */
+	if (!x86_pmu_initialized() || is_hybrid()) {
 		memset(cap, 0, sizeof(*cap));
 		return;
 	}

 	cap->version = x86_pmu.version;
-	/*
-	 * KVM doesn't support the hybrid PMU yet.
-	 * Return the common value in global x86_pmu,
-	 * which available for all cores.
-	 */
 	cap->num_counters_gp = x86_pmu.num_counters;
 	cap->num_counters_fixed = x86_pmu.num_counters_fixed;
 	cap->bit_width_gp = x86_pmu.cntval_bits;