Message ID: 20231104000239.367005-4-seanjc@google.com
State: New
Headers:
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson <seanjc@google.com>, Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang <kan.liang@linux.intel.com>, Dapeng Mi <dapeng1.mi@linux.intel.com>, Jinrong Liang <cloudliang@tencent.com>, Like Xu <likexu@tencent.com>, Jim Mattson <jmattson@google.com>, Aaron Lewis <aaronlewis@google.com>
Subject: [PATCH v6 03/20] KVM: x86/pmu: Don't enumerate arch events KVM doesn't support
Date: Fri, 3 Nov 2023 17:02:21 -0700
Message-ID: <20231104000239.367005-4-seanjc@google.com>
In-Reply-To: <20231104000239.367005-1-seanjc@google.com>
References: <20231104000239.367005-1-seanjc@google.com>
Series: KVM: x86/pmu: selftests: Fixes and new tests
Commit Message
Sean Christopherson
Nov. 4, 2023, 12:02 a.m. UTC
Don't advertise support to userspace for architectural events that KVM
doesn't support, i.e. for "real" events that aren't listed in
intel_pmu_architectural_events. On current hardware, this effectively
means "don't advertise support for Top Down Slots".
Mask off the associated "unavailable" bits, as said bits for undefined
events are reserved to zero. Arguably the events _are_ defined, but from
a KVM perspective they might as well not exist, and there's absolutely no
reason to leave useless unavailable bits set.
Fixes: a6c06ed1a60a ("KVM: Expose the architectural performance monitoring CPUID leaf")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/pmu_intel.c | 9 +++++++++
1 file changed, 9 insertions(+)
Comments
On Fri, Nov 3, 2023 at 5:02 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Don't advertise support to userspace for architectural events that KVM
> doesn't support, i.e. for "real" events that aren't listed in
> intel_pmu_architectural_events. On current hardware, this effectively
> means "don't advertise support for Top Down Slots".

NR_REAL_INTEL_ARCH_EVENTS is only used in intel_hw_event_available().
As discussed (https://lore.kernel.org/kvm/ZUU12-TUR_1cj47u@google.com/),
intel_hw_event_available() should go away.

Aside from mapping fixed counters to event selector and unit mask
(fixed_pmc_events[]), KVM has no reason to know when a new
architectural event is defined.

The variable that this change "fixes" is only used to feed
CPUID.0AH:EBX in KVM_GET_SUPPORTED_CPUID, and kvm_pmu_cap.events_mask
is already constructed from what host perf advertises support for.

> Mask off the associated "unavailable" bits, as said bits for undefined
> events are reserved to zero. Arguably the events _are_ defined, but from
> a KVM perspective they might as well not exist, and there's absolutely no
> reason to leave useless unavailable bits set.
>
> Fixes: a6c06ed1a60a ("KVM: Expose the architectural performance monitoring CPUID leaf")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/vmx/pmu_intel.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 3316fdea212a..8d545f84dc4a 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -73,6 +73,15 @@ static void intel_init_pmu_capability(void)
>  	int i;
>
>  	/*
> +	 * Do not enumerate support for architectural events that KVM doesn't
> +	 * support.  Clear unsupported events "unavailable" bit as well, as
> +	 * architecturally such bits are reserved to zero.
> +	 */
> +	kvm_pmu_cap.events_mask_len = min(kvm_pmu_cap.events_mask_len,
> +					  NR_REAL_INTEL_ARCH_EVENTS);
> +	kvm_pmu_cap.events_mask &= GENMASK(kvm_pmu_cap.events_mask_len - 1, 0);
> +
> +	/*
>  	 * Perf may (sadly) back a guest fixed counter with a general purpose
>  	 * counter, and so KVM must hide fixed counters whose associated
>  	 * architectural event are unsupported.  On real hardware, this should
> --
> 2.42.0.869.gea05f2083d-goog
On 11/4/2023 8:41 PM, Jim Mattson wrote:
> On Fri, Nov 3, 2023 at 5:02 PM Sean Christopherson <seanjc@google.com> wrote:
>> Don't advertise support to userspace for architectural events that KVM
>> doesn't support, i.e. for "real" events that aren't listed in
>> intel_pmu_architectural_events. On current hardware, this effectively
>> means "don't advertise support for Top Down Slots".
> NR_REAL_INTEL_ARCH_EVENTS is only used in intel_hw_event_available().
> As discussed (https://lore.kernel.org/kvm/ZUU12-TUR_1cj47u@google.com/),
> intel_hw_event_available() should go away.
>
> Aside from mapping fixed counters to event selector and unit mask
> (fixed_pmc_events[]), KVM has no reason to know when a new
> architectural event is defined.

Since intel_hw_event_available() would be removed, it looks like the enum
intel_pmu_architectural_events and the intel_arch_events[] array become
useless. We can simply modify the current fixed_pmc_events[] array and use
it to store the fixed counter event codes and unit masks.

> The variable that this change "fixes" is only used to feed
> CPUID.0AH:EBX in KVM_GET_SUPPORTED_CPUID, and kvm_pmu_cap.events_mask
> is already constructed from what host perf advertises support for.
>
>> Mask off the associated "unavailable" bits, as said bits for undefined
>> events are reserved to zero. Arguably the events _are_ defined, but from
>> a KVM perspective they might as well not exist, and there's absolutely no
>> reason to leave useless unavailable bits set.
On Tue, Nov 07, 2023, Dapeng Mi wrote:
> On 11/4/2023 8:41 PM, Jim Mattson wrote:
> > On Fri, Nov 3, 2023 at 5:02 PM Sean Christopherson <seanjc@google.com> wrote:
> > > Don't advertise support to userspace for architectural events that KVM
> > > doesn't support, i.e. for "real" events that aren't listed in
> > > intel_pmu_architectural_events. On current hardware, this effectively
> > > means "don't advertise support for Top Down Slots".
> > NR_REAL_INTEL_ARCH_EVENTS is only used in intel_hw_event_available().
> > As discussed (https://lore.kernel.org/kvm/ZUU12-TUR_1cj47u@google.com/),
> > intel_hw_event_available() should go away.
> >
> > Aside from mapping fixed counters to event selector and unit mask
> > (fixed_pmc_events[]), KVM has no reason to know when a new
> > architectural event is defined.
>
> Since intel_hw_event_available() would be removed, it looks like the enum
> intel_pmu_architectural_events and the intel_arch_events[] array become
> useless. We can simply modify the current fixed_pmc_events[] array and use
> it to store the fixed counter event codes and unit masks.

Yep, I came to the same conclusion.  This is what I ended up with yesterday:

/*
 * Map fixed counter events to architectural general purpose event encodings.
 * Perf doesn't provide APIs to allow KVM to directly program a fixed counter,
 * and so KVM instead programs the architectural event to effectively request
 * the fixed counter.  Perf isn't guaranteed to use a fixed counter and may
 * instead program the encoding into a general purpose counter, e.g. if a
 * different perf_event is already utilizing the requested counter, but the
 * end result is the same (ignoring the fact that using a general purpose
 * counter will likely exacerbate counter contention).
 *
 * Note, reference cycles is counted using a perf-defined "pseudo-encoding",
 * as there is no architectural general purpose encoding for reference TSC
 * cycles.
 */
static u64 intel_get_fixed_pmc_eventsel(int index)
{
	const struct {
		u8 eventsel;
		u8 unit_mask;
	} fixed_pmc_events[] = {
		[0] = { 0xc0, 0x00 }, /* Instruction Retired / PERF_COUNT_HW_INSTRUCTIONS */
		[1] = { 0x3c, 0x00 }, /* CPU Cycles / PERF_COUNT_HW_CPU_CYCLES */
		[2] = { 0x00, 0x03 }, /* Reference TSC Cycles / PERF_COUNT_HW_REF_CPU_CYCLES */
	};

	BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);

	return (fixed_pmc_events[index].unit_mask << 8) |
	       fixed_pmc_events[index].eventsel;
}

...

static void intel_pmu_init(struct kvm_vcpu *vcpu)
{
	int i;
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);

	for (i = 0; i < KVM_INTEL_PMC_MAX_GENERIC; i++) {
		pmu->gp_counters[i].type = KVM_PMC_GP;
		pmu->gp_counters[i].vcpu = vcpu;
		pmu->gp_counters[i].idx = i;
		pmu->gp_counters[i].current_config = 0;
	}

	for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
		pmu->fixed_counters[i].type = KVM_PMC_FIXED;
		pmu->fixed_counters[i].vcpu = vcpu;
		pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
		pmu->fixed_counters[i].current_config = 0;
		pmu->fixed_counters[i].eventsel = intel_get_fixed_pmc_eventsel(i);
	}

	lbr_desc->records.nr = 0;
	lbr_desc->event = NULL;
	lbr_desc->msr_passthrough = false;
}
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 3316fdea212a..8d545f84dc4a 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -73,6 +73,15 @@ static void intel_init_pmu_capability(void)
 	int i;
 
 	/*
+	 * Do not enumerate support for architectural events that KVM doesn't
+	 * support.  Clear unsupported events "unavailable" bit as well, as
+	 * architecturally such bits are reserved to zero.
+	 */
+	kvm_pmu_cap.events_mask_len = min(kvm_pmu_cap.events_mask_len,
+					  NR_REAL_INTEL_ARCH_EVENTS);
+	kvm_pmu_cap.events_mask &= GENMASK(kvm_pmu_cap.events_mask_len - 1, 0);
+
+	/*
 	 * Perf may (sadly) back a guest fixed counter with a general purpose
 	 * counter, and so KVM must hide fixed counters whose associated
 	 * architectural event are unsupported.  On real hardware, this should