Message ID | 20230201231250.3806412-8-atishp@rivosinc.com |
---|---|
State | New |
Series | KVM perf support |
Subject | [PATCH v4 07/14] RISC-V: KVM: Add skeleton support for perf |
From | Atish Patra <atishp@rivosinc.com> |
Date | Wed, 1 Feb 2023 15:12:43 -0800 |
Commit Message
Atish Patra
Feb. 1, 2023, 11:12 p.m. UTC
This patch only adds the bare-bones structure of the perf implementation.
Most of the functions return zero at this point and will be implemented
fully in the future.
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
arch/riscv/include/asm/kvm_host.h | 4 +
arch/riscv/include/asm/kvm_vcpu_pmu.h | 78 +++++++++++++++
arch/riscv/kvm/Makefile | 1 +
arch/riscv/kvm/vcpu.c | 7 ++
arch/riscv/kvm/vcpu_pmu.c | 136 ++++++++++++++++++++++++++
5 files changed, 226 insertions(+)
create mode 100644 arch/riscv/include/asm/kvm_vcpu_pmu.h
create mode 100644 arch/riscv/kvm/vcpu_pmu.c
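For orientation, the stubs declared here are meant to back the SBI PMU extension, whose actual wiring is not part of this patch. A hypothetical sketch of that dispatch, using the SBI_EXT_PMU_* function IDs from <asm/sbi.h>; the handler signature and the 64-bit-only treatment of ival are assumptions:

static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
				   struct kvm_vcpu_sbi_return *retdata)
{
	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;

	/* The SBI function ID arrives in a6, arguments in a0..a4 */
	switch (cp->a6) {
	case SBI_EXT_PMU_NUM_COUNTERS:
		return kvm_riscv_vcpu_pmu_num_ctrs(vcpu, retdata);
	case SBI_EXT_PMU_COUNTER_GET_INFO:
		return kvm_riscv_vcpu_pmu_ctr_info(vcpu, cp->a0, retdata);
	case SBI_EXT_PMU_COUNTER_CFG_MATCH:
		return kvm_riscv_vcpu_pmu_ctr_cfg_match(vcpu, cp->a0, cp->a1,
							cp->a2, cp->a3, cp->a4,
							retdata);
	case SBI_EXT_PMU_COUNTER_START:
		return kvm_riscv_vcpu_pmu_ctr_start(vcpu, cp->a0, cp->a1,
						    cp->a2, cp->a3, retdata);
	case SBI_EXT_PMU_COUNTER_STOP:
		return kvm_riscv_vcpu_pmu_ctr_stop(vcpu, cp->a0, cp->a1,
						   cp->a2, retdata);
	case SBI_EXT_PMU_COUNTER_FW_READ:
		return kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, retdata);
	default:
		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
		return 0;
	}
}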
Comments
On Thu, Feb 2, 2023 at 4:42 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> This patch only adds barebone structure of perf implementation. Most of
> the function returns zero at this point and will be implemented
> fully in the future.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>

Looks good to me.

Reviewed-by: Anup Patel <anup@brainfault.org>

Regards,
Anup

[...]
On Wed, Feb 01, 2023 at 03:12:43PM -0800, Atish Patra wrote:
> This patch only adds barebone structure of perf implementation. Most of
> the function returns zero at this point and will be implemented
> fully in the future.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>

> +/* Per virtual pmu counter data */
> +struct kvm_pmc {
> +	u8 idx;
> +	struct perf_event *perf_event;
> +	uint64_t counter_val;

CI also complained that here, and elsewhere, you used uint64_t rather
than u64. Am I missing a reason for not using the regular types?

Thanks,
Conor.

> +	union sbi_pmu_ctr_info cinfo;
> +	/* Event monitoring status */
> +	bool started;
On Wed, Feb 01, 2023 at 03:12:43PM -0800, Atish Patra wrote: > This patch only adds barebone structure of perf implementation. Most of > the function returns zero at this point and will be implemented > fully in the future. > > Signed-off-by: Atish Patra <atishp@rivosinc.com> > --- > arch/riscv/include/asm/kvm_host.h | 4 + > arch/riscv/include/asm/kvm_vcpu_pmu.h | 78 +++++++++++++++ > arch/riscv/kvm/Makefile | 1 + > arch/riscv/kvm/vcpu.c | 7 ++ > arch/riscv/kvm/vcpu_pmu.c | 136 ++++++++++++++++++++++++++ > 5 files changed, 226 insertions(+) > create mode 100644 arch/riscv/include/asm/kvm_vcpu_pmu.h > create mode 100644 arch/riscv/kvm/vcpu_pmu.c > > diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h > index 93f43a3..b90be9a 100644 > --- a/arch/riscv/include/asm/kvm_host.h > +++ b/arch/riscv/include/asm/kvm_host.h > @@ -18,6 +18,7 @@ > #include <asm/kvm_vcpu_insn.h> > #include <asm/kvm_vcpu_sbi.h> > #include <asm/kvm_vcpu_timer.h> > +#include <asm/kvm_vcpu_pmu.h> > > #define KVM_MAX_VCPUS 1024 > > @@ -228,6 +229,9 @@ struct kvm_vcpu_arch { > > /* Don't run the VCPU (blocked) */ > bool pause; > + > + /* Performance monitoring context */ > + struct kvm_pmu pmu_context; > }; > > static inline void kvm_arch_hardware_unsetup(void) {} > diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h > new file mode 100644 > index 0000000..e2b4038 > --- /dev/null > +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h > @@ -0,0 +1,78 @@ > +/* SPDX-License-Identifier: GPL-2.0-only */ > +/* > + * Copyright (c) 2023 Rivos Inc > + * > + * Authors: > + * Atish Patra <atishp@rivosinc.com> > + */ > + > +#ifndef __KVM_VCPU_RISCV_PMU_H > +#define __KVM_VCPU_RISCV_PMU_H > + > +#include <linux/perf/riscv_pmu.h> > +#include <asm/kvm_vcpu_sbi.h> > +#include <asm/sbi.h> > + > +#ifdef CONFIG_RISCV_PMU_SBI > +#define RISCV_KVM_MAX_FW_CTRS 32 > + > +#if RISCV_KVM_MAX_FW_CTRS > 32 > +#error "Maximum firmware counter can't exceed 32 without increasing the RISCV_MAX_COUNTERS" "The number of firmware counters cannot exceed 32 without increasing RISCV_MAX_COUNTERS" > +#endif > + > +#define RISCV_MAX_COUNTERS 64 But instead of that message, what I think we need is something like #define RISCV_KVM_MAX_HW_CTRS 32 #define RISCV_KVM_MAX_FW_CTRS 32 #define RISCV_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS) static_assert(RISCV_MAX_COUNTERS <= 64) And then in pmu_sbi_device_probe() should ensure num_counters <= RISCV_MAX_COUNTERS and pmu_sbi_get_ctrinfo() should ensure num_hw_ctr <= RISCV_KVM_MAX_HW_CTRS num_fw_ctr <= RISCV_KVM_MAX_FW_CTRS which has to be done at runtime. 
> + > +/* Per virtual pmu counter data */ > +struct kvm_pmc { > + u8 idx; > + struct perf_event *perf_event; > + uint64_t counter_val; > + union sbi_pmu_ctr_info cinfo; > + /* Event monitoring status */ > + bool started; > +}; > + > +/* PMU data structure per vcpu */ > +struct kvm_pmu { > + struct kvm_pmc pmc[RISCV_MAX_COUNTERS]; > + /* Number of the virtual firmware counters available */ > + int num_fw_ctrs; > + /* Number of the virtual hardware counters available */ > + int num_hw_ctrs; > + /* A flag to indicate that pmu initialization is done */ > + bool init_done; > + /* Bit map of all the virtual counter used */ > + DECLARE_BITMAP(pmc_in_use, RISCV_MAX_COUNTERS); > +}; > + > +#define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu_context) > +#define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu_context)) > + > +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata); > +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, > + struct kvm_vcpu_sbi_return *retdata); > +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, > + unsigned long ctr_mask, unsigned long flag, uint64_t ival, > + struct kvm_vcpu_sbi_return *retdata); > +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, > + unsigned long ctr_mask, unsigned long flag, > + struct kvm_vcpu_sbi_return *retdata); > +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, > + unsigned long ctr_mask, unsigned long flag, > + unsigned long eidx, uint64_t evtdata, > + struct kvm_vcpu_sbi_return *retdata); s/flag/flags/ for all the above prototypes and all the implementations below. > +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, > + struct kvm_vcpu_sbi_return *retdata); > +void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu); > +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu); > +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); > + > +#else > +struct kvm_pmu { > +}; > + > +static inline void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) {} > +static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {} > +static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {} > +#endif /* CONFIG_RISCV_PMU_SBI */ > +#endif /* !__KVM_VCPU_RISCV_PMU_H */ > diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile > index 019df920..5de1053 100644 > --- a/arch/riscv/kvm/Makefile > +++ b/arch/riscv/kvm/Makefile > @@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o > kvm-y += vcpu_sbi_replace.o > kvm-y += vcpu_sbi_hsm.o > kvm-y += vcpu_timer.o > +kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c > index 7c08567..7d010b0 100644 > --- a/arch/riscv/kvm/vcpu.c > +++ b/arch/riscv/kvm/vcpu.c > @@ -138,6 +138,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) > WRITE_ONCE(vcpu->arch.irqs_pending, 0); > WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); > > + kvm_riscv_vcpu_pmu_reset(vcpu); > + > vcpu->arch.hfence_head = 0; > vcpu->arch.hfence_tail = 0; > memset(vcpu->arch.hfence_queue, 0, sizeof(vcpu->arch.hfence_queue)); > @@ -194,6 +196,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) > /* Setup VCPU timer */ > kvm_riscv_vcpu_timer_init(vcpu); > > + /* setup performance monitoring */ > + kvm_riscv_vcpu_pmu_init(vcpu); > + > /* Reset VCPU */ > kvm_riscv_reset_vcpu(vcpu); > > @@ -216,6 +221,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) > /* Cleanup VCPU timer */ > 
kvm_riscv_vcpu_timer_deinit(vcpu); > > + kvm_riscv_vcpu_pmu_deinit(vcpu); > + > /* Free unused pages pre-allocated for G-stage page table mappings */ > kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); > } > diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c > new file mode 100644 > index 0000000..2dad37f > --- /dev/null > +++ b/arch/riscv/kvm/vcpu_pmu.c > @@ -0,0 +1,136 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* > + * Copyright (c) 2023 Rivos Inc > + * > + * Authors: > + * Atish Patra <atishp@rivosinc.com> > + */ > + > +#include <linux/errno.h> > +#include <linux/err.h> > +#include <linux/kvm_host.h> > +#include <linux/perf/riscv_pmu.h> > +#include <asm/csr.h> > +#include <asm/kvm_vcpu_sbi.h> > +#include <asm/kvm_vcpu_pmu.h> > +#include <linux/kvm_host.h> > + > +#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) > + > +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata) > +{ > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > + > + retdata->out_val = kvm_pmu_num_counters(kvpmu); > + > + return 0; > +} > + > +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, > + struct kvm_vcpu_sbi_return *retdata) > +{ > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > + > + if (cidx > RISCV_MAX_COUNTERS || cidx == 1) { > + retdata->err_val = SBI_ERR_INVALID_PARAM; > + return 0; > + } > + > + retdata->out_val = kvpmu->pmc[cidx].cinfo.value; > + > + return 0; > +} > + > +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, > + unsigned long ctr_mask, unsigned long flag, uint64_t ival, > + struct kvm_vcpu_sbi_return *retdata) > +{ > + /* TODO */ > + return 0; > +} > + > +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, > + unsigned long ctr_mask, unsigned long flag, > + struct kvm_vcpu_sbi_return *retdata) > +{ > + /* TODO */ > + return 0; > +} > + > +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, > + unsigned long ctr_mask, unsigned long flag, > + unsigned long eidx, uint64_t evtdata, > + struct kvm_vcpu_sbi_return *retdata) > +{ > + /* TODO */ > + return 0; > +} > + > +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, > + struct kvm_vcpu_sbi_return *retdata) > +{ > + /* TODO */ > + return 0; > +} > + > +void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) > +{ > + int i = 0, ret, num_hw_ctrs = 0, hpm_width = 0; > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > + struct kvm_pmc *pmc; > + > + ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs); > + if (ret < 0 || !hpm_width || !num_hw_ctrs) > + return; > + > + /* > + * It is guranteed that RISCV_KVM_MAX_FW_CTRS can't exceed 32 as > + * that may exceed total number of counters more than RISCV_MAX_COUNTERS > + */ > + kvpmu->num_hw_ctrs = num_hw_ctrs; > + kvpmu->num_fw_ctrs = RISCV_KVM_MAX_FW_CTRS; If we sanity check that num_hw_ctrs <= 32 and num_fw_ctrs <= 32 at sbi_pmu probe time, then we can also return num_fw_ctrs (or num_ctrs) along with num_hw_ctrs from riscv_pmu_get_hpm_info(). Then, we can put the exact number here into kvmpmu->num_fw_ctrs, rather than using its max. > + > + /* > + * There is no correlation between the logical hardware counter and virtual counters. > + * However, we need to encode a hpmcounter CSR in the counter info field so that > + * KVM can trap n emulate the read. This works well in the migration use case as > + * KVM doesn't care if the actual hpmcounter is available in the hardware or not. 
> + */ > + for (i = 0; i < kvm_pmu_num_counters(kvpmu); i++) { > + /* TIME CSR shouldn't be read from perf interface */ > + if (i == 1) > + continue; > + pmc = &kvpmu->pmc[i]; > + pmc->idx = i; > + if (i < kvpmu->num_hw_ctrs) { > + pmc->cinfo.type = SBI_PMU_CTR_TYPE_HW; > + if (i < 3) > + /* CY, IR counters */ > + pmc->cinfo.width = 63; > + else > + pmc->cinfo.width = hpm_width; > + /* > + * The CSR number doesn't have any relation with the logical > + * hardware counters. The CSR numbers are encoded sequentially > + * to avoid maintaining a map between the virtual counter > + * and CSR number. > + */ > + pmc->cinfo.csr = CSR_CYCLE + i; > + } else { > + pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW; > + pmc->cinfo.width = BITS_PER_LONG - 1; > + } > + } > + > + kvpmu->init_done = true; > +} > + > +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) > +{ > + /* TODO */ > +} > + > +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) > +{ > + kvm_riscv_vcpu_pmu_deinit(vcpu); > +} > -- > 2.25.1 > Thanks, drew
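Pulling drew's suggestion above together in one place, the header would carry the following (the limits and the static_assert are taken from his mail; in the kernel, static_assert comes via <linux/build_bug.h>, which is an assumption about placement):

#define RISCV_KVM_MAX_HW_CTRS	32
#define RISCV_KVM_MAX_FW_CTRS	32
#define RISCV_MAX_COUNTERS	(RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS)

static_assert(RISCV_MAX_COUNTERS <= 64);

The runtime side he describes would then clamp what the SBI probe reports, along these lines (hypothetical placement inside pmu_sbi_get_ctrinfo()):

	if (num_hw_ctr > RISCV_KVM_MAX_HW_CTRS ||
	    num_fw_ctr > RISCV_KVM_MAX_FW_CTRS)
		return -EINVAL;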
On Thu, Feb 2, 2023 at 3:34 AM Conor Dooley <conor.dooley@microchip.com> wrote:
>
> On Wed, Feb 01, 2023 at 03:12:43PM -0800, Atish Patra wrote:
> > This patch only adds barebone structure of perf implementation. Most of
> > the function returns zero at this point and will be implemented
> > fully in the future.
> >
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > +/* Per virtual pmu counter data */
> > +struct kvm_pmc {
> > +	u8 idx;
> > +	struct perf_event *perf_event;
> > +	uint64_t counter_val;
>
> CI also complained that here, and elsewhere, you used uint64_t rather
> than u64. Am I missing a reason for not using the regular types?
>

Nope. It was a simple oversight. I will fix it.
Do you have a link to the CI report so that I can address them all in v5?

> Thanks,
> Conor.
>
> > +	union sbi_pmu_ctr_info cinfo;
> > +	/* Event monitoring status */
> > +	bool started;
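For reference, struct kvm_pmc after the type fix Atish agrees to here, with uint64_t replaced by the kernel's u64:

/* Per virtual pmu counter data */
struct kvm_pmc {
	u8 idx;
	struct perf_event *perf_event;
	u64 counter_val;
	union sbi_pmu_ctr_info cinfo;
	/* Event monitoring status */
	bool started;
};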
On 3 February 2023 08:04:00 GMT, Atish Patra <atishp@atishpatra.org> wrote:
>On Thu, Feb 2, 2023 at 3:34 AM Conor Dooley <conor.dooley@microchip.com> wrote:
>>
>> On Wed, Feb 01, 2023 at 03:12:43PM -0800, Atish Patra wrote:
>> > This patch only adds barebone structure of perf implementation. Most of
>> > the function returns zero at this point and will be implemented
>> > fully in the future.
>> >
>> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
>> > +/* Per virtual pmu counter data */
>> > +struct kvm_pmc {
>> > +	u8 idx;
>> > +	struct perf_event *perf_event;
>> > +	uint64_t counter_val;
>>
>> CI also complained that here, and elsewhere, you used uint64_t rather
>> than u64. Am I missing a reason for not using the regular types?
>>
>
>Nope. It was a simple oversight. I will fix it.
>Do you have a link to the CI report so that I can address them all in v5?

Try:

:%s/uint64_t/u64

It was just this patch, and checkpatch --strict should show it.

>> Thanks,
>> Conor.
>>
>> > +	union sbi_pmu_ctr_info cinfo;
>> > +	/* Event monitoring status */
>> > +	bool started;
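For anyone reproducing the check Conor mentions: from the top of a kernel tree, ./scripts/checkpatch.pl --strict -g HEAD runs the strict checks against the topmost commit (this exact invocation is illustrative; --strict and -g are standard checkpatch options).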
On Thu, Feb 2, 2023 at 9:03 AM Andrew Jones <ajones@ventanamicro.com> wrote: > > On Wed, Feb 01, 2023 at 03:12:43PM -0800, Atish Patra wrote: > > This patch only adds barebone structure of perf implementation. Most of > > the function returns zero at this point and will be implemented > > fully in the future. > > > > Signed-off-by: Atish Patra <atishp@rivosinc.com> > > --- > > arch/riscv/include/asm/kvm_host.h | 4 + > > arch/riscv/include/asm/kvm_vcpu_pmu.h | 78 +++++++++++++++ > > arch/riscv/kvm/Makefile | 1 + > > arch/riscv/kvm/vcpu.c | 7 ++ > > arch/riscv/kvm/vcpu_pmu.c | 136 ++++++++++++++++++++++++++ > > 5 files changed, 226 insertions(+) > > create mode 100644 arch/riscv/include/asm/kvm_vcpu_pmu.h > > create mode 100644 arch/riscv/kvm/vcpu_pmu.c > > > > diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h > > index 93f43a3..b90be9a 100644 > > --- a/arch/riscv/include/asm/kvm_host.h > > +++ b/arch/riscv/include/asm/kvm_host.h > > @@ -18,6 +18,7 @@ > > #include <asm/kvm_vcpu_insn.h> > > #include <asm/kvm_vcpu_sbi.h> > > #include <asm/kvm_vcpu_timer.h> > > +#include <asm/kvm_vcpu_pmu.h> > > > > #define KVM_MAX_VCPUS 1024 > > > > @@ -228,6 +229,9 @@ struct kvm_vcpu_arch { > > > > /* Don't run the VCPU (blocked) */ > > bool pause; > > + > > + /* Performance monitoring context */ > > + struct kvm_pmu pmu_context; > > }; > > > > static inline void kvm_arch_hardware_unsetup(void) {} > > diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h > > new file mode 100644 > > index 0000000..e2b4038 > > --- /dev/null > > +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h > > @@ -0,0 +1,78 @@ > > +/* SPDX-License-Identifier: GPL-2.0-only */ > > +/* > > + * Copyright (c) 2023 Rivos Inc > > + * > > + * Authors: > > + * Atish Patra <atishp@rivosinc.com> > > + */ > > + > > +#ifndef __KVM_VCPU_RISCV_PMU_H > > +#define __KVM_VCPU_RISCV_PMU_H > > + > > +#include <linux/perf/riscv_pmu.h> > > +#include <asm/kvm_vcpu_sbi.h> > > +#include <asm/sbi.h> > > + > > +#ifdef CONFIG_RISCV_PMU_SBI > > +#define RISCV_KVM_MAX_FW_CTRS 32 > > + > > +#if RISCV_KVM_MAX_FW_CTRS > 32 > > +#error "Maximum firmware counter can't exceed 32 without increasing the RISCV_MAX_COUNTERS" > > "The number of firmware counters cannot exceed 32 without increasing RISCV_MAX_COUNTERS" > > > +#endif > > + > > +#define RISCV_MAX_COUNTERS 64 > > But instead of that message, what I think we need is something like > > #define RISCV_KVM_MAX_HW_CTRS 32 > #define RISCV_KVM_MAX_FW_CTRS 32 > #define RISCV_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS) > > static_assert(RISCV_MAX_COUNTERS <= 64) > > And then in pmu_sbi_device_probe() should ensure > > num_counters <= RISCV_MAX_COUNTERS > > and pmu_sbi_get_ctrinfo() should ensure > > num_hw_ctr <= RISCV_KVM_MAX_HW_CTRS > num_fw_ctr <= RISCV_KVM_MAX_FW_CTRS > > which has to be done at runtime. > Sure. I will add the additional sanity checks. 
> > + > > +/* Per virtual pmu counter data */ > > +struct kvm_pmc { > > + u8 idx; > > + struct perf_event *perf_event; > > + uint64_t counter_val; > > + union sbi_pmu_ctr_info cinfo; > > + /* Event monitoring status */ > > + bool started; > > +}; > > + > > +/* PMU data structure per vcpu */ > > +struct kvm_pmu { > > + struct kvm_pmc pmc[RISCV_MAX_COUNTERS]; > > + /* Number of the virtual firmware counters available */ > > + int num_fw_ctrs; > > + /* Number of the virtual hardware counters available */ > > + int num_hw_ctrs; > > + /* A flag to indicate that pmu initialization is done */ > > + bool init_done; > > + /* Bit map of all the virtual counter used */ > > + DECLARE_BITMAP(pmc_in_use, RISCV_MAX_COUNTERS); > > +}; > > + > > +#define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu_context) > > +#define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu_context)) > > + > > +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata); > > +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, > > + struct kvm_vcpu_sbi_return *retdata); > > +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > + unsigned long ctr_mask, unsigned long flag, uint64_t ival, > > + struct kvm_vcpu_sbi_return *retdata); > > +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > + unsigned long ctr_mask, unsigned long flag, > > + struct kvm_vcpu_sbi_return *retdata); > > +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > + unsigned long ctr_mask, unsigned long flag, > > + unsigned long eidx, uint64_t evtdata, > > + struct kvm_vcpu_sbi_return *retdata); > > s/flag/flags/ for all the above prototypes and all the implementations > below. > Fixed. 
> > +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, > > + struct kvm_vcpu_sbi_return *retdata); > > +void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu); > > +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu); > > +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); > > + > > +#else > > +struct kvm_pmu { > > +}; > > + > > +static inline void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) {} > > +static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {} > > +static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {} > > +#endif /* CONFIG_RISCV_PMU_SBI */ > > +#endif /* !__KVM_VCPU_RISCV_PMU_H */ > > diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile > > index 019df920..5de1053 100644 > > --- a/arch/riscv/kvm/Makefile > > +++ b/arch/riscv/kvm/Makefile > > @@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o > > kvm-y += vcpu_sbi_replace.o > > kvm-y += vcpu_sbi_hsm.o > > kvm-y += vcpu_timer.o > > +kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o > > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c > > index 7c08567..7d010b0 100644 > > --- a/arch/riscv/kvm/vcpu.c > > +++ b/arch/riscv/kvm/vcpu.c > > @@ -138,6 +138,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) > > WRITE_ONCE(vcpu->arch.irqs_pending, 0); > > WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); > > > > + kvm_riscv_vcpu_pmu_reset(vcpu); > > + > > vcpu->arch.hfence_head = 0; > > vcpu->arch.hfence_tail = 0; > > memset(vcpu->arch.hfence_queue, 0, sizeof(vcpu->arch.hfence_queue)); > > @@ -194,6 +196,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) > > /* Setup VCPU timer */ > > kvm_riscv_vcpu_timer_init(vcpu); > > > > + /* setup performance monitoring */ > > + kvm_riscv_vcpu_pmu_init(vcpu); > > + > > /* Reset VCPU */ > > kvm_riscv_reset_vcpu(vcpu); > > > > @@ -216,6 +221,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) > > /* Cleanup VCPU timer */ > > kvm_riscv_vcpu_timer_deinit(vcpu); > > > > + kvm_riscv_vcpu_pmu_deinit(vcpu); > > + > > /* Free unused pages pre-allocated for G-stage page table mappings */ > > kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); > > } > > diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c > > new file mode 100644 > > index 0000000..2dad37f > > --- /dev/null > > +++ b/arch/riscv/kvm/vcpu_pmu.c > > @@ -0,0 +1,136 @@ > > +// SPDX-License-Identifier: GPL-2.0 > > +/* > > + * Copyright (c) 2023 Rivos Inc > > + * > > + * Authors: > > + * Atish Patra <atishp@rivosinc.com> > > + */ > > + > > +#include <linux/errno.h> > > +#include <linux/err.h> > > +#include <linux/kvm_host.h> > > +#include <linux/perf/riscv_pmu.h> > > +#include <asm/csr.h> > > +#include <asm/kvm_vcpu_sbi.h> > > +#include <asm/kvm_vcpu_pmu.h> > > +#include <linux/kvm_host.h> > > + > > +#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) > > + > > +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata) > > +{ > > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > > + > > + retdata->out_val = kvm_pmu_num_counters(kvpmu); > > + > > + return 0; > > +} > > + > > +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, > > + struct kvm_vcpu_sbi_return *retdata) > > +{ > > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > > + > > + if (cidx > RISCV_MAX_COUNTERS || cidx == 1) { > > + retdata->err_val = SBI_ERR_INVALID_PARAM; > > + return 0; > > + } > > + > > + retdata->out_val = kvpmu->pmc[cidx].cinfo.value; > > + > > + return 0; > > +} > > + > > +int 
kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > + unsigned long ctr_mask, unsigned long flag, uint64_t ival, > > + struct kvm_vcpu_sbi_return *retdata) > > +{ > > + /* TODO */ > > + return 0; > > +} > > + > > +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > + unsigned long ctr_mask, unsigned long flag, > > + struct kvm_vcpu_sbi_return *retdata) > > +{ > > + /* TODO */ > > + return 0; > > +} > > + > > +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > + unsigned long ctr_mask, unsigned long flag, > > + unsigned long eidx, uint64_t evtdata, > > + struct kvm_vcpu_sbi_return *retdata) > > +{ > > + /* TODO */ > > + return 0; > > +} > > + > > +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, > > + struct kvm_vcpu_sbi_return *retdata) > > +{ > > + /* TODO */ > > + return 0; > > +} > > + > > +void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) > > +{ > > + int i = 0, ret, num_hw_ctrs = 0, hpm_width = 0; > > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > > + struct kvm_pmc *pmc; > > + > > + ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs); > > + if (ret < 0 || !hpm_width || !num_hw_ctrs) > > + return; > > + > > + /* > > + * It is guranteed that RISCV_KVM_MAX_FW_CTRS can't exceed 32 as > > + * that may exceed total number of counters more than RISCV_MAX_COUNTERS > > + */ > > + kvpmu->num_hw_ctrs = num_hw_ctrs; > > + kvpmu->num_fw_ctrs = RISCV_KVM_MAX_FW_CTRS; > > If we sanity check that num_hw_ctrs <= 32 and num_fw_ctrs <= 32 at sbi_pmu > probe time, then we can also return num_fw_ctrs (or num_ctrs) along with > num_hw_ctrs from riscv_pmu_get_hpm_info(). Then, we can put the exact > number here into kvmpmu->num_fw_ctrs, rather than using its max. > The firmware counter information retrieved from PMU driver will be the number of firmware counter host supports (i.e. M-mode firmware supports). The number of counters supported for a guest is entirely up to the hypervisor. There shouldn't be any relation with the host's firmware counter. Looking at it again, we should probably set kvpmu->num_fw_ctrs to SBI_PMU_FW_MAX instead of RISCV_KVM_MAX_FW_CTRS. We already have a sanity check for SBI_PMU_FW_MAX in the code. > > + > > + /* > > + * There is no correlation between the logical hardware counter and virtual counters. > > + * However, we need to encode a hpmcounter CSR in the counter info field so that > > + * KVM can trap n emulate the read. This works well in the migration use case as > > + * KVM doesn't care if the actual hpmcounter is available in the hardware or not. > > + */ > > + for (i = 0; i < kvm_pmu_num_counters(kvpmu); i++) { > > + /* TIME CSR shouldn't be read from perf interface */ > > + if (i == 1) > > + continue; > > + pmc = &kvpmu->pmc[i]; > > + pmc->idx = i; > > + if (i < kvpmu->num_hw_ctrs) { > > + pmc->cinfo.type = SBI_PMU_CTR_TYPE_HW; > > + if (i < 3) > > + /* CY, IR counters */ > > + pmc->cinfo.width = 63; > > + else > > + pmc->cinfo.width = hpm_width; > > + /* > > + * The CSR number doesn't have any relation with the logical > > + * hardware counters. The CSR numbers are encoded sequentially > > + * to avoid maintaining a map between the virtual counter > > + * and CSR number. 
> > + */ > > + pmc->cinfo.csr = CSR_CYCLE + i; > > + } else { > > + pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW; > > + pmc->cinfo.width = BITS_PER_LONG - 1; > > + } > > + } > > + > > + kvpmu->init_done = true; > > +} > > + > > +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) > > +{ > > + /* TODO */ > > +} > > + > > +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) > > +{ > > + kvm_riscv_vcpu_pmu_deinit(vcpu); > > +} > > -- > > 2.25.1 > > > > Thanks, > drew
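A minimal sketch of the adjustment Atish lands on in the reply above, assuming SBI_PMU_FW_MAX from <asm/sbi.h> is the intended bound on the firmware events KVM emulates:

	kvpmu->num_hw_ctrs = num_hw_ctrs;
	/* Virtual firmware counters are independent of the host's count */
	kvpmu->num_fw_ctrs = SBI_PMU_FW_MAX;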
On Fri, Feb 3, 2023 at 12:47 AM Atish Patra <atishp@atishpatra.org> wrote: > > On Thu, Feb 2, 2023 at 9:03 AM Andrew Jones <ajones@ventanamicro.com> wrote: > > > > On Wed, Feb 01, 2023 at 03:12:43PM -0800, Atish Patra wrote: > > > This patch only adds barebone structure of perf implementation. Most of > > > the function returns zero at this point and will be implemented > > > fully in the future. > > > > > > Signed-off-by: Atish Patra <atishp@rivosinc.com> > > > --- > > > arch/riscv/include/asm/kvm_host.h | 4 + > > > arch/riscv/include/asm/kvm_vcpu_pmu.h | 78 +++++++++++++++ > > > arch/riscv/kvm/Makefile | 1 + > > > arch/riscv/kvm/vcpu.c | 7 ++ > > > arch/riscv/kvm/vcpu_pmu.c | 136 ++++++++++++++++++++++++++ > > > 5 files changed, 226 insertions(+) > > > create mode 100644 arch/riscv/include/asm/kvm_vcpu_pmu.h > > > create mode 100644 arch/riscv/kvm/vcpu_pmu.c > > > > > > diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h > > > index 93f43a3..b90be9a 100644 > > > --- a/arch/riscv/include/asm/kvm_host.h > > > +++ b/arch/riscv/include/asm/kvm_host.h > > > @@ -18,6 +18,7 @@ > > > #include <asm/kvm_vcpu_insn.h> > > > #include <asm/kvm_vcpu_sbi.h> > > > #include <asm/kvm_vcpu_timer.h> > > > +#include <asm/kvm_vcpu_pmu.h> > > > > > > #define KVM_MAX_VCPUS 1024 > > > > > > @@ -228,6 +229,9 @@ struct kvm_vcpu_arch { > > > > > > /* Don't run the VCPU (blocked) */ > > > bool pause; > > > + > > > + /* Performance monitoring context */ > > > + struct kvm_pmu pmu_context; > > > }; > > > > > > static inline void kvm_arch_hardware_unsetup(void) {} > > > diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h > > > new file mode 100644 > > > index 0000000..e2b4038 > > > --- /dev/null > > > +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h > > > @@ -0,0 +1,78 @@ > > > +/* SPDX-License-Identifier: GPL-2.0-only */ > > > +/* > > > + * Copyright (c) 2023 Rivos Inc > > > + * > > > + * Authors: > > > + * Atish Patra <atishp@rivosinc.com> > > > + */ > > > + > > > +#ifndef __KVM_VCPU_RISCV_PMU_H > > > +#define __KVM_VCPU_RISCV_PMU_H > > > + > > > +#include <linux/perf/riscv_pmu.h> > > > +#include <asm/kvm_vcpu_sbi.h> > > > +#include <asm/sbi.h> > > > + > > > +#ifdef CONFIG_RISCV_PMU_SBI > > > +#define RISCV_KVM_MAX_FW_CTRS 32 > > > + > > > +#if RISCV_KVM_MAX_FW_CTRS > 32 > > > +#error "Maximum firmware counter can't exceed 32 without increasing the RISCV_MAX_COUNTERS" > > > > "The number of firmware counters cannot exceed 32 without increasing RISCV_MAX_COUNTERS" > > > > > +#endif > > > + > > > +#define RISCV_MAX_COUNTERS 64 > > > > But instead of that message, what I think we need is something like > > > > #define RISCV_KVM_MAX_HW_CTRS 32 > > #define RISCV_KVM_MAX_FW_CTRS 32 > > #define RISCV_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS) > > > > static_assert(RISCV_MAX_COUNTERS <= 64) > > > > And then in pmu_sbi_device_probe() should ensure > > > > num_counters <= RISCV_MAX_COUNTERS > > > > and pmu_sbi_get_ctrinfo() should ensure > > > > num_hw_ctr <= RISCV_KVM_MAX_HW_CTRS > > num_fw_ctr <= RISCV_KVM_MAX_FW_CTRS > > > > which has to be done at runtime. > > > > Sure. I will add the additional sanity checks. > As explained above, I feel we shouldn't mix the firmware number of counters that the host gets and it exposes to a guest. So I have not included this suggestion in the v5. I have changed the num_fw_ctrs to PMU_FW_MAX though to accurately reflect the firmware counters KVM is actually using. 
I don't know if there is any benefit of static_assert over #error. Please let me know if you feel strongly about that. > > > + > > > +/* Per virtual pmu counter data */ > > > +struct kvm_pmc { > > > + u8 idx; > > > + struct perf_event *perf_event; > > > + uint64_t counter_val; > > > + union sbi_pmu_ctr_info cinfo; > > > + /* Event monitoring status */ > > > + bool started; > > > +}; > > > + > > > +/* PMU data structure per vcpu */ > > > +struct kvm_pmu { > > > + struct kvm_pmc pmc[RISCV_MAX_COUNTERS]; > > > + /* Number of the virtual firmware counters available */ > > > + int num_fw_ctrs; > > > + /* Number of the virtual hardware counters available */ > > > + int num_hw_ctrs; > > > + /* A flag to indicate that pmu initialization is done */ > > > + bool init_done; > > > + /* Bit map of all the virtual counter used */ > > > + DECLARE_BITMAP(pmc_in_use, RISCV_MAX_COUNTERS); > > > +}; > > > + > > > +#define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu_context) > > > +#define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu_context)) > > > + > > > +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata); > > > +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, > > > + struct kvm_vcpu_sbi_return *retdata); > > > +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > > + unsigned long ctr_mask, unsigned long flag, uint64_t ival, > > > + struct kvm_vcpu_sbi_return *retdata); > > > +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > > + unsigned long ctr_mask, unsigned long flag, > > > + struct kvm_vcpu_sbi_return *retdata); > > > +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > > + unsigned long ctr_mask, unsigned long flag, > > > + unsigned long eidx, uint64_t evtdata, > > > + struct kvm_vcpu_sbi_return *retdata); > > > > s/flag/flags/ for all the above prototypes and all the implementations > > below. > > > > Fixed. 
> > > > +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, > > > + struct kvm_vcpu_sbi_return *retdata); > > > +void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu); > > > +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu); > > > +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); > > > + > > > +#else > > > +struct kvm_pmu { > > > +}; > > > + > > > +static inline void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) {} > > > +static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {} > > > +static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {} > > > +#endif /* CONFIG_RISCV_PMU_SBI */ > > > +#endif /* !__KVM_VCPU_RISCV_PMU_H */ > > > diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile > > > index 019df920..5de1053 100644 > > > --- a/arch/riscv/kvm/Makefile > > > +++ b/arch/riscv/kvm/Makefile > > > @@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o > > > kvm-y += vcpu_sbi_replace.o > > > kvm-y += vcpu_sbi_hsm.o > > > kvm-y += vcpu_timer.o > > > +kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o > > > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c > > > index 7c08567..7d010b0 100644 > > > --- a/arch/riscv/kvm/vcpu.c > > > +++ b/arch/riscv/kvm/vcpu.c > > > @@ -138,6 +138,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) > > > WRITE_ONCE(vcpu->arch.irqs_pending, 0); > > > WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); > > > > > > + kvm_riscv_vcpu_pmu_reset(vcpu); > > > + > > > vcpu->arch.hfence_head = 0; > > > vcpu->arch.hfence_tail = 0; > > > memset(vcpu->arch.hfence_queue, 0, sizeof(vcpu->arch.hfence_queue)); > > > @@ -194,6 +196,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) > > > /* Setup VCPU timer */ > > > kvm_riscv_vcpu_timer_init(vcpu); > > > > > > + /* setup performance monitoring */ > > > + kvm_riscv_vcpu_pmu_init(vcpu); > > > + > > > /* Reset VCPU */ > > > kvm_riscv_reset_vcpu(vcpu); > > > > > > @@ -216,6 +221,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) > > > /* Cleanup VCPU timer */ > > > kvm_riscv_vcpu_timer_deinit(vcpu); > > > > > > + kvm_riscv_vcpu_pmu_deinit(vcpu); > > > + > > > /* Free unused pages pre-allocated for G-stage page table mappings */ > > > kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); > > > } > > > diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c > > > new file mode 100644 > > > index 0000000..2dad37f > > > --- /dev/null > > > +++ b/arch/riscv/kvm/vcpu_pmu.c > > > @@ -0,0 +1,136 @@ > > > +// SPDX-License-Identifier: GPL-2.0 > > > +/* > > > + * Copyright (c) 2023 Rivos Inc > > > + * > > > + * Authors: > > > + * Atish Patra <atishp@rivosinc.com> > > > + */ > > > + > > > +#include <linux/errno.h> > > > +#include <linux/err.h> > > > +#include <linux/kvm_host.h> > > > +#include <linux/perf/riscv_pmu.h> > > > +#include <asm/csr.h> > > > +#include <asm/kvm_vcpu_sbi.h> > > > +#include <asm/kvm_vcpu_pmu.h> > > > +#include <linux/kvm_host.h> > > > + > > > +#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) > > > + > > > +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata) > > > +{ > > > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > > > + > > > + retdata->out_val = kvm_pmu_num_counters(kvpmu); > > > + > > > + return 0; > > > +} > > > + > > > +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, > > > + struct kvm_vcpu_sbi_return *retdata) > > > +{ > > > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > > > + > > > + if (cidx > RISCV_MAX_COUNTERS || cidx == 1) { > > > + 
retdata->err_val = SBI_ERR_INVALID_PARAM; > > > + return 0; > > > + } > > > + > > > + retdata->out_val = kvpmu->pmc[cidx].cinfo.value; > > > + > > > + return 0; > > > +} > > > + > > > +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > > + unsigned long ctr_mask, unsigned long flag, uint64_t ival, > > > + struct kvm_vcpu_sbi_return *retdata) > > > +{ > > > + /* TODO */ > > > + return 0; > > > +} > > > + > > > +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > > + unsigned long ctr_mask, unsigned long flag, > > > + struct kvm_vcpu_sbi_return *retdata) > > > +{ > > > + /* TODO */ > > > + return 0; > > > +} > > > + > > > +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, > > > + unsigned long ctr_mask, unsigned long flag, > > > + unsigned long eidx, uint64_t evtdata, > > > + struct kvm_vcpu_sbi_return *retdata) > > > +{ > > > + /* TODO */ > > > + return 0; > > > +} > > > + > > > +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, > > > + struct kvm_vcpu_sbi_return *retdata) > > > +{ > > > + /* TODO */ > > > + return 0; > > > +} > > > + > > > +void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) > > > +{ > > > + int i = 0, ret, num_hw_ctrs = 0, hpm_width = 0; > > > + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); > > > + struct kvm_pmc *pmc; > > > + > > > + ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs); > > > + if (ret < 0 || !hpm_width || !num_hw_ctrs) > > > + return; > > > + > > > + /* > > > + * It is guranteed that RISCV_KVM_MAX_FW_CTRS can't exceed 32 as > > > + * that may exceed total number of counters more than RISCV_MAX_COUNTERS > > > + */ > > > + kvpmu->num_hw_ctrs = num_hw_ctrs; > > > + kvpmu->num_fw_ctrs = RISCV_KVM_MAX_FW_CTRS; > > > > If we sanity check that num_hw_ctrs <= 32 and num_fw_ctrs <= 32 at sbi_pmu > > probe time, then we can also return num_fw_ctrs (or num_ctrs) along with > > num_hw_ctrs from riscv_pmu_get_hpm_info(). Then, we can put the exact > > number here into kvmpmu->num_fw_ctrs, rather than using its max. > > > > The firmware counter information retrieved from PMU driver will be the > number of firmware > counter host supports (i.e. M-mode firmware supports). The number of > counters supported for a > guest is entirely up to the hypervisor. There shouldn't be any > relation with the host's firmware counter. > > Looking at it again, we should probably set kvpmu->num_fw_ctrs to > SBI_PMU_FW_MAX instead of RISCV_KVM_MAX_FW_CTRS. > We already have a sanity check for SBI_PMU_FW_MAX in the code. > > > + > > > + /* > > > + * There is no correlation between the logical hardware counter and virtual counters. > > > + * However, we need to encode a hpmcounter CSR in the counter info field so that > > > + * KVM can trap n emulate the read. This works well in the migration use case as > > > + * KVM doesn't care if the actual hpmcounter is available in the hardware or not. > > > + */ > > > + for (i = 0; i < kvm_pmu_num_counters(kvpmu); i++) { > > > + /* TIME CSR shouldn't be read from perf interface */ > > > + if (i == 1) > > > + continue; > > > + pmc = &kvpmu->pmc[i]; > > > + pmc->idx = i; > > > + if (i < kvpmu->num_hw_ctrs) { > > > + pmc->cinfo.type = SBI_PMU_CTR_TYPE_HW; > > > + if (i < 3) > > > + /* CY, IR counters */ > > > + pmc->cinfo.width = 63; > > > + else > > > + pmc->cinfo.width = hpm_width; > > > + /* > > > + * The CSR number doesn't have any relation with the logical > > > + * hardware counters. 
The CSR numbers are encoded sequentially > > > + * to avoid maintaining a map between the virtual counter > > > + * and CSR number. > > > + */ > > > + pmc->cinfo.csr = CSR_CYCLE + i; > > > + } else { > > > + pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW; > > > + pmc->cinfo.width = BITS_PER_LONG - 1; > > > + } > > > + } > > > + > > > + kvpmu->init_done = true; > > > +} > > > + > > > +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) > > > +{ > > > + /* TODO */ > > > +} > > > + > > > +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) > > > +{ > > > + kvm_riscv_vcpu_pmu_deinit(vcpu); > > > +} > > > -- > > > 2.25.1 > > > > > > > Thanks, > > drew > > > > -- > Regards, > Atish -- Regards, Atish
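For illustration, one of the prototypes after the s/flag/flags/ rename Atish marked as fixed, with the uint64_t-to-u64 fix from the other subthread folded in (combining the two in one declaration is an assumption about v5):

int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
				 unsigned long ctr_mask, unsigned long flags,
				 u64 ival, struct kvm_vcpu_sbi_return *retdata);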
On Sat, Feb 04, 2023 at 11:37:47PM -0800, Atish Patra wrote:
> On Fri, Feb 3, 2023 at 12:47 AM Atish Patra <atishp@atishpatra.org> wrote:
> >
> > On Thu, Feb 2, 2023 at 9:03 AM Andrew Jones <ajones@ventanamicro.com> wrote:
> > >
> > > On Wed, Feb 01, 2023 at 03:12:43PM -0800, Atish Patra wrote:
> > > > This patch only adds the barebones structure of the perf implementation.
> > > > Most of the functions return zero at this point and will be implemented
> > > > fully in the future.
> > > >
> > > > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > > > ---
> > > > [...]
> > > >
> > > > +#ifdef CONFIG_RISCV_PMU_SBI
> > > > +#define RISCV_KVM_MAX_FW_CTRS 32
> > > > +
> > > > +#if RISCV_KVM_MAX_FW_CTRS > 32
> > > > +#error "Maximum firmware counter can't exceed 32 without increasing the RISCV_MAX_COUNTERS"
> > >
> > > "The number of firmware counters cannot exceed 32 without increasing RISCV_MAX_COUNTERS"
> > >
> > > > +#endif
> > > > +
> > > > +#define RISCV_MAX_COUNTERS 64
> > >
> > > But instead of that message, what I think we need is something like
> > >
> > >   #define RISCV_KVM_MAX_HW_CTRS 32
> > >   #define RISCV_KVM_MAX_FW_CTRS 32
> > >   #define RISCV_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS)
> > >
> > >   static_assert(RISCV_MAX_COUNTERS <= 64)
> > >
> > > And then pmu_sbi_device_probe() should ensure
> > >
> > >   num_counters <= RISCV_MAX_COUNTERS
> > >
> > > and pmu_sbi_get_ctrinfo() should ensure
> > >
> > >   num_hw_ctr <= RISCV_KVM_MAX_HW_CTRS
> > >   num_fw_ctr <= RISCV_KVM_MAX_FW_CTRS
> > >
> > > which has to be done at runtime.
> > >
> >
> > Sure. I will add the additional sanity checks.
>
> As explained above, I feel we shouldn't mix the number of firmware
> counters that the host gets and the number it exposes to a guest.
> So I have not included this suggestion in v5.
> I have changed num_fw_ctrs to SBI_PMU_FW_MAX though, to accurately
> reflect the firmware counters KVM is actually using.

Sounds good

> I don't know if there is any benefit of static_assert over #error.
> Please let me know if you feel strongly about that.

One "normal" line vs. three #-lines?

Thanks,
drew
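[To make drew's line count concrete, a minimal sketch of the two styles side by side; the kernel's static_assert() comes from <linux/build_bug.h>, and the message string here is illustrative, not from the series:]

/* Preprocessor style: three #-lines, and the condition is limited to
 * what the preprocessor can evaluate. */
#if RISCV_KVM_MAX_FW_CTRS > 32
#error "The number of firmware counters cannot exceed 32 without increasing RISCV_MAX_COUNTERS"
#endif

/* C11 style via <linux/build_bug.h>: one line, and the condition may
 * also reference enum constants and sizeof(). */
static_assert(RISCV_MAX_COUNTERS <= 64, "counter bitmap is only 64 bits wide");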
On Mon, Feb 06, 2023 at 10:22:04AM +0100, Andrew Jones wrote:
> On Sat, Feb 04, 2023 at 11:37:47PM -0800, Atish Patra wrote:
> > [...]
> >
> > As explained above, I feel we shouldn't mix the number of firmware
> > counters that the host gets and the number it exposes to a guest.
> > So I have not included this suggestion in v5.
> > I have changed num_fw_ctrs to SBI_PMU_FW_MAX though, to accurately
> > reflect the firmware counters KVM is actually using.
>
> Sounds good

I just looked at v5. IMO, much of what I proposed above still makes
sense, since what I'm proposing is that the relationship between
RISCV_KVM_MAX_HW_CTRS, RISCV_KVM_MAX_FW_CTRS, RISCV_MAX_COUNTERS, and 64
(our current max bitmap size) be explicitly checked. So, even if we want
RISCV_KVM_MAX_FW_CTRS to be SBI_PMU_FW_MAX, it'd be good to have

  #define RISCV_KVM_MAX_HW_CTRS 32

(And a runtime check confirming num_hw_ctrs + 1 <= RISCV_KVM_MAX_HW_CTRS,
and then either silently capping or issuing a warning and capping)

And, to be sure the sum of RISCV_KVM_MAX_FW_CTRS and RISCV_KVM_MAX_HW_CTRS
doesn't exceed the size of the bitmap

  #define RISCV_KVM_MAX_FW_CTRS SBI_PMU_FW_MAX
  #define RISCV_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS)
  static_assert(RISCV_MAX_COUNTERS <= 64)

Thanks,
drew

> > I don't know if there is any benefit of static_assert over #error.
> > Please let me know if you feel strongly about that.
>
> One "normal" line vs. three #-lines?
>
> Thanks,
> drew
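[A minimal sketch of the warn-and-cap variant drew describes, as it might sit at the top of kvm_riscv_vcpu_pmu_init(); the pr_warn_once() text and the exact cap arithmetic are assumptions, not code from the series. The +1 mirrors the TIME CSR slot that the init loop in the patch skips:]

	ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
	if (ret < 0 || !hpm_width || !num_hw_ctrs)
		return;

	/* The per-VCPU bitmap can only track RISCV_KVM_MAX_HW_CTRS hardware
	 * counters, so cap whatever the host driver reports and warn once. */
	if (num_hw_ctrs + 1 > RISCV_KVM_MAX_HW_CTRS) {
		pr_warn_once("Limiting the hardware counters to %d\n",
			     RISCV_KVM_MAX_HW_CTRS);
		num_hw_ctrs = RISCV_KVM_MAX_HW_CTRS - 1;
	}
	kvpmu->num_hw_ctrs = num_hw_ctrs;
	kvpmu->num_fw_ctrs = SBI_PMU_FW_MAX;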
On Mon, Feb 6, 2023 at 3:39 AM Andrew Jones <ajones@ventanamicro.com> wrote:
>
> On Mon, Feb 06, 2023 at 10:22:04AM +0100, Andrew Jones wrote:
> > [...]
> >
> > Sounds good
>
> I just looked at v5. IMO, much of what I proposed above still makes
> sense, since what I'm proposing is that the relationship between
> RISCV_KVM_MAX_HW_CTRS, RISCV_KVM_MAX_FW_CTRS, RISCV_MAX_COUNTERS, and 64
> (our current max bitmap size) be explicitly checked. So, even if we want
> RISCV_KVM_MAX_FW_CTRS to be SBI_PMU_FW_MAX, it'd be good to have
>
>   #define RISCV_KVM_MAX_HW_CTRS 32
>
> (And a runtime check confirming num_hw_ctrs + 1 <= RISCV_KVM_MAX_HW_CTRS,
> and then either silently capping or issuing a warning and capping)
>
> And, to be sure the sum of RISCV_KVM_MAX_FW_CTRS and RISCV_KVM_MAX_HW_CTRS
> doesn't exceed the size of the bitmap
>
>   #define RISCV_KVM_MAX_FW_CTRS SBI_PMU_FW_MAX
>   #define RISCV_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS)
>   static_assert(RISCV_MAX_COUNTERS <= 64)
>

ok. I have added those changes, but I have renamed RISCV_MAX_COUNTERS to
RISCV_KVM_MAX_COUNTERS to avoid a clash with the RISCV_MAX_COUNTERS already
defined for the host driver. Logically, the host and the guest can have
separate counter limits anyway.

> Thanks,
> drew
>
> > > I don't know if there is any benefit of static_assert over #error.
> > > Please let me know if you feel strongly about that.
> >
> > One "normal" line vs. three #-lines?

Fair enough. Fixed.

> > Thanks,
> > drew
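[Putting the two agreements together, the counter-limit block in kvm_vcpu_pmu.h would then look roughly like this; a sketch of the v6 shape after the rename, not a quote from it:]

	#define RISCV_KVM_MAX_HW_CTRS  32
	#define RISCV_KVM_MAX_FW_CTRS  SBI_PMU_FW_MAX
	#define RISCV_KVM_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS)

	static_assert(RISCV_KVM_MAX_COUNTERS <= 64);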
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 93f43a3..b90be9a 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -18,6 +18,7 @@
 #include <asm/kvm_vcpu_insn.h>
 #include <asm/kvm_vcpu_sbi.h>
 #include <asm/kvm_vcpu_timer.h>
+#include <asm/kvm_vcpu_pmu.h>
 
 #define KVM_MAX_VCPUS 1024
 
@@ -228,6 +229,9 @@ struct kvm_vcpu_arch {
 
 	/* Don't run the VCPU (blocked) */
 	bool pause;
+
+	/* Performance monitoring context */
+	struct kvm_pmu pmu_context;
 };
 
 static inline void kvm_arch_hardware_unsetup(void) {}
diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
new file mode 100644
index 0000000..e2b4038
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2023 Rivos Inc
+ *
+ * Authors:
+ *     Atish Patra <atishp@rivosinc.com>
+ */
+
+#ifndef __KVM_VCPU_RISCV_PMU_H
+#define __KVM_VCPU_RISCV_PMU_H
+
+#include <linux/perf/riscv_pmu.h>
+#include <asm/kvm_vcpu_sbi.h>
+#include <asm/sbi.h>
+
+#ifdef CONFIG_RISCV_PMU_SBI
+#define RISCV_KVM_MAX_FW_CTRS 32
+
+#if RISCV_KVM_MAX_FW_CTRS > 32
+#error "Maximum firmware counter can't exceed 32 without increasing the RISCV_MAX_COUNTERS"
+#endif
+
+#define RISCV_MAX_COUNTERS 64
+
+/* Per virtual pmu counter data */
+struct kvm_pmc {
+	u8 idx;
+	struct perf_event *perf_event;
+	uint64_t counter_val;
+	union sbi_pmu_ctr_info cinfo;
+	/* Event monitoring status */
+	bool started;
+};
+
+/* PMU data structure per vcpu */
+struct kvm_pmu {
+	struct kvm_pmc pmc[RISCV_MAX_COUNTERS];
+	/* Number of virtual firmware counters available */
+	int num_fw_ctrs;
+	/* Number of virtual hardware counters available */
+	int num_hw_ctrs;
+	/* A flag to indicate that pmu initialization is done */
+	bool init_done;
+	/* Bitmap of all the virtual counters in use */
+	DECLARE_BITMAP(pmc_in_use, RISCV_MAX_COUNTERS);
+};
+
+#define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu_context)
+#define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu_context))
+
+int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata);
+int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
+				struct kvm_vcpu_sbi_return *retdata);
+int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
+				 unsigned long ctr_mask, unsigned long flag, uint64_t ival,
+				 struct kvm_vcpu_sbi_return *retdata);
+int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
+				unsigned long ctr_mask, unsigned long flag,
+				struct kvm_vcpu_sbi_return *retdata);
+int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base,
+				     unsigned long ctr_mask, unsigned long flag,
+				     unsigned long eidx, uint64_t evtdata,
+				     struct kvm_vcpu_sbi_return *retdata);
+int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
+				struct kvm_vcpu_sbi_return *retdata);
+void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
+
+#else
+struct kvm_pmu {
+};
+
+static inline void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) {}
+static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {}
+static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {}
+#endif /* CONFIG_RISCV_PMU_SBI */
+#endif /* !__KVM_VCPU_RISCV_PMU_H */
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 019df920..5de1053 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o
 kvm-y += vcpu_sbi_replace.o
 kvm-y += vcpu_sbi_hsm.o
 kvm-y += vcpu_timer.o
+kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 7c08567..7d010b0 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -138,6 +138,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
 	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
 
+	kvm_riscv_vcpu_pmu_reset(vcpu);
+
 	vcpu->arch.hfence_head = 0;
 	vcpu->arch.hfence_tail = 0;
 	memset(vcpu->arch.hfence_queue, 0, sizeof(vcpu->arch.hfence_queue));
@@ -194,6 +196,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	/* Setup VCPU timer */
 	kvm_riscv_vcpu_timer_init(vcpu);
 
+	/* setup performance monitoring */
+	kvm_riscv_vcpu_pmu_init(vcpu);
+
 	/* Reset VCPU */
 	kvm_riscv_reset_vcpu(vcpu);
 
@@ -216,6 +221,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	/* Cleanup VCPU timer */
 	kvm_riscv_vcpu_timer_deinit(vcpu);
 
+	kvm_riscv_vcpu_pmu_deinit(vcpu);
+
 	/* Free unused pages pre-allocated for G-stage page table mappings */
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
 }
diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
new file mode 100644
index 0000000..2dad37f
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2023 Rivos Inc
+ *
+ * Authors:
+ *     Atish Patra <atishp@rivosinc.com>
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <linux/perf/riscv_pmu.h>
+#include <asm/csr.h>
+#include <asm/kvm_vcpu_sbi.h>
+#include <asm/kvm_vcpu_pmu.h>
+
+#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs)
+
+int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+
+	retdata->out_val = kvm_pmu_num_counters(kvpmu);
+
+	return 0;
+}
+
+int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
+				struct kvm_vcpu_sbi_return *retdata)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+
+	if (cidx >= RISCV_MAX_COUNTERS || cidx == 1) {
+		retdata->err_val = SBI_ERR_INVALID_PARAM;
+		return 0;
+	}
+
+	retdata->out_val = kvpmu->pmc[cidx].cinfo.value;
+
+	return 0;
+}
+
+int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
+				 unsigned long ctr_mask, unsigned long flag, uint64_t ival,
+				 struct kvm_vcpu_sbi_return *retdata)
+{
+	/* TODO */
+	return 0;
+}
+
+int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
+				unsigned long ctr_mask, unsigned long flag,
+				struct kvm_vcpu_sbi_return *retdata)
+{
+	/* TODO */
+	return 0;
+}
+
+int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base,
+				     unsigned long ctr_mask, unsigned long flag,
+				     unsigned long eidx, uint64_t evtdata,
+				     struct kvm_vcpu_sbi_return *retdata)
+{
+	/* TODO */
+	return 0;
+}
+
+int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
+				struct kvm_vcpu_sbi_return *retdata)
+{
+	/* TODO */
+	return 0;
+}
+
+void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
+{
+	int i = 0, ret, num_hw_ctrs = 0, hpm_width = 0;
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+
+	ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
+	if (ret < 0 || !hpm_width || !num_hw_ctrs)
+		return;
+
+	/*
+	 * RISCV_KVM_MAX_FW_CTRS is guaranteed not to exceed 32; otherwise
+	 * the total number of counters could exceed RISCV_MAX_COUNTERS.
+	 */
+	kvpmu->num_hw_ctrs = num_hw_ctrs;
+	kvpmu->num_fw_ctrs = RISCV_KVM_MAX_FW_CTRS;
+
+	/*
+	 * There is no correlation between the logical hardware counters and
+	 * virtual counters. However, we need to encode a hpmcounter CSR in the
+	 * counter info field so that KVM can trap and emulate the read. This
+	 * works well in the migration use case as KVM doesn't care if the
+	 * actual hpmcounter is available in the hardware or not.
+	 */
+	for (i = 0; i < kvm_pmu_num_counters(kvpmu); i++) {
+		/* TIME CSR shouldn't be read from perf interface */
+		if (i == 1)
+			continue;
+		pmc = &kvpmu->pmc[i];
+		pmc->idx = i;
+		if (i < kvpmu->num_hw_ctrs) {
+			pmc->cinfo.type = SBI_PMU_CTR_TYPE_HW;
+			if (i < 3)
+				/* CY, IR counters */
+				pmc->cinfo.width = 63;
+			else
+				pmc->cinfo.width = hpm_width;
+			/*
+			 * The CSR number doesn't have any relation with the logical
+			 * hardware counters. The CSR numbers are encoded sequentially
+			 * to avoid maintaining a map between the virtual counter
+			 * and CSR number.
+			 */
+			pmc->cinfo.csr = CSR_CYCLE + i;
+		} else {
+			pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW;
+			pmc->cinfo.width = BITS_PER_LONG - 1;
+		}
+	}
+
+	kvpmu->init_done = true;
+}
+
+void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
+{
+	/* TODO */
+}
+
+void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)
+{
+	kvm_riscv_vcpu_pmu_deinit(vcpu);
+}
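[For reference, a hypothetical guest-side helper illustrating what the sequential encoding above hands back through the SBI counter-info call; decode_ctr_info() is not part of the series, and only the cinfo fields the patch itself uses (value, csr, width, type) are relied on. With CSR_CYCLE at 0xc00, virtual counter 5 decodes to hpmcounter5 (0xc05) whether or not that counter exists in hardware; KVM traps and emulates the read either way:]

static void decode_ctr_info(unsigned long raw)
{
	union sbi_pmu_ctr_info info = { .value = raw };

	/* The SBI spec encodes width as "number of bits minus one",
	 * matching the 63 / BITS_PER_LONG - 1 values set above. */
	if (info.type == SBI_PMU_CTR_TYPE_HW)
		pr_info("hw ctr: csr 0x%x, %u-bit\n",
			(unsigned int)info.csr, (unsigned int)info.width + 1);
	else
		pr_info("fw ctr: %u-bit\n", (unsigned int)info.width + 1);
}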