Message ID | 20230205011515.1284674-9-atishp@rivosinc.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org>
From: Atish Patra <atishp@rivosinc.com>
To: linux-kernel@vger.kernel.org
Cc: Atish Patra <atishp@rivosinc.com>, Anup Patel <anup@brainfault.org>, Albert Ou <aou@eecs.berkeley.edu>, Andrew Jones <ajones@ventanamicro.com>, Atish Patra <atishp@atishpatra.org>, Guo Ren <guoren@kernel.org>, Heiko Stuebner <heiko@sntech.de>, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>, Palmer Dabbelt <palmer@dabbelt.com>, Paul Walmsley <paul.walmsley@sifive.com>, Will Deacon <will@kernel.org>
Subject: [PATCH v5 08/14] RISC-V: KVM: Add SBI PMU extension support
Date: Sat, 4 Feb 2023 17:15:09 -0800
Message-Id: <20230205011515.1284674-9-atishp@rivosinc.com>
In-Reply-To: <20230205011515.1284674-1-atishp@rivosinc.com>
References: <20230205011515.1284674-1-atishp@rivosinc.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0 |
Series |
KVM perf support
Commit Message
Atish Patra
Feb. 5, 2023, 1:15 a.m. UTC
The SBI PMU extension allows KVM guests to configure/start/stop/query the PMU counters in a virtualized environment as well.

In order to allow that, KVM implements the entire SBI PMU extension.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 arch/riscv/kvm/Makefile       |  2 +-
 arch/riscv/kvm/vcpu_sbi.c     | 11 +++++
 arch/riscv/kvm/vcpu_sbi_pmu.c | 87 +++++++++++++++++++++++++++++++++++
 3 files changed, 99 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/kvm/vcpu_sbi_pmu.c
Comments
On 5 February 2023 02:15:09 GMT+01:00, Atish Patra <atishp@rivosinc.com> wrote:
>SBI PMU extension allows KVM guests to configure/start/stop/query about
>the PMU counters in virtualized enviornment as well.
>
>In order to allow that, KVM implements the entire SBI PMU extension.
>
>Reviewed-by: Anup Patel <anup@brainfault.org>
>Signed-off-by: Atish Patra <atishp@rivosinc.com>

Hey Atish,
CI is still complaining about something in this patch:
https://gist.github.com/conor-pwbot/8f8d6a60a65b0b44d96c9c3b220e3efd

I'm without a laptop this weekend, so apologies for the lack of
investigation as to whether there's a reason for it.

Cheers,
Conor.
On Sat, Feb 04, 2023 at 05:15:09PM -0800, Atish Patra wrote:
> SBI PMU extension allows KVM guests to configure/start/stop/query about
> the PMU counters in virtualized enviornment as well.
>
> In order to allow that, KVM implements the entire SBI PMU extension.
>
> Reviewed-by: Anup Patel <anup@brainfault.org>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---

[...]

> +	/* Return not supported if PMU is not initialized */

This comment isn't necessary, it's just stating what the code clearly
states.

> +	if (!kvpmu->init_done) {
> +		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
> +		return 0;
> +	}

[...]

> +	/*
> +	 * This can fail if perf core framework fails to create an event.
> +	 * Forward the error to userspace because it's an error happened
                                                              ^ which
> +	 * within the host kernel. The other option would be to convert
> +	 * this an SBI error and forward to the guest.
               ^ to
> +	 */

[...]

> --
> 2.25.1
>

Otherwise,

Reviewed-by: Andrew Jones <ajones@ventanamicro.com>

Thanks,
drew
On Sun, Feb 5, 2023 at 2:31 AM Conor Dooley <conor@kernel.org> wrote:
>
> On 5 February 2023 02:15:09 GMT+01:00, Atish Patra <atishp@rivosinc.com> wrote:
> >SBI PMU extension allows KVM guests to configure/start/stop/query about
> >the PMU counters in virtualized enviornment as well.
> >
> >In order to allow that, KVM implements the entire SBI PMU extension.
> >
> >Reviewed-by: Anup Patel <anup@brainfault.org>
> >Signed-off-by: Atish Patra <atishp@rivosinc.com>
>
> Hey Atish,
> CI is still complaining about something in this patch:
> https://gist.github.com/conor-pwbot/8f8d6a60a65b0b44d96c9c3b220e3efd
>

This is an invalid warning, as vcpu_sbi_ext_pmu is used as an extern
variable in arch/riscv/kvm/vcpu_sbi.c. That's how every SBI extension
defines its kvm_vcpu_sbi_extension in its own respective file.

> I'm without a laptop this weekend, so apologies for
> the lack of investigation as to whether there's a
> reason for it.
>
> Cheers,
> Conor.
On 7 February 2023 06:10:57 GMT, Atish Patra <atishp@atishpatra.org> wrote:
>On Sun, Feb 5, 2023 at 2:31 AM Conor Dooley <conor@kernel.org> wrote:
>>
>> On 5 February 2023 02:15:09 GMT+01:00, Atish Patra <atishp@rivosinc.com> wrote:
>> >SBI PMU extension allows KVM guests to configure/start/stop/query about
>> >the PMU counters in virtualized enviornment as well.
>> >
>> >In order to allow that, KVM implements the entire SBI PMU extension.
>> >
>> >Reviewed-by: Anup Patel <anup@brainfault.org>
>> >Signed-off-by: Atish Patra <atishp@rivosinc.com>
>>
>> Hey Atish,
>> CI is still complaining about something in this patch:
>> https://gist.github.com/conor-pwbot/8f8d6a60a65b0b44d96c9c3b220e3efd
>>
>
>This is an invalid warning as vcpu_sbi_ext_pmu is used as an extern variable in
>arch/riscv/kvm/vcpu_sbi.c. That's how every sbi extension code defines the
>kvm_vcpu_sbi_extension in its own respective file.

Ah, so the user is in another patch.

>> I'm without a laptop this weekend, so apologies for
>> the lack of investigation as to whether there's a
>> reason for it.

Hard to check these things from a phone!
Sorry for bothering you so.

>>
>> Cheers,
>> Conor.
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 5de1053..278e97c 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -25,4 +25,4 @@ kvm-y += vcpu_sbi_base.o
 kvm-y += vcpu_sbi_replace.o
 kvm-y += vcpu_sbi_hsm.o
 kvm-y += vcpu_timer.o
-kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o
+kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index fe2897e..15fde15 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -20,6 +20,16 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = {
 };
 #endif
 
+#ifdef CONFIG_RISCV_PMU_SBI
+extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu;
+#else
+static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = {
+	.extid_start = -1UL,
+	.extid_end = -1UL,
+	.handler = NULL,
+};
+#endif
+
 static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
 	&vcpu_sbi_ext_v01,
 	&vcpu_sbi_ext_base,
@@ -28,6 +38,7 @@ static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
 	&vcpu_sbi_ext_rfence,
 	&vcpu_sbi_ext_srst,
 	&vcpu_sbi_ext_hsm,
+	&vcpu_sbi_ext_pmu,
 	&vcpu_sbi_ext_experimental,
 	&vcpu_sbi_ext_vendor,
 };
diff --git a/arch/riscv/kvm/vcpu_sbi_pmu.c b/arch/riscv/kvm/vcpu_sbi_pmu.c
new file mode 100644
index 0000000..9fdc1e1
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_sbi_pmu.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2023 Rivos Inc
+ *
+ * Authors:
+ *     Atish Patra <atishp@rivosinc.com>
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <asm/csr.h>
+#include <asm/sbi.h>
+#include <asm/kvm_vcpu_sbi.h>
+
+static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				   struct kvm_vcpu_sbi_return *retdata)
+{
+	int ret = 0;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	unsigned long funcid = cp->a6;
+	u64 temp;
+
+	/* Return not supported if PMU is not initialized */
+	if (!kvpmu->init_done) {
+		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+		return 0;
+	}
+
+	switch (funcid) {
+	case SBI_EXT_PMU_NUM_COUNTERS:
+		ret = kvm_riscv_vcpu_pmu_num_ctrs(vcpu, retdata);
+		break;
+	case SBI_EXT_PMU_COUNTER_GET_INFO:
+		ret = kvm_riscv_vcpu_pmu_ctr_info(vcpu, cp->a0, retdata);
+		break;
+	case SBI_EXT_PMU_COUNTER_CFG_MATCH:
+#if defined(CONFIG_32BIT)
+		temp = ((uint64_t)cp->a5 << 32) | cp->a4;
+#else
+		temp = cp->a4;
+#endif
+		/*
+		 * This can fail if perf core framework fails to create an event.
+		 * Forward the error to userspace because it's an error happened
+		 * within the host kernel. The other option would be to convert
+		 * this an SBI error and forward to the guest.
+		 */
+		ret = kvm_riscv_vcpu_pmu_ctr_cfg_match(vcpu, cp->a0, cp->a1,
+						       cp->a2, cp->a3, temp, retdata);
+		break;
+	case SBI_EXT_PMU_COUNTER_START:
+#if defined(CONFIG_32BIT)
+		temp = ((uint64_t)cp->a4 << 32) | cp->a3;
+#else
+		temp = cp->a3;
+#endif
+		ret = kvm_riscv_vcpu_pmu_ctr_start(vcpu, cp->a0, cp->a1, cp->a2,
+						   temp, retdata);
+		break;
+	case SBI_EXT_PMU_COUNTER_STOP:
+		ret = kvm_riscv_vcpu_pmu_ctr_stop(vcpu, cp->a0, cp->a1, cp->a2, retdata);
+		break;
+	case SBI_EXT_PMU_COUNTER_FW_READ:
+		ret = kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, retdata);
+		break;
+	default:
+		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+	}
+
+	return ret;
+}
+
+static unsigned long kvm_sbi_ext_pmu_probe(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+
+	return kvpmu->init_done;
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = {
+	.extid_start = SBI_EXT_PMU,
+	.extid_end = SBI_EXT_PMU,
+	.handler = kvm_sbi_ext_pmu_handler,
+	.probe = kvm_sbi_ext_pmu_probe,
+};