From patchwork Fri Jan 27 18:25:45 2023
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 49573
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Anup Patel, Atish Patra, Guo Ren, Heiko Stuebner,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
    linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
    Paul Walmsley, Sergey Matyukevich, Will Deacon
Subject: [PATCH v3 01/14] perf: RISC-V: Define helper functions expose hpm counter width and count
Date: Fri, 27 Jan 2023 10:25:45 -0800
Message-Id: <20230127182558.2416400-2-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

The KVM module needs to know how many hardware counters the platform supports and their counter width. Otherwise, it will not be able to show the optimal value of the virtual counters to the guest. The virtual hardware counters also need to have the same width as the logical hardware counters for simplicity. However, there shouldn't be a mapping between virtual hardware counters and logical hardware counters. As we don't support heterogeneous harts or counters with different widths as of now, the implementation relies on the counter width of the first available programmable counter.

Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
---
 drivers/perf/riscv_pmu_sbi.c | 37 ++++++++++++++++++++++++++++++++--
 include/linux/perf/riscv_pmu.h | 3 +++
 2 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c index f6507ef..6b53adc 100644 --- a/drivers/perf/riscv_pmu_sbi.c +++ b/drivers/perf/riscv_pmu_sbi.c @@ -44,7 +44,7 @@ static const struct attribute_group *riscv_pmu_attr_groups[] = { }; /* - * RISC-V doesn't have hetergenous harts yet. This need to be part of + * RISC-V doesn't have heterogeneous harts yet. This need to be part of * per_cpu in case of harts with different pmu counters */ static union sbi_pmu_ctr_info *pmu_ctr_list; @@ -52,6 +52,9 @@ static bool riscv_pmu_use_irq; static unsigned int riscv_pmu_irq_num; static unsigned int riscv_pmu_irq; +/* Cache the available counters in a bitmask */ +static unsigned long cmask; + struct sbi_pmu_event_data { union { union { @@ -267,6 +270,37 @@ static bool pmu_sbi_ctr_is_fw(int cidx) return (info->type == SBI_PMU_CTR_TYPE_FW) ? true : false; } +/* + * Returns the counter width of a programmable counter and number of hardware + * counters. As we don't support heterogeneous CPUs yet, it is okay to just + * return the counter width of the first programmable counter.
+ */ +int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr) +{ + int i; + union sbi_pmu_ctr_info *info; + u32 hpm_width = 0, hpm_count = 0; + + if (!cmask) + return -EINVAL; + + for_each_set_bit(i, &cmask, RISCV_MAX_COUNTERS) { + info = &pmu_ctr_list[i]; + if (!info) + continue; + if (!hpm_width && info->csr != CSR_CYCLE && info->csr != CSR_INSTRET) + hpm_width = info->width; + if (info->type == SBI_PMU_CTR_TYPE_HW) + hpm_count++; + } + + *hw_ctr_width = hpm_width; + *num_hw_ctr = hpm_count; + + return 0; +} +EXPORT_SYMBOL_GPL(riscv_pmu_get_hpm_info); + static int pmu_sbi_ctr_get_idx(struct perf_event *event) { struct hw_perf_event *hwc = &event->hw; @@ -812,7 +846,6 @@ static void riscv_pmu_destroy(struct riscv_pmu *pmu) static int pmu_sbi_device_probe(struct platform_device *pdev) { struct riscv_pmu *pmu = NULL; - unsigned long cmask = 0; int ret = -ENODEV; int num_counters; diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h index e17e86a..a1c3f77 100644 --- a/include/linux/perf/riscv_pmu.h +++ b/include/linux/perf/riscv_pmu.h @@ -73,6 +73,9 @@ void riscv_pmu_legacy_skip_init(void); static inline void riscv_pmu_legacy_skip_init(void) {}; #endif struct riscv_pmu *riscv_pmu_alloc(void); +#ifdef CONFIG_RISCV_PMU_SBI +int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr); +#endif #endif /* CONFIG_RISCV_PMU */
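For illustration, a consumer of the new helper might look roughly like the sketch below. The example_host_pmu_init() name and the message it prints are hypothetical and not part of this series; only riscv_pmu_get_hpm_info() is.

static int example_host_pmu_init(void)
{
	u32 hpm_width, num_hw_ctrs;
	int ret;

	/* Ask the SBI PMU driver for the hpmcounter width and HW counter count */
	ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
	if (ret)
		return ret;	/* -EINVAL: no counters discovered yet (cmask is 0) */

	pr_info("hpmcounter width %u, %u hardware counters\n",
		hpm_width, num_hw_ctrs);
	return 0;
}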
From patchwork Fri Jan 27 18:25:46 2023
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 49574

From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Anup Patel, Atish Patra, Guo Ren, Heiko Stuebner,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
    linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
    Paul Walmsley, Sergey Matyukevich, Will Deacon
Subject: [PATCH v3 02/14] perf: RISC-V: Improve privilege mode filtering for perf
Date: Fri, 27 Jan 2023 10:25:46 -0800
Message-Id: <20230127182558.2416400-3-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

Currently, the host driver doesn't have any method to identify whether the requested perf event is from KVM or bare metal. As KVM runs in HS mode, there is no separate hypervisor privilege mode to distinguish between the attributes for guest/host. Improve the privilege mode filtering by using the event-specific config1 field.

Reviewed-by: Andrew Jones
Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
---
 drivers/perf/riscv_pmu_sbi.c | 27 ++++++++++++++++++++++-----
 include/linux/perf/riscv_pmu.h | 2 ++
 2 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c index 6b53adc..e862b13 100644 --- a/drivers/perf/riscv_pmu_sbi.c +++ b/drivers/perf/riscv_pmu_sbi.c @@ -301,6 +301,27 @@ int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr) } EXPORT_SYMBOL_GPL(riscv_pmu_get_hpm_info); +static unsigned long pmu_sbi_get_filter_flags(struct perf_event *event) +{ + unsigned long cflags = 0; + bool guest_events = false; + + if (event->attr.config1 & RISCV_KVM_PMU_CONFIG1_GUEST_EVENTS) + guest_events = true; + if (event->attr.exclude_kernel) + cflags |= guest_events ? SBI_PMU_CFG_FLAG_SET_VSINH : SBI_PMU_CFG_FLAG_SET_SINH; + if (event->attr.exclude_user) + cflags |= guest_events ?
SBI_PMU_CFG_FLAG_SET_VUINH : SBI_PMU_CFG_FLAG_SET_UINH; + if (guest_events && event->attr.exclude_hv) + cflags |= SBI_PMU_CFG_FLAG_SET_SINH; + if (event->attr.exclude_host) + cflags |= SBI_PMU_CFG_FLAG_SET_UINH | SBI_PMU_CFG_FLAG_SET_SINH; + if (event->attr.exclude_guest) + cflags |= SBI_PMU_CFG_FLAG_SET_VSINH | SBI_PMU_CFG_FLAG_SET_VUINH; + + return cflags; +} + static int pmu_sbi_ctr_get_idx(struct perf_event *event) { struct hw_perf_event *hwc = &event->hw; @@ -311,11 +332,7 @@ static int pmu_sbi_ctr_get_idx(struct perf_event *event) uint64_t cbase = 0; unsigned long cflags = 0; - if (event->attr.exclude_kernel) - cflags |= SBI_PMU_CFG_FLAG_SET_SINH; - if (event->attr.exclude_user) - cflags |= SBI_PMU_CFG_FLAG_SET_UINH; - + cflags = pmu_sbi_get_filter_flags(event); /* retrieve the available counter index */ #if defined(CONFIG_32BIT) ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH, cbase, diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h index a1c3f77..1c42146 100644 --- a/include/linux/perf/riscv_pmu.h +++ b/include/linux/perf/riscv_pmu.h @@ -26,6 +26,8 @@ #define RISCV_PMU_STOP_FLAG_RESET 1 +#define RISCV_KVM_PMU_CONFIG1_GUEST_EVENTS 0x1 + struct cpu_hw_events { /* currently enabled events */ int n_events;
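As a rough sketch of how an in-kernel user such as the KVM vPMU could request a guest-scoped counter with this interface; the example_create_guest_counter() wrapper is hypothetical and only attr.config1 plus the exclude_* bits are the point of the example:

static struct perf_event *example_create_guest_counter(u64 config)
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_HARDWARE,
		.size		= sizeof(attr),
		.config		= config,
		/* Mark this as a guest event so the driver picks VS/VU-aware filters */
		.config1	= RISCV_KVM_PMU_CONFIG1_GUEST_EVENTS,
		/* Don't count while the host itself runs: maps to UINH/SINH above */
		.exclude_host	= 1,
		.exclude_hv	= 1,
	};

	return perf_event_create_kernel_counter(&attr, -1, current, NULL, NULL);
}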
From patchwork Fri Jan 27 18:25:47 2023
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 49575

From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Anup Patel, Atish Patra, Guo Ren, Heiko Stuebner,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
    linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
    Paul Walmsley, Sergey Matyukevich, Will Deacon
Subject: [PATCH v3 03/14] RISC-V: Improve SBI PMU extension related definitions
Date: Fri, 27 Jan 2023 10:25:47 -0800
Message-Id: <20230127182558.2416400-4-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

This patch fixes/improves a few minor things in the SBI PMU extension definitions:

1. Align all the firmware event names.
2. Add macros for the bit positions in the cache event ID & ops.

The changes were small enough to combine them together instead of creating one-liner patches.

Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/sbi.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h index 4ca7fba..f21c026 100644 --- a/arch/riscv/include/asm/sbi.h +++ b/arch/riscv/include/asm/sbi.h @@ -171,7 +171,7 @@ enum sbi_pmu_fw_generic_events_t { SBI_PMU_FW_IPI_SENT = 6, SBI_PMU_FW_IPI_RECVD = 7, SBI_PMU_FW_FENCE_I_SENT = 8, - SBI_PMU_FW_FENCE_I_RECVD = 9, + SBI_PMU_FW_FENCE_I_RCVD = 9, SBI_PMU_FW_SFENCE_VMA_SENT = 10, SBI_PMU_FW_SFENCE_VMA_RCVD = 11, SBI_PMU_FW_SFENCE_VMA_ASID_SENT = 12, @@ -215,6 +215,9 @@ enum sbi_pmu_ctr_type { #define SBI_PMU_EVENT_CACHE_OP_ID_CODE_MASK 0x06 #define SBI_PMU_EVENT_CACHE_RESULT_ID_CODE_MASK 0x01 +#define SBI_PMU_EVENT_CACHE_ID_SHIFT 3 +#define SBI_PMU_EVENT_CACHE_OP_SHIFT 1 + #define SBI_PMU_EVENT_IDX_INVALID 0xFFFFFFFF /* Flags defined for config matching function */
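A short illustrative helper (not part of this patch) showing how the new shift macros pair with the existing *_MASK definitions when composing an SBI hardware cache event code from the perf cache id/op/result triplet:

/* code = (cache_id << 3) | (op_id << 1) | result_id */
static inline u32 example_sbi_cache_event_code(u32 cache_id, u32 op_id,
					       u32 result_id)
{
	return (cache_id << SBI_PMU_EVENT_CACHE_ID_SHIFT) |
	       (op_id << SBI_PMU_EVENT_CACHE_OP_SHIFT) |
	       result_id;
}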
From patchwork Fri Jan 27 18:25:48 2023
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 49576
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Anup Patel, Atish Patra, Guo Ren, Heiko Stuebner,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
    linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
    Paul Walmsley, Sergey Matyukevich, Will Deacon
Subject: [PATCH v3 04/14] RISC-V: KVM: Define a probe function for SBI extension data structures
Date: Fri, 27 Jan 2023 10:25:48 -0800
Message-Id: <20230127182558.2416400-5-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

Currently, the probe function just checks whether an SBI extension is registered or not. However, the extension may not want to advertise itself depending on some other condition. An additional extension-specific probe function will allow extensions to decide whether they want to be advertised to the caller or not. Any extension that does not require additional dependency checks can avoid implementing this function.

Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h | 3 +++
 arch/riscv/kvm/vcpu_sbi_base.c | 13 +++++++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h index f79478a..45ba341 100644 --- a/arch/riscv/include/asm/kvm_vcpu_sbi.h +++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h @@ -29,6 +29,9 @@ struct kvm_vcpu_sbi_extension { int (*handler)(struct kvm_vcpu *vcpu, struct kvm_run *run, unsigned long *out_val, struct kvm_cpu_trap *utrap, bool *exit); + + /* Extension specific probe function */ + unsigned long (*probe)(struct kvm_vcpu *vcpu); }; void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run); diff --git a/arch/riscv/kvm/vcpu_sbi_base.c b/arch/riscv/kvm/vcpu_sbi_base.c index 5d65c63..846d518 100644 --- a/arch/riscv/kvm/vcpu_sbi_base.c +++ b/arch/riscv/kvm/vcpu_sbi_base.c @@ -19,6 +19,7 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, { int ret = 0; struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + const struct kvm_vcpu_sbi_extension *sbi_ext; switch (cp->a6) { case SBI_EXT_BASE_GET_SPEC_VERSION: @@ -43,8 +44,16 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, */ kvm_riscv_vcpu_sbi_forward(vcpu, run); *exit = true; - } else - *out_val = kvm_vcpu_sbi_find_ext(cp->a0) ?
1 : 0; + } else { + sbi_ext = kvm_vcpu_sbi_find_ext(cp->a0); + if (sbi_ext) { + if (sbi_ext->probe) + *out_val = sbi_ext->probe(vcpu); + else + *out_val = 1; + } else + *out_val = 0; + } break; case SBI_EXT_BASE_GET_MVENDORID: *out_val = vcpu->arch.mvendorid;
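A minimal sketch of how an extension can use the new hook; the PMU extension added later in this series does essentially this, but the example_* names and the readiness helper below are illustrative, not part of this patch:

static unsigned long example_sbi_ext_pmu_probe(struct kvm_vcpu *vcpu)
{
	/* Advertise the extension only when the vPMU was set up for this VCPU */
	return example_vcpu_pmu_is_ready(vcpu) ? 1 : 0;	/* hypothetical helper */
}

static const struct kvm_vcpu_sbi_extension example_vcpu_sbi_ext_pmu = {
	.extid_start = SBI_EXT_PMU,
	.extid_end = SBI_EXT_PMU,
	.handler = example_sbi_ext_pmu_handler,		/* defined elsewhere */
	.probe = example_sbi_ext_pmu_probe,
};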
From patchwork Fri Jan 27 18:25:49 2023
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 49583

From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Anup Patel, Atish Patra, Guo Ren, Heiko Stuebner,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
    linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
    Paul Walmsley, Sergey Matyukevich, Will Deacon
Subject: [PATCH v3 05/14] RISC-V: KVM: Return correct code for hsm stop function
Date: Fri, 27 Jan 2023 10:25:49 -0800
Message-Id: <20230127182558.2416400-6-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

According to the SBI specification, the stop function can only return the error code SBI_ERR_FAILED. However, it currently returns -EINVAL, which will be mapped to SBI_ERR_INVALID_PARAM. Return a Linux error code that maps to SBI_ERR_FAILED, i.e. one that doesn't map to any other SBI error code. While EACCES is not the best error code to describe the situation, it is close enough and will be replaced with SBI error codes directly anyway.

Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
---
 arch/riscv/kvm/vcpu_sbi_hsm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c index 2e915ca..619ac0f 100644 --- a/arch/riscv/kvm/vcpu_sbi_hsm.c +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c @@ -42,7 +42,7 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu) static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu) { if (vcpu->arch.power_off) - return -EINVAL; + return -EACCES; kvm_riscv_vcpu_power_off(vcpu);
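The reasoning in code form, for reference: the existing kvm_linux_err_map_sbi() helper (still visible in the next patch, where it is removed) only has explicit cases for 0, -EPERM, -EINVAL, -EFAULT, -EOPNOTSUPP and -EALREADY, so -EACCES falls through to the default branch and reaches the guest as SBI_ERR_FAILURE (the specification's SBI_ERR_FAILED); the snippet below is an abridged excerpt, not new code:

	switch (err) {
	case -EINVAL:
		return SBI_ERR_INVALID_PARAM;	/* what the guest saw before this patch */
	/* ... other explicit mappings ... */
	default:
		return SBI_ERR_FAILURE;		/* what -EACCES now maps to */
	}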
From patchwork Fri Jan 27 18:25:50 2023
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 49577

From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Anup Patel, Atish Patra, Guo Ren, Heiko Stuebner,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
    linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
    Paul Walmsley, Sergey Matyukevich, Will Deacon
Subject: [PATCH v3 06/14] RISC-V: KVM: Modify SBI extension handler to return SBI error code
Date: Fri, 27 Jan 2023 10:25:50 -0800
Message-Id: <20230127182558.2416400-7-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

Currently, the SBI extension handler is expected to return a Linux error code. The top SBI layer converts the Linux error code to an SBI-specific error code that can be returned to the guest invoking the SBI calls. This model works as long as the error codes have 1-to-1 mappings between them. However, that may not always be true. This patch disassociates the two error codes by allowing the SBI extension implementation to return SBI-specific error codes as well. The extension will continue to return a Linux-specific error code, which indicates any problem *with* the extension emulation, while the SBI-specific error indicates the problem *of* the emulation.

Suggested-by: Andrew Jones
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h | 10 ++++--
 arch/riscv/kvm/vcpu_sbi.c | 46 ++++++++------------------
 arch/riscv/kvm/vcpu_sbi_base.c | 38 ++++++++++------------
 arch/riscv/kvm/vcpu_sbi_hsm.c | 29 +++++++++--------
 arch/riscv/kvm/vcpu_sbi_replace.c | 47 ++++++++++++++-------------
 arch/riscv/kvm/vcpu_sbi_v01.c | 11 +++----
 6 files changed, 84 insertions(+), 97 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h index 45ba341..38407b3 100644 --- a/arch/riscv/include/asm/kvm_vcpu_sbi.h +++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h @@ -18,6 +18,12 @@ struct kvm_vcpu_sbi_context { int return_handled; }; +struct kvm_vcpu_sbi_ext_data { + unsigned long out_val; + unsigned long err_val; + bool uexit; +}; + struct kvm_vcpu_sbi_extension { unsigned long extid_start; unsigned long extid_end; @@ -27,8 +33,8 @@ struct kvm_vcpu_sbi_extension { * specific error codes.
*/ int (*handler)(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, struct kvm_cpu_trap *utrap, - bool *exit); + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap); /* Extension specific probe function */ unsigned long (*probe)(struct kvm_vcpu *vcpu); diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index f96991d..aa42da6 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -12,26 +12,6 @@ #include #include -static int kvm_linux_err_map_sbi(int err) -{ - switch (err) { - case 0: - return SBI_SUCCESS; - case -EPERM: - return SBI_ERR_DENIED; - case -EINVAL: - return SBI_ERR_INVALID_PARAM; - case -EFAULT: - return SBI_ERR_INVALID_ADDRESS; - case -EOPNOTSUPP: - return SBI_ERR_NOT_SUPPORTED; - case -EALREADY: - return SBI_ERR_ALREADY_AVAILABLE; - default: - return SBI_ERR_FAILURE; - }; -} - #ifndef CONFIG_RISCV_SBI_V01 static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { .extid_start = -1UL, @@ -125,11 +105,10 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) { int ret = 1; bool next_sepc = true; - bool userspace_exit = false; struct kvm_cpu_context *cp = &vcpu->arch.guest_context; const struct kvm_vcpu_sbi_extension *sbi_ext; struct kvm_cpu_trap utrap = { 0 }; - unsigned long out_val = 0; + struct kvm_vcpu_sbi_ext_data edata_out = { 0 }; bool ext_is_v01 = false; sbi_ext = kvm_vcpu_sbi_find_ext(cp->a7); @@ -139,13 +118,22 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) cp->a7 <= SBI_EXT_0_1_SHUTDOWN) ext_is_v01 = true; #endif - ret = sbi_ext->handler(vcpu, run, &out_val, &utrap, &userspace_exit); + ret = sbi_ext->handler(vcpu, run, &edata_out, &utrap); } else { /* Return error for unsupported SBI calls */ cp->a0 = SBI_ERR_NOT_SUPPORTED; goto ecall_done; } + /* + * When the SBI extension returns a Linux error code, it exits the ioctl + * loop and forwards the error to userspace. + */ + if (ret < 0) { + next_sepc = false; + goto ecall_done; + } + /* Handle special error cases i.e trap, exit or userspace forward */ if (utrap.scause) { /* No need to increment sepc or exit ioctl loop */ @@ -157,24 +145,18 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) } /* Exit ioctl loop or Propagate the error code the guest */ - if (userspace_exit) { + if (edata_out.uexit) { next_sepc = false; ret = 0; } else { - /** - * SBI extension handler always returns an Linux error code. Convert - * it to the SBI specific error code that can be propagated the SBI - * caller. 
- */ - ret = kvm_linux_err_map_sbi(ret); - cp->a0 = ret; + cp->a0 = edata_out.err_val; ret = 1; } ecall_done: if (next_sepc) cp->sepc += 4; if (!ext_is_v01) - cp->a1 = out_val; + cp->a1 = edata_out.out_val; return ret; } diff --git a/arch/riscv/kvm/vcpu_sbi_base.c b/arch/riscv/kvm/vcpu_sbi_base.c index 846d518..84885e5 100644 --- a/arch/riscv/kvm/vcpu_sbi_base.c +++ b/arch/riscv/kvm/vcpu_sbi_base.c @@ -14,24 +14,23 @@ #include static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *trap, bool *exit) + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *trap) { - int ret = 0; struct kvm_cpu_context *cp = &vcpu->arch.guest_context; const struct kvm_vcpu_sbi_extension *sbi_ext; switch (cp->a6) { case SBI_EXT_BASE_GET_SPEC_VERSION: - *out_val = (KVM_SBI_VERSION_MAJOR << + edata->out_val = (KVM_SBI_VERSION_MAJOR << SBI_SPEC_VERSION_MAJOR_SHIFT) | KVM_SBI_VERSION_MINOR; break; case SBI_EXT_BASE_GET_IMP_ID: - *out_val = KVM_SBI_IMPID; + edata->out_val = KVM_SBI_IMPID; break; case SBI_EXT_BASE_GET_IMP_VERSION: - *out_val = LINUX_VERSION_CODE; + edata->out_val = LINUX_VERSION_CODE; break; case SBI_EXT_BASE_PROBE_EXT: if ((cp->a0 >= SBI_EXT_EXPERIMENTAL_START && @@ -43,33 +42,33 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, * forward it to the userspace */ kvm_riscv_vcpu_sbi_forward(vcpu, run); - *exit = true; + edata->uexit = true; } else { sbi_ext = kvm_vcpu_sbi_find_ext(cp->a0); if (sbi_ext) { if (sbi_ext->probe) - *out_val = sbi_ext->probe(vcpu); + edata->out_val = sbi_ext->probe(vcpu); else - *out_val = 1; + edata->out_val = 1; } else - *out_val = 0; + edata->out_val = 0; } break; case SBI_EXT_BASE_GET_MVENDORID: - *out_val = vcpu->arch.mvendorid; + edata->out_val = vcpu->arch.mvendorid; break; case SBI_EXT_BASE_GET_MARCHID: - *out_val = vcpu->arch.marchid; + edata->out_val = vcpu->arch.marchid; break; case SBI_EXT_BASE_GET_MIMPID: - *out_val = vcpu->arch.mimpid; + edata->out_val = vcpu->arch.mimpid; break; default: - ret = -EOPNOTSUPP; + edata->err_val = SBI_ERR_NOT_SUPPORTED; break; } - return ret; + return 0; } const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base = { @@ -79,17 +78,16 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base = { }; static int kvm_sbi_ext_forward_handler(struct kvm_vcpu *vcpu, - struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *utrap, - bool *exit) + struct kvm_run *run, + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap) { /* * Both SBI experimental and vendor extensions are * unconditionally forwarded to userspace. 
*/ kvm_riscv_vcpu_sbi_forward(vcpu, run); - *exit = true; + edata->uexit = true; return 0; } diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c index 619ac0f..5fb526c 100644 --- a/arch/riscv/kvm/vcpu_sbi_hsm.c +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c @@ -21,9 +21,9 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu) target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid); if (!target_vcpu) - return -EINVAL; + return SBI_ERR_INVALID_PARAM; if (!target_vcpu->arch.power_off) - return -EALREADY; + return SBI_ERR_ALREADY_AVAILABLE; reset_cntx = &target_vcpu->arch.guest_reset_context; /* start address */ @@ -42,7 +42,7 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu) static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu) { if (vcpu->arch.power_off) - return -EACCES; + return SBI_ERR_FAILURE; kvm_riscv_vcpu_power_off(vcpu); @@ -57,7 +57,7 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu) target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid); if (!target_vcpu) - return -EINVAL; + return SBI_ERR_INVALID_PARAM; if (!target_vcpu->arch.power_off) return SBI_HSM_STATE_STARTED; else if (vcpu->stat.generic.blocking) @@ -67,9 +67,8 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu) } static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *utrap, - bool *exit) + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap) { int ret = 0; struct kvm_cpu_context *cp = &vcpu->arch.guest_context; @@ -88,27 +87,29 @@ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, case SBI_EXT_HSM_HART_STATUS: ret = kvm_sbi_hsm_vcpu_get_status(vcpu); if (ret >= 0) { - *out_val = ret; - ret = 0; + edata->out_val = ret; + edata->err_val = 0; } - break; + return 0; case SBI_EXT_HSM_HART_SUSPEND: switch (cp->a0) { case SBI_HSM_SUSPEND_RET_DEFAULT: kvm_riscv_vcpu_wfi(vcpu); break; case SBI_HSM_SUSPEND_NON_RET_DEFAULT: - ret = -EOPNOTSUPP; + ret = SBI_ERR_NOT_SUPPORTED; break; default: - ret = -EINVAL; + ret = SBI_ERR_INVALID_PARAM; } break; default: - ret = -EOPNOTSUPP; + ret = SBI_ERR_NOT_SUPPORTED; } - return ret; + edata->err_val = ret; + + return 0; } const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm = { diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c index 03a0198..abeb55f 100644 --- a/arch/riscv/kvm/vcpu_sbi_replace.c +++ b/arch/riscv/kvm/vcpu_sbi_replace.c @@ -14,15 +14,16 @@ #include static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *utrap, bool *exit) + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap) { - int ret = 0; struct kvm_cpu_context *cp = &vcpu->arch.guest_context; u64 next_cycle; - if (cp->a6 != SBI_EXT_TIME_SET_TIMER) - return -EINVAL; + if (cp->a6 != SBI_EXT_TIME_SET_TIMER) { + edata->err_val = SBI_ERR_INVALID_PARAM; + return 0; + } #if __riscv_xlen == 32 next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0; @@ -31,7 +32,7 @@ static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, #endif kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle); - return ret; + return 0; } const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time = { @@ -41,8 +42,8 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time = { }; static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *utrap, bool *exit) + struct kvm_vcpu_sbi_ext_data *edata, + 
struct kvm_cpu_trap *utrap) { int ret = 0; unsigned long i; @@ -51,8 +52,10 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, unsigned long hmask = cp->a0; unsigned long hbase = cp->a1; - if (cp->a6 != SBI_EXT_IPI_SEND_IPI) - return -EINVAL; + if (cp->a6 != SBI_EXT_IPI_SEND_IPI) { + edata->err_val = SBI_ERR_INVALID_PARAM; + return 0; + } kvm_for_each_vcpu(i, tmp, vcpu->kvm) { if (hbase != -1UL) { @@ -76,10 +79,9 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi = { }; static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *utrap, bool *exit) + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap) { - int ret = 0; struct kvm_cpu_context *cp = &vcpu->arch.guest_context; unsigned long hmask = cp->a0; unsigned long hbase = cp->a1; @@ -116,10 +118,10 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run */ break; default: - ret = -EOPNOTSUPP; + edata->err_val = SBI_ERR_NOT_SUPPORTED; } - return ret; + return 0; } const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence = { @@ -130,14 +132,13 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence = { static int kvm_sbi_ext_srst_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *utrap, bool *exit) + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap) { struct kvm_cpu_context *cp = &vcpu->arch.guest_context; unsigned long funcid = cp->a6; u32 reason = cp->a1; u32 type = cp->a0; - int ret = 0; switch (funcid) { case SBI_EXT_SRST_RESET: @@ -146,24 +147,24 @@ static int kvm_sbi_ext_srst_handler(struct kvm_vcpu *vcpu, kvm_riscv_vcpu_sbi_system_reset(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN, reason); - *exit = true; + edata->uexit = true; break; case SBI_SRST_RESET_TYPE_COLD_REBOOT: case SBI_SRST_RESET_TYPE_WARM_REBOOT: kvm_riscv_vcpu_sbi_system_reset(vcpu, run, KVM_SYSTEM_EVENT_RESET, reason); - *exit = true; + edata->uexit = true; break; default: - ret = -EOPNOTSUPP; + edata->err_val = SBI_ERR_NOT_SUPPORTED; } break; default: - ret = -EOPNOTSUPP; + edata->err_val = SBI_ERR_NOT_SUPPORTED; } - return ret; + return 0; } const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_srst = { diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c index 489f225..c0ccc58 100644 --- a/arch/riscv/kvm/vcpu_sbi_v01.c +++ b/arch/riscv/kvm/vcpu_sbi_v01.c @@ -14,9 +14,8 @@ #include static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *utrap, - bool *exit) + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap) { ulong hmask; int i, ret = 0; @@ -33,7 +32,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, * handled in kernel so we forward these to user-space */ kvm_riscv_vcpu_sbi_forward(vcpu, run); - *exit = true; + edata->uexit = true; break; case SBI_EXT_0_1_SET_TIMER: #if __riscv_xlen == 32 @@ -65,7 +64,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, case SBI_EXT_0_1_SHUTDOWN: kvm_riscv_vcpu_sbi_system_reset(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN, 0); - *exit = true; + edata->uexit = true; break; case SBI_EXT_0_1_REMOTE_FENCE_I: case SBI_EXT_0_1_REMOTE_SFENCE_VMA: @@ -103,7 +102,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, } break; default: - ret = -EINVAL; + edata->err_val = SBI_ERR_NOT_SUPPORTED; break; } From patchwork Fri Jan 27 18:25:51 2023 
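Note: the hunks above convert every SBI extension handler from returning a Linux error code (and reporting results through *out_val and *exit) to filling a per-call kvm_vcpu_sbi_ext_data object and returning 0, so SBI-level error codes such as SBI_ERR_INVALID_PARAM reach the guest instead of being folded into host errnos. The structure itself is introduced earlier in the series and is not visible in this excerpt; a minimal sketch, inferred only from the three fields these hunks touch, would be:

struct kvm_vcpu_sbi_ext_data {
	unsigned long out_val;	/* value returned to the guest (SBI return value, a1) */
	unsigned long err_val;	/* SBI error code returned to the guest (a0), 0 on success */
	bool uexit;		/* set when the call must be completed in user space */
};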
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Patra X-Patchwork-Id: 49586 From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra, Andrew Jones, Anup Patel, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon Subject: [PATCH v3 07/14] RISC-V: KVM: Add skeleton support for perf Date: Fri, 27 Jan 2023
10:25:51 -0800 Message-Id: <20230127182558.2416400-8-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com> References: <20230127182558.2416400-1-atishp@rivosinc.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756201366244289614?= X-GMAIL-MSGID: =?utf-8?q?1756201366244289614?= This patch only adds barebore structure of perf implementation. Most of the function returns zero at this point and will be implemented fully in the future. Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_host.h | 3 + arch/riscv/include/asm/kvm_vcpu_pmu.h | 76 ++++++++++++++ arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/vcpu.c | 5 + arch/riscv/kvm/vcpu_pmu.c | 145 ++++++++++++++++++++++++++ 5 files changed, 230 insertions(+) create mode 100644 arch/riscv/include/asm/kvm_vcpu_pmu.h create mode 100644 arch/riscv/kvm/vcpu_pmu.c diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 93f43a3..f9874b4 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -18,6 +18,7 @@ #include #include #include +#include #define KVM_MAX_VCPUS 1024 @@ -228,6 +229,8 @@ struct kvm_vcpu_arch { /* Don't run the VCPU (blocked) */ bool pause; + + struct kvm_pmu pmu; }; static inline void kvm_arch_hardware_unsetup(void) {} diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h new file mode 100644 index 0000000..3f43a43 --- /dev/null +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2023 Rivos Inc + * + * Authors: + * Atish Patra + */ + +#ifndef __KVM_VCPU_RISCV_PMU_H +#define __KVM_VCPU_RISCV_PMU_H + +#include +#include +#include + +#ifdef CONFIG_RISCV_PMU_SBI +#define RISCV_KVM_MAX_FW_CTRS 32 +#define RISCV_MAX_COUNTERS 64 + +/* Per virtual pmu counter data */ +struct kvm_pmc { + u8 idx; + struct perf_event *perf_event; + uint64_t counter_val; + union sbi_pmu_ctr_info cinfo; + /* Event monitoring status */ + bool started; +}; + +/* PMU data structure per vcpu */ +struct kvm_pmu { + struct kvm_pmc pmc[RISCV_MAX_COUNTERS]; + /* Number of the virtual firmware counters available */ + int num_fw_ctrs; + /* Number of the virtual hardware counters available */ + int num_hw_ctrs; + /* A flag to indicate that pmu initialization is done */ + bool init_done; + /* Bit map of all the virtual counter used */ + DECLARE_BITMAP(pmc_in_use, RISCV_MAX_COUNTERS); +}; + +#define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu) +#define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu)) + +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, + struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, uint64_t ival, + struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, + struct kvm_vcpu_sbi_ext_data *edata); 
+int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, + unsigned long eidx, uint64_t evtdata, + struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, + struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); + +#else +struct kvm_pmu { +}; + +static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) +{ + return 0; +} +static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {} +static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {} +#endif /* CONFIG_RISCV_PMU_SBI */ +#endif /* !__KVM_VCPU_RISCV_PMU_H */ diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 019df920..5de1053 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o kvm-y += vcpu_sbi_replace.o kvm-y += vcpu_sbi_hsm.o kvm-y += vcpu_timer.o +kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 7c08567..b746f21 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -137,6 +137,7 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) WRITE_ONCE(vcpu->arch.irqs_pending, 0); WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); + kvm_riscv_vcpu_pmu_reset(vcpu); vcpu->arch.hfence_head = 0; vcpu->arch.hfence_tail = 0; @@ -194,6 +195,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* Setup VCPU timer */ kvm_riscv_vcpu_timer_init(vcpu); + /* setup performance monitoring */ + kvm_riscv_vcpu_pmu_init(vcpu); + /* Reset VCPU */ kvm_riscv_reset_vcpu(vcpu); @@ -216,6 +220,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) /* Cleanup VCPU timer */ kvm_riscv_vcpu_timer_deinit(vcpu); + kvm_riscv_vcpu_pmu_deinit(vcpu); /* Free unused pages pre-allocated for G-stage page table mappings */ kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); } diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c new file mode 100644 index 0000000..d3fd551 --- /dev/null +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -0,0 +1,145 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 Rivos Inc + * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) + +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + + edata->out_val = kvm_pmu_num_counters(kvpmu); + + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, + struct kvm_vcpu_sbi_ext_data *edata) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + + if (cidx > RISCV_MAX_COUNTERS || cidx == 1) { + edata->err_val = SBI_ERR_INVALID_PARAM; + return 0; + } + + edata->out_val = kvpmu->pmc[cidx].cinfo.value; + + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, uint64_t ival, + struct kvm_vcpu_sbi_ext_data *edata) +{ + /* TODO */ + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, + struct kvm_vcpu_sbi_ext_data *edata) +{ + /* TODO */ + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long 
ctr_base, + unsigned long ctr_mask, unsigned long flag, + unsigned long eidx, uint64_t evtdata, + struct kvm_vcpu_sbi_ext_data *edata) +{ + /* TODO */ + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, + struct kvm_vcpu_sbi_ext_data *edata) +{ + /* TODO */ + return 0; +} + +int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) +{ + int i = 0, num_fw_ctrs, ret, num_hw_ctrs = 0, hpm_width = 0; + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + struct kvm_pmc *pmc; + + ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs); + if (ret < 0) + return ret; + + if (!hpm_width || !num_hw_ctrs) { + pr_err("Cannot initialize VCPU with NULL hpmcounter width or number of counters\n"); + return -EINVAL; + } + + if ((num_hw_ctrs + RISCV_KVM_MAX_FW_CTRS) > RISCV_MAX_COUNTERS) { + pr_warn("Limiting fw counters as hw & fw counters exceed maximum counters\n"); + num_fw_ctrs = RISCV_MAX_COUNTERS - num_hw_ctrs; + } else + num_fw_ctrs = RISCV_KVM_MAX_FW_CTRS; + + kvpmu->num_hw_ctrs = num_hw_ctrs; + kvpmu->num_fw_ctrs = num_fw_ctrs; + + /* + * There is no correlation between the logical hardware counter and virtual counters. + * However, we need to encode a hpmcounter CSR in the counter info field so that + * KVM can trap n emulate the read. This works well in the migration use case as + * KVM doesn't care if the actual hpmcounter is available in the hardware or not. + */ + for (i = 0; i < kvm_pmu_num_counters(kvpmu); i++) { + /* TIME CSR shouldn't be read from perf interface */ + if (i == 1) + continue; + pmc = &kvpmu->pmc[i]; + pmc->idx = i; + if (i < kvpmu->num_hw_ctrs) { + kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_HW; + if (i < 3) + /* CY, IR counters */ + kvpmu->pmc[i].cinfo.width = 63; + else + kvpmu->pmc[i].cinfo.width = hpm_width; + /* + * The CSR number doesn't have any relation with the logical + * hardware counters. The CSR numbers are encoded sequentially + * to avoid maintaining a map between the virtual counter + * and CSR number. 
+ */ + pmc->cinfo.csr = CSR_CYCLE + i; + } else { + pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW; + pmc->cinfo.width = BITS_PER_LONG - 1; + } + } + + kvpmu->init_done = true; + + return 0; +} + +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) +{ + /* TODO */ +} + +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) +{ + kvm_riscv_vcpu_pmu_deinit(vcpu); +} From patchwork Fri Jan 27 18:25:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Patra X-Patchwork-Id: 49578
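Note on the vcpu_pmu.c skeleton in the previous patch: kvm_riscv_vcpu_pmu_init() lays the virtual counters out to mirror the counter CSR numbering, so index 0 is cycle, index 1 is time (skipped, since TIME must never be read through perf), index 2 is instret, indices 3 and up are the hpmcounters, and the purely virtual firmware counters follow the hardware ones. A counter's info block therefore encodes its CSR simply as CSR_CYCLE plus its index. A small sketch of that mapping, assuming the standard RISC-V counter CSR layout (0xC00 cycle, 0xC01 time, 0xC02 instret, 0xC03 hpmcounter3, ...):

/* e.g. idx 3 -> CSR_CYCLE + 3 == CSR_HPMCOUNTER3 (0xC03) */
static inline unsigned int pmc_idx_to_csr(unsigned int idx)
{
	return CSR_CYCLE + idx;
}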
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra, Andrew Jones, Anup Patel, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon Subject: [PATCH v3 08/14] RISC-V: KVM: Add SBI PMU extension support Date: Fri, 27 Jan 2023
10:25:52 -0800 Message-Id: <20230127182558.2416400-9-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com> References: <20230127182558.2416400-1-atishp@rivosinc.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756201318236578609?= X-GMAIL-MSGID: =?utf-8?q?1756201318236578609?= SBI PMU extension allows KVM guests to configure/start/stop/query about the PMU counters in virtualized enviornment as well. In order to allow that, KVM implements the entire SBI PMU extension. Signed-off-by: Atish Patra Reviewed-by: Anup Patel --- arch/riscv/kvm/Makefile | 2 +- arch/riscv/kvm/vcpu_sbi.c | 11 +++++ arch/riscv/kvm/vcpu_sbi_pmu.c | 86 +++++++++++++++++++++++++++++++++++ 3 files changed, 98 insertions(+), 1 deletion(-) create mode 100644 arch/riscv/kvm/vcpu_sbi_pmu.c diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 5de1053..278e97c 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -25,4 +25,4 @@ kvm-y += vcpu_sbi_base.o kvm-y += vcpu_sbi_replace.o kvm-y += vcpu_sbi_hsm.o kvm-y += vcpu_timer.o -kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o +kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index aa42da6..04a3b4b 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -20,6 +20,16 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { }; #endif +#ifdef CONFIG_RISCV_PMU_SBI +extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu; +#else +static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = { + .extid_start = -1UL, + .extid_end = -1UL, + .handler = NULL, +}; +#endif + static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { &vcpu_sbi_ext_v01, &vcpu_sbi_ext_base, @@ -28,6 +38,7 @@ static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { &vcpu_sbi_ext_rfence, &vcpu_sbi_ext_srst, &vcpu_sbi_ext_hsm, + &vcpu_sbi_ext_pmu, &vcpu_sbi_ext_experimental, &vcpu_sbi_ext_vendor, }; diff --git a/arch/riscv/kvm/vcpu_sbi_pmu.c b/arch/riscv/kvm/vcpu_sbi_pmu.c new file mode 100644 index 0000000..73aab30 --- /dev/null +++ b/arch/riscv/kvm/vcpu_sbi_pmu.c @@ -0,0 +1,86 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 Rivos Inc + * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include + +static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap) +{ + int ret = 0; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + unsigned long funcid = cp->a6; + uint64_t temp; + + /* Return not supported if PMU is not initialized */ + if (!kvpmu->init_done) + return -EINVAL; + + switch (funcid) { + case SBI_EXT_PMU_NUM_COUNTERS: + ret = kvm_riscv_vcpu_pmu_num_ctrs(vcpu, edata); + break; + case SBI_EXT_PMU_COUNTER_GET_INFO: + ret = kvm_riscv_vcpu_pmu_ctr_info(vcpu, cp->a0, edata); + break; + case SBI_EXT_PMU_COUNTER_CFG_MATCH: +#if defined(CONFIG_32BIT) + temp = ((uint64_t)cp->a5 << 32) | cp->a4; +#else + temp = cp->a4; +#endif + /* + * This can fail if perf 
core framework fails to create an event. + * Forward the error to the user space because it is an error that + * happened within the host kernel. The other option would be to convert + * this to an SBI error and forward it to the guest. + */ + ret = kvm_riscv_vcpu_pmu_ctr_cfg_match(vcpu, cp->a0, cp->a1, + cp->a2, cp->a3, temp, edata); + break; + case SBI_EXT_PMU_COUNTER_START: +#if defined(CONFIG_32BIT) + temp = ((uint64_t)cp->a4 << 32) | cp->a3; +#else + temp = cp->a3; +#endif + ret = kvm_riscv_vcpu_pmu_ctr_start(vcpu, cp->a0, cp->a1, cp->a2, + temp, edata); + break; + case SBI_EXT_PMU_COUNTER_STOP: + ret = kvm_riscv_vcpu_pmu_ctr_stop(vcpu, cp->a0, cp->a1, cp->a2, edata); + break; + case SBI_EXT_PMU_COUNTER_FW_READ: + ret = kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, edata); + break; + default: + edata->err_val = SBI_ERR_NOT_SUPPORTED; + } + + return ret; +} + +unsigned long kvm_sbi_ext_pmu_probe(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + + return kvpmu->init_done; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = { + .extid_start = SBI_EXT_PMU, + .extid_end = SBI_EXT_PMU, + .handler = kvm_sbi_ext_pmu_handler, + .probe = kvm_sbi_ext_pmu_probe, +}; From patchwork Fri Jan 27 18:25:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Patra X-Patchwork-Id: 49582
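Note on the SBI PMU handler in the previous patch: the guest issues these calls with the standard SBI calling convention, i.e. the extension ID (SBI_EXT_PMU) in a7, the function ID in a6 (decoded from cp->a6 above), and arguments in a0-a5; the error code travels back in a0 and the return value in a1, which is what edata->err_val and edata->out_val ultimately map onto. On RV32, 64-bit arguments such as the initial counter value are split across two registers and reassembled by the handler. A hedged guest-side sketch of such a call (not part of this series; the wrapper and its names are illustrative only):

struct sbiret { long error; long value; };

static struct sbiret sbi_pmu_ecall(unsigned long fid, unsigned long arg0,
				   unsigned long arg1, unsigned long arg2)
{
	register unsigned long a0 asm("a0") = arg0;
	register unsigned long a1 asm("a1") = arg1;
	register unsigned long a2 asm("a2") = arg2;
	register unsigned long a6 asm("a6") = fid;	/* e.g. SBI_EXT_PMU_NUM_COUNTERS */
	register unsigned long a7 asm("a7") = 0x504D55;	/* SBI_EXT_PMU ("PMU") */

	asm volatile("ecall"
		     : "+r"(a0), "+r"(a1)
		     : "r"(a2), "r"(a6), "r"(a7)
		     : "memory");

	return (struct sbiret){ .error = (long)a0, .value = (long)a1 };
}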
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra, Andrew Jones, Anup Patel, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon Subject: [PATCH v3 09/14] RISC-V: KVM: Make PMU functionality depend on Sscofpmf Date: Fri, 27 Jan
2023 10:25:53 -0800 Message-Id: <20230127182558.2416400-10-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com> References: <20230127182558.2416400-1-atishp@rivosinc.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756201351227447643?= X-GMAIL-MSGID: =?utf-8?q?1756201351227447643?= The privilege mode filtering feature must be available in the host so that the host can inhibit the counters while the execution is in HS mode. Otherwise, the guests may have access to critical guest information. Signed-off-by: Atish Patra Reviewed-by: Anup Patel --- arch/riscv/kvm/vcpu_pmu.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index d3fd551..7713927 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -79,6 +79,14 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); struct kvm_pmc *pmc; + /* + * PMU functionality should be only available to guests if privilege mode + * filtering is available in the host. Otherwise, guest will always count + * events while the execution is in hypervisor mode. + */ + if (!riscv_isa_extension_available(NULL, SSCOFPMF)) + return 0; + ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs); if (ret < 0) return ret; From patchwork Fri Jan 27 18:25:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Patra X-Patchwork-Id: 49579 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp980514wrn; Fri, 27 Jan 2023 10:28:20 -0800 (PST) X-Google-Smtp-Source: AK7set8mnfBaiopE+QNkdjCzI2KVz/I0QMDz//tS0DYecUnLdjkAw9l2fWBngUkaPBgC0B9/l8xE X-Received: by 2002:a17:90b:4a4b:b0:22c:2f5e:33df with SMTP id lb11-20020a17090b4a4b00b0022c2f5e33dfmr6605947pjb.22.1674844100511; Fri, 27 Jan 2023 10:28:20 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674844100; cv=none; d=google.com; s=arc-20160816; b=Ggs4C84kZ1ujnFhKi9cBio0fBlXNMKWYNk3ttfmtG+phUVWUEt1w8TEwLKV+BIycsv l5GGM43SbaVVvdAP5VKXsu0YQfddLAYrx7OrcE2S2u7pNARt4ZNffI+cDRHvDU33uKPF 76DH4GGe4ZUBwog3h5q8BTxUCNCLbWtJaLM5nqxsJR2OmAI91Gf7R0jqn2NTkvskwMcU WrMYGqX5kkdMfV209WvDqq1/226X4GevZ5Fs5DgvG1WUt0bdvbRs+Vms1XXS8gUVyvuF 4lu7EMZweqY9Jucjpz2XYGIIapNRPALyDXKrrdOp62Qe2IUJyZh2eLquBSHW5S50+ZXL ClIA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=Zm221noYPDv16VavlZywICLYGigsQ3r83tHpYQ6Bets=; b=fM4I5aKo4/gKxwTEHRYZKLVL7yZP1PKqzYfNps02BGRihIJHAcKU20WESufnFrUNJu fsdP4T6kOj+bZE2l7vD0jMhTlS+Fq0oyrR7I6nIbUSA4u+w35FaNBevZ240tBDeQiPD1 YqPrsohHHz3sU1Lv6En3/W6cF5JKoh5xklQTJcnSsDhCcjKjDIFQssPmGG/vJCc7Zu+/ UFRMYKrJT+uVMAahfTWDF5n3dKJnck/OXj3WMKiHBaFym/1A30uJZnagcNG7tg3qnzuH 6RwKnu+VwGUPDksT70yCaxctLQm14Bd/Q0Gi1kSqLz19LjFmOEPmdOwS/4NkhK8UU6gy Qmuw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@rivosinc-com.20210112.gappssmtp.com header.s=20210112 header.b=ZcBhDH5t; spf=pass 
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra, Andrew Jones, Anup Patel, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon Subject: [PATCH v3 10/14] RISC-V: KVM: Disable all hpmcounter access for VS/VU mode Date: Fri, 27 Jan 2023 10:25:54 -0800 Message-Id: <20230127182558.2416400-11-atishp@rivosinc.com> In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com> References: <20230127182558.2416400-1-atishp@rivosinc.com> MIME-Version: 1.0 A guest must not get access to any hpmcounter, including cycle/instret, without explicit checks. We achieve that by disabling all bits except the TM bit in hcounteren. However, instret and cycle access for guest user space can be enabled later upon explicit request (via ONE REG) or on the first trap from VU mode, to maintain the ABI requirement in the future. This patch doesn't support that, as the ONE REG interface is not settled yet. Reviewed-by: Andrew Jones Signed-off-by: Atish Patra Reviewed-by: Anup Patel --- arch/riscv/kvm/main.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index 58c5489..c5d400f 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -49,7 +49,8 @@ int kvm_arch_hardware_enable(void) hideleg |= (1UL << IRQ_VS_EXT); csr_write(CSR_HIDELEG, hideleg); - csr_write(CSR_HCOUNTEREN, -1UL); + /* VS should access only the time counter directly.
Everything else should trap */ + csr_write(CSR_HCOUNTEREN, 0x02); csr_write(CSR_HVIP, 0); From patchwork Fri Jan 27 18:25:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Patra X-Patchwork-Id: 49580
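Note on the hcounteren write in the previous patch: in the RISC-V privileged architecture, hcounteren bit 0 is CY (cycle), bit 1 is TM (time), bit 2 is IR (instret), and bits 3-31 gate hpmcounter3-hpmcounter31. Writing 0x02 therefore leaves only TM set, so VS/VU-mode reads of time still go straight to hardware while every other counter read traps into KVM for emulation. A slightly more self-documenting way to express the same write, assuming a hypothetical HCOUNTEREN_TM helper macro (not defined by this series):

#define HCOUNTEREN_TM	(1UL << 1)	/* hypothetical convenience macro */

	csr_write(CSR_HCOUNTEREN, HCOUNTEREN_TM);	/* same effect as the literal 0x02 */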
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra, Andrew Jones, Anup Patel, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon Subject: [PATCH v3 11/14] RISC-V: KVM: Implement trap & emulate for hpmcounters Date: Fri, 27 Jan
2023 10:25:55 -0800 Message-Id: <20230127182558.2416400-12-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com> References: <20230127182558.2416400-1-atishp@rivosinc.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1756201329583601121?= X-GMAIL-MSGID: =?utf-8?q?1756201329583601121?= As the KVM guests only see the virtual PMU counters, all hpmcounter access should trap and KVM emulates the read access on behalf of guests. Reviewed-by: Andrew Jones Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_vcpu_pmu.h | 16 ++++++++++ arch/riscv/kvm/vcpu_insn.c | 4 ++- arch/riscv/kvm/vcpu_pmu.c | 45 ++++++++++++++++++++++++++- 3 files changed, 63 insertions(+), 2 deletions(-) diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h index 3f43a43..022d45d 100644 --- a/arch/riscv/include/asm/kvm_vcpu_pmu.h +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h @@ -43,6 +43,19 @@ struct kvm_pmu { #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu) #define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu)) +#if defined(CONFIG_32BIT) +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \ +{ .base = CSR_CYCLEH, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, \ +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, +#else +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \ +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, +#endif + +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, + unsigned long *val, unsigned long new_val, + unsigned long wr_mask); + int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata); int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, struct kvm_vcpu_sbi_ext_data *edata); @@ -65,6 +78,9 @@ void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); #else struct kvm_pmu { }; +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \ +{ .base = 0, .count = 0, .func = NULL }, + static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) { diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 0bb5276..f689337 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -213,7 +213,9 @@ struct csr_func { unsigned long wr_mask); }; -static const struct csr_func csr_funcs[] = { }; +static const struct csr_func csr_funcs[] = { + KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS +}; /** * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index 7713927..894053a 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -17,6 +17,44 @@ #define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) +static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, + unsigned long *out_val) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + struct kvm_pmc *pmc; + u64 enabled, running; + + pmc = &kvpmu->pmc[cidx]; + if (!pmc->perf_event) + return -EINVAL; + + pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running); + *out_val = pmc->counter_val; + + return 0; 
+} + +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, + unsigned long *val, unsigned long new_val, + unsigned long wr_mask) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC; + + if (!kvpmu || !kvpmu->init_done) + return KVM_INSN_EXIT_TO_USER_SPACE; + + if (wr_mask) + return KVM_INSN_ILLEGAL_TRAP; + + cidx = csr_num - CSR_CYCLE; + + if (pmu_ctr_read(vcpu, cidx, val) < 0) + return KVM_INSN_EXIT_TO_USER_SPACE; + + return ret; +} + int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata) { struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); @@ -69,7 +107,12 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, struct kvm_vcpu_sbi_ext_data *edata) { - /* TODO */ + int ret; + + ret = pmu_ctr_read(vcpu, cidx, &edata->out_val); + if (ret == -EINVAL) + edata->err_val = SBI_ERR_INVALID_PARAM; + return 0; } From patchwork Fri Jan 27 18:25:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Patra X-Patchwork-Id: 49581 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp980535wrn; Fri, 27 Jan 2023 10:28:24 -0800 (PST) X-Google-Smtp-Source: AMrXdXuiGuCpVeOYIgy/48k0U9KuiiVPJHAtyCOspalOgW3+Xa5dra1lrSNrGUeE5qnqdWJ9uLtw X-Received: by 2002:a05:6a20:1929:b0:b6:330a:dfd with SMTP id bv41-20020a056a20192900b000b6330a0dfdmr39163126pzb.48.1674844103719; Fri, 27 Jan 2023 10:28:23 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674844103; cv=none; d=google.com; s=arc-20160816; b=PDqx53sHp1da/C46q3oFwz1nFGMT4h2MGV/nx+1IsEw7Vn9q+JwKv6TWSydFc/gMii khBjkAm7VYdYxWCIdfDXwN0+Rz4/QeIjJLBDIGJFmgReDIuq7ps0bBqB7ia/X5efPOc2 g+J+ppSzIou5sTyad02eAOIoOP3n0gUKdimseHC0/xBmqIdQ4pXwa4nbrATFLH2NFpH0 kqRFsED56N11uBif83GEA98Wi1w8U0wU5U75bV67Ax8QZkiWa7XINxrpd2PGBQUDsbLX 6FMpcAHGzRsV13q1gkN0XqOPGRNrXI2n5rWQZeReCro2I6YJd9qFaoH+ELDJR3H/93c3 OAgA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=oqQob0qYcr3fLE6LsIPmUL4RmTJwqSeWmXNp79edZW8=; b=jrV1pRQEsFaOiwr42XYbpaYLq8scFxyKetGNWPqNkVaMuk8fjK+s6nZvBXdXFuFE+5 9tI59WIm9ZsjOe7iHMi6cOPvn5OK60DqP3zn0CmMiCoDbvRq2xTrtbPFqW2YvBnsAbJ+ 7NxueB3oxXvGFbsemnDSuWyxNCPd91XEMexlLImC9nZHAPky/e5SyfuunMK+0ayrVru+ gEMraAFNxtJThMlfY6YROwPQRxq/iDcm13zIvfX2NwvSz+fqySl8OAFUIip29QTo6HR+ msQM6i3o1yVzLUYbdgtqXecSLXmp/8fov90W5Vp7MnN06DgE4DNOopJL42sHZONMv0PB O7wA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@rivosinc-com.20210112.gappssmtp.com header.s=20210112 header.b=y66C9pub; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
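Note on the trap-and-emulate path in the previous patch: with hcounteren restricted to TM, a VS/VU-mode read of cycle, instret or any hpmcounter traps, and the csr_funcs entries added above (base CSR_CYCLE with a count of 31, plus the CSR_CYCLEH range on 32-bit builds) route the trapped CSR number to kvm_riscv_vcpu_pmu_read_hpm(). The handler recovers the virtual counter index as csr_num - CSR_CYCLE, hands back that counter's value, and returns KVM_INSN_CONTINUE_NEXT_SEPC so the guest resumes after the read. A tiny sketch of the arithmetic, assuming the standard CSR numbering:

/* A guest read of hpmcounter3 traps with csr_num == CSR_HPMCOUNTER3 (0xC03) */
unsigned int cidx = CSR_HPMCOUNTER3 - CSR_CYCLE;	/* -> 3, index into kvpmu->pmc[] */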
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra, Andrew Jones, Anup Patel, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon Subject: [PATCH v3 12/14] RISC-V: KVM: Implement perf support without sampling Date: Fri, 27 Jan
Date: Fri, 27 Jan 2023 10:25:56 -0800
Message-Id: <20230127182558.2416400-13-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

The RISC-V SBI PMU and Sscofpmf ISA extensions allow supporting perf in the virtualization environment as well. The KVM implementation relies on the SBI PMU extension for the most part, while trapping and emulating the CSR reads for counter access. This patch doesn't have event sampling support yet.

Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
---
 arch/riscv/kvm/vcpu_pmu.c | 366 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 360 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index 894053a..73dccf7 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -12,10 +12,190 @@ #include #include #include +#include #include #include #define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) +#define get_event_type(x) (((x) & SBI_PMU_EVENT_IDX_TYPE_MASK) >> 16) +#define get_event_code(x) ((x) & SBI_PMU_EVENT_IDX_CODE_MASK) + + +static enum perf_hw_id hw_event_perf_map[SBI_PMU_HW_GENERAL_MAX] = { + [SBI_PMU_HW_CPU_CYCLES] = PERF_COUNT_HW_CPU_CYCLES, + [SBI_PMU_HW_INSTRUCTIONS] = PERF_COUNT_HW_INSTRUCTIONS, + [SBI_PMU_HW_CACHE_REFERENCES] = PERF_COUNT_HW_CACHE_REFERENCES, + [SBI_PMU_HW_CACHE_MISSES] = PERF_COUNT_HW_CACHE_MISSES, + [SBI_PMU_HW_BRANCH_INSTRUCTIONS] = PERF_COUNT_HW_BRANCH_INSTRUCTIONS, + [SBI_PMU_HW_BRANCH_MISSES] = PERF_COUNT_HW_BRANCH_MISSES, + [SBI_PMU_HW_BUS_CYCLES] = PERF_COUNT_HW_BUS_CYCLES, + [SBI_PMU_HW_STALLED_CYCLES_FRONTEND] = PERF_COUNT_HW_STALLED_CYCLES_FRONTEND, + [SBI_PMU_HW_STALLED_CYCLES_BACKEND] = PERF_COUNT_HW_STALLED_CYCLES_BACKEND, + [SBI_PMU_HW_REF_CPU_CYCLES] = PERF_COUNT_HW_REF_CPU_CYCLES, +}; + +static u64 kvm_pmu_get_sample_period(struct kvm_pmc *pmc) +{ + u64 counter_val_mask = GENMASK(pmc->cinfo.width, 0); + u64 sample_period; + + if (!pmc->counter_val) + sample_period = counter_val_mask + 1; + else + sample_period = (-pmc->counter_val) & counter_val_mask; + + return sample_period; +} + +static u32 kvm_pmu_get_perf_event_type(unsigned long eidx) +{ + enum sbi_pmu_event_type etype = get_event_type(eidx); + u32 type = PERF_TYPE_MAX; + + switch (etype) { + case SBI_PMU_EVENT_TYPE_HW: + type = PERF_TYPE_HARDWARE; + break; + case SBI_PMU_EVENT_TYPE_CACHE: + type = PERF_TYPE_HW_CACHE; + break; + case SBI_PMU_EVENT_TYPE_RAW: + case SBI_PMU_EVENT_TYPE_FW: + type = PERF_TYPE_RAW; + break; + default: + break; + } + + return type; +} + +static bool kvm_pmu_is_fw_event(unsigned long eidx) +{ + return get_event_type(eidx) == SBI_PMU_EVENT_TYPE_FW; +} + +static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc) +{ + if (pmc->perf_event) { + perf_event_disable(pmc->perf_event); + perf_event_release_kernel(pmc->perf_event); + pmc->perf_event = NULL; + } +} + +static u64 kvm_pmu_get_perf_event_hw_config(u32 sbi_event_code) +{ + return hw_event_perf_map[sbi_event_code]; +} + +static
u64 kvm_pmu_get_perf_event_cache_config(u32 sbi_event_code) +{ + u64 config = U64_MAX; + unsigned int cache_type, cache_op, cache_result; + + /* All the cache event masks lie within 0xFF. No separate masking is necessary */ + cache_type = (sbi_event_code & SBI_PMU_EVENT_CACHE_ID_CODE_MASK) >> + SBI_PMU_EVENT_CACHE_ID_SHIFT; + cache_op = (sbi_event_code & SBI_PMU_EVENT_CACHE_OP_ID_CODE_MASK) >> + SBI_PMU_EVENT_CACHE_OP_SHIFT; + cache_result = sbi_event_code & SBI_PMU_EVENT_CACHE_RESULT_ID_CODE_MASK; + + if (cache_type >= PERF_COUNT_HW_CACHE_MAX || + cache_op >= PERF_COUNT_HW_CACHE_OP_MAX || + cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX) + return config; + + config = cache_type | (cache_op << 8) | (cache_result << 16); + + return config; +} + +static u64 kvm_pmu_get_perf_event_config(unsigned long eidx, uint64_t evt_data) +{ + enum sbi_pmu_event_type etype = get_event_type(eidx); + u32 ecode = get_event_code(eidx); + u64 config = U64_MAX; + + switch (etype) { + case SBI_PMU_EVENT_TYPE_HW: + if (ecode < SBI_PMU_HW_GENERAL_MAX) + config = kvm_pmu_get_perf_event_hw_config(ecode); + break; + case SBI_PMU_EVENT_TYPE_CACHE: + config = kvm_pmu_get_perf_event_cache_config(ecode); + break; + case SBI_PMU_EVENT_TYPE_RAW: + config = evt_data & RISCV_PMU_RAW_EVENT_MASK; + break; + case SBI_PMU_EVENT_TYPE_FW: + if (ecode < SBI_PMU_FW_MAX) + config = (1ULL << 63) | ecode; + break; + default: + break; + } + + return config; +} + +static int kvm_pmu_get_fixed_pmc_index(unsigned long eidx) +{ + u32 etype = kvm_pmu_get_perf_event_type(eidx); + u32 ecode = get_event_code(eidx); + + if (etype != SBI_PMU_EVENT_TYPE_HW) + return -EINVAL; + + if (ecode == SBI_PMU_HW_CPU_CYCLES) + return 0; + else if (ecode == SBI_PMU_HW_INSTRUCTIONS) + return 2; + else + return -EINVAL; +} + +static int kvm_pmu_get_programmable_pmc_index(struct kvm_pmu *kvpmu, unsigned long eidx, + unsigned long cbase, unsigned long cmask) +{ + int ctr_idx = -1; + int i, pmc_idx; + int min, max; + + if (kvm_pmu_is_fw_event(eidx)) { + /* Firmware counters are mapped 1:1 starting from num_hw_ctrs for simplicity */ + min = kvpmu->num_hw_ctrs; + max = min + kvpmu->num_fw_ctrs; + } else { + /* First 3 counters are reserved for fixed counters */ + min = 3; + max = kvpmu->num_hw_ctrs; + } + + for_each_set_bit(i, &cmask, BITS_PER_LONG) { + pmc_idx = i + cbase; + if ((pmc_idx >= min && pmc_idx < max) && + !test_bit(pmc_idx, kvpmu->pmc_in_use)) { + ctr_idx = pmc_idx; + break; + } + } + + return ctr_idx; +} + +static int pmu_get_pmc_index(struct kvm_pmu *pmu, unsigned long eidx, + unsigned long cbase, unsigned long cmask) +{ + int ret; + + /* Fixed counters need to have a fixed mapping as they have different widths */ + ret = kvm_pmu_get_fixed_pmc_index(eidx); + if (ret >= 0) + return ret; + + return kvm_pmu_get_programmable_pmc_index(pmu, eidx, cbase, cmask); +} static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, unsigned long *out_val) @@ -34,6 +214,16 @@ static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, return 0; } +static int kvm_pmu_validate_counter_mask(struct kvm_pmu *kvpmu, unsigned long ctr_base, + unsigned long ctr_mask) +{ + /* Make sure that we have a valid counter mask requested from the caller */ + if (!ctr_mask || (ctr_base + __fls(ctr_mask) >= kvm_pmu_num_counters(kvpmu))) + return -EINVAL; + + return 0; +} + int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, unsigned long *val, unsigned long new_val, unsigned long wr_mask) @@ -83,7 +273,39 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct
kvm_vcpu *vcpu, unsigned long ctr_base, unsigned long ctr_mask, unsigned long flag, uint64_t ival, struct kvm_vcpu_sbi_ext_data *edata) { - /* TODO */ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + int i, pmc_index, sbiret = 0; + struct kvm_pmc *pmc; + + if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + /* Start the counters that have been configured and requested by the guest */ + for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) { + pmc_index = i + ctr_base; + if (!test_bit(pmc_index, kvpmu->pmc_in_use)) + continue; + pmc = &kvpmu->pmc[pmc_index]; + if (flag & SBI_PMU_START_FLAG_SET_INIT_VALUE) + pmc->counter_val = ival; + if (pmc->perf_event) { + if (unlikely(pmc->started)) { + sbiret = SBI_ERR_ALREADY_STARTED; + continue; + } + perf_event_period(pmc->perf_event, kvm_pmu_get_sample_period(pmc)); + perf_event_enable(pmc->perf_event); + pmc->started = true; + } else { + sbiret = SBI_ERR_INVALID_PARAM; + } + } + +out: + edata->err_val = sbiret; + return 0; } @@ -91,7 +313,45 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, unsigned long ctr_mask, unsigned long flag, struct kvm_vcpu_sbi_ext_data *edata) { - /* TODO */ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + int i, pmc_index, sbiret = 0; + u64 enabled, running; + struct kvm_pmc *pmc; + + if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + /* Stop the counters that have been configured and requested by the guest */ + for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) { + pmc_index = i + ctr_base; + if (!test_bit(pmc_index, kvpmu->pmc_in_use)) + continue; + pmc = &kvpmu->pmc[pmc_index]; + if (pmc->perf_event) { + if (pmc->started) { + /* Stop counting the counter */ + perf_event_disable(pmc->perf_event); + pmc->started = false; + } else + sbiret = SBI_ERR_ALREADY_STOPPED; + + if (flag & SBI_PMU_STOP_FLAG_RESET) { + /* Release the counter if this is a reset request */ + pmc->counter_val += perf_event_read_value(pmc->perf_event, + &enabled, &running); + kvm_pmu_release_perf_event(pmc); + clear_bit(pmc_index, kvpmu->pmc_in_use); + } + } else { + sbiret = SBI_ERR_INVALID_PARAM; + } + } + +out: + edata->err_val = sbiret; + return 0; } @@ -100,7 +360,89 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba unsigned long eidx, uint64_t evtdata, struct kvm_vcpu_sbi_ext_data *edata) { - /* TODO */ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + struct perf_event *event; + int ctr_idx; + u32 etype = kvm_pmu_get_perf_event_type(eidx); + u64 config; + struct kvm_pmc *pmc; + int sbiret = 0; + struct perf_event_attr attr = { + .type = etype, + .size = sizeof(struct perf_event_attr), + .pinned = true, + /* + * It should never reach here if the platform doesn't support the sscofpmf + * extension as mode filtering won't work without it. + */ + .exclude_host = true, + .exclude_hv = true, + .exclude_user = !!(flag & SBI_PMU_CFG_FLAG_SET_UINH), + .exclude_kernel = !!(flag & SBI_PMU_CFG_FLAG_SET_SINH), + .config1 = RISCV_KVM_PMU_CONFIG1_GUEST_EVENTS, + }; + + if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + if (kvm_pmu_is_fw_event(eidx)) { + sbiret = SBI_ERR_NOT_SUPPORTED; + goto out; + } + + /* + * SKIP_MATCH flag indicates the caller is aware of the assigned counter + * for this event. Just do a sanity check if it is already marked as used.
+ */ + if (flag & SBI_PMU_CFG_FLAG_SKIP_MATCH) { + if (!test_bit(ctr_base + __ffs(ctr_mask), kvpmu->pmc_in_use)) { + sbiret = SBI_ERR_FAILURE; + goto out; + } + ctr_idx = ctr_base + __ffs(ctr_mask); + } else { + + ctr_idx = pmu_get_pmc_index(kvpmu, eidx, ctr_base, ctr_mask); + if (ctr_idx < 0) { + sbiret = SBI_ERR_NOT_SUPPORTED; + goto out; + } + } + + pmc = &kvpmu->pmc[ctr_idx]; + kvm_pmu_release_perf_event(pmc); + pmc->idx = ctr_idx; + + config = kvm_pmu_get_perf_event_config(eidx, evtdata); + attr.config = config; + if (flag & SBI_PMU_CFG_FLAG_CLEAR_VALUE) { + //TODO: Do we really want to clear the value in hardware counter + pmc->counter_val = 0; + } + + /* + * Set the default sample_period for now. The guest specified value + * will be updated in the start call. + */ + attr.sample_period = kvm_pmu_get_sample_period(pmc); + + event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc); + if (IS_ERR(event)) { + pr_err("kvm pmu event creation failed for eidx %lx: %ld\n", eidx, PTR_ERR(event)); + return PTR_ERR(event); + } + + set_bit(ctr_idx, kvpmu->pmc_in_use); + pmc->perf_event = event; + if (flag & SBI_PMU_CFG_FLAG_AUTO_START) + perf_event_enable(pmc->perf_event); + + edata->out_val = ctr_idx; +out: + edata->err_val = sbiret; + return 0; } @@ -164,9 +506,9 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_HW; if (i < 3) /* CY, IR counters */ - kvpmu->pmc[i].cinfo.width = 63; + pmc->cinfo.width = 63; else - kvpmu->pmc[i].cinfo.width = hpm_width; + pmc->cinfo.width = hpm_width; /* * The CSR number doesn't have any relation with the logical * hardware counters. The CSR numbers are encoded sequentially @@ -187,7 +529,19 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) { - /* TODO */ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + struct kvm_pmc *pmc; + int i; + + if (!kvpmu) + return; + + for_each_set_bit(i, kvpmu->pmc_in_use, RISCV_MAX_COUNTERS) { + pmc = &kvpmu->pmc[i]; + pmc->counter_val = 0; + kvm_pmu_release_perf_event(pmc); + } + bitmap_zero(kvpmu->pmc_in_use, RISCV_MAX_COUNTERS); } void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)

From patchwork Fri Jan 27 18:25:57 2023
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 49584
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Anup Patel, Atish Patra, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon
Subject: [PATCH v3 13/14] RISC-V: KVM: Support firmware events
Date: Fri, 27 Jan 2023 10:25:57 -0800
Message-Id: <20230127182558.2416400-14-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

The SBI PMU extension defines a set of firmware events which can provide useful information to guests about the number of SBI calls. As the hypervisor implements the SBI PMU extension, these firmware events correspond to ecall invocations from VS-mode to HS-mode. All other firmware events will always report zero if monitored, as KVM doesn't implement them.

This patch adds all the infrastructure required to support firmware events.
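As a rough orientation, the following self-contained sketch mirrors the bookkeeping scheme described above. The names here (vcpu_pmu, fw_event_incr, MAX_FW_EVENTS, and so on) are simplified placeholders rather than the identifiers added by this patch: a firmware event is not backed by a hardware perf counter, the hypervisor simply bumps a per-vCPU software counter whenever it services the matching guest SBI call, and reading the virtual counter just returns that value.

/* Minimal sketch of per-vCPU firmware-event counting (hypothetical names). */
#include <stdbool.h>
#include <stdio.h>

#define MAX_FW_EVENTS 32

struct fw_event {
	unsigned long value;   /* current count */
	bool started;          /* only count while the guest has started it */
};

struct vcpu_pmu {
	struct fw_event fw_event[MAX_FW_EVENTS];
};

/* Called from the hypervisor's SBI call handlers. */
static int fw_event_incr(struct vcpu_pmu *pmu, unsigned long fid)
{
	if (fid >= MAX_FW_EVENTS)
		return -1;
	if (pmu->fw_event[fid].started)
		pmu->fw_event[fid].value++;
	return 0;
}

/* Called when the guest reads a counter mapped to a firmware event. */
static unsigned long fw_event_read(struct vcpu_pmu *pmu, unsigned long fid)
{
	return fid < MAX_FW_EVENTS ? pmu->fw_event[fid].value : 0;
}

int main(void)
{
	struct vcpu_pmu pmu = { 0 };
	unsigned long fid = 0;			/* e.g. "SET_TIMER calls received" */

	pmu.fw_event[fid].started = true;	/* guest started the counter */
	fw_event_incr(&pmu, fid);		/* hypervisor handled one SBI call */
	fw_event_incr(&pmu, fid);
	printf("fw event %lu count = %lu\n", fid, fw_event_read(&pmu, fid));
	return 0;
}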
Signed-off-by: Atish Patra Reviewed-by: Anup Patel --- arch/riscv/include/asm/kvm_vcpu_pmu.h | 16 +++ arch/riscv/kvm/vcpu_pmu.c | 144 +++++++++++++++++++------- 2 files changed, 124 insertions(+), 36 deletions(-) diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h index 022d45d..b235e7e 100644 --- a/arch/riscv/include/asm/kvm_vcpu_pmu.h +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h @@ -17,6 +17,14 @@ #define RISCV_KVM_MAX_FW_CTRS 32 #define RISCV_MAX_COUNTERS 64 +struct kvm_fw_event { + /* Current value of the event */ + unsigned long value; + + /* Event monitoring status */ + bool started; +}; + /* Per virtual pmu counter data */ struct kvm_pmc { u8 idx; @@ -25,11 +33,14 @@ struct kvm_pmc { union sbi_pmu_ctr_info cinfo; /* Event monitoring status */ bool started; + /* Monitoring event ID */ + unsigned long event_idx; }; /* PMU data structure per vcpu */ struct kvm_pmu { struct kvm_pmc pmc[RISCV_MAX_COUNTERS]; + struct kvm_fw_event fw_event[RISCV_KVM_MAX_FW_CTRS]; /* Number of the virtual firmware counters available */ int num_fw_ctrs; /* Number of the virtual hardware counters available */ @@ -52,6 +63,7 @@ struct kvm_pmu { { .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, #endif +int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid); int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, unsigned long *val, unsigned long new_val, unsigned long wr_mask); @@ -81,6 +93,10 @@ struct kvm_pmu { #define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \ { .base = 0, .count = 0, .func = NULL }, +static inline int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid) +{ + return 0; +} static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) { diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index 73dccf7..b8d6aba 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -203,12 +203,15 @@ static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); struct kvm_pmc *pmc; u64 enabled, running; + int fevent_code; pmc = &kvpmu->pmc[cidx]; - if (!pmc->perf_event) - return -EINVAL; - pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running); + if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) { + fevent_code = get_event_code(pmc->event_idx); + pmc->counter_val = kvpmu->fw_event[fevent_code].value; + } else if (pmc->perf_event) + pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running); *out_val = pmc->counter_val; return 0; @@ -224,6 +227,55 @@ static int kvm_pmu_validate_counter_mask(struct kvm_pmu *kvpmu, unsigned long ct return 0; } +static int kvm_pmu_create_perf_event(struct kvm_pmc *pmc, int ctr_idx, + struct perf_event_attr *attr, unsigned long flag, + unsigned long eidx, unsigned long evtdata) +{ + struct perf_event *event; + + kvm_pmu_release_perf_event(pmc); + pmc->idx = ctr_idx; + + attr->config = kvm_pmu_get_perf_event_config(eidx, evtdata); + if (flag & SBI_PMU_CFG_FLAG_CLEAR_VALUE) { + //TODO: Do we really want to clear the value in hardware counter + pmc->counter_val = 0; + } + + /* + * Set the default sample_period for now. The guest specified value + * will be updated in the start call. 
+ */ + attr->sample_period = kvm_pmu_get_sample_period(pmc); + + event = perf_event_create_kernel_counter(attr, -1, current, NULL, pmc); + if (IS_ERR(event)) { + pr_err("kvm pmu event creation failed for eidx %lx: %ld\n", eidx, PTR_ERR(event)); + return PTR_ERR(event); + } + + pmc->perf_event = event; + if (flag & SBI_PMU_CFG_FLAG_AUTO_START) + perf_event_enable(pmc->perf_event); + + return 0; +} + +int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + struct kvm_fw_event *fevent; + + if (!kvpmu || fid >= SBI_PMU_FW_MAX) + return -EINVAL; + + fevent = &kvpmu->fw_event[fid]; + if (fevent->started) + fevent->value++; + + return 0; +} + int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, unsigned long *val, unsigned long new_val, unsigned long wr_mask) @@ -276,6 +328,7 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); int i, pmc_index, sbiret = 0; struct kvm_pmc *pmc; + int fevent_code; if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) { sbiret = SBI_ERR_INVALID_PARAM; @@ -290,7 +343,22 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, pmc = &kvpmu->pmc[pmc_index]; if (flag & SBI_PMU_START_FLAG_SET_INIT_VALUE) pmc->counter_val = ival; - if (pmc->perf_event) { + if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) { + fevent_code = get_event_code(pmc->event_idx); + if (fevent_code >= SBI_PMU_FW_MAX) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + /* Check if the counter was already started for some reason */ + if (kvpmu->fw_event[fevent_code].started) { + sbiret = SBI_ERR_ALREADY_STARTED; + continue; + } + + kvpmu->fw_event[fevent_code].started = true; + kvpmu->fw_event[fevent_code].value = pmc->counter_val; + } else if (pmc->perf_event) { if (unlikely(pmc->started)) { sbiret = SBI_ERR_ALREADY_STARTED; continue; @@ -317,6 +385,7 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, int i, pmc_index, sbiret = 0; u64 enabled, running; struct kvm_pmc *pmc; + int fevent_code; if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) { sbiret = SBI_ERR_INVALID_PARAM; @@ -329,7 +398,18 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, if (!test_bit(pmc_index, kvpmu->pmc_in_use)) continue; pmc = &kvpmu->pmc[pmc_index]; - if (pmc->perf_event) { + if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) { + fevent_code = get_event_code(pmc->event_idx); + if (fevent_code >= SBI_PMU_FW_MAX) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + if (!kvpmu->fw_event[fevent_code].started) + sbiret = SBI_ERR_ALREADY_STOPPED; + + kvpmu->fw_event[fevent_code].started = false; + } else if (pmc->perf_event) { if (pmc->started) { /* Stop counting the counter */ perf_event_disable(pmc->perf_event); @@ -342,11 +422,14 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running); kvm_pmu_release_perf_event(pmc); - clear_bit(pmc_index, kvpmu->pmc_in_use); } } else { sbiret = SBI_ERR_INVALID_PARAM; } + if (flag & SBI_PMU_STOP_FLAG_RESET) { + pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID; + clear_bit(pmc_index, kvpmu->pmc_in_use); + } } out: @@ -361,12 +444,11 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba struct kvm_vcpu_sbi_ext_data *edata) { struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); - struct perf_event *event; - 
int ctr_idx; + int ctr_idx, sbiret = 0, ret; u32 etype = kvm_pmu_get_perf_event_type(eidx); - u64 config; - struct kvm_pmc *pmc; - int sbiret = 0; + struct kvm_pmc *pmc = NULL; + bool is_fevent; + unsigned long event_code; struct perf_event_attr attr = { .type = etype, .size = sizeof(struct perf_event_attr), @@ -387,7 +469,9 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba goto out; } - if (kvm_pmu_is_fw_event(eidx)) { + event_code = get_event_code(eidx); + is_fevent = kvm_pmu_is_fw_event(eidx); + if (is_fevent && event_code >= SBI_PMU_FW_MAX) { sbiret = SBI_ERR_NOT_SUPPORTED; goto out; } @@ -412,33 +496,17 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba } pmc = &kvpmu->pmc[ctr_idx]; - kvm_pmu_release_perf_event(pmc); - pmc->idx = ctr_idx; - - config = kvm_pmu_get_perf_event_config(eidx, evtdata); - attr.config = config; - if (flag & SBI_PMU_CFG_FLAG_CLEAR_VALUE) { - //TODO: Do we really want to clear the value in hardware counter - pmc->counter_val = 0; - } - - /* - * Set the default sample_period for now. The guest specified value - * will be updated in the start call. - */ - attr.sample_period = kvm_pmu_get_sample_period(pmc); - - event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc); - if (IS_ERR(event)) { - pr_err("kvm pmu event creation failed for eidx %lx: %ld\n", eidx, PTR_ERR(event)); - return PTR_ERR(event); + if (is_fevent) { + if (flag & SBI_PMU_CFG_FLAG_AUTO_START) + kvpmu->fw_event[event_code].started = true; + } else { + ret = kvm_pmu_create_perf_event(pmc, ctr_idx, &attr, flag, eidx, evtdata); + if (ret) + return ret; } set_bit(ctr_idx, kvpmu->pmc_in_use); - pmc->perf_event = event; - if (flag & SBI_PMU_CFG_FLAG_AUTO_START) - perf_event_enable(pmc->perf_event); - + pmc->event_idx = eidx; edata->out_val = ctr_idx; out: edata->err_val = sbiret; @@ -489,6 +557,7 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) kvpmu->num_hw_ctrs = num_hw_ctrs; kvpmu->num_fw_ctrs = num_fw_ctrs; + memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event)); /* * There is no correlation between the logical hardware counter and virtual counters. 
@@ -502,6 +571,7 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) continue; pmc = &kvpmu->pmc[i]; pmc->idx = i; + pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID; if (i < kvpmu->num_hw_ctrs) { kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_HW; if (i < 3) @@ -540,8 +610,10 @@ void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) pmc = &kvpmu->pmc[i]; pmc->counter_val = 0; kvm_pmu_release_perf_event(pmc); + pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID; } bitmap_zero(kvpmu->pmc_in_use, RISCV_MAX_COUNTERS); + memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event)); } void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)

From patchwork Fri Jan 27 18:25:58 2023
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 49585
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Anup Patel, Atish Patra, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon
Subject: [PATCH v3 14/14] RISC-V: KVM: Increment firmware pmu events
Date: Fri, 27 Jan 2023 10:25:58 -0800
Message-Id: <20230127182558.2416400-15-atishp@rivosinc.com>
In-Reply-To: <20230127182558.2416400-1-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>

KVM supports firmware events now. Invoke the firmware event increment function from appropriate places.

Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
---
 arch/riscv/kvm/tlb.c | 4 ++++
 arch/riscv/kvm/vcpu_sbi_replace.c | 7 +++++++
 2 files changed, 11 insertions(+)

diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c index 309d79b..b797f7c 100644 --- a/arch/riscv/kvm/tlb.c +++ b/arch/riscv/kvm/tlb.c @@ -181,6 +181,7 @@ void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu) void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu) { + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_RCVD); local_flush_icache_all(); } @@ -264,15 +265,18 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu) d.addr, d.size, d.order); break; case KVM_RISCV_HFENCE_VVMA_ASID_GVA: + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD); kvm_riscv_local_hfence_vvma_asid_gva( READ_ONCE(v->vmid), d.asid, d.addr, d.size, d.order); break; case KVM_RISCV_HFENCE_VVMA_ASID_ALL: + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD); kvm_riscv_local_hfence_vvma_asid_all( READ_ONCE(v->vmid), d.asid); break; case KVM_RISCV_HFENCE_VVMA_GVA: + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD); kvm_riscv_local_hfence_vvma_gva( READ_ONCE(v->vmid), d.addr, d.size, d.order); diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c index abeb55f..71a671e 100644 --- a/arch/riscv/kvm/vcpu_sbi_replace.c +++ b/arch/riscv/kvm/vcpu_sbi_replace.c @@ -11,6 +11,7 @@ #include #include #include +#include #include static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, @@ -25,6 +26,7 @@ static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, return 0; } + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_SET_TIMER); #if __riscv_xlen == 32 next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0; #else @@ -57,6 +59,7 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, return 0; } + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_IPI_SENT); kvm_for_each_vcpu(i, tmp, vcpu->kvm) { if (hbase != -1UL) { if (tmp->vcpu_id < hbase) @@ -67,6 +70,7 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, ret = kvm_riscv_vcpu_set_interrupt(tmp, IRQ_VS_SOFT); if (ret < 0) break; + kvm_riscv_vcpu_pmu_incr_fw(tmp, SBI_PMU_FW_IPI_RECVD); } return ret; @@ -90,6 +94,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run switch (funcid) { case SBI_EXT_RFENCE_REMOTE_FENCE_I: kvm_riscv_fence_i(vcpu->kvm, hbase, hmask); + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT); break; case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA: if (cp->a2 == 0 && cp->a3 == 0) @@ -97,6 +102,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run else
kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask, cp->a2, cp->a3, PAGE_SHIFT); + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT); break; case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID: if (cp->a2 == 0 && cp->a3 == 0) @@ -107,6 +113,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run hbase, hmask, cp->a2, cp->a3, PAGE_SHIFT, cp->a4); + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT); break; case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA: case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID: