From patchwork Fri Jun 30 12:03:46 2023
X-Patchwork-Submitter: Gowthami Thiagarajan
X-Patchwork-Id: 114705
From: Gowthami Thiagarajan
Subject: [PATCH 1/6] perf/marvell: Marvell PEM performance monitor support
Date: Fri, 30 Jun 2023 17:33:46 +0530
Message-ID: <20230630120351.1143773-2-gthiagarajan@marvell.com>
In-Reply-To: <20230630120351.1143773-1-gthiagarajan@marvell.com>
References: <20230630120351.1143773-1-gthiagarajan@marvell.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The PCI Express Interface PMU includes various performance counters to monitor the data transmitted over the PCIe link. The counters track inbound and outbound transactions, with separate counters for posted, non-posted and completion TLPs. Inbound and outbound memory read requests, along with their latencies, can also be monitored. Address Translation Services (ATS) events such as ATS Translation, ATS Page Request and ATS Invalidation, along with their corresponding latencies, are also supported. The performance counters are 64 bits wide.

For instance, perf stat -e ib_tlp_pr tracks the inbound posted TLPs for the workload.

Signed-off-by: Linu Cherian
Signed-off-by: Gowthami Thiagarajan
---
 MAINTAINERS                    |   7 +
 drivers/perf/Kconfig           |   7 +
 drivers/perf/Makefile          |   1 +
 drivers/perf/marvell_pem_pmu.c | 433 +++++++++++++++++++++++++++++++++
 include/linux/cpuhotplug.h     |   1 +
 5 files changed, 449 insertions(+)
 create mode 100644 drivers/perf/marvell_pem_pmu.c

diff --git a/MAINTAINERS b/MAINTAINERS
index c6545eb54104..55a2a9b6f346 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12473,6 +12473,13 @@ S: Supported
 F: Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst
 F: drivers/net/ethernet/marvell/octeontx2/af/
 
+MARVELL PEM PMU DRIVER
+M: Linu Cherian
+M: Gowthami Thiagarajan
+S: Supported
+F: Documentation/devicetree/bindings/perf/marvell-odyssey-pem.yaml
+F: drivers/perf/marvell_pem_pmu.c
+
 MARVELL PRESTERA ETHERNET SWITCH DRIVER
 M: Taras Chornyi
 S: Supported
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index 66c259000a44..1cd8d07ffefd 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -203,4 +203,11 @@ source "drivers/perf/arm_cspmu/Kconfig"
 
 source "drivers/perf/amlogic/Kconfig"
 
+config MARVELL_PEM_PMU
+	tristate "MARVELL PEM PMU Support"
+	depends on ARCH_THUNDER || (COMPILE_TEST && 64BIT)
+	help
+	  Enable support for PCIe Interface performance monitoring
+	  on Marvell platform.
+
 endmenu
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index 13e45da61100..bf9fe9cacad9 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o
 obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o
 obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o
 obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o
+obj-$(CONFIG_MARVELL_PEM_PMU) += marvell_pem_pmu.o
 obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o
 obj-$(CONFIG_ALIBABA_UNCORE_DRW_PMU) += alibaba_uncore_drw_pmu.o
 obj-$(CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU) += arm_cspmu/
diff --git a/drivers/perf/marvell_pem_pmu.c b/drivers/perf/marvell_pem_pmu.c
new file mode 100644
index 000000000000..fb27112aa7d4
--- /dev/null
+++ b/drivers/perf/marvell_pem_pmu.c
@@ -0,0 +1,433 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell PEM(PCIe RC) Performance Monitor Driver
+ *
+ * Copyright (C) 2023 Marvell.
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* Each of these events maps to a free running 64 bit counter + * with no event control, but can be reset. + * + */ +enum pem_events { + IB_TLP_NPR, + IB_TLP_PR, + IB_TLP_CPL, + IB_TLP_DWORDS_NPR, + IB_TLP_DWORDS_PR, + IB_TLP_DWORDS_CPL, + IB_INFLIGHT, + IB_READS, + IB_REQ_NO_RO_NCB, + IB_REQ_NO_RO_EBUS, + OB_TLP_NPR, + OB_TLP_PR, + OB_TLP_CPL, + OB_TLP_DWORDS_NPR, + OB_TLP_DWORDS_PR, + OB_TLP_DWORDS_CPL, + OB_INFLIGHT, + OB_READS, + OB_MERGES_NPR, + OB_MERGES_PR, + OB_MERGES_CPL, + ATS_TRANS, + ATS_TRANS_LATENCY, + ATS_PRI, + ATS_PRI_LATENCY, + ATS_INV, + ATS_INV_LATENCY, + PEM_EVENTIDS_MAX, +}; + +static u64 eventid_to_offset_table[] = { + 0x0, + 0x8, + 0x10, + 0x100, + 0x108, + 0x110, + 0x200, + 0x300, + 0x400, + 0x408, + 0x500, + 0x508, + 0x510, + 0x600, + 0x608, + 0x610, + 0x700, + 0x800, + 0x900, + 0x908, + 0x910, + 0x2D18, + 0x2D20, + 0x2D28, + 0x2D30, + 0x2D38, + 0x2D40, +}; + +struct pem_pmu { + struct pmu pmu; + void __iomem *base; + unsigned int cpu; + struct device *dev; + struct hlist_node node; +}; + +#define to_pem_pmu(p) container_of(p, struct pem_pmu, pmu) + +static int eventid_to_offset(int eventid) +{ + return eventid_to_offset_table[eventid]; +} + +/* Events */ +static ssize_t pem_pmu_event_show(struct device *dev, + struct device_attribute *attr, + char *page) +{ + struct perf_pmu_events_attr *pmu_attr; + + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr); + return sysfs_emit(page, "event=0x%02llx\n", pmu_attr->id); +} + +#define PEM_EVENT_ATTR(_name, _id) \ + (&((struct perf_pmu_events_attr[]) { \ + { .attr = __ATTR(_name, 0444, pem_pmu_event_show, NULL), \ + .id = _id, } \ + })[0].attr.attr) + +static struct attribute *pem_perf_events_attrs[] = { + PEM_EVENT_ATTR(ib_tlp_npr, IB_TLP_NPR), + PEM_EVENT_ATTR(ib_tlp_pr, IB_TLP_PR), + PEM_EVENT_ATTR(ib_tlp_cpl_partid, IB_TLP_CPL), + PEM_EVENT_ATTR(ib_tlp_dwords_npr, IB_TLP_DWORDS_NPR), + PEM_EVENT_ATTR(ib_tlp_dwords_pr, IB_TLP_DWORDS_PR), + PEM_EVENT_ATTR(ib_tlp_dwords_cpl_partid, IB_TLP_DWORDS_CPL), + PEM_EVENT_ATTR(ib_inflight, IB_INFLIGHT), + PEM_EVENT_ATTR(ib_reads, IB_READS), + PEM_EVENT_ATTR(ib_req_no_ro_ncb, IB_REQ_NO_RO_NCB), + PEM_EVENT_ATTR(ib_req_no_ro_ebus, IB_REQ_NO_RO_EBUS), + PEM_EVENT_ATTR(ob_tlp_npr_partid, OB_TLP_NPR), + PEM_EVENT_ATTR(ob_tlp_pr_partid, OB_TLP_PR), + PEM_EVENT_ATTR(ob_tlp_cpl_partid, OB_TLP_CPL), + PEM_EVENT_ATTR(ob_tlp_dwords_npr_partid, OB_TLP_DWORDS_NPR), + PEM_EVENT_ATTR(ob_tlp_dwords_pr_partid, OB_TLP_DWORDS_PR), + PEM_EVENT_ATTR(ob_tlp_dwords_cpl_partid, OB_TLP_DWORDS_CPL), + PEM_EVENT_ATTR(ob_inflight_partid, OB_INFLIGHT), + PEM_EVENT_ATTR(ob_reads_partid, OB_READS), + PEM_EVENT_ATTR(ob_merges_npr_partid, OB_MERGES_NPR), + PEM_EVENT_ATTR(ob_merges_pr_partid, OB_MERGES_PR), + PEM_EVENT_ATTR(ob_merges_cpl_partid, OB_MERGES_CPL), + PEM_EVENT_ATTR(ats_trans, ATS_TRANS), + PEM_EVENT_ATTR(ats_trans_latency, ATS_TRANS_LATENCY), + PEM_EVENT_ATTR(ats_pri, ATS_PRI), + PEM_EVENT_ATTR(ats_pri_latency, ATS_PRI_LATENCY), + PEM_EVENT_ATTR(ats_inv, ATS_INV), + PEM_EVENT_ATTR(ats_inv_latency, ATS_INV_LATENCY), + NULL +}; + +static struct attribute_group pem_perf_events_attr_group = { + .name = "events", + .attrs = pem_perf_events_attrs, +}; + +PMU_FORMAT_ATTR(event, "config:0-5"); + +static struct attribute *pem_perf_format_attrs[] = { + &format_attr_event.attr, + NULL +}; + +static struct attribute_group pem_perf_format_attr_group = { + .name = "format", + .attrs = pem_perf_format_attrs, +}; + 
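The event ids above double as the perf config value: the "events" attribute group is what lets the perf tool resolve names such as ib_tlp_pr, and the "format" group maps those ids onto config bits 0-5. As a rough illustration of how this is consumed from user space, the following minimal C sketch (an illustrative example, not part of the patch itself) opens one PEM event with perf_event_open(); the PMU type value is a placeholder that a real program would read from the PMU's sysfs "type" file, and the CPU should be the one listed in the PMU's cpumask attribute rather than a hard-coded CPU 0.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = 42;		/* placeholder: the PMU's dynamic type id from sysfs */
	attr.config = 0x1;	/* IB_TLP_PR, per the event table above */

	/* Uncore PMU: system-wide counting, pinned to one CPU (pid = -1) */
	fd = perf_event_open(&attr, -1, 0, -1, 0);
	if (fd < 0)
		return 1;

	sleep(1);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("ib_tlp_pr: %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}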
+/* cpumask */ +static ssize_t pem_perf_cpumask_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct pem_pmu *pmu = dev_get_drvdata(dev); + + return cpumap_print_to_pagebuf(true, buf, cpumask_of(pmu->cpu)); +} + +static struct device_attribute pem_perf_cpumask_attr = + __ATTR(cpumask, 0444, pem_perf_cpumask_show, NULL); + +static struct attribute *pem_perf_cpumask_attrs[] = { + &pem_perf_cpumask_attr.attr, + NULL +}; + +static struct attribute_group pem_perf_cpumask_attr_group = { + .attrs = pem_perf_cpumask_attrs, +}; + +static const struct attribute_group *pem_perf_attr_groups[] = { + &pem_perf_events_attr_group, + &pem_perf_cpumask_attr_group, + &pem_perf_format_attr_group, + NULL +}; + +static int pem_perf_event_init(struct perf_event *event) +{ + struct pem_pmu *pmu = to_pem_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + + if (event->attr.type != event->pmu->type) + return -ENOENT; + + if (is_sampling_event(event)) { + dev_info(pmu->dev, "Sampling not supported!\n"); + return -EOPNOTSUPP; + } + + if (event->cpu < 0) { + dev_warn(pmu->dev, "Can't provide per-task data!\n"); + return -EOPNOTSUPP; + } + + /* We must NOT create groups containing mixed PMUs */ + if (event->group_leader->pmu != event->pmu && + !is_software_event(event->group_leader)) + return -EINVAL; + + /* Set ownership of event to one CPU, same event can not be observed + * on multiple cpus at same time. + */ + event->cpu = pmu->cpu; + hwc->idx = -1; + return 0; +} + +static void pem_perf_counter_reset(struct pem_pmu *pmu, + struct perf_event *event, int eventid) +{ + writeq_relaxed(0x0, pmu->base + eventid_to_offset(eventid)); +} + +static u64 pem_perf_read_counter(struct pem_pmu *pmu, + struct perf_event *event, int eventid) +{ + return readq_relaxed(pmu->base + eventid_to_offset(eventid)); +} + +static void pem_perf_event_update(struct perf_event *event) +{ + struct pem_pmu *pmu = to_pem_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + u64 prev_count, new_count; + + do { + prev_count = local64_read(&hwc->prev_count); + new_count = pem_perf_read_counter(pmu, event, hwc->idx); + } while (local64_xchg(&hwc->prev_count, new_count) != prev_count); + + local64_add((new_count - prev_count), &event->count); +} + +static void pem_perf_event_start(struct perf_event *event, int flags) +{ + struct pem_pmu *pmu = to_pem_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + int eventid = hwc->idx; + + local64_set(&hwc->prev_count, 0); + + pem_perf_counter_reset(pmu, event, eventid); + + hwc->state = 0; +} + +static int pem_perf_event_add(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + + hwc->idx = event->attr.config; + if (hwc->idx >= PEM_EVENTIDS_MAX) + return -EINVAL; + hwc->state |= PERF_HES_STOPPED; + + if (flags & PERF_EF_START) + pem_perf_event_start(event, flags); + + return 0; +} + +static void pem_perf_event_stop(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + + if (flags & PERF_EF_UPDATE) + pem_perf_event_update(event); + + hwc->state |= PERF_HES_STOPPED; +} + +static void pem_perf_event_del(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + + pem_perf_event_stop(event, PERF_EF_UPDATE); + hwc->idx = -1; +} + +static int pem_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) +{ + struct pem_pmu *pmu = hlist_entry_safe(node, struct pem_pmu, + node); + unsigned int target; + + if (cpu != pmu->cpu) + return 0; + + target = 
cpumask_any_but(cpu_online_mask, cpu); + if (target >= nr_cpu_ids) + return 0; + + perf_pmu_migrate_context(&pmu->pmu, cpu, target); + pmu->cpu = target; + return 0; +} + +static int pem_perf_probe(struct platform_device *pdev) +{ + struct pem_pmu *pem_pmu; + struct resource *res; + void __iomem *base; + char *name; + int ret; + + pem_pmu = devm_kzalloc(&pdev->dev, sizeof(*pem_pmu), GFP_KERNEL); + if (!pem_pmu) + return -ENOMEM; + + pem_pmu->dev = &pdev->dev; + platform_set_drvdata(pdev, pem_pmu); + + base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); + if (IS_ERR(base)) + return PTR_ERR(base); + + pem_pmu->base = base; + + pem_pmu->pmu = (struct pmu) { + .module = THIS_MODULE, + .capabilities = PERF_PMU_CAP_NO_EXCLUDE, + .task_ctx_nr = perf_invalid_context, + .attr_groups = pem_perf_attr_groups, + .event_init = pem_perf_event_init, + .add = pem_perf_event_add, + .del = pem_perf_event_del, + .start = pem_perf_event_start, + .stop = pem_perf_event_stop, + .read = pem_perf_event_update, + }; + + /* Choose this cpu to collect perf data */ + pem_pmu->cpu = raw_smp_processor_id(); + + name = devm_kasprintf(pem_pmu->dev, GFP_KERNEL, "mrvl_pcie_rc_pmu_%llx", + res->start); + if (!name) + return -ENOMEM; + + cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_MARVELL_PEM_ONLINE, + &pem_pmu->node); + + ret = perf_pmu_register(&pem_pmu->pmu, name, -1); + if (ret) + goto error; + + pr_info("Marvell PEM(PCIe RC) PMU Driver for pem@%llx\n", res->start); + return 0; +error: + cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_MARVELL_PEM_ONLINE, + &pem_pmu->node); + return ret; +} + +static int pem_perf_remove(struct platform_device *pdev) +{ + struct pem_pmu *pem_pmu = platform_get_drvdata(pdev); + + cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_MARVELL_PEM_ONLINE, + &pem_pmu->node); + + perf_pmu_unregister(&pem_pmu->pmu); + return 0; +} + +#ifdef CONFIG_OF +static const struct of_device_id pem_pmu_of_match[] = { + { .compatible = "marvell,pem-pmu", }, + { }, +}; +MODULE_DEVICE_TABLE(of, pem_pmu_of_match); +#endif + +#ifdef CONFIG_ACPI +static const struct acpi_device_id pem_pmu_acpi_match[] = { + {"MRVL000E", 0}, + {}, +}; +MODULE_DEVICE_TABLE(acpi, pem_pmu_acpi_match); +#endif + +static struct platform_driver pem_pmu_driver = { + .driver = { + .name = "pem-pmu", + .of_match_table = of_match_ptr(pem_pmu_of_match), + .acpi_match_table = ACPI_PTR(pem_pmu_acpi_match), + .suppress_bind_attrs = true, + }, + .probe = pem_perf_probe, + .remove = pem_perf_remove, +}; + +static int __init pem_pmu_init(void) +{ + int ret; + + ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_ARM_MARVELL_PEM_ONLINE, + "perf/marvell/pem:online", NULL, + pem_pmu_offline_cpu); + if (ret) + return ret; + + ret = platform_driver_register(&pem_pmu_driver); + if (ret) + cpuhp_remove_multi_state(CPUHP_AP_PERF_ARM_MARVELL_PEM_ONLINE); + return ret; +} + +static void __exit pem_pmu_exit(void) +{ + platform_driver_unregister(&pem_pmu_driver); + cpuhp_remove_multi_state(CPUHP_AP_PERF_ARM_MARVELL_PEM_ONLINE); +} + +module_init(pem_pmu_init); +module_exit(pem_pmu_exit); + +MODULE_DESCRIPTION("Marvell PEM Perf driver"); +MODULE_AUTHOR("Linu Cherian "); +MODULE_LICENSE("GPL v2"); diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index 0f1001dca0e0..f7710c03d24e 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -235,6 +235,7 @@ enum cpuhp_state { CPUHP_AP_PERF_ARM_APM_XGENE_ONLINE, CPUHP_AP_PERF_ARM_CAVIUM_TX2_UNCORE_ONLINE, CPUHP_AP_PERF_ARM_MARVELL_CN10K_DDR_ONLINE, + 
CPUHP_AP_PERF_ARM_MARVELL_PEM_ONLINE,
 CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE,
 CPUHP_AP_PERF_POWERPC_CORE_IMC_ONLINE,
 CPUHP_AP_PERF_POWERPC_THREAD_IMC_ONLINE,

From patchwork Fri Jun 30 12:03:47 2023
X-Patchwork-Submitter: Gowthami Thiagarajan
X-Patchwork-Id: 114714
From: Gowthami Thiagarajan
Subject: [PATCH 2/6] dt-bindings: perf: marvell: Add YAML schemas for Marvell PEM pmu
Date: Fri, 30 Jun 2023 17:33:47 +0530
Message-ID: <20230630120351.1143773-3-gthiagarajan@marvell.com>
In-Reply-To: <20230630120351.1143773-1-gthiagarajan@marvell.com>
References: <20230630120351.1143773-1-gthiagarajan@marvell.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add device tree bindings for the Marvell PEM performance monitor unit.

Signed-off-by: Gowthami Thiagarajan
Signed-off-by: Linu Cherian
---
 .../bindings/perf/marvell-odyssey-pem.yaml | 38 +++++++++++++++++++
 1 file changed, 38 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/perf/marvell-odyssey-pem.yaml

diff --git a/Documentation/devicetree/bindings/perf/marvell-odyssey-pem.yaml b/Documentation/devicetree/bindings/perf/marvell-odyssey-pem.yaml
new file mode 100644
index 000000000000..6af201fbccd8
--- /dev/null
+++ b/Documentation/devicetree/bindings/perf/marvell-odyssey-pem.yaml
@@ -0,0 +1,38 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/perf/marvell-odyssey-pem.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Marvell Odyssey PCIe interface performance monitor
+
+maintainers:
+  - Linu Cherian
+  - Gowthami Thiagarajan
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - marvell,pem-pmu
+
+  reg:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    bus {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pmu@8e0000005000 {
+            compatible = "marvell,pem-pmu";
+            reg = <0x8E00 0x00005000 0x0 0x3000>;
+        };
+    };

From patchwork Fri Jun 30 12:03:48 2023
X-Patchwork-Submitter: Gowthami Thiagarajan
X-Patchwork-Id: 114716
From: Gowthami Thiagarajan
Subject: [PATCH 3/6] perf/marvell: Odyssey LLC-TAD performance monitor support
Date: Fri, 30 Jun 2023 17:33:48 +0530
Message-ID: <20230630120351.1143773-4-gthiagarajan@marvell.com>
In-Reply-To: <20230630120351.1143773-1-gthiagarajan@marvell.com>
References: <20230630120351.1143773-1-gthiagarajan@marvell.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Each TAD provides eight 64-bit counters for monitoring cache behavior. The driver always configures the same counter for all the TADs, so the user effectively reserves one of the eight counters in every TAD to look across all TADs. The occurrences of events are aggregated and presented to the user at the end of running the workload. The driver does not provide a way for the user to partition TADs so that different TADs are used for different applications.

The performance events reflect various internal or interface activities. By combining the values from multiple performance counters, cache performance can be measured in terms such as cache miss rate, cache allocations, interface retry rate and internal resource occupancy.

Each supported counter's event and formatting information is exposed to sysfs at /sys/devices/tad/. Use the perf stat command to measure the PMU events. For instance:

perf stat -e tad_hit_ltg,tad_hit_dtg

Signed-off-by: Gowthami Thiagarajan
---
 MAINTAINERS                            |   8 +
 drivers/perf/Kconfig                   |   7 +
 drivers/perf/Makefile                  |   1 +
 drivers/perf/marvell_odyssey_tad_pmu.c | 406 +++++++++++++++++++++++++
 4 files changed, 422 insertions(+)
 create mode 100644 drivers/perf/marvell_odyssey_tad_pmu.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 55a2a9b6f346..bbf3a97502db 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12512,6 +12512,14 @@ L: netdev@vger.kernel.org
 S: Supported
 F: drivers/net/ethernet/marvell/octeon_ep
 
+MARVELL ODYSSEY TAD PMU DRIVER
+M: Gowthami Thiagarajan
+M: Bharat Bhushan
+M: Linu Cherian
+M: George Cherian
+S: Supported
+F: drivers/perf/marvell_odyssey_tad_pmu.c
+
 MATROX FRAMEBUFFER DRIVER
 L: linux-fbdev@vger.kernel.org
 S: Orphan
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index 1cd8d07ffefd..2dc649768c1b 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -210,4 +210,11 @@ config MARVELL_PEM_PMU
	  Enable support for PCIe Interface performance monitoring
	  on Marvell platform.
+config MARVELL_ODYSSEY_TAD_PMU + tristate "MARVELL ODYSSEY LLC-TAD PMU" + depends on ARCH_THUNDER || (COMPILE_TEST && 64BIT) + help + Provides support for Last-Level cache Tag-and-data Units (LLC-TAD) + performance monitor on Odyssey platform + endmenu diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index bf9fe9cacad9..5dc1438e7d3d 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -20,6 +20,7 @@ obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o obj-$(CONFIG_MARVELL_PEM_PMU) += marvell_pem_pmu.o +obj-$(CONFIG_MARVELL_ODYSSEY_TAD_PMU) += marvell_odyssey_tad_pmu.o obj-$(CONFIG_APPLE_M1_CPU_PMU) += apple_m1_cpu_pmu.o obj-$(CONFIG_ALIBABA_UNCORE_DRW_PMU) += alibaba_uncore_drw_pmu.o obj-$(CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU) += arm_cspmu/ diff --git a/drivers/perf/marvell_odyssey_tad_pmu.c b/drivers/perf/marvell_odyssey_tad_pmu.c new file mode 100644 index 000000000000..8f0204c88539 --- /dev/null +++ b/drivers/perf/marvell_odyssey_tad_pmu.c @@ -0,0 +1,406 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell Odyssey LLC-TAD perf driver + * + * Copyright (C) 2023 Marvell. + */ + +#define pr_fmt(fmt) "tad_pmu: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include + +#define TAD_PFC_OFFSET 0x0800 +#define TAD_PFC(counter) (TAD_PFC_OFFSET | ((counter) << 3)) +#define TAD_PRF_OFFSET 0x0900 +#define TAD_PRF(counter) (TAD_PRF_OFFSET | ((counter) << 3)) +#define TAD_PRF_CNTSEL_MASK 0xFF +#define TAD_MAX_COUNTERS 8 + +#define to_tad_pmu(p) (container_of(p, struct tad_pmu, pmu)) + +struct tad_region { + void __iomem *base; +}; + +struct tad_pmu { + struct pmu pmu; + struct tad_region *regions; + u32 region_cnt; + unsigned int cpu; + struct hlist_node node; + struct perf_event *events[TAD_MAX_COUNTERS]; + DECLARE_BITMAP(counters_map, TAD_MAX_COUNTERS); +}; + +static int tad_pmu_cpuhp_state; + +static void tad_pmu_event_counter_read(struct perf_event *event) +{ + struct tad_pmu *tad_pmu = to_tad_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + u32 counter_idx = hwc->idx; + u64 delta, prev, new; + int tad_region; + + do { + prev = local64_read(&hwc->prev_count); + for (tad_region = 0, new = 0; tad_region < tad_pmu->region_cnt; tad_region++) + new += readq(tad_pmu->regions[tad_region].base + + TAD_PFC(counter_idx)); + } while (local64_cmpxchg(&hwc->prev_count, prev, new) != prev); + + delta = (new - prev) & GENMASK_ULL(63, 0); + local64_add(delta, &event->count); +} + +static void tad_pmu_event_counter_stop(struct perf_event *event, int flags) +{ + struct tad_pmu *tad_pmu = to_tad_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + u32 counter_idx = hwc->idx; + int tad_region; + + /* TAD()_PFC() stop counting on the write + * which sets TAD()_PRF()[CNTSEL] == 0 + */ + for (tad_region = 0; tad_region < tad_pmu->region_cnt; tad_region++) + writeq(0, tad_pmu->regions[tad_region].base + TAD_PRF(counter_idx)); + + tad_pmu_event_counter_read(event); + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; +} + +static void tad_pmu_event_counter_start(struct perf_event *event, int flags) +{ + struct tad_pmu *tad_pmu = to_tad_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + u32 event_idx = event->attr.config; + u32 counter_idx = hwc->idx; + u64 reg_val; + int tad_region; + + hwc->state = 0; + + /* Typically TAD_PFC() are zeroed to start counting */ + for (tad_region = 0; tad_region < 
tad_pmu->region_cnt; tad_region++) + writeq(0, tad_pmu->regions[tad_region].base + TAD_PFC(counter_idx)); + + /* TAD()_PFC() start counting on the write + * which sets TAD()_PRF()[CNTSEL] != 0 + */ + for (tad_region = 0; tad_region < tad_pmu->region_cnt; tad_region++) { + reg_val = (event_idx & 0xFF); + writeq(reg_val, tad_pmu->regions[tad_region].base + + TAD_PRF(counter_idx)); + } +} + +static void tad_pmu_event_counter_del(struct perf_event *event, int flags) +{ + struct tad_pmu *tad_pmu = to_tad_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + int idx = hwc->idx; + + tad_pmu_event_counter_stop(event, flags | PERF_EF_UPDATE); + tad_pmu->events[idx] = NULL; + clear_bit(idx, tad_pmu->counters_map); +} + +static int tad_pmu_event_counter_add(struct perf_event *event, int flags) +{ + struct tad_pmu *tad_pmu = to_tad_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + int idx; + + /* Get a free counter for this event */ + idx = find_first_zero_bit(tad_pmu->counters_map, TAD_MAX_COUNTERS); + if (idx == TAD_MAX_COUNTERS) + return -EAGAIN; + + set_bit(idx, tad_pmu->counters_map); + + hwc->idx = idx; + hwc->state = PERF_HES_STOPPED; + tad_pmu->events[idx] = event; + + if (flags & PERF_EF_START) + tad_pmu_event_counter_start(event, flags); + + return 0; +} + +static int tad_pmu_event_init(struct perf_event *event) +{ + struct tad_pmu *tad_pmu = to_tad_pmu(event->pmu); + + if (event->attr.type != event->pmu->type) + return -ENOENT; + + if (!event->attr.disabled) + return -EINVAL; + + if (event->state != PERF_EVENT_STATE_OFF) + return -EINVAL; + + event->cpu = tad_pmu->cpu; + event->hw.idx = -1; + event->hw.config_base = event->attr.config; + + return 0; +} + +static ssize_t tad_pmu_event_show(struct device *dev, + struct device_attribute *attr, char *page) +{ + struct perf_pmu_events_attr *pmu_attr; + + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr); + return sprintf(page, "event=0x%02llx\n", pmu_attr->id); +} + +#define TAD_PMU_EVENT_ATTR(_name, _id) \ + (&((struct perf_pmu_events_attr[]) { \ + { .attr = __ATTR(_name, 0444, tad_pmu_event_show, NULL),\ + .id = _id, } \ + })[0].attr.attr) + +static struct attribute *tad_pmu_event_attrs[] = { + TAD_PMU_EVENT_ATTR(tad_none, 0x0), + TAD_PMU_EVENT_ATTR(tad_req_msh_in_exlmn, 0x3), + TAD_PMU_EVENT_ATTR(tad_alloc_dtg, 0x1a), + TAD_PMU_EVENT_ATTR(tad_alloc_ltg, 0x1b), + TAD_PMU_EVENT_ATTR(tad_alloc_any, 0x1c), + TAD_PMU_EVENT_ATTR(tad_hit_dtg, 0x1d), + TAD_PMU_EVENT_ATTR(tad_hit_ltg, 0x1e), + TAD_PMU_EVENT_ATTR(tad_hit_any, 0x1f), + TAD_PMU_EVENT_ATTR(tad_tag_rd, 0x20), + TAD_PMU_EVENT_ATTR(tad_tot_cycle, 0xFF), + NULL +}; + +static const struct attribute_group tad_pmu_events_attr_group = { + .name = "events", + .attrs = tad_pmu_event_attrs, +}; + +PMU_FORMAT_ATTR(event, "config:0-7"); + +static struct attribute *tad_pmu_format_attrs[] = { + &format_attr_event.attr, + NULL, +}; + +static struct attribute_group tad_pmu_format_attr_group = { + .name = "format", + .attrs = tad_pmu_format_attrs, +}; + +static ssize_t tad_pmu_cpumask_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct tad_pmu *tad_pmu = to_tad_pmu(dev_get_drvdata(dev)); + + return cpumap_print_to_pagebuf(true, buf, cpumask_of(tad_pmu->cpu)); +} + +static DEVICE_ATTR(cpumask, 0444, tad_pmu_cpumask_show, NULL); + +static struct attribute *tad_pmu_cpumask_attrs[] = { + &dev_attr_cpumask.attr, + NULL +}; + +static struct attribute_group tad_pmu_cpumask_attr_group = { + .attrs = tad_pmu_cpumask_attrs, +}; + +static const struct 
attribute_group *tad_pmu_attr_groups[] = { + &tad_pmu_events_attr_group, + &tad_pmu_format_attr_group, + &tad_pmu_cpumask_attr_group, + NULL +}; + +static int tad_pmu_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct tad_region *regions; + struct tad_pmu *tad_pmu; + struct resource *res; + u32 tad_pmu_page_size; + u32 tad_page_size; + u32 tad_cnt; + int i, ret; + char *name; + + tad_pmu = devm_kzalloc(&pdev->dev, sizeof(*tad_pmu), GFP_KERNEL); + if (!tad_pmu) + return -ENOMEM; + + platform_set_drvdata(pdev, tad_pmu); + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) { + dev_err(&pdev->dev, "Mem resource not found\n"); + return -ENODEV; + } + + ret = device_property_read_u32(dev, "marvell,tad-page-size", &tad_page_size); + if (ret) { + dev_err(&pdev->dev, "Can't find tad-page-size property\n"); + return ret; + } + + ret = device_property_read_u32(dev, "marvell,tad-pmu-page-size", + &tad_pmu_page_size); + if (ret) { + dev_err(&pdev->dev, "Can't find tad-pmu-page-size property\n"); + return ret; + } + + ret = device_property_read_u32(dev, "marvell,tad-cnt", &tad_cnt); + if (ret) { + dev_err(&pdev->dev, "Can't find tad-cnt property\n"); + return ret; + } + + regions = kcalloc(tad_cnt, sizeof(*regions), GFP_KERNEL); + if (!regions) + return -ENOMEM; + + /* ioremap the distributed TAD pmu regions */ + for (i = 0; i < tad_cnt && res->start < res->end; i++) { + regions[i].base = devm_ioremap(&pdev->dev, + res->start, + tad_pmu_page_size); + if (IS_ERR(regions[i].base)) { + dev_err(&pdev->dev, "TAD%d ioremap fail\n", i); + return -ENOMEM; + } + res->start += tad_page_size; + } + + tad_pmu->regions = regions; + tad_pmu->region_cnt = tad_cnt; + + tad_pmu->pmu = (struct pmu) { + .module = THIS_MODULE, + .attr_groups = tad_pmu_attr_groups, + .capabilities = PERF_PMU_CAP_NO_EXCLUDE, + .task_ctx_nr = perf_invalid_context, + + .event_init = tad_pmu_event_init, + .add = tad_pmu_event_counter_add, + .del = tad_pmu_event_counter_del, + .start = tad_pmu_event_counter_start, + .stop = tad_pmu_event_counter_stop, + .read = tad_pmu_event_counter_read, + }; + + tad_pmu->cpu = raw_smp_processor_id(); + + /* Register pmu instance for cpu hotplug */ + ret = cpuhp_state_add_instance_nocalls(tad_pmu_cpuhp_state, + &tad_pmu->node); + if (ret) { + dev_err(&pdev->dev, "Error %d registering hotplug\n", ret); + return ret; + } + + name = "tad"; + ret = perf_pmu_register(&tad_pmu->pmu, name, -1); + if (ret) + cpuhp_state_remove_instance_nocalls(tad_pmu_cpuhp_state, + &tad_pmu->node); + + return ret; +} + +static int tad_pmu_remove(struct platform_device *pdev) +{ + struct tad_pmu *pmu = platform_get_drvdata(pdev); + + cpuhp_state_remove_instance_nocalls(tad_pmu_cpuhp_state, + &pmu->node); + perf_pmu_unregister(&pmu->pmu); + + return 0; +} + +#ifdef CONFIG_OF +static const struct of_device_id tad_pmu_of_match[] = { + { .compatible = "marvell,odyssey-tad-pmu", }, + {}, +}; +MODULE_DEVICE_TABLE(of, tad_pmu_of_match); +#endif + +#ifdef CONFIG_ACPI +static const struct acpi_device_id tad_pmu_acpi_match[] = { + {"MRVL000D", 0}, + {}, +}; +MODULE_DEVICE_TABLE(acpi, tad_pmu_acpi_match); +#endif + +static struct platform_driver odyssey_tad_pmu_driver = { + .driver = { + .name = "odyssey_tad_pmu", + .of_match_table = of_match_ptr(tad_pmu_of_match), + .acpi_match_table = ACPI_PTR(tad_pmu_acpi_match), + .suppress_bind_attrs = true, + }, + .probe = tad_pmu_probe, + .remove = tad_pmu_remove, +}; + +static int odyssey_tad_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) +{ + 
struct tad_pmu *pmu = hlist_entry_safe(node, struct tad_pmu, node); + unsigned int target; + + if (cpu != pmu->cpu) + return 0; + + target = cpumask_any_but(cpu_online_mask, cpu); + if (target >= nr_cpu_ids) + return 0; + + perf_pmu_migrate_context(&pmu->pmu, cpu, target); + pmu->cpu = target; + + return 0; +} + +static int __init odyssey_tad_pmu_init(void) +{ + int ret; + + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, + "perf/odyssey/tadpmu:online", + NULL, + odyssey_tad_pmu_offline_cpu); + if (ret < 0) + return ret; + tad_pmu_cpuhp_state = ret; + return platform_driver_register(&odyssey_tad_pmu_driver); +} + +static void __exit odyssey_tad_pmu_exit(void) +{ + platform_driver_unregister(&odyssey_tad_pmu_driver); + cpuhp_remove_multi_state(tad_pmu_cpuhp_state); +} + +module_init(odyssey_tad_pmu_init); +module_exit(odyssey_tad_pmu_exit); + +MODULE_DESCRIPTION("Marvell ODYSSEY LLC-TAD Perf driver"); +MODULE_AUTHOR("Gowthami Thiagarajan "); +MODULE_LICENSE("GPL v2"); From patchwork Fri Jun 30 12:03:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gowthami Thiagarajan X-Patchwork-Id: 114710 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp10325704vqr; Fri, 30 Jun 2023 05:34:46 -0700 (PDT) X-Google-Smtp-Source: APBJJlE81AZFT4Yoe6hQozoDhQLSGoTrQ8dty5ZMD+cMiOxW+1L0jkbSIC9NAr39BMAWHXryszvG X-Received: by 2002:a17:903:32c1:b0:1b0:499f:7a8d with SMTP id i1-20020a17090332c100b001b0499f7a8dmr1932335plr.9.1688128485705; Fri, 30 Jun 2023 05:34:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1688128485; cv=none; d=google.com; s=arc-20160816; b=eZUxBKMipBwgFpmE3sAo1HSYRuKNgcOiFDeh57UJouuJJQAlf2NmhzGgkkzbfepzz5 btRwyl4mr6v8/i9TbgqPAOl2rufaZ3dKUDY8TMvMJG3jK6b0mB28eKhg/JUhzX5o5VUr PfILFORaqcNvSi/A9TJdankY3O8b94iLccxScUCf7Oow9ezytg4Gunx8AYbBemXgmLpa I/ZDXHxR6O41p/XksDI2y90o6OdV2LSEmaHgg2112o/eoZiPQqFEvCHMjzBfa/R5SI4y 2BzqOOnR7BA/AIMv0on3IKg9syj0F9KlGqXiDNSMBvtThnIqSgIuCvKTgln2wLpA9aB6 erUA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=u4Z40xhGawqWRi7aPeii+Iq8sOO7ofMi4Sa4WpeOOMo=; fh=BeTAU9KFPAR19PZDKGOMxIY7iGNYrFofsfNa6AmBg5k=; b=j7wEoEIsq9IJP5X5mE+eNzC1VCpRqUI72sDCHz/nRvZtQcOpei6iyuHCSa0ZPsW9MR sy50WVOlPe/gV3tYMApB9BdQOkGdfj/a4kJqi8/doh7ief697YbL7jeNPSG8JVkIulC/ tiRx0Sxad41tNaXPTsjWgcboW5PmA2B078vsNIKlp1sO4QMP01VjTwZ4JcpKH3LiPYJB CYRPWxwY+Zhsh/6FY9CAuRbi3ZcjMlAnRBL7KLEDLx6ss8ZnSA/Cg9o6vyNi+CMXaRjA apIVa76eXVQEJAtMnfHykv2nSLpkPK/U88LVZxg4Q37GJh82aYPbyIRrheC6mNJ0cWn5 BZxw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@marvell.com header.s=pfpt0220 header.b=J9+zi74e; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=REJECT dis=NONE) header.from=marvell.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id k4-20020a170902c40400b001a677821130si13047763plk.13.2023.06.30.05.34.30; Fri, 30 Jun 2023 05:34:45 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@marvell.com header.s=pfpt0220 header.b=J9+zi74e; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=REJECT dis=NONE) header.from=marvell.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233003AbjF3MEj (ORCPT + 99 others); Fri, 30 Jun 2023 08:04:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49516 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232955AbjF3MEZ (ORCPT ); Fri, 30 Jun 2023 08:04:25 -0400 Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D51D01FFD for ; Fri, 30 Jun 2023 05:04:23 -0700 (PDT) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35U79wCv011195; Fri, 30 Jun 2023 05:04:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=u4Z40xhGawqWRi7aPeii+Iq8sOO7ofMi4Sa4WpeOOMo=; b=J9+zi74eUNSClEECTkc5VR/tP7fHzIRvUUqRVL4AH+riCYXJB9pgezqoURv1r/6xsdaG gZOhx+/kapzJYCjnua6ksc9ucEF1KP2CUCaImasIRwHQb3fPuUvtL9apNpgNFQ8Hj2+H UW0hxVzewp288LuATxHbKpP1wd5wzov+rSVvAgBLGKgTnxFl/JUwLtxuKE0kfjxaRWBg VfBy/iaNdfSLLYtTydcJffnWEWAQ73vYkDE1KXJR803OE4aBOGw4VqRUnkAezHz+y3Bx rgBHjtG6gEUJ3otGgjD9kByrpZhEm01cjvmDBWPaFP3+GaVMoEo7u8XDmEYo9/Uja7X0 yA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3rgvpc63bp-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 30 Jun 2023 05:04:16 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 30 Jun 2023 05:04:13 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 30 Jun 2023 05:04:13 -0700 Received: from IPBU-BLR-SERVER1.marvell.com (IPBU-BLR-SERVER1.marvell.com [10.28.8.41]) by maili.marvell.com (Postfix) with ESMTP id E204A3F7059; Fri, 30 Jun 2023 05:04:10 -0700 (PDT) From: Gowthami Thiagarajan To: , , , CC: , , , , Gowthami Thiagarajan Subject: [PATCH 4/6] dt-bindings: perf: marvell: Add YAML schemas for Marvell Odyssey LLC-TAD pmu Date: Fri, 30 Jun 2023 17:33:49 +0530 Message-ID: <20230630120351.1143773-5-gthiagarajan@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230630120351.1143773-1-gthiagarajan@marvell.com> References: <20230630120351.1143773-1-gthiagarajan@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: o4n1dmVrwuakqI0kXO-mv5yUYGgsmjsb X-Proofpoint-ORIG-GUID: o4n1dmVrwuakqI0kXO-mv5yUYGgsmjsb X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-30_05,2023-06-30_01,2023-05-22_02 X-Spam-Status: No, score=-2.8 required=5.0 
tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770131014698415165?= X-GMAIL-MSGID: =?utf-8?q?1770131014698415165?= Add device tree bindings for Marvell Odyssey LLC-TAD performance monitor unit Signed-off-by: Gowthami Thiagarajan --- .../bindings/perf/marvell-odyssey-tad.yaml | 63 +++++++++++++++++++ 1 file changed, 63 insertions(+) create mode 100644 Documentation/devicetree/bindings/perf/marvell-odyssey-tad.yaml diff --git a/Documentation/devicetree/bindings/perf/marvell-odyssey-tad.yaml b/Documentation/devicetree/bindings/perf/marvell-odyssey-tad.yaml new file mode 100644 index 000000000000..139567166f77 --- /dev/null +++ b/Documentation/devicetree/bindings/perf/marvell-odyssey-tad.yaml @@ -0,0 +1,63 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/perf/marvell-odyssey-tad.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Marvell Odyssey LLC-TAD performance monitor + +maintainers: + - Gowthami Thiagarajan + +description: | + The Tag-and-Data units (TADs) maintain coherence and contain CN10K + shared on-chip last level cache (LLC). The tad pmu measures the + performance of last-level cache. Each tad pmu supports up to eight + counters. + + The DT setup comprises of number of tad blocks, the sizes of pmu + regions, tad blocks and overall base address of the HW. + +properties: + compatible: + const: marvell,odyssey-tad-pmu + + reg: + maxItems: 1 + + marvell,tad-cnt: + description: specifies the number of tads on the soc + $ref: /schemas/types.yaml#/definitions/uint32 + + marvell,tad-page-size: + description: specifies the size of each tad page + $ref: /schemas/types.yaml#/definitions/uint32 + + marvell,tad-pmu-page-size: + description: specifies the size of page that the pmu uses + $ref: /schemas/types.yaml#/definitions/uint32 + +required: + - compatible + - reg + - marvell,tad-cnt + - marvell,tad-page-size + - marvell,tad-pmu-page-size + +additionalProperties: false + +examples: + - | + + tad { + #address-cells = <2>; + #size-cells = <2>; + + tad_pmu@80000000 { + compatible = "marvell,odyssey-tad-pmu"; + reg = <0x87E2 0x2B030000 0x0 0x1000>; + marvell,tad-cnt = <1>; + marvell,tad-page-size = <0x1000>; + marvell,tad-pmu-page-size = <0x1000>; + }; + }; From patchwork Fri Jun 30 12:03:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gowthami Thiagarajan X-Patchwork-Id: 114712 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp10326878vqr; Fri, 30 Jun 2023 05:36:32 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5flkusOjUJF0Vv5SONKbDazie7Ev2XBQdxsCuxKNWrSkDwpVVQcvv4ObuLDifZzLcX9KDj X-Received: by 2002:a05:6a21:99a0:b0:12c:5f33:ace8 with SMTP id ve32-20020a056a2199a000b0012c5f33ace8mr2123074pzb.27.1688128592365; Fri, 30 Jun 2023 05:36:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1688128592; cv=none; d=google.com; s=arc-20160816; b=ICu37XJbvoUAGE1snHYrYspcs9m/ltLTm8Sz332wZBUEZNvsE+DgRvfl8FyubAU9gI WZmmsfprboL6eD/8FrlXhYGuXXtbUlbQNxnzifx0UhccqJmWqBQbKE6j+18ndlswolo1 
h7UJMqmI9L+Ldl0xMtmU/k44tgk8m9zGVxZBjF48RTZZeL7k01LFTtSHvHjbStY4F98u 5jZasAMgxoMZAsAh9c1OyhApNZfg+/70BpmP5GbJYYXb2BkooG4xMjcR4hllQHksJupH 3H/d8oV6GjqPm/XaMJUyHCUBqv5zK1H+tmjXD65B6g+1CzNrP8ZaWNZtGlVeQI7HOYG7 zWiA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=Ss10NP1hfzkCiv7G4kckrpVZOtyM70fj5aBc5AxSFc0=; fh=BeTAU9KFPAR19PZDKGOMxIY7iGNYrFofsfNa6AmBg5k=; b=aXZcDgRp3UK/Jcf8hG/MoA8QuYrytgKMWg+qXMtaeBk38x2vsw74CV5XBnwRYhYNLn mDl7EfHpVSziA2UxwoBo2NtpcLX5zMyIJMxsvCwUphEx9l5m9GzHy6CDf0157hDg7NUA QZaRBXxM9Xp1sOMxQMwGdCdfamPo4iB91RCUbb3Dhf0VSpATcK16INOvd/iBfD2rKBUg HhOBb3sR+XrrEVMv+hxlIY2SA4s0t5kp+zRcxG2JOTu5sl540NHEPRaQH0x6zxPBXid6 KTAjbam5PGXVpt7vBfsoAdmWSlMWaNoE/qq1+vnhAq6EE1Q/9cagY/iL0oOZlmJ0LITv 8Xfw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@marvell.com header.s=pfpt0220 header.b=PBQQxd5t; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=REJECT dis=NONE) header.from=marvell.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id bm4-20020a056a00320400b0067bb1f0b329si7058194pfb.93.2023.06.30.05.36.19; Fri, 30 Jun 2023 05:36:32 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@marvell.com header.s=pfpt0220 header.b=PBQQxd5t; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=REJECT dis=NONE) header.from=marvell.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232589AbjF3MFQ (ORCPT + 99 others); Fri, 30 Jun 2023 08:05:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50288 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233044AbjF3ME6 (ORCPT ); Fri, 30 Jun 2023 08:04:58 -0400 Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4806B3C22 for ; Fri, 30 Jun 2023 05:04:31 -0700 (PDT) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35U79wCw011195; Fri, 30 Jun 2023 05:04:22 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=Ss10NP1hfzkCiv7G4kckrpVZOtyM70fj5aBc5AxSFc0=; b=PBQQxd5tqpR0CUKDsU67RZsryBgigK22BhKwjYrG+Ak4XNw7mpfzc5hd+GBY8JVxWSN0 GZg1yYWgXIkDCs4BjIyJSF3Qy7SnDkyJnZdz6u5zLocPiBJY32Z467DD+0P2dEwPJTZc 35rpDtkZyEEZAdBuksrmLrJdJUIZFh1BGtZ9AE8GRPQ02bkAiDNxOkfXOyCGP28uvwh8 8GzNIWREN7G4EXs8U0r2xHaaA9zxZ83+SiRxrr0PKMAzyotvIXotJGAMGqgk+hbsdWXI FbJpwKuvZyjF3OWzlDJiMhsoTEQv2bG26azUyoDpRb6DJnZirRxF0ismMSYNzwmnOL8O 8g== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3rgvpc63by-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 30 Jun 2023 05:04:21 -0700 Received: from 
From patchwork Fri Jun 30 12:03:50 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gowthami Thiagarajan
X-Patchwork-Id: 114712
From: Gowthami Thiagarajan
Subject: [PATCH 5/6] perf/marvell: Odyssey DDR Performance monitor support
Date: Fri, 30 Jun 2023 17:33:50 +0530
Message-ID: <20230630120351.1143773-6-gthiagarajan@marvell.com>
In-Reply-To: <20230630120351.1143773-1-gthiagarajan@marvell.com>
References: <20230630120351.1143773-1-gthiagarajan@marvell.com>
X-Mailer: git-send-email 2.25.1
X-Mailing-List: linux-kernel@vger.kernel.org

The Odyssey DRAM Subsystem (DSS) supports eight counters for monitoring
performance, and software can program each counter to monitor any of the
defined performance events. Supported events include those counted at the
interface between the DDR controller and the PHY, at the interface between
the DDR controller and the CHI interconnect, or within the DDR controller
itself. In addition, the DSS provides two fixed performance event counters,
one for DDR reads and the other for DDR writes.
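As an aside on how such counters are consumed once the PMU is registered: the events are exposed through the perf subsystem, so a user-space program can count, for example, DDR reads with perf_event_open(). The sketch below is illustrative only; the sysfs path (and hence the PMU instance name) is an assumption, while config = 101 corresponds to the EVENT_DDR_READS define in this patch.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	unsigned int type;
	FILE *f;
	int fd;

	/* Assumed sysfs node exposing the dynamic PMU type id (name is illustrative). */
	f = fopen("/sys/bus/event_source/devices/mrvl_ddr_pmu_87e1c0000000/type", "r");
	if (!f)
		return 1;
	if (fscanf(f, "%u", &type) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;
	attr.config = 101;	/* EVENT_DDR_READS fixed counter */

	/* Uncore-style event: pid = -1 with a specific CPU; usually needs root. */
	fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd < 0)
		return 1;

	sleep(1);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("ddr reads: %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}

The same event is also reachable from the perf tool by name, since it is listed as ddr_ddr_reads in the event attribute group added below.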
Signed-off-by: Gowthami Thiagarajan --- drivers/perf/marvell_cn10k_ddr_pmu.c | 404 ++++++++++++++++++++++----- 1 file changed, 339 insertions(+), 65 deletions(-) diff --git a/drivers/perf/marvell_cn10k_ddr_pmu.c b/drivers/perf/marvell_cn10k_ddr_pmu.c index b94a5f6cc22b..d012e3d32e26 100644 --- a/drivers/perf/marvell_cn10k_ddr_pmu.c +++ b/drivers/perf/marvell_cn10k_ddr_pmu.c @@ -15,24 +15,29 @@ #include /* Performance Counters Operating Mode Control Registers */ -#define DDRC_PERF_CNT_OP_MODE_CTRL 0x8020 -#define OP_MODE_CTRL_VAL_MANNUAL 0x1 +#define CN10K_DDRC_PERF_CNT_OP_MODE_CTRL 0x8020 +#define ODY_DDRC_PERF_CNT_OP_MODE_CTRL 0x20020 +#define OP_MODE_CTRL_VAL_MANUAL 0x1 /* Performance Counters Start Operation Control Registers */ -#define DDRC_PERF_CNT_START_OP_CTRL 0x8028 -#define START_OP_CTRL_VAL_START 0x1ULL -#define START_OP_CTRL_VAL_ACTIVE 0x2 +#define CN10K_DDRC_PERF_CNT_START_OP_CTRL 0x8028 +#define ODY_DDRC_PERF_CNT_START_OP_CTRL 0x200A0 +#define START_OP_CTRL_VAL_START 0x1ULL +#define START_OP_CTRL_VAL_ACTIVE 0x2 /* Performance Counters End Operation Control Registers */ -#define DDRC_PERF_CNT_END_OP_CTRL 0x8030 -#define END_OP_CTRL_VAL_END 0x1ULL +#define CN10K_DDRC_PERF_CNT_END_OP_CTRL 0x8030 +#define ODY_DDRC_PERF_CNT_END_OP_CTRL 0x200E0 +#define END_OP_CTRL_VAL_END 0x1ULL /* Performance Counters End Status Registers */ -#define DDRC_PERF_CNT_END_STATUS 0x8038 +#define CN10K_DDRC_PERF_CNT_END_STATUS 0x8038 +#define ODY_DDRC_PERF_CNT_END_STATUS 0x20120 #define END_STATUS_VAL_END_TIMER_MODE_END 0x1 /* Performance Counters Configuration Registers */ -#define DDRC_PERF_CFG_BASE 0x8040 +#define CN10K_DDRC_PERF_CFG_BASE 0x8040 +#define ODY_DDRC_PERF_CFG_BASE 0x20160 /* 8 Generic event counter + 2 fixed event counters */ #define DDRC_PERF_NUM_GEN_COUNTERS 8 @@ -43,18 +48,31 @@ DDRC_PERF_NUM_FIX_COUNTERS) /* Generic event counter registers */ -#define DDRC_PERF_CFG(n) (DDRC_PERF_CFG_BASE + 8 * (n)) +#define DDRC_PERF_CFG(base, n) ((base) + 8 * (n)) #define EVENT_ENABLE BIT_ULL(63) /* Two dedicated event counters for DDR reads and writes */ #define EVENT_DDR_READS 101 #define EVENT_DDR_WRITES 100 +#define DDRC_PERF_REG(base, n) ((base) + 8 * (n)) /* * programmable events IDs in programmable event counters. * DO NOT change these event-id numbers, they are used to * program event bitmap in h/w. + * + */ +/* + * Additional programmable events defined in + * Odyssey. 
*/ +#define EVENT_DFI_CMD_IS_RETRY 61 +#define EVENT_RD_UC_ECC_ERROR 60 +#define EVENT_RD_CRC_ERROR 59 +#define EVENT_CAPAR_ERROR 58 +#define EVENT_WR_CRC_ERROR 57 +#define EVENT_DFI_PARITY_POISON 56 + #define EVENT_OP_IS_ZQLATCH 55 #define EVENT_OP_IS_ZQSTART 54 #define EVENT_OP_IS_TCR_MRR 53 @@ -64,8 +82,8 @@ #define EVENT_VISIBLE_WIN_LIMIT_REACHED_RD 49 #define EVENT_BSM_STARVATION 48 #define EVENT_BSM_ALLOC 47 -#define EVENT_LPR_REQ_WITH_NOCREDIT 46 -#define EVENT_HPR_REQ_WITH_NOCREDIT 45 +#define EVENT_RETRY_FIFO_FULL_OR_LPR_REQ_NOCRED 46 +#define EVENT_DFI_OR_HPR_REQ_NOCRED 45 #define EVENT_OP_IS_ZQCS 44 #define EVENT_OP_IS_ZQCL 43 #define EVENT_OP_IS_LOAD_MODE 42 @@ -103,28 +121,37 @@ #define EVENT_HIF_RD_OR_WR 1 /* Event counter value registers */ -#define DDRC_PERF_CNT_VALUE_BASE 0x8080 -#define DDRC_PERF_CNT_VALUE(n) (DDRC_PERF_CNT_VALUE_BASE + 8 * (n)) +#define CN10K_DDRC_PERF_CNT_VALUE_BASE 0x8080 +#define ODY_DDRC_PERF_CNT_VALUE_BASE 0x201C0 /* Fixed event counter enable/disable register */ -#define DDRC_PERF_CNT_FREERUN_EN 0x80C0 -#define DDRC_PERF_FREERUN_WRITE_EN 0x1 -#define DDRC_PERF_FREERUN_READ_EN 0x2 +#define CN10K_DDRC_PERF_CNT_FREERUN_EN 0x80C0 +#define DDRC_PERF_FREERUN_WRITE_EN 0x1 +#define DDRC_PERF_FREERUN_READ_EN 0x2 /* Fixed event counter control register */ -#define DDRC_PERF_CNT_FREERUN_CTRL 0x80C8 -#define DDRC_FREERUN_WRITE_CNT_CLR 0x1 -#define DDRC_FREERUN_READ_CNT_CLR 0x2 +#define CN10K_DDRC_PERF_CNT_FREERUN_CTRL 0x80C8 +#define ODY_DDRC_PERF_CNT_FREERUN_CTRL 0x20240 +#define DDRC_FREERUN_WRITE_CNT_CLR 0x1 +#define DDRC_FREERUN_READ_CNT_CLR 0x2 + +/* Fixed event counter clear register, defined only for Odyssey */ +#define ODY_DDRC_PERF_CNT_FREERUN_CLR 0x20248 /* Fixed event counter value register */ -#define DDRC_PERF_CNT_VALUE_WR_OP 0x80D0 -#define DDRC_PERF_CNT_VALUE_RD_OP 0x80D8 -#define DDRC_PERF_CNT_VALUE_OVERFLOW BIT_ULL(48) -#define DDRC_PERF_CNT_MAX_VALUE GENMASK_ULL(48, 0) +#define CN10K_DDRC_PERF_CNT_VALUE_WR_OP 0x80D0 +#define CN10K_DDRC_PERF_CNT_VALUE_RD_OP 0x80D8 +#define ODY_DDRC_PERF_CNT_VALUE_WR_OP 0x20250 +#define ODY_DDRC_PERF_CNT_VALUE_RD_OP 0x20258 + +#define VERSION_V1 1 +#define VERSION_V2 2 struct cn10k_ddr_pmu { struct pmu pmu; void __iomem *base; + struct ddr_pmu_platform_data *p_data; + int version; unsigned int cpu; struct device *dev; int active_events; @@ -135,6 +162,54 @@ struct cn10k_ddr_pmu { #define to_cn10k_ddr_pmu(p) container_of(p, struct cn10k_ddr_pmu, pmu) +struct ddr_pmu_platform_data { + u64 counter_overflow_val; + u64 counter_max_val; + u64 ddrc_perf_cnt_base; + u64 ddrc_perf_cfg_base; + u64 ddrc_perf_cnt_op_mode_ctrl; + u64 ddrc_perf_cnt_start_op_ctrl; + u64 ddrc_perf_cnt_end_op_ctrl; + u64 ddrc_perf_cnt_end_status; + u64 ddrc_perf_cnt_freerun_en; + u64 ddrc_perf_cnt_freerun_ctrl; + u64 ddrc_perf_cnt_freerun_clr; + u64 ddrc_perf_cnt_value_wr_op; + u64 ddrc_perf_cnt_value_rd_op; +}; + +static const struct ddr_pmu_platform_data cn10k_ddr_pmu_pdata = { + .counter_overflow_val = BIT_ULL(48), + .counter_max_val = GENMASK_ULL(48, 0), + .ddrc_perf_cnt_base = CN10K_DDRC_PERF_CNT_VALUE_BASE, + .ddrc_perf_cfg_base = CN10K_DDRC_PERF_CFG_BASE, + .ddrc_perf_cnt_op_mode_ctrl = CN10K_DDRC_PERF_CNT_OP_MODE_CTRL, + .ddrc_perf_cnt_start_op_ctrl = CN10K_DDRC_PERF_CNT_START_OP_CTRL, + .ddrc_perf_cnt_end_op_ctrl = CN10K_DDRC_PERF_CNT_END_OP_CTRL, + .ddrc_perf_cnt_end_status = CN10K_DDRC_PERF_CNT_END_STATUS, + .ddrc_perf_cnt_freerun_en = CN10K_DDRC_PERF_CNT_FREERUN_EN, + .ddrc_perf_cnt_freerun_ctrl = CN10K_DDRC_PERF_CNT_FREERUN_CTRL, + 
.ddrc_perf_cnt_freerun_clr = 0, + .ddrc_perf_cnt_value_wr_op = CN10K_DDRC_PERF_CNT_VALUE_WR_OP, + .ddrc_perf_cnt_value_rd_op = CN10K_DDRC_PERF_CNT_VALUE_RD_OP, +}; + +static const struct ddr_pmu_platform_data odyssey_ddr_pmu_pdata = { + .counter_overflow_val = 0, + .counter_max_val = GENMASK_ULL(63, 0), + .ddrc_perf_cnt_base = ODY_DDRC_PERF_CNT_VALUE_BASE, + .ddrc_perf_cfg_base = ODY_DDRC_PERF_CFG_BASE, + .ddrc_perf_cnt_op_mode_ctrl = ODY_DDRC_PERF_CNT_OP_MODE_CTRL, + .ddrc_perf_cnt_start_op_ctrl = ODY_DDRC_PERF_CNT_START_OP_CTRL, + .ddrc_perf_cnt_end_op_ctrl = ODY_DDRC_PERF_CNT_END_OP_CTRL, + .ddrc_perf_cnt_end_status = ODY_DDRC_PERF_CNT_END_STATUS, + .ddrc_perf_cnt_freerun_en = 0, + .ddrc_perf_cnt_freerun_ctrl = ODY_DDRC_PERF_CNT_FREERUN_CTRL, + .ddrc_perf_cnt_freerun_clr = ODY_DDRC_PERF_CNT_FREERUN_CLR, + .ddrc_perf_cnt_value_wr_op = ODY_DDRC_PERF_CNT_VALUE_WR_OP, + .ddrc_perf_cnt_value_rd_op = ODY_DDRC_PERF_CNT_VALUE_RD_OP, +}; + static ssize_t cn10k_ddr_pmu_event_show(struct device *dev, struct device_attribute *attr, char *page) @@ -190,9 +265,9 @@ static struct attribute *cn10k_ddr_perf_events_attrs[] = { CN10K_DDR_PMU_EVENT_ATTR(ddr_zqcl, EVENT_OP_IS_ZQCL), CN10K_DDR_PMU_EVENT_ATTR(ddr_cam_wr_access, EVENT_OP_IS_ZQCS), CN10K_DDR_PMU_EVENT_ATTR(ddr_hpr_req_with_nocredit, - EVENT_HPR_REQ_WITH_NOCREDIT), + EVENT_DFI_OR_HPR_REQ_NOCRED), CN10K_DDR_PMU_EVENT_ATTR(ddr_lpr_req_with_nocredit, - EVENT_LPR_REQ_WITH_NOCREDIT), + EVENT_RETRY_FIFO_FULL_OR_LPR_REQ_NOCRED), CN10K_DDR_PMU_EVENT_ATTR(ddr_bsm_alloc, EVENT_BSM_ALLOC), CN10K_DDR_PMU_EVENT_ATTR(ddr_bsm_starvation, EVENT_BSM_STARVATION), CN10K_DDR_PMU_EVENT_ATTR(ddr_win_limit_reached_rd, @@ -215,6 +290,78 @@ static struct attribute_group cn10k_ddr_perf_events_attr_group = { .attrs = cn10k_ddr_perf_events_attrs, }; +static struct attribute *odyssey_ddr_perf_events_attrs[] = { + /* Programmable */ + CN10K_DDR_PMU_EVENT_ATTR(ddr_hif_rd_or_wr_access, EVENT_HIF_RD_OR_WR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_hif_wr_access, EVENT_HIF_WR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_hif_rd_access, EVENT_HIF_RD), + CN10K_DDR_PMU_EVENT_ATTR(ddr_hif_rmw_access, EVENT_HIF_RMW), + CN10K_DDR_PMU_EVENT_ATTR(ddr_hif_pri_rdaccess, EVENT_HIF_HI_PRI_RD), + CN10K_DDR_PMU_EVENT_ATTR(ddr_rd_bypass_access, EVENT_READ_BYPASS), + CN10K_DDR_PMU_EVENT_ATTR(ddr_act_bypass_access, EVENT_ACT_BYPASS), + CN10K_DDR_PMU_EVENT_ATTR(ddr_dfi_wr_data_access, EVENT_DFI_WR_DATA_CYCLES), + CN10K_DDR_PMU_EVENT_ATTR(ddr_dfi_rd_data_access, EVENT_DFI_RD_DATA_CYCLES), + CN10K_DDR_PMU_EVENT_ATTR(ddr_hpri_sched_rd_crit_access, + EVENT_HPR_XACT_WHEN_CRITICAL), + CN10K_DDR_PMU_EVENT_ATTR(ddr_lpri_sched_rd_crit_access, + EVENT_LPR_XACT_WHEN_CRITICAL), + CN10K_DDR_PMU_EVENT_ATTR(ddr_wr_trxn_crit_access, + EVENT_WR_XACT_WHEN_CRITICAL), + CN10K_DDR_PMU_EVENT_ATTR(ddr_cam_active_access, EVENT_OP_IS_ACTIVATE), + CN10K_DDR_PMU_EVENT_ATTR(ddr_cam_rd_or_wr_access, EVENT_OP_IS_RD_OR_WR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_cam_rd_active_access, EVENT_OP_IS_RD_ACTIVATE), + CN10K_DDR_PMU_EVENT_ATTR(ddr_cam_read, EVENT_OP_IS_RD), + CN10K_DDR_PMU_EVENT_ATTR(ddr_cam_write, EVENT_OP_IS_WR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_cam_mwr, EVENT_OP_IS_MWR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_precharge, EVENT_OP_IS_PRECHARGE), + CN10K_DDR_PMU_EVENT_ATTR(ddr_precharge_for_rdwr, EVENT_PRECHARGE_FOR_RDWR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_precharge_for_other, + EVENT_PRECHARGE_FOR_OTHER), + CN10K_DDR_PMU_EVENT_ATTR(ddr_rdwr_transitions, EVENT_RDWR_TRANSITIONS), + CN10K_DDR_PMU_EVENT_ATTR(ddr_write_combine, EVENT_WRITE_COMBINE), + 
CN10K_DDR_PMU_EVENT_ATTR(ddr_war_hazard, EVENT_WAR_HAZARD), + CN10K_DDR_PMU_EVENT_ATTR(ddr_raw_hazard, EVENT_RAW_HAZARD), + CN10K_DDR_PMU_EVENT_ATTR(ddr_waw_hazard, EVENT_WAW_HAZARD), + CN10K_DDR_PMU_EVENT_ATTR(ddr_enter_selfref, EVENT_OP_IS_ENTER_SELFREF), + CN10K_DDR_PMU_EVENT_ATTR(ddr_enter_powerdown, EVENT_OP_IS_ENTER_POWERDOWN), + CN10K_DDR_PMU_EVENT_ATTR(ddr_enter_mpsm, EVENT_OP_IS_ENTER_MPSM), + CN10K_DDR_PMU_EVENT_ATTR(ddr_refresh, EVENT_OP_IS_REFRESH), + CN10K_DDR_PMU_EVENT_ATTR(ddr_crit_ref, EVENT_OP_IS_CRIT_REF), + CN10K_DDR_PMU_EVENT_ATTR(ddr_spec_ref, EVENT_OP_IS_SPEC_REF), + CN10K_DDR_PMU_EVENT_ATTR(ddr_load_mode, EVENT_OP_IS_LOAD_MODE), + CN10K_DDR_PMU_EVENT_ATTR(ddr_zqcl, EVENT_OP_IS_ZQCL), + CN10K_DDR_PMU_EVENT_ATTR(ddr_cam_wr_access, EVENT_OP_IS_ZQCS), + CN10K_DDR_PMU_EVENT_ATTR(ddr_dfi_cycles, EVENT_DFI_OR_HPR_REQ_NOCRED), + CN10K_DDR_PMU_EVENT_ATTR(ddr_retry_fifo_full, + EVENT_RETRY_FIFO_FULL_OR_LPR_REQ_NOCRED), + CN10K_DDR_PMU_EVENT_ATTR(ddr_bsm_alloc, EVENT_BSM_ALLOC), + CN10K_DDR_PMU_EVENT_ATTR(ddr_bsm_starvation, EVENT_BSM_STARVATION), + CN10K_DDR_PMU_EVENT_ATTR(ddr_win_limit_reached_rd, + EVENT_VISIBLE_WIN_LIMIT_REACHED_RD), + CN10K_DDR_PMU_EVENT_ATTR(ddr_win_limit_reached_wr, + EVENT_VISIBLE_WIN_LIMIT_REACHED_WR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_dqsosc_mpc, EVENT_OP_IS_DQSOSC_MPC), + CN10K_DDR_PMU_EVENT_ATTR(ddr_dqsosc_mrr, EVENT_OP_IS_DQSOSC_MRR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_tcr_mrr, EVENT_OP_IS_TCR_MRR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_zqstart, EVENT_OP_IS_ZQSTART), + CN10K_DDR_PMU_EVENT_ATTR(ddr_zqlatch, EVENT_OP_IS_ZQLATCH), + CN10K_DDR_PMU_EVENT_ATTR(ddr_dfi_parity_poison, EVENT_DFI_PARITY_POISON), + CN10K_DDR_PMU_EVENT_ATTR(ddr_wr_crc_error, EVENT_WR_CRC_ERROR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_capar_error, EVENT_CAPAR_ERROR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_rd_crc_error, EVENT_RD_CRC_ERROR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_rd_uc_ecc_error, EVENT_RD_UC_ECC_ERROR), + CN10K_DDR_PMU_EVENT_ATTR(ddr_dfi_cmd_is_retry, EVENT_DFI_CMD_IS_RETRY), + /* Free run event counters */ + CN10K_DDR_PMU_EVENT_ATTR(ddr_ddr_reads, EVENT_DDR_READS), + CN10K_DDR_PMU_EVENT_ATTR(ddr_ddr_writes, EVENT_DDR_WRITES), + NULL +}; + +static struct attribute_group odyssey_ddr_perf_events_attr_group = { + .name = "events", + .attrs = odyssey_ddr_perf_events_attrs, +}; + PMU_FORMAT_ATTR(event, "config:0-8"); static struct attribute *cn10k_ddr_perf_format_attrs[] = { @@ -255,6 +402,13 @@ static const struct attribute_group *cn10k_attr_groups[] = { NULL, }; +static const struct attribute_group *odyssey_attr_groups[] = { + &odyssey_ddr_perf_events_attr_group, + &cn10k_ddr_perf_format_attr_group, + &cn10k_ddr_perf_cpumask_attr_group, + NULL +}; + /* Default poll timeout is 100 sec, which is very sufficient for * 48 bit counter incremented max at 5.6 GT/s, which may take many * hours to overflow. @@ -267,13 +421,23 @@ static ktime_t cn10k_ddr_pmu_timer_period(void) return ms_to_ktime((u64)cn10k_ddr_pmu_poll_period_sec * USEC_PER_SEC); } -static int ddr_perf_get_event_bitmap(int eventid, u64 *event_bitmap) +static int ddr_perf_get_event_bitmap(int eventid, u64 *event_bitmap, struct cn10k_ddr_pmu *ddr_pmu) { + int ret = 0; + switch (eventid) { case EVENT_HIF_RD_OR_WR ... EVENT_WAW_HAZARD: case EVENT_OP_IS_REFRESH ... 
EVENT_OP_IS_ZQLATCH: *event_bitmap = (1ULL << (eventid - 1)); break; + case EVENT_DFI_PARITY_POISON ...EVENT_DFI_CMD_IS_RETRY: + if (ddr_pmu->version == VERSION_V2) { + *event_bitmap = (1ULL << (eventid - 1)); + } else { + pr_err("%s Invalid eventid %d\n", __func__, eventid); + ret = -EINVAL; + } + break; case EVENT_OP_IS_ENTER_SELFREF: case EVENT_OP_IS_ENTER_POWERDOWN: case EVENT_OP_IS_ENTER_MPSM: @@ -281,10 +445,10 @@ static int ddr_perf_get_event_bitmap(int eventid, u64 *event_bitmap) break; default: pr_err("%s Invalid eventid %d\n", __func__, eventid); - return -EINVAL; + ret = -EINVAL; } - return 0; + return ret; } static int cn10k_ddr_perf_alloc_counter(struct cn10k_ddr_pmu *pmu, @@ -357,6 +521,7 @@ static void cn10k_ddr_perf_counter_enable(struct cn10k_ddr_pmu *pmu, { u32 reg; u64 val; + struct ddr_pmu_platform_data *p_data = pmu->p_data; if (counter > DDRC_PERF_NUM_COUNTERS) { pr_err("Error: unsupported counter %d\n", counter); @@ -364,7 +529,7 @@ static void cn10k_ddr_perf_counter_enable(struct cn10k_ddr_pmu *pmu, } if (counter < DDRC_PERF_NUM_GEN_COUNTERS) { - reg = DDRC_PERF_CFG(counter); + reg = DDRC_PERF_CFG(p_data->ddrc_perf_cfg_base, counter); val = readq_relaxed(pmu->base + reg); if (enable) @@ -374,7 +539,11 @@ static void cn10k_ddr_perf_counter_enable(struct cn10k_ddr_pmu *pmu, writeq_relaxed(val, pmu->base + reg); } else { - val = readq_relaxed(pmu->base + DDRC_PERF_CNT_FREERUN_EN); + if (p_data->ddrc_perf_cnt_freerun_en) + val = readq_relaxed(pmu->base + p_data->ddrc_perf_cnt_freerun_en); + else + val = readq_relaxed(pmu->base + p_data->ddrc_perf_cnt_freerun_ctrl); + if (enable) { if (counter == DDRC_PERF_READ_COUNTER_IDX) val |= DDRC_PERF_FREERUN_READ_EN; @@ -386,7 +555,11 @@ static void cn10k_ddr_perf_counter_enable(struct cn10k_ddr_pmu *pmu, else val &= ~DDRC_PERF_FREERUN_WRITE_EN; } - writeq_relaxed(val, pmu->base + DDRC_PERF_CNT_FREERUN_EN); + + if (p_data->ddrc_perf_cnt_freerun_en) + writeq_relaxed(val, pmu->base + p_data->ddrc_perf_cnt_freerun_en); + else + writeq_relaxed(val, pmu->base + p_data->ddrc_perf_cnt_freerun_ctrl); } } @@ -394,13 +567,15 @@ static u64 cn10k_ddr_perf_read_counter(struct cn10k_ddr_pmu *pmu, int counter) { u64 val; + struct ddr_pmu_platform_data *p_data = pmu->p_data; + if (counter == DDRC_PERF_READ_COUNTER_IDX) - return readq_relaxed(pmu->base + DDRC_PERF_CNT_VALUE_RD_OP); + return readq_relaxed(pmu->base + p_data->ddrc_perf_cnt_value_rd_op); if (counter == DDRC_PERF_WRITE_COUNTER_IDX) - return readq_relaxed(pmu->base + DDRC_PERF_CNT_VALUE_WR_OP); + return readq_relaxed(pmu->base + p_data->ddrc_perf_cnt_value_wr_op); - val = readq_relaxed(pmu->base + DDRC_PERF_CNT_VALUE(counter)); + val = readq_relaxed(pmu->base + DDRC_PERF_REG(p_data->ddrc_perf_cnt_base, counter)); return val; } @@ -408,6 +583,7 @@ static void cn10k_ddr_perf_event_update(struct perf_event *event) { struct cn10k_ddr_pmu *pmu = to_cn10k_ddr_pmu(event->pmu); struct hw_perf_event *hwc = &event->hw; + struct ddr_pmu_platform_data *p_data = pmu->p_data; u64 prev_count, new_count, mask; do { @@ -415,11 +591,27 @@ static void cn10k_ddr_perf_event_update(struct perf_event *event) new_count = cn10k_ddr_perf_read_counter(pmu, hwc->idx); } while (local64_xchg(&hwc->prev_count, new_count) != prev_count); - mask = DDRC_PERF_CNT_MAX_VALUE; + mask = p_data->counter_max_val; local64_add((new_count - prev_count) & mask, &event->count); } +static void cn10k_ddr_perf_counter_start(struct cn10k_ddr_pmu *ddr_pmu, int counter) +{ + struct ddr_pmu_platform_data *p_data = ddr_pmu->p_data; + + 
writeq_relaxed(START_OP_CTRL_VAL_START, ddr_pmu->base + + DDRC_PERF_REG(p_data->ddrc_perf_cnt_start_op_ctrl, counter)); +} + +static void cn10k_ddr_perf_counter_stop(struct cn10k_ddr_pmu *ddr_pmu, int counter) +{ + struct ddr_pmu_platform_data *p_data = ddr_pmu->p_data; + + writeq_relaxed(END_OP_CTRL_VAL_END, ddr_pmu->base + + DDRC_PERF_REG(p_data->ddrc_perf_cnt_end_op_ctrl, counter)); +} + static void cn10k_ddr_perf_event_start(struct perf_event *event, int flags) { struct cn10k_ddr_pmu *pmu = to_cn10k_ddr_pmu(event->pmu); @@ -429,6 +621,14 @@ static void cn10k_ddr_perf_event_start(struct perf_event *event, int flags) local64_set(&hwc->prev_count, 0); cn10k_ddr_perf_counter_enable(pmu, counter, true); + if (pmu->version == VERSION_V2) { + /* Setup the PMU counter to work in manual mode */ + writeq_relaxed(OP_MODE_CTRL_VAL_MANUAL, pmu->base + + DDRC_PERF_REG(pmu->p_data->ddrc_perf_cnt_op_mode_ctrl, + counter)); + + cn10k_ddr_perf_counter_start(pmu, counter); + } hwc->state = 0; } @@ -436,6 +636,7 @@ static void cn10k_ddr_perf_event_start(struct perf_event *event, int flags) static int cn10k_ddr_perf_event_add(struct perf_event *event, int flags) { struct cn10k_ddr_pmu *pmu = to_cn10k_ddr_pmu(event->pmu); + struct ddr_pmu_platform_data *p_data = pmu->p_data; struct hw_perf_event *hwc = &event->hw; u8 config = event->attr.config; int counter, ret; @@ -455,8 +656,8 @@ static int cn10k_ddr_perf_event_add(struct perf_event *event, int flags) if (counter < DDRC_PERF_NUM_GEN_COUNTERS) { /* Generic counters, configure event id */ - reg_offset = DDRC_PERF_CFG(counter); - ret = ddr_perf_get_event_bitmap(config, &val); + reg_offset = DDRC_PERF_CFG(p_data->ddrc_perf_cfg_base, counter); + ret = ddr_perf_get_event_bitmap(config, &val, pmu); if (ret) return ret; @@ -468,7 +669,10 @@ static int cn10k_ddr_perf_event_add(struct perf_event *event, int flags) else val = DDRC_FREERUN_WRITE_CNT_CLR; - writeq_relaxed(val, pmu->base + DDRC_PERF_CNT_FREERUN_CTRL); + if (p_data->ddrc_perf_cnt_freerun_clr) + writeq_relaxed(val, pmu->base + p_data->ddrc_perf_cnt_freerun_clr); + else + writeq_relaxed(val, pmu->base + p_data->ddrc_perf_cnt_freerun_ctrl); } hwc->state |= PERF_HES_STOPPED; @@ -487,6 +691,9 @@ static void cn10k_ddr_perf_event_stop(struct perf_event *event, int flags) cn10k_ddr_perf_counter_enable(pmu, counter, false); + if (pmu->version == VERSION_V2) + cn10k_ddr_perf_counter_stop(pmu, counter); + if (flags & PERF_EF_UPDATE) cn10k_ddr_perf_event_update(event); @@ -513,17 +720,19 @@ static void cn10k_ddr_perf_event_del(struct perf_event *event, int flags) static void cn10k_ddr_perf_pmu_enable(struct pmu *pmu) { struct cn10k_ddr_pmu *ddr_pmu = to_cn10k_ddr_pmu(pmu); + struct ddr_pmu_platform_data *p_data = ddr_pmu->p_data; writeq_relaxed(START_OP_CTRL_VAL_START, ddr_pmu->base + - DDRC_PERF_CNT_START_OP_CTRL); + p_data->ddrc_perf_cnt_start_op_ctrl); } static void cn10k_ddr_perf_pmu_disable(struct pmu *pmu) { struct cn10k_ddr_pmu *ddr_pmu = to_cn10k_ddr_pmu(pmu); + struct ddr_pmu_platform_data *p_data = ddr_pmu->p_data; writeq_relaxed(END_OP_CTRL_VAL_END, ddr_pmu->base + - DDRC_PERF_CNT_END_OP_CTRL); + p_data->ddrc_perf_cnt_end_op_ctrl); } static void cn10k_ddr_perf_event_update_all(struct cn10k_ddr_pmu *pmu) @@ -550,6 +759,7 @@ static void cn10k_ddr_perf_event_update_all(struct cn10k_ddr_pmu *pmu) static irqreturn_t cn10k_ddr_pmu_overflow_handler(struct cn10k_ddr_pmu *pmu) { + struct ddr_pmu_platform_data *p_data = pmu->p_data; struct perf_event *event; struct hw_perf_event *hwc; u64 prev_count, new_count; 
@@ -587,11 +797,23 @@ static irqreturn_t cn10k_ddr_pmu_overflow_handler(struct cn10k_ddr_pmu *pmu) continue; value = cn10k_ddr_perf_read_counter(pmu, i); - if (value == DDRC_PERF_CNT_MAX_VALUE) { + if (value == p_data->counter_max_val) { pr_info("Counter-(%d) reached max value\n", i); - cn10k_ddr_perf_event_update_all(pmu); - cn10k_ddr_perf_pmu_disable(&pmu->pmu); - cn10k_ddr_perf_pmu_enable(&pmu->pmu); + /* + * As separate control register is added for each counter + * in odyssey, no need to update all the events + * + */ + if (pmu->version == VERSION_V2) { + cn10k_ddr_perf_event_update(pmu->events[i]); + cn10k_ddr_perf_counter_stop(pmu, i); + cn10k_ddr_perf_counter_start(pmu, i); + + } else { + cn10k_ddr_perf_event_update_all(pmu); + cn10k_ddr_perf_pmu_disable(&pmu->pmu); + cn10k_ddr_perf_pmu_enable(&pmu->pmu); + } } } @@ -632,7 +854,10 @@ static int cn10k_ddr_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) static int cn10k_ddr_perf_probe(struct platform_device *pdev) { + struct ddr_pmu_platform_data *pltfm_data; + struct device *dev = &pdev->dev; struct cn10k_ddr_pmu *ddr_pmu; + const char *compatible; struct resource *res; void __iomem *base; char *name; @@ -643,6 +868,13 @@ static int cn10k_ddr_perf_probe(struct platform_device *pdev) return -ENOMEM; ddr_pmu->dev = &pdev->dev; + + pltfm_data = (struct ddr_pmu_platform_data *)device_get_match_data(&pdev->dev); + if (!pltfm_data) { + dev_err(&pdev->dev, "Error: No device match data found\n"); + return -ENODEV; + } + ddr_pmu->p_data = pltfm_data; platform_set_drvdata(pdev, ddr_pmu); base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); @@ -651,25 +883,59 @@ static int cn10k_ddr_perf_probe(struct platform_device *pdev) ddr_pmu->base = base; - /* Setup the PMU counter to work in manual mode */ - writeq_relaxed(OP_MODE_CTRL_VAL_MANNUAL, ddr_pmu->base + - DDRC_PERF_CNT_OP_MODE_CTRL); - - ddr_pmu->pmu = (struct pmu) { - .module = THIS_MODULE, - .capabilities = PERF_PMU_CAP_NO_EXCLUDE, - .task_ctx_nr = perf_invalid_context, - .attr_groups = cn10k_attr_groups, - .event_init = cn10k_ddr_perf_event_init, - .add = cn10k_ddr_perf_event_add, - .del = cn10k_ddr_perf_event_del, - .start = cn10k_ddr_perf_event_start, - .stop = cn10k_ddr_perf_event_stop, - .read = cn10k_ddr_perf_event_update, - .pmu_enable = cn10k_ddr_perf_pmu_enable, - .pmu_disable = cn10k_ddr_perf_pmu_disable, - }; + ret = device_property_read_string(dev, "compatible", &compatible); + if (ret) { + pr_err("compatible property not found\n"); + return ret; + } + if ((strncmp("marvell,cn10k-ddr-pmu", compatible, + strlen(compatible)) == 0)) + ddr_pmu->version = VERSION_V1; + else + ddr_pmu->version = VERSION_V2; + + if (ddr_pmu->version == VERSION_V1) { + ddr_pmu->pmu = (struct pmu) { + .module = THIS_MODULE, + .capabilities = PERF_PMU_CAP_NO_EXCLUDE, + .task_ctx_nr = perf_invalid_context, + .attr_groups = cn10k_attr_groups, + .event_init = cn10k_ddr_perf_event_init, + .add = cn10k_ddr_perf_event_add, + .del = cn10k_ddr_perf_event_del, + .start = cn10k_ddr_perf_event_start, + .stop = cn10k_ddr_perf_event_stop, + .read = cn10k_ddr_perf_event_update, + .pmu_enable = cn10k_ddr_perf_pmu_enable, + .pmu_disable = cn10k_ddr_perf_pmu_disable, + }; + + /* + * As we have separate control registers for each counter in Odyssey, + * setting up the mode will be done when we enable each counter + * + */ + + /* Setup the PMU counter to work in manual mode */ + writeq(OP_MODE_CTRL_VAL_MANUAL, ddr_pmu->base + + (ddr_pmu->p_data->ddrc_perf_cnt_op_mode_ctrl)); + } else { + ddr_pmu->pmu = 
(struct pmu) {
+			.module = THIS_MODULE,
+			.capabilities = PERF_PMU_CAP_NO_EXCLUDE,
+			.task_ctx_nr = perf_invalid_context,
+			.attr_groups = odyssey_attr_groups,
+			.event_init = cn10k_ddr_perf_event_init,
+			.add = cn10k_ddr_perf_event_add,
+			.del = cn10k_ddr_perf_event_del,
+			.start = cn10k_ddr_perf_event_start,
+			.stop = cn10k_ddr_perf_event_stop,
+			.read = cn10k_ddr_perf_event_update,
+			.pmu_enable = NULL,
+			.pmu_disable = NULL,
+		};
+	}
 
 	/* Choose this cpu to collect perf data */
 	ddr_pmu->cpu = raw_smp_processor_id();
@@ -712,7 +978,8 @@ static int cn10k_ddr_perf_remove(struct platform_device *pdev)
 
 #ifdef CONFIG_OF
 static const struct of_device_id cn10k_ddr_pmu_of_match[] = {
-	{ .compatible = "marvell,cn10k-ddr-pmu", },
+	{ .compatible = "marvell,cn10k-ddr-pmu", .data = &cn10k_ddr_pmu_pdata },
+	{ .compatible = "marvell,odyssey-ddr-pmu", .data = &odyssey_ddr_pmu_pdata },
 	{ },
 };
 MODULE_DEVICE_TABLE(of, cn10k_ddr_pmu_of_match);
@@ -720,7 +987,14 @@ MODULE_DEVICE_TABLE(of, cn10k_ddr_pmu_of_match);
 
 #ifdef CONFIG_ACPI
 static const struct acpi_device_id cn10k_ddr_pmu_acpi_match[] = {
-	{"MRVL000A", 0},
+	{
+		.id = "MRVL000A",
+		.driver_data = (kernel_ulong_t)&cn10k_ddr_pmu_pdata,
+	},
+	{
+		.id = "MRVL000C",
+		.driver_data = (kernel_ulong_t)&odyssey_ddr_pmu_pdata,
+	},
 	{},
 };
 MODULE_DEVICE_TABLE(acpi, cn10k_ddr_pmu_acpi_match);
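One detail worth spelling out from the platform data above: CN10K exposes 48-bit counters (counter_max_val = GENMASK_ULL(48, 0)) while the Odyssey counters use the full 64 bits, and the shared event-update path computes the delta as (new - prev) & counter_max_val so that a counter which wraps between two reads is still credited correctly. The standalone snippet below (illustration only, not driver code, and assuming the counter wraps past counter_max_val) walks through that arithmetic with concrete numbers.

#include <inttypes.h>
#include <stdio.h>

#define CN10K_COUNTER_MASK ((1ULL << 49) - 1)	/* GENMASK_ULL(48, 0) */

int main(void)
{
	uint64_t prev = CN10K_COUNTER_MASK - 2;	/* last reading, close to the max */
	uint64_t new_val = 5;			/* next reading, after the counter wrapped */
	uint64_t delta = (new_val - prev) & CN10K_COUNTER_MASK;

	/* Prints 8: three increments to wrap through zero plus five after it. */
	printf("delta = %" PRIu64 "\n", delta);
	return 0;
}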
From patchwork Fri Jun 30 12:03:51 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gowthami Thiagarajan
X-Patchwork-Id: 114713
From: Gowthami Thiagarajan
Subject: [PATCH 6/6] dt-bindings: Add YAML schemas for Marvell Odyssey DDR PMU
Date: Fri, 30 Jun 2023 17:33:51 +0530
Message-ID: <20230630120351.1143773-7-gthiagarajan@marvell.com>
In-Reply-To: <20230630120351.1143773-1-gthiagarajan@marvell.com>
References: <20230630120351.1143773-1-gthiagarajan@marvell.com>
X-Mailer: git-send-email 2.25.1
X-Mailing-List: linux-kernel@vger.kernel.org
Add device tree bindings for Marvell Odyssey DDR PMU.

Signed-off-by: Gowthami Thiagarajan
---
 .../devicetree/bindings/perf/marvell-cn10k-ddr.yaml | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/Documentation/devicetree/bindings/perf/marvell-cn10k-ddr.yaml b/Documentation/devicetree/bindings/perf/marvell-cn10k-ddr.yaml
index a18dd0a8c43a..a435cbf4aea0 100644
--- a/Documentation/devicetree/bindings/perf/marvell-cn10k-ddr.yaml
+++ b/Documentation/devicetree/bindings/perf/marvell-cn10k-ddr.yaml
@@ -11,10 +11,15 @@ maintainers:
 
 properties:
   compatible:
-    items:
+    oneOf:
       - enum:
           - marvell,cn10k-ddr-pmu
-
+          - marvell,odyssey-ddr-pmu
+      - items:
+          - enum:
+              - marvell,cn10k-ddr-pmu
+              - marvell,odyssey-ddr-pmu
+          - const: marvell,cn10k-ddr-pmu
 
   reg:
     maxItems: 1