[v1,02/20] perf jevents: Add idle metric for Intel models

Message ID 20240229001806.4158429-3-irogers@google.com
State New
Series Python generated Intel metrics

Commit Message

Ian Rogers Feb. 29, 2024, 12:17 a.m. UTC
  Compute, using the msr PMU, the percentage of wallclock cycles where the
CPUs are in a low power state.
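Equivalently (given that mperf counts at TSC frequency but only while the
CPU is unhalted, i.e. in C0, while tsc counts continuously):
idle% = 100 * max(tsc - mperf, 0) / tsc.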

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/pmu-events/intel_metrics.py | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
  

Comments

Andi Kleen March 1, 2024, 5:49 p.m. UTC | #1
> +def Idle() -> Metric:
> +  cyc = Event("msr/mperf/")
> +  tsc = Event("msr/tsc/")
> +  low = max(tsc - cyc, 0)
> +  return Metric(
> +      "idle",
> +      "Percentage of total wallclock cycles where CPUs are in low power state (C1 or deeper sleep state)",
> +      d_ratio(low, tsc), "100%")

TBH I fail to see the advantage over the JSON. That's much more verbose
and we don't expect to have really complex metrics anyways.

And then we have a gigantic patch kit for what gain?

The motivation was the lack of comments in JSON? We could just add some
to the parser (e.g. with /* */). And we could allow a JSON array for the
expression to get multiple lines.
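For instance (purely hypothetical syntax, nothing the parser accepts
today), a commented, multi-line expression might look like:

```
    "MetricExpr": [
        "/* wallclock cycles not spent in C0, clamped at zero */",
        "max(msr@tsc@ - msr@mperf@, 0)",
        " / msr@tsc@"
    ],
```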


-Andi
  
Ian Rogers March 1, 2024, 6:17 p.m. UTC | #2
On Fri, Mar 1, 2024 at 9:49 AM Andi Kleen <ak@linux.intel.com> wrote:
>
> > +def Idle() -> Metric:
> > +  cyc = Event("msr/mperf/")
> > +  tsc = Event("msr/tsc/")
> > +  low = max(tsc - cyc, 0)
> > +  return Metric(
> > +      "idle",
> > +      "Percentage of total wallclock cycles where CPUs are in low power state (C1 or deeper sleep state)",
> > +      d_ratio(low, tsc), "100%")
>
> TBH I fail to see the advantage over the JSON. That's much more verbose
> and we don't expect to have really complex metrics anyways.

Are you saying this is more verbose, or that the json is? Here is an
example of a json metric:

https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json?h=perf-tools-next#n652
```
    {
        "BriefDescription": "Probability of Core Bound bottleneck
hidden by SMT-profiling artifacts",
        "MetricExpr": "(100 * (1 - tma_core_bound /
(((EXE_ACTIVITY.EXE_BOUND_0_PORTS + tma_core_bound *
RS_EVENTS.EMPTY_CYCLES) / CPU_CLK_UNHALTED.THREAD *
(CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) /
CPU_CLK_UNHALTED.THREAD * CPU_CLK_UNHALTED.THREAD +
(EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring *
EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if
ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL -
CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL +
tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD)
if tma_core_bound < (((EXE_ACTIVITY.EXE_BOUND_0_PORTS + tma_core_bound
* RS_EVENTS.EMPTY_CYCLES) / CPU_CLK_UNHALTED.THREAD *
(CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) /
CPU_CLK_UNHALTED.THREAD * CPU_CLK_UNHALTED.THREAD +
(EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring *
EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if
ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL -
CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL +
tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD)
else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
        "MetricGroup": "Cor;SMT;TopdownL1;tma_L1_group",
        "MetricName": "tma_info_botlnk_core_bound_likely",
        "MetricgroupNoGroup": "TopdownL1"
    },
```

Even with common metrics like tma_core_bound, tma_retiring and
tma_info_system_smt_2t_utilization replacing sections of the metric, I
think anyone has to admit the expression is pretty unintelligible
because of its size/verbosity. Understanding the metric would, as a
first step, involve adding newlines. Comments would be nice, etc.
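For contrast, here is a rough sketch of how a fragment of that
expression could be written in the python style, using the metric.py
primitives that appear in the patch below (the names and structure are
illustrative only, not the real skylakex definition):

```
def CoreBoundLikely() -> Metric:
  # Named sub-expressions replace repeated inline text and can carry
  # comments right next to the events they use.
  clks = Event("CPU_CLK_UNHALTED.THREAD")
  non_mem_stalls = (Event("CYCLE_ACTIVITY.STALLS_TOTAL") -
                    Event("CYCLE_ACTIVITY.STALLS_MEM_ANY"))
  ports_util = (Event("EXE_ACTIVITY.1_PORTS_UTIL") +
                Event("EXE_ACTIVITY.2_PORTS_UTIL"))
  # ... the full condition tree would be built from these pieces ...
  return Metric("tma_info_botlnk_core_bound_likely",
                "Probability of Core Bound bottleneck hidden by SMT",
                d_ratio(ports_util, clks), "100%")
```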

> And then we have a gigantic patch kit for what gain?

I see some of the gains as:
 - metrics that are human intelligible,
 - metrics for models that are no longer being updated,
 - removing copy-paste of metrics like tsx and smi across each model's
metric json (fewer lines of code),
 - validation that the events in a metric expression exist in the
event json for a model,
 - no need to forward port metrics to a new model if the event names
of the new model line up with those of previous ones,
 - in this patch kit there are metrics added that don't currently
exist (more metrics should be better for users - yes, there can always
be bugs).

I also hope the metric grouping is clearer, etc, etc.

> The motivation was the lack of comments in JSON? We could just add some
> to the parser (e.g. with /* */). And we could allow a JSON array for the
> expression to get multiple lines.

Imo, a non-json variant of json would just take on a tech-debt burden
for something that is inferior to this approach, and would be wasted
effort. We already generate the json from other, more intelligible
sources using python - so I don't find the approach controversial:
https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py

The goal here has been to make a bunch of inhouse metrics public. It
also gives vendors and other interested people a foundation for adding
metrics in a way that is concise, documented and safe (broken events
cause compile-time failures). There are some similar things like
common events/metrics on ARM:
https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/pmu-events/arch/arm64/arm/cmn/sys?h=perf-tools-next
but these lack the structure, validation, documentation, etc. that's
present here, so my preference would be for more of the common things
to be done in the python way started here.

Thanks,
Ian

> -Andi
  
Andi Kleen March 1, 2024, 9:34 p.m. UTC | #3
> 
> I see some of the gains as:
>  - metrics that are human intelligible,
>  - metrics for models that are no longer being updated,
>  - removing copy-paste of metrics like tsx and smi across each model's
> metric json (fewer lines of code),
>  - validation that the events in a metric expression exist in the
> event json for a model,
>  - no need to forward port metrics to a new model if the event names
> of the new model line up with those of previous ones,
>  - in this patch kit there are metrics added that don't currently
> exist (more metrics should be better for users - yes, there can always
> be bugs).

But then we have two ways to do things, and we already have a lot 
of problems with regressions from complexity and a growing
bug backlog that nobody fixes. 

Multiple ways to do basic operations seems just a recipe for
more and more fragmentation and similar problems.

The JSON format is certainly not perfect and has its share 
of issues, but at least it's a standard now that is supported
by many vendors and creating new standards just because
you don't like some minor aspects doesn't seem like 
a good approach. I'm sure the next person will come around
why wants Ruby metrics and the third would prefer to write
them in Rust. Who knows where it will stop.

Also in my experience this python stuff is unreliable because
half the people who build perf forget to install the python
libraries. Json at least always works.

Incremental improvements are usually the way to do these
things.

-Andi
  
Ian Rogers March 1, 2024, 11:09 p.m. UTC | #4
On Fri, Mar 1, 2024 at 1:34 PM Andi Kleen <ak@linux.intel.com> wrote:
>
> >
> > I see some of the gains as:
> >  - metrics that are human intelligible,
> >  - metrics for models that are no longer being updated,
> >  - removing copy-paste of metrics like tsx and smi across each model's
> > metric json (fewer lines of code),
> >  - validation that the events in a metric expression exist in the
> > event json for a model,
> >  - no need to forward port metrics to a new model if the event names
> > of the new model line up with those of previous ones,
> >  - in this patch kit there are metrics added that don't currently
> > exist (more metrics should be better for users - yes, there can always
> > be bugs).
>
> But then we have two ways to do things, and we already have a lot
> of problems with regressions from complexity and a growing
> bug backlog that nobody fixes.

If you want something to work, you put a test on it. We have a number
of both event and metric tests. I'm not sure what bug backlog you are
referring to, but as far as I can see the tool is in the best
condition it has ever been. All tests passing with address sanitizer
was a particular milestone last year.

> Multiple ways to do basic operations seems just a recipe for
> more and more fragmentation and similar problems.
>
> The JSON format is certainly not perfect and has its share
> of issues, but at least it's a standard now that is supported
> by many vendors and creating new standards just because
> you don't like some minor aspects doesn't seem like
> a good approach. I'm sure the next person will come around
> who wants Ruby metrics and a third would prefer to write
> them in Rust. Who knows where it will stop.

These patches don't make the json format disappear; we use python to
generate the json metrics, as json strings are a poor programming
language.

I agree we have too many formats, but json is part of the problem
there, not the solution. I would like to make the only format the
sysfs one, and then we can do a unionfs type thing in the perf tool
where we can have sysfs, a sysfs layer built into the tool (out of the
json) and possibly user-specified layers. This would allow
customizability once the binary is built, but it would also allow us
to test with a sysfs for a machine we don't have. Linux on M1 Macs is
a particular issue, but we recently had an issue with the layout of
the format directory for Intel uncore_pcu pre-Skylake, which doesn't
have a umask. Finding such machines to test on is the challenge, and
abstracting sysfs as a unionfs type thing is, I think, the correct
approach.
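To sketch the layering idea (purely illustrative, not code that exists
in the tree):

```
def lookup_pmu_file(path, layers):
  """Return the contents of path from the first layer providing it.

  Layers are tried in priority order, so a user-supplied layer can
  shadow the built-in json layer, which in turn can shadow (or stand
  in for) the real sysfs.
  """
  for layer in layers:  # e.g. [user_layer, builtin_json_layer, real_sysfs]
    if path in layer:
      return layer[path]
  return None
```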

I don't think the Linux build has tooling around Ruby, and there are
no host tools written in Rust yet. Will it happen? Probably, and I
think it is good the codebase keeps moving forward. Before the C
reference count checking implementation, we were talking about
rewriting at least pieces like libperf in Rust - the code was leaking
memory and it seemed unsolvable as reasonable fixes would yield
use-after-frees and crashes. I've even mentioned this in LWN comments
on articles around Rust; nobody stepped up with a fix until I did the
reference count checking.

Python is a good choice for reading json as the inbuilt library is of
a reasonable quality. Python is good for what I've done here as the
operator overloading makes the expressions readable. We can read in
and out of the python tree format, and do so in jevents.py to validate
that the metrics can parse (we still have the C parse test). We
haven't written a full expression parser in python, although it
wouldn't be hard; we just take the string and pretty much call eval.
It'd be relatively easy to add an output function to the python code
to make it convert the expressions to a different programming
language, for example the ToPython code here:
https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/pmu-events/metric.py?h=perf-tools-next#n17
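The core trick, in simplified form (an illustrative sketch, not the
actual metric.py code):

```
# Overloaded operators build an expression tree; the tree can then be
# serialized back into a perf metric string (or another language).
class Expression:
  def __sub__(self, other):
    return Operator('-', self, other)

  def __truediv__(self, other):
    return Operator('/', self, other)

class Event(Expression):
  def __init__(self, name: str):
    self.name = name

  def ToPerfJson(self) -> str:
    return self.name

class Operator(Expression):
  def __init__(self, op: str, lhs, rhs):
    self.op, self.lhs, self.rhs = op, lhs, rhs

  def ToPerfJson(self) -> str:
    return f'({self.lhs.ToPerfJson()} {self.op} {self.rhs.ToPerfJson()})'

tsc, cyc = Event('msr/tsc/'), Event('msr/mperf/')
print(((tsc - cyc) / tsc).ToPerfJson())
# -> ((msr/tsc/ - msr/mperf/) / msr/tsc/)
```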

> Also in my experience this python stuff is unreliable because
> half the people who build perf forget to install the python
> libraries. Json at least always works.

It has been the case for about a year (v6.4):
https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/commit/?h=perf-tools-next&id=175f9315f76345e88e3abdc947c1e0030ab99da3
that if we can't build jevents because of python then the build fails.
You can explicitly request not to use python/jevents with
NO_JEVENTS=1, but it is an explicit opt-out.
I don't think this is unreliable. We've recently made BPF opt-out
rather than opt-in in a similar way, and that requires clang, etc. It
has been a problem in the past that implicit opt-in and opt-out could
give you a perf tool that a distribution couldn't ship (mixed GPLv2
and v3 code) or that was missing useful things. I think we've fixed
the bug by making the build fail unless you explicitly opt-out of
options we think you should have.

Fwiw, there is a similar bug that BTF support in the kernel is opt-in
rather than opt-out, meaning distributions ship BPF tools that can't
work for the kernel they've built. If there were more time I'd be
looking to make BTF opt-out rather than opt-in; I reported the issue
on the BPF mailing list.

> Incremental improvements are usually the way to do these
> things.

We've had jevents as python for nearly 2 years. metric.py, which this
code builds on, has been in the tree for 15 months. I wrote the code
and there is a version of it for:
https://github.com/intel/perfmon/commits/main/scripts/create_perf_json.py
which is 2 years old. I don't see anything non-incremental; if
anything, things have been slow to move forward. It's true vendors
haven't really adopted the code outside of Intel's perfmon, but I've
at least discussed it with them in face-to-faces like LPC. Hopefully
this work is a foundation for vendors to write more metrics; it should
be little more effort than it takes them to document the metric in
their manuals.

ARM have a python based json tool for perf (similar to the perfmon one) here:
https://gitlab.arm.com/telemetry-solution/telemetry-solution/-/tree/main/tools/perf_json_generator
So I'd say that generating perf json from python is a standard
approach. ARM's
converter is just over a year old.

Thanks,
Ian

> -Andi
  

Patch

diff --git a/tools/perf/pmu-events/intel_metrics.py b/tools/perf/pmu-events/intel_metrics.py
index 5827f555005f..46866a25b166 100755
--- a/tools/perf/pmu-events/intel_metrics.py
+++ b/tools/perf/pmu-events/intel_metrics.py
@@ -1,7 +1,8 @@ 
 #!/usr/bin/env python3
 # SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
-from metric import (d_ratio, has_event, Event, JsonEncodeMetric, JsonEncodeMetricGroupDescriptions,
-                    LoadEvents, Metric, MetricGroup, Select)
+from metric import (d_ratio, has_event, max, Event, JsonEncodeMetric,
+                    JsonEncodeMetricGroupDescriptions, LoadEvents, Metric,
+                    MetricGroup, Select)
 import argparse
 import json
 import math
@@ -17,6 +18,16 @@  LoadEvents(directory)
 
 interval_sec = Event("duration_time")
 
+def Idle() -> Metric:
+  cyc = Event("msr/mperf/")
+  tsc = Event("msr/tsc/")
+  low = max(tsc - cyc, 0)
+  return Metric(
+      "idle",
+      "Percentage of total wallclock cycles where CPUs are in low power state (C1 or deeper sleep state)",
+      d_ratio(low, tsc), "100%")
+
+
 def Rapl() -> MetricGroup:
   """Processor socket power consumption estimate.
 
@@ -52,6 +63,7 @@  def Rapl() -> MetricGroup:
 
 
 all_metrics = MetricGroup("", [
+    Idle(),
     Rapl(),
 ])