[v2,0/3] Add metric value validation test

Message ID 20230609162625.100897-1-weilin.wang@intel.com

Wang, Weilin June 9, 2023, 4:26 p.m. UTC
  This is the second version of metric value validation tests.

We made the following changes from v1 to v2:
 - Add python setting check [Ian]
 - Skip non-Intel architectures [Ian]
 - Rename allowlist to skiplist [Ian]

v1: https://lore.kernel.org/lkml/20230606202421.2628401-1-weilin.wang@intel.com/

Weilin Wang (3):
  perf test: Add metric value validation test
  perf test: Add skip list for metrics known to fail
  perf test: Rerun failed metrics with longer workload
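
The rerun step in patch 3/3 follows a common retry pattern: collect the metrics that failed under the short workload, then re-measure only those with the longer one. A minimal sketch of that pattern (the function and parameter names here are illustrative, not the actual perf_metric_validation.py API):

```python
def rerun_failed(results, rerun):
    """Rerun only the metrics that failed the first pass and merge the
    retried results back in.

    results: {metric_name: passed_bool} from the short-workload pass
    rerun:   callable taking a list of metric names and returning the same
             kind of mapping, measured with the longer workload
    """
    failed = [m for m, ok in results.items() if not ok]
    if failed:
        # Retried outcomes override the first-pass failures.
        results = {**results, **rerun(failed)}
    return results
```

The point of the pattern is that the expensive long workload only runs for the (usually small) set of metrics that were flaky under the short one.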

 .../tests/shell/lib/perf_metric_validation.py | 574 ++++++++++++++++++
 .../lib/perf_metric_validation_rules.json     | 398 ++++++++++++
 tools/perf/tests/shell/stat_metrics_values.sh |  30 +
 3 files changed, 1002 insertions(+)
 create mode 100644 tools/perf/tests/shell/lib/perf_metric_validation.py
 create mode 100644 tools/perf/tests/shell/lib/perf_metric_validation_rules.json
 create mode 100755 tools/perf/tests/shell/stat_metrics_values.sh


base-commit: 7cdda6998ee55140e64894e25048df7157344fc9
  

Comments

Ian Rogers June 13, 2023, 12:04 a.m. UTC | #1
On Fri, Jun 9, 2023 at 9:26 AM Weilin Wang <weilin.wang@intel.com> wrote:
>
> This is the second version of metric value validation tests.
>
> We made the following changes from v1 to v2:
>  - Add python setting check [Ian]
>  - Skip non-Intel architectures [Ian]
>  - Rename allowlist to skiplist [Ian]
>
> v1: https://lore.kernel.org/lkml/20230606202421.2628401-1-weilin.wang@intel.com/

Thanks Weilin,

when I try the test on my Tigerlake laptop I encounter an error:

```
$ sudo perf test 105 -vv
105: perf metrics value validation                                  :
--- start ---
test child forked, pid 1258192
Launch python validation script
./tools/perf/tests/shell/lib/perf_metric_validation.py
Output will be stored in: /tmp/__perf_test.program.J2u5c
Starting perf collection
Long workload: perf bench futex hash -r 2 -s
perf stat -j -M tma_mixing_vectors -a true
...
perf stat -j -M tma_retiring,tma_light_operations,tma_heavy_operations -a perf bench futex hash -r 2 -s
# Running 'futex/hash' benchmark:
Run summary [PID 1258651]: 8 threads, each operating on 1024 [private] futexes for 2 secs.

Averaged 953280 operations/sec (+- 1.07%), total secs = 2
Traceback (most recent call last):
  File "/home/irogers/kernel.org/./tools/perf/tests/shell/lib/perf_metric_validation.py", line 571, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/irogers/kernel.org/./tools/perf/tests/shell/lib/perf_metric_validation.py", line 564, in main
    ret = validator.test()
          ^^^^^^^^^^^^^^^^
  File "/home/irogers/kernel.org/./tools/perf/tests/shell/lib/perf_metric_validation.py", line 522, in test
    self.pos_val_test()
  File "/home/irogers/kernel.org/./tools/perf/tests/shell/lib/perf_metric_validation.py", line 152, in pos_val_test
    if val < 0:
       ^^^^^^^
TypeError: '<' not supported between instances of 'str' and 'int'
test child finished with -1
---- end ----
perf metrics value validation: FAILED!
```
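
The TypeError suggests the script is comparing a metric value that arrived as a JSON string against an integer. `perf stat -j` emits one JSON object per line, and the metric value can be a string (or a non-numeric placeholder when the event was not counted), so a likely fix is to coerce it to float before the `val < 0` check. A hypothetical sketch of such a helper (the function name and the exact JSON keys are assumptions, not the script's actual code):

```python
import json

def metric_value(sample: dict):
    # Coerce the JSON string to float before numeric comparisons like
    # `val < 0`; return None for non-numeric placeholders so callers
    # can skip them instead of crashing with a str/int TypeError.
    raw = sample.get("metric-value", "")
    try:
        return float(raw)
    except ValueError:
        return None

# Example with a line shaped like perf stat -j output
line = '{"metric-unit": "tma_retiring", "metric-value": "0.8500"}'
val = metric_value(json.loads(line))
```

With values coerced this way, the positive-value test only ever compares floats, and `None` results can be skipped explicitly.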

Could you take a look?

Thanks,
Ian

> Weilin Wang (3):
>   perf test: Add metric value validation test
>   perf test: Add skip list for metrics known to fail
>   perf test: Rerun failed metrics with longer workload
>
>  .../tests/shell/lib/perf_metric_validation.py | 574 ++++++++++++++++++
>  .../lib/perf_metric_validation_rules.json     | 398 ++++++++++++
>  tools/perf/tests/shell/stat_metrics_values.sh |  30 +
>  3 files changed, 1002 insertions(+)
>  create mode 100644 tools/perf/tests/shell/lib/perf_metric_validation.py
>  create mode 100644 tools/perf/tests/shell/lib/perf_metric_validation_rules.json
>  create mode 100755 tools/perf/tests/shell/stat_metrics_values.sh
>
>
> base-commit: 7cdda6998ee55140e64894e25048df7157344fc9
> --
> 2.39.1
>