[0/6] Clean up perf mem

Message ID 20231206201324.184059-1-kan.liang@linux.intel.com
Series Clean up perf mem

Message

Liang, Kan Dec. 6, 2023, 8:13 p.m. UTC
  From: Kan Liang <kan.liang@linux.intel.com>

As discussed in the thread below, this patch set cleans up perf mem:
https://lore.kernel.org/lkml/afefab15-cffc-4345-9cf4-c6a4128d4d9c@linux.intel.com/

Introduce generic functions perf_mem_events__ptr(),
perf_mem_events__name(), and is_mem_loads_aux_event() to replace the
arch-specific ones.
Simplify perf_mem_event__supported().
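
For reference, a sketch of the rough shape the generic lookup takes
(illustrative only; the exact signature lives in the individual
patches, and the per-PMU mem_events table comes from patch 1):

struct perf_mem_event *perf_mem_events__ptr(struct perf_pmu *pmu, int i)
{
	if (!pmu || i >= PERF_MEM_EVENTS__MAX)
		return NULL;

	/* Per-PMU table added in patch 1; each arch fills in its own. */
	return &pmu->mem_events[i];
}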

Keep only the arch-specific perf_mem_events array in the corresponding
mem-events.c for each arch.
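
As an illustration, a per-arch table keeps the existing E() shape from
mem-events.c (a sketch only; each arch's tags, event strings, and
sysfs names differ):

#define E(t, n, s) { .tag = t, .name = n, .sysfs_name = s }

static struct perf_mem_event perf_mem_events[PERF_MEM_EVENTS__MAX] = {
	E("ldlat-loads",  "%s/mem-loads,ldlat=%u/P", "%s/events/mem-loads"),
	E("ldlat-stores", "%s/mem-stores/P",         "%s/events/mem-stores"),
	E(NULL,           NULL,                      NULL),
};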

There is no functional change.

The patch set touches almost all the arches: Intel, AMD, ARM, Power,
etc. But I could only test it on two Intel platforms.
Please give it a try if you have machines with other arches.

Here are the test results:
Intel hybrid machine:

$perf mem record -e list
ldlat-loads  : available
ldlat-stores : available

$perf mem record -e ldlat-loads -v --ldlat 50
calling: record -e cpu_atom/mem-loads,ldlat=50/P -e cpu_core/mem-loads,ldlat=50/P

$perf mem record -v
calling: record -e cpu_atom/mem-loads,ldlat=30/P -e cpu_atom/mem-stores/P -e cpu_core/mem-loads,ldlat=30/P -e cpu_core/mem-stores/P

$perf mem record -t store -v
calling: record -e cpu_atom/mem-stores/P -e cpu_core/mem-stores/P


Intel SPR:
$perf mem record -e list
ldlat-loads  : available
ldlat-stores : available

$perf mem record -e ldlat-loads -v --ldlat 50
calling: record -e {cpu/mem-loads-aux/,cpu/mem-loads,ldlat=50/}:P

$perf mem record -v
calling: record -e {cpu/mem-loads-aux/,cpu/mem-loads,ldlat=30/}:P -e cpu/mem-stores/P

$perf mem record -t store -v
calling: record -e cpu/mem-stores/P

Kan Liang (6):
  perf mem: Add mem_events into the supported perf_pmu
  perf mem: Clean up perf_mem_events__ptr()
  perf mem: Clean up perf_mem_events__name()
  perf mem: Clean up perf_mem_event__supported()
  perf mem: Clean up is_mem_loads_aux_event()
  perf mem: Remove useless header files for X86

 tools/perf/arch/arm64/util/mem-events.c   |  36 +----
 tools/perf/arch/arm64/util/mem-events.h   |   7 +
 tools/perf/arch/arm64/util/pmu.c          |   6 +
 tools/perf/arch/powerpc/util/mem-events.c |  13 +-
 tools/perf/arch/powerpc/util/mem-events.h |   7 +
 tools/perf/arch/powerpc/util/pmu.c        |  11 ++
 tools/perf/arch/s390/util/pmu.c           |   3 +
 tools/perf/arch/x86/util/mem-events.c     |  99 +++---------
 tools/perf/arch/x86/util/mem-events.h     |  10 ++
 tools/perf/arch/x86/util/pmu.c            |  11 ++
 tools/perf/builtin-c2c.c                  |  28 +++-
 tools/perf/builtin-mem.c                  |  28 +++-
 tools/perf/util/mem-events.c              | 177 +++++++++++++---------
 tools/perf/util/mem-events.h              |  15 +-
 tools/perf/util/pmu.c                     |   4 +-
 tools/perf/util/pmu.h                     |   7 +
 16 files changed, 248 insertions(+), 214 deletions(-)
 create mode 100644 tools/perf/arch/arm64/util/mem-events.h
 create mode 100644 tools/perf/arch/powerpc/util/mem-events.h
 create mode 100644 tools/perf/arch/powerpc/util/pmu.c
 create mode 100644 tools/perf/arch/x86/util/mem-events.h
  

Comments

Ravi Bangoria Dec. 7, 2023, 2:44 p.m. UTC | #1
> The patch set touches almost all the arches: Intel, AMD, ARM, Power,
> etc. But I could only test it on two Intel platforms.
> Please give it a try if you have machines with other arches.

I did a quick perf mem and perf c2c test, with the fix I suggested
in patch #1, and things seem to be working fine.

[For AMD specific changes]
Tested-by: Ravi Bangoria <ravi.bangoria@amd.com>
  
Liang, Kan Dec. 7, 2023, 3:05 p.m. UTC | #2
On 2023-12-07 9:44 a.m., Ravi Bangoria wrote:
>> The patch set touches almost all the arches: Intel, AMD, ARM, Power,
>> etc. But I could only test it on two Intel platforms.
>> Please give it a try if you have machines with other arches.
> 
> I did a quick perf mem and perf c2c test, with the fix I suggested
> in patch #1, and things seem to be working fine.
> 
> [For AMD specific changes]
> Tested-by: Ravi Bangoria <ravi.bangoria@amd.com>

Thanks Ravi.

Kan