[1/3] perf/x86/intel: Add Granite Rapids

Message ID 20230314170041.2967712-1-kan.liang@linux.intel.com
State New
Series [1/3] perf/x86/intel: Add Granite Rapids

Commit Message

Liang, Kan March 14, 2023, 5 p.m. UTC
  From: Kan Liang <kan.liang@linux.intel.com>

From the core PMU's perspective, Granite Rapids is similar to Sapphire
Rapids. The key differences include:
- It doesn't need the AUX event workaround for the mem load event
  (implemented in this patch).
- It supports Retire Latency (already implemented in commit
  c87a31093c70 ("perf/x86: Support Retire Latency")).
- The event list, which will be supported in the perf tool later.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 arch/x86/events/intel/core.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
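
As a quick illustration of the first bullet (an editorial sketch, not part of
the patch): mem_is_visible() in the diff below only exposes the mem-loads-aux
event attribute when PMU_FL_MEM_LOADS_AUX is set, so user space can tell
whether the workaround event exists by probing sysfs. A minimal user-space
sketch, assuming the non-hybrid core PMU is registered as "cpu":

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: returns true when the kernel exposes the
 * mem-loads-aux workaround event for the core PMU. With this patch the
 * attribute is hidden on Granite Rapids, so the sysfs file is absent
 * there, while it remains present on Sapphire/Emerald Rapids. */
static bool cpu_pmu_has_mem_loads_aux(void)
{
	return access("/sys/bus/event_source/devices/cpu/events/mem-loads-aux",
		      F_OK) == 0;
}

int main(void)
{
	printf("mem-loads-aux workaround event: %s\n",
	       cpu_pmu_has_mem_loads_aux() ? "present" : "absent");
	return 0;
}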
  

Comments

Peter Zijlstra March 20, 2023, 10:32 a.m. UTC | #1
On Tue, Mar 14, 2023 at 10:00:39AM -0700, kan.liang@linux.intel.com wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
> 
> From the core PMU's perspective, Granite Rapids is similar to Sapphire
> Rapids. The key differences include:
> - It doesn't need the AUX event workaround for the mem load event
>   (implemented in this patch).
> - It supports Retire Latency (already implemented in commit
>   c87a31093c70 ("perf/x86: Support Retire Latency")).
> - The event list, which will be supported in the perf tool later.
> 
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>

Thanks!
  

Patch

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index a3fb996a86a1..070cc4ef2672 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5469,6 +5469,15 @@ pebs_is_visible(struct kobject *kobj, struct attribute *attr, int i)
 	return x86_pmu.pebs ? attr->mode : 0;
 }
 
+static umode_t
+mem_is_visible(struct kobject *kobj, struct attribute *attr, int i)
+{
+	if (attr == &event_attr_mem_ld_aux.attr.attr)
+		return x86_pmu.flags & PMU_FL_MEM_LOADS_AUX ? attr->mode : 0;
+
+	return pebs_is_visible(kobj, attr, i);
+}
+
 static umode_t
 lbr_is_visible(struct kobject *kobj, struct attribute *attr, int i)
 {
@@ -5496,7 +5505,7 @@ static struct attribute_group group_events_td  = {
 
 static struct attribute_group group_events_mem = {
 	.name       = "events",
-	.is_visible = pebs_is_visible,
+	.is_visible = mem_is_visible,
 };
 
 static struct attribute_group group_events_tsx = {
@@ -6486,6 +6495,10 @@ __init int intel_pmu_init(void)
 
 	case INTEL_FAM6_SAPPHIRERAPIDS_X:
 	case INTEL_FAM6_EMERALDRAPIDS_X:
+		x86_pmu.flags |= PMU_FL_MEM_LOADS_AUX;
+		fallthrough;
+	case INTEL_FAM6_GRANITERAPIDS_X:
+	case INTEL_FAM6_GRANITERAPIDS_D:
 		pmem = true;
 		x86_pmu.late_ack = true;
 		memcpy(hw_cache_event_ids, spr_hw_cache_event_ids, sizeof(hw_cache_event_ids));
@@ -6502,7 +6515,6 @@ __init int intel_pmu_init(void)
 		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
 		x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
 		x86_pmu.flags |= PMU_FL_INSTR_LATENCY;
-		x86_pmu.flags |= PMU_FL_MEM_LOADS_AUX;
 
 		x86_pmu.hw_config = hsw_hw_config;
 		x86_pmu.get_event_constraints = spr_get_event_constraints;
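
Regarding the case ordering in intel_pmu_init() above (an editorial note, not
part of the patch): only Sapphire/Emerald Rapids set PMU_FL_MEM_LOADS_AUX
before falling through into the setup that is now shared with Granite Rapids.
A minimal user-space sketch of the same fallthrough pattern, with illustrative
stand-ins for the INTEL_FAM6_* constants and PMU flags:

#include <stdio.h>

enum model { SPR, EMR, GNR_X, GNR_D };

#define FL_MEM_LOADS_AUX  0x1	/* stands in for PMU_FL_MEM_LOADS_AUX */
#define FL_SHARED_SETUP   0x2	/* stands in for the common SPR/GNR init */

static unsigned int init_flags(enum model m)
{
	unsigned int flags = 0;

	switch (m) {
	case SPR:
	case EMR:
		flags |= FL_MEM_LOADS_AUX;
		/* fall through */
	case GNR_X:
	case GNR_D:
		flags |= FL_SHARED_SETUP;
		break;
	}
	return flags;
}

int main(void)
{
	printf("SPR flags: %#x\n", init_flags(SPR));	/* prints 0x3 */
	printf("GNR flags: %#x\n", init_flags(GNR_X));	/* prints 0x2 */
	return 0;
}

Keeping the shared initialization in a single fallthrough block lets Granite
Rapids inherit the existing Sapphire Rapids setup while the mem-loads
workaround flag stays limited to the two models that still need it.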