perf/x86/ibs: Set data_src.mem_lvl_num as well
Commit Message
The IBS PMU driver sets the data source of memory operations but misses
the mem_lvl_num field. So I'm adding it here.
Most cases are straightforward, but please check whether MEM_LVLNUM_ANY_CACHE
is ok for peer cache hits. Also, I wonder if we could add MEM_REMOTE_REMOTE
for peer cache hits in a far CCX.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
arch/x86/events/amd/ibs.c | 10 ++++++++++
1 file changed, 10 insertions(+)
Comments
Hi Namhyung,
> @@ -748,12 +750,14 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
> if (ibs_caps & IBS_CAPS_ZEN4) {
> if (ibs_data_src == IBS_DATA_SRC_EXT_LOC_CACHE) {
> data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_HIT;
> + data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
> return;
> }
> } else {
> if (ibs_data_src == IBS_DATA_SRC_LOC_CACHE) {
> data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_REM_CCE1 |
> PERF_MEM_LVL_HIT;
> + data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
mem_lvl_num does not have an option to set multiple sources, so setting just
PERF_MEM_LVLNUM_L3 is a bit misleading here. The documentation (PPR 55898 Rev
0.70 - Oct 14, 2022) says:
"data returned from shared L3, other L2 on same CCX or other core's
cache trough same node."
As far as I know, "shared L3" and "other L2 on same CCX" have similar
latency, but a request needs to go through the DF for "other core's cache
trough same node", which incurs higher latency. Thus, setting both is
important. This was one of the reasons not to use mem_lvl_num in the
IBS code.
The 2nd reason was that perf c2c (c2c_decode_stats()) does not use mem_lvl_num.
The 3rd reason was that the perf mem sorting logic (sort__lvl_cmp()) does not
consider mem_lvl_num.
The 4th one was that, if I set both mem_lvl and mem_lvl_num like other archs
do, `perf mem report` prints both, which is kind of ugly:
464029 N/A
340728 L1 or L1 hit
8312 LFB/MAB or LFB/MAB hit
7901 L2 or L2 hit
123 L3 or Remote Cache (1 hop) or L3 hit
Without mem_lvl_num it's much cleaner:
330057 N/A
229646 L1 hit
5842 L2 hit
5726 LFB/MAB hit
78 L3 or Remote Cache (1 hop) hit
I think we should clean this up before applying this patch? The other option
is to add BPF filter support for mem_lvl. What do you think?
Thanks,
Ravi
Hi Ravi,
On Mon, Mar 20, 2023 at 11:33 PM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>
> Hi Namhyung,
>
> > @@ -748,12 +750,14 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
> > if (ibs_caps & IBS_CAPS_ZEN4) {
> > if (ibs_data_src == IBS_DATA_SRC_EXT_LOC_CACHE) {
> > data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_HIT;
> > + data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
> > return;
> > }
> > } else {
> > if (ibs_data_src == IBS_DATA_SRC_LOC_CACHE) {
> > data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_REM_CCE1 |
> > PERF_MEM_LVL_HIT;
> > + data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
>
> mem_lvl_num does not have an option to set multiple sources, so setting just
> PERF_MEM_LVLNUM_L3 is a bit misleading here. The documentation (PPR 55898 Rev
> 0.70 - Oct 14, 2022) says:
>
> "data returned from shared L3, other L2 on same CCX or other core's
> cache trough same node."
>
> As far as I know, "shared L3" and "other L2 on same CCX" have similar
> latency, but a request needs to go through the DF for "other core's cache
> trough same node", which incurs higher latency. Thus, setting both is
> important. This was one of the reasons not to use mem_lvl_num in the
> IBS code.
I suspect it's a quality issue that CPUs prior to Zen4 cannot identify the
data source precisely. How about setting LVLNUM_ANY_CACHE then?
>
> The 2nd reason was that perf c2c (c2c_decode_stats()) does not use mem_lvl_num.
Maybe we can change that. It'd be easy as long as they provide
the same information. IOW mem_lvl = mem_lvl_num + remote + snoop.
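Just to illustrate the idea (a rough sketch only, not existing perf code; the
helper name and the exact mapping are mine), the tools could derive the legacy
mem_lvl bits from the new fields in one place:

  #include <stdbool.h>
  #include <linux/perf_event.h>

  /* hypothetical tool-side helper: legacy mem_lvl bits from mem_lvl_num */
  static __u64 lvlnum_to_lvl(__u64 lvl_num, bool remote, bool hit)
  {
          __u64 lvl;

          switch (lvl_num) {
          case PERF_MEM_LVLNUM_L1:  lvl = PERF_MEM_LVL_L1;  break;
          case PERF_MEM_LVLNUM_L2:  lvl = PERF_MEM_LVL_L2;  break;
          case PERF_MEM_LVLNUM_L3:  lvl = PERF_MEM_LVL_L3;  break;
          case PERF_MEM_LVLNUM_LFB: lvl = PERF_MEM_LVL_LFB; break;
          case PERF_MEM_LVLNUM_RAM:
                  lvl = remote ? PERF_MEM_LVL_REM_RAM1 : PERF_MEM_LVL_LOC_RAM;
                  break;
          case PERF_MEM_LVLNUM_ANY_CACHE:
                  /* guess near vs far CCX peer cache from the remote bit */
                  lvl = remote ? PERF_MEM_LVL_REM_CCE2 : PERF_MEM_LVL_REM_CCE1;
                  break;
          default:
                  return PERF_MEM_LVL_NA;
          }

          return lvl | (hit ? PERF_MEM_LVL_HIT : PERF_MEM_LVL_MISS);
  }

Then c2c_decode_stats() and sort__lvl_cmp() could use such a helper when
mem_lvl_num is set and fall back to mem_lvl otherwise.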
>
> The 3rd reason was that the perf mem sorting logic (sort__lvl_cmp()) does not
> consider mem_lvl_num.
Likewise.
>
> The 4th one was that, if I set both mem_lvl and mem_lvl_num like other archs
> do, `perf mem report` prints both, which is kind of ugly:
>
> 464029 N/A
> 340728 L1 or L1 hit
> 8312 LFB/MAB or LFB/MAB hit
> 7901 L2 or L2 hit
> 123 L3 or Remote Cache (1 hop) or L3 hit
>
> Without mem_lvl_num it's much cleaner:
>
> 330057 N/A
> 229646 L1 hit
> 5842 L2 hit
> 5726 LFB/MAB hit
> 78 L3 or Remote Cache (1 hop) hit
Agreed. It doesn't need to repeat the same information.
>
> I think we should clean this up before applying this patch? The other option
> is to add BPF filter support for mem_lvl. What do you think?
I still prefer using mem_lvl_num as I think it's the way to go,
but I'm open for change.
Peter, what do you think?
Thanks,
Namhyung
Hi Namhyung,
>>> @@ -748,12 +750,14 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
>>> if (ibs_caps & IBS_CAPS_ZEN4) {
>>> if (ibs_data_src == IBS_DATA_SRC_EXT_LOC_CACHE) {
>>> data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_HIT;
>>> + data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
>>> return;
>>> }
>>> } else {
>>> if (ibs_data_src == IBS_DATA_SRC_LOC_CACHE) {
>>> data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_REM_CCE1 |
>>> PERF_MEM_LVL_HIT;
>>> + data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
>>
>> mem_lvl_num does not have an option to set multiple sources, so setting just
>> PERF_MEM_LVLNUM_L3 is a bit misleading here. The documentation (PPR 55898 Rev
>> 0.70 - Oct 14, 2022) says:
>>
>> "data returned from shared L3, other L2 on same CCX or other core's
>> cache trough same node."
>>
>> As far as I know, "shared L3" and "other L2 on same CCX" have similar
>> latency, but a request needs to go through the DF for "other core's cache
>> trough same node", which incurs higher latency. Thus, setting both is
>> important. This was one of the reasons not to use mem_lvl_num in the
>> IBS code.
>
> I suspect it's a quality issue that CPUs prior to Zen4 cannot identify the
> data source precisely. How about setting LVLNUM_ANY_CACHE then?
Ok. Although ANY_CACHE by itself is mostly clueless, adding HOPS_0 will make
it more consumable. There are many other places where this patch needs to
set mem_remote and mem_hops, and these changes will result in too many
assignment operations. So I think the IBS code should switch to using the
PERF_MEM_S() macro. Do you mind if I send v2 with all those changes?
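Something like below for the near-CCX peer cache hit, just to show the shape
of it (illustrative only, not the actual v2; whether ANY_CACHE + HOPS_0 is the
right encoding there is exactly the open question above):

          /* A peer cache in a near CCX */
          if (ibs_caps & IBS_CAPS_ZEN4 &&
              ibs_data_src == IBS_DATA_SRC_EXT_NEAR_CCX_CACHE) {
                  data_src->val |= PERF_MEM_S(LVL, HIT) |
                                   PERF_MEM_S(LVLNUM, ANY_CACHE) |
                                   PERF_MEM_S(HOPS, 0);
                  return;
          }

i.e. one PERF_MEM_S() expression per data source instead of several separate
field assignments.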
>
>>
>> The 2nd reason was that perf c2c (c2c_decode_stats()) does not use mem_lvl_num.
>
> Maybe we can change that. It'd be easy as long as they provide
> the same information. IOW mem_lvl = mem_lvl_num + remote + snoop.
>
>>
>> The 3rd reason was that the perf mem sorting logic (sort__lvl_cmp()) does not
>> consider mem_lvl_num.
>
> Likewise.
>
>>
>> The 4th one was that, if I set both mem_lvl and mem_lvl_num like other archs
>> do, `perf mem report` prints both, which is kind of ugly:
>>
>> 464029 N/A
>> 340728 L1 or L1 hit
>> 8312 LFB/MAB or LFB/MAB hit
>> 7901 L2 or L2 hit
>> 123 L3 or Remote Cache (1 hop) or L3 hit
>>
>> Without mem_lvl_num it's much cleaner:
>>
>> 330057 N/A
>> 229646 L1 hit
>> 5842 L2 hit
>> 5726 LFB/MAB hit
>> 78 L3 or Remote Cache (1 hop) hit
>
> Agreed. It doesn't need to repeat the same information.
>
>>
>> I think we should clean this up before applying this patch? The other option
>> is to add BPF filter support for mem_lvl. What do you think?
>
> I still prefer using mem_lvl_num as I think it's the way to go,
> but I'm open for change.
Sure. The 2nd, 3rd and 4th are all tool-side improvements. Although it would
be good to fix those, let me post v2 of this patch for now?
Thanks,
Ravi
On Thu, Mar 23, 2023 at 7:11 AM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>
> Hi Namhyung,
>
> >>> @@ -748,12 +750,14 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
> >>> if (ibs_caps & IBS_CAPS_ZEN4) {
> >>> if (ibs_data_src == IBS_DATA_SRC_EXT_LOC_CACHE) {
> >>> data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_HIT;
> >>> + data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
> >>> return;
> >>> }
> >>> } else {
> >>> if (ibs_data_src == IBS_DATA_SRC_LOC_CACHE) {
> >>> data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_REM_CCE1 |
> >>> PERF_MEM_LVL_HIT;
> >>> + data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
> >>
> >> mem_lvl_num does not have an option to set multiple sources, so setting just
> >> PERF_MEM_LVLNUM_L3 is a bit misleading here. The documentation (PPR 55898 Rev
> >> 0.70 - Oct 14, 2022) says:
> >>
> >> "data returned from shared L3, other L2 on same CCX or other core's
> >> cache trough same node."
> >>
> >> As far as I know, "shared L3" and "other L2 on same CCX" have similar
> >> latency, but a request needs to go through the DF for "other core's cache
> >> trough same node", which incurs higher latency. Thus, setting both is
> >> important. This was one of the reasons not to use mem_lvl_num in the
> >> IBS code.
> >
> > I suspect it's a quality issue that CPUs prior to Zen4 cannot identify the
> > data source precisely. How about setting LVLNUM_ANY_CACHE then?
>
> Ok. Although ANY_CACHE by itself is mostly clueless, adding HOPS_0 will make
> it more consumable. There are many other places where this patch needs to
> set mem_remote and mem_hops, and these changes will result in too many
> assignment operations. So I think the IBS code should switch to using the
> PERF_MEM_S() macro. Do you mind if I send v2 with all those changes?
Sounds good!
>
> >
> >>
> >> The 2nd reason was that perf c2c (c2c_decode_stats()) does not use mem_lvl_num.
> >
> > Maybe we can change that. It'd be easy as long as they provide
> > the same information. IOW mem_lvl = mem_lvl_num + remote + snoop.
> >
> >>
> >> The 3rd reason was that the perf mem sorting logic (sort__lvl_cmp()) does not
> >> consider mem_lvl_num.
> >
> > Likewise.
> >
> >>
> >> The 4th one was that, if I set both mem_lvl and mem_lvl_num like other archs
> >> do, `perf mem report` prints both, which is kind of ugly:
> >>
> >> 464029 N/A
> >> 340728 L1 or L1 hit
> >> 8312 LFB/MAB or LFB/MAB hit
> >> 7901 L2 or L2 hit
> >> 123 L3 or Remote Cache (1 hop) or L3 hit
> >>
> >> Without mem_lvl_num it's much cleaner:
> >>
> >> 330057 N/A
> >> 229646 L1 hit
> >> 5842 L2 hit
> >> 5726 LFB/MAB hit
> >> 78 L3 or Remote Cache (1 hop) hit
> >
> > Agreed. It doesn't need to repeat the same information.
> >
> >>
> >> I think we should clean this up before applying this patch? The other option
> >> is to add BPF filter support for mem_lvl. What do you think?
> >
> > I still prefer using mem_lvl_num as I think it's the way to go,
> > but I'm open for change.
>
> Sure. The 2nd, 3rd and 4th are all tool-side improvements. Although it would
> be good to fix those, let me post v2 of this patch for now?
Sure, please go ahead.
Thanks,
Namhyung
@@ -724,6 +724,7 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
 	/* L1 Hit */
 	if (op_data3->dc_miss == 0) {
 		data_src->mem_lvl = PERF_MEM_LVL_L1 | PERF_MEM_LVL_HIT;
+		data_src->mem_lvl_num = PERF_MEM_LVLNUM_L1;
 		return;
 	}

@@ -733,6 +734,7 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
 		if (boot_cpu_data.x86 != 0x19 || boot_cpu_data.x86_model > 0xF ||
 		    !(op_data3->sw_pf || op_data3->dc_miss_no_mab_alloc)) {
 			data_src->mem_lvl = PERF_MEM_LVL_L2 | PERF_MEM_LVL_HIT;
+			data_src->mem_lvl_num = PERF_MEM_LVLNUM_L2;
 			return;
 		}
 	}
@@ -748,12 +750,14 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
 	if (ibs_caps & IBS_CAPS_ZEN4) {
 		if (ibs_data_src == IBS_DATA_SRC_EXT_LOC_CACHE) {
 			data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_HIT;
+			data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
 			return;
 		}
 	} else {
 		if (ibs_data_src == IBS_DATA_SRC_LOC_CACHE) {
 			data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_REM_CCE1 |
 					    PERF_MEM_LVL_HIT;
+			data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
 			return;
 		}
 	}
@@ -762,6 +766,7 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
 	if (ibs_caps & IBS_CAPS_ZEN4 &&
 	    ibs_data_src == IBS_DATA_SRC_EXT_NEAR_CCX_CACHE) {
 		data_src->mem_lvl = PERF_MEM_LVL_REM_CCE1 | PERF_MEM_LVL_HIT;
+		data_src->mem_lvl_num = PERF_MEM_LVLNUM_ANY_CACHE;
 		return;
 	}

@@ -769,11 +774,13 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
 	if (ibs_caps & IBS_CAPS_ZEN4) {
 		if (ibs_data_src == IBS_DATA_SRC_EXT_FAR_CCX_CACHE) {
 			data_src->mem_lvl = PERF_MEM_LVL_REM_CCE2 | PERF_MEM_LVL_HIT;
+			data_src->mem_lvl_num = PERF_MEM_LVLNUM_ANY_CACHE;
 			return;
 		}
 	} else {
 		if (ibs_data_src == IBS_DATA_SRC_REM_CACHE) {
 			data_src->mem_lvl = PERF_MEM_LVL_REM_CCE2 | PERF_MEM_LVL_HIT;
+			data_src->mem_lvl_num = PERF_MEM_LVLNUM_ANY_CACHE;
 			return;
 		}
 	}
@@ -784,6 +791,7 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
 			data_src->mem_lvl = PERF_MEM_LVL_LOC_RAM | PERF_MEM_LVL_HIT;
 		else
 			data_src->mem_lvl = PERF_MEM_LVL_REM_RAM1 | PERF_MEM_LVL_HIT;
+		data_src->mem_lvl_num = PERF_MEM_LVLNUM_RAM;
 		return;
 	}

@@ -832,10 +840,12 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
 	 */
 	if (op_data3->dc_miss_no_mab_alloc) {
 		data_src->mem_lvl = PERF_MEM_LVL_LFB | PERF_MEM_LVL_HIT;
+		data_src->mem_lvl_num = PERF_MEM_LVLNUM_LFB;
 		return;
 	}

 	data_src->mem_lvl = PERF_MEM_LVL_NA;
+	data_src->mem_lvl_num = PERF_MEM_LVLNUM_NA;
 }

 static bool perf_ibs_cache_hit_st_valid(void)