[kvm-unit-tests,4/5] x86: pmu: Support validation for Intel PMU fixed counter 3

Message ID 20231024075748.1675382-5-dapeng1.mi@linux.intel.com
State New
Series: Fix PMU test failures on Sapphire Rapids

Commit Message

Mi, Dapeng Oct. 24, 2023, 7:57 a.m. UTC
Intel CPUs, such as Sapphire Rapids, introduce a new fixed counter
(fixed counter 3) to count/sample the topdown.slots event, but the
current code doesn't cover this new fixed counter.

So add code to validate this new fixed counter.

Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
 x86/pmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
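
For context, each fixed_events[] entry is {name, fixed-counter MSR, lower
bound, upper bound}, with the bounds scaled by the test's workload loop count
N. Below is a minimal sketch of how such an entry is consumed; the struct
layout and the helper name are simplified stand-ins, not exact copies of the
code in x86/pmu.c.

#include <stdbool.h>
#include <stdint.h>

#define N 1000000ull			/* workload loop count used to scale the bounds */

struct pmu_event {
	const char *name;		/* e.g. "fixed 4" */
	uint32_t ctr;			/* fixed-counter MSR, MSR_CORE_PERF_FIXED_CTR0 + i */
	uint64_t min, max;		/* expected count window, e.g. 1*N .. 100*N */
};

/* A counter passes iff its measured count lands inside [min, max]. */
static bool verify_fixed_count(uint64_t count, const struct pmu_event *e)
{
	return count >= e->min && count <= e->max;
}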
  

Comments

Jim Mattson Oct. 24, 2023, 7:05 p.m. UTC | #1
On Tue, Oct 24, 2023 at 12:51 AM Dapeng Mi <dapeng1.mi@linux.intel.com> wrote:
>
> Intel CPUs, such as Sapphire Rapids, introduce a new fixed counter
> (fixed counter 3) to count/sample the topdown.slots event, but the
> current code doesn't cover this new fixed counter.
>
> So add code to validate this new fixed counter.

Can you explain how this "validates" anything?

> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> ---
>  x86/pmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/x86/pmu.c b/x86/pmu.c
> index 1bebf493d4a4..41165e168d8e 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -46,7 +46,8 @@ struct pmu_event {
>  }, fixed_events[] = {
>         {"fixed 1", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N},
>         {"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N},
> -       {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N}
> +       {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N},
> +       {"fixed 4", MSR_CORE_PERF_FIXED_CTR0 + 3, 1*N, 100*N}
>  };
>
>  char *buf;
> --
> 2.34.1
>
  
Mi, Dapeng Oct. 25, 2023, 11:26 a.m. UTC | #2
On 10/25/2023 3:05 AM, Jim Mattson wrote:
> On Tue, Oct 24, 2023 at 12:51 AM Dapeng Mi <dapeng1.mi@linux.intel.com> wrote:
>> Intel CPUs, such as Sapphire Rapids, introduce a new fixed counter
>> (fixed counter 3) to count/sample the topdown.slots event, but the
>> current code doesn't cover this new fixed counter.
>>
>> So add code to validate this new fixed counter.
> Can you explain how this "validates" anything?


I may not have described it clearly. This validates that fixed counter 3 
can count the slots event and that the resulting count falls within a 
reasonable range. Thanks.
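
To make "count the slots event" concrete, here is a hedged sketch (not the
exact kvm-unit-tests code) of programming fixed counter 3 and reading the
result. The MSR numbers follow the Intel SDM; rdmsr()/wrmsr() are assumed to
be the usual lib/x86 wrappers.

#include <stdint.h>

#define MSR_CORE_PERF_FIXED_CTR0	0x309	/* fixed counter 0; counter 3 is 0x30c */
#define MSR_CORE_PERF_FIXED_CTR_CTRL	0x38d	/* one 4-bit control field per fixed counter */
#define MSR_CORE_PERF_GLOBAL_CTRL	0x38f	/* fixed counter i enabled by bit 32 + i */

extern uint64_t rdmsr(uint32_t index);		/* assumed library helpers */
extern void wrmsr(uint32_t index, uint64_t value);

static uint64_t count_slots(void (*workload)(void))
{
	uint64_t count;

	/* Clear the counter, then enable ring-0/ring-3 counting for fixed counter 3. */
	wrmsr(MSR_CORE_PERF_FIXED_CTR0 + 3, 0);
	wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, 0x3ull << (4 * 3));
	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1ull << (32 + 3));

	workload();				/* the measured loop */

	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
	count = rdmsr(MSR_CORE_PERF_FIXED_CTR0 + 3);
	return count;
}

The measured count is then checked against the entry's [min, max] window, as
in the other fixed-counter cases.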


>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
>> ---
>>   x86/pmu.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/x86/pmu.c b/x86/pmu.c
>> index 1bebf493d4a4..41165e168d8e 100644
>> --- a/x86/pmu.c
>> +++ b/x86/pmu.c
>> @@ -46,7 +46,8 @@ struct pmu_event {
>>   }, fixed_events[] = {
>>          {"fixed 1", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N},
>>          {"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N},
>> -       {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N}
>> +       {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N},
>> +       {"fixed 4", MSR_CORE_PERF_FIXED_CTR0 + 3, 1*N, 100*N}
>>   };
>>
>>   char *buf;
>> --
>> 2.34.1
>>
  
Jim Mattson Oct. 25, 2023, 12:38 p.m. UTC | #3
On Wed, Oct 25, 2023 at 4:26 AM Mi, Dapeng <dapeng1.mi@linux.intel.com> wrote:
>
>
> On 10/25/2023 3:05 AM, Jim Mattson wrote:
> > On Tue, Oct 24, 2023 at 12:51 AM Dapeng Mi <dapeng1.mi@linux.intel.com> wrote:
> >> Intel CPUs, such as Sapphire Rapids, introduce a new fixed counter
> >> (fixed counter 3) to count/sample the topdown.slots event, but the
> >> current code doesn't cover this new fixed counter.
> >>
> >> So add code to validate this new fixed counter.
> > Can you explain how this "validates" anything?
>
>
> I may not have described it clearly. This validates that fixed counter 3
> can count the slots event and that the resulting count falls within a
> reasonable range. Thanks.

I thought the current vPMU implementation did not actually support
top-down slots. If it doesn't work, how can it be validated?

>
> >
> >> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> >> ---
> >>   x86/pmu.c | 3 ++-
> >>   1 file changed, 2 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/x86/pmu.c b/x86/pmu.c
> >> index 1bebf493d4a4..41165e168d8e 100644
> >> --- a/x86/pmu.c
> >> +++ b/x86/pmu.c
> >> @@ -46,7 +46,8 @@ struct pmu_event {
> >>   }, fixed_events[] = {
> >>          {"fixed 1", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N},
> >>          {"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N},
> >> -       {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N}
> >> +       {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N},
> >> +       {"fixed 4", MSR_CORE_PERF_FIXED_CTR0 + 3, 1*N, 100*N}
> >>   };
> >>
> >>   char *buf;
> >> --
> >> 2.34.1
> >>
  
Mi, Dapeng Oct. 26, 2023, 2:29 a.m. UTC | #4
On 10/25/2023 8:38 PM, Jim Mattson wrote:
> On Wed, Oct 25, 2023 at 4:26 AM Mi, Dapeng <dapeng1.mi@linux.intel.com> wrote:
>>
>> On 10/25/2023 3:05 AM, Jim Mattson wrote:
>>> On Tue, Oct 24, 2023 at 12:51 AM Dapeng Mi <dapeng1.mi@linux.intel.com> wrote:
>>>> Intel CPUs, such as Sapphire Rapids, introduce a new fixed counter
>>>> (fixed counter 3) to count/sample the topdown.slots event, but the
>>>> current code doesn't cover this new fixed counter.
>>>>
>>>> So add code to validate this new fixed counter.
>>> Can you explain how this "validates" anything?
>>
>> I may not have described it clearly. This validates that fixed counter 3
>> can count the slots event and that the resulting count falls within a
>> reasonable range. Thanks.
> I thought the current vPMU implementation did not actually support
> top-down slots. If it doesn't work, how can it be validated?

Oops, you remind me, I made a mistake: the kernel I used includes the 
vtopdown support patches, so the topdown slots event is supported. Since 
there has been considerable debate over the original vtopdown RFC patches, 
the topdown metrics feature probably won't be supported in the current 
vPMU emulation framework, but the slots event support patches 
(the first two patches, 
https://lore.kernel.org/all/20230927033124.1226509-1-dapeng1.mi@linux.intel.com/T/#m53883e39177eb9a0d8e23e4c382ddc6190c7f0f4 
and 
https://lore.kernel.org/all/20230927033124.1226509-1-dapeng1.mi@linux.intel.com/T/#m1d9c433eb6ce83b32e50f6d976fbfeee2b731fb9) 
are still valuable; they are a small piece of work and don't touch any 
perf code. I'd like to split these two patches into an independent 
patchset and resend them to LKML.

>
>>>> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
>>>> ---
>>>>    x86/pmu.c | 3 ++-
>>>>    1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/x86/pmu.c b/x86/pmu.c
>>>> index 1bebf493d4a4..41165e168d8e 100644
>>>> --- a/x86/pmu.c
>>>> +++ b/x86/pmu.c
>>>> @@ -46,7 +46,8 @@ struct pmu_event {
>>>>    }, fixed_events[] = {
>>>>           {"fixed 1", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N},
>>>>           {"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N},
>>>> -       {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N}
>>>> +       {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N},
>>>> +       {"fixed 4", MSR_CORE_PERF_FIXED_CTR0 + 3, 1*N, 100*N}
>>>>    };
>>>>
>>>>    char *buf;
>>>> --
>>>> 2.34.1
>>>>
  

Patch

diff --git a/x86/pmu.c b/x86/pmu.c
index 1bebf493d4a4..41165e168d8e 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -46,7 +46,8 @@  struct pmu_event {
 }, fixed_events[] = {
 	{"fixed 1", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N},
 	{"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N},
-	{"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N}
+	{"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N},
+	{"fixed 4", MSR_CORE_PERF_FIXED_CTR0 + 3, 1*N, 100*N}
 };
 
 char *buf;
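
A note on applicability: the new "fixed 4" entry only matters when the guest
advertises at least four fixed counters, and the fixed-counter loop in
x86/pmu.c only walks as many entries as the (v)CPU reports, which ultimately
comes from CPUID leaf 0xA. A minimal sketch of reading that value, using the
compiler's __get_cpuid_count() helper here rather than the kvm-unit-tests
cpuid wrappers:

#include <stdint.h>
#include <cpuid.h>	/* __get_cpuid_count() from GCC/clang */

/* Number of fixed-function counters advertised by CPUID.0AH:EDX[4:0]. */
static unsigned int nr_fixed_counters(void)
{
	unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

	if (!__get_cpuid_count(0xa, 0, &eax, &ebx, &ecx, &edx))
		return 0;
	return edx & 0x1f;
}

If that value is less than 4 on the vPMU, the "fixed 4" entry is simply
never exercised.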