[v2,00/15] Introduce Architectural LBR for vPMU

Message ID 20221125040604.5051-1-weijiang.yang@intel.com

Message

Yang, Weijiang Nov. 25, 2022, 4:05 a.m. UTC
  Intel's CPU model-specific LBR (legacy LBR) has evolved into
Architectural LBR (Arch LBR [0]), which replaces legacy LBR on new
platforms.  Native support was merged into the 5.9 kernel, and this
patch series enables Arch LBR in the vPMU so that guests can benefit
from the feature.

The main advantages of Arch LBR are [1]:
- Faster context switching due to XSAVES support and faster reset of
  the LBR MSRs via the new DEPTH MSR.
- Faster LBR reads for non-PEBS events due to XSAVES support, which
  lowers the overhead of the NMI handler.
- The Linux kernel can support the LBR features without knowing the
  model number of the current CPU (see the enumeration sketch below).
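
For reference, the model-independent enumeration mentioned in the last
bullet comes from CPUID leaf 0x1C (gated by CPUID.(EAX=07H,ECX=0):EDX[19]).
A standalone user-space sketch, purely illustrative and not part of this
series, that lists the supported record depths:

  /* Per the SDM, leaf 0x1C EAX[7:0] bit n set => a depth of 8*(n+1) is supported. */
  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid_count(0x1c, 0, &eax, &ebx, &ecx, &edx))
                  return 1;       /* leaf not available */

          for (int n = 0; n < 8; n++)
                  if (eax & (1u << n))
                          printf("supported LBR depth: %d\n", 8 * (n + 1));
          return 0;
  }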

From the end user's point of view, Arch LBR is used in the same way as
the legacy LBR support already merged in mainline.

Note: in this series there is one restriction on guest Arch LBR, i.e.,
the guest can only set its LBR record depth to the same value as the
host's.  This is due to the special behavior of MSR_ARCH_LBR_DEPTH:
1) A write to the MSR resets all Arch LBR record MSRs to 0.
2) XRSTORS resets all record MSRs to 0 if the saved depth does not
   match MSR_ARCH_LBR_DEPTH.
Enforcing this restriction keeps the KVM Arch LBR vPMU flow simple and
straightforward.
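
A simplified sketch of how the restriction can be enforced in the WRMSR
path (names are illustrative and do not exactly match the series): KVM
accepts only the host's depth, and the architectural side effect of
clearing the records is delegated to hardware:

  static int arch_lbr_set_depth(struct kvm_vcpu *vcpu, u64 data)
  {
          struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
          u64 host_depth;

          rdmsrl(MSR_ARCH_LBR_DEPTH, host_depth);
          if (data != host_depth)
                  return 1;       /* caller injects #GP into the guest */

          /* arch_lbr_depth is a field this series (illustratively) adds. */
          pmu->arch_lbr_depth = data;
          /* The write also zeroes the LBR_FROM/TO/INFO record MSRs. */
          wrmsrl(MSR_ARCH_LBR_DEPTH, data);
          return 0;
  }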

Paolo refactored the old series, and the resulting patches became the
base of this new series; he is therefore the author of some of the
patches.

[0] https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html
[1] https://lore.kernel.org/lkml/1593780569-62993-1-git-send-email-kan.liang@linux.intel.com/

v1:
https://lore.kernel.org/all/20220831223438.413090-1-weijiang.yang@intel.com/

Changes in v2:
1. Removed Paolo's SOBs from some patches. [Sean]
2. Modified some patches due to KVM changes, e.g., SMM/vPMU refactor.
3. Rebased to https://git.kernel.org/pub/scm/virt/kvm/kvm.git : queue branch.


Like Xu (3):
  perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
  KVM: vmx/pmu: Emulate MSR_ARCH_LBR_DEPTH for guest Arch LBR
  KVM: x86: Add XSAVE Support for Architectural LBR

Paolo Bonzini (4):
  KVM: PMU: disable LBR handling if architectural LBR is available
  KVM: vmx/pmu: Emulate MSR_ARCH_LBR_CTL for guest Arch LBR
  KVM: VMX: Support passthrough of architectural LBRs
  KVM: x86: Refine the matching and clearing logic for supported_xss

Sean Christopherson (1):
  KVM: x86: Report XSS as an MSR to be saved if there are supported
    features

Yang Weijiang (7):
  KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS
  KVM: x86: Add Arch LBR MSRs to msrs_to_save_all list
  KVM: x86/vmx: Check Arch LBR config when return perf capabilities
  KVM: x86/vmx: Disable Arch LBREn bit in #DB and warm reset
  KVM: x86/vmx: Save/Restore guest Arch LBR Ctrl msr at SMM entry/exit
  KVM: x86: Add Arch LBR data MSR access interface
  KVM: x86/cpuid: Advertise Arch LBR feature in CPUID

 arch/x86/events/intel/lbr.c      |   6 +-
 arch/x86/include/asm/kvm_host.h  |   3 +
 arch/x86/include/asm/msr-index.h |   1 +
 arch/x86/include/asm/vmx.h       |   4 +
 arch/x86/kvm/cpuid.c             |  52 +++++++++-
 arch/x86/kvm/smm.c               |   1 +
 arch/x86/kvm/smm.h               |   3 +-
 arch/x86/kvm/vmx/capabilities.h  |   5 +
 arch/x86/kvm/vmx/nested.c        |   8 ++
 arch/x86/kvm/vmx/pmu_intel.c     | 161 +++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/vmx.c           |  74 +++++++++++++-
 arch/x86/kvm/vmx/vmx.h           |   6 +-
 arch/x86/kvm/x86.c               |  27 +++++-
 13 files changed, 316 insertions(+), 35 deletions(-)


base-commit: da5f28e10aa7df1a925dbc10656cc89d9c061358
  

Comments

Yang, Weijiang Jan. 12, 2023, 1:57 a.m. UTC | #1
Hi, Sean,

Sorry to bother you; do you have time to review this series?  The
feature has been pending for a long time, and I would like to move it
forward.

Thanks!


  
Sean Christopherson Jan. 27, 2023, 10:46 p.m. UTC | #2
On Thu, Nov 24, 2022, Yang Weijiang wrote:
> Intel CPU model-specific LBR(Legacy LBR) has evolved to Architectural
> LBR(Arch LBR [0]), it's the replacement of legacy LBR on new platforms.
> The native support patches were merged into 5.9 kernel tree, and this
> patch series is to enable Arch LBR in vPMU so that guest can benefit
> from the feature.
> 
> The main advantages of Arch LBR are [1]:
> - Faster context switching due to XSAVES support and faster reset of
>   LBR MSRs via the new DEPTH MSR
> - Faster LBR read for a non-PEBS event due to XSAVES support, which
>   lowers the overhead of the NMI handler.
> - Linux kernel can support the LBR features without knowing the model
>   number of the current CPU.
> 
> From end user's point of view, the usage of Arch LBR is the same as
> the Legacy LBR that has been merged in the mainline.
> 
> Note, in this series, there's one restriction for guest Arch LBR, i.e.,
> guest can only set its LBR record depth the same as host's. This is due
> to the special behavior of MSR_ARCH_LBR_DEPTH: 
> 1) On write to the MSR, it'll reset all Arch LBR recording MSRs to 0s.
> 2) XRSTORS resets all record MSRs to 0s if the saved depth mismatches
> MSR_ARCH_LBR_DEPTH.
> Enforcing the restriction keeps KVM Arch LBR vPMU working flow simple
> and straightforward.
> 
> Paolo refactored the old series and the resulting patches became the
> base of this new series, therefore he's the author of some patches.

To be very blunt, this series is a mess.  I don't want to point fingers as there
is plenty of blame to go around.  The existing LBR support is a confusing mess,
vPMU as a whole has been neglected for too long, review feedback has been relatively
non-existent, and I'm sure some of the mess is due to Paolo trying to hastily fix
things up back when this was temporarily queued.

However, for arch LBR support to be merged, things need to change.

First and foremost, the existing LBR support needs to be documented.  Someone,
I don't care who, needs to provide a detailed writeup of the contract between KVM
and perf.  Specifically, I want to know:

  1. When exactly is perf allowed to take control of the LBR MSRs?  Task switch?
     IRQ?  NMI?

  2. What is the expected behavior when perf is using LBRs?  Is the guest supposed
     to be traced?

  3. Why does KVM snapshot DEBUGCTL with IRQs enabled, but disables IRQs when
     accessing LBR MSRs?

It doesn't have to be polished, e.g. I'll happily wordsmith things into proper
documentation, but I want to have a very clear understanding of how LBR support
is _intended_ to function and how it all _actually_ functions without having to
make guesses.

And depending on the answers, I want to revisit KVM's LBR implementation before
tackling arch LBRs.  Letting perf usurp LBRs while KVM has the vCPU loaded is
frankly ridiculous.  Just have perf set a flag telling KVM that it needs to take
control of LBRs and have KVM service the flag as a request or something.  Stealing
the LBRs back in IRQ context adds a stupid amount of complexity without much value,
e.g. waiting a few branches for KVM to get to a safe place isn't going to meaningfully
change the traces.  If that can't actually happen, then why on earth does KVM need
to disable IRQs to read MSRs?
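
Something like the following, purely to illustrate the idea
(KVM_REQ_RECLAIM_LBRS and vmx_reclaim_lbrs() are made-up names, not
existing API):

  /* perf side: flag the vCPU instead of grabbing the LBR MSRs in IRQ context. */
  void perf_reclaim_guest_lbrs(struct kvm_vcpu *vcpu)
  {
          kvm_make_request(KVM_REQ_RECLAIM_LBRS, vcpu);
          kvm_vcpu_kick(vcpu);
  }

  /* KVM side: service the request at a safe point, e.g. vcpu_enter_guest(). */
  static void kvm_service_lbr_reclaim(struct kvm_vcpu *vcpu)
  {
          if (kvm_check_request(KVM_REQ_RECLAIM_LBRS, vcpu))
                  vmx_reclaim_lbrs(vcpu); /* save guest LBRs, hand MSRs to perf */
  }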

And AFAICT, since KVM unconditionally loads the guest's DEBUGCTL, whether or not
guest branches show up in the LBRs when the host is tracing is completely up to
the whims of the guest.  If that's correct, then again, what's the point of the
dance between KVM and perf?

Beyond the "how does this work" issues, there needs to be tests.  At the absolute
minimum, there needs to be selftests showing that this stuff actually works, that
save/restore (migration) works, that the MSRs can/can't be accessed when guest
CPUID is (in)correctly configured, etc. And I would really, really like to have
tests that force contention between host and guests, e.g. to make sure that KVM
isn't leaking host state or outright exploding, but I can understand that those
types of tests would be very difficult to write.
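
E.g. the guest-code half of such a selftest could be as simple as the
below (MSR indices from the SDM; the host-side harness and the negative
checks with the CPUID bit cleared are omitted):

  #define MSR_ARCH_LBR_CTL        0x000014ce
  #define MSR_ARCH_LBR_DEPTH      0x000014cf

  static void guest_code(void)
  {
          uint64_t depth = rdmsr(MSR_ARCH_LBR_DEPTH);

          /* Re-writing the current depth must be accepted and read back. */
          wrmsr(MSR_ARCH_LBR_DEPTH, depth);
          GUEST_ASSERT(rdmsr(MSR_ARCH_LBR_DEPTH) == depth);

          /* LBREn (bit 0) should stick when Arch LBR is advertised. */
          wrmsr(MSR_ARCH_LBR_CTL, 1ull);
          GUEST_ASSERT(rdmsr(MSR_ARCH_LBR_CTL) & 1ull);

          GUEST_DONE();
  }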

I've pushed a heavily reworked, but definitely broken, version to

  git@github.com:sean-jc/linux.git x86/arch_lbrs

It compiles, but it's otherwise untested and there are known gaps.  E.g. I omitted
toggling load+clear of ARCH_LBR_CTL because I couldn't figure out the intended
behavior.
  
Yang, Weijiang Jan. 30, 2023, 1:38 p.m. UTC | #3
On 1/28/2023 6:46 AM, Sean Christopherson wrote:
> On Thu, Nov 24, 2022, Yang Weijiang wrote:
>> Intel CPU model-specific LBR(Legacy LBR) has evolved to Architectural
>> LBR(Arch LBR [0]), it's the replacement of legacy LBR on new platforms.
>> The native support patches were merged into 5.9 kernel tree, and this
>> patch series is to enable Arch LBR in vPMU so that guest can benefit
>> from the feature.
>>
>> The main advantages of Arch LBR are [1]:
>> - Faster context switching due to XSAVES support and faster reset of
>>    LBR MSRs via the new DEPTH MSR
>> - Faster LBR read for a non-PEBS event due to XSAVES support, which
>>    lowers the overhead of the NMI handler.
>> - Linux kernel can support the LBR features without knowing the model
>>    number of the current CPU.
>>
>>  From end user's point of view, the usage of Arch LBR is the same as
>> the Legacy LBR that has been merged in the mainline.
>>
>> Note, in this series, there's one restriction for guest Arch LBR, i.e.,
>> guest can only set its LBR record depth the same as host's. This is due
>> to the special behavior of MSR_ARCH_LBR_DEPTH:
>> 1) On write to the MSR, it'll reset all Arch LBR recording MSRs to 0s.
>> 2) XRSTORS resets all record MSRs to 0s if the saved depth mismatches
>> MSR_ARCH_LBR_DEPTH.
>> Enforcing the restriction keeps KVM Arch LBR vPMU working flow simple
>> and straightforward.
>>
>> Paolo refactored the old series and the resulting patches became the
>> base of this new series, therefore he's the author of some patches.
> To be very blunt, this series is a mess.  I don't want to point fingers as there
> is plenty of blame to go around.  The existing LBR support is a confusing mess,
> vPMU as a whole has been neglected for too long, review feedback has been relatively
> non-existent, and I'm sure some of the mess is due to Paolo trying to hastily fix
> things up back when this was temporarily queued.
>
> However, for arch LBR support to be merged, things need to change.
>
> First and foremost, the existing LBR support needs to be documented.  Someone,
> I don't care who, needs to provide a detailed writeup of the contract between KVM
> and perf.  Specifically, I want to know:
>
>    1. When exactly is perf allowed to take control of the LBR MSRs?  Task switch?  IRQ?
>       NMI?
>
>    2. What is the expected behavior when perf is using LBRs?  Is the guest supposed
>       to be traced?
>
>    3. Why does KVM snapshot DEBUGCTL with IRQs enabled, but disables IRQs when
>       accessing LBR MSRs?
>
> It doesn't have to be polished, e.g. I'll happily wordsmith things into proper
> documentation, but I want to have a very clear understanding of how LBR support
> is _intended_ to function and how it all _actually_ functions without having to
> make guesses.
>
> And depending on the answers, I want to revisit KVM's LBR implementation before
> tackling arch LBRs.  Letting perf usurp LBRs while KVM has the vCPU loaded is
> frankly ridiculous.  Just have perf set a flag telling KVM that it needs to take
> control of LBRs and have KVM service the flag as a request or something.  Stealing
> the LBRs back in IRQ context adds a stupid amount of complexity without much value,
> e.g. waiting a few branches for KVM to get to a safe place isn't going to meaningfully
> change the traces.  If that can't actually happen, then why on earth does KVM need
> to disable IRQs to read MSRs?
>
> And AFAICT, since KVM unconditionally loads the guest's DEBUGCTL, whether or not
> guest branches show up in the LBRs when the host is tracing is completely up to
> the whims of the guest.  If that's correct, then again, what's the point of the
> dance between KVM and perf?
>
> Beyond the "how does this work" issues, there needs to be tests.  At the absolute
> minimum, there needs to be selftests showing that this stuff actually works, that
> save/restore (migration) works, that the MSRs can/can't be accessed when guest
> CPUID is (in)correctly configured, etc. And I would really, really like to have
> tests that force contention between host and guests, e.g. to make sure that KVM
> isn't leaking host state or outright exploding, but I can understand that those
> types of tests would be very difficult to write.
>
> I've pushed a heavily reworked, but definitely broken, version to
>
>    git@github.com:sean-jc/linux.git x86/arch_lbrs
>
> It compiles, but it's otherwise untested and there are known gaps.  E.g. I omitted
> toggling load+clear of ARCH_LBR_CTL because I couldn't figure out the intended
> behavior.

Thanks for your detailed review and comments!

I'll check your reworked version and discuss with the stakeholders how
to move this work forward.
  
Like Xu June 5, 2023, 9:50 a.m. UTC | #4
+xiongzha to follow up.

On 28/1/2023 6:46 am, Sean Christopherson wrote:
> On Thu, Nov 24, 2022, Yang Weijiang wrote:
>> Intel CPU model-specific LBR(Legacy LBR) has evolved to Architectural
>> LBR(Arch LBR [0]), it's the replacement of legacy LBR on new platforms.
>> The native support patches were merged into 5.9 kernel tree, and this
>> patch series is to enable Arch LBR in vPMU so that guest can benefit
>> from the feature.
>>
>> The main advantages of Arch LBR are [1]:
>> - Faster context switching due to XSAVES support and faster reset of
>>    LBR MSRs via the new DEPTH MSR
>> - Faster LBR read for a non-PEBS event due to XSAVES support, which
>>    lowers the overhead of the NMI handler.
>> - Linux kernel can support the LBR features without knowing the model
>>    number of the current CPU.
>>
>>  From end user's point of view, the usage of Arch LBR is the same as
>> the Legacy LBR that has been merged in the mainline.
>>
>> Note, in this series, there's one restriction for guest Arch LBR, i.e.,
>> guest can only set its LBR record depth the same as host's. This is due
>> to the special behavior of MSR_ARCH_LBR_DEPTH:
>> 1) On write to the MSR, it'll reset all Arch LBR recording MSRs to 0s.
>> 2) XRSTORS resets all record MSRs to 0s if the saved depth mismatches
>> MSR_ARCH_LBR_DEPTH.
>> Enforcing the restriction keeps KVM Arch LBR vPMU working flow simple
>> and straightforward.
>>
>> Paolo refactored the old series and the resulting patches became the
>> base of this new series, therefore he's the author of some patches.
> 
> To be very blunt, this series is a mess.  I don't want to point fingers as there
> is plenty of blame to go around.  The existing LBR support is a confusing mess,
> vPMU as a whole has been neglected for too long, review feedback has been relatively
> non-existent, and I'm sure some of the mess is due to Paolo trying to hastily fix
> things up back when this was temporarily queued.
> 
> However, for arch LBR support to be merged, things need to change.
> 
> First and foremost, the existing LBR support needs to be documented.  Someone,
> I don't care who, needs to provide a detailed writeup of the contract between KVM
> and perf.  Specifically, I want to know:
> 
>    1. When exactly is perf allowed to take control of the LBR MSRs?  Task switch?  IRQ?
>       NMI?
> 
>    2. What is the expected behavior when perf is using LBRs?  Is the guest supposed
>       to be traced?
> 
>    3. Why does KVM snapshot DEBUGCTL with IRQs enabled, but disables IRQs when
>       accessing LBR MSRs?
> 
> It doesn't have to be polished, e.g. I'll happily wordsmith things into proper
> documentation, but I want to have a very clear understanding of how LBR support
> is _intended_ to function and how it all _actually_ functions without having to
> make guesses.

This is a very good topic for the LPC KVM Microconference.

Many thanks to Sean for spelling out something that, until now, only I
had been thinking about.  Letting the host and guest use the PMU at the
same time in peace (a hybrid profiling mode), rather than introducing
exclusivity, is a goal well worth pursuing, and it's clear that
kvm+perf lacks reasonable, well-documented support for the case where a
host perf user interferes.

Ref: https://lpc.events/event/17/page/200-proposed-microconferences#kvm

> 
> And depending on the answers, I want to revisit KVM's LBR implementation before
> tackling arch LBRs.  Letting perf usurp LBRs while KVM has the vCPU loaded is
> frankly ridiculous.  Just have perf set a flag telling KVM that it needs to take
> control of LBRs and have KVM service the flag as a request or something.  Stealing
> the LBRs back in IRQ context adds a stupid amount of complexity without much value,
> e.g. waiting a few branches for KVM to get to a safe place isn't going to meaningfully
> change the traces.  If that can't actually happen, then why on earth does KVM need
> to disable IRQs to read MSRs?
> 
> And AFAICT, since KVM unconditionally loads the guest's DEBUGCTL, whether or not
> guest branches show up in the LBRs when the host is tracing is completely up to
> the whims of the guest.  If that's correct, then again, what's the point of the
> dance between KVM and perf?
> 
> Beyond the "how does this work" issues, there needs to be tests.  At the absolute
> minimum, there needs to be selftests showing that this stuff actually works, that
> save/restore (migration) works, that the MSRs can/can't be accessed when guest
> CPUID is (in)correctly configured, etc. And I would really, really like to have
> tests that force contention between host and guests, e.g. to make sure that KVM
> isn't leaking host state or outright exploding, but I can understand that those
> types of tests would be very difficult to write.
> 
> I've pushed a heavily reworked, but definitely broken, version to
> 
>    git@github.com:sean-jc/linux.git x86/arch_lbrs
> 
> It compiles, but it's otherwise untested and there are known gaps.  E.g. I omitted
> toggling load+clear of ARCH_LBR_CTL because I couldn't figure out the intended
> behavior.