[0/8] KVM: SVM: fixes for vmentry code

Message ID 20221107145436.276079-1-pbonzini@redhat.com

Message

Paolo Bonzini Nov. 7, 2022, 2:54 p.m. UTC
  This series comprises two related fixes:

- the FILL_RETURN_BUFFER macro in -next needs to access percpu data,
  hence the GS segment base needs to be loaded before FILL_RETURN_BUFFER.
  This means moving guest vmload/vmsave and host vmload to assembly
  (patches 4 and 6).

- because AMD wants the OS to set STIBP to 1 before executing the
  return thunk (un)training sequence, IA32_SPEC_CTRL must be restored
  before UNTRAIN_RET, too.  This must also be moved to assembly and,
  for consistency, the guest SPEC_CTRL is also loaded in there
  (patch 7); a rough sketch of the resulting ordering follows this list.
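
To make the ordering concrete, here is a rough sketch of the post-#VMEXIT
sequence implied by the two fixes.  It is schematic rather than the literal
vmenter.S code: register operands and macro arguments are omitted, and the
exact placement of the pieces is whatever the individual patches end up with.

    vmsave                    # stash extra guest state in the guest VMCB
    vmload                    # host save area: brings back host GS.base etc.
    FILL_RETURN_BUFFER ...    # may touch percpu data, so needs host GS.base
    # wrmsr to restore the host value of MSR_IA32_SPEC_CTRL, if necessary
    UNTRAIN_RET               # AMD wants the OS STIBP setting in place here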

Neither is particularly hard; however, because of 32-bit systems, one needs
to keep the number of arguments to __svm_vcpu_run to three or fewer.
One is taken for whether IA32_SPEC_CTRL is intercepted, and one for the
host save area, so all accesses to the vcpu_svm struct have to be done
from assembly too.  This is done in patches 2, 3 and 5 and it turns out
not to be that bad; in fact I don't think the code is much harder to
follow than before despite doing a lot more stuff.  Care has been taken
to keep the "normal" and SEV-ES code as similar as possible, too.

The above summary leaves out the more mundane patches 1 and 8.  They
are respectively preparation for adding more asm-offsets, and dead
code removal.  Most of the scary diffstat comes from patch 1, which is
purely moving inline functions out of svm.h into a separate header file.
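
For context, the accessors being moved are small static inlines along these
lines (quoted from memory of svm.h rather than copied from the patch, so
treat the details as approximate):

    /* after patch 1 these live in arch/x86/kvm/svm/vmcb.h, not svm.h */
    static inline void vmcb_mark_all_dirty(struct vmcb *vmcb)
    {
            vmcb->control.clean = 0;
    }

    static inline void vmcb_set_intercept(struct vmcb_control_area *control, u32 bit)
    {
            WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT);
            __set_bit(bit, (unsigned long *)&control->intercepts);
    }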

Peter Zijlstra had already sent a similar patch for the first issue last
Friday.  Unfortunately it did not take care of the 32-bit issue with the
number of arguments.  This series is independent of his, but I did steal
his organization of the exception fixup code because it's pretty.

Tested on 64-bit bare metal including SEV-ES, and on 32-bit nested.  On
top of this I also spent way too much time comparing the compiler's output
before the patches with the assembly code after.

Paolo

Supersedes: <20221028230723.3254250-1-pbonzini@redhat.com>

Paolo Bonzini (8):
  KVM: SVM: extract VMCB accessors to a new file
  KVM: SVM: replace regs argument of __svm_vcpu_run with vcpu_svm
  KVM: SVM: adjust register allocation for __svm_vcpu_run
  KVM: SVM: move guest vmsave/vmload to assembly
  KVM: SVM: retrieve VMCB from assembly
  KVM: SVM: restore host save area from assembly
  KVM: SVM: move MSR_IA32_SPEC_CTRL save/restore to assembly
  x86, KVM: remove unnecessary argument to x86_virt_spec_ctrl and
    callers

 arch/x86/include/asm/spec-ctrl.h |  10 +-
 arch/x86/kernel/asm-offsets.c    |  10 ++
 arch/x86/kernel/cpu/bugs.c       |  15 +-
 arch/x86/kvm/svm/avic.c          |   1 +
 arch/x86/kvm/svm/nested.c        |   1 +
 arch/x86/kvm/svm/sev.c           |   1 +
 arch/x86/kvm/svm/svm.c           |  54 +++-----
 arch/x86/kvm/svm/svm.h           | 204 +--------------------------
 arch/x86/kvm/svm/svm_onhyperv.c  |   1 +
 arch/x86/kvm/svm/svm_ops.h       |   5 -
 arch/x86/kvm/svm/vmcb.h          | 211 ++++++++++++++++++++++++++++
 arch/x86/kvm/svm/vmenter.S       | 231 ++++++++++++++++++++++++-------
 12 files changed, 434 insertions(+), 310 deletions(-)
 create mode 100644 arch/x86/kvm/svm/vmcb.h
  

Comments

Peter Zijlstra Nov. 7, 2022, 3:33 p.m. UTC | #1
On Mon, Nov 07, 2022 at 09:54:28AM -0500, Paolo Bonzini wrote:

> Paolo Bonzini (8):
>   KVM: SVM: extract VMCB accessors to a new file
>   KVM: SVM: replace regs argument of __svm_vcpu_run with vcpu_svm
>   KVM: SVM: adjust register allocation for __svm_vcpu_run
>   KVM: SVM: move guest vmsave/vmload to assembly
>   KVM: SVM: retrieve VMCB from assembly
>   KVM: SVM: restore host save area from assembly
>   KVM: SVM: move MSR_IA32_SPEC_CTRL save/restore to assembly
>   x86, KVM: remove unnecessary argument to x86_virt_spec_ctrl and
>     callers

Nice!

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>