[v2,0/8] KVM: SVM: fixes for vmentry code

Message ID 20221108151532.1377783-1-pbonzini@redhat.com
Series: KVM: SVM: fixes for vmentry code

Message

Paolo Bonzini Nov. 8, 2022, 3:15 p.m. UTC
  This series comprises two related fixes:

- the FILL_RETURN_BUFFER macro in -next needs to access percpu data,
  so the GS segment base has to be loaded before FILL_RETURN_BUFFER runs.
  This means moving guest vmload/vmsave and host vmload to assembly
  (patches 5 and 6).

- because AMD wants the OS to set STIBP to 1 before executing the
  return thunk (un)training sequence, IA32_SPEC_CTRL must be restored
  before UNTRAIN_RET, too.  This must also be moved to assembly and,
  for consistency, the guest SPEC_CTRL is also loaded there
  (patch 7).  A rough sketch of both ordering constraints follows.
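
To illustrate, here is a simplified 64-bit sketch of the post-#VMEXIT
ordering; it is not the actual vmenter.S code, and the saved host
SPEC_CTRL value is assumed to live in a hypothetical variable:

	/* %rax = physical address of the host save area (illustrative) */
	vmload %rax		/* restores the host GS base, among other state */

	/*
	 * FILL_RETURN_BUFFER in -next reaches percpu data through GS,
	 * so it may only run after the vmload above.
	 */
	FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE

	/* Restore the host IA32_SPEC_CTRL (which may have STIBP set)... */
	movl $MSR_IA32_SPEC_CTRL, %ecx
	movl saved_host_spec_ctrl(%rip), %eax	/* hypothetical location */
	xorl %edx, %edx
	wrmsr

	/* ...before the return thunk untraining sequence. */
	UNTRAIN_RET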

Neither is particularly hard; however, because of 32-bit systems, the
number of arguments to __svm_vcpu_run has to be kept to three or fewer.
One is taken by whether IA32_SPEC_CTRL is intercepted and one by the
host save area, so all accesses to the vcpu_svm struct have to be done
from assembly too.  This is done in patches 2 to 4, and it turns out
not to be that bad; in fact I think the code is simpler than before
after these prerequisites, and even at the end of the series it is not
much harder to follow despite doing a lot more stuff.  Care has been
taken to keep the "normal" and SEV-ES code as similar as possible,
even though the latter would not hit the three-argument limit.
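
As an illustration of what this buys, the assembly can pull whatever it
needs straight out of the vcpu_svm pointer via offsets generated at
build time; roughly (the offset names below are made up for the example):

	/* %rdi = struct vcpu_svm * (first argument on 64-bit) */
	mov SVM_current_vmcb(%rdi), %rax	/* svm->current_vmcb */
	mov KVM_VMCB_pa(%rax), %rax		/* ...->pa, handed to vmrun */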

The above summary leaves out the more mundane patches 1 and 8.  The
former introduces a separate asm-offsets.c file for KVM, so that
kernel/asm-offsets.c does not have to do ugly includes with ../ paths.
The latter is dead code removal.
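
For reference, such a file follows the usual kbuild asm-offsets pattern;
a minimal sketch (the field chosen here is just an example, not
necessarily what patch 1 emits) would be:

/* arch/x86/kvm/kvm-asm-offsets.c (sketch) */
#include <linux/kbuild.h>

#include "svm/svm.h"

static void __used common(void)
{
	/* becomes "#define SVM_vcpu_arch_regs <offset>" in the generated header */
	OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs);
}

The Makefile turns this into kvm-asm-offsets.h the same way
arch/x86/kernel/asm-offsets.c is turned into asm-offsets.h, and
vmenter.S includes the generated header.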

Thanks,

Paolo

v1->v2: use a separate asm-offsets.c file instead of hacking around
	the arch/x86/kvm/svm/svm.h file; this could also have been
	done with just an "#ifndef COMPILE_OFFSETS", but Sean's
	suggestion is cleaner and there is precedent in
	drivers/memory/ for private asm-offsets files

	keep preparatory cleanups together at the beginning of the
	series

	move SPEC_CTRL save/restore out of line [Jim]

Paolo Bonzini (8):
  KVM: x86: use a separate asm-offsets.c file
  KVM: SVM: replace regs argument of __svm_vcpu_run with vcpu_svm
  KVM: SVM: adjust register allocation for __svm_vcpu_run
  KVM: SVM: retrieve VMCB from assembly
  KVM: SVM: move guest vmsave/vmload to assembly
  KVM: SVM: restore host save area from assembly
  KVM: SVM: move MSR_IA32_SPEC_CTRL save/restore to assembly
  x86, KVM: remove unnecessary argument to x86_virt_spec_ctrl and
    callers

 arch/x86/include/asm/spec-ctrl.h |  10 +-
 arch/x86/kernel/asm-offsets.c    |   6 -
 arch/x86/kernel/cpu/bugs.c       |  15 +-
 arch/x86/kvm/Makefile            |  12 ++
 arch/x86/kvm/kvm-asm-offsets.c   |  28 ++++
 arch/x86/kvm/svm/svm.c           |  53 +++----
 arch/x86/kvm/svm/svm.h           |   4 +-
 arch/x86/kvm/svm/svm_ops.h       |   5 -
 arch/x86/kvm/svm/vmenter.S       | 241 ++++++++++++++++++++++++-------
 arch/x86/kvm/vmx/vmenter.S       |   2 +-
 10 files changed, 259 insertions(+), 117 deletions(-)
 create mode 100644 arch/x86/kvm/kvm-asm-offsets.c
  

Comments

Nathan Chancellor Nov. 8, 2022, 7:43 p.m. UTC | #1
On Tue, Nov 08, 2022 at 10:15:24AM -0500, Paolo Bonzini wrote:
> This series comprises two related fixes:
> 
> [...]

I applied this series on next-20221108, which has the call depth
tracking patches, and I no longer see the panic when starting a guest on
my AMD test system.  I can still start a simple nested guest without
any problems, which is about the extent of my regular KVM testing.  I
also tested the same kernel on my Intel systems and saw no problems
there, but that seems expected given the diffstat.  Thank you for the
quick response and fixes!

Tested-by: Nathan Chancellor <nathan@kernel.org>

One small nit I noticed: kvm-asm-offsets.h should be added to a
.gitignore file in arch/x86/kvm.

$ git status --short
?? arch/x86/kvm/kvm-asm-offsets.h
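
Something along these lines (illustrative) would cover it:

	# arch/x86/kvm/.gitignore
	kvm-asm-offsets.h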

Cheers,
Nathan