Message ID | 20221108151532.1377783-1-pbonzini@redhat.com |
---|---|
From | Paolo Bonzini <pbonzini@redhat.com> |
To | linux-kernel@vger.kernel.org, kvm@vger.kernel.org |
Cc | nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com |
Subject | [PATCH v2 0/8] KVM: SVM: fixes for vmentry code |
Date | Tue, 8 Nov 2022 10:15:24 -0500 |
Series | KVM: SVM: fixes for vmentry code |
Message
Paolo Bonzini
Nov. 8, 2022, 3:15 p.m. UTC
This series comprises two related fixes:

- the FILL_RETURN_BUFFER macro in -next needs to access percpu data,
  hence the GS segment base needs to be loaded before FILL_RETURN_BUFFER.
  This means moving guest vmload/vmsave and host vmload to assembly
  (patches 5 and 6).

- because AMD wants the OS to set STIBP to 1 before executing the
  return thunk (un)training sequence, IA32_SPEC_CTRL must be restored
  before UNTRAIN_RET, too.  This must also be moved to assembly and,
  for consistency, the guest SPEC_CTRL is also loaded in there
  (patch 7).

Neither is particularly hard; however, because of 32-bit systems one needs
to keep the number of arguments to __svm_vcpu_run to three or fewer.
One is taken for whether IA32_SPEC_CTRL is intercepted, and one for the
host save area, so all accesses to the vcpu_svm struct have to be done
from assembly too.  This is done in patches 2 to 4, and it turns out
not to be that bad; in fact I think the code is simpler than before
after these prerequisites, and even at the end of the series it is not
much harder to follow despite doing a lot more stuff.  Care has been
taken to keep the "normal" and SEV-ES code as similar as possible,
even though the latter would not hit the three-argument barrier.

The above summary leaves out the more mundane patches 1 and 8.  The
former introduces a separate asm-offsets.c file for KVM, so that
kernel/asm-offsets.c does not have to do ugly includes with ../ paths.
The latter is dead code removal.

Thanks,

Paolo

v1->v2: use a separate asm-offsets.c file instead of hacking around
        the arch/x86/kvm/svm/svm.h file; this could have been done
        also with just a "#ifndef COMPILE_OFFSETS", but Sean's
        suggestion is cleaner and there is a precedent in
        drivers/memory/ for private asm-offsets files

        keep preparatory cleanups together at the beginning of the
        series

        move SPEC_CTRL save/restore out of line [Jim]

Paolo Bonzini (8):
  KVM: x86: use a separate asm-offsets.c file
  KVM: SVM: replace regs argument of __svm_vcpu_run with vcpu_svm
  KVM: SVM: adjust register allocation for __svm_vcpu_run
  KVM: SVM: retrieve VMCB from assembly
  KVM: SVM: move guest vmsave/vmload to assembly
  KVM: SVM: restore host save area from assembly
  KVM: SVM: move MSR_IA32_SPEC_CTRL save/restore to assembly
  x86, KVM: remove unnecessary argument to x86_virt_spec_ctrl and callers

 arch/x86/include/asm/spec-ctrl.h |  10 +-
 arch/x86/kernel/asm-offsets.c    |   6 -
 arch/x86/kernel/cpu/bugs.c       |  15 +-
 arch/x86/kvm/Makefile            |  12 ++
 arch/x86/kvm/kvm-asm-offsets.c   |  28 ++++
 arch/x86/kvm/svm/svm.c           |  53 +++----
 arch/x86/kvm/svm/svm.h           |   4 +-
 arch/x86/kvm/svm/svm_ops.h       |   5 -
 arch/x86/kvm/svm/vmenter.S       | 241 ++++++++++++++++++++++++-------
 arch/x86/kvm/vmx/vmenter.S       |   2 +-
 10 files changed, 259 insertions(+), 117 deletions(-)
 create mode 100644 arch/x86/kvm/kvm-asm-offsets.c
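A note for readers wondering where the three-argument limit comes from:
32-bit x86 kernels are built with -mregparm=3, so only the first three
integer arguments are passed in registers (%eax, %edx, %ecx); a fourth
would land on the stack, and the assembly entry point would then have to
dig it out itself.  The declaration below is only a hypothetical sketch of
that shape; the parameter names and types are assumptions, not the exact
signature used in the series.

/*
 * Hypothetical sketch of an entry point that respects the limit described
 * above; names and types are assumptions, not the series' exact signature.
 * With -mregparm=3 on 32-bit x86, all three arguments arrive in registers,
 * so the assembly never has to touch the stack to find them.
 */
#include <linux/types.h>

struct vcpu_svm;

void __svm_vcpu_run(struct vcpu_svm *svm,        /* 1st arg: %eax on 32-bit */
		    void *host_save_area,        /* 2nd arg: %edx on 32-bit */
		    bool spec_ctrl_intercepted); /* 3rd arg: %ecx on 32-bit */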
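On patch 1: the separate file follows the usual kbuild asm-offsets pattern,
in which a C file is compiled to assembly and its OFFSET()/DEFINE()
annotations are scraped into a generated header that the .S files include.
The sketch below only illustrates that pattern; the symbol and field names
are assumptions, not the exact contents of the new
arch/x86/kvm/kvm-asm-offsets.c.

/*
 * Illustrative sketch of a private asm-offsets file (the symbol and field
 * names are assumptions, not the exact contents of kvm-asm-offsets.c).
 * kbuild compiles this to assembly and turns the OFFSET() annotations into
 * a generated kvm-asm-offsets.h, so vmenter.S can use symbolic struct
 * offsets without kernel/asm-offsets.c needing ../ includes into KVM.
 */
#include <linux/kbuild.h>
#include "svm/svm.h"

static void __used common(void)
{
	/* offset of the GPR array inside struct vcpu_svm, so the assembly
	 * can address guest registers relative to the vcpu_svm pointer */
	OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs);
}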
Comments
On Tue, Nov 08, 2022 at 10:15:24AM -0500, Paolo Bonzini wrote:
> This series comprises two related fixes:
[...]

I applied this series on next-20221108, which has the call depth tracking
patches, and I no longer see the panic when starting a guest on my AMD test
system. I can still start a simple nested guest without any problems, which
is about the extent of my regular KVM testing. I did test the same kernel
on my Intel systems and saw no problems there, but that seems expected
given the diffstat.

Thank you for the quick response and fixes!
Tested-by: Nathan Chancellor <nathan@kernel.org>

One small nit I noticed: kvm-asm-offsets.h should be added to a .gitignore
file in arch/x86/kvm.

  $ git status --short
  ?? arch/x86/kvm/kvm-asm-offsets.h

Cheers,
Nathan
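For reference, the suggested fix would be a one-line ignore file; the path
and pattern below are an assumption based on the note above, not a
committed change. The leading slash anchors the pattern to arch/x86/kvm so
it cannot hide a same-named file elsewhere in the tree.

# arch/x86/kvm/.gitignore (assumed location and pattern, per the note above)
/kvm-asm-offsets.h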