From patchwork Mon Nov 7 14:54:29 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 16492
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com, stable@vger.kernel.org
Subject: [PATCH 1/8] KVM: SVM: extract VMCB accessors to a new file
Date: Mon, 7 Nov 2022 09:54:29 -0500
Message-Id: <20221107145436.276079-2-pbonzini@redhat.com>
In-Reply-To: <20221107145436.276079-1-pbonzini@redhat.com>
References: <20221107145436.276079-1-pbonzini@redhat.com>
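As background for the change described in the message below: the VMCB accessors are static inlines, and keeping them in svm.h would force any file that includes svm.h for structure offsets (such as asm-offsets.c) to also find kvm_cache_regs.h, which is not in its include path. A minimal user-space sketch of the split follows; the types are toy stand-ins rather than the kernel's, and only vmcb_mark_all_dirty() mirrors an accessor the patch actually moves.

/* Toy model, not kernel code: layout-only definitions (the "svm.h" side)
 * that an offset generator can consume, with the inline accessor (the
 * "vmcb.h" side) kept separate so it can pull in extra headers freely.
 */
#include <stdio.h>
#include <stddef.h>

struct vmcb_control_area {
        unsigned int clean;             /* dirty-bit cache, simplified */
};

struct vmcb {
        struct vmcb_control_area control;
};

/* Accessor as it would live in the new, separate header. */
static inline void vmcb_mark_all_dirty(struct vmcb *vmcb)
{
        vmcb->control.clean = 0;
}

int main(void)
{
        /* An asm-offsets-style consumer only needs the layout. */
        printf("offset of control: %zu\n", offsetof(struct vmcb, control));

        struct vmcb v = { .control = { .clean = 0xffu } };
        vmcb_mark_all_dirty(&v);
        printf("clean after mark_all_dirty: %u\n", v.control.clean);
        return 0;
}

The point is only the dependency direction: the file that generates offsets never has to see the accessors' includes.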
linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1748849694501383947?= X-GMAIL-MSGID: =?utf-8?q?1748849694501383947?= Having inline functions confuses the compilation of asm-offsets.c, which cannot find kvm_cache_regs.h because arch/x86/kvm is not in asm-offset.c's include path. Just extract the functions to a new file. No functional change intended. Cc: stable@vger.kernel.org Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly") Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/avic.c | 1 + arch/x86/kvm/svm/nested.c | 1 + arch/x86/kvm/svm/sev.c | 1 + arch/x86/kvm/svm/svm.c | 1 + arch/x86/kvm/svm/svm.h | 200 ------------------------------ arch/x86/kvm/svm/svm_onhyperv.c | 1 + arch/x86/kvm/svm/vmcb.h | 211 ++++++++++++++++++++++++++++++++ 7 files changed, 216 insertions(+), 200 deletions(-) create mode 100644 arch/x86/kvm/svm/vmcb.h diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c index 6919dee69f18..cc651a3310b1 100644 --- a/arch/x86/kvm/svm/avic.c +++ b/arch/x86/kvm/svm/avic.c @@ -26,6 +26,7 @@ #include "x86.h" #include "irq.h" #include "svm.h" +#include "vmcb.h" /* AVIC GATAG is encoded using VM and VCPU IDs */ #define AVIC_VCPU_ID_BITS 8 diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 4c620999d230..6a90aefb7a8e 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -28,6 +28,7 @@ #include "cpuid.h" #include "lapic.h" #include "svm.h" +#include "vmcb.h" #include "hyperv.h" #define CC KVM_NESTED_VMENTER_CONSISTENCY_CHECK diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 28064060413a..73a229a9975b 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -25,6 +25,7 @@ #include "mmu.h" #include "x86.h" #include "svm.h" +#include "vmcb.h" #include "svm_ops.h" #include "cpuid.h" #include "trace.h" diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 58f0077d9357..cd71f53590b2 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -43,6 +43,7 @@ #include "trace.h" #include "svm.h" +#include "vmcb.h" #include "svm_ops.h" #include "kvm_onhyperv.h" diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 6a7686bf6900..222856788153 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -22,8 +22,6 @@ #include #include -#include "kvm_cache_regs.h" - #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT) #define IOPM_SIZE PAGE_SIZE * 3 @@ -327,27 +325,6 @@ static __always_inline bool sev_es_guest(struct kvm *kvm) #endif } -static inline void vmcb_mark_all_dirty(struct vmcb *vmcb) -{ - vmcb->control.clean = 0; -} - -static inline void vmcb_mark_all_clean(struct vmcb *vmcb) -{ - vmcb->control.clean = VMCB_ALL_CLEAN_MASK - & ~VMCB_ALWAYS_DIRTY_MASK; -} - -static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit) -{ - vmcb->control.clean &= ~(1 << bit); -} - -static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit) -{ - return !test_bit(bit, (unsigned long *)&vmcb->control.clean); -} - static __always_inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu) { return container_of(vcpu, struct vcpu_svm, vcpu); @@ -363,161 +340,6 @@ static __always_inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu) */ #define SVM_REGS_LAZY_LOAD_SET (1 << VCPU_EXREG_PDPTR) -static inline void vmcb_set_intercept(struct vmcb_control_area *control, u32 bit) -{ - WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT); - __set_bit(bit, (unsigned long *)&control->intercepts); -} - -static inline void 
vmcb_clr_intercept(struct vmcb_control_area *control, u32 bit) -{ - WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT); - __clear_bit(bit, (unsigned long *)&control->intercepts); -} - -static inline bool vmcb_is_intercept(struct vmcb_control_area *control, u32 bit) -{ - WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT); - return test_bit(bit, (unsigned long *)&control->intercepts); -} - -static inline bool vmcb12_is_intercept(struct vmcb_ctrl_area_cached *control, u32 bit) -{ - WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT); - return test_bit(bit, (unsigned long *)&control->intercepts); -} - -static inline void set_dr_intercepts(struct vcpu_svm *svm) -{ - struct vmcb *vmcb = svm->vmcb01.ptr; - - if (!sev_es_guest(svm->vcpu.kvm)) { - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR0_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR1_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR2_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR3_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR4_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR5_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR6_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR0_WRITE); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR1_WRITE); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR2_WRITE); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR3_WRITE); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR4_WRITE); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR5_WRITE); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR6_WRITE); - } - - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE); - - recalc_intercepts(svm); -} - -static inline void clr_dr_intercepts(struct vcpu_svm *svm) -{ - struct vmcb *vmcb = svm->vmcb01.ptr; - - vmcb->control.intercepts[INTERCEPT_DR] = 0; - - /* DR7 access must remain intercepted for an SEV-ES guest */ - if (sev_es_guest(svm->vcpu.kvm)) { - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ); - vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE); - } - - recalc_intercepts(svm); -} - -static inline void set_exception_intercept(struct vcpu_svm *svm, u32 bit) -{ - struct vmcb *vmcb = svm->vmcb01.ptr; - - WARN_ON_ONCE(bit >= 32); - vmcb_set_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + bit); - - recalc_intercepts(svm); -} - -static inline void clr_exception_intercept(struct vcpu_svm *svm, u32 bit) -{ - struct vmcb *vmcb = svm->vmcb01.ptr; - - WARN_ON_ONCE(bit >= 32); - vmcb_clr_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + bit); - - recalc_intercepts(svm); -} - -static inline void svm_set_intercept(struct vcpu_svm *svm, int bit) -{ - struct vmcb *vmcb = svm->vmcb01.ptr; - - vmcb_set_intercept(&vmcb->control, bit); - - recalc_intercepts(svm); -} - -static inline void svm_clr_intercept(struct vcpu_svm *svm, int bit) -{ - struct vmcb *vmcb = svm->vmcb01.ptr; - - vmcb_clr_intercept(&vmcb->control, bit); - - recalc_intercepts(svm); -} - -static inline bool svm_is_intercept(struct vcpu_svm *svm, int bit) -{ - return vmcb_is_intercept(&svm->vmcb->control, bit); -} - -static inline bool nested_vgif_enabled(struct vcpu_svm *svm) -{ - return svm->vgif_enabled && (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK); -} - -static inline struct vmcb *get_vgif_vmcb(struct vcpu_svm *svm) -{ - if (!vgif) - return NULL; - - if (is_guest_mode(&svm->vcpu) && !nested_vgif_enabled(svm)) - return svm->nested.vmcb02.ptr; - else - return svm->vmcb01.ptr; -} - -static inline void enable_gif(struct vcpu_svm *svm) -{ - struct vmcb 
*vmcb = get_vgif_vmcb(svm); - - if (vmcb) - vmcb->control.int_ctl |= V_GIF_MASK; - else - svm->vcpu.arch.hflags |= HF_GIF_MASK; -} - -static inline void disable_gif(struct vcpu_svm *svm) -{ - struct vmcb *vmcb = get_vgif_vmcb(svm); - - if (vmcb) - vmcb->control.int_ctl &= ~V_GIF_MASK; - else - svm->vcpu.arch.hflags &= ~HF_GIF_MASK; -} - -static inline bool gif_set(struct vcpu_svm *svm) -{ - struct vmcb *vmcb = get_vgif_vmcb(svm); - - if (vmcb) - return !!(vmcb->control.int_ctl & V_GIF_MASK); - else - return !!(svm->vcpu.arch.hflags & HF_GIF_MASK); -} - static inline bool nested_npt_enabled(struct vcpu_svm *svm) { return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE; @@ -567,28 +389,6 @@ void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode, #define NESTED_EXIT_DONE 1 /* Exit caused nested vmexit */ #define NESTED_EXIT_CONTINUE 2 /* Further checks needed */ -static inline bool nested_svm_virtualize_tpr(struct kvm_vcpu *vcpu) -{ - struct vcpu_svm *svm = to_svm(vcpu); - - return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK); -} - -static inline bool nested_exit_on_smi(struct vcpu_svm *svm) -{ - return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_SMI); -} - -static inline bool nested_exit_on_intr(struct vcpu_svm *svm) -{ - return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_INTR); -} - -static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) -{ - return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_NMI); -} - int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa, struct vmcb *vmcb12, bool from_vmrun); void svm_leave_nested(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c index 8cdc62c74a96..ae0a101329e6 100644 --- a/arch/x86/kvm/svm/svm_onhyperv.c +++ b/arch/x86/kvm/svm/svm_onhyperv.c @@ -8,6 +8,7 @@ #include #include "svm.h" +#include "vmcb.h" #include "svm_ops.h" #include "hyperv.h" diff --git a/arch/x86/kvm/svm/vmcb.h b/arch/x86/kvm/svm/vmcb.h new file mode 100644 index 000000000000..8757cda27e3a --- /dev/null +++ b/arch/x86/kvm/svm/vmcb.h @@ -0,0 +1,211 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Kernel-based Virtual Machine driver for Linux + * + * AMD SVM support - VMCB accessors + */ + +#ifndef __SVM_VMCB_H +#define __SVM_VMCB_H + +#include "kvm_cache_regs.h" + +static inline void vmcb_mark_all_dirty(struct vmcb *vmcb) +{ + vmcb->control.clean = 0; +} + +static inline void vmcb_mark_all_clean(struct vmcb *vmcb) +{ + vmcb->control.clean = VMCB_ALL_CLEAN_MASK + & ~VMCB_ALWAYS_DIRTY_MASK; +} + +static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit) +{ + vmcb->control.clean &= ~(1 << bit); +} + +static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit) +{ + return !test_bit(bit, (unsigned long *)&vmcb->control.clean); +} + +static inline void vmcb_set_intercept(struct vmcb_control_area *control, u32 bit) +{ + WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT); + __set_bit(bit, (unsigned long *)&control->intercepts); +} + +static inline void vmcb_clr_intercept(struct vmcb_control_area *control, u32 bit) +{ + WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT); + __clear_bit(bit, (unsigned long *)&control->intercepts); +} + +static inline bool vmcb_is_intercept(struct vmcb_control_area *control, u32 bit) +{ + WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT); + return test_bit(bit, (unsigned long *)&control->intercepts); +} + +static inline bool vmcb12_is_intercept(struct vmcb_ctrl_area_cached *control, u32 bit) +{ + WARN_ON_ONCE(bit >= 32 * MAX_INTERCEPT); + return 
test_bit(bit, (unsigned long *)&control->intercepts); +} + +static inline void set_dr_intercepts(struct vcpu_svm *svm) +{ + struct vmcb *vmcb = svm->vmcb01.ptr; + + if (!sev_es_guest(svm->vcpu.kvm)) { + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR0_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR1_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR2_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR3_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR4_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR5_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR6_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR0_WRITE); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR1_WRITE); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR2_WRITE); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR3_WRITE); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR4_WRITE); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR5_WRITE); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR6_WRITE); + } + + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE); + + recalc_intercepts(svm); +} + +static inline void clr_dr_intercepts(struct vcpu_svm *svm) +{ + struct vmcb *vmcb = svm->vmcb01.ptr; + + vmcb->control.intercepts[INTERCEPT_DR] = 0; + + /* DR7 access must remain intercepted for an SEV-ES guest */ + if (sev_es_guest(svm->vcpu.kvm)) { + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ); + vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE); + } + + recalc_intercepts(svm); +} + +static inline void set_exception_intercept(struct vcpu_svm *svm, u32 bit) +{ + struct vmcb *vmcb = svm->vmcb01.ptr; + + WARN_ON_ONCE(bit >= 32); + vmcb_set_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + bit); + + recalc_intercepts(svm); +} + +static inline void clr_exception_intercept(struct vcpu_svm *svm, u32 bit) +{ + struct vmcb *vmcb = svm->vmcb01.ptr; + + WARN_ON_ONCE(bit >= 32); + vmcb_clr_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + bit); + + recalc_intercepts(svm); +} + +static inline void svm_set_intercept(struct vcpu_svm *svm, int bit) +{ + struct vmcb *vmcb = svm->vmcb01.ptr; + + vmcb_set_intercept(&vmcb->control, bit); + + recalc_intercepts(svm); +} + +static inline void svm_clr_intercept(struct vcpu_svm *svm, int bit) +{ + struct vmcb *vmcb = svm->vmcb01.ptr; + + vmcb_clr_intercept(&vmcb->control, bit); + + recalc_intercepts(svm); +} + +static inline bool svm_is_intercept(struct vcpu_svm *svm, int bit) +{ + return vmcb_is_intercept(&svm->vmcb->control, bit); +} + +static inline bool nested_vgif_enabled(struct vcpu_svm *svm) +{ + return svm->vgif_enabled && (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK); +} + +static inline struct vmcb *get_vgif_vmcb(struct vcpu_svm *svm) +{ + if (!vgif) + return NULL; + + if (is_guest_mode(&svm->vcpu) && !nested_vgif_enabled(svm)) + return svm->nested.vmcb02.ptr; + else + return svm->vmcb01.ptr; +} + +static inline void enable_gif(struct vcpu_svm *svm) +{ + struct vmcb *vmcb = get_vgif_vmcb(svm); + + if (vmcb) + vmcb->control.int_ctl |= V_GIF_MASK; + else + svm->vcpu.arch.hflags |= HF_GIF_MASK; +} + +static inline void disable_gif(struct vcpu_svm *svm) +{ + struct vmcb *vmcb = get_vgif_vmcb(svm); + + if (vmcb) + vmcb->control.int_ctl &= ~V_GIF_MASK; + else + svm->vcpu.arch.hflags &= ~HF_GIF_MASK; +} + +static inline bool gif_set(struct vcpu_svm *svm) +{ + struct vmcb *vmcb = get_vgif_vmcb(svm); + + if (vmcb) + return !!(vmcb->control.int_ctl & V_GIF_MASK); + else + return 
!!(svm->vcpu.arch.hflags & HF_GIF_MASK); +} + +static inline bool nested_svm_virtualize_tpr(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + + return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK); +} + +static inline bool nested_exit_on_smi(struct vcpu_svm *svm) +{ + return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_SMI); +} + +static inline bool nested_exit_on_intr(struct vcpu_svm *svm) +{ + return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_INTR); +} + +static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) +{ + return vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_NMI); +} + +#endif From patchwork Mon Nov 7 14:54:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 16489 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp2098852wru; Mon, 7 Nov 2022 06:56:38 -0800 (PST) X-Google-Smtp-Source: AMsMyM5dBRR3iL3AcMxjJV5+m97t5yPWYxxQ2kqrt7wMbsl9bVY7Ntm+9TPhVulMFTzre29Obqcd X-Received: by 2002:a17:907:a02:b0:7ac:9a51:3403 with SMTP id bb2-20020a1709070a0200b007ac9a513403mr49138034ejc.220.1667832997901; Mon, 07 Nov 2022 06:56:37 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1667832997; cv=none; d=google.com; s=arc-20160816; b=iCTyg29dXva+HkxpzhQMoMJMZuoiqmUX6/iynlcHRFb+3U86Rihu7Gp8kklknJzK7G ZocKty9qzAlNQb28NkWANJYMsl1s/JAdNNwiasejrCG/2Ts+v6djZz6r7hvGax30qwfd Q0bx93VvoZIbglhKx14l91srSSjOTZPBcYMD//PNZllXzT5ROCVnU92bI4ONOhU01fR4 aF7u0phATKh+SdVKowYl19D3Ke+YdZEBbW3Ci77grRHayKQbWrieTRdntDSiEvTF/qSl dvOOhsfS8oXs5IepOYWPRGJaaMyDr2MKr03tw+bI91c1ZIc+SkvTOKzDGxjSoFEAxcp+ L+yw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=RrSPTN1GL42ZbtbNo+KxgmJrO8rxOhcX7z7n51uQ838=; b=fmUjRSixwl0aOQ4+Kl3OiUaGn4ydQmdnYmkJF/UqagjxM4UO57irXLqt28x0Dmde/d B3hSA7748JkFua5pNcUnbMVK4HUN6elvrHJjIue0uLp/QdGG9JecXJFZtxqlVo9ABJjw PR/vo3VEpJCON8I6UloJ/bY2l2P+nNJq4Mjo9CPCLAr9axB8t93Opp1ryaJ61UiqDUVg gnAVoazYm4Tg5LchB989AuFykgc8qIUMpKxjMdS0m+3fTgFYQAildPw9ul++YDe4Wm4y rdGXdygmJGWR3aRvyUedm8k9ggg53Qgu5qObkUsFqFVBUctGfvRvFAtfZU9Xk2lKqEvY AYoQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=bsCRVqMO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com, stable@vger.kernel.org
Subject: [PATCH 2/8] KVM: SVM: replace regs argument of __svm_vcpu_run with vcpu_svm
Date: Mon, 7 Nov 2022 09:54:30 -0500
Message-Id: <20221107145436.276079-3-pbonzini@redhat.com>
In-Reply-To: <20221107145436.276079-1-pbonzini@redhat.com>
References: <20221107145436.276079-1-pbonzini@redhat.com>
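The message below switches __svm_vcpu_run() to take struct vcpu_svm * and has asm-offsets.c emit SVM_vcpu_arch_regs, so the assembly reaches vcpu.arch.regs[] through that single pointer. Here is a user-space sketch of the same offset arithmetic; the struct contents and register numbering are simplified assumptions, not the kernel's real layout.

/* Toy model of the asm-offsets technique: compute the byte offset of a
 * nested member once, then address it as base pointer + constant, which
 * is what "mov VCPU_RCX(%_ASM_DI), %_ASM_CX" does after the patch.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

enum { REG_RCX = 1, NR_REGS = 16 };             /* illustrative numbering */

struct vcpu_arch { unsigned long regs[NR_REGS]; };
struct kvm_vcpu  { int id; struct vcpu_arch arch; };    /* stand-in fields */
struct vcpu_svm  { struct kvm_vcpu vcpu; };

/* What asm-offsets.c would emit as an assembler constant. */
#define SVM_vcpu_arch_regs offsetof(struct vcpu_svm, vcpu.arch.regs)
#define WORD_SIZE          sizeof(unsigned long)
#define VCPU_RCX           (SVM_vcpu_arch_regs + REG_RCX * WORD_SIZE)

int main(void)
{
        struct vcpu_svm svm = { 0 };
        svm.vcpu.arch.regs[REG_RCX] = 0x1234;

        /* The assembly's base+displacement load, spelled out in C. */
        unsigned long *slot =
                (unsigned long *)((uint8_t *)&svm + VCPU_RCX);
        printf("regs[RCX] via offset: %#lx\n", *slot);
        return 0;
}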
linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1748849654126356970?= X-GMAIL-MSGID: =?utf-8?q?1748849654126356970?= Since registers are reachable through vcpu_svm, and we will need to access more fields of that struct, pass it instead of the regs[] array. No functional change intended. Cc: stable@vger.kernel.org Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly") Signed-off-by: Paolo Bonzini --- arch/x86/kernel/asm-offsets.c | 6 ++++++ arch/x86/kvm/svm/svm.c | 2 +- arch/x86/kvm/svm/svm.h | 2 +- arch/x86/kvm/svm/vmenter.S | 36 +++++++++++++++++------------------ 4 files changed, 26 insertions(+), 20 deletions(-) diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c index cb50589a7102..85de7e4fe59a 100644 --- a/arch/x86/kernel/asm-offsets.c +++ b/arch/x86/kernel/asm-offsets.c @@ -20,6 +20,7 @@ #include #include #include "../kvm/vmx/vmx.h" +#include "../kvm/svm/svm.h" #ifdef CONFIG_XEN #include @@ -109,6 +110,11 @@ static void __used common(void) OFFSET(TSS_sp1, tss_struct, x86_tss.sp1); OFFSET(TSS_sp2, tss_struct, x86_tss.sp2); + if (IS_ENABLED(CONFIG_KVM_AMD)) { + BLANK(); + OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs); + } + if (IS_ENABLED(CONFIG_KVM_INTEL)) { BLANK(); OFFSET(VMX_spec_ctrl, vcpu_vmx, spec_ctrl); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index cd71f53590b2..4cfa62e66a0e 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3931,7 +3931,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu) * vmcb02 when switching vmcbs for nested virtualization. */ vmload(svm->vmcb01.pa); - __svm_vcpu_run(vmcb_pa, (unsigned long *)&vcpu->arch.regs); + __svm_vcpu_run(vmcb_pa, svm); vmsave(svm->vmcb01.pa); vmload(__sme_page_pa(sd->save_area)); diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 222856788153..5f8dfc9cd9a7 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -484,6 +484,6 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm); /* vmenter.S */ void __svm_sev_es_vcpu_run(unsigned long vmcb_pa); -void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs); +void __svm_vcpu_run(unsigned long vmcb_pa, struct vcpu_svm *svm); #endif diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S index 723f8534986c..8fac744361e5 100644 --- a/arch/x86/kvm/svm/vmenter.S +++ b/arch/x86/kvm/svm/vmenter.S @@ -8,23 +8,23 @@ #define WORD_SIZE (BITS_PER_LONG / 8) /* Intentionally omit RAX as it's context switched by hardware */ -#define VCPU_RCX __VCPU_REGS_RCX * WORD_SIZE -#define VCPU_RDX __VCPU_REGS_RDX * WORD_SIZE -#define VCPU_RBX __VCPU_REGS_RBX * WORD_SIZE +#define VCPU_RCX (SVM_vcpu_arch_regs + __VCPU_REGS_RCX * WORD_SIZE) +#define VCPU_RDX (SVM_vcpu_arch_regs + __VCPU_REGS_RDX * WORD_SIZE) +#define VCPU_RBX (SVM_vcpu_arch_regs + __VCPU_REGS_RBX * WORD_SIZE) /* Intentionally omit RSP as it's context switched by hardware */ -#define VCPU_RBP __VCPU_REGS_RBP * WORD_SIZE -#define VCPU_RSI __VCPU_REGS_RSI * WORD_SIZE -#define VCPU_RDI __VCPU_REGS_RDI * WORD_SIZE +#define VCPU_RBP (SVM_vcpu_arch_regs + __VCPU_REGS_RBP * WORD_SIZE) +#define VCPU_RSI (SVM_vcpu_arch_regs + __VCPU_REGS_RSI * WORD_SIZE) +#define VCPU_RDI (SVM_vcpu_arch_regs + __VCPU_REGS_RDI * WORD_SIZE) #ifdef CONFIG_X86_64 -#define VCPU_R8 __VCPU_REGS_R8 * WORD_SIZE -#define VCPU_R9 __VCPU_REGS_R9 * WORD_SIZE -#define VCPU_R10 __VCPU_REGS_R10 * WORD_SIZE -#define VCPU_R11 __VCPU_REGS_R11 * WORD_SIZE -#define VCPU_R12 __VCPU_REGS_R12 * WORD_SIZE 
-#define VCPU_R13 __VCPU_REGS_R13 * WORD_SIZE -#define VCPU_R14 __VCPU_REGS_R14 * WORD_SIZE -#define VCPU_R15 __VCPU_REGS_R15 * WORD_SIZE +#define VCPU_R8 (SVM_vcpu_arch_regs + __VCPU_REGS_R8 * WORD_SIZE) +#define VCPU_R9 (SVM_vcpu_arch_regs + __VCPU_REGS_R9 * WORD_SIZE) +#define VCPU_R10 (SVM_vcpu_arch_regs + __VCPU_REGS_R10 * WORD_SIZE) +#define VCPU_R11 (SVM_vcpu_arch_regs + __VCPU_REGS_R11 * WORD_SIZE) +#define VCPU_R12 (SVM_vcpu_arch_regs + __VCPU_REGS_R12 * WORD_SIZE) +#define VCPU_R13 (SVM_vcpu_arch_regs + __VCPU_REGS_R13 * WORD_SIZE) +#define VCPU_R14 (SVM_vcpu_arch_regs + __VCPU_REGS_R14 * WORD_SIZE) +#define VCPU_R15 (SVM_vcpu_arch_regs + __VCPU_REGS_R15 * WORD_SIZE) #endif .section .noinstr.text, "ax" @@ -32,7 +32,7 @@ /** * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode * @vmcb_pa: unsigned long - * @regs: unsigned long * (to guest registers) + * @svm: struct vcpu_svm * */ SYM_FUNC_START(__svm_vcpu_run) push %_ASM_BP @@ -47,13 +47,13 @@ SYM_FUNC_START(__svm_vcpu_run) #endif push %_ASM_BX - /* Save @regs. */ + /* Save @svm. */ push %_ASM_ARG2 /* Save @vmcb. */ push %_ASM_ARG1 - /* Move @regs to RAX. */ + /* Move @svm to RAX. */ mov %_ASM_ARG2, %_ASM_AX /* Load guest registers. */ @@ -89,7 +89,7 @@ SYM_FUNC_START(__svm_vcpu_run) FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE #endif - /* "POP" @regs to RAX. */ + /* "POP" @svm to RAX. */ pop %_ASM_AX /* Save all guest registers. */ From patchwork Mon Nov 7 14:54:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 16500 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp2099787wru; Mon, 7 Nov 2022 06:58:17 -0800 (PST) X-Google-Smtp-Source: AMsMyM5FrJ3OR7+z+jEIvr+ot/B96O3Yj4sDMEisFOwXrzCx5/gsiYHKtXZ1aSI13BEDALsTh+7C X-Received: by 2002:a05:6402:524e:b0:461:fa05:aff8 with SMTP id t14-20020a056402524e00b00461fa05aff8mr51013662edd.283.1667833097137; Mon, 07 Nov 2022 06:58:17 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1667833097; cv=none; d=google.com; s=arc-20160816; b=we4Qg7s4MzyKsgz7/N1WMAcHodvUTpPi7QoO3rLEXgLnxcgWXhJ+LYaOlpD9veARCw 3w2oN3vAZZPLcwl09zVWKUoAbfNII+4P6dijfcFq6wxDt9NP9i/RF3XQYPVcUfFNLh3m S4sQw14qZ6npjv8XhSxX2jRkXCcfBf5GwUQ0ip+jKM5M0q9OXHChK14ghazaaTIAPlde 1aLPj/ZtHi/u37ZrITiJ7VpldQXsOLHO00fhQ51Ko891X0wcvD1vtmPfp51g4wS1HtPD TRVQodmcsjUgMZfcPnI4vb8SY1gVN8Xg76kZkcyRwEAupiO/3tR09F2NmVEBNj4fBrXI LZ8g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=IIKDpG4pQW2B/ISB4mxgM/NPPKfDKLb0DiYWXIkbBYk=; b=I4lqNRtpY7psoRvLvkilHZUM8gdFIsXSZSl6B3k7p7xGeasEJq8H87eouoCt6xW2sz OVQUkS8b2xTwqHehvBuzr1ScB10+n8C2AnycVX+u2/lI3wTQf1lK+4jFvDtYV3xNby3Y 4ndZzqEKTbMvRncuY7U9nTl4djL2Z48OK1WUf/5FjpcxZITDTWFy1RL/PPM7Z8fXP/Nb CNwSrQAt5RYiFnnGa1FefmkMe+BpaAYc7c270ZZjNEDP6THPZxCfjw904pNNbf50icsn kBAmUCmGiEHockBsFX/MwmJ+F30wpWFdVO7Lu6l97BS+QM34zlbbe8esb6hQZcmJxdOT 3rzg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=crim5H++; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com, stable@vger.kernel.org
Subject: [PATCH 3/8] KVM: SVM: adjust register allocation for __svm_vcpu_run
Date: Mon, 7 Nov 2022 09:54:31 -0500
Message-Id: <20221107145436.276079-4-pbonzini@redhat.com>
In-Reply-To: <20221107145436.276079-1-pbonzini@redhat.com>
References: <20221107145436.276079-1-pbonzini@redhat.com>
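The message below keeps the @svm pointer in %rdi so that %rax stays free for svm->vmcb01.pa; the price is that the guest's RDI must be the very last register loaded, because that load overwrites the pointer itself. A toy C model of that ordering constraint follows, with plain variables standing in for registers (LP64 assumed, types purely illustrative).

/* Toy model: once the "register" holding the vcpu pointer is reloaded
 * with the guest's RDI value, the pointer is gone, so every other guest
 * register must be read through it first.
 */
#include <stdio.h>
#include <stdint.h>

struct guest_regs { unsigned long rcx, rdx, rbx, rdi; };
struct vcpu { struct guest_regs regs; };

int main(void)
{
        struct vcpu v = { .regs = { 1, 2, 3, 4 } };

        unsigned long rdi = (unsigned long)(uintptr_t)&v;   /* pointer lives here */
        unsigned long rcx, rdx, rbx;

        struct vcpu *p = (struct vcpu *)(uintptr_t)rdi;
        rcx = p->regs.rcx;      /* safe: the pointer is still intact */
        rdx = p->regs.rdx;
        rbx = p->regs.rbx;
        rdi = p->regs.rdi;      /* must be last: this clobbers the pointer */

        printf("rcx=%lu rdx=%lu rbx=%lu rdi=%lu\n", rcx, rdx, rbx, rdi);
        return 0;
}

This is why the patch moves the VCPU_RDI load after all the other guest register loads.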
linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1748849757557608215?= X-GMAIL-MSGID: =?utf-8?q?1748849757557608215?= In preparation for moving vmload/vmsave to __svm_vcpu_run, keep the pointer to the struct vcpu_svm in %rdi. This way it is possible to load svm->vmcb01.pa in %rax without clobbering the pointer to svm itself. No functional change intended. Cc: stable@vger.kernel.org Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly") Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/vmenter.S | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S index 8fac744361e5..dc558d0a589e 100644 --- a/arch/x86/kvm/svm/vmenter.S +++ b/arch/x86/kvm/svm/vmenter.S @@ -53,29 +53,29 @@ SYM_FUNC_START(__svm_vcpu_run) /* Save @vmcb. */ push %_ASM_ARG1 - /* Move @svm to RAX. */ - mov %_ASM_ARG2, %_ASM_AX + /* Move @svm to RDI. */ + mov %_ASM_ARG2, %_ASM_DI + + /* "POP" @vmcb to RAX. */ + pop %_ASM_AX /* Load guest registers. */ - mov VCPU_RCX(%_ASM_AX), %_ASM_CX - mov VCPU_RDX(%_ASM_AX), %_ASM_DX - mov VCPU_RBX(%_ASM_AX), %_ASM_BX - mov VCPU_RBP(%_ASM_AX), %_ASM_BP - mov VCPU_RSI(%_ASM_AX), %_ASM_SI - mov VCPU_RDI(%_ASM_AX), %_ASM_DI + mov VCPU_RCX(%_ASM_DI), %_ASM_CX + mov VCPU_RDX(%_ASM_DI), %_ASM_DX + mov VCPU_RBX(%_ASM_DI), %_ASM_BX + mov VCPU_RBP(%_ASM_DI), %_ASM_BP + mov VCPU_RSI(%_ASM_DI), %_ASM_SI #ifdef CONFIG_X86_64 - mov VCPU_R8 (%_ASM_AX), %r8 - mov VCPU_R9 (%_ASM_AX), %r9 - mov VCPU_R10(%_ASM_AX), %r10 - mov VCPU_R11(%_ASM_AX), %r11 - mov VCPU_R12(%_ASM_AX), %r12 - mov VCPU_R13(%_ASM_AX), %r13 - mov VCPU_R14(%_ASM_AX), %r14 - mov VCPU_R15(%_ASM_AX), %r15 + mov VCPU_R8 (%_ASM_DI), %r8 + mov VCPU_R9 (%_ASM_DI), %r9 + mov VCPU_R10(%_ASM_DI), %r10 + mov VCPU_R11(%_ASM_DI), %r11 + mov VCPU_R12(%_ASM_DI), %r12 + mov VCPU_R13(%_ASM_DI), %r13 + mov VCPU_R14(%_ASM_DI), %r14 + mov VCPU_R15(%_ASM_DI), %r15 #endif - - /* "POP" @vmcb to RAX. 
*/ - pop %_ASM_AX + mov VCPU_RDI(%_ASM_DI), %_ASM_DI /* Enter guest mode */ sti From patchwork Mon Nov 7 14:54:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 16502 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp2099879wru; Mon, 7 Nov 2022 06:58:27 -0800 (PST) X-Google-Smtp-Source: AMsMyM418CsBS1R/uGsYo8GOIS2BSrPgIjM7OsL5U4zzC34+OnvMeRgQT8WXYTf7z3CAEqbVFXzp X-Received: by 2002:a05:6402:3223:b0:461:8635:e5c with SMTP id g35-20020a056402322300b0046186350e5cmr51244685eda.303.1667833107471; Mon, 07 Nov 2022 06:58:27 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1667833107; cv=none; d=google.com; s=arc-20160816; b=Wfos3eX5oXrdbtvXMxYQcyIfmHFi7bmHEqj19Aw5jRHqugrwN6FCR4tR4kyVCJ81rP AotWpq2xykv2MnG2hJYVsQQpOoO1UReNbSKfjKsiHt7nS0ZtXVLf+81B4iZAebHT/YAm QUAHT5iy04JwIlD1RGEvk6HOn10hlGCNGRdAwMbMq/XHdmK3p4dd8/seSP1Mkv/wjFts C3G/uDxKhbstJp/YsNnmV9yl76pgEpAlQfjKhu+1k/rLro3Y1WG4SWAQTbdsTH/brq+y buHJi0L0uPPI0dCfdJiGtJgpRJ+FxKBl3MQh60QiR8/Fh83e+iK4rjfvK0ng06MKAmze 2+Vw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=eLn2B2UMIE/ykgitRBS/Bhc4JymhVepTpG9QJmEukqo=; b=R8SNgLMP4zKB5h5eNgABRcrU/UnsTO/lpmdnLAdOkLy1pyxtQhKcdmvktk80eT8wn0 t+N7SIGAvgIJ3bIijkzSg0zMwuGTTIs4hgidrkMSOL7F+VbeOlOfJSyyHxpxeYx+YM7r IYHrT6VNgReXB5fKjnKFh6ZGlVSLxSN1qcr84w89RjUcsA04W9dgELuT/EEhD0X6vdL0 jQtB1OCyfaq2EXZix9tTYn78h1A5Nh2ye38jLHMTmPZh87ixSFUoVGrkSbestfug0dOY BWPGoFPKK3kDaIPGcEN08QV3Nj3OBWJXmemplfTSoYKjmaNpFS39ebAk/iM9nSaa15z3 t6RA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=XU91BJAZ; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com, stable@vger.kernel.org
Subject: [PATCH 4/8] KVM: SVM: move guest vmsave/vmload to assembly
Date: Mon, 7 Nov 2022 09:54:32 -0500
Message-Id: <20221107145436.276079-5-pbonzini@redhat.com>
In-Reply-To: <20221107145436.276079-1-pbonzini@redhat.com>
References: <20221107145436.276079-1-pbonzini@redhat.com>
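The message below moves the guest vmload/vmrun/vmsave into vmenter.S and gives each instruction that can fault its own exception-table entry (1b->10b for vmload, 3b->30b for vmrun, 5b->50b for vmsave), with every fixup re-checking kvm_rebooting before resuming. A toy C model of that per-instruction fixup lookup follows; real _ASM_EXTABLE entries are relocated address pairs resolved by the kernel's fixup code, not small integers, so this only illustrates the numbering scheme.

/* Toy model: map a faulting "label" to its dedicated fixup "label",
 * mirroring the three _ASM_EXTABLE entries added by the patch.
 */
#include <stdio.h>
#include <stddef.h>

struct extable_entry { int insn_label; int fixup_label; };

static const struct extable_entry extable[] = {
        { 1, 10 },      /* vmload of vmcb01        -> fixup 10 */
        { 3, 30 },      /* vmrun of current VMCB   -> fixup 30 */
        { 5, 50 },      /* vmsave of vmcb01        -> fixup 50 */
};

static int find_fixup(int faulting_label)
{
        for (size_t i = 0; i < sizeof(extable) / sizeof(extable[0]); i++)
                if (extable[i].insn_label == faulting_label)
                        return extable[i].fixup_label;
        return -1;      /* no entry: the fault would be fatal */
}

int main(void)
{
        printf("fault at 3 (vmrun):  fixup %d\n", find_fixup(3));
        printf("fault at 5 (vmsave): fixup %d\n", find_fixup(5));
        return 0;
}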
linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1748849768399826381?= X-GMAIL-MSGID: =?utf-8?q?1748849768399826381?= FILL_RETURN_BUFFER can access percpu data, therefore vmload of the host save area must be executed first. First of all, move the VMCB vmsave/vmload to assembly. The idea on how to number the exception tables is stolen from a prototype patch by Peter Zijlstra. Cc: stable@vger.kernel.org Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly") Link: Signed-off-by: Paolo Bonzini --- arch/x86/kernel/asm-offsets.c | 2 ++ arch/x86/kvm/svm/svm.c | 9 ------- arch/x86/kvm/svm/vmenter.S | 50 +++++++++++++++++++++++++++-------- 3 files changed, 41 insertions(+), 20 deletions(-) diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c index 85de7e4fe59a..f01293a1e594 100644 --- a/arch/x86/kernel/asm-offsets.c +++ b/arch/x86/kernel/asm-offsets.c @@ -113,6 +113,8 @@ static void __used common(void) if (IS_ENABLED(CONFIG_KVM_AMD)) { BLANK(); OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs); + OFFSET(SVM_vmcb01, vcpu_svm, vmcb01); + OFFSET(KVM_VMCB_pa, kvm_vmcb_info, pa); } if (IS_ENABLED(CONFIG_KVM_INTEL)) { diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 4cfa62e66a0e..ae65cdcab660 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3924,16 +3924,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu) } else { struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu); - /* - * Use a single vmcb (vmcb01 because it's always valid) for - * context switching guest state via VMLOAD/VMSAVE, that way - * the state doesn't need to be copied between vmcb01 and - * vmcb02 when switching vmcbs for nested virtualization. - */ - vmload(svm->vmcb01.pa); __svm_vcpu_run(vmcb_pa, svm); - vmsave(svm->vmcb01.pa); - vmload(__sme_page_pa(sd->save_area)); } diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S index dc558d0a589e..4709bc8868d7 100644 --- a/arch/x86/kvm/svm/vmenter.S +++ b/arch/x86/kvm/svm/vmenter.S @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 */ #include #include +#include #include #include #include @@ -27,6 +28,8 @@ #define VCPU_R15 (SVM_vcpu_arch_regs + __VCPU_REGS_R15 * WORD_SIZE) #endif +#define SVM_vmcb01_pa (SVM_vmcb01 + KVM_VMCB_pa) + .section .noinstr.text, "ax" /** @@ -56,6 +59,16 @@ SYM_FUNC_START(__svm_vcpu_run) /* Move @svm to RDI. */ mov %_ASM_ARG2, %_ASM_DI + /* + * Use a single vmcb (vmcb01 because it's always valid) for + * context switching guest state via VMLOAD/VMSAVE, that way + * the state doesn't need to be copied between vmcb01 and + * vmcb02 when switching vmcbs for nested virtualization. + */ + mov SVM_vmcb01_pa(%_ASM_DI), %_ASM_AX +1: vmload %_ASM_AX +2: + /* "POP" @vmcb to RAX. */ pop %_ASM_AX @@ -80,16 +93,11 @@ SYM_FUNC_START(__svm_vcpu_run) /* Enter guest mode */ sti -1: vmrun %_ASM_AX - -2: cli - -#ifdef CONFIG_RETPOLINE - /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ - FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE -#endif +3: vmrun %_ASM_AX +4: + cli - /* "POP" @svm to RAX. */ + /* Pop @svm to RAX while it's the only available register. */ pop %_ASM_AX /* Save all guest registers. */ @@ -110,6 +118,18 @@ SYM_FUNC_START(__svm_vcpu_run) mov %r15, VCPU_R15(%_ASM_AX) #endif + /* @svm can stay in RDI from now on. 
*/ + mov %_ASM_AX, %_ASM_DI + + mov SVM_vmcb01_pa(%_ASM_DI), %_ASM_AX +5: vmsave %_ASM_AX +6: + +#ifdef CONFIG_RETPOLINE + /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ + FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE +#endif + /* * Mitigate RETBleed for AMD/Hygon Zen uarch. RET should be * untrained as soon as we exit the VM and are back to the @@ -159,11 +179,19 @@ SYM_FUNC_START(__svm_vcpu_run) pop %_ASM_BP RET -3: cmpb $0, kvm_rebooting +10: cmpb $0, kvm_rebooting jne 2b ud2 +30: cmpb $0, kvm_rebooting + jne 4b + ud2 +50: cmpb $0, kvm_rebooting + jne 6b + ud2 - _ASM_EXTABLE(1b, 3b) + _ASM_EXTABLE(1b, 10b) + _ASM_EXTABLE(3b, 30b) + _ASM_EXTABLE(5b, 50b) SYM_FUNC_END(__svm_vcpu_run) From patchwork Mon Nov 7 14:54:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 16505 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp2101110wru; Mon, 7 Nov 2022 07:00:44 -0800 (PST) X-Google-Smtp-Source: AMsMyM6kZIdEUTG1V2M0lkv+v9qs1zbCFaL+1kvQkdd8yPn3Xd49WBsQdbtMhni14RMeGO0h0jHa X-Received: by 2002:a17:90b:38ca:b0:214:2cff:fbb5 with SMTP id nn10-20020a17090b38ca00b002142cfffbb5mr29881021pjb.79.1667833244534; Mon, 07 Nov 2022 07:00:44 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1667833244; cv=none; d=google.com; s=arc-20160816; b=YbrBY5jNIaRNiBIFg7lGgD/jzC12A5vROnoJo1VgxDHPLWKaS1PEIbhDgAhhP4VPg7 CnDLHGSvRigR4rMsKXFjHktyfIeHEVDWpGRIEER11j7pEj+L6naw/K8EWlmSkAImh9qf /UQTDwkKB5B2x/8xFDSJBJtKQvU13JZgrvT0uVJ83TJJZvdy7At/Gf1cvOpNGhwbf3aD CgizuyqI44lHDYtKpYm4G8WifQ7bZ4gM0ODUjU4evuBjMooY5m3oW1WRPnFxnEwqa9gR drO8PDAVOLT7a9X3Fmc7uA4J7PuVPjeqR/GdwuzwCFW7L7joVyWOcPBDkuSgtRXK/zyW e6OA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=a52I+9dBo63DVUvruE2JWcTU/NQRvq6IWFpQ0EqB1i0=; b=pI7GtQYvRQA0LDQKCWbmvSCAcrVBAaY5np16Rh57JvZYS17ijvZyAwrvNwfgpKD/Y0 LXGVEetsO+sDSMNeMq0F4MCeBYxfvN9tACpbw0SmgWdxTVnwxwvBXUoHloNlfY/DiRow UZvkt21eTrb0tkM+GIF/wXbRB8IUDq1ybJhDAuAdIcPWh91IC7cxatfqxVV7ZSTcKX0a 3ZmnwIvyBYfTYYM34k4IdZyFiGc34t+6xRvmwRjw4tnKv2GbCDUESXEq4hq+TDl3Dr4p cA00aF7ky6k7Z+V4LIwqPfxAaD2R551OFbuy0Ff07lEBsu1ajaTTiySszypZ8UO6eIiF yjPg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=RRBZB6On; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com, stable@vger.kernel.org
Subject: [PATCH 5/8] KVM: SVM: retrieve VMCB from assembly
Date: Mon, 7 Nov 2022 09:54:33 -0500
Message-Id: <20221107145436.276079-6-pbonzini@redhat.com>
In-Reply-To: <20221107145436.276079-1-pbonzini@redhat.com>
References: <20221107145436.276079-1-pbonzini@redhat.com>
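The message below drops the vmcb_pa argument entirely and lets the assembly chase svm->current_vmcb->pa itself, so __svm_vcpu_run() keeps a single argument that is passed in a register on both 32-bit and 64-bit builds. Here is a user-space sketch of that two-step pointer chase, reusing the offset names the patch adds to asm-offsets.c; the struct bodies are simplified stand-ins.

/* Toy model of:
 *      mov SVM_current_vmcb(%_ASM_DI), %_ASM_AX
 *      mov KVM_VMCB_pa(%_ASM_AX), %_ASM_AX
 * i.e. two offset-based loads instead of a separate vmcb_pa argument.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct kvm_vmcb_info { void *ptr; unsigned long pa; };         /* stand-in */
struct vcpu_svm { struct kvm_vmcb_info *current_vmcb; };       /* stand-in */

#define SVM_current_vmcb offsetof(struct vcpu_svm, current_vmcb)
#define KVM_VMCB_pa      offsetof(struct kvm_vmcb_info, pa)

int main(void)
{
        struct kvm_vmcb_info info = { .pa = 0xabc000 };
        struct vcpu_svm svm = { .current_vmcb = &info };

        /* First load: svm->current_vmcb */
        struct kvm_vmcb_info *vmcb =
                *(struct kvm_vmcb_info **)((uint8_t *)&svm + SVM_current_vmcb);
        /* Second load: vmcb->pa */
        unsigned long pa =
                *(unsigned long *)((uint8_t *)vmcb + KVM_VMCB_pa);

        printf("current VMCB pa: %#lx\n", pa);
        return 0;
}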

This is needed in order to keep the number of arguments to 3 or less,
so that they are all passed in registers on either 32-bit or 64-bit
builds.  It is not strictly necessary for __svm_sev_es_vcpu_run, but
staying consistent is a good idea since it makes __svm_sev_es_vcpu_run
a stripped version of __svm_vcpu_run.

No functional change intended.

Cc: stable@vger.kernel.org
Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly")
Signed-off-by: Paolo Bonzini
---
 arch/x86/kernel/asm-offsets.c |  1 +
 arch/x86/kvm/svm/svm.c        |  5 ++---
 arch/x86/kvm/svm/svm.h        |  4 ++--
 arch/x86/kvm/svm/vmenter.S    | 20 ++++++++++----------
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index f01293a1e594..69d1fed51086 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -114,6 +114,7 @@ static void __used common(void)
 	BLANK();
 	OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs);
 	OFFSET(SVM_vmcb01, vcpu_svm, vmcb01);
+	OFFSET(SVM_current_vmcb, vcpu_svm, current_vmcb);
 	OFFSET(KVM_VMCB_pa, kvm_vmcb_info, pa);
 }
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ae65cdcab660..550a364be8d3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3915,16 +3915,15 @@ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	unsigned long vmcb_pa = svm->current_vmcb->pa;
 
 	guest_state_enter_irqoff();
 
 	if (sev_es_guest(vcpu->kvm)) {
-		__svm_sev_es_vcpu_run(vmcb_pa);
+		__svm_sev_es_vcpu_run(svm);
 	} else {
 		struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
 
-		__svm_vcpu_run(vmcb_pa, svm);
+		__svm_vcpu_run(svm);
 
 		vmload(__sme_page_pa(sd->save_area));
 	}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5f8dfc9cd9a7..c5b8ec370108 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -483,7 +483,7 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 
 /* vmenter.S */
 
-void __svm_sev_es_vcpu_run(unsigned long vmcb_pa);
-void __svm_vcpu_run(unsigned long vmcb_pa, struct vcpu_svm *svm);
+void __svm_sev_es_vcpu_run(struct vcpu_svm *svm);
+void __svm_vcpu_run(struct vcpu_svm *svm);
 
 #endif
diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index 4709bc8868d7..9738ce41fac9 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -34,7 +34,6 @@
 
 /**
  * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
- * @vmcb_pa: unsigned long
  * @svm: struct vcpu_svm *
  */
 SYM_FUNC_START(__svm_vcpu_run)
@@ -51,13 +50,12 @@ SYM_FUNC_START(__svm_vcpu_run)
 	push %_ASM_BX
 
 	/* Save @svm. */
-	push %_ASM_ARG2
-
-	/* Save @vmcb. */
 	push %_ASM_ARG1
 
+.ifnc _ASM_ARG1, _ASM_DI
 	/* Move @svm to RDI. */
-	mov %_ASM_ARG2, %_ASM_DI
+	mov %_ASM_ARG1, %_ASM_DI
+.endif
 
 	/*
 	 * Use a single vmcb (vmcb01 because it's always valid) for
@@ -69,8 +67,9 @@ SYM_FUNC_START(__svm_vcpu_run)
 1:	vmload %_ASM_AX
 2:
 
-	/* "POP" @vmcb to RAX. */
-	pop %_ASM_AX
+	/* Get svm->current_vmcb->pa into RAX. */
+	mov SVM_current_vmcb(%_ASM_DI), %_ASM_AX
+	mov KVM_VMCB_pa(%_ASM_AX), %_ASM_AX
 
 	/* Load guest registers. */
 	mov VCPU_RCX(%_ASM_DI), %_ASM_CX
@@ -197,7 +196,7 @@ SYM_FUNC_END(__svm_vcpu_run)
 
 /**
  * __svm_sev_es_vcpu_run - Run a SEV-ES vCPU via a transition to SVM guest mode
- * @vmcb_pa: unsigned long
+ * @svm: struct vcpu_svm *
  */
 SYM_FUNC_START(__svm_sev_es_vcpu_run)
 	push %_ASM_BP
@@ -212,8 +211,9 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
 #endif
 	push %_ASM_BX
 
-	/* Move @vmcb to RAX. */
-	mov %_ASM_ARG1, %_ASM_AX
+	/* Get svm->current_vmcb->pa into RAX. */
+	mov SVM_current_vmcb(%_ASM_ARG1), %_ASM_AX
+	mov KVM_VMCB_pa(%_ASM_AX), %_ASM_AX
 
 	/* Enter guest mode */
 	sti

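A rough user-space sketch of the pointer chase the two new MOV instructions
perform, for readers following the asm-offsets plumbing.  The struct layouts
below are simplified stand-ins (the real vcpu_svm and kvm_vmcb_info carry many
more fields), so the printed offsets are illustrative only; the point is that
asm-offsets.c exports exactly these offsetof() constants so the assembly can
dereference svm->current_vmcb->pa itself instead of receiving it as an
argument.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the real structures (illustration only). */
struct kvm_vmcb_info {
	void *ptr;
	unsigned long pa;		/* KVM_VMCB_pa in asm-offsets.c */
};

struct vcpu_svm {
	struct kvm_vmcb_info vmcb01;
	struct kvm_vmcb_info *current_vmcb;	/* SVM_current_vmcb */
};

int main(void)
{
	struct kvm_vmcb_info vmcb = { .pa = 0x123000 };
	struct vcpu_svm svm = { .current_vmcb = &vmcb };

	/*
	 * The assembly equivalent, using the generated constants, is:
	 *   mov SVM_current_vmcb(%rdi), %rax
	 *   mov KVM_VMCB_pa(%rax), %rax
	 */
	printf("SVM_current_vmcb = %zu\n", offsetof(struct vcpu_svm, current_vmcb));
	printf("KVM_VMCB_pa      = %zu\n", offsetof(struct kvm_vmcb_info, pa));
	printf("vmcb pa          = %#lx\n", svm.current_vmcb->pa);
	return 0;
}

With the physical address recomputed inside the function, the C caller only
has to pass @svm, which is what keeps the argument count at three or below
once the later patches in this series add @hsave_pa and
@spec_ctrl_intercepted.
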
From patchwork Mon Nov 7 14:54:34 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 16506
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com, stable@vger.kernel.org
Subject: [PATCH 6/8] KVM: SVM: restore host save area from assembly
Date: Mon, 7 Nov 2022 09:54:34 -0500
Message-Id: <20221107145436.276079-7-pbonzini@redhat.com>
In-Reply-To: <20221107145436.276079-1-pbonzini@redhat.com>
References: <20221107145436.276079-1-pbonzini@redhat.com>

This is needed so that FILL_RETURN_BUFFER has access to the
percpu area via the GS segment base.

Cc: stable@vger.kernel.org
Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly")
Reported-by: Nathan Chancellor
Analyzed-by: Andrew Cooper
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/svm.c     |  3 +--
 arch/x86/kvm/svm/svm.h     |  2 +-
 arch/x86/kvm/svm/svm_ops.h |  5 -----
 arch/x86/kvm/svm/vmenter.S | 13 +++++++++++++
 4 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 550a364be8d3..381c7dcffe25 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3923,8 +3923,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 	} else {
 		struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
 
-		__svm_vcpu_run(svm);
-		vmload(__sme_page_pa(sd->save_area));
+		__svm_vcpu_run(svm, __sme_page_pa(sd->save_area));
 	}
 
 	guest_state_exit_irqoff();
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index c5b8ec370108..99410651f2a5 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -484,6 +484,6 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 /* vmenter.S */
 
 void __svm_sev_es_vcpu_run(struct vcpu_svm *svm);
-void __svm_vcpu_run(struct vcpu_svm *svm);
+void __svm_vcpu_run(struct vcpu_svm *svm, unsigned long hsave_pa);
 
 #endif
diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
index 9430d6437c9f..36c8af87a707 100644
--- a/arch/x86/kvm/svm/svm_ops.h
+++ b/arch/x86/kvm/svm/svm_ops.h
@@ -61,9 +61,4 @@ static __always_inline void vmsave(unsigned long pa)
 	svm_asm1(vmsave, "a" (pa), "memory");
 }
 
-static __always_inline void vmload(unsigned long pa)
-{
-	svm_asm1(vmload, "a" (pa), "memory");
-}
-
 #endif /* __KVM_X86_SVM_OPS_H */
diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index 9738ce41fac9..45a4bd002494 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -35,6 +35,7 @@
 /**
  * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
  * @svm: struct vcpu_svm *
+ * @hsave_pa: unsigned long
  */
 SYM_FUNC_START(__svm_vcpu_run)
 	push %_ASM_BP
@@ -49,6 +50,9 @@ SYM_FUNC_START(__svm_vcpu_run)
 #endif
 	push %_ASM_BX
 
+	/* @hsave_pa is needed last after vmexit, save it first. */
+	push %_ASM_ARG2
+
 	/* Save @svm. */
 	push %_ASM_ARG1
 
@@ -124,6 +128,11 @@ SYM_FUNC_START(__svm_vcpu_run)
 5:	vmsave %_ASM_AX
 6:
 
+	/* Pop @hsave_pa and restore GSBASE, allowing access to percpu data. */
+	pop %_ASM_AX
+7:	vmload %_ASM_AX
+8:
+
 #ifdef CONFIG_RETPOLINE
 	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
 	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
@@ -187,10 +196,14 @@ SYM_FUNC_START(__svm_vcpu_run)
 50:	cmpb $0, kvm_rebooting
 	jne 6b
 	ud2
+70:	cmpb $0, kvm_rebooting
+	jne 8b
+	ud2
 
 	_ASM_EXTABLE(1b, 10b)
 	_ASM_EXTABLE(3b, 30b)
 	_ASM_EXTABLE(5b, 50b)
+	_ASM_EXTABLE(7b, 70b)
 
 SYM_FUNC_END(__svm_vcpu_run)

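To make the ordering constraint concrete, here is a tiny toy model in plain C
(every name in it is invented for illustration; none of this is kernel API).
The invariant the patch establishes is that the VMLOAD of the host save area,
which brings back the host state that per-CPU accesses depend on via the GS
segment base, happens before any code that needs that state, such as the
RSB stuffing done right after VM-exit; previously the VMLOAD only happened
after __svm_vcpu_run() had already returned to C.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model, illustration only: the flag stands in for "the host save area
 * (and therefore the GS base used for percpu data) has been VMLOADed".
 */
static bool host_state_loaded;

static void vmload_host_save_area(void)
{
	host_state_loaded = true;
}

static void fill_return_buffer(void)
{
	/* Stands in for code that needs percpu data via the GS base. */
	assert(host_state_loaded);
	puts("RSB stuffed with a valid GS base");
}

int main(void)
{
	/* Order enforced by this patch: VMLOAD in asm, then RSB stuffing. */
	vmload_host_save_area();
	fill_return_buffer();
	return 0;
}
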
From patchwork Mon Nov 7 14:54:35 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 16507
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com, stable@vger.kernel.org
Subject: [PATCH 7/8] KVM: SVM: move MSR_IA32_SPEC_CTRL save/restore to assembly
Date: Mon, 7 Nov 2022 09:54:35 -0500
Message-Id: <20221107145436.276079-8-pbonzini@redhat.com>
In-Reply-To: <20221107145436.276079-1-pbonzini@redhat.com>
References: <20221107145436.276079-1-pbonzini@redhat.com>

Restoration of the host IA32_SPEC_CTRL value is probably too late
with respect to the return thunk training sequence.

With respect to the user/kernel boundary, AMD says, "If software chooses
to toggle STIBP (e.g., set STIBP on kernel entry, and clear it on kernel
exit), software should set STIBP to 1 before executing the return thunk
training sequence."  I assume the same requirements apply to the
guest/host boundary.  The return thunk training sequence is in vmenter.S,
quite close to the VM-exit.  On hosts without V_SPEC_CTRL, however, the
host's IA32_SPEC_CTRL value is not restored until much later.

To avoid this, move the restoration of host SPEC_CTRL to assembly and,
for consistency, move the restoration of the guest SPEC_CTRL as well.
This is not particularly difficult, apart from some care to cover both
32- and 64-bit, and to share code between SEV-ES and normal vmentry.

Cc: stable@vger.kernel.org
Fixes: a149180fbcf3 ("x86: Add magic AMD return-thunk")
Suggested-by: Jim Mattson
Signed-off-by: Paolo Bonzini
---
 arch/x86/kernel/asm-offsets.c |  1 +
 arch/x86/kernel/cpu/bugs.c    | 13 ++---
 arch/x86/kvm/svm/svm.c        | 38 ++++++---------
 arch/x86/kvm/svm/svm.h        |  4 +-
 arch/x86/kvm/svm/vmenter.S    | 92 ++++++++++++++++++++++++++++++++++-
 5 files changed, 111 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 69d1fed51086..d0bd68af0a5a 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -115,6 +115,7 @@ static void __used common(void)
 	OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs);
 	OFFSET(SVM_vmcb01, vcpu_svm, vmcb01);
 	OFFSET(SVM_current_vmcb, vcpu_svm, current_vmcb);
+	OFFSET(SVM_spec_ctrl, vcpu_svm, spec_ctrl);
 	OFFSET(KVM_VMCB_pa, kvm_vmcb_info, pa);
 }
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index da7c361f47e0..6ec0b7ce7453 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -196,22 +196,15 @@ void __init check_bugs(void)
 }
 
 /*
- * NOTE: This function is *only* called for SVM.  VMX spec_ctrl handling is
- * done in vmenter.S.
+ * NOTE: This function is *only* called for SVM, since Intel uses
+ * MSR_IA32_SPEC_CTRL for SSBD.
  */
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 {
-	u64 msrval, guestval = guest_spec_ctrl, hostval = spec_ctrl_current();
+	u64 guestval, hostval;
 	struct thread_info *ti = current_thread_info();
 
-	if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
-		if (hostval != guestval) {
-			msrval = setguest ? guestval : hostval;
-			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
-		}
-	}
-
 	/*
 	 * If SSBD is not handled in MSR_SPEC_CTRL on AMD, update
 	 * MSR_AMD64_LS_CFG or MSR_VIRT_SPEC_CTRL if supported.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 381c7dcffe25..31aa158a2e10 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -731,6 +731,15 @@ static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
 	u32 offset;
 	u32 *msrpm;
 
+	/*
+	 * For non-nested case:
+	 * If the L01 MSR bitmap does not intercept the MSR, then we need to
+	 * save it.
+	 *
+	 * For nested case:
+	 * If the L02 MSR bitmap does not intercept the MSR, then we need to
+	 * save it.
+	 */
 	msrpm = is_guest_mode(vcpu) ? to_svm(vcpu)->nested.msrpm:
 				      to_svm(vcpu)->msrpm;
 
@@ -3912,18 +3921,19 @@ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 	return EXIT_FASTPATH_NONE;
 }
 
-static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_intercepted)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	guest_state_enter_irqoff();
 
 	if (sev_es_guest(vcpu->kvm)) {
-		__svm_sev_es_vcpu_run(svm);
+		__svm_sev_es_vcpu_run(svm, spec_ctrl_intercepted);
 	} else {
 		struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
 
-		__svm_vcpu_run(svm, __sme_page_pa(sd->save_area));
+		__svm_vcpu_run(svm, __sme_page_pa(sd->save_area),
+			       spec_ctrl_intercepted);
 	}
 
 	guest_state_exit_irqoff();
@@ -3932,6 +3942,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	bool spec_ctrl_intercepted = msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL);
 
 	trace_kvm_entry(vcpu);
 
@@ -3990,26 +4001,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	 * being speculatively taken.
 	 */
 	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
 		x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
 
-	svm_vcpu_enter_exit(vcpu);
-
-	/*
-	 * We do not use IBRS in the kernel. If this vCPU has used the
-	 * SPEC_CTRL MSR it may have left it on; save the value and
-	 * turn it off. This is much more efficient than blindly adding
-	 * it to the atomic save/restore list. Especially as the former
-	 * (Saving guest MSRs on vmexit) doesn't even exist in KVM.
-	 *
-	 * For non-nested case:
-	 * If the L01 MSR bitmap does not intercept the MSR, then we need to
-	 * save it.
-	 *
-	 * For nested case:
-	 * If the L02 MSR bitmap does not intercept the MSR, then we need to
-	 * save it.
-	 */
-	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL) &&
-	    unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
-		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+	svm_vcpu_enter_exit(vcpu, spec_ctrl_intercepted);
 
 	if (!sev_es_guest(vcpu->kvm))
 		reload_tss(vcpu);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 99410651f2a5..9d940d8736f0 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -483,7 +483,7 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 
 /* vmenter.S */
 
-void __svm_sev_es_vcpu_run(struct vcpu_svm *svm);
-void __svm_vcpu_run(struct vcpu_svm *svm, unsigned long hsave_pa);
+void __svm_sev_es_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted);
+void __svm_vcpu_run(struct vcpu_svm *svm, unsigned long hsave_pa, bool spec_ctrl_intercepted);
 
 #endif
diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index 45a4bd002494..9e381386ffdc 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -32,10 +32,64 @@
 
 .section .noinstr.text, "ax"
 
+.macro RESTORE_GUEST_SPEC_CTRL
+	/* No need to do anything if SPEC_CTRL is unset or V_SPEC_CTRL is set */
+	ALTERNATIVE_2 "jmp 999f", \
+		"", X86_FEATURE_MSR_SPEC_CTRL, \
+		"jmp 999f", X86_FEATURE_V_SPEC_CTRL
+
+	/*
+	 * SPEC_CTRL handling: if the guest's SPEC_CTRL value differs from the
+	 * host's, write the MSR.
+	 *
+	 * IMPORTANT: To avoid RSB underflow attacks and any other nastiness,
+	 * there must not be any returns or indirect branches between this code
+	 * and vmentry.
+	 */
+	movl SVM_spec_ctrl(%_ASM_DI), %eax
+	cmp PER_CPU_VAR(x86_spec_ctrl_current), %eax
+	je 999f
+	mov $MSR_IA32_SPEC_CTRL, %ecx
+	xor %edx, %edx
+	wrmsr
+999:
+
+.endm
+
+.macro RESTORE_HOST_SPEC_CTRL
+	/* No need to do anything if SPEC_CTRL is unset or V_SPEC_CTRL is set */
+	ALTERNATIVE_2 "jmp 999f", \
+		"", X86_FEATURE_MSR_SPEC_CTRL, \
+		"jmp 999f", X86_FEATURE_V_SPEC_CTRL
+
+	mov $MSR_IA32_SPEC_CTRL, %ecx
+
+	/*
+	 * Load the value that the guest had written into MSR_IA32_SPEC_CTRL,
+	 * if it was not intercepted during guest execution.
+	 */
+	cmpb $0, (%_ASM_SP)
+	jnz 998f
+	rdmsr
+	movl %eax, SVM_spec_ctrl(%_ASM_DI)
+998:
+
+	/* Now restore the host value of the MSR if different from the guest's. */
+	movl PER_CPU_VAR(x86_spec_ctrl_current), %eax
+	cmp SVM_spec_ctrl(%_ASM_DI), %eax
+	je 999f
+	xor %edx, %edx
+	wrmsr
+999:
+
+.endm
+
+
 /**
  * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
  * @svm: struct vcpu_svm *
  * @hsave_pa: unsigned long
+ * @spec_ctrl_intercepted: bool
  */
 SYM_FUNC_START(__svm_vcpu_run)
 	push %_ASM_BP
@@ -50,7 +104,12 @@ SYM_FUNC_START(__svm_vcpu_run)
 #endif
 	push %_ASM_BX
 
-	/* @hsave_pa is needed last after vmexit, save it first. */
+	/*
+	 * Both @spec_ctrl_intercepted and @hsave_pa are used only after vmexit.
+	 * @spec_ctrl_intercepted is needed later and accessed directly from
+	 * the stack in RESTORE_HOST_SPEC_CTRL, so save it first.
+	 */
+	push %_ASM_ARG3
 	push %_ASM_ARG2
 
 	/* Save @svm. */
@@ -61,6 +120,8 @@ SYM_FUNC_START(__svm_vcpu_run)
 	mov %_ASM_ARG1, %_ASM_DI
 .endif
 
+	RESTORE_GUEST_SPEC_CTRL
+
 	/*
 	 * Use a single vmcb (vmcb01 because it's always valid) for
@@ -138,6 +199,8 @@ SYM_FUNC_START(__svm_vcpu_run)
 	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
 #endif
 
+	RESTORE_HOST_SPEC_CTRL
+
 	/*
 	 * Mitigate RETBleed for AMD/Hygon Zen uarch. RET should be
 	 * untrained as soon as we exit the VM and are back to the
@@ -173,6 +236,9 @@ SYM_FUNC_START(__svm_vcpu_run)
 	xor %r15d, %r15d
 #endif
 
+	/* "Pop" @spec_ctrl_intercepted. */
+	pop %_ASM_BX
+
 	pop %_ASM_BX
 
 #ifdef CONFIG_X86_64
@@ -210,6 +276,7 @@ SYM_FUNC_END(__svm_vcpu_run)
 /**
  * __svm_sev_es_vcpu_run - Run a SEV-ES vCPU via a transition to SVM guest mode
  * @svm: struct vcpu_svm *
+ * @spec_ctrl_intercepted: bool
  */
 SYM_FUNC_START(__svm_sev_es_vcpu_run)
 	push %_ASM_BP
@@ -224,8 +291,21 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
 #endif
 	push %_ASM_BX
 
+	/* Save @spec_ctrl_intercepted for RESTORE_HOST_SPEC_CTRL. */
+	push %_ASM_ARG2
+
+	/* Save @svm. */
+	push %_ASM_ARG1
+
+.ifnc _ASM_ARG1, _ASM_DI
+	/* Move @svm to RDI for RESTORE_GUEST_SPEC_CTRL. */
+	mov %_ASM_ARG1, %_ASM_DI
+.endif
+
+	RESTORE_GUEST_SPEC_CTRL
+
 	/* Get svm->current_vmcb->pa into RAX. */
-	mov SVM_current_vmcb(%_ASM_ARG1), %_ASM_AX
+	mov SVM_current_vmcb(%_ASM_DI), %_ASM_AX
 	mov KVM_VMCB_pa(%_ASM_AX), %_ASM_AX
 
 	/* Enter guest mode */
@@ -235,11 +315,16 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
 
 2:	cli
 
+	/* Pop @svm to RDI, guest registers have been saved already. */
+	pop %_ASM_DI
+
 #ifdef CONFIG_RETPOLINE
 	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
 	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
 #endif
 
+	RESTORE_HOST_SPEC_CTRL
+
 	/*
 	 * Mitigate RETBleed for AMD/Hygon Zen uarch. RET should be
 	 * untrained as soon as we exit the VM and are back to the
@@ -249,6 +334,9 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
 	 */
 	UNTRAIN_RET
 
+	/* "Pop" @spec_ctrl_intercepted. */
+	pop %_ASM_BX
+
 	pop %_ASM_BX
 
 #ifdef CONFIG_X86_64

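For readers who prefer C to .macro bodies, the logic of
RESTORE_GUEST_SPEC_CTRL and RESTORE_HOST_SPEC_CTRL can be sketched as below,
with the MSR modeled as a plain variable (all *_model names are invented for
illustration, and the X86_FEATURE_MSR_SPEC_CTRL / V_SPEC_CTRL alternatives
that skip the whole thing are omitted).  The real code has to stay in noinstr
assembly precisely so that no returns or indirect branches sit between the
MSR write and vmentry, respectively between VM-exit and the return thunk
training sequence; a C version could not guarantee that.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustration only: model MSR_IA32_SPEC_CTRL as a variable. */
static uint64_t spec_ctrl_msr_model;
static uint64_t x86_spec_ctrl_current_model;	/* per-CPU host value */

struct vcpu_model {
	uint64_t spec_ctrl;	/* guest's SPEC_CTRL (SVM_spec_ctrl offset) */
};

/* RESTORE_GUEST_SPEC_CTRL: write the guest value only if it differs. */
static void restore_guest_spec_ctrl(struct vcpu_model *svm)
{
	if (svm->spec_ctrl != x86_spec_ctrl_current_model)
		spec_ctrl_msr_model = svm->spec_ctrl;		/* wrmsr */
}

/* RESTORE_HOST_SPEC_CTRL: optionally save the guest value, then restore. */
static void restore_host_spec_ctrl(struct vcpu_model *svm,
				   bool spec_ctrl_intercepted)
{
	if (!spec_ctrl_intercepted)
		svm->spec_ctrl = spec_ctrl_msr_model;		/* rdmsr */
	if (x86_spec_ctrl_current_model != svm->spec_ctrl)
		spec_ctrl_msr_model = x86_spec_ctrl_current_model;	/* wrmsr */
}

int main(void)
{
	struct vcpu_model svm = { .spec_ctrl = 1 };

	x86_spec_ctrl_current_model = 0;
	restore_guest_spec_ctrl(&svm);	/* guest wants IBRS, host does not */
	spec_ctrl_msr_model = 3;	/* guest toggled the MSR, not intercepted */
	restore_host_spec_ctrl(&svm, false);
	printf("saved guest value %llu, MSR now %llu\n",
	       (unsigned long long)svm.spec_ctrl,
	       (unsigned long long)spec_ctrl_msr_model);
	return 0;
}
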
From patchwork Mon Nov 7 14:54:36 2022
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 16504
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: nathan@kernel.org, thomas.lendacky@amd.com, andrew.cooper3@citrix.com, peterz@infradead.org, jmattson@google.com, seanjc@google.com
Subject: [PATCH 8/8] x86, KVM: remove unnecessary argument to x86_virt_spec_ctrl and callers
Date: Mon, 7 Nov 2022 09:54:36 -0500
Message-Id: <20221107145436.276079-9-pbonzini@redhat.com>
In-Reply-To: <20221107145436.276079-1-pbonzini@redhat.com>
References: <20221107145436.276079-1-pbonzini@redhat.com>

x86_virt_spec_ctrl only deals with the paravirtualized
MSR_IA32_VIRT_SPEC_CTRL now and does not handle MSR_IA32_SPEC_CTRL
anymore; remove the corresponding, unused argument.

Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/spec-ctrl.h | 10 +++++-----
 arch/x86/kernel/cpu/bugs.c       |  2 +-
 arch/x86/kvm/svm/svm.c           |  4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index 5393babc0598..cb0386fc4dc3 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -13,7 +13,7 @@
  * Takes the guest view of SPEC_CTRL MSR as a parameter and also
  * the guest's version of VIRT_SPEC_CTRL, if emulated.
  */
-extern void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest);
+extern void x86_virt_spec_ctrl(u64 guest_virt_spec_ctrl, bool guest);
 
 /**
  * x86_spec_ctrl_set_guest - Set speculation control registers for the guest
@@ -24,9 +24,9 @@ extern void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bo
  * Avoids writing to the MSR if the content/bits are the same
  */
 static inline
-void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
+void x86_spec_ctrl_set_guest(u64 guest_virt_spec_ctrl)
 {
-	x86_virt_spec_ctrl(guest_spec_ctrl, guest_virt_spec_ctrl, true);
+	x86_virt_spec_ctrl(guest_virt_spec_ctrl, true);
 }
 
 /**
@@ -38,9 +38,9 @@ void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
  * Avoids writing to the MSR if the content/bits are the same
  */
 static inline
-void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
+void x86_spec_ctrl_restore_host(u64 guest_virt_spec_ctrl)
 {
-	x86_virt_spec_ctrl(guest_spec_ctrl, guest_virt_spec_ctrl, false);
+	x86_virt_spec_ctrl(guest_virt_spec_ctrl, false);
 }
 
 /* AMD specific Speculative Store Bypass MSR data */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6ec0b7ce7453..3e3230cccaa7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -200,7 +200,7 @@ void __init check_bugs(void)
  * MSR_IA32_SPEC_CTRL for SSBD.
  */
 void
-x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
+x86_virt_spec_ctrl(u64 guest_virt_spec_ctrl, bool setguest)
 {
 	u64 guestval, hostval;
 	struct thread_info *ti = current_thread_info();
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 31aa158a2e10..e95684cbc194 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3999,7 +3999,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	 * being speculatively taken.
 	 */
 	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
-		x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
+		x86_spec_ctrl_set_guest(svm->virt_spec_ctrl);
 
 	svm_vcpu_enter_exit(vcpu, spec_ctrl_intercepted);
 
@@ -4007,7 +4007,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	reload_tss(vcpu);
 
 	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
-		x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
+		x86_spec_ctrl_restore_host(svm->virt_spec_ctrl);
 
 	if (!sev_es_guest(vcpu->kvm)) {
 		vcpu->arch.cr2 = svm->vmcb->save.cr2;