From patchwork Tue Oct 25 12:42:01 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 10739
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Yang Zhong, linux-kselftest@vger.kernel.org, Kees Cook,
 Borislav Petkov, Guang Zeng, Wanpeng Li, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Maxim Levitsky, Joerg Roedel, linux-kernel@vger.kernel.org,
 Wei Wang, Jim Mattson, Dave Hansen, Sean Christopherson, Vitaly Kuznetsov,
 x86@kernel.org, Shuah Khan
Subject: [PATCH v4 01/23] KVM: x86: start moving SMM-related functions to new files
Date: Tue, 25 Oct 2022 15:42:01 +0300
Message-Id: <20221025124223.227577-2-mlevitsk@redhat.com>
In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com>
References: <20221025124223.227577-1-mlevitsk@redhat.com>

From: Paolo Bonzini

Create a new header and source with code related to system management
mode emulation.  Entry and exit will move there too; for now,
opportunistically rename put_smstate to PUT_SMSTATE while moving it to
smm.h, and adjust the SMM state saving code.

Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |   6 --
 arch/x86/kvm/Makefile           |   1 +
 arch/x86/kvm/emulate.c          |   1 +
 arch/x86/kvm/kvm_cache_regs.h   |   5 --
 arch/x86/kvm/lapic.c            |  14 ++-
 arch/x86/kvm/lapic.h            |   7 +-
 arch/x86/kvm/mmu/mmu.c          |   1 +
 arch/x86/kvm/smm.c              |  37 ++++++++
 arch/x86/kvm/smm.h              |  25 ++++++
 arch/x86/kvm/svm/nested.c       |   1 +
 arch/x86/kvm/svm/svm.c          |   5 +-
 arch/x86/kvm/vmx/nested.c       |   1 +
 arch/x86/kvm/vmx/vmx.c          |   1 +
 arch/x86/kvm/x86.c              | 148 ++++++++++++--------------------
 14 files changed, 138 insertions(+), 115 deletions(-)
 create mode 100644 arch/x86/kvm/smm.c
 create mode 100644 arch/x86/kvm/smm.h
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7551b6f9c31c52..af9798681f88a1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2084,12 +2084,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 #endif
 }

-#define put_smstate(type, buf, offset, val) \
-	*(type *)((buf) + (offset) - 0x7e00) = val
-
-#define GET_SMSTATE(type, buf, offset) \
-	(*(type *)((buf) + (offset) - 0x7e00))
-
 int kvm_cpu_dirty_log_size(void);

 int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 30f244b6452349..ec6f7656254b9f 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -20,6 +20,7 @@ endif

 kvm-$(CONFIG_X86_64) += mmu/tdp_iter.o mmu/tdp_mmu.o
 kvm-$(CONFIG_KVM_XEN)	+= xen.o
+kvm-y			+= smm.o

 kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
 			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 3b27622d46425b..de09660a61e522 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -30,6 +30,7 @@
 #include "tss.h"
 #include "mmu.h"
 #include "pmu.h"
+#include "smm.h"

 /*
  * Operand types
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 3febc342360cc7..c09174f73a344f 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -200,9 +200,4 @@ static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
 	return vcpu->arch.hflags & HF_GUEST_MASK;
 }

-static inline bool is_smm(struct kvm_vcpu *vcpu)
-{
-	return vcpu->arch.hflags & HF_SMM_MASK;
-}
-
 #endif
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index d7639d126e6c7a..e636d8c681f4bb 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -42,6 +42,7 @@
 #include "x86.h"
 #include "cpuid.h"
 #include "hyperv.h"
+#include "smm.h"

 #ifndef CONFIG_X86_64
 #define mod_64(x, y) ((x) - (y) * div64_u64(x, y))
@@ -1170,9 +1171,10 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 		break;

 	case APIC_DM_SMI:
-		result = 1;
-		kvm_make_request(KVM_REQ_SMI, vcpu);
-		kvm_vcpu_kick(vcpu);
+		if (!kvm_inject_smi(vcpu)) {
+			kvm_vcpu_kick(vcpu);
+			result = 1;
+		}
 		break;

 	case APIC_DM_NMI:
@@ -3020,6 +3022,12 @@ int kvm_lapic_set_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len)
 	return 0;
 }

+bool kvm_apic_init_sipi_allowed(struct kvm_vcpu *vcpu)
+{
+	return !is_smm(vcpu) &&
+	       !static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
+}
+
 int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index a5ac4a5a517920..cb7e68c93e1a23 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -7,7 +7,6 @@
 #include <linux/kvm_host.h>

 #include "hyperv.h"
-#include "kvm_cache_regs.h"

 #define KVM_APIC_INIT		0
 #define KVM_APIC_SIPI		1
@@ -229,11 +228,7 @@ static inline bool kvm_apic_has_pending_init_or_sipi(struct kvm_vcpu *vcpu)
 	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events;
 }

-static inline bool kvm_apic_init_sipi_allowed(struct kvm_vcpu *vcpu)
-{
-	return !is_smm(vcpu) &&
-	       !static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
-}
+bool kvm_apic_init_sipi_allowed(struct kvm_vcpu *vcpu);

 static inline bool kvm_lowest_prio_delivery(struct kvm_lapic_irq *irq)
 {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f81539061d648..6a30d67fadb4d6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -22,6 +22,7 @@
 #include "tdp_mmu.h"
 #include "x86.h"
 #include "kvm_cache_regs.h"
+#include "smm.h"
 #include "kvm_emulate.h"
 #include "cpuid.h"
 #include "spte.h"
diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
new file mode 100644
index 00000000000000..b91c48d91f6ed9
--- /dev/null
+++ b/arch/x86/kvm/smm.c
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/kvm_host.h>
+#include "x86.h"
+#include "kvm_cache_regs.h"
+#include "kvm_emulate.h"
+#include "smm.h"
+#include "trace.h"
+
+void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
+{
+	trace_kvm_smm_transition(vcpu->vcpu_id, vcpu->arch.smbase, entering_smm);
+
+	if (entering_smm) {
+		vcpu->arch.hflags |= HF_SMM_MASK;
+	} else {
+		vcpu->arch.hflags &= ~(HF_SMM_MASK | HF_SMM_INSIDE_NMI_MASK);
+
+		/* Process a latched INIT or SMI, if any.  */
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+
+		/*
+		 * Even if KVM_SET_SREGS2 loaded PDPTRs out of band,
+		 * on SMM exit we still need to reload them from
+		 * guest memory
+		 */
+		vcpu->arch.pdptrs_from_userspace = false;
+	}
+
+	kvm_mmu_reset_context(vcpu);
+}
+
+void process_smi(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.smi_pending = true;
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+}
diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
new file mode 100644
index 00000000000000..d85d4ccd32dd13
--- /dev/null
+++ b/arch/x86/kvm/smm.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef ASM_KVM_SMM_H
+#define ASM_KVM_SMM_H
+
+#define GET_SMSTATE(type, buf, offset) \
+	(*(type *)((buf) + (offset) - 0x7e00))
+
+#define PUT_SMSTATE(type, buf, offset, val) \
+	*(type *)((buf) + (offset) - 0x7e00) = val
+
+static inline int kvm_inject_smi(struct kvm_vcpu *vcpu)
+{
+	kvm_make_request(KVM_REQ_SMI, vcpu);
+	return 0;
+}
+
+static inline bool is_smm(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.hflags & HF_SMM_MASK;
+}
+
+void kvm_smm_changed(struct kvm_vcpu *vcpu, bool in_smm);
+void process_smi(struct kvm_vcpu *vcpu);
+
+#endif
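The new smm.h helpers centralize the two state queries that other
subsystems need; a short usage sketch mirroring the lapic.c hunk above
(the surrounding control flow is invented for illustration):

	/* Latch an SMI; kvm_inject_smi() currently always succeeds (returns 0) */
	if (!kvm_inject_smi(vcpu))
		kvm_vcpu_kick(vcpu);	/* wake the vCPU so it services the request */

	/*
	 * Block INIT/SIPI while in SMM, as kvm_apic_init_sipi_allowed() does;
	 * the latched INIT is replayed by kvm_smm_changed() on SMM exit.
	 */
	if (is_smm(vcpu))
		return false;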
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 4c620999d230a5..cc0fd75f7cbab5 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -25,6 +25,7 @@
 #include "trace.h"
 #include "mmu.h"
 #include "x86.h"
+#include "smm.h"
 #include "cpuid.h"
 #include "lapic.h"
 #include "svm.h"
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 58f0077d935799..496ee7d1ae2fb7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -6,6 +6,7 @@
 #include "mmu.h"
 #include "kvm_cache_regs.h"
 #include "x86.h"
+#include "smm.h"
 #include "cpuid.h"
 #include "pmu.h"

@@ -4442,9 +4443,9 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 		return 0;

 	/* FED8h - SVM Guest */
-	put_smstate(u64, smstate, 0x7ed8, 1);
+	PUT_SMSTATE(u64, smstate, 0x7ed8, 1);
 	/* FEE0h - SVM Guest VMCB Physical Address */
-	put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
+	PUT_SMSTATE(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);

 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 8f67a9c4a28706..29215925e75b11 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -16,6 +16,7 @@
 #include "trace.h"
 #include "vmx.h"
 #include "x86.h"
+#include "smm.h"

 static bool __read_mostly enable_shadow_vmcs = 1;
 module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9dba04b6b019ac..038809c6800601 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -66,6 +66,7 @@
 #include "vmcs12.h"
 #include "vmx.h"
 #include "x86.h"
+#include "smm.h"

 MODULE_AUTHOR("Qumranet");
 MODULE_LICENSE("GPL");
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4bd5f8a751de91..6a5cebf7250826 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -30,6 +30,7 @@
 #include "hyperv.h"
 #include "lapic.h"
 #include "xen.h"
+#include "smm.h"

 #include <linux/clocksource.h>
 #include <linux/interrupt.h>
@@ -119,7 +120,6 @@ static u64 __read_mostly cr4_reserved_bits = CR4_RESERVED_BITS;

 static void update_cr8_intercept(struct kvm_vcpu *vcpu);
 static void process_nmi(struct kvm_vcpu *vcpu);
-static void process_smi(struct kvm_vcpu *vcpu);
 static void enter_smm(struct kvm_vcpu *vcpu);
 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 static void store_regs(struct kvm_vcpu *vcpu);
@@ -4896,13 +4896,6 @@ static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
 	return 0;
 }

-static int kvm_vcpu_ioctl_smi(struct kvm_vcpu *vcpu)
-{
-	kvm_make_request(KVM_REQ_SMI, vcpu);
-
-	return 0;
-}
-
 static int vcpu_ioctl_tpr_access_reporting(struct kvm_vcpu *vcpu,
 					   struct kvm_tpr_access_ctl *tac)
 {
@@ -5125,8 +5118,6 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 	memset(&events->reserved, 0, sizeof(events->reserved));
 }

-static void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm);
-
 static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 					      struct kvm_vcpu_events *events)
 {
@@ -5579,7 +5570,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_SMI: {
-		r = kvm_vcpu_ioctl_smi(vcpu);
+		r = kvm_inject_smi(vcpu);
 		break;
 	}
 	case KVM_SET_CPUID: {
@@ -8527,29 +8518,6 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
 static int complete_emulated_mmio(struct kvm_vcpu *vcpu);
 static int complete_emulated_pio(struct kvm_vcpu *vcpu);

-static void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
-{
-	trace_kvm_smm_transition(vcpu->vcpu_id, vcpu->arch.smbase, entering_smm);
-
-	if (entering_smm) {
-		vcpu->arch.hflags |= HF_SMM_MASK;
-	} else {
-		vcpu->arch.hflags &= ~(HF_SMM_MASK | HF_SMM_INSIDE_NMI_MASK);
-
-		/* Process a latched INIT or SMI, if any.  */
-		kvm_make_request(KVM_REQ_EVENT, vcpu);
-
-		/*
-		 * Even if KVM_SET_SREGS2 loaded PDPTRs out of band,
-		 * on SMM exit we still need to reload them from
-		 * guest memory
-		 */
-		vcpu->arch.pdptrs_from_userspace = false;
-	}
-
-	kvm_mmu_reset_context(vcpu);
-}
-
 static int kvm_vcpu_check_hw_bp(unsigned long addr, u32 type, u32 dr7,
 				unsigned long *db)
 {
@@ -10033,16 +10001,16 @@ static void enter_smm_save_seg_32(struct kvm_vcpu *vcpu, char *buf, int n)
 	int offset;

 	kvm_get_segment(vcpu, &seg, n);
-	put_smstate(u32, buf, 0x7fa8 + n * 4, seg.selector);
+	PUT_SMSTATE(u32, buf, 0x7fa8 + n * 4, seg.selector);

 	if (n < 3)
 		offset = 0x7f84 + n * 12;
 	else
 		offset = 0x7f2c + (n - 3) * 12;

-	put_smstate(u32, buf, offset + 8, seg.base);
-	put_smstate(u32, buf, offset + 4, seg.limit);
-	put_smstate(u32, buf, offset, enter_smm_get_segment_flags(&seg));
+	PUT_SMSTATE(u32, buf, offset + 8, seg.base);
+	PUT_SMSTATE(u32, buf, offset + 4, seg.limit);
+	PUT_SMSTATE(u32, buf, offset, enter_smm_get_segment_flags(&seg));
 }

 #ifdef CONFIG_X86_64
@@ -10056,10 +10024,10 @@ static void enter_smm_save_seg_64(struct kvm_vcpu *vcpu, char *buf, int n)
 	offset = 0x7e00 + n * 16;

 	flags = enter_smm_get_segment_flags(&seg) >> 8;
-	put_smstate(u16, buf, offset, seg.selector);
-	put_smstate(u16, buf, offset + 2, flags);
-	put_smstate(u32, buf, offset + 4, seg.limit);
-	put_smstate(u64, buf, offset + 8, seg.base);
+	PUT_SMSTATE(u16, buf, offset, seg.selector);
+	PUT_SMSTATE(u16, buf, offset + 2, flags);
+	PUT_SMSTATE(u32, buf, offset + 4, seg.limit);
+	PUT_SMSTATE(u64, buf, offset + 8, seg.base);
 }
 #endif

@@ -10070,47 +10038,47 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
 	unsigned long val;
 	int i;

-	put_smstate(u32, buf, 0x7ffc, kvm_read_cr0(vcpu));
-	put_smstate(u32, buf, 0x7ff8, kvm_read_cr3(vcpu));
-	put_smstate(u32, buf, 0x7ff4, kvm_get_rflags(vcpu));
-	put_smstate(u32, buf, 0x7ff0, kvm_rip_read(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7ffc, kvm_read_cr0(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7ff8, kvm_read_cr3(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7ff4, kvm_get_rflags(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7ff0, kvm_rip_read(vcpu));

 	for (i = 0; i < 8; i++)
-		put_smstate(u32, buf, 0x7fd0 + i * 4, kvm_register_read_raw(vcpu, i));
+		PUT_SMSTATE(u32, buf, 0x7fd0 + i * 4, kvm_register_read_raw(vcpu, i));

 	kvm_get_dr(vcpu, 6, &val);
-	put_smstate(u32, buf, 0x7fcc, (u32)val);
+	PUT_SMSTATE(u32, buf, 0x7fcc, (u32)val);
 	kvm_get_dr(vcpu, 7, &val);
-	put_smstate(u32, buf, 0x7fc8, (u32)val);
+	PUT_SMSTATE(u32, buf, 0x7fc8, (u32)val);

 	kvm_get_segment(vcpu, &seg, VCPU_SREG_TR);
-	put_smstate(u32, buf, 0x7fc4, seg.selector);
-	put_smstate(u32, buf, 0x7f64, seg.base);
-	put_smstate(u32, buf, 0x7f60, seg.limit);
-	put_smstate(u32, buf, 0x7f5c, enter_smm_get_segment_flags(&seg));
+	PUT_SMSTATE(u32, buf, 0x7fc4, seg.selector);
+	PUT_SMSTATE(u32, buf, 0x7f64, seg.base);
+	PUT_SMSTATE(u32, buf, 0x7f60, seg.limit);
+	PUT_SMSTATE(u32, buf, 0x7f5c, enter_smm_get_segment_flags(&seg));

 	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
-	put_smstate(u32, buf, 0x7fc0, seg.selector);
-	put_smstate(u32, buf, 0x7f80, seg.base);
-	put_smstate(u32, buf, 0x7f7c, seg.limit);
-	put_smstate(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg));
+	PUT_SMSTATE(u32, buf, 0x7fc0, seg.selector);
+	PUT_SMSTATE(u32, buf, 0x7f80, seg.base);
+	PUT_SMSTATE(u32, buf, 0x7f7c, seg.limit);
+	PUT_SMSTATE(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg));

 	static_call(kvm_x86_get_gdt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7f74, dt.address);
-	put_smstate(u32, buf, 0x7f70, dt.size);
+	PUT_SMSTATE(u32, buf, 0x7f74, dt.address);
+	PUT_SMSTATE(u32, buf, 0x7f70, dt.size);

 	static_call(kvm_x86_get_idt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7f58, dt.address);
-	put_smstate(u32, buf, 0x7f54, dt.size);
+	PUT_SMSTATE(u32, buf, 0x7f58, dt.address);
+	PUT_SMSTATE(u32, buf, 0x7f54, dt.size);

 	for (i = 0; i < 6; i++)
 		enter_smm_save_seg_32(vcpu, buf, i);

-	put_smstate(u32, buf, 0x7f14, kvm_read_cr4(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7f14, kvm_read_cr4(vcpu));

 	/* revision id */
-	put_smstate(u32, buf, 0x7efc, 0x00020000);
-	put_smstate(u32, buf, 0x7ef8, vcpu->arch.smbase);
+	PUT_SMSTATE(u32, buf, 0x7efc, 0x00020000);
+	PUT_SMSTATE(u32, buf, 0x7ef8, vcpu->arch.smbase);
 }

 #ifdef CONFIG_X86_64
@@ -10122,46 +10090,46 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 	int i;

 	for (i = 0; i < 16; i++)
-		put_smstate(u64, buf, 0x7ff8 - i * 8, kvm_register_read_raw(vcpu, i));
+		PUT_SMSTATE(u64, buf, 0x7ff8 - i * 8, kvm_register_read_raw(vcpu, i));

-	put_smstate(u64, buf, 0x7f78, kvm_rip_read(vcpu));
-	put_smstate(u32, buf, 0x7f70, kvm_get_rflags(vcpu));
+	PUT_SMSTATE(u64, buf, 0x7f78, kvm_rip_read(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7f70, kvm_get_rflags(vcpu));

 	kvm_get_dr(vcpu, 6, &val);
-	put_smstate(u64, buf, 0x7f68, val);
+	PUT_SMSTATE(u64, buf, 0x7f68, val);
 	kvm_get_dr(vcpu, 7, &val);
-	put_smstate(u64, buf, 0x7f60, val);
+	PUT_SMSTATE(u64, buf, 0x7f60, val);

-	put_smstate(u64, buf, 0x7f58, kvm_read_cr0(vcpu));
-	put_smstate(u64, buf, 0x7f50, kvm_read_cr3(vcpu));
-	put_smstate(u64, buf, 0x7f48, kvm_read_cr4(vcpu));
+	PUT_SMSTATE(u64, buf, 0x7f58, kvm_read_cr0(vcpu));
+	PUT_SMSTATE(u64, buf, 0x7f50, kvm_read_cr3(vcpu));
+	PUT_SMSTATE(u64, buf, 0x7f48, kvm_read_cr4(vcpu));

-	put_smstate(u32, buf, 0x7f00, vcpu->arch.smbase);
+	PUT_SMSTATE(u32, buf, 0x7f00, vcpu->arch.smbase);

 	/* revision id */
-	put_smstate(u32, buf, 0x7efc, 0x00020064);
+	PUT_SMSTATE(u32, buf, 0x7efc, 0x00020064);

-	put_smstate(u64, buf, 0x7ed0, vcpu->arch.efer);
+	PUT_SMSTATE(u64, buf, 0x7ed0, vcpu->arch.efer);

 	kvm_get_segment(vcpu, &seg, VCPU_SREG_TR);
-	put_smstate(u16, buf, 0x7e90, seg.selector);
-	put_smstate(u16, buf, 0x7e92, enter_smm_get_segment_flags(&seg) >> 8);
-	put_smstate(u32, buf, 0x7e94, seg.limit);
-	put_smstate(u64, buf, 0x7e98, seg.base);
+	PUT_SMSTATE(u16, buf, 0x7e90, seg.selector);
+	PUT_SMSTATE(u16, buf, 0x7e92, enter_smm_get_segment_flags(&seg) >> 8);
+	PUT_SMSTATE(u32, buf, 0x7e94, seg.limit);
+	PUT_SMSTATE(u64, buf, 0x7e98, seg.base);

 	static_call(kvm_x86_get_idt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7e84, dt.size);
-	put_smstate(u64, buf, 0x7e88, dt.address);
+	PUT_SMSTATE(u32, buf, 0x7e84, dt.size);
+	PUT_SMSTATE(u64, buf, 0x7e88, dt.address);

 	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
-	put_smstate(u16, buf, 0x7e70, seg.selector);
-	put_smstate(u16, buf, 0x7e72, enter_smm_get_segment_flags(&seg) >> 8);
-	put_smstate(u32, buf, 0x7e74, seg.limit);
-	put_smstate(u64, buf, 0x7e78, seg.base);
+	PUT_SMSTATE(u16, buf, 0x7e70, seg.selector);
+	PUT_SMSTATE(u16, buf, 0x7e72, enter_smm_get_segment_flags(&seg) >> 8);
+	PUT_SMSTATE(u32, buf, 0x7e74, seg.limit);
+	PUT_SMSTATE(u64, buf, 0x7e78, seg.base);

 	static_call(kvm_x86_get_gdt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7e64, dt.size);
-	put_smstate(u64, buf, 0x7e68, dt.address);
+	PUT_SMSTATE(u32, buf, 0x7e64, dt.size);
+	PUT_SMSTATE(u64, buf, 0x7e68, dt.address);

 	for (i = 0; i < 6; i++)
 		enter_smm_save_seg_64(vcpu, buf, i);
@@ -10247,12 +10215,6 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	kvm_mmu_reset_context(vcpu);
 }

-static void process_smi(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.smi_pending = true;
-	kvm_make_request(KVM_REQ_EVENT, vcpu);
-}
-
 void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
 				       unsigned long *vcpu_bitmap)
 {
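Userspace reaches process_smi() through the KVM_SMI vCPU ioctl, which
after this patch is routed straight to kvm_inject_smi(); a minimal
caller sketch (file-descriptor setup omitted, error handling is
illustrative):

	/* vcpu_fd was obtained earlier via KVM_CREATE_VCPU */
	if (ioctl(vcpu_fd, KVM_SMI, 0) < 0)
		perror("KVM_SMI");
	/* the SMI is now latched; the next KVM_RUN enters SMM via enter_smm() */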
From patchwork Tue Oct 25 12:42:02 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 10735
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH v4 02/23] KVM: x86: move SMM entry to a new file
Date: Tue, 25 Oct 2022 15:42:02 +0300
Message-Id: <20221025124223.227577-3-mlevitsk@redhat.com>
In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com>
References: <20221025124223.227577-1-mlevitsk@redhat.com>

From: Paolo Bonzini

Some users of KVM implement the UEFI variable store through a paravirtual
device that does not require the "SMM lockbox" component of edk2, and
would like to compile out system management mode.  In preparation for
that, move the SMM entry code out of x86.c and into a new file.

Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/smm.c              | 235 +++++++++++++++++++++++++++++++
 arch/x86/kvm/smm.h              |   1 +
 arch/x86/kvm/x86.c              | 239 +-------------------------------
 4 files changed, 239 insertions(+), 237 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index af9798681f88a1..4afed04fcc8241 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1839,6 +1839,7 @@ int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu);
 int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu);

 void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+void kvm_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
 int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg);
 void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);

diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
index b91c48d91f6ed9..26a6859e421fe1 100644
--- a/arch/x86/kvm/smm.c
+++ b/arch/x86/kvm/smm.c
@@ -5,6 +5,7 @@
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
 #include "smm.h"
+#include "cpuid.h"
 #include "trace.h"

 void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
@@ -35,3 +36,237 @@ void process_smi(struct kvm_vcpu *vcpu)
 	vcpu->arch.smi_pending = true;
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
+
+static u32 enter_smm_get_segment_flags(struct kvm_segment *seg)
+{
+	u32 flags = 0;
+	flags |= seg->g << 23;
+	flags |= seg->db << 22;
+	flags |= seg->l << 21;
+	flags |= seg->avl << 20;
+	flags |= seg->present << 15;
+	flags |= seg->dpl << 13;
+	flags |= seg->s << 12;
+	flags |= seg->type << 8;
+	return flags;
+}
+
+static void enter_smm_save_seg_32(struct kvm_vcpu *vcpu, char *buf, int n)
+{
+	struct kvm_segment seg;
+	int offset;
+
+	kvm_get_segment(vcpu, &seg, n);
+	PUT_SMSTATE(u32, buf, 0x7fa8 + n * 4, seg.selector);
+
+	if (n < 3)
+		offset = 0x7f84 + n * 12;
+	else
+		offset = 0x7f2c + (n - 3) * 12;
+
+	PUT_SMSTATE(u32, buf, offset + 8, seg.base);
+	PUT_SMSTATE(u32, buf, offset + 4, seg.limit);
+	PUT_SMSTATE(u32, buf, offset, enter_smm_get_segment_flags(&seg));
+}
+
+#ifdef CONFIG_X86_64
+static void enter_smm_save_seg_64(struct kvm_vcpu *vcpu, char *buf, int n)
+{
+	struct kvm_segment seg;
+	int offset;
+	u16 flags;
+
+	kvm_get_segment(vcpu, &seg, n);
+	offset = 0x7e00 + n * 16;
+
+	flags = enter_smm_get_segment_flags(&seg) >> 8;
+	PUT_SMSTATE(u16, buf, offset, seg.selector);
+	PUT_SMSTATE(u16, buf, offset + 2, flags);
+	PUT_SMSTATE(u32, buf, offset + 4, seg.limit);
+	PUT_SMSTATE(u64, buf, offset + 8, seg.base);
+}
+#endif
+
+static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
+{
+	struct desc_ptr dt;
+	struct kvm_segment seg;
+	unsigned long val;
+	int i;
+
+	PUT_SMSTATE(u32, buf, 0x7ffc, kvm_read_cr0(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7ff8, kvm_read_cr3(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7ff4, kvm_get_rflags(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7ff0, kvm_rip_read(vcpu));
+
+	for (i = 0; i < 8; i++)
+		PUT_SMSTATE(u32, buf, 0x7fd0 + i * 4, kvm_register_read_raw(vcpu, i));
+
+	kvm_get_dr(vcpu, 6, &val);
+	PUT_SMSTATE(u32, buf, 0x7fcc, (u32)val);
+	kvm_get_dr(vcpu, 7, &val);
+	PUT_SMSTATE(u32, buf, 0x7fc8, (u32)val);
+
+	kvm_get_segment(vcpu, &seg, VCPU_SREG_TR);
+	PUT_SMSTATE(u32, buf, 0x7fc4, seg.selector);
+	PUT_SMSTATE(u32, buf, 0x7f64, seg.base);
+	PUT_SMSTATE(u32, buf, 0x7f60, seg.limit);
+	PUT_SMSTATE(u32, buf, 0x7f5c, enter_smm_get_segment_flags(&seg));
+
+	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
+	PUT_SMSTATE(u32, buf, 0x7fc0, seg.selector);
+	PUT_SMSTATE(u32, buf, 0x7f80, seg.base);
+	PUT_SMSTATE(u32, buf, 0x7f7c, seg.limit);
+	PUT_SMSTATE(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg));
+
+	static_call(kvm_x86_get_gdt)(vcpu, &dt);
+	PUT_SMSTATE(u32, buf, 0x7f74, dt.address);
+	PUT_SMSTATE(u32, buf, 0x7f70, dt.size);
+
+	static_call(kvm_x86_get_idt)(vcpu, &dt);
+	PUT_SMSTATE(u32, buf, 0x7f58, dt.address);
+	PUT_SMSTATE(u32, buf, 0x7f54, dt.size);
+
+	for (i = 0; i < 6; i++)
+		enter_smm_save_seg_32(vcpu, buf, i);
+
+	PUT_SMSTATE(u32, buf, 0x7f14, kvm_read_cr4(vcpu));
+
+	/* revision id */
+	PUT_SMSTATE(u32, buf, 0x7efc, 0x00020000);
+	PUT_SMSTATE(u32, buf, 0x7ef8, vcpu->arch.smbase);
+}
+
+#ifdef CONFIG_X86_64
+static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
+{
+	struct desc_ptr dt;
+	struct kvm_segment seg;
+	unsigned long val;
+	int i;
+
+	for (i = 0; i < 16; i++)
+		PUT_SMSTATE(u64, buf, 0x7ff8 - i * 8, kvm_register_read_raw(vcpu, i));
+
+	PUT_SMSTATE(u64, buf, 0x7f78, kvm_rip_read(vcpu));
+	PUT_SMSTATE(u32, buf, 0x7f70, kvm_get_rflags(vcpu));
+
+	kvm_get_dr(vcpu, 6, &val);
+	PUT_SMSTATE(u64, buf, 0x7f68, val);
+	kvm_get_dr(vcpu, 7, &val);
+	PUT_SMSTATE(u64, buf, 0x7f60, val);
+
+	PUT_SMSTATE(u64, buf, 0x7f58, kvm_read_cr0(vcpu));
+	PUT_SMSTATE(u64, buf, 0x7f50, kvm_read_cr3(vcpu));
+	PUT_SMSTATE(u64, buf, 0x7f48, kvm_read_cr4(vcpu));
+
+	PUT_SMSTATE(u32, buf, 0x7f00, vcpu->arch.smbase);
+
+	/* revision id */
+	PUT_SMSTATE(u32, buf, 0x7efc, 0x00020064);
+
+	PUT_SMSTATE(u64, buf, 0x7ed0, vcpu->arch.efer);
+
+	kvm_get_segment(vcpu, &seg, VCPU_SREG_TR);
+	PUT_SMSTATE(u16, buf, 0x7e90, seg.selector);
+	PUT_SMSTATE(u16, buf, 0x7e92, enter_smm_get_segment_flags(&seg) >> 8);
+	PUT_SMSTATE(u32, buf, 0x7e94, seg.limit);
+	PUT_SMSTATE(u64, buf, 0x7e98, seg.base);
+
+	static_call(kvm_x86_get_idt)(vcpu, &dt);
+	PUT_SMSTATE(u32, buf, 0x7e84, dt.size);
+	PUT_SMSTATE(u64, buf, 0x7e88, dt.address);
+
+	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
+	PUT_SMSTATE(u16, buf, 0x7e70, seg.selector);
+	PUT_SMSTATE(u16, buf, 0x7e72, enter_smm_get_segment_flags(&seg) >> 8);
+	PUT_SMSTATE(u32, buf, 0x7e74, seg.limit);
+	PUT_SMSTATE(u64, buf, 0x7e78, seg.base);
+
+	static_call(kvm_x86_get_gdt)(vcpu, &dt);
+	PUT_SMSTATE(u32, buf, 0x7e64, dt.size);
+	PUT_SMSTATE(u64, buf, 0x7e68, dt.address);
+
+	for (i = 0; i < 6; i++)
+		enter_smm_save_seg_64(vcpu, buf, i);
+}
+#endif
+
+void enter_smm(struct kvm_vcpu *vcpu)
+{
+	struct kvm_segment cs, ds;
+	struct desc_ptr dt;
+	unsigned long cr0;
+	char buf[512];
+
+	memset(buf, 0, 512);
+#ifdef CONFIG_X86_64
+	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
+		enter_smm_save_state_64(vcpu, buf);
+	else
+#endif
+		enter_smm_save_state_32(vcpu, buf);
+
+	/*
+	 * Give enter_smm() a chance to make ISA-specific changes to the vCPU
+	 * state (e.g. leave guest mode) after we've saved the state into the
+	 * SMM state-save area.
+	 */
+	static_call(kvm_x86_enter_smm)(vcpu, buf);
+
+	kvm_smm_changed(vcpu, true);
+	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf));
+
+	if (static_call(kvm_x86_get_nmi_mask)(vcpu))
+		vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;
+	else
+		static_call(kvm_x86_set_nmi_mask)(vcpu, true);
+
+	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
+	kvm_rip_write(vcpu, 0x8000);
+
+	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
+	static_call(kvm_x86_set_cr0)(vcpu, cr0);
+	vcpu->arch.cr0 = cr0;
+
+	static_call(kvm_x86_set_cr4)(vcpu, 0);
+
+	/* Undocumented: IDT limit is set to zero on entry to SMM.  */
+	dt.address = dt.size = 0;
+	static_call(kvm_x86_set_idt)(vcpu, &dt);
+
+	kvm_set_dr(vcpu, 7, DR7_FIXED_1);
+
+	cs.selector = (vcpu->arch.smbase >> 4) & 0xffff;
+	cs.base = vcpu->arch.smbase;
+
+	ds.selector = 0;
+	ds.base = 0;
+
+	cs.limit    = ds.limit = 0xffffffff;
+	cs.type     = ds.type = 0x3;
+	cs.dpl      = ds.dpl = 0;
+	cs.db       = ds.db = 0;
+	cs.s        = ds.s = 1;
+	cs.l        = ds.l = 0;
+	cs.g        = ds.g = 1;
+	cs.avl      = ds.avl = 0;
+	cs.present  = ds.present = 1;
+	cs.unusable = ds.unusable = 0;
+	cs.padding  = ds.padding = 0;
+
+	kvm_set_segment(vcpu, &cs, VCPU_SREG_CS);
+	kvm_set_segment(vcpu, &ds, VCPU_SREG_DS);
+	kvm_set_segment(vcpu, &ds, VCPU_SREG_ES);
+	kvm_set_segment(vcpu, &ds, VCPU_SREG_FS);
+	kvm_set_segment(vcpu, &ds, VCPU_SREG_GS);
+	kvm_set_segment(vcpu, &ds, VCPU_SREG_SS);
+
+#ifdef CONFIG_X86_64
+	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
+		static_call(kvm_x86_set_efer)(vcpu, 0);
+#endif
+
+	kvm_update_cpuid_runtime(vcpu);
+	kvm_mmu_reset_context(vcpu);
+}
diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
index d85d4ccd32dd13..aacc6dac2c99a1 100644
--- a/arch/x86/kvm/smm.h
+++ b/arch/x86/kvm/smm.h
@@ -20,6 +20,7 @@ static inline bool is_smm(struct kvm_vcpu *vcpu)
 }

 void kvm_smm_changed(struct kvm_vcpu *vcpu, bool in_smm);
+void enter_smm(struct kvm_vcpu *vcpu);
 void process_smi(struct kvm_vcpu *vcpu);

 #endif
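enter_smm_get_segment_flags() above packs segment attributes into the
descriptor's own bit positions; a worked example with arbitrary field
values (the emulator's rsm_set_desc_flags(), moved in the next patch,
performs the exact inverse):

	/* g=1, present=1, s=1, type=0xb (execute/read code segment) */
	u32 flags = (1 << 23) | (1 << 15) | (1 << 12) | (0xb << 8);  /* 0x00809b00 */

	/* unpacking recovers the same fields */
	g    = (flags >> 23) & 1;	/* 1   */
	p    = (flags >> 15) & 1;	/* 1   */
	s    = (flags >> 12) & 1;	/* 1   */
	type = (flags >>  8) & 15;	/* 0xb */

The 64-bit save format keeps only the upper attribute bytes, which is
why enter_smm_save_seg_64() stores flags >> 8 as a u16.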
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6a5cebf7250826..1f69f54d1dbc82 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -120,7 +120,6 @@ static u64 __read_mostly cr4_reserved_bits = CR4_RESERVED_BITS;

 static void update_cr8_intercept(struct kvm_vcpu *vcpu);
 static void process_nmi(struct kvm_vcpu *vcpu);
-static void enter_smm(struct kvm_vcpu *vcpu);
 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 static void store_regs(struct kvm_vcpu *vcpu);
 static int sync_regs(struct kvm_vcpu *vcpu);
@@ -7056,8 +7055,8 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v)
 	return handled;
 }

-static void kvm_set_segment(struct kvm_vcpu *vcpu,
-			    struct kvm_segment *var, int seg)
+void kvm_set_segment(struct kvm_vcpu *vcpu,
+		     struct kvm_segment *var, int seg)
 {
 	static_call(kvm_x86_set_segment)(vcpu, var, seg);
 }
@@ -9981,240 +9980,6 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }

-static u32 enter_smm_get_segment_flags(struct kvm_segment *seg)
-{
-	u32 flags = 0;
-	flags |= seg->g << 23;
-	flags |= seg->db << 22;
-	flags |= seg->l << 21;
-	flags |= seg->avl << 20;
-	flags |= seg->present << 15;
-	flags |= seg->dpl << 13;
-	flags |= seg->s << 12;
-	flags |= seg->type << 8;
-	return flags;
-}
-
-static void enter_smm_save_seg_32(struct kvm_vcpu *vcpu, char *buf, int n)
-{
-	struct kvm_segment seg;
-	int offset;
-
-	kvm_get_segment(vcpu, &seg, n);
-	PUT_SMSTATE(u32, buf, 0x7fa8 + n * 4, seg.selector);
-
-	if (n < 3)
-		offset = 0x7f84 + n * 12;
-	else
-		offset = 0x7f2c + (n - 3) * 12;
-
-	PUT_SMSTATE(u32, buf, offset + 8, seg.base);
-	PUT_SMSTATE(u32, buf, offset + 4, seg.limit);
-	PUT_SMSTATE(u32, buf, offset, enter_smm_get_segment_flags(&seg));
-}
-
-#ifdef CONFIG_X86_64
-static void enter_smm_save_seg_64(struct kvm_vcpu *vcpu, char *buf, int n)
-{
-	struct kvm_segment seg;
-	int offset;
-	u16 flags;
-
-	kvm_get_segment(vcpu, &seg, n);
-	offset = 0x7e00 + n * 16;
-
-	flags = enter_smm_get_segment_flags(&seg) >> 8;
-	PUT_SMSTATE(u16, buf, offset, seg.selector);
-	PUT_SMSTATE(u16, buf, offset + 2, flags);
-	PUT_SMSTATE(u32, buf, offset + 4, seg.limit);
-	PUT_SMSTATE(u64, buf, offset + 8, seg.base);
-}
-#endif
-
-static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
-{
-	struct desc_ptr dt;
-	struct kvm_segment seg;
-	unsigned long val;
-	int i;
-
-	PUT_SMSTATE(u32, buf, 0x7ffc, kvm_read_cr0(vcpu));
-	PUT_SMSTATE(u32, buf, 0x7ff8, kvm_read_cr3(vcpu));
-	PUT_SMSTATE(u32, buf, 0x7ff4, kvm_get_rflags(vcpu));
-	PUT_SMSTATE(u32, buf, 0x7ff0, kvm_rip_read(vcpu));
-
-	for (i = 0; i < 8; i++)
-		PUT_SMSTATE(u32, buf, 0x7fd0 + i * 4, kvm_register_read_raw(vcpu, i));
-
-	kvm_get_dr(vcpu, 6, &val);
-	PUT_SMSTATE(u32, buf, 0x7fcc, (u32)val);
-	kvm_get_dr(vcpu, 7, &val);
-	PUT_SMSTATE(u32, buf, 0x7fc8, (u32)val);
-
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_TR);
-	PUT_SMSTATE(u32, buf, 0x7fc4, seg.selector);
-	PUT_SMSTATE(u32, buf, 0x7f64, seg.base);
-	PUT_SMSTATE(u32, buf, 0x7f60, seg.limit);
-	PUT_SMSTATE(u32, buf, 0x7f5c, enter_smm_get_segment_flags(&seg));
-
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
-	PUT_SMSTATE(u32, buf, 0x7fc0, seg.selector);
-	PUT_SMSTATE(u32, buf, 0x7f80, seg.base);
-	PUT_SMSTATE(u32, buf, 0x7f7c, seg.limit);
-	PUT_SMSTATE(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg));
-
-	static_call(kvm_x86_get_gdt)(vcpu, &dt);
-	PUT_SMSTATE(u32, buf, 0x7f74, dt.address);
-	PUT_SMSTATE(u32, buf, 0x7f70, dt.size);
-
-	static_call(kvm_x86_get_idt)(vcpu, &dt);
-	PUT_SMSTATE(u32, buf, 0x7f58, dt.address);
-	PUT_SMSTATE(u32, buf, 0x7f54, dt.size);
-
-	for (i = 0; i < 6; i++)
-		enter_smm_save_seg_32(vcpu, buf, i);
-
-	PUT_SMSTATE(u32, buf, 0x7f14, kvm_read_cr4(vcpu));
-
-	/* revision id */
-	PUT_SMSTATE(u32, buf, 0x7efc, 0x00020000);
-	PUT_SMSTATE(u32, buf, 0x7ef8, vcpu->arch.smbase);
-}
-
-#ifdef CONFIG_X86_64
-static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
-{
-	struct desc_ptr dt;
-	struct kvm_segment seg;
-	unsigned long val;
-	int i;
-
-	for (i = 0; i < 16; i++)
-		PUT_SMSTATE(u64, buf, 0x7ff8 - i * 8, kvm_register_read_raw(vcpu, i));
-
-	PUT_SMSTATE(u64, buf, 0x7f78, kvm_rip_read(vcpu));
-	PUT_SMSTATE(u32, buf, 0x7f70, kvm_get_rflags(vcpu));
-
-	kvm_get_dr(vcpu, 6, &val);
-	PUT_SMSTATE(u64, buf, 0x7f68, val);
-	kvm_get_dr(vcpu, 7, &val);
-	PUT_SMSTATE(u64, buf, 0x7f60, val);
-
-	PUT_SMSTATE(u64, buf, 0x7f58, kvm_read_cr0(vcpu));
-	PUT_SMSTATE(u64, buf, 0x7f50, kvm_read_cr3(vcpu));
-	PUT_SMSTATE(u64, buf, 0x7f48, kvm_read_cr4(vcpu));
-
-	PUT_SMSTATE(u32, buf, 0x7f00, vcpu->arch.smbase);
-
-	/* revision id */
-	PUT_SMSTATE(u32, buf, 0x7efc, 0x00020064);
-
-	PUT_SMSTATE(u64, buf, 0x7ed0, vcpu->arch.efer);
-
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_TR);
-	PUT_SMSTATE(u16, buf, 0x7e90, seg.selector);
-	PUT_SMSTATE(u16, buf, 0x7e92, enter_smm_get_segment_flags(&seg) >> 8);
-	PUT_SMSTATE(u32, buf, 0x7e94, seg.limit);
-	PUT_SMSTATE(u64, buf, 0x7e98, seg.base);
-
-	static_call(kvm_x86_get_idt)(vcpu, &dt);
-	PUT_SMSTATE(u32, buf, 0x7e84, dt.size);
-	PUT_SMSTATE(u64, buf, 0x7e88, dt.address);
-
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
-	PUT_SMSTATE(u16, buf, 0x7e70, seg.selector);
-	PUT_SMSTATE(u16, buf, 0x7e72, enter_smm_get_segment_flags(&seg) >> 8);
-	PUT_SMSTATE(u32, buf, 0x7e74, seg.limit);
-	PUT_SMSTATE(u64, buf, 0x7e78, seg.base);
-
-	static_call(kvm_x86_get_gdt)(vcpu, &dt);
-	PUT_SMSTATE(u32, buf, 0x7e64, dt.size);
-	PUT_SMSTATE(u64, buf, 0x7e68, dt.address);
-
-	for (i = 0; i < 6; i++)
-		enter_smm_save_seg_64(vcpu, buf, i);
-}
-#endif
-
-static void enter_smm(struct kvm_vcpu *vcpu)
-{
-	struct kvm_segment cs, ds;
-	struct desc_ptr dt;
-	unsigned long cr0;
-	char buf[512];
-
-	memset(buf, 0, 512);
-#ifdef CONFIG_X86_64
-	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		enter_smm_save_state_64(vcpu, buf);
-	else
-#endif
-		enter_smm_save_state_32(vcpu, buf);
-
-	/*
-	 * Give enter_smm() a chance to make ISA-specific changes to the vCPU
-	 * state (e.g. leave guest mode) after we've saved the state into the
-	 * SMM state-save area.
-	 */
-	static_call(kvm_x86_enter_smm)(vcpu, buf);
-
-	kvm_smm_changed(vcpu, true);
-	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf));
-
-	if (static_call(kvm_x86_get_nmi_mask)(vcpu))
-		vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;
-	else
-		static_call(kvm_x86_set_nmi_mask)(vcpu, true);
-
-	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
-	kvm_rip_write(vcpu, 0x8000);
-
-	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
-	static_call(kvm_x86_set_cr0)(vcpu, cr0);
-	vcpu->arch.cr0 = cr0;
-
-	static_call(kvm_x86_set_cr4)(vcpu, 0);
-
-	/* Undocumented: IDT limit is set to zero on entry to SMM.  */
-	dt.address = dt.size = 0;
-	static_call(kvm_x86_set_idt)(vcpu, &dt);
-
-	kvm_set_dr(vcpu, 7, DR7_FIXED_1);
-
-	cs.selector = (vcpu->arch.smbase >> 4) & 0xffff;
-	cs.base = vcpu->arch.smbase;
-
-	ds.selector = 0;
-	ds.base = 0;
-
-	cs.limit    = ds.limit = 0xffffffff;
-	cs.type     = ds.type = 0x3;
-	cs.dpl      = ds.dpl = 0;
-	cs.db       = ds.db = 0;
-	cs.s        = ds.s = 1;
-	cs.l        = ds.l = 0;
-	cs.g        = ds.g = 1;
-	cs.avl      = ds.avl = 0;
-	cs.present  = ds.present = 1;
-	cs.unusable = ds.unusable = 0;
-	cs.padding  = ds.padding = 0;
-
-	kvm_set_segment(vcpu, &cs, VCPU_SREG_CS);
-	kvm_set_segment(vcpu, &ds, VCPU_SREG_DS);
-	kvm_set_segment(vcpu, &ds, VCPU_SREG_ES);
-	kvm_set_segment(vcpu, &ds, VCPU_SREG_FS);
-	kvm_set_segment(vcpu, &ds, VCPU_SREG_GS);
-	kvm_set_segment(vcpu, &ds, VCPU_SREG_SS);
-
-#ifdef CONFIG_X86_64
-	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		static_call(kvm_x86_set_efer)(vcpu, 0);
-#endif
-
-	kvm_update_cpuid_runtime(vcpu);
-	kvm_mmu_reset_context(vcpu);
-}
-
 void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
 				       unsigned long *vcpu_bitmap)
 {
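The enter_smm() code moved above hard-codes the standard SMRAM
geometry; as a quick worked example of the arithmetic, with an
arbitrary example SMBASE of 0x30000 (the value itself is illustrative):

	u32 smbase = 0x30000;			/* example only */

	rip     = 0x8000;			/* fixed SMM entry offset          */
	cs_base = smbase;			/* -> first instruction at 0x38000 */
	cs_sel  = (smbase >> 4) & 0xffff;	/* = 0x3000                        */

	/* 512-byte state-save image written by kvm_vcpu_write_guest() */
	save_area = smbase + 0xfe00;		/* 0x3fe00 .. 0x3ffff              */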
From patchwork Tue Oct 25 12:42:03 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 10743
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH v4 03/23] KVM: x86: move SMM exit to a new file
Date: Tue, 25 Oct 2022 15:42:03 +0300
Message-Id: <20221025124223.227577-4-mlevitsk@redhat.com>
In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com>
References: <20221025124223.227577-1-mlevitsk@redhat.com>

From: Paolo Bonzini

Some users of KVM implement the UEFI variable store through a paravirtual
device that does not require the "SMM lockbox" component of edk2, and
would like to compile out system management mode.  In preparation for
that, move the SMM exit code out of emulate.c and into a new file.

The code is still written as a series of invocations of the emulator
callbacks, but the two exiting_smm and leave_smm callbacks are merged
into one, and all the code from em_rsm is now part of the callback.
This removes all knowledge of the format of the SMM save state area
from the emulator.  Further patches will clean up the code and invoke
KVM's own functions to access control registers, descriptor caches, etc.
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/emulate.c     | 356 +------------------------------------
 arch/x86/kvm/kvm_emulate.h |  34 +++-
 arch/x86/kvm/smm.c         | 316 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/smm.h         |   1 +
 arch/x86/kvm/x86.c         |  14 --
 5 files changed, 351 insertions(+), 370 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index de09660a61e522..671f7e5871ff70 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -30,7 +30,6 @@
 #include "tss.h"
 #include "mmu.h"
 #include "pmu.h"
-#include "smm.h"

 /*
  * Operand types
@@ -243,37 +242,6 @@ enum x86_transfer_type {
 	X86_TRANSFER_TASK_SWITCH,
 };

-static ulong reg_read(struct x86_emulate_ctxt *ctxt, unsigned nr)
-{
-	if (KVM_EMULATOR_BUG_ON(nr >= NR_EMULATOR_GPRS, ctxt))
-		nr &= NR_EMULATOR_GPRS - 1;
-
-	if (!(ctxt->regs_valid & (1 << nr))) {
-		ctxt->regs_valid |= 1 << nr;
-		ctxt->_regs[nr] = ctxt->ops->read_gpr(ctxt, nr);
-	}
-	return ctxt->_regs[nr];
-}
-
-static ulong *reg_write(struct x86_emulate_ctxt *ctxt, unsigned nr)
-{
-	if (KVM_EMULATOR_BUG_ON(nr >= NR_EMULATOR_GPRS, ctxt))
-		nr &= NR_EMULATOR_GPRS - 1;
-
-	BUILD_BUG_ON(sizeof(ctxt->regs_dirty) * BITS_PER_BYTE < NR_EMULATOR_GPRS);
-	BUILD_BUG_ON(sizeof(ctxt->regs_valid) * BITS_PER_BYTE < NR_EMULATOR_GPRS);
-
-	ctxt->regs_valid |= 1 << nr;
-	ctxt->regs_dirty |= 1 << nr;
-	return &ctxt->_regs[nr];
-}
-
-static ulong *reg_rmw(struct x86_emulate_ctxt *ctxt, unsigned nr)
-{
-	reg_read(ctxt, nr);
-	return reg_write(ctxt, nr);
-}
-
 static void writeback_registers(struct x86_emulate_ctxt *ctxt)
 {
 	unsigned long dirty = ctxt->regs_dirty;
@@ -2310,334 +2278,14 @@ static int em_lseg(struct x86_emulate_ctxt *ctxt)
 	return rc;
 }

-static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
-{
-#ifdef CONFIG_X86_64
-	return ctxt->ops->guest_has_long_mode(ctxt);
-#else
-	return false;
-#endif
-}
-
-static void rsm_set_desc_flags(struct desc_struct *desc, u32 flags)
-{
-	desc->g    = (flags >> 23) & 1;
-	desc->d    = (flags >> 22) & 1;
-	desc->l    = (flags >> 21) & 1;
-	desc->avl  = (flags >> 20) & 1;
-	desc->p    = (flags >> 15) & 1;
-	desc->dpl  = (flags >> 13) & 3;
-	desc->s    = (flags >> 12) & 1;
-	desc->type = (flags >>  8) & 15;
-}
-
-static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const char *smstate,
-			   int n)
-{
-	struct desc_struct desc;
-	int offset;
-	u16 selector;
-
-	selector = GET_SMSTATE(u32, smstate, 0x7fa8 + n * 4);
-
-	if (n < 3)
-		offset = 0x7f84 + n * 12;
-	else
-		offset = 0x7f2c + (n - 3) * 12;
-
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, offset + 8));
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, offset + 4));
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, offset));
-	ctxt->ops->set_segment(ctxt, selector, &desc, 0, n);
-	return X86EMUL_CONTINUE;
-}
-
-#ifdef CONFIG_X86_64
-static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, const char *smstate,
-			   int n)
-{
-	struct desc_struct desc;
-	int offset;
-	u16 selector;
-	u32 base3;
-
-	offset = 0x7e00 + n * 16;
-
-	selector = GET_SMSTATE(u16, smstate, offset);
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u16, smstate, offset + 2) << 8);
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, offset + 4));
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, offset + 8));
-	base3 = GET_SMSTATE(u32, smstate, offset + 12);
-
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
-	return X86EMUL_CONTINUE;
-}
-#endif
-
-static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
-				    u64 cr0, u64 cr3, u64 cr4)
-{
-	int bad;
-	u64 pcid;
-
-	/* In order to later set CR4.PCIDE, CR3[11:0] must be zero.  */
-	pcid = 0;
-	if (cr4 & X86_CR4_PCIDE) {
-		pcid = cr3 & 0xfff;
-		cr3 &= ~0xfff;
-	}
-
-	bad = ctxt->ops->set_cr(ctxt, 3, cr3);
-	if (bad)
-		return X86EMUL_UNHANDLEABLE;
-
-	/*
-	 * First enable PAE, long mode needs it before CR0.PG = 1 is set.
-	 * Then enable protected mode.	However, PCID cannot be enabled
-	 * if EFER.LMA=0, so set it separately.
-	 */
-	bad = ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
-	if (bad)
-		return X86EMUL_UNHANDLEABLE;
-
-	bad = ctxt->ops->set_cr(ctxt, 0, cr0);
-	if (bad)
-		return X86EMUL_UNHANDLEABLE;
-
-	if (cr4 & X86_CR4_PCIDE) {
-		bad = ctxt->ops->set_cr(ctxt, 4, cr4);
-		if (bad)
-			return X86EMUL_UNHANDLEABLE;
-		if (pcid) {
-			bad = ctxt->ops->set_cr(ctxt, 3, cr3 | pcid);
-			if (bad)
-				return X86EMUL_UNHANDLEABLE;
-		}
-
-	}
-
-	return X86EMUL_CONTINUE;
-}
-
-static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
-			     const char *smstate)
-{
-	struct desc_struct desc;
-	struct desc_ptr dt;
-	u16 selector;
-	u32 val, cr0, cr3, cr4;
-	int i;
-
-	cr0 = GET_SMSTATE(u32, smstate, 0x7ffc);
-	cr3 = GET_SMSTATE(u32, smstate, 0x7ff8);
-	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED;
-	ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0);
-
-	for (i = 0; i < NR_EMULATOR_GPRS; i++)
-		*reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4);
-
-	val = GET_SMSTATE(u32, smstate, 0x7fcc);
-
-	if (ctxt->ops->set_dr(ctxt, 6, val))
-		return X86EMUL_UNHANDLEABLE;
-
-	val = GET_SMSTATE(u32, smstate, 0x7fc8);
-
-	if (ctxt->ops->set_dr(ctxt, 7, val))
-		return X86EMUL_UNHANDLEABLE;
-
-	selector = GET_SMSTATE(u32, smstate, 0x7fc4);
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, 0x7f64));
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, 0x7f60));
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f5c));
-	ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_TR);
-
-	selector = GET_SMSTATE(u32, smstate, 0x7fc0);
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, 0x7f80));
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, 0x7f7c));
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f78));
-	ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_LDTR);
-
-	dt.address = GET_SMSTATE(u32, smstate, 0x7f74);
-	dt.size = GET_SMSTATE(u32, smstate, 0x7f70);
-	ctxt->ops->set_gdt(ctxt, &dt);
-
-	dt.address = GET_SMSTATE(u32, smstate, 0x7f58);
-	dt.size = GET_SMSTATE(u32, smstate, 0x7f54);
-	ctxt->ops->set_idt(ctxt, &dt);
-
-	for (i = 0; i < 6; i++) {
-		int r = rsm_load_seg_32(ctxt, smstate, i);
-		if (r != X86EMUL_CONTINUE)
-			return r;
-	}
-
-	cr4 = GET_SMSTATE(u32, smstate, 0x7f14);
-
-	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8));
-
-	return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
-}
-
-#ifdef CONFIG_X86_64
-static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
-			     const char *smstate)
-{
-	struct desc_struct desc;
-	struct desc_ptr dt;
-	u64 val, cr0, cr3, cr4;
-	u32 base3;
-	u16 selector;
-	int i, r;
-
-	for (i = 0; i < NR_EMULATOR_GPRS; i++)
-		*reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8);
-
-	ctxt->_eip   = GET_SMSTATE(u64, smstate, 0x7f78);
-	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7f70) | X86_EFLAGS_FIXED;
-
-	val = GET_SMSTATE(u64, smstate, 0x7f68);
-
-	if (ctxt->ops->set_dr(ctxt, 6, val))
-		return X86EMUL_UNHANDLEABLE;
-
-	val = GET_SMSTATE(u64, smstate, 0x7f60);
-
-	if (ctxt->ops->set_dr(ctxt, 7, val))
-		return X86EMUL_UNHANDLEABLE;
-
-	cr0 = GET_SMSTATE(u64, smstate, 0x7f58);
-	cr3 = GET_SMSTATE(u64, smstate, 0x7f50);
-	cr4 = GET_SMSTATE(u64, smstate, 0x7f48);
-	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7f00));
-	val = GET_SMSTATE(u64, smstate, 0x7ed0);
-
-	if (ctxt->ops->set_msr(ctxt, MSR_EFER, val & ~EFER_LMA))
-		return X86EMUL_UNHANDLEABLE;
-
-	selector = GET_SMSTATE(u32, smstate, 0x7e90);
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e92) << 8);
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, 0x7e94));
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, 0x7e98));
-	base3 = GET_SMSTATE(u32, smstate, 0x7e9c);
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_TR);
-
-	dt.size = GET_SMSTATE(u32, smstate, 0x7e84);
-	dt.address = GET_SMSTATE(u64, smstate, 0x7e88);
-	ctxt->ops->set_idt(ctxt, &dt);
-
-	selector = GET_SMSTATE(u32, smstate, 0x7e70);
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e72) << 8);
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, 0x7e74));
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, 0x7e78));
-	base3 = GET_SMSTATE(u32, smstate, 0x7e7c);
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_LDTR);
-
-	dt.size = GET_SMSTATE(u32, smstate, 0x7e64);
-	dt.address = GET_SMSTATE(u64, smstate, 0x7e68);
-	ctxt->ops->set_gdt(ctxt, &dt);
-
-	r = rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
-	if (r != X86EMUL_CONTINUE)
-		return r;
-
-	for (i = 0; i < 6; i++) {
-		r = rsm_load_seg_64(ctxt, smstate, i);
-		if (r != X86EMUL_CONTINUE)
-			return r;
-	}
-
-	return X86EMUL_CONTINUE;
-}
-#endif
-
 static int em_rsm(struct x86_emulate_ctxt *ctxt)
 {
-	unsigned long cr0, cr4, efer;
-	char buf[512];
-	u64 smbase;
-	int ret;
-
 	if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_MASK) == 0)
 		return emulate_ud(ctxt);

-	smbase = ctxt->ops->get_smbase(ctxt);
-
-	ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00, buf, sizeof(buf));
-	if (ret != X86EMUL_CONTINUE)
-		return X86EMUL_UNHANDLEABLE;
-
-	if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_INSIDE_NMI_MASK) == 0)
-		ctxt->ops->set_nmi_mask(ctxt, false);
-
-	ctxt->ops->exiting_smm(ctxt);
-
-	/*
-	 * Get back to real mode, to prepare a safe state in which to load
-	 * CR0/CR3/CR4/EFER.  It's all a bit more complicated if the vCPU
-	 * supports long mode.
-	 */
-	if (emulator_has_longmode(ctxt)) {
-		struct desc_struct cs_desc;
-
-		/* Zero CR4.PCIDE before CR0.PG.  */
-		cr4 = ctxt->ops->get_cr(ctxt, 4);
-		if (cr4 & X86_CR4_PCIDE)
-			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
-
-		/* A 32-bit code segment is required to clear EFER.LMA.  */
-		memset(&cs_desc, 0, sizeof(cs_desc));
-		cs_desc.type = 0xb;
-		cs_desc.s = cs_desc.g = cs_desc.p = 1;
-		ctxt->ops->set_segment(ctxt, 0, &cs_desc, 0, VCPU_SREG_CS);
-	}
-
-	/* For the 64-bit case, this will clear EFER.LMA.  */
-	cr0 = ctxt->ops->get_cr(ctxt, 0);
-	if (cr0 & X86_CR0_PE)
-		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
-
-	if (emulator_has_longmode(ctxt)) {
-		/* Clear CR4.PAE before clearing EFER.LME.  */
-		cr4 = ctxt->ops->get_cr(ctxt, 4);
-		if (cr4 & X86_CR4_PAE)
-			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);
-
-		/* And finally go back to 32-bit mode.  */
-		efer = 0;
-		ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
-	}
-
-	/*
-	 * Give leave_smm() a chance to make ISA-specific changes to the vCPU
-	 * state (e.g. enter guest mode) before loading state from the SMM
-	 * state-save area.
-	 */
-	if (ctxt->ops->leave_smm(ctxt, buf))
-		goto emulate_shutdown;
-
-#ifdef CONFIG_X86_64
-	if (emulator_has_longmode(ctxt))
-		ret = rsm_load_state_64(ctxt, buf);
-	else
-#endif
-		ret = rsm_load_state_32(ctxt, buf);
-
-	if (ret != X86EMUL_CONTINUE)
-		goto emulate_shutdown;
-
-	/*
-	 * Note, the ctxt->ops callbacks are responsible for handling side
-	 * effects when writing MSRs and CRs, e.g. MMU context resets, CPUID
-	 * runtime updates, etc...  If that changes, e.g. this flow is moved
-	 * out of the emulator to make it look more like enter_smm(), then
-	 * those side effects need to be explicitly handled for both success
-	 * and shutdown.
-	 */
-	return X86EMUL_CONTINUE;
+	if (ctxt->ops->leave_smm(ctxt))
+		ctxt->ops->triple_fault(ctxt);

-emulate_shutdown:
-	ctxt->ops->triple_fault(ctxt);
 	return X86EMUL_CONTINUE;
 }

diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index 89246446d6aa9d..d7afbc448dd245 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -234,8 +234,7 @@ struct x86_emulate_ops {
 	void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked);

 	unsigned (*get_hflags)(struct x86_emulate_ctxt *ctxt);
-	void (*exiting_smm)(struct x86_emulate_ctxt *ctxt);
-	int (*leave_smm)(struct x86_emulate_ctxt *ctxt, const char *smstate);
+	int (*leave_smm)(struct x86_emulate_ctxt *ctxt);
 	void (*triple_fault)(struct x86_emulate_ctxt *ctxt);
 	int (*set_xcr)(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr);
 };
@@ -526,4 +525,35 @@ void emulator_invalidate_register_cache(struct x86_emulate_ctxt *ctxt);
 void emulator_writeback_register_cache(struct x86_emulate_ctxt *ctxt);
 bool emulator_can_use_gpa(struct x86_emulate_ctxt *ctxt);

+static inline ulong reg_read(struct x86_emulate_ctxt *ctxt, unsigned nr)
+{
+	if (KVM_EMULATOR_BUG_ON(nr >= NR_EMULATOR_GPRS, ctxt))
+		nr &= NR_EMULATOR_GPRS - 1;
+
+	if (!(ctxt->regs_valid & (1 << nr))) {
+		ctxt->regs_valid |= 1 << nr;
+		ctxt->_regs[nr] = ctxt->ops->read_gpr(ctxt, nr);
+	}
+	return ctxt->_regs[nr];
+}
+
+static inline ulong *reg_write(struct x86_emulate_ctxt *ctxt, unsigned nr)
+{
+	if (KVM_EMULATOR_BUG_ON(nr >= NR_EMULATOR_GPRS, ctxt))
+		nr &= NR_EMULATOR_GPRS - 1;
+
+	BUILD_BUG_ON(sizeof(ctxt->regs_dirty) * BITS_PER_BYTE < NR_EMULATOR_GPRS);
+	BUILD_BUG_ON(sizeof(ctxt->regs_valid) * BITS_PER_BYTE < NR_EMULATOR_GPRS);
+
+	ctxt->regs_valid |= 1 << nr;
+	ctxt->regs_dirty |= 1 << nr;
+	return &ctxt->_regs[nr];
+}
+
+static inline ulong *reg_rmw(struct x86_emulate_ctxt *ctxt, unsigned nr)
+{
+	reg_read(ctxt, nr);
+	return reg_write(ctxt, nr);
+}
+
 #endif /* _ASM_X86_KVM_X86_EMULATE_H */
diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
index 26a6859e421fe1..773e07b6397df0 100644
--- a/arch/x86/kvm/smm.c
+++ b/arch/x86/kvm/smm.c
@@ -270,3 +270,319 @@ void enter_smm(struct kvm_vcpu *vcpu)
 	kvm_update_cpuid_runtime(vcpu);
 	kvm_mmu_reset_context(vcpu);
 }
+
+static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
+{
+#ifdef CONFIG_X86_64
+	return ctxt->ops->guest_has_long_mode(ctxt);
+#else
+	return false;
+#endif
+}
+
+static void rsm_set_desc_flags(struct desc_struct *desc, u32 flags)
+{
+	desc->g    = (flags >> 23) & 1;
+	desc->d    = (flags >> 22) & 1;
+	desc->l    = (flags >> 21) & 1;
+	desc->avl  = (flags >> 20) & 1;
+	desc->p    = (flags >> 15) & 1;
+	desc->dpl  = (flags >> 13) & 3;
+	desc->s    = (flags >> 12) & 1;
+	desc->type = (flags >>  8) & 15;
+}
+
+static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const char *smstate,
+			   int n)
+{
+	struct desc_struct desc;
+	int offset;
+	u16 selector;
+
+	selector =
GET_SMSTATE(u32, smstate, 0x7fa8 + n * 4); + + if (n < 3) + offset = 0x7f84 + n * 12; + else + offset = 0x7f2c + (n - 3) * 12; + + set_desc_base(&desc, GET_SMSTATE(u32, smstate, offset + 8)); + set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4)); + rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, offset)); + ctxt->ops->set_segment(ctxt, selector, &desc, 0, n); + return X86EMUL_CONTINUE; +} + +#ifdef CONFIG_X86_64 +static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, const char *smstate, + int n) +{ + struct desc_struct desc; + int offset; + u16 selector; + u32 base3; + + offset = 0x7e00 + n * 16; + + selector = GET_SMSTATE(u16, smstate, offset); + rsm_set_desc_flags(&desc, GET_SMSTATE(u16, smstate, offset + 2) << 8); + set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4)); + set_desc_base(&desc, GET_SMSTATE(u32, smstate, offset + 8)); + base3 = GET_SMSTATE(u32, smstate, offset + 12); + + ctxt->ops->set_segment(ctxt, selector, &desc, base3, n); + return X86EMUL_CONTINUE; +} +#endif + +static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt, + u64 cr0, u64 cr3, u64 cr4) +{ + int bad; + u64 pcid; + + /* In order to later set CR4.PCIDE, CR3[11:0] must be zero. */ + pcid = 0; + if (cr4 & X86_CR4_PCIDE) { + pcid = cr3 & 0xfff; + cr3 &= ~0xfff; + } + + bad = ctxt->ops->set_cr(ctxt, 3, cr3); + if (bad) + return X86EMUL_UNHANDLEABLE; + + /* + * First enable PAE, long mode needs it before CR0.PG = 1 is set. + * Then enable protected mode. However, PCID cannot be enabled + * if EFER.LMA=0, so set it separately. + */ + bad = ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE); + if (bad) + return X86EMUL_UNHANDLEABLE; + + bad = ctxt->ops->set_cr(ctxt, 0, cr0); + if (bad) + return X86EMUL_UNHANDLEABLE; + + if (cr4 & X86_CR4_PCIDE) { + bad = ctxt->ops->set_cr(ctxt, 4, cr4); + if (bad) + return X86EMUL_UNHANDLEABLE; + if (pcid) { + bad = ctxt->ops->set_cr(ctxt, 3, cr3 | pcid); + if (bad) + return X86EMUL_UNHANDLEABLE; + } + + } + + return X86EMUL_CONTINUE; +} + +static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, + const char *smstate) +{ + struct desc_struct desc; + struct desc_ptr dt; + u16 selector; + u32 val, cr0, cr3, cr4; + int i; + + cr0 = GET_SMSTATE(u32, smstate, 0x7ffc); + cr3 = GET_SMSTATE(u32, smstate, 0x7ff8); + ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED; + ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0); + + for (i = 0; i < NR_EMULATOR_GPRS; i++) + *reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4); + + val = GET_SMSTATE(u32, smstate, 0x7fcc); + + if (ctxt->ops->set_dr(ctxt, 6, val)) + return X86EMUL_UNHANDLEABLE; + + val = GET_SMSTATE(u32, smstate, 0x7fc8); + + if (ctxt->ops->set_dr(ctxt, 7, val)) + return X86EMUL_UNHANDLEABLE; + + selector = GET_SMSTATE(u32, smstate, 0x7fc4); + set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7f64)); + set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f60)); + rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f5c)); + ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_TR); + + selector = GET_SMSTATE(u32, smstate, 0x7fc0); + set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7f80)); + set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f7c)); + rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f78)); + ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_LDTR); + + dt.address = GET_SMSTATE(u32, smstate, 0x7f74); + dt.size = GET_SMSTATE(u32, smstate, 0x7f70); + ctxt->ops->set_gdt(ctxt, &dt); + + dt.address = GET_SMSTATE(u32, smstate, 0x7f58); + dt.size = 
GET_SMSTATE(u32, smstate, 0x7f54); + ctxt->ops->set_idt(ctxt, &dt); + + for (i = 0; i < 6; i++) { + int r = rsm_load_seg_32(ctxt, smstate, i); + if (r != X86EMUL_CONTINUE) + return r; + } + + cr4 = GET_SMSTATE(u32, smstate, 0x7f14); + + ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8)); + + return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4); +} + +#ifdef CONFIG_X86_64 +static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, + const char *smstate) +{ + struct desc_struct desc; + struct desc_ptr dt; + u64 val, cr0, cr3, cr4; + u32 base3; + u16 selector; + int i, r; + + for (i = 0; i < NR_EMULATOR_GPRS; i++) + *reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8); + + ctxt->_eip = GET_SMSTATE(u64, smstate, 0x7f78); + ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7f70) | X86_EFLAGS_FIXED; + + val = GET_SMSTATE(u64, smstate, 0x7f68); + + if (ctxt->ops->set_dr(ctxt, 6, val)) + return X86EMUL_UNHANDLEABLE; + + val = GET_SMSTATE(u64, smstate, 0x7f60); + + if (ctxt->ops->set_dr(ctxt, 7, val)) + return X86EMUL_UNHANDLEABLE; + + cr0 = GET_SMSTATE(u64, smstate, 0x7f58); + cr3 = GET_SMSTATE(u64, smstate, 0x7f50); + cr4 = GET_SMSTATE(u64, smstate, 0x7f48); + ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7f00)); + val = GET_SMSTATE(u64, smstate, 0x7ed0); + + if (ctxt->ops->set_msr(ctxt, MSR_EFER, val & ~EFER_LMA)) + return X86EMUL_UNHANDLEABLE; + + selector = GET_SMSTATE(u32, smstate, 0x7e90); + rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e92) << 8); + set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7e94)); + set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7e98)); + base3 = GET_SMSTATE(u32, smstate, 0x7e9c); + ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_TR); + + dt.size = GET_SMSTATE(u32, smstate, 0x7e84); + dt.address = GET_SMSTATE(u64, smstate, 0x7e88); + ctxt->ops->set_idt(ctxt, &dt); + + selector = GET_SMSTATE(u32, smstate, 0x7e70); + rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e72) << 8); + set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7e74)); + set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7e78)); + base3 = GET_SMSTATE(u32, smstate, 0x7e7c); + ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_LDTR); + + dt.size = GET_SMSTATE(u32, smstate, 0x7e64); + dt.address = GET_SMSTATE(u64, smstate, 0x7e68); + ctxt->ops->set_gdt(ctxt, &dt); + + r = rsm_enter_protected_mode(ctxt, cr0, cr3, cr4); + if (r != X86EMUL_CONTINUE) + return r; + + for (i = 0; i < 6; i++) { + r = rsm_load_seg_64(ctxt, smstate, i); + if (r != X86EMUL_CONTINUE) + return r; + } + + return X86EMUL_CONTINUE; +} +#endif + +int emulator_leave_smm(struct x86_emulate_ctxt *ctxt) +{ + struct kvm_vcpu *vcpu = ctxt->vcpu; + unsigned long cr0, cr4, efer; + char buf[512]; + u64 smbase; + int ret; + + smbase = ctxt->ops->get_smbase(ctxt); + + ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00, buf, sizeof(buf)); + if (ret != X86EMUL_CONTINUE) + return X86EMUL_UNHANDLEABLE; + + if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_INSIDE_NMI_MASK) == 0) + ctxt->ops->set_nmi_mask(ctxt, false); + + kvm_smm_changed(vcpu, false); + + /* + * Get back to real mode, to prepare a safe state in which to load + * CR0/CR3/CR4/EFER. It's all a bit more complicated if the vCPU + * supports long mode. + * + * The ctxt->ops callbacks will handle all side effects when writing + * writing MSRs and CRs, e.g. MMU context resets, CPUID + * runtime updates, etc. + */ + if (emulator_has_longmode(ctxt)) { + struct desc_struct cs_desc; + + /* Zero CR4.PCIDE before CR0.PG. 
*/ + cr4 = ctxt->ops->get_cr(ctxt, 4); + if (cr4 & X86_CR4_PCIDE) + ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE); + + /* A 32-bit code segment is required to clear EFER.LMA. */ + memset(&cs_desc, 0, sizeof(cs_desc)); + cs_desc.type = 0xb; + cs_desc.s = cs_desc.g = cs_desc.p = 1; + ctxt->ops->set_segment(ctxt, 0, &cs_desc, 0, VCPU_SREG_CS); + } + + /* For the 64-bit case, this will clear EFER.LMA. */ + cr0 = ctxt->ops->get_cr(ctxt, 0); + if (cr0 & X86_CR0_PE) + ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE)); + + if (emulator_has_longmode(ctxt)) { + /* Clear CR4.PAE before clearing EFER.LME. */ + cr4 = ctxt->ops->get_cr(ctxt, 4); + if (cr4 & X86_CR4_PAE) + ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE); + + /* And finally go back to 32-bit mode. */ + efer = 0; + ctxt->ops->set_msr(ctxt, MSR_EFER, efer); + } + + /* + * Give leave_smm() a chance to make ISA-specific changes to the vCPU + * state (e.g. enter guest mode) before loading state from the SMM + * state-save area. + */ + if (static_call(kvm_x86_leave_smm)(vcpu, buf)) + return X86EMUL_UNHANDLEABLE; + +#ifdef CONFIG_X86_64 + if (emulator_has_longmode(ctxt)) + return rsm_load_state_64(ctxt, buf); + else +#endif + return rsm_load_state_32(ctxt, buf); +} diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h index aacc6dac2c99a1..b0602a92e511e1 100644 --- a/arch/x86/kvm/smm.h +++ b/arch/x86/kvm/smm.h @@ -21,6 +21,7 @@ static inline bool is_smm(struct kvm_vcpu *vcpu) void kvm_smm_changed(struct kvm_vcpu *vcpu, bool in_smm); void enter_smm(struct kvm_vcpu *vcpu); +int emulator_leave_smm(struct x86_emulate_ctxt *ctxt); void process_smi(struct kvm_vcpu *vcpu); #endif diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 1f69f54d1dbc82..a1e524c2c39df2 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8108,19 +8108,6 @@ static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt) return emul_to_vcpu(ctxt)->arch.hflags; } -static void emulator_exiting_smm(struct x86_emulate_ctxt *ctxt) -{ - struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); - - kvm_smm_changed(vcpu, false); -} - -static int emulator_leave_smm(struct x86_emulate_ctxt *ctxt, - const char *smstate) -{ - return static_call(kvm_x86_leave_smm)(emul_to_vcpu(ctxt), smstate); -} - static void emulator_triple_fault(struct x86_emulate_ctxt *ctxt) { kvm_make_request(KVM_REQ_TRIPLE_FAULT, emul_to_vcpu(ctxt)); @@ -8184,7 +8171,6 @@ static const struct x86_emulate_ops emulate_ops = { .guest_has_rdpid = emulator_guest_has_rdpid, .set_nmi_mask = emulator_set_nmi_mask, .get_hflags = emulator_get_hflags, - .exiting_smm = emulator_exiting_smm, .leave_smm = emulator_leave_smm, .triple_fault = emulator_triple_fault, .set_xcr = emulator_set_xcr,
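For readers tracking the GET_SMSTATE() reads above: emulator_leave_smm() copies only the top 512 bytes of SMRAM (smbase + 0xfe00) into buf, and the state-save layout addresses that window as offsets 0x7e00..0x7fff. A minimal sketch of the accessor, assuming it mirrors the PUT_SMSTATE() definition that appears in a later smm.h hunk (the -0x7e00 bias maps state-save offsets onto buf[0..511]):

	/* Assumed counterpart of PUT_SMSTATE() in smm.h; a sketch, not a
	 * quote from this series. */
	#define GET_SMSTATE(type, buf, offset) \
		(*(type *)((buf) + (offset) - 0x7e00))

	/* Example: rsm_load_state_64() recovers RIP from state-save offset
	 * 0x7f78, i.e. byte 0x178 of the 512-byte buffer. */
	u64 rip = GET_SMSTATE(u64, buf, 0x7f78);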
From patchwork Tue Oct 25 12:42:04 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 10745
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Yang Zhong, linux-kselftest@vger.kernel.org, Kees Cook, Borislav Petkov, Guang Zeng, Wanpeng Li, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Maxim Levitsky, Joerg Roedel, linux-kernel@vger.kernel.org, Wei Wang, Jim Mattson, Dave Hansen, Sean Christopherson, Vitaly Kuznetsov, x86@kernel.org, Shuah Khan
Subject: [PATCH v4 04/23] KVM: x86: do not go through ctxt->ops when emulating rsm
Date: Tue, 25 Oct 2022 15:42:04 +0300
Message-Id: <20221025124223.227577-5-mlevitsk@redhat.com>
In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com>
References: <20221025124223.227577-1-mlevitsk@redhat.com>

From: Paolo Bonzini Now that RSM is implemented in a single emulator callback, there is no point in going through other callbacks for the sake of modifying processor state. Just invoke KVM's own internal functions directly, and remove the callbacks that were only used by em_rsm; the only substantial difference is in the handling of the segment registers and descriptor cache, which have to be parsed into a struct kvm_segment instead of a struct desc_struct. This also fixes a bug where emulator_set_segment was shifting the limit left by 12 if the G bit is set, but the limit had not been shifted right upon entry to SMM. The emulator context is still used to restore EIP and the general purpose registers. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/kvm_emulate.h | 13 --- arch/x86/kvm/smm.c | 177 +++++++++++++++++-------------------- arch/x86/kvm/x86.c | 33 ------- 3 files changed, 81 insertions(+), 142 deletions(-) diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h index d7afbc448dd245..84b1f266146305 100644 --- a/arch/x86/kvm/kvm_emulate.h +++ b/arch/x86/kvm/kvm_emulate.h @@ -116,16 +116,6 @@ struct x86_emulate_ops { unsigned int bytes, struct x86_exception *fault, bool system); - /* - * read_phys: Read bytes of standard (non-emulated/special) memory. - * Used for descriptor reading. - * @addr: [IN ] Physical address from which to read. - * @val: [OUT] Value read from memory. - * @bytes: [IN ] Number of bytes to read from memory. - */ - int (*read_phys)(struct x86_emulate_ctxt *ctxt, unsigned long addr, - void *val, unsigned int bytes); - /* * write_std: Write bytes of standard (non-emulated/special) memory. * Used for descriptor writing.
@@ -209,11 +199,8 @@ struct x86_emulate_ops { int (*cpl)(struct x86_emulate_ctxt *ctxt); void (*get_dr)(struct x86_emulate_ctxt *ctxt, int dr, ulong *dest); int (*set_dr)(struct x86_emulate_ctxt *ctxt, int dr, ulong value); - u64 (*get_smbase)(struct x86_emulate_ctxt *ctxt); - void (*set_smbase)(struct x86_emulate_ctxt *ctxt, u64 smbase); int (*set_msr_with_filter)(struct x86_emulate_ctxt *ctxt, u32 msr_index, u64 data); int (*get_msr_with_filter)(struct x86_emulate_ctxt *ctxt, u32 msr_index, u64 *pdata); - int (*set_msr)(struct x86_emulate_ctxt *ctxt, u32 msr_index, u64 data); int (*get_msr)(struct x86_emulate_ctxt *ctxt, u32 msr_index, u64 *pdata); int (*check_pmc)(struct x86_emulate_ctxt *ctxt, u32 pmc); int (*read_pmc)(struct x86_emulate_ctxt *ctxt, u32 pmc, u64 *pdata); diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c index 773e07b6397df0..41ca128478fcd4 100644 --- a/arch/x86/kvm/smm.c +++ b/arch/x86/kvm/smm.c @@ -271,71 +271,59 @@ void enter_smm(struct kvm_vcpu *vcpu) kvm_mmu_reset_context(vcpu); } -static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt) -{ -#ifdef CONFIG_X86_64 - return ctxt->ops->guest_has_long_mode(ctxt); -#else - return false; -#endif -} - -static void rsm_set_desc_flags(struct desc_struct *desc, u32 flags) +static void rsm_set_desc_flags(struct kvm_segment *desc, u32 flags) { desc->g = (flags >> 23) & 1; - desc->d = (flags >> 22) & 1; + desc->db = (flags >> 22) & 1; desc->l = (flags >> 21) & 1; desc->avl = (flags >> 20) & 1; - desc->p = (flags >> 15) & 1; + desc->present = (flags >> 15) & 1; desc->dpl = (flags >> 13) & 3; desc->s = (flags >> 12) & 1; desc->type = (flags >> 8) & 15; + + desc->unusable = !desc->present; + desc->padding = 0; } -static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const char *smstate, +static int rsm_load_seg_32(struct kvm_vcpu *vcpu, const char *smstate, int n) { - struct desc_struct desc; + struct kvm_segment desc; int offset; - u16 selector; - - selector = GET_SMSTATE(u32, smstate, 0x7fa8 + n * 4); if (n < 3) offset = 0x7f84 + n * 12; else offset = 0x7f2c + (n - 3) * 12; - set_desc_base(&desc, GET_SMSTATE(u32, smstate, offset + 8)); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4)); + desc.selector = GET_SMSTATE(u32, smstate, 0x7fa8 + n * 4); + desc.base = GET_SMSTATE(u32, smstate, offset + 8); + desc.limit = GET_SMSTATE(u32, smstate, offset + 4); rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, offset)); - ctxt->ops->set_segment(ctxt, selector, &desc, 0, n); + kvm_set_segment(vcpu, &desc, n); return X86EMUL_CONTINUE; } #ifdef CONFIG_X86_64 -static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, const char *smstate, +static int rsm_load_seg_64(struct kvm_vcpu *vcpu, const char *smstate, int n) { - struct desc_struct desc; + struct kvm_segment desc; int offset; - u16 selector; - u32 base3; offset = 0x7e00 + n * 16; - selector = GET_SMSTATE(u16, smstate, offset); + desc.selector = GET_SMSTATE(u16, smstate, offset); rsm_set_desc_flags(&desc, GET_SMSTATE(u16, smstate, offset + 2) << 8); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4)); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, offset + 8)); - base3 = GET_SMSTATE(u32, smstate, offset + 12); - - ctxt->ops->set_segment(ctxt, selector, &desc, base3, n); + desc.limit = GET_SMSTATE(u32, smstate, offset + 4); + desc.base = GET_SMSTATE(u64, smstate, offset + 8); + kvm_set_segment(vcpu, &desc, n); return X86EMUL_CONTINUE; } #endif -static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt, +static int 
rsm_enter_protected_mode(struct kvm_vcpu *vcpu, u64 cr0, u64 cr3, u64 cr4) { int bad; @@ -348,7 +336,7 @@ static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt, cr3 &= ~0xfff; } - bad = ctxt->ops->set_cr(ctxt, 3, cr3); + bad = kvm_set_cr3(vcpu, cr3); if (bad) return X86EMUL_UNHANDLEABLE; @@ -357,20 +345,20 @@ static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt, * Then enable protected mode. However, PCID cannot be enabled * if EFER.LMA=0, so set it separately. */ - bad = ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE); + bad = kvm_set_cr4(vcpu, cr4 & ~X86_CR4_PCIDE); if (bad) return X86EMUL_UNHANDLEABLE; - bad = ctxt->ops->set_cr(ctxt, 0, cr0); + bad = kvm_set_cr0(vcpu, cr0); if (bad) return X86EMUL_UNHANDLEABLE; if (cr4 & X86_CR4_PCIDE) { - bad = ctxt->ops->set_cr(ctxt, 4, cr4); + bad = kvm_set_cr4(vcpu, cr4); if (bad) return X86EMUL_UNHANDLEABLE; if (pcid) { - bad = ctxt->ops->set_cr(ctxt, 3, cr3 | pcid); + bad = kvm_set_cr3(vcpu, cr3 | pcid); if (bad) return X86EMUL_UNHANDLEABLE; } @@ -383,9 +371,9 @@ static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt, static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, const char *smstate) { - struct desc_struct desc; + struct kvm_vcpu *vcpu = ctxt->vcpu; + struct kvm_segment desc; struct desc_ptr dt; - u16 selector; u32 val, cr0, cr3, cr4; int i; @@ -399,56 +387,55 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, val = GET_SMSTATE(u32, smstate, 0x7fcc); - if (ctxt->ops->set_dr(ctxt, 6, val)) + if (kvm_set_dr(vcpu, 6, val)) return X86EMUL_UNHANDLEABLE; val = GET_SMSTATE(u32, smstate, 0x7fc8); - if (ctxt->ops->set_dr(ctxt, 7, val)) + if (kvm_set_dr(vcpu, 7, val)) return X86EMUL_UNHANDLEABLE; - selector = GET_SMSTATE(u32, smstate, 0x7fc4); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7f64)); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f60)); + desc.selector = GET_SMSTATE(u32, smstate, 0x7fc4); + desc.base = GET_SMSTATE(u32, smstate, 0x7f64); + desc.limit = GET_SMSTATE(u32, smstate, 0x7f60); rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f5c)); - ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_TR); + kvm_set_segment(vcpu, &desc, VCPU_SREG_TR); - selector = GET_SMSTATE(u32, smstate, 0x7fc0); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7f80)); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f7c)); + desc.selector = GET_SMSTATE(u32, smstate, 0x7fc0); + desc.base = GET_SMSTATE(u32, smstate, 0x7f80); + desc.limit = GET_SMSTATE(u32, smstate, 0x7f7c); rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f78)); - ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_LDTR); + kvm_set_segment(vcpu, &desc, VCPU_SREG_LDTR); dt.address = GET_SMSTATE(u32, smstate, 0x7f74); dt.size = GET_SMSTATE(u32, smstate, 0x7f70); - ctxt->ops->set_gdt(ctxt, &dt); + static_call(kvm_x86_set_gdt)(vcpu, &dt); dt.address = GET_SMSTATE(u32, smstate, 0x7f58); dt.size = GET_SMSTATE(u32, smstate, 0x7f54); - ctxt->ops->set_idt(ctxt, &dt); + static_call(kvm_x86_set_idt)(vcpu, &dt); for (i = 0; i < 6; i++) { - int r = rsm_load_seg_32(ctxt, smstate, i); + int r = rsm_load_seg_32(vcpu, smstate, i); if (r != X86EMUL_CONTINUE) return r; } cr4 = GET_SMSTATE(u32, smstate, 0x7f14); - ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8)); + vcpu->arch.smbase = GET_SMSTATE(u32, smstate, 0x7ef8); - return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4); + return rsm_enter_protected_mode(vcpu, cr0, cr3, cr4); } #ifdef CONFIG_X86_64 static int rsm_load_state_64(struct x86_emulate_ctxt 
*ctxt, const char *smstate) { - struct desc_struct desc; + struct kvm_vcpu *vcpu = ctxt->vcpu; + struct kvm_segment desc; struct desc_ptr dt; u64 val, cr0, cr3, cr4; - u32 base3; - u16 selector; int i, r; for (i = 0; i < NR_EMULATOR_GPRS; i++) @@ -459,51 +446,49 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, val = GET_SMSTATE(u64, smstate, 0x7f68); - if (ctxt->ops->set_dr(ctxt, 6, val)) + if (kvm_set_dr(vcpu, 6, val)) return X86EMUL_UNHANDLEABLE; val = GET_SMSTATE(u64, smstate, 0x7f60); - if (ctxt->ops->set_dr(ctxt, 7, val)) + if (kvm_set_dr(vcpu, 7, val)) return X86EMUL_UNHANDLEABLE; cr0 = GET_SMSTATE(u64, smstate, 0x7f58); cr3 = GET_SMSTATE(u64, smstate, 0x7f50); cr4 = GET_SMSTATE(u64, smstate, 0x7f48); - ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7f00)); + vcpu->arch.smbase = GET_SMSTATE(u32, smstate, 0x7f00); val = GET_SMSTATE(u64, smstate, 0x7ed0); - if (ctxt->ops->set_msr(ctxt, MSR_EFER, val & ~EFER_LMA)) + if (kvm_set_msr(vcpu, MSR_EFER, val & ~EFER_LMA)) return X86EMUL_UNHANDLEABLE; - selector = GET_SMSTATE(u32, smstate, 0x7e90); + desc.selector = GET_SMSTATE(u32, smstate, 0x7e90); rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e92) << 8); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7e94)); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7e98)); - base3 = GET_SMSTATE(u32, smstate, 0x7e9c); - ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_TR); + desc.limit = GET_SMSTATE(u32, smstate, 0x7e94); + desc.base = GET_SMSTATE(u64, smstate, 0x7e98); + kvm_set_segment(vcpu, &desc, VCPU_SREG_TR); dt.size = GET_SMSTATE(u32, smstate, 0x7e84); dt.address = GET_SMSTATE(u64, smstate, 0x7e88); - ctxt->ops->set_idt(ctxt, &dt); + static_call(kvm_x86_set_idt)(vcpu, &dt); - selector = GET_SMSTATE(u32, smstate, 0x7e70); + desc.selector = GET_SMSTATE(u32, smstate, 0x7e70); rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e72) << 8); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7e74)); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7e78)); - base3 = GET_SMSTATE(u32, smstate, 0x7e7c); - ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_LDTR); + desc.limit = GET_SMSTATE(u32, smstate, 0x7e74); + desc.base = GET_SMSTATE(u64, smstate, 0x7e78); + kvm_set_segment(vcpu, &desc, VCPU_SREG_LDTR); dt.size = GET_SMSTATE(u32, smstate, 0x7e64); dt.address = GET_SMSTATE(u64, smstate, 0x7e68); - ctxt->ops->set_gdt(ctxt, &dt); + static_call(kvm_x86_set_gdt)(vcpu, &dt); - r = rsm_enter_protected_mode(ctxt, cr0, cr3, cr4); + r = rsm_enter_protected_mode(vcpu, cr0, cr3, cr4); if (r != X86EMUL_CONTINUE) return r; for (i = 0; i < 6; i++) { - r = rsm_load_seg_64(ctxt, smstate, i); + r = rsm_load_seg_64(vcpu, smstate, i); if (r != X86EMUL_CONTINUE) return r; } @@ -520,14 +505,14 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt) u64 smbase; int ret; - smbase = ctxt->ops->get_smbase(ctxt); + smbase = vcpu->arch.smbase; - ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00, buf, sizeof(buf)); - if (ret != X86EMUL_CONTINUE) + ret = kvm_vcpu_read_guest(vcpu, smbase + 0xfe00, buf, sizeof(buf)); + if (ret < 0) return X86EMUL_UNHANDLEABLE; - if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_INSIDE_NMI_MASK) == 0) - ctxt->ops->set_nmi_mask(ctxt, false); + if ((vcpu->arch.hflags & HF_SMM_INSIDE_NMI_MASK) == 0) + static_call(kvm_x86_set_nmi_mask)(vcpu, false); kvm_smm_changed(vcpu, false); @@ -535,41 +520,41 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt) * Get back to real mode, to prepare a safe state in which to load * CR0/CR3/CR4/EFER. 
It's all a bit more complicated if the vCPU * supports long mode. - * - * The ctxt->ops callbacks will handle all side effects when writing - * writing MSRs and CRs, e.g. MMU context resets, CPUID - * runtime updates, etc. */ - if (emulator_has_longmode(ctxt)) { - struct desc_struct cs_desc; +#ifdef CONFIG_X86_64 + if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) { + struct kvm_segment cs_desc; /* Zero CR4.PCIDE before CR0.PG. */ - cr4 = ctxt->ops->get_cr(ctxt, 4); + cr4 = kvm_read_cr4(vcpu); if (cr4 & X86_CR4_PCIDE) - ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE); + kvm_set_cr4(vcpu, cr4 & ~X86_CR4_PCIDE); /* A 32-bit code segment is required to clear EFER.LMA. */ memset(&cs_desc, 0, sizeof(cs_desc)); cs_desc.type = 0xb; - cs_desc.s = cs_desc.g = cs_desc.p = 1; - ctxt->ops->set_segment(ctxt, 0, &cs_desc, 0, VCPU_SREG_CS); + cs_desc.s = cs_desc.g = cs_desc.present = 1; + kvm_set_segment(vcpu, &cs_desc, VCPU_SREG_CS); } +#endif /* For the 64-bit case, this will clear EFER.LMA. */ - cr0 = ctxt->ops->get_cr(ctxt, 0); + cr0 = kvm_read_cr0(vcpu); if (cr0 & X86_CR0_PE) - ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE)); + kvm_set_cr0(vcpu, cr0 & ~(X86_CR0_PG | X86_CR0_PE)); - if (emulator_has_longmode(ctxt)) { +#ifdef CONFIG_X86_64 + if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) { /* Clear CR4.PAE before clearing EFER.LME. */ - cr4 = ctxt->ops->get_cr(ctxt, 4); + cr4 = kvm_read_cr4(vcpu); if (cr4 & X86_CR4_PAE) - ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE); + kvm_set_cr4(vcpu, cr4 & ~X86_CR4_PAE); /* And finally go back to 32-bit mode. */ efer = 0; - ctxt->ops->set_msr(ctxt, MSR_EFER, efer); + kvm_set_msr(vcpu, MSR_EFER, efer); } +#endif /* * Give leave_smm() a chance to make ISA-specific changes to the vCPU @@ -580,7 +565,7 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt) return X86EMUL_UNHANDLEABLE; #ifdef CONFIG_X86_64 - if (emulator_has_longmode(ctxt)) + if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) return rsm_load_state_64(ctxt, buf); else #endif diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index a1e524c2c39df2..2ae8ac525fc324 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7214,15 +7214,6 @@ static int emulator_read_std(struct x86_emulate_ctxt *ctxt, return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, exception); } -static int kvm_read_guest_phys_system(struct x86_emulate_ctxt *ctxt, - unsigned long addr, void *val, unsigned int bytes) -{ - struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); - int r = kvm_vcpu_read_guest(vcpu, addr, val, bytes); - - return r < 0 ? 
X86EMUL_IO_NEEDED : X86EMUL_CONTINUE; -} - static int kvm_write_guest_virt_helper(gva_t addr, void *val, unsigned int bytes, struct kvm_vcpu *vcpu, u64 access, struct x86_exception *exception) @@ -8014,26 +8005,6 @@ static int emulator_get_msr(struct x86_emulate_ctxt *ctxt, return kvm_get_msr(emul_to_vcpu(ctxt), msr_index, pdata); } -static int emulator_set_msr(struct x86_emulate_ctxt *ctxt, - u32 msr_index, u64 data) -{ - return kvm_set_msr(emul_to_vcpu(ctxt), msr_index, data); -} - -static u64 emulator_get_smbase(struct x86_emulate_ctxt *ctxt) -{ - struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); - - return vcpu->arch.smbase; -} - -static void emulator_set_smbase(struct x86_emulate_ctxt *ctxt, u64 smbase) -{ - struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); - - vcpu->arch.smbase = smbase; -} - static int emulator_check_pmc(struct x86_emulate_ctxt *ctxt, u32 pmc) { @@ -8132,7 +8103,6 @@ static const struct x86_emulate_ops emulate_ops = { .write_gpr = emulator_write_gpr, .read_std = emulator_read_std, .write_std = emulator_write_std, - .read_phys = kvm_read_guest_phys_system, .fetch = kvm_fetch_guest_virt, .read_emulated = emulator_read_emulated, .write_emulated = emulator_write_emulated, @@ -8152,11 +8122,8 @@ static const struct x86_emulate_ops emulate_ops = { .cpl = emulator_get_cpl, .get_dr = emulator_get_dr, .set_dr = emulator_set_dr, - .get_smbase = emulator_get_smbase, - .set_smbase = emulator_set_smbase, .set_msr_with_filter = emulator_set_msr_with_filter, .get_msr_with_filter = emulator_get_msr_with_filter, - .set_msr = emulator_set_msr, .get_msr = emulator_get_msr, .check_pmc = emulator_check_pmc, .read_pmc = emulator_read_pmc,
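The limit bug called out in the commit message comes from descriptor granularity scaling being applied twice. A hedged illustration in plain C (effective_limit() is illustrative, not KVM's exact code):

	/* With G=1 a descriptor's 20-bit raw limit is page-granular, so the
	 * effective byte-granular limit is (raw << 12) | 0xfff; with G=0 the
	 * raw limit is used as-is. */
	static u32 effective_limit(u32 raw_limit, int g)
	{
		return g ? (raw_limit << 12) | 0xfff : raw_limit;
	}

	/* enter_smm() stores the already-effective 32-bit limit in the
	 * state-save area without shifting it right, so feeding it back
	 * through emulator_set_segment(), which scales by G again, yields
	 * (effective << 12) | 0xfff. Building a struct kvm_segment directly,
	 * as rsm_load_seg_32/64() do after this patch, applies no second
	 * scaling. */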
From patchwork Tue Oct 25 12:42:05 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 10737
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 05/23] KVM: allow compiling out SMM support Date: Tue, 25 Oct 2022 15:42:05 +0300 Message-Id: <20221025124223.227577-6-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663678982064290?= X-GMAIL-MSGID: =?utf-8?q?1747663678982064290?= From: Paolo Bonzini Some users of KVM implement the UEFI variable store through a paravirtual device that does not require the "SMM lockbox" component of edk2; allow them to compile out system management mode, which is not a full implementation especially in how it interacts with nested virtualization. Suggested-by: Sean Christopherson Signed-off-by: Paolo Bonzini --- arch/x86/kvm/Kconfig | 11 ++++++++++ arch/x86/kvm/Makefile | 2 +- arch/x86/kvm/smm.h | 13 ++++++++++++ arch/x86/kvm/svm/svm.c | 2 ++ arch/x86/kvm/vmx/vmx.c | 2 ++ arch/x86/kvm/x86.c | 21 +++++++++++++++++-- tools/testing/selftests/kvm/x86_64/smm_test.c | 2 ++ 7 files changed, 50 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 67be7f217e37bd..716becc0df45b4 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -87,6 +87,17 @@ config KVM_INTEL To compile this as a module, choose M here: the module will be called kvm-intel. +config KVM_SMM + bool "System Management Mode emulation" + default y + depends on KVM + help + Provides support for KVM to emulate System Management Mode (SMM) + in virtual machines. This can be used by the virtual machine + firmware to implement UEFI secure boot. + + If unsure, say Y. 
+ config X86_SGX_KVM bool "Software Guard eXtensions (SGX) Virtualization" depends on X86_SGX && KVM_INTEL diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile index ec6f7656254b9f..6cf40f66827776 100644 --- a/arch/x86/kvm/Makefile +++ b/arch/x86/kvm/Makefile @@ -20,7 +20,7 @@ endif kvm-$(CONFIG_X86_64) += mmu/tdp_iter.o mmu/tdp_mmu.o kvm-$(CONFIG_KVM_XEN) += xen.o -kvm-y += smm.o +kvm-$(CONFIG_KVM_SMM) += smm.o kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \ vmx/evmcs.o vmx/nested.o vmx/posted_intr.o diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h index b0602a92e511e1..4c699fee449296 100644 --- a/arch/x86/kvm/smm.h +++ b/arch/x86/kvm/smm.h @@ -8,6 +8,7 @@ #define PUT_SMSTATE(type, buf, offset, val) \ *(type *)((buf) + (offset) - 0x7e00) = val +#ifdef CONFIG_KVM_SMM static inline int kvm_inject_smi(struct kvm_vcpu *vcpu) { kvm_make_request(KVM_REQ_SMI, vcpu); @@ -23,5 +24,17 @@ void kvm_smm_changed(struct kvm_vcpu *vcpu, bool in_smm); void enter_smm(struct kvm_vcpu *vcpu); int emulator_leave_smm(struct x86_emulate_ctxt *ctxt); void process_smi(struct kvm_vcpu *vcpu); +#else +static inline int kvm_inject_smi(struct kvm_vcpu *vcpu) { return -ENOTTY; } +static inline bool is_smm(struct kvm_vcpu *vcpu) { return false; } +static inline void kvm_smm_changed(struct kvm_vcpu *vcpu, bool in_smm) { WARN_ON_ONCE(1); } +static inline void enter_smm(struct kvm_vcpu *vcpu) { WARN_ON_ONCE(1); } +static inline void process_smi(struct kvm_vcpu *vcpu) { WARN_ON_ONCE(1); } + +/* + * emulator_leave_smm is used as a function pointer, so the + * stub is defined in x86.c. + */ +#endif #endif diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 496ee7d1ae2fb7..6f7ceb35d2ff08 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4150,6 +4150,8 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index) case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC: return false; case MSR_IA32_SMBASE: + if (!IS_ENABLED(CONFIG_KVM_SMM)) + return false; /* SEV-ES guests do not support SMM, so report false */ if (kvm && sev_es_guest(kvm)) return false; diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 038809c6800601..b22330a15adb63 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6841,6 +6841,8 @@ static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index) { switch (index) { case MSR_IA32_SMBASE: + if (!IS_ENABLED(CONFIG_KVM_SMM)) + return false; /* * We cannot do SMM unless we can run the guest in big * real mode. 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 2ae8ac525fc324..6c81d3a606e257 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3649,7 +3649,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) break; } case MSR_IA32_SMBASE: - if (!msr_info->host_initiated) + if (!IS_ENABLED(CONFIG_KVM_SMM) || !msr_info->host_initiated) return 1; vcpu->arch.smbase = data; break; @@ -4065,7 +4065,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) msr_info->data = vcpu->arch.ia32_misc_enable_msr; break; case MSR_IA32_SMBASE: - if (!msr_info->host_initiated) + if (!IS_ENABLED(CONFIG_KVM_SMM) || !msr_info->host_initiated) return 1; msr_info->data = vcpu->arch.smbase; break; @@ -4439,6 +4439,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) r |= KVM_X86_DISABLE_EXITS_MWAIT; break; case KVM_CAP_X86_SMM: + if (!IS_ENABLED(CONFIG_KVM_SMM)) + break; + /* SMBASE is usually relocated above 1M on modern chipsets, * and SMM handlers might indeed rely on 4G segment limits, * so do not report SMM to be available if real mode is @@ -5189,6 +5192,12 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu, vcpu->arch.apic->sipi_vector = events->sipi_vector; if (events->flags & KVM_VCPUEVENT_VALID_SMM) { + if (!IS_ENABLED(CONFIG_KVM_SMM) && + (events->smi.smm || + events->smi.pending || + events->smi.smm_inside_nmi)) + return -EINVAL; + if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm) { kvm_x86_ops.nested_ops->leave_nested(vcpu); kvm_smm_changed(vcpu, events->smi.smm); @@ -8079,6 +8088,14 @@ static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt) return emul_to_vcpu(ctxt)->arch.hflags; } +#ifndef CONFIG_KVM_SMM +static int emulator_leave_smm(struct x86_emulate_ctxt *ctxt) +{ + WARN_ON_ONCE(1); + return X86EMUL_UNHANDLEABLE; +} +#endif + static void emulator_triple_fault(struct x86_emulate_ctxt *ctxt) { kvm_make_request(KVM_REQ_TRIPLE_FAULT, emul_to_vcpu(ctxt)); diff --git a/tools/testing/selftests/kvm/x86_64/smm_test.c b/tools/testing/selftests/kvm/x86_64/smm_test.c index 1f136a81858e5d..cb38a478e1f62a 100644 --- a/tools/testing/selftests/kvm/x86_64/smm_test.c +++ b/tools/testing/selftests/kvm/x86_64/smm_test.c @@ -137,6 +137,8 @@ int main(int argc, char *argv[]) struct kvm_x86_state *state; int stage, stage_reported; + TEST_REQUIRE(kvm_has_cap(KVM_CAP_X86_SMM)); + /* Create VM */ vm = vm_create_with_one_vcpu(&vcpu, guest_code);
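Since KVM_CAP_X86_SMM is now reported only when CONFIG_KVM_SMM is enabled (and the selftest above is gated accordingly), userspace should probe the capability before depending on SMM. A minimal sketch of such a probe, with error handling mostly elided:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
		if (kvm < 0)
			return 1;

		/* KVM_CHECK_EXTENSION returns 0 for absent capabilities, which
		 * is what a CONFIG_KVM_SMM=n kernel reports for KVM_CAP_X86_SMM. */
		int smm = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_X86_SMM);
		printf("KVM_CAP_X86_SMM: %s\n", smm > 0 ? "available" : "absent");
		return 0;
	}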
From patchwork Tue Oct 25 12:42:06 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 10736
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Yang Zhong, linux-kselftest@vger.kernel.org, Kees Cook, Borislav Petkov, Guang Zeng, Wanpeng Li, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Maxim Levitsky, Joerg Roedel, linux-kernel@vger.kernel.org, Wei Wang, Jim Mattson, Dave Hansen, Sean Christopherson, Vitaly Kuznetsov, x86@kernel.org, Shuah Khan
Subject: [PATCH v4 06/23] KVM: x86: compile out vendor-specific code if SMM is disabled
Date: Tue, 25 Oct 2022 15:42:06 +0300
Message-Id: <20221025124223.227577-7-mlevitsk@redhat.com>
In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com>
References: <20221025124223.227577-1-mlevitsk@redhat.com>

From: Paolo Bonzini Vendor-specific code that deals with SMI injection and saving/restoring SMM state is not needed if CONFIG_KVM_SMM is disabled, so remove the four callbacks smi_allowed, enter_smm, leave_smm and enable_smi_window. The users in svm/nested.c and x86.c also have to be compiled out; the amount of #ifdef'ed code is small and it's not worth moving it to smm.c. enter_smm is now used only within #ifdef CONFIG_KVM_SMM, and the stub can therefore be removed. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm-x86-ops.h | 2 ++ arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/smm.h | 1 - arch/x86/kvm/svm/nested.c | 2 ++ arch/x86/kvm/svm/svm.c | 4 ++++ arch/x86/kvm/vmx/vmx.c | 4 ++++ arch/x86/kvm/x86.c | 4 ++++ 7 files changed, 18 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index 82ba4a564e5875..ea58e67e9a6701 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -110,10 +110,12 @@ KVM_X86_OP_OPTIONAL_RET0(dy_apicv_has_pending_interrupt) KVM_X86_OP_OPTIONAL(set_hv_timer) KVM_X86_OP_OPTIONAL(cancel_hv_timer) KVM_X86_OP(setup_mce) +#ifdef CONFIG_KVM_SMM KVM_X86_OP(smi_allowed) KVM_X86_OP(enter_smm) KVM_X86_OP(leave_smm) KVM_X86_OP(enable_smi_window) +#endif KVM_X86_OP_OPTIONAL(mem_enc_ioctl) KVM_X86_OP_OPTIONAL(mem_enc_register_region) KVM_X86_OP_OPTIONAL(mem_enc_unregister_region) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 4afed04fcc8241..541ed36cbb82f8 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1607,10 +1607,12 @@ struct kvm_x86_ops { void (*setup_mce)(struct kvm_vcpu *vcpu); +#ifdef CONFIG_KVM_SMM int (*smi_allowed)(struct kvm_vcpu *vcpu, bool for_injection); int (*enter_smm)(struct kvm_vcpu *vcpu, char *smstate); int (*leave_smm)(struct kvm_vcpu *vcpu, const char *smstate); void (*enable_smi_window)(struct kvm_vcpu *vcpu); +#endif int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp); int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp); diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h index 4c699fee449296..7ccce6b655cacf 100644 --- a/arch/x86/kvm/smm.h +++ b/arch/x86/kvm/smm.h @@ -28,7 +28,6 @@ void
process_smi(struct kvm_vcpu *vcpu); static inline int kvm_inject_smi(struct kvm_vcpu *vcpu) { return -ENOTTY; } static inline bool is_smm(struct kvm_vcpu *vcpu) { return false; } static inline void kvm_smm_changed(struct kvm_vcpu *vcpu, bool in_smm) { WARN_ON_ONCE(1); } -static inline void enter_smm(struct kvm_vcpu *vcpu) { WARN_ON_ONCE(1); } static inline void process_smi(struct kvm_vcpu *vcpu) { WARN_ON_ONCE(1); } /* diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index cc0fd75f7cbab5..b258d6988f5dde 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1378,6 +1378,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) return 0; } +#ifdef CONFIG_KVM_SMM if (vcpu->arch.smi_pending && !svm_smi_blocked(vcpu)) { if (block_nested_events) return -EBUSY; @@ -1386,6 +1387,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) nested_svm_simple_vmexit(svm, SVM_EXIT_SMI); return 0; } +#endif if (vcpu->arch.nmi_pending && !svm_nmi_blocked(vcpu)) { if (block_nested_events) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 6f7ceb35d2ff08..2200b8aa727398 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4408,6 +4408,7 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu) vcpu->arch.mcg_cap &= 0x1ff; } +#ifdef CONFIG_KVM_SMM bool svm_smi_blocked(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); @@ -4557,6 +4558,7 @@ static void svm_enable_smi_window(struct kvm_vcpu *vcpu) /* We must be in SMM; RSM will cause a vmexit anyway. */ } } +#endif static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type, void *insn, int insn_len) @@ -4832,10 +4834,12 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .pi_update_irte = avic_pi_update_irte, .setup_mce = svm_setup_mce, +#ifdef CONFIG_KVM_SMM .smi_allowed = svm_smi_allowed, .enter_smm = svm_enter_smm, .leave_smm = svm_leave_smm, .enable_smi_window = svm_enable_smi_window, +#endif .mem_enc_ioctl = sev_mem_enc_ioctl, .mem_enc_register_region = sev_mem_enc_register_region, diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index b22330a15adb63..107fc035c91b80 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7905,6 +7905,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu) ~FEAT_CTL_LMCE_ENABLED; } +#ifdef CONFIG_KVM_SMM static int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { /* we need a nested vmexit to enter SMM, postpone if run is pending */ @@ -7959,6 +7960,7 @@ static void vmx_enable_smi_window(struct kvm_vcpu *vcpu) { /* RSM will cause a vmexit anyway. */ } +#endif static bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu) { @@ -8126,10 +8128,12 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = { .setup_mce = vmx_setup_mce, +#ifdef CONFIG_KVM_SMM .smi_allowed = vmx_smi_allowed, .enter_smm = vmx_enter_smm, .leave_smm = vmx_leave_smm, .enable_smi_window = vmx_enable_smi_window, +#endif .can_emulate_instruction = vmx_can_emulate_instruction, .apic_init_signal_blocked = vmx_apic_init_signal_blocked, diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 6c81d3a606e257..8394cd62c2854c 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9876,6 +9876,7 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu, * in order to make progress and get back here for another iteration. * The kvm_x86_ops hooks communicate this by returning -EBUSY. */ +#ifdef CONFIG_KVM_SMM if (vcpu->arch.smi_pending) { r = can_inject ? 
static_call(kvm_x86_smi_allowed)(vcpu, true) : -EBUSY; if (r < 0) @@ -9888,6 +9889,7 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu, } else static_call(kvm_x86_enable_smi_window)(vcpu); } +#endif if (vcpu->arch.nmi_pending) { r = can_inject ? static_call(kvm_x86_nmi_allowed)(vcpu, true) : -EBUSY; @@ -12517,10 +12519,12 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu) static_call(kvm_x86_nmi_allowed)(vcpu, false))) return true; +#ifdef CONFIG_KVM_SMM if (kvm_test_request(KVM_REQ_SMI, vcpu) || (vcpu->arch.smi_pending && static_call(kvm_x86_smi_allowed)(vcpu, false))) return true; +#endif if (kvm_arch_interrupt_allowed(vcpu) && (kvm_cpu_has_interrupt(vcpu) || From patchwork Tue Oct 25 12:42:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 10738 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp983524wru; Tue, 25 Oct 2022 05:46:21 -0700 (PDT) X-Google-Smtp-Source: AMsMyM6GO71LGEfD9105ykoDs/KKf4nws56j9Ij2SFXwXnmKGMYCGNW0a42LK5hZ3s+VZYt0eskI X-Received: by 2002:a17:907:d02:b0:7a3:de36:b67 with SMTP id gn2-20020a1709070d0200b007a3de360b67mr12319712ejc.451.1666701981181; Tue, 25 Oct 2022 05:46:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666701981; cv=none; d=google.com; s=arc-20160816; b=muk66Xq9VKu2QHEp1uzTUbbzamUuYXTEk2c8w2V+7PFqJb1oiOh5YjokUMrL1Nu0WN my6+Wyt7S0ltEV18feBqUenTCUGYFlZVxTQjzU3bf+hs7IFaKEo1tFGcRlr95hmTEQsq 05vHPOlabnBCFcRuoP8K5XSgGoCpauCTvkONWaJ6lIGPyosZ4hKkjq8/uz5HPvlQD6fp QIj83cd1nv+WYp6PRRPNjHW08NHsYN8Q1ke8D0Z1YMEf5AN7h/C78Q6mlCJUUWds7ac8 D9p3I4m9t6+jwE3OLg4V4zTUmbx5UWc7yXqVE9YN/OqeHesY7ZeVe33j5AFdIMPUn6nN nQeg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=6RVePXqf+ZlRyOqE0ZPUurd+WAhhvI1fNgkZgVZLz8k=; b=taoxOkWjSnjIuBbP3Qiz+JfWqWdVYLJnk7DJd8/0toTqmk/7zrRsq3rS33v2CL6ypI mGz/x6c+7/PDDZhvHJKTjPGnbqGy20qCbepoc5XCBDbhgzdJar3vcaNsiYXbTA+gBol8 Ija8zjjDoPeBQce3cSTrzCIH3mjtIcWMGWssZmfL5BUCBK3DuWvLKrbKgK16y7Vz+LUF b2M+Mnv2YaDsM9LJ1oIN2FQ7YxuP9s/WgJBsPFSrN7DE0ZXkLs9Jz6rD1QUiupNY1Im3 KRmhlnXAFX6dOPVrzpjbl3gEcXh4bar5Zlz1eP0QTje2ZDkdVERDJognx/73DOIkXWv1 A0tQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=WN6MlHU+; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
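[A note for readers following the diff: the pattern only works if the ops-table
member, its vendor initializers and every call site all disappear under the
same config symbol, or the build breaks. A minimal self-contained sketch of
that pattern, with toy names, not KVM code:]

#include <stdio.h>

#define CONFIG_HAS_FROBNICATE 1	/* flip to 0 to compile the hook out */

struct ops {
	void (*greet)(void);
#if CONFIG_HAS_FROBNICATE
	void (*frobnicate)(void);	/* member exists only when the feature is on */
#endif
};

static void greet(void) { puts("hello"); }

#if CONFIG_HAS_FROBNICATE
static void frobnicate(void) { puts("frobnicating"); }
#endif

static struct ops my_ops = {
	.greet = greet,
#if CONFIG_HAS_FROBNICATE
	.frobnicate = frobnicate,	/* initializer compiled out with the member */
#endif
};

int main(void)
{
	my_ops.greet();
#if CONFIG_HAS_FROBNICATE
	my_ops.frobnicate();		/* every call site needs the same guard */
#endif
	return 0;
}

[Setting the toy config macro to 0 removes the member, the initializer and the
call in one stroke, which is what the patch does for the four SMM callbacks
via CONFIG_KVM_SMM.]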
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 07/23] KVM: x86: remove SMRAM address space if SMM is not supported Date: Tue, 25 Oct 2022 15:42:07 +0300 Message-Id: <20221025124223.227577-8-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663696142353769?= X-GMAIL-MSGID: =?utf-8?q?1747663696142353769?= From: Paolo Bonzini If CONFIG_KVM_SMM is not defined HF_SMM_MASK will always be zero, and we can spare userspace the hassle of setting up the SMRAM address space simply by reporting that only one address space is supported. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 541ed36cbb82f8..28e41926027e7a 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1995,11 +1995,14 @@ enum { #define HF_SMM_MASK (1 << 6) #define HF_SMM_INSIDE_NMI_MASK (1 << 7) -#define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE -#define KVM_ADDRESS_SPACE_NUM 2 - -#define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0) -#define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm) +#ifdef CONFIG_KVM_SMM +# define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE +# define KVM_ADDRESS_SPACE_NUM 2 +# define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 
1 : 0) +# define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm) +#else +# define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, 0) +#endif #define KVM_ARCH_WANT_MMU_NOTIFIER From patchwork Tue Oct 25 12:42:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 10746 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp984171wru; Tue, 25 Oct 2022 05:47:53 -0700 (PDT) X-Google-Smtp-Source: AMsMyM6aekRwVPSX0XMNivKxX1A6VE+NXJpdMp44UaSPgtBaGtQCLTOBKINXM3qfBP2V9hDt/FeH X-Received: by 2002:a17:90b:4d92:b0:213:48e0:8611 with SMTP id oj18-20020a17090b4d9200b0021348e08611mr991581pjb.212.1666702073648; Tue, 25 Oct 2022 05:47:53 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666702073; cv=none; d=google.com; s=arc-20160816; b=mQCjYI25/k4FqgZNRynTJI9gx5kAmDkZhHLcO6J4lvBDZDFmjPCEyIuGCYaRKAfdm3 eGOANKCXDM2W4qDvHhNKtsTivMKD3QIVKCYiygrkQoP0o5Q6meJX9WtWaW08iiuUzSu+ yKjUvnYqDSMb3689iMGilIZJmcg8+XxrpFdwwnm+8bcc7lWzaZxbscWpqbw5ttnw3EcK qabBLp0PHnpaQg9nTd5D4HQOV8V/VVljxDyC4SOjP8phINbScqOMHElArf8xkyh0hGK5 OWgl/DzvZ5/61a3JQbGBfQhjYxVAk36SVbYNJm3ga5R4ek9qDh8HycVGxtjZwt542qCo 6DCw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=zVCaW6lEEBFNmdMv7F/qs/utrwfJnH2v1kJUVNJMX6M=; b=k+u2iB9h8d/MgEi87KlZqJnKrlsU8tNVga2wgmaBWul9Vxx2nkrRShVb9MmrA8Wm5c ZOFEpqjlRDj/yOwrt4UMUBXpHtFDQtZfNJiozFJEljwaXXUn32i5A7X4c7dRU3RFYz0E jrRmqOFlLjBhjT+MJJj2+Q/c+8NkFRqszE2doD76W2bwKN0xD1aFBVPp5RcwrxOM30hO znjsRiZjCkS2ZfnaoGub3mWOkFdbieU/bFUT9qFq9QUQm5w1x8dx+6HwA2ZoLlLUtLD0 WYin5a4YqZnb1+Mi4UNiMpNsJYLpYQ0J8R8dzFxCXO+cJCvRw3MoXYNnjd2r730J1QE3 b/Uw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=fN3BSZHM; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
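[What this buys is easiest to see in a toy model of the memslot lookup,
illustrative only, with names simplified from the macros above: with SMM
compiled out there is a single address space, and the SMM flag can no longer
select a second set of memslots.]

#include <assert.h>

#define TOY_SMM_ENABLED 1		/* stands in for CONFIG_KVM_SMM */

#if TOY_SMM_ENABLED
# define TOY_ADDRESS_SPACE_NUM 2
/* a vCPU with the SMM flag set reads memory through address space 1 */
# define toy_memslots_id(in_smm)	((in_smm) ? 1 : 0)
#else
# define TOY_ADDRESS_SPACE_NUM 1
# define toy_memslots_id(in_smm)	0	/* only one space exists */
#endif

int main(void)
{
	assert(toy_memslots_id(0) < TOY_ADDRESS_SPACE_NUM);
	assert(toy_memslots_id(1) < TOY_ADDRESS_SPACE_NUM);
	return 0;
}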
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 08/23] KVM: x86: do not define KVM_REQ_SMI if SMM disabled Date: Tue, 25 Oct 2022 15:42:08 +0300 Message-Id: <20221025124223.227577-9-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663793516798687?= X-GMAIL-MSGID: =?utf-8?q?1747663793516798687?= From: Paolo Bonzini This ensures that all the relevant code is compiled out, in fact the process_smi stub can be removed too. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/smm.h | 1 - arch/x86/kvm/x86.c | 6 ++++++ 3 files changed, 8 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 28e41926027e7a..0b0a82c0bb5cbd 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -81,7 +81,9 @@ #define KVM_REQ_NMI KVM_ARCH_REQ(9) #define KVM_REQ_PMU KVM_ARCH_REQ(10) #define KVM_REQ_PMI KVM_ARCH_REQ(11) +#ifdef CONFIG_KVM_SMM #define KVM_REQ_SMI KVM_ARCH_REQ(12) +#endif #define KVM_REQ_MASTERCLOCK_UPDATE KVM_ARCH_REQ(13) #define KVM_REQ_MCLOCK_INPROGRESS \ KVM_ARCH_REQ_FLAGS(14, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h index 7ccce6b655cacf..a6795b93ba3002 100644 --- a/arch/x86/kvm/smm.h +++ b/arch/x86/kvm/smm.h @@ -28,7 +28,6 @@ void process_smi(struct kvm_vcpu *vcpu); static inline int kvm_inject_smi(struct kvm_vcpu *vcpu) { return -ENOTTY; } static inline bool is_smm(struct kvm_vcpu *vcpu) { return false; } static inline void kvm_smm_changed(struct kvm_vcpu *vcpu, bool in_smm) { WARN_ON_ONCE(1); } -static inline void process_smi(struct kvm_vcpu *vcpu) { WARN_ON_ONCE(1); } /* * emulator_leave_smm is used as a function pointer, so the diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 8394cd62c2854c..56004890a71742 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -5033,8 +5033,10 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu, process_nmi(vcpu); +#ifdef CONFIG_KVM_SMM if (kvm_check_request(KVM_REQ_SMI, vcpu)) process_smi(vcpu); +#endif /* * KVM's ABI only allows for one exception to be migrated. 
Luckily, @@ -10207,8 +10209,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) } if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu)) record_steal_time(vcpu); +#ifdef CONFIG_KVM_SMM if (kvm_check_request(KVM_REQ_SMI, vcpu)) process_smi(vcpu); +#endif if (kvm_check_request(KVM_REQ_NMI, vcpu)) process_nmi(vcpu); if (kvm_check_request(KVM_REQ_PMU, vcpu)) @@ -12565,7 +12569,9 @@ bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu) return true; if (kvm_test_request(KVM_REQ_NMI, vcpu) || +#ifdef CONFIG_KVM_SMM kvm_test_request(KVM_REQ_SMI, vcpu) || +#endif kvm_test_request(KVM_REQ_EVENT, vcpu)) return true; From patchwork Tue Oct 25 12:42:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 10747 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp984228wru; Tue, 25 Oct 2022 05:48:02 -0700 (PDT) X-Google-Smtp-Source: AMsMyM70PqyuKWJqnensFbwfd1XkciKLRv2pLYEP7qlQ6Z6B5ucO4DyylLmLkR8l3A9PmEdFZjOE X-Received: by 2002:a63:c4e:0:b0:45f:795:c20a with SMTP id 14-20020a630c4e000000b0045f0795c20amr33343281pgm.559.1666702082648; Tue, 25 Oct 2022 05:48:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666702082; cv=none; d=google.com; s=arc-20160816; b=GtKRLHKK5VvrZeYF2KHCmRJUfd/YU6XK3wj2ac8OcxHE5WXVKa0SUbqlOAYj7KFpJ/ l4Ncmuu7TS58Q58VRNNhcLIRwCF7TW0YUrzaZlbhGshwvbEmDC7gIfGQlLQ47RTL8Lky if4rWVrrfRdtw39jOn7mDRPA1oJoT1PkRr0jvSfg/lY3hxHpODFW+1zaSXdiP5rCfFy6 zQ8+zzHJ4dYmTroxIgCftRkPRDWtUmotq/E0wH0Obt5kqd1BdpUrk94+nuXJIRARPEqQ 64B18+nS5sbzai5K0FMaHAUHIIjy1DHP3SRGDk1B6vAuJKDliruoeI2rpycy6srrbSPh Oe8g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=k7eb+GFGOU5S7G9TNA6e1VQ8CjmkOXIwTqSDjDKPGFk=; b=HOGYPq4cu5gEgUex8DvOnWVqU+GIWGliWuaWZq6yozGOtALGSYY59IRsnHXvLsjALU TcsVCRzxZkGYA5T4E1L+ulFtFUIFCerOQhCN2SGwfDWqd5BdFPRjVLUT2f9KP7Wa74ym gBPGYwxWhQU1W7aH/eBPBe10YmDWgGMkBsPr7jAzjptYrBK4i/XNyK6nQifpeFhs/FJQ nk7bB2gaCPokUPK+sZmSZRtgm2yXsgZcec+mEIrLorU63OfgPVY+7MfcKiOW12oviQSu gGedjQWnH0k/kbX0cyxLj4b5ne4JojeamezraEW4RtkGtLWlXD0AeeH+CnX+I6yYSW7B Gc+Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=GwzMHSQ6; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
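[For context, a KVM request is a per-vCPU bit that any path may set and that
the vCPU run loop tests and clears. A toy model of that protocol, not kernel
code, shows why leaving KVM_REQ_SMI undefined forces every user site to be
guarded, which is exactly the compile-time guarantee this patch wants:]

#include <stdbool.h>
#include <stdio.h>

static unsigned long requests;	/* one pending-request bit per event type */

enum { TOY_REQ_NMI = 9, TOY_REQ_SMI = 12 };

static void make_request(int req)
{
	requests |= 1UL << req;
}

/* test-and-clear, mirroring what kvm_check_request does */
static bool check_request(int req)
{
	if (!(requests & (1UL << req)))
		return false;
	requests &= ~(1UL << req);
	return true;
}

int main(void)
{
	make_request(TOY_REQ_SMI);
	printf("%d\n", check_request(TOY_REQ_SMI));	/* 1: was pending */
	printf("%d\n", check_request(TOY_REQ_SMI));	/* 0: cleared by the check */
	return 0;
}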
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 09/23] bug: introduce ASSERT_STRUCT_OFFSET Date: Tue, 25 Oct 2022 15:42:09 +0300 Message-Id: <20221025124223.227577-10-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663803375007082?= X-GMAIL-MSGID: =?utf-8?q?1747663803375007082?= ASSERT_STRUCT_OFFSET allows to assert during the build of the kernel that a field in a struct have an expected offset. KVM used to have such macro, but there is almost nothing KVM specific in it so move it to build_bug.h, so that it can be used in other places in KVM. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/vmx/vmcs12.h | 5 ++--- include/linux/build_bug.h | 9 +++++++++ 2 files changed, 11 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h index 746129ddd5ae02..01936013428b5c 100644 --- a/arch/x86/kvm/vmx/vmcs12.h +++ b/arch/x86/kvm/vmx/vmcs12.h @@ -208,9 +208,8 @@ struct __packed vmcs12 { /* * For save/restore compatibility, the vmcs12 field offsets must not change. */ -#define CHECK_OFFSET(field, loc) \ - BUILD_BUG_ON_MSG(offsetof(struct vmcs12, field) != (loc), \ - "Offset of " #field " in struct vmcs12 has changed.") +#define CHECK_OFFSET(field, loc) \ + ASSERT_STRUCT_OFFSET(struct vmcs12, field, loc) static inline void vmx_check_vmcs12_offsets(void) { diff --git a/include/linux/build_bug.h b/include/linux/build_bug.h index e3a0be2c90ad98..3aa3640f8c181f 100644 --- a/include/linux/build_bug.h +++ b/include/linux/build_bug.h @@ -77,4 +77,13 @@ #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr) #define __static_assert(expr, msg, ...) 
_Static_assert(expr, msg) + +/* + * Compile time check that field has an expected offset + */ +#define ASSERT_STRUCT_OFFSET(type, field, expected_offset) \ + BUILD_BUG_ON_MSG(offsetof(type, field) != (expected_offset), \ + "Offset of " #field " in " #type " has changed.") + + #endif /* _LINUX_BUILD_BUG_H */ From patchwork Tue Oct 25 12:42:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 10741 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp983766wru; Tue, 25 Oct 2022 05:46:57 -0700 (PDT) X-Google-Smtp-Source: AMsMyM4cwixyJiUOoOCMVmDSR9ycH3E8jhp9zhzRmjaXUakGIvBlYn4Di0VOhwFbk+madX3Ib2zM X-Received: by 2002:a05:6402:270b:b0:45d:61cd:73cc with SMTP id y11-20020a056402270b00b0045d61cd73ccmr35862603edd.136.1666702006328; Tue, 25 Oct 2022 05:46:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666702006; cv=none; d=google.com; s=arc-20160816; b=uEW9/cA1id0SpAQkT1m5Cs9gUam0eH5saMRJrPag/QARKrNNd0yNT/gvlaH1DOWrYj mBI25aP3T6VcnXl5kVMMepWtdzL9MCa+/9uLfapG9kQ8Nb85zS6FsBHLBVS/x5hU2fv0 4EgL4fS3KPJr3rWciuTxMwMHX6d/5o3jarb4hjiUAfBzhHSTkmHcRvlX5F9lZvV0rg5X Wg/Iavaq0T+zrvVTAYN9Sk+IEYvQsjTuD4ZY+7yampxgjwAY3XWnC25KeMdsyc/LZMgL nH2Qocb4CXbNCgS+oYWrOlFY3Hvc6bO9i8WxepR5jO7y0GTUZQmQ+wXXkHhJUVwRdbVD szcA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=Hjv4lWsvg+hFAXBQVxvDxYwhYVqBj8cP/nEGapoZ4V4=; b=pL9FjXg4+c0A2JNnEHmUbWh9yuMpivjhliSxEWgrrH1iNTck41VAw3I18j601p/+I7 vKkJ4m7ZtvRDXfF87Rp387qDKdW08CGRb319GCuGxTsvP0cr7A2cUlATStnu+CN/ji5E pXVpqtjXEK6c5+uyksvikVKwGroCOReYccKTl+hNdkM7d3SliJv9tt7N4oy5mEXnWlWU zh7fTKb6tew/OGPSY++sF81aojnpxzc9mQ1Yj/9q6xaD41e9bgL4XwoTdDYscSox9qRo he/2QQjHtJElHLiIMPKMHu+Gc/91P/S74dHSb9G9ItKIFgzXGPCD3LNJAGeb4plUHz1f x3rQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=CcmFGvBt; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
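[With the macro in build_bug.h, any structure with a fixed binary layout can
pin its offsets at build time. A hypothetical kernel-context usage sketch
follows; the struct is invented for illustration:]

#include <linux/build_bug.h>
#include <linux/stddef.h>
#include <linux/types.h>

struct wire_header {		/* made-up on-the-wire format */
	u32 magic;		/* expected at offset 0 */
	u32 length;		/* expected at offset 4 */
	u64 payload;		/* expected at offset 8 */
};

static inline void check_wire_header_layout(void)
{
	ASSERT_STRUCT_OFFSET(struct wire_header, magic, 0);
	ASSERT_STRUCT_OFFSET(struct wire_header, length, 4);
	ASSERT_STRUCT_OFFSET(struct wire_header, payload, 8);
	/*
	 * Reordering the fields now fails the build with
	 * "Offset of ... in struct wire_header has changed."
	 */
}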
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 10/23] KVM: x86: emulator: em_sysexit should update ctxt->mode Date: Tue, 25 Oct 2022 15:42:10 +0300 Message-Id: <20221025124223.227577-11-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663723089344249?= X-GMAIL-MSGID: =?utf-8?q?1747663723089344249?= SYSEXIT is one of the instructions that can change the processor mode, thus ctxt->mode should be updated after it. Note that this is likely a benign bug, because the only problematic mode change is from 32 bit to 64 bit which can lead to truncation of RIP, and it is not possible to do with sysexit, since sysexit running in 32 bit mode will be limited to 32 bit version. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/emulate.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 671f7e5871ff70..e23c984d6aae09 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -2525,6 +2525,7 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS); ctxt->_eip = rdx; + ctxt->mode = usermode; *reg_write(ctxt, VCPU_REGS_RSP) = rcx; return X86EMUL_CONTINUE; From patchwork Tue Oct 25 12:42:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 10744 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp983900wru; Tue, 25 Oct 2022 05:47:16 -0700 (PDT) X-Google-Smtp-Source: AMsMyM5gq4aMDSmaP56o1sPS7fqoNLVKxsR/HM0emig1SSZeBAbNYV3cTO9gYaF9AqsyH1MVjbDM X-Received: by 2002:a17:90b:1c8d:b0:203:cc25:4eb5 with SMTP id oo13-20020a17090b1c8d00b00203cc254eb5mr44417998pjb.132.1666702035805; Tue, 25 Oct 2022 05:47:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666702035; cv=none; d=google.com; s=arc-20160816; b=RurSV1as2ghcbUk7R6TOzI++AE0vGi5yNTremVNEtXmdbKJcwKHPLnsILMHSo2pNCn UzR5A7lw4vT4VsdqwKtq+UmNqMsAmsIzfHkP1/h+IqhXl9ToCU4YsnufMtea5z43Nq+2 tjv+tiuf6BXL+zi52gbG/cVDYLNSspgPEdF4yzZctp0hsa8v4EHQrr6GfWFrwKTT2biv zaHRdH2ZKmGaXQH9LZQ18cmdYi9q2TL3J1p7upuyzyWTAuCBPWFrVwXYKyiFE4q58oE0 7R7ttkSFuDRKMBcLX+G1jorcpkEACeApiHNF6qxnic6z6hJJ4ue8Iarg9nSFDYI1fW3x 4Rhw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=4rzrXGptVIsN6KuOsZNVu5cL4vgxSqIWXtkapsoMQsw=; b=CL9uB2Qq1A8ar2WTUSmln8Y38uZVyhfoqVKF4s7/xF92NhyO1fWpNM2FQKSfaZCfmv Dc75XdbOb9wmOj/5gILHMMLFrg8Gk58YtSwL2gjteEqxkq1YQWS9QJheBuGjd3im5i6y LqPKZkBmG4Odv06qsEzujBZW7lSmG/CU0BhwA0cLzP6Kugvcpy/JvrIpntR4t33+Gtgw oInqc6asYltqKyDi9Ll+ZfoLOdzqg2JW+YQr47LP6+GGBLX6Y/V44x6bGHkSeu6g3Ftk 
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 11/23] KVM: x86: emulator: introduce emulator_recalc_and_set_mode Date: Tue, 25 Oct 2022 15:42:11 +0300 Message-Id: <20221025124223.227577-12-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663753974491117?= X-GMAIL-MSGID: =?utf-8?q?1747663753974491117?= Some instructions update the cpu execution mode, which needs to update the emulation mode. Extract this code, and make assign_eip_far use it. assign_eip_far now reads CS, instead of getting it via a parameter, which is ok, because callers always assign CS to the same value before calling this function. No functional change is intended. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/emulate.c | 85 ++++++++++++++++++++++++++++-------------- 1 file changed, 57 insertions(+), 28 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index e23c984d6aae09..c65f57b6da9bf1 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -760,8 +760,7 @@ static int linearize(struct x86_emulate_ctxt *ctxt, ctxt->mode, linear); } -static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst, - enum x86emul_mode mode) +static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst) { ulong linear; int rc; @@ -771,41 +770,71 @@ static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst, if (ctxt->op_bytes != sizeof(unsigned long)) addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1); - rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear); + rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear); if (rc == X86EMUL_CONTINUE) ctxt->_eip = addr.ea; return rc; } +static inline int emulator_recalc_and_set_mode(struct x86_emulate_ctxt *ctxt) +{ + u64 efer; + struct desc_struct cs; + u16 selector; + u32 base3; + + ctxt->ops->get_msr(ctxt, MSR_EFER, &efer); + + if (!(ctxt->ops->get_cr(ctxt, 0) & X86_CR0_PE)) { + /* Real mode. cpu must not have long mode active */ + if (efer & EFER_LMA) + return X86EMUL_UNHANDLEABLE; + ctxt->mode = X86EMUL_MODE_REAL; + return X86EMUL_CONTINUE; + } + + if (ctxt->eflags & X86_EFLAGS_VM) { + /* Protected/VM86 mode. cpu must not have long mode active */ + if (efer & EFER_LMA) + return X86EMUL_UNHANDLEABLE; + ctxt->mode = X86EMUL_MODE_VM86; + return X86EMUL_CONTINUE; + } + + if (!ctxt->ops->get_segment(ctxt, &selector, &cs, &base3, VCPU_SREG_CS)) + return X86EMUL_UNHANDLEABLE; + + if (efer & EFER_LMA) { + if (cs.l) { + /* Proper long mode */ + ctxt->mode = X86EMUL_MODE_PROT64; + } else if (cs.d) { + /* 32 bit compatibility mode*/ + ctxt->mode = X86EMUL_MODE_PROT32; + } else { + ctxt->mode = X86EMUL_MODE_PROT16; + } + } else { + /* Legacy 32 bit / 16 bit mode */ + ctxt->mode = cs.d ? 
X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16; + } + + return X86EMUL_CONTINUE; +} + static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst) { - return assign_eip(ctxt, dst, ctxt->mode); + return assign_eip(ctxt, dst); } -static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst, - const struct desc_struct *cs_desc) +static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst) { - enum x86emul_mode mode = ctxt->mode; - int rc; + int rc = emulator_recalc_and_set_mode(ctxt); -#ifdef CONFIG_X86_64 - if (ctxt->mode >= X86EMUL_MODE_PROT16) { - if (cs_desc->l) { - u64 efer = 0; + if (rc != X86EMUL_CONTINUE) + return rc; - ctxt->ops->get_msr(ctxt, MSR_EFER, &efer); - if (efer & EFER_LMA) - mode = X86EMUL_MODE_PROT64; - } else - mode = X86EMUL_MODE_PROT32; /* temporary value */ - } -#endif - if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32) - mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16; - rc = assign_eip(ctxt, dst, mode); - if (rc == X86EMUL_CONTINUE) - ctxt->mode = mode; - return rc; + return assign_eip(ctxt, dst); } static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel) @@ -2141,7 +2170,7 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt) if (rc != X86EMUL_CONTINUE) return rc; - rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc); + rc = assign_eip_far(ctxt, ctxt->src.val); /* Error handling is not implemented. */ if (rc != X86EMUL_CONTINUE) return X86EMUL_UNHANDLEABLE; @@ -2219,7 +2248,7 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt) &new_desc); if (rc != X86EMUL_CONTINUE) return rc; - rc = assign_eip_far(ctxt, eip, &new_desc); + rc = assign_eip_far(ctxt, eip); /* Error handling is not implemented. */ if (rc != X86EMUL_CONTINUE) return X86EMUL_UNHANDLEABLE; @@ -3119,7 +3148,7 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt) if (rc != X86EMUL_CONTINUE) return rc; - rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc); + rc = assign_eip_far(ctxt, ctxt->src.val); if (rc != X86EMUL_CONTINUE) goto fail; From patchwork Tue Oct 25 12:42:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 10740 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp983758wru; Tue, 25 Oct 2022 05:46:56 -0700 (PDT) X-Google-Smtp-Source: AMsMyM4FxDuqbBKeWVdIME3dVjCkU7BBWkGgZgs2gQeosP2xw+j8k1isXexKNB+uxUI7mwWSzV7r X-Received: by 2002:a17:906:ee88:b0:78d:1a9a:b2db with SMTP id wt8-20020a170906ee8800b0078d1a9ab2dbmr32205184ejb.225.1666702015746; Tue, 25 Oct 2022 05:46:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666702015; cv=none; d=google.com; s=arc-20160816; b=rwpmsH6An3+qZ1EnH+B4IZ6S9NR38A9EqnQE8ZKl+zk0OLqmN4YSN3ZK9gokdgxy1t t7F9oaP64ZBiyJHfj3hqGncZith3q1LdxFA6fQDvaneIR7ExTPfKqOF2bpeeP9nYEytm PLvcBWeClYieswQ3bRhGpDT6YD6qZTlyFFMcAqooq2ady47ZTl+1E8ecCGkSE24geYov xMwVOKjkiXN8fCau4QqMNh8l9YyQCXF96qvOVzpOzmdtY5DoJD05luCgbV4Uy7ZN3ZbJ hvlWBSszNWEOm1LiaF+/qNcxerEaW775gT+d+Vsx4R4AF73aDdLC+a5m7nWNlE/ZELNm YZQA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=IkrTzU5rG8TBDhJXggqu3Ho3+X8GGbZwRVqLpyPTTs0=; b=CLJpxlWr5v4lgMLlQpYr8jSbwyccC7oWedgv/5o37cklI4l/ZopmhRKEx/9lyArwnd 4Poz/D8N9jXX9Wrx7J02YJyQ4PRzq6hJOr4AmW+bO3njFWWJCDcvhDSimA9HbPASARDg Hlgs9Tr3iefNhMKBwCn0htCG+8RLs2jyLyok4A/5ueR+pL43aeZld8jEK2eTyvSw12ze 
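[The new helper's decision tree, distilled into a standalone function over
plain flags; this is a sketch for illustration, and the real helper
additionally treats EFER.LMA set in real or VM86 mode as unhandleable:]

enum toy_mode { TOY_REAL, TOY_VM86, TOY_PROT16, TOY_PROT32, TOY_PROT64 };

/*
 * cr0_pe: CR0.PE, eflags_vm: EFLAGS.VM, efer_lma: EFER.LMA,
 * cs_l/cs_d: the L and D bits of the current CS descriptor.
 */
static enum toy_mode toy_recalc_mode(int cr0_pe, int eflags_vm, int efer_lma,
				     int cs_l, int cs_d)
{
	if (!cr0_pe)
		return TOY_REAL;		/* protection disabled */
	if (eflags_vm)
		return TOY_VM86;		/* virtual-8086 task */
	if (efer_lma) {
		if (cs_l)
			return TOY_PROT64;	/* proper long mode */
		return cs_d ? TOY_PROT32 : TOY_PROT16;	/* compatibility mode */
	}
	return cs_d ? TOY_PROT32 : TOY_PROT16;	/* legacy protected mode */
}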
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 12/23] KVM: x86: emulator: update the emulation mode after rsm Date: Tue, 25 Oct 2022 15:42:12 +0300 Message-Id: <20221025124223.227577-13-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663732680789550?= X-GMAIL-MSGID: =?utf-8?q?1747663732680789550?= Update the emulation mode after RSM so that RIP will be correctly written back, because the RSM instruction can switch the CPU mode from 32 bit (or less) to 64 bit. This fixes a guest crash in case the #SMI is received while the guest runs a code from an address > 32 bit. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/emulate.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index c65f57b6da9bf1..2c56d08b426065 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -2315,7 +2315,7 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt) if (ctxt->ops->leave_smm(ctxt)) ctxt->ops->triple_fault(ctxt); - return X86EMUL_CONTINUE; + return emulator_recalc_and_set_mode(ctxt); } static void From patchwork Tue Oct 25 12:42:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 10752 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp985006wru; Tue, 25 Oct 2022 05:49:45 -0700 (PDT) X-Google-Smtp-Source: AMsMyM6F6LscWsna23vuB90hpVnhCm3MdK5liGcoUnmp2Nnow7IHpZyF0mlRN8rhnXwhPBvF3/Z8 X-Received: by 2002:a17:907:2c47:b0:7a4:7673:d6ee with SMTP id hf7-20020a1709072c4700b007a47673d6eemr12087893ejc.397.1666702185111; Tue, 25 Oct 2022 05:49:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666702185; cv=none; d=google.com; s=arc-20160816; b=1LxJ2MhNse0KYSFNyhgx+WXvOoblQWTpO6rVntLrSmIrrSyXO5bpGQUZVn/bXH29tp Fr+AlAvgNuYZ5/oo8PPCKigcjeW5YnGD5FcV5+EyrpakvMdTU2OSahYtazQPrEZQfl15 wmbA0ORCUkTjtZsPM22Z8v5XzifiB90la+b/9CAM09yrcV6eUgdhqwwh6HaEhQeOuE/P hK0Aaptk+hwN3G6+JJWjTI7M2Fy+NZMyrKW/OZdVBe8mG1aRnrx7VW2MPo1/3i3q1MoJ bTK28kNPjpbOnbyxKdg7ZwHIzU5RTxTe52rHSNgVud4ZLP7L+3wKYvrZsE0qhGsUa5ts frQw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=kk/y1Rqm9RfxugisprCGRVnRUEBSsRi9Ufn2Nn2BGLY=; b=OPInpFz40T7q2LAPfOZZ63XbC2gBTdD4VPBH4Q/gFLAA1hTCkBOzoitj793JEr6f8M GZ+DG7Cdg1hzg2RPuqz6W+qdCIWeNWNczfZqbBvClf2nK4RRFUnBo/6j5/BzdcoBYOne BndoizD1dE2NcLT9/HxkxMXS5S+KBW7M/wgqtFwcFf/OPekQ+wRzcLx2+RNWdgqkyR7W H17F3xV+zwtXzqsQVD2ZL6lf6ra1kOPpGsfktNcq9pCRFdDi/ojn714iDloP6HBAXgDz QwWSq4gycHKEZPZQPYb3d9Hc100ztb+i16lLnySLSf+LQjsz5b6XcRRi7PpcGOe/wnMB dVvQ== ARC-Authentication-Results: i=1; 
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 13/23] KVM: x86: emulator: update the emulation mode after CR0 write Date: Tue, 25 Oct 2022 15:42:13 +0300 Message-Id: <20221025124223.227577-14-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE,URIBL_BLOCKED autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663910773405878?= X-GMAIL-MSGID: =?utf-8?q?1747663910773405878?= Update the emulation mode when handling writes to CR0, because toggling CR0.PE switches between Real and Protected Mode, and toggling CR0.PG when EFER.LME=1 switches between Long and Protected Mode. This is likely a benign bug because there is no writeback of state, other than the RIP increment, and when toggling CR0.PE, the CPU has to execute code from a very low memory address. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/emulate.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 2c56d08b426065..5cc3efa0e21c17 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -3290,11 +3290,25 @@ static int em_movbe(struct x86_emulate_ctxt *ctxt) static int em_cr_write(struct x86_emulate_ctxt *ctxt) { - if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val)) + int cr_num = ctxt->modrm_reg; + int r; + + if (ctxt->ops->set_cr(ctxt, cr_num, ctxt->src.val)) return emulate_gp(ctxt, 0); /* Disable writeback. */ ctxt->dst.type = OP_NONE; + + if (cr_num == 0) { + /* + * CR0 write might have updated CR0.PE and/or CR0.PG + * which can affect the cpu's execution mode. 
+		 */
+		r = emulator_recalc_and_set_mode(ctxt);
+		if (r != X86EMUL_CONTINUE)
+			return r;
+	}
+
 	return X86EMUL_CONTINUE;
 }
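To make the mode transitions described above concrete, here is a minimal
standalone model of how CR0.PE, EFLAGS.VM, CR0.PG and EFER.LMA select the
execution mode. It is an illustration only, not KVM's
emulator_recalc_and_set_mode(): the enum and helper names are invented, and
the real emulator further distinguishes 16- vs 32-bit protected mode via CS.D.

/* Standalone model of x86 execution-mode selection; illustrative names. */
#include <stdint.h>

#define X86_CR0_PE    (1ULL << 0)   /* protection enable */
#define X86_CR0_PG    (1ULL << 31)  /* paging enable */
#define X86_EFLAGS_VM (1ULL << 17)  /* virtual-8086 mode flag */
#define EFER_LMA      (1ULL << 10)  /* long mode active */

enum cpu_mode { MODE_REAL, MODE_VM86, MODE_PROT, MODE_LONG };

static enum cpu_mode recalc_mode(uint64_t cr0, uint64_t efer, uint64_t rflags)
{
	if (!(cr0 & X86_CR0_PE))
		return MODE_REAL;	/* CR0.PE=0: real mode */
	if (rflags & X86_EFLAGS_VM)
		return MODE_VM86;	/* PE=1 plus EFLAGS.VM: virtual-8086 */
	if ((cr0 & X86_CR0_PG) && (efer & EFER_LMA))
		return MODE_LONG;	/* paging on with LMA set: long mode */
	return MODE_PROT;		/* otherwise protected mode */
}

int main(void)
{
	/* Toggling CR0.PE flips the result between real and protected mode. */
	return (recalc_mode(0, 0, 0) == MODE_REAL &&
		recalc_mode(X86_CR0_PE, 0, 0) == MODE_PROT) ? 0 : 1;
}

A write that changes CR0.PE or CR0.PG changes the result of such a
recalculation, which is why em_cr_write() must re-derive the mode when
cr_num == 0.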
From patchwork Tue Oct 25 12:42:14 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Yang Zhong, linux-kselftest@vger.kernel.org, Kees Cook,
 Borislav Petkov, Guang Zeng, Wanpeng Li, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Maxim Levitsky, Joerg Roedel, linux-kernel@vger.kernel.org,
 Wei Wang, Jim Mattson, Dave Hansen, Sean Christopherson, Vitaly Kuznetsov,
 x86@kernel.org, Shuah Khan
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 14/23] KVM: x86: smm: number of GPRs in the SMRAM image depends on the image format Date: Tue, 25 Oct 2022 15:42:14 +0300 Message-Id: <20221025124223.227577-15-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663903375135530?= X-GMAIL-MSGID: =?utf-8?q?1747663903375135530?= On 64 bit host, if the guest doesn't have X86_FEATURE_LM, KVM will access 16 gprs to 32-bit smram image, causing out-ouf-bound ram access. On 32 bit host, the rsm_load_state_64/enter_smm_save_state_64 is compiled out, thus access overflow can't happen. Fixes: b443183a25ab61 ("KVM: x86: Reduce the number of emulator GPRs to '8' for 32-bit KVM") Signed-off-by: Maxim Levitsky Reviewed-by: Sean Christopherson --- arch/x86/kvm/emulate.c | 1 + arch/x86/kvm/smm.c | 4 ++-- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 5cc3efa0e21c17..ac6fac25ba25d8 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -2307,6 +2307,7 @@ static int em_lseg(struct x86_emulate_ctxt *ctxt) return rc; } + static int em_rsm(struct x86_emulate_ctxt *ctxt) { if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_MASK) == 0) diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c index 41ca128478fcd4..b290ad14070f72 100644 --- a/arch/x86/kvm/smm.c +++ b/arch/x86/kvm/smm.c @@ -382,7 +382,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED; ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0); - for (i = 0; i < NR_EMULATOR_GPRS; i++) + for (i = 0; i < 8; i++) *reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4); val = GET_SMSTATE(u32, smstate, 0x7fcc); @@ -438,7 +438,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 val, cr0, cr3, cr4; int i, r; - for (i = 0; i < NR_EMULATOR_GPRS; i++) + for (i = 0; i < 16; i++) *reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8); ctxt->_eip = GET_SMSTATE(u64, smstate, 0x7f78); From patchwork Tue Oct 25 12:42:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 10749 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp984351wru; Tue, 25 Oct 2022 05:48:19 -0700 (PDT) X-Google-Smtp-Source: AMsMyM5wwpd9veXkLUXzJCj1ciDWYuSE/7sgJp2YyOFMinEJe72j+GcNd5ThkX9BwerrRbL6NM0M X-Received: by 2002:a17:907:7208:b0:78e:176e:e0c6 with SMTP id dr8-20020a170907720800b0078e176ee0c6mr32471882ejc.594.1666702099757; Tue, 25 Oct 2022 05:48:19 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666702099; cv=none; d=google.com; s=arc-20160816; 
From patchwork Tue Oct 25 12:42:15 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Yang Zhong, linux-kselftest@vger.kernel.org, Kees Cook,
 Borislav Petkov, Guang Zeng, Wanpeng Li, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Maxim Levitsky, Joerg Roedel, linux-kernel@vger.kernel.org,
 Wei Wang, Jim Mattson, Dave Hansen, Sean Christopherson, Vitaly Kuznetsov,
 x86@kernel.org, Shuah Khan
Subject: [PATCH v4 15/23] KVM: x86: smm: check for failures on smm entry
Date: Tue, 25 Oct 2022 15:42:15 +0300
Message-Id: <20221025124223.227577-16-mlevitsk@redhat.com>
In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com>
References: <20221025124223.227577-1-mlevitsk@redhat.com>

In the rare case of a failure on SMM entry, KVM should at least terminate
the VM instead of continuing with inconsistent state.

Suggested-by: Sean Christopherson
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/smm.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
index b290ad14070f72..1191a79cf027e5 100644
--- a/arch/x86/kvm/smm.c
+++ b/arch/x86/kvm/smm.c
@@ -211,11 +211,17 @@ void enter_smm(struct kvm_vcpu *vcpu)
 	 * Give enter_smm() a chance to make ISA-specific changes to the vCPU
 	 * state (e.g. leave guest mode) after we've saved the state into the
 	 * SMM state-save area.
+	 *
+	 * Kill the VM in the unlikely case of failure, because the VM
+	 * can be in undefined state in this case.
 	 */
-	static_call(kvm_x86_enter_smm)(vcpu, buf);
+	if (static_call(kvm_x86_enter_smm)(vcpu, buf))
+		goto error;
 
 	kvm_smm_changed(vcpu, true);
-	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf));
+
+	if (kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf)))
+		goto error;
 
 	if (static_call(kvm_x86_get_nmi_mask)(vcpu))
 		vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;
@@ -235,7 +241,8 @@ void enter_smm(struct kvm_vcpu *vcpu)
 	dt.address = dt.size = 0;
 	static_call(kvm_x86_set_idt)(vcpu, &dt);
 
-	kvm_set_dr(vcpu, 7, DR7_FIXED_1);
+	if (WARN_ON_ONCE(kvm_set_dr(vcpu, 7, DR7_FIXED_1)))
+		goto error;
 
 	cs.selector = (vcpu->arch.smbase >> 4) & 0xffff;
 	cs.base = vcpu->arch.smbase;
@@ -264,11 +271,15 @@ void enter_smm(struct kvm_vcpu *vcpu)
 
 #ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		static_call(kvm_x86_set_efer)(vcpu, 0);
+		if (static_call(kvm_x86_set_efer)(vcpu, 0))
+			goto error;
 #endif
 
 	kvm_update_cpuid_runtime(vcpu);
 	kvm_mmu_reset_context(vcpu);
+	return;
+error:
+	kvm_vm_dead(vcpu->kvm);
 }
 
 static void rsm_set_desc_flags(struct kvm_segment *desc, u32 flags)
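The fix follows the common kernel error-handling idiom: every fallible step
branches to one error label, and the success path returns before reaching it,
so exactly one place puts the VM into the dead state. A condensed,
self-contained sketch of the idiom (the step functions are stand-ins, not
KVM code):

/* Single-error-label idiom, as used by enter_smm() above. */
#include <stdio.h>

static int step_save_state(void)  { return 0; }		/* 0 = success */
static int step_vendor_hook(void) { return -1; }	/* simulate a failure */

static void do_transition(void)
{
	if (step_save_state())
		goto error;
	if (step_vendor_hook())
		goto error;
	return;			/* success path never reaches the label */
error:
	puts("entering failure state (cf. kvm_vm_dead())");
}

int main(void)
{
	do_transition();
	return 0;
}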
From patchwork Tue Oct 25 12:42:16 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Yang Zhong, linux-kselftest@vger.kernel.org, Kees Cook,
 Borislav Petkov, Guang Zeng, Wanpeng Li, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Maxim Levitsky, Joerg Roedel, linux-kernel@vger.kernel.org,
 Wei Wang, Jim Mattson, Dave Hansen, Sean Christopherson, Vitaly Kuznetsov,
 x86@kernel.org, Shuah Khan
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 16/23] KVM: x86: smm: add structs for KVM's smram layout Date: Tue, 25 Oct 2022 15:42:16 +0300 Message-Id: <20221025124223.227577-17-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663834881026402?= X-GMAIL-MSGID: =?utf-8?q?1747663834881026402?= Add structs that will be used to define and read/write the KVM's SMRAM layout, instead of reading/writing to raw offsets. Also document the differences between KVM's SMRAM layout and SMRAM layout that is used by real Intel/AMD cpus. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/smm.c | 94 +++++++++++++++++++++++++++++++++ arch/x86/kvm/smm.h | 127 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 221 insertions(+) diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c index 1191a79cf027e5..01dab9fc3ab4b7 100644 --- a/arch/x86/kvm/smm.c +++ b/arch/x86/kvm/smm.c @@ -8,6 +8,97 @@ #include "cpuid.h" #include "trace.h" +#define CHECK_SMRAM32_OFFSET(field, offset) \ + ASSERT_STRUCT_OFFSET(struct kvm_smram_state_32, field, offset - 0xFE00) + +#define CHECK_SMRAM64_OFFSET(field, offset) \ + ASSERT_STRUCT_OFFSET(struct kvm_smram_state_64, field, offset - 0xFE00) + +static void check_smram_offsets(void) +{ + /* 32 bit SMRAM image */ + CHECK_SMRAM32_OFFSET(reserved1, 0xFE00); + CHECK_SMRAM32_OFFSET(smbase, 0xFEF8); + CHECK_SMRAM32_OFFSET(smm_revision, 0xFEFC); + CHECK_SMRAM32_OFFSET(reserved2, 0xFF00); + CHECK_SMRAM32_OFFSET(cr4, 0xFF14); + CHECK_SMRAM32_OFFSET(reserved3, 0xFF18); + CHECK_SMRAM32_OFFSET(ds, 0xFF2C); + CHECK_SMRAM32_OFFSET(fs, 0xFF38); + CHECK_SMRAM32_OFFSET(gs, 0xFF44); + CHECK_SMRAM32_OFFSET(idtr, 0xFF50); + CHECK_SMRAM32_OFFSET(tr, 0xFF5C); + CHECK_SMRAM32_OFFSET(gdtr, 0xFF6C); + CHECK_SMRAM32_OFFSET(ldtr, 0xFF78); + CHECK_SMRAM32_OFFSET(es, 0xFF84); + CHECK_SMRAM32_OFFSET(cs, 0xFF90); + CHECK_SMRAM32_OFFSET(ss, 0xFF9C); + CHECK_SMRAM32_OFFSET(es_sel, 0xFFA8); + CHECK_SMRAM32_OFFSET(cs_sel, 0xFFAC); + CHECK_SMRAM32_OFFSET(ss_sel, 0xFFB0); + CHECK_SMRAM32_OFFSET(ds_sel, 0xFFB4); + CHECK_SMRAM32_OFFSET(fs_sel, 0xFFB8); + CHECK_SMRAM32_OFFSET(gs_sel, 0xFFBC); + CHECK_SMRAM32_OFFSET(ldtr_sel, 0xFFC0); + CHECK_SMRAM32_OFFSET(tr_sel, 0xFFC4); + CHECK_SMRAM32_OFFSET(dr7, 0xFFC8); + CHECK_SMRAM32_OFFSET(dr6, 0xFFCC); + CHECK_SMRAM32_OFFSET(gprs, 0xFFD0); + CHECK_SMRAM32_OFFSET(eip, 0xFFF0); + CHECK_SMRAM32_OFFSET(eflags, 0xFFF4); + CHECK_SMRAM32_OFFSET(cr3, 0xFFF8); + CHECK_SMRAM32_OFFSET(cr0, 0xFFFC); + + /* 64 bit SMRAM image */ + CHECK_SMRAM64_OFFSET(es, 0xFE00); + CHECK_SMRAM64_OFFSET(cs, 0xFE10); + CHECK_SMRAM64_OFFSET(ss, 0xFE20); + CHECK_SMRAM64_OFFSET(ds, 0xFE30); + CHECK_SMRAM64_OFFSET(fs, 0xFE40); + CHECK_SMRAM64_OFFSET(gs, 0xFE50); + CHECK_SMRAM64_OFFSET(gdtr, 0xFE60); + CHECK_SMRAM64_OFFSET(ldtr, 0xFE70); + 
+	CHECK_SMRAM64_OFFSET(idtr, 0xFE80);
+	CHECK_SMRAM64_OFFSET(tr, 0xFE90);
+	CHECK_SMRAM64_OFFSET(io_restart_rip, 0xFEA0);
+	CHECK_SMRAM64_OFFSET(io_restart_rcx, 0xFEA8);
+	CHECK_SMRAM64_OFFSET(io_restart_rsi, 0xFEB0);
+	CHECK_SMRAM64_OFFSET(io_restart_rdi, 0xFEB8);
+	CHECK_SMRAM64_OFFSET(io_restart_dword, 0xFEC0);
+	CHECK_SMRAM64_OFFSET(reserved1, 0xFEC4);
+	CHECK_SMRAM64_OFFSET(io_inst_restart, 0xFEC8);
+	CHECK_SMRAM64_OFFSET(auto_hlt_restart, 0xFEC9);
+	CHECK_SMRAM64_OFFSET(reserved2, 0xFECA);
+	CHECK_SMRAM64_OFFSET(efer, 0xFED0);
+	CHECK_SMRAM64_OFFSET(svm_guest_flag, 0xFED8);
+	CHECK_SMRAM64_OFFSET(svm_guest_vmcb_gpa, 0xFEE0);
+	CHECK_SMRAM64_OFFSET(svm_guest_virtual_int, 0xFEE8);
+	CHECK_SMRAM64_OFFSET(reserved3, 0xFEF0);
+	CHECK_SMRAM64_OFFSET(smm_revison, 0xFEFC);
+	CHECK_SMRAM64_OFFSET(smbase, 0xFF00);
+	CHECK_SMRAM64_OFFSET(reserved4, 0xFF04);
+	CHECK_SMRAM64_OFFSET(ssp, 0xFF18);
+	CHECK_SMRAM64_OFFSET(svm_guest_pat, 0xFF20);
+	CHECK_SMRAM64_OFFSET(svm_host_efer, 0xFF28);
+	CHECK_SMRAM64_OFFSET(svm_host_cr4, 0xFF30);
+	CHECK_SMRAM64_OFFSET(svm_host_cr3, 0xFF38);
+	CHECK_SMRAM64_OFFSET(svm_host_cr0, 0xFF40);
+	CHECK_SMRAM64_OFFSET(cr4, 0xFF48);
+	CHECK_SMRAM64_OFFSET(cr3, 0xFF50);
+	CHECK_SMRAM64_OFFSET(cr0, 0xFF58);
+	CHECK_SMRAM64_OFFSET(dr7, 0xFF60);
+	CHECK_SMRAM64_OFFSET(dr6, 0xFF68);
+	CHECK_SMRAM64_OFFSET(rflags, 0xFF70);
+	CHECK_SMRAM64_OFFSET(rip, 0xFF78);
+	CHECK_SMRAM64_OFFSET(gprs, 0xFF80);
+
+	BUILD_BUG_ON(sizeof(union kvm_smram) != 512);
+}
+
+#undef CHECK_SMRAM64_OFFSET
+#undef CHECK_SMRAM32_OFFSET
+
+
 void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
 {
 	trace_kvm_smm_transition(vcpu->vcpu_id, vcpu->arch.smbase, entering_smm);
@@ -199,6 +290,8 @@ void enter_smm(struct kvm_vcpu *vcpu)
 	unsigned long cr0;
 	char buf[512];
 
+	check_smram_offsets();
+
 	memset(buf, 0, 512);
 #ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
@@ -449,6 +542,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 	u64 val, cr0, cr3, cr4;
 	int i, r;
 
+
 	for (i = 0; i < 16; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8);
 
diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
index a6795b93ba3002..bf5c7ffeb11efc 100644
--- a/arch/x86/kvm/smm.h
+++ b/arch/x86/kvm/smm.h
@@ -2,6 +2,8 @@
 #ifndef ASM_KVM_SMM_H
 #define ASM_KVM_SMM_H
 
+#include <linux/build_bug.h>
+
 #define GET_SMSTATE(type, buf, offset) \
 	(*(type *)((buf) + (offset) - 0x7e00))
 
@@ -9,6 +11,131 @@
 	*(type *)((buf) + (offset) - 0x7e00) = val
 
 #ifdef CONFIG_KVM_SMM
+
+
+/* 32 bit KVM's emulated SMM layout. Loosely based on Intel's layout */
+
+struct kvm_smm_seg_state_32 {
+	u32 flags;
+	u32 limit;
+	u32 base;
+} __packed;
+
+struct kvm_smram_state_32 {
+	u32 reserved1[62];
+	u32 smbase;
+	u32 smm_revision;
+	u32 reserved2[5];
+	u32 cr4; /* CR4 is not present in Intel/AMD SMRAM image */
+	u32 reserved3[5];
+
+	/*
+	 * Segment state is not present/documented in the Intel/AMD SMRAM image.
+	 * Instead this area on Intel/AMD contains IO/HLT restart flags.
+	 */
+	struct kvm_smm_seg_state_32 ds;
+	struct kvm_smm_seg_state_32 fs;
+	struct kvm_smm_seg_state_32 gs;
+	struct kvm_smm_seg_state_32 idtr; /* IDTR has only base and limit */
+	struct kvm_smm_seg_state_32 tr;
+	u32 reserved;
+	struct kvm_smm_seg_state_32 gdtr; /* GDTR has only base and limit */
+	struct kvm_smm_seg_state_32 ldtr;
+	struct kvm_smm_seg_state_32 es;
+	struct kvm_smm_seg_state_32 cs;
+	struct kvm_smm_seg_state_32 ss;
+
+	u32 es_sel;
+	u32 cs_sel;
+	u32 ss_sel;
+	u32 ds_sel;
+	u32 fs_sel;
+	u32 gs_sel;
+	u32 ldtr_sel;
+	u32 tr_sel;
+
+	u32 dr7;
+	u32 dr6;
+	u32 gprs[8]; /* GPRS in the "natural" X86 order (EAX/ECX/EDX.../EDI) */
+	u32 eip;
+	u32 eflags;
+	u32 cr3;
+	u32 cr0;
+} __packed;
+
+
+/* 64 bit KVM's emulated SMM layout. Based on AMD64 layout */
+
+struct kvm_smm_seg_state_64 {
+	u16 selector;
+	u16 attributes;
+	u32 limit;
+	u64 base;
+};
+
+struct kvm_smram_state_64 {
+
+	struct kvm_smm_seg_state_64 es;
+	struct kvm_smm_seg_state_64 cs;
+	struct kvm_smm_seg_state_64 ss;
+	struct kvm_smm_seg_state_64 ds;
+	struct kvm_smm_seg_state_64 fs;
+	struct kvm_smm_seg_state_64 gs;
+	struct kvm_smm_seg_state_64 gdtr; /* GDTR has only base and limit */
+	struct kvm_smm_seg_state_64 ldtr;
+	struct kvm_smm_seg_state_64 idtr; /* IDTR has only base and limit */
+	struct kvm_smm_seg_state_64 tr;
+
+	/* I/O restart and auto halt restart are not implemented by KVM */
+	u64 io_restart_rip;
+	u64 io_restart_rcx;
+	u64 io_restart_rsi;
+	u64 io_restart_rdi;
+	u32 io_restart_dword;
+	u32 reserved1;
+	u8 io_inst_restart;
+	u8 auto_hlt_restart;
+	u8 reserved2[6];
+
+	u64 efer;
+
+	/*
+	 * Two fields below are implemented on AMD only, to store
+	 * SVM guest vmcb address if the #SMI was received while in the guest mode.
+	 */
+	u64 svm_guest_flag;
+	u64 svm_guest_vmcb_gpa;
+	u64 svm_guest_virtual_int; /* unknown purpose, not implemented */
+
+	u32 reserved3[3];
+	u32 smm_revison;
+	u32 smbase;
+	u32 reserved4[5];
+
+	/* ssp and svm_* fields below are not implemented by KVM */
+	u64 ssp;
+	u64 svm_guest_pat;
+	u64 svm_host_efer;
+	u64 svm_host_cr4;
+	u64 svm_host_cr3;
+	u64 svm_host_cr0;
+
+	u64 cr4;
+	u64 cr3;
+	u64 cr0;
+	u64 dr7;
+	u64 dr6;
+	u64 rflags;
+	u64 rip;
+	u64 gprs[16]; /* GPRS in a reversed "natural" X86 order (R15/R14/../RCX/RAX.) */
+};
+
+union kvm_smram {
+	struct kvm_smram_state_64 smram64;
+	struct kvm_smram_state_32 smram32;
+	u8 bytes[512];
+};
+
 static inline int kvm_inject_smi(struct kvm_vcpu *vcpu)
 {
 	kvm_make_request(KVM_REQ_SMI, vcpu);
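The CHECK_SMRAM*_OFFSET() calls pin every struct field to its architectural
offset at compile time, so a misplaced or mis-sized field breaks the build
instead of silently corrupting the save area. The same technique can be
written in standard C11 with static_assert and offsetof; the struct below is
a toy four-field subset of the 32-bit image's tail, not the real macro or
layout:

/* Compile-time layout checks in the spirit of CHECK_SMRAM32_OFFSET(). */
#include <assert.h>	/* static_assert (C11) */
#include <stddef.h>	/* offsetof */
#include <stdint.h>

struct toy_smram32_tail {
	uint32_t eip;		/* architectural offset 0xFFF0 */
	uint32_t eflags;	/* 0xFFF4 */
	uint32_t cr3;		/* 0xFFF8 */
	uint32_t cr0;		/* 0xFFFC */
};

/* The toy struct models only the last 16 bytes, so rebase at 0xFFF0. */
#define CHECK_TAIL_OFFSET(field, off) \
	static_assert(offsetof(struct toy_smram32_tail, field) == (off) - 0xFFF0, \
		      #field " is at the wrong offset")

CHECK_TAIL_OFFSET(eip,    0xFFF0);
CHECK_TAIL_OFFSET(eflags, 0xFFF4);
CHECK_TAIL_OFFSET(cr3,    0xFFF8);
CHECK_TAIL_OFFSET(cr0,    0xFFFC);

If a field were reordered or its type changed, the corresponding assertion
would fail to compile, which is the same protection check_smram_offsets()
gives the real kvm_smram_state_32/64 structs.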
From patchwork Tue Oct 25 12:42:17 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Yang Zhong, linux-kselftest@vger.kernel.org, Kees Cook,
 Borislav Petkov, Guang Zeng, Wanpeng Li, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Maxim Levitsky, Joerg Roedel, linux-kernel@vger.kernel.org,
 Wei Wang, Jim Mattson, Dave Hansen, Sean Christopherson, Vitaly Kuznetsov,
 x86@kernel.org, Shuah Khan
Peter Anvin" , Maxim Levitsky , Joerg Roedel , linux-kernel@vger.kernel.org, Wei Wang , Jim Mattson , Dave Hansen , Sean Christopherson , Vitaly Kuznetsov , x86@kernel.org, Shuah Khan Subject: [PATCH v4 17/23] KVM: x86: smm: use smram structs in the common code Date: Tue, 25 Oct 2022 15:42:17 +0300 Message-Id: <20221025124223.227577-18-mlevitsk@redhat.com> In-Reply-To: <20221025124223.227577-1-mlevitsk@redhat.com> References: <20221025124223.227577-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747663817655179320?= X-GMAIL-MSGID: =?utf-8?q?1747663817655179320?= Use kvm_smram union instad of raw arrays in the common smm code. Signed-off-by: Maxim Levitsky --- arch/x86/include/asm/kvm_host.h | 5 +++-- arch/x86/kvm/smm.c | 27 ++++++++++++++------------- arch/x86/kvm/svm/svm.c | 8 ++++++-- arch/x86/kvm/vmx/vmx.c | 4 ++-- 4 files changed, 25 insertions(+), 19 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 0b0a82c0bb5cbd..540ff3122bbf8e 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -206,6 +206,7 @@ typedef enum exit_fastpath_completion fastpath_t; struct x86_emulate_ctxt; struct x86_exception; +union kvm_smram; enum x86_intercept; enum x86_intercept_stage; @@ -1611,8 +1612,8 @@ struct kvm_x86_ops { #ifdef CONFIG_KVM_SMM int (*smi_allowed)(struct kvm_vcpu *vcpu, bool for_injection); - int (*enter_smm)(struct kvm_vcpu *vcpu, char *smstate); - int (*leave_smm)(struct kvm_vcpu *vcpu, const char *smstate); + int (*enter_smm)(struct kvm_vcpu *vcpu, union kvm_smram *smram); + int (*leave_smm)(struct kvm_vcpu *vcpu, const union kvm_smram *smram); void (*enable_smi_window)(struct kvm_vcpu *vcpu); #endif diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c index 01dab9fc3ab4b7..e714d43b746cce 100644 --- a/arch/x86/kvm/smm.c +++ b/arch/x86/kvm/smm.c @@ -288,17 +288,18 @@ void enter_smm(struct kvm_vcpu *vcpu) struct kvm_segment cs, ds; struct desc_ptr dt; unsigned long cr0; - char buf[512]; + union kvm_smram smram; check_smram_offsets(); - memset(buf, 0, 512); + memset(smram.bytes, 0, sizeof(smram.bytes)); + #ifdef CONFIG_X86_64 if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) - enter_smm_save_state_64(vcpu, buf); + enter_smm_save_state_64(vcpu, smram.bytes); else #endif - enter_smm_save_state_32(vcpu, buf); + enter_smm_save_state_32(vcpu, smram.bytes); /* * Give enter_smm() a chance to make ISA-specific changes to the vCPU @@ -308,12 +309,12 @@ void enter_smm(struct kvm_vcpu *vcpu) * Kill the VM in the unlikely case of failure, because the VM * can be in undefined state in this case. 
 	 */
-	if (static_call(kvm_x86_enter_smm)(vcpu, buf))
+	if (static_call(kvm_x86_enter_smm)(vcpu, &smram))
 		goto error;
 
 	kvm_smm_changed(vcpu, true);
 
-	if (kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf)))
+	if (kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, &smram, sizeof(smram)))
 		goto error;
 
 	if (static_call(kvm_x86_get_nmi_mask)(vcpu))
@@ -473,7 +474,7 @@ static int rsm_enter_protected_mode(struct kvm_vcpu *vcpu,
 }
 
 static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
-			     const char *smstate)
+			     u8 *smstate)
 {
 	struct kvm_vcpu *vcpu = ctxt->vcpu;
 	struct kvm_segment desc;
@@ -534,7 +535,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 
 #ifdef CONFIG_X86_64
 static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
-			     const char *smstate)
+			     u8 *smstate)
 {
 	struct kvm_vcpu *vcpu = ctxt->vcpu;
 	struct kvm_segment desc;
@@ -606,13 +607,13 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
 {
 	struct kvm_vcpu *vcpu = ctxt->vcpu;
 	unsigned long cr0, cr4, efer;
-	char buf[512];
+	union kvm_smram smram;
 	u64 smbase;
 	int ret;
 
 	smbase = vcpu->arch.smbase;
 
-	ret = kvm_vcpu_read_guest(vcpu, smbase + 0xfe00, buf, sizeof(buf));
+	ret = kvm_vcpu_read_guest(vcpu, smbase + 0xfe00, smram.bytes, sizeof(smram));
 	if (ret < 0)
 		return X86EMUL_UNHANDLEABLE;
 
@@ -666,13 +667,13 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
 	 * state (e.g. enter guest mode) before loading state from the SMM
 	 * state-save area.
 	 */
-	if (static_call(kvm_x86_leave_smm)(vcpu, buf))
+	if (static_call(kvm_x86_leave_smm)(vcpu, &smram))
 		return X86EMUL_UNHANDLEABLE;
 
 #ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		return rsm_load_state_64(ctxt, buf);
+		return rsm_load_state_64(ctxt, smram.bytes);
 	else
 #endif
-		return rsm_load_state_32(ctxt, buf);
+		return rsm_load_state_32(ctxt, smram.bytes);
 }
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2200b8aa727398..4cbb95796dcd63 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4436,12 +4436,14 @@ static int svm_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 	return 1;
 }
 
-static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_host_map map_save;
 	int ret;
 
+	char *smstate = (char *)smram;
+
 	if (!is_guest_mode(vcpu))
 		return 0;
 
@@ -4483,7 +4485,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	return 0;
 }
 
-static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_host_map map, map_save;
@@ -4491,6 +4493,8 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	struct vmcb *vmcb12;
 	int ret;
 
+	const char *smstate = (const char *)smram;
+
 	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
 		return 0;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 107fc035c91b80..8c7890af11fb21 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7914,7 +7914,7 @@ static int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 	return !is_smm(vcpu);
 }
 
-static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+static int vmx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -7935,7 +7935,7 @@ static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	return 0;
 }
 
-static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+static int vmx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int ret;
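The heart of this refactor is the union added in the previous patch: one
512-byte object that is a raw byte array when copied to or from guest memory
and a typed struct when individual fields are accessed. A minimal standalone
sketch of that pattern (toy field set, not the real kvm_smram):

/* Typed-view/raw-view union pattern, reduced to a toy example. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct toy_state64 { uint64_t rip; uint8_t pad[504]; };	/* 512 bytes */
struct toy_state32 { uint32_t eip; uint8_t pad[508]; };	/* 512 bytes */

union toy_smram {
	struct toy_state64 s64;
	struct toy_state32 s32;
	uint8_t bytes[512];	/* raw view for guest-memory reads/writes */
};

int main(void)
{
	union toy_smram smram;

	memset(smram.bytes, 0, sizeof(smram.bytes));	/* raw view */
	smram.s64.rip = 0xfff0;				/* typed view */
	printf("first byte: %#x\n", smram.bytes[0]);	/* 0xf0 on little-endian */
	return 0;
}

Both views alias the same storage, which is why enter_smm() can keep handing
smram.bytes to the save-state helpers while the vendor callbacks receive the
typed union.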