From patchwork Wed Nov 30 23:09:21 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 28057
Reply-To: Sean Christopherson
Date: Wed, 30 Nov 2022 23:09:21 +0000
In-Reply-To: <20221130230934.1014142-1-seanjc@google.com>
References: <20221130230934.1014142-1-seanjc@google.com>
X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog
Message-ID: <20221130230934.1014142-38-seanjc@google.com>
Subject: [PATCH v2 37/50] KVM: VMX: Shuffle support checks and hardware enabling code around
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
    Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Matthew Rosato,
    Eric Farman, Sean Christopherson, Vitaly Kuznetsov, David Woodhouse,
    Paul Durrant
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Oliver Upton,
    Atish Patra, David Hildenbrand, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-kernel@vger.kernel.org, Yuan Yao, Cornelia Huck, Isaku Yamahata,
    Philippe Mathieu-Daudé, Fabiano Rosas, Michael Ellerman, Kai Huang,
    Chao Gao, Thomas Gleixner

Reorder code in vmx.c so that the VMX support check helpers reside above
the hardware enabling helpers, which will allow KVM to perform support
checks during hardware enabling (in a future patch).

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 216 ++++++++++++++++++++---------------------
 1 file changed, 108 insertions(+), 108 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 23b64bf4bfcf..2a8a6e481c76 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2485,79 +2485,6 @@ static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
         }
 }
 
-static int kvm_cpu_vmxon(u64 vmxon_pointer)
-{
-        u64 msr;
-
-        cr4_set_bits(X86_CR4_VMXE);
-
-        asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
-                          _ASM_EXTABLE(1b, %l[fault])
-                          : : [vmxon_pointer] "m"(vmxon_pointer)
-                          : : fault);
-        return 0;
-
-fault:
-        WARN_ONCE(1, "VMXON faulted, MSR_IA32_FEAT_CTL (0x3a) = 0x%llx\n",
-                  rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr) ? 0xdeadbeef : msr);
-        cr4_clear_bits(X86_CR4_VMXE);
-
-        return -EFAULT;
-}
-
-static int vmx_hardware_enable(void)
-{
-        int cpu = raw_smp_processor_id();
-        u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
-        int r;
-
-        if (cr4_read_shadow() & X86_CR4_VMXE)
-                return -EBUSY;
-
-        /*
-         * This can happen if we hot-added a CPU but failed to allocate
-         * VP assist page for it.
-         */
-        if (static_branch_unlikely(&enable_evmcs) &&
-            !hv_get_vp_assist_page(cpu))
-                return -EFAULT;
-
-        intel_pt_handle_vmx(1);
-
-        r = kvm_cpu_vmxon(phys_addr);
-        if (r) {
-                intel_pt_handle_vmx(0);
-                return r;
-        }
-
-        if (enable_ept)
-                ept_sync_global();
-
-        return 0;
-}
-
-static void vmclear_local_loaded_vmcss(void)
-{
-        int cpu = raw_smp_processor_id();
-        struct loaded_vmcs *v, *n;
-
-        list_for_each_entry_safe(v, n, &per_cpu(loaded_vmcss_on_cpu, cpu),
-                                 loaded_vmcss_on_cpu_link)
-                __loaded_vmcs_clear(v);
-}
-
-static void vmx_hardware_disable(void)
-{
-        vmclear_local_loaded_vmcss();
-
-        if (cpu_vmxoff())
-                kvm_spurious_fault();
-
-        hv_reset_evmcs();
-
-        intel_pt_handle_vmx(0);
-}
-
 /*
  * There is no X86_FEATURE for SGX yet, but anyway we need to query CPUID
  * directly instead of going through cpu_has(), to ensure KVM is trapping
@@ -2783,6 +2710,114 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
         return 0;
 }
 
+static bool __init kvm_is_vmx_supported(void)
+{
+        if (!cpu_has_vmx()) {
+                pr_err("CPU doesn't support VMX\n");
+                return false;
+        }
+
+        if (!this_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) ||
+            !this_cpu_has(X86_FEATURE_VMX)) {
+                pr_err("VMX not enabled (by BIOS) in MSR_IA32_FEAT_CTL\n");
+                return false;
+        }
+
+        return true;
+}
+
+static int __init vmx_check_processor_compat(void)
+{
+        struct vmcs_config vmcs_conf;
+        struct vmx_capability vmx_cap;
+
+        if (!kvm_is_vmx_supported())
+                return -EIO;
+
+        if (setup_vmcs_config(&vmcs_conf, &vmx_cap) < 0)
+                return -EIO;
+        if (nested)
+                nested_vmx_setup_ctls_msrs(&vmcs_conf, vmx_cap.ept);
+        if (memcmp(&vmcs_config, &vmcs_conf, sizeof(struct vmcs_config)) != 0) {
+                pr_err("CPU %d feature inconsistency!\n", smp_processor_id());
+                return -EIO;
+        }
+        return 0;
+}
+
+static int kvm_cpu_vmxon(u64 vmxon_pointer)
+{
+        u64 msr;
+
+        cr4_set_bits(X86_CR4_VMXE);
+
+        asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
+                          _ASM_EXTABLE(1b, %l[fault])
+                          : : [vmxon_pointer] "m"(vmxon_pointer)
+                          : : fault);
+        return 0;
+
+fault:
+        WARN_ONCE(1, "VMXON faulted, MSR_IA32_FEAT_CTL (0x3a) = 0x%llx\n",
+                  rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr) ? 0xdeadbeef : msr);
+        cr4_clear_bits(X86_CR4_VMXE);
+
+        return -EFAULT;
+}
+
+static int vmx_hardware_enable(void)
+{
+        int cpu = raw_smp_processor_id();
+        u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
+        int r;
+
+        if (cr4_read_shadow() & X86_CR4_VMXE)
+                return -EBUSY;
+
+        /*
+         * This can happen if we hot-added a CPU but failed to allocate
+         * VP assist page for it.
+         */
+        if (static_branch_unlikely(&enable_evmcs) &&
+            !hv_get_vp_assist_page(cpu))
+                return -EFAULT;
+
+        intel_pt_handle_vmx(1);
+
+        r = kvm_cpu_vmxon(phys_addr);
+        if (r) {
+                intel_pt_handle_vmx(0);
+                return r;
+        }
+
+        if (enable_ept)
+                ept_sync_global();
+
+        return 0;
+}
+
+static void vmclear_local_loaded_vmcss(void)
+{
+        int cpu = raw_smp_processor_id();
+        struct loaded_vmcs *v, *n;
+
+        list_for_each_entry_safe(v, n, &per_cpu(loaded_vmcss_on_cpu, cpu),
+                                 loaded_vmcss_on_cpu_link)
+                __loaded_vmcs_clear(v);
+}
+
+static void vmx_hardware_disable(void)
+{
+        vmclear_local_loaded_vmcss();
+
+        if (cpu_vmxoff())
+                kvm_spurious_fault();
+
+        hv_reset_evmcs();
+
+        intel_pt_handle_vmx(0);
+}
+
 struct vmcs *alloc_vmcs_cpu(bool shadow, int cpu, gfp_t flags)
 {
         int node = cpu_to_node(cpu);
@@ -7468,41 +7503,6 @@ static int vmx_vm_init(struct kvm *kvm)
         return 0;
 }
 
-static bool __init kvm_is_vmx_supported(void)
-{
-        if (!cpu_has_vmx()) {
-                pr_err("CPU doesn't support VMX\n");
-                return false;
-        }
-
-        if (!this_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) ||
-            !this_cpu_has(X86_FEATURE_VMX)) {
-                pr_err("VMX not enabled (by BIOS) in MSR_IA32_FEAT_CTL\n");
-                return false;
-        }
-
-        return true;
-}
-
-static int __init vmx_check_processor_compat(void)
-{
-        struct vmcs_config vmcs_conf;
-        struct vmx_capability vmx_cap;
-
-        if (!kvm_is_vmx_supported())
-                return -EIO;
-
-        if (setup_vmcs_config(&vmcs_conf, &vmx_cap) < 0)
-                return -EIO;
-        if (nested)
-                nested_vmx_setup_ctls_msrs(&vmcs_conf, vmx_cap.ept);
-        if (memcmp(&vmcs_config, &vmcs_conf, sizeof(struct vmcs_config)) != 0) {
-                pr_err("CPU %d feature inconsistency!\n", smp_processor_id());
-                return -EIO;
-        }
-        return 0;
-}
-
 static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 {
         u8 cache;
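
For context on the changelog's "future patch" remark: with kvm_is_vmx_supported()
now defined above vmx_hardware_enable(), the enabling path can invoke the support
check directly, without a forward declaration. The sketch below is purely
illustrative and is NOT part of this patch; the actual follow-up may differ, a
runtime caller would also need the __init annotation dropped from
kvm_is_vmx_supported(), and the -EIO return value is an assumption.

        /* Hypothetical follow-up change, for illustration only (not in this patch). */
        static int vmx_hardware_enable(void)
        {
                int cpu = raw_smp_processor_id();
                u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
                int r;

                /*
                 * Refuse to enable VMX on a CPU that doesn't support it or has it
                 * disabled by firmware; callable here only because the support
                 * check helper now sits above this function in vmx.c.
                 */
                if (!kvm_is_vmx_supported())
                        return -EIO;

                if (cr4_read_shadow() & X86_CR4_VMXE)
                        return -EBUSY;

                /* ... remainder unchanged from vmx_hardware_enable() above ... */
        }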