From patchwork Mon Feb 26 08:26:20 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Isaku Yamahata <isaku.yamahata@intel.com>
X-Patchwork-Id: 206413
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
 erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
 chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v19 078/130] KVM: TDX: Implement TDX vcpu enter/exit path
Date: Mon, 26 Feb 2024 00:26:20 -0800
X-Mailer: git-send-email 2.25.1

From: Isaku Yamahata <isaku.yamahata@intel.com>

Implement running a TDX vcpu. Once a vcpu runs on a logical processor
(LP), the TDX vcpu becomes associated with that LP. When the TDX vcpu
moves to another LP, its state on the previous LP must be flushed, and
when the TDX vcpu is destroyed, that flush must be completed and the
CPU memory cache flushed as well. Track which LP each TDX vcpu last ran
on and flush as necessary.

Do nothing on the sched_in event, as TDX does not support pause-loop
exiting.

TDX vcpu execution requires restoring the PMU debug store after
returning to KVM, because the TDX module unconditionally resets its
value. To reuse the existing code, export perf_restore_debug_store.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
v19:
- Moved EXPORT_SYMBOL_GPL(host_xcr0) to the patch that uses it

Changes v15 -> v16:
- Use __seamcall_saved_ret()
- Because struct tdx_module_args doesn't match vcpu.arch.regs, copy the
  regs before/after calling __seamcall_saved_ret()

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    | 21 +++++++++-
 arch/x86/kvm/vmx/tdx.c     | 84 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     | 33 +++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  2 +
 4 files changed, 138 insertions(+), 2 deletions(-)
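[Editor's illustration, not part of the patch: the call site of the
debug-store restore described above is not in this excerpt. A minimal
sketch of where it would sit in the run path, assuming a hypothetical
wrapper name; perf_restore_debug_store() is the existing helper being
exported, and tdx_vcpu_enter_exit() is the function added below.]

static void tdx_vcpu_run_sketch(struct kvm_vcpu *vcpu)
{
	struct vcpu_tdx *tdx = to_tdx(vcpu);

	/* TDH.VP.ENTER via SEAMCALL; clobbers IA32_DS_AREA. */
	tdx_vcpu_enter_exit(tdx);

	/*
	 * The TDX module unconditionally resets the debug store MSR, so
	 * reload the host value before host perf events use it again.
	 */
	perf_restore_debug_store();
}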
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 7258a6304b4b..d72651ce99ac 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -158,6 +158,23 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vmx_vcpu_reset(vcpu, init_event);
 }
 
+static int vt_vcpu_pre_run(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		/* Unconditionally continue to vcpu_run(). */
+		return 1;
+
+	return vmx_vcpu_pre_run(vcpu);
+}
+
+static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_run(vcpu);
+
+	return vmx_vcpu_run(vcpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu)) {
@@ -343,8 +360,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.flush_tlb_gva = vt_flush_tlb_gva,
 	.flush_tlb_guest = vt_flush_tlb_guest,
 
-	.vcpu_pre_run = vmx_vcpu_pre_run,
-	.vcpu_run = vmx_vcpu_run,
+	.vcpu_pre_run = vt_vcpu_pre_run,
+	.vcpu_run = vt_vcpu_run,
 	.handle_exit = vmx_handle_exit,
 	.skip_emulated_instruction = vmx_skip_emulated_instruction,
 	.update_emulated_instruction = vmx_update_emulated_instruction,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 6aff3f7e2488..fdf9196cb592 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -11,6 +11,9 @@
 #include "vmx.h"
 #include "x86.h"
 
+#include <trace/events/kvm.h>
+#include "trace.h"
+
 #undef pr_fmt
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
@@ -491,6 +494,87 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	 */
 }
 
+static noinstr void tdx_vcpu_enter_exit(struct vcpu_tdx *tdx)
+{
+	struct tdx_module_args args;
+
+	/*
+	 * Avoid section mismatch with to_tdx() with KVM_VM_BUG().  The caller
+	 * should call to_tdx().
+	 */
+	struct kvm_vcpu *vcpu = &tdx->vcpu;
+
+	guest_state_enter_irqoff();
+
+	/*
+	 * TODO: optimization:
+	 * - Eliminate copy between args and vcpu->arch.regs.
+	 * - copyin/copyout registers only if (tdx->tdvmcall.regs_mask != 0)
+	 *   which means TDG.VP.VMCALL.
+	 */
+	args = (struct tdx_module_args) {
+		.rcx = tdx->tdvpr_pa,
+#define REG(reg, REG)	.reg = vcpu->arch.regs[VCPU_REGS_ ## REG]
+		REG(rdx, RDX),
+		REG(r8, R8),
+		REG(r9, R9),
+		REG(r10, R10),
+		REG(r11, R11),
+		REG(r12, R12),
+		REG(r13, R13),
+		REG(r14, R14),
+		REG(r15, R15),
+		REG(rbx, RBX),
+		REG(rdi, RDI),
+		REG(rsi, RSI),
+#undef REG
+	};
+
+	tdx->exit_reason.full = __seamcall_saved_ret(TDH_VP_ENTER, &args);
+
+#define REG(reg, REG)	vcpu->arch.regs[VCPU_REGS_ ## REG] = args.reg
+	REG(rcx, RCX);
+	REG(rdx, RDX);
+	REG(r8, R8);
+	REG(r9, R9);
+	REG(r10, R10);
+	REG(r11, R11);
+	REG(r12, R12);
+	REG(r13, R13);
+	REG(r14, R14);
+	REG(r15, R15);
+	REG(rbx, RBX);
+	REG(rdi, RDI);
+	REG(rsi, RSI);
+#undef REG
+
+	WARN_ON_ONCE(!kvm_rebooting &&
+		     (tdx->exit_reason.full & TDX_SW_ERROR) == TDX_SW_ERROR);
+
+	guest_state_exit_irqoff();
+}
+
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (unlikely(!tdx->initialized))
+		return -EINVAL;
+	if (unlikely(vcpu->kvm->vm_bugged)) {
+		tdx->exit_reason.full = TDX_NON_RECOVERABLE_VCPU;
+		return EXIT_FASTPATH_NONE;
+	}
+
+	trace_kvm_entry(vcpu);
+
+	tdx_vcpu_enter_exit(tdx);
+
+	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
+	trace_kvm_exit(vcpu, KVM_ISA_VMX);
+
+	return EXIT_FASTPATH_NONE;
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	WARN_ON_ONCE(root_hpa & ~PAGE_MASK);
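[Editor's illustration, not part of the patch: the TODO in
tdx_vcpu_enter_exit() above suggests copying registers back only when
the exit is a TDG.VP.VMCALL, and only for the GPRs the guest exposed
via regs_mask (bit N covers GPR N, RAX = 0 through R15 = 15). A minimal
sketch of the copy-out half under that assumption; the tdvmcall.regs_mask
field follows later patches in this series, and the helper name is
hypothetical.]

static void tdx_copyout_vmcall_regs(struct kvm_vcpu *vcpu,
				    const struct tdx_module_args *args)
{
	struct vcpu_tdx *tdx = to_tdx(vcpu);
	unsigned long mask = tdx->tdvmcall.regs_mask;	/* assumed field */
	unsigned int reg;

	for_each_set_bit(reg, &mask, 16) {
		switch (reg) {
		case VCPU_REGS_R10: vcpu->arch.regs[reg] = args->r10; break;
		case VCPU_REGS_R11: vcpu->arch.regs[reg] = args->r11; break;
		case VCPU_REGS_R12: vcpu->arch.regs[reg] = args->r12; break;
		case VCPU_REGS_R13: vcpu->arch.regs[reg] = args->r13; break;
		case VCPU_REGS_R14: vcpu->arch.regs[reg] = args->r14; break;
		case VCPU_REGS_R15: vcpu->arch.regs[reg] = args->r15; break;
		default:
			/* other GPRs not shared with the TDX module here */
			break;
		}
	}
}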
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index d822e790e3e5..81d301fbe638 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -27,6 +27,37 @@ struct kvm_tdx {
 	struct page *source_page;
 };
 
+union tdx_exit_reason {
+	struct {
+		/* 31:0 mirror the VMX Exit Reason format */
+		u64 basic		: 16;
+		u64 reserved16		: 1;
+		u64 reserved17		: 1;
+		u64 reserved18		: 1;
+		u64 reserved19		: 1;
+		u64 reserved20		: 1;
+		u64 reserved21		: 1;
+		u64 reserved22		: 1;
+		u64 reserved23		: 1;
+		u64 reserved24		: 1;
+		u64 reserved25		: 1;
+		u64 bus_lock_detected	: 1;
+		u64 enclave_mode	: 1;
+		u64 smi_pending_mtf	: 1;
+		u64 smi_from_vmx_root	: 1;
+		u64 reserved30		: 1;
+		u64 failed_vmentry	: 1;
+
+		/* 63:32 are TDX specific */
+		u64 details_l1		: 8;
+		u64 class		: 8;
+		u64 reserved61_48	: 14;
+		u64 non_recoverable	: 1;
+		u64 error		: 1;
+	};
+	u64 full;
+};
+
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;
 
@@ -34,6 +65,8 @@ struct vcpu_tdx {
 	unsigned long *tdvpx_pa;
 	bool td_vcpu_created;
 
+	union tdx_exit_reason exit_reason;
+
 	bool initialized;
 
 	/*
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 191f2964ec8e..3e29a6fe28ef 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -150,6 +150,7 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
 u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -184,6 +185,7 @@ static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
+static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
 static inline u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { return 0; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
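[Editor's illustration, not part of the patch: how a caller might
decode the raw TDH.VP.ENTER return value through the tdx_exit_reason
union added above. Only the union comes from this patch; the helper is
hypothetical.]

static bool tdx_exit_is_fatal(union tdx_exit_reason exit_reason)
{
	/* Bits 63 and 62: TDX-specific error / non-recoverable flags. */
	if (exit_reason.error || exit_reason.non_recoverable)
		return true;

	/* Bits 31:0 mirror the VMX exit reason format. */
	return exit_reason.failed_vmentry;
}

For example, such a check could run on tdx->exit_reason after
tdx_vcpu_enter_exit() returns, before handing the exit to the generic
exit handlers.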