From patchwork Mon Feb 26 08:26:33 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 206428
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
	chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v19 091/130] KVM: TDX: remove use of struct vcpu_vmx from posted_interrupt.c
Date: Mon, 26 Feb 2024 00:26:33 -0800
Message-Id: <6c7774a44515d6787c9512cb05c3b305e9b5855c.1708933498.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0

From: Isaku Yamahata <isaku.yamahata@intel.com>

As TDX will use posted_interrupt.c, its reliance on struct vcpu_vmx is a
blocker.  Because the members struct pi_desc pi_desc and struct list_head
pi_wakeup_list are only used in posted_interrupt.c, introduce a common
structure, struct vcpu_pi, and give vcpu_vmx and vcpu_tdx the same layout
at the top of the structure.

To minimize the diff size, avoid code conversion such as
vmx->pi_desc => vmx->common->pi_desc.  Instead, add a compile-time check
that the layout is as expected.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/posted_intr.c | 41 ++++++++++++++++++++++++++--------
 arch/x86/kvm/vmx/posted_intr.h | 11 +++++++++
 arch/x86/kvm/vmx/tdx.c         |  1 +
 arch/x86/kvm/vmx/tdx.h         |  8 +++++++
 arch/x86/kvm/vmx/vmx.h         | 14 +++++-----
 5 files changed, 60 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index af662312fd07..b66add9da0f3 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -11,6 +11,7 @@
 #include "posted_intr.h"
 #include "trace.h"
 #include "vmx.h"
+#include "tdx.h"
 
 /*
  * Maintain a per-CPU list of vCPUs that need to be awakened by wakeup_handler()
@@ -31,9 +32,29 @@ static DEFINE_PER_CPU(struct list_head, wakeup_vcpus_on_cpu);
  */
 static DEFINE_PER_CPU(raw_spinlock_t, wakeup_vcpus_on_cpu_lock);
 
+/*
+ * The layout of the head of struct vcpu_vmx and struct vcpu_tdx must match with
+ * struct vcpu_pi.
+ */
+static_assert(offsetof(struct vcpu_pi, pi_desc) ==
+	      offsetof(struct vcpu_vmx, pi_desc));
+static_assert(offsetof(struct vcpu_pi, pi_wakeup_list) ==
+	      offsetof(struct vcpu_vmx, pi_wakeup_list));
+#ifdef CONFIG_INTEL_TDX_HOST
+static_assert(offsetof(struct vcpu_pi, pi_desc) ==
+	      offsetof(struct vcpu_tdx, pi_desc));
+static_assert(offsetof(struct vcpu_pi, pi_wakeup_list) ==
+	      offsetof(struct vcpu_tdx, pi_wakeup_list));
+#endif
+
+static inline struct vcpu_pi *vcpu_to_pi(struct kvm_vcpu *vcpu)
+{
+	return (struct vcpu_pi *)vcpu;
+}
+
 static inline struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
 {
-	return &(to_vmx(vcpu)->pi_desc);
+	return &vcpu_to_pi(vcpu)->pi_desc;
 }
 
 static int pi_try_set_control(struct pi_desc *pi_desc, u64 *pold, u64 new)
@@ -52,8 +73,8 @@ static int pi_try_set_control(struct pi_desc *pi_desc, u64 *pold, u64 new)
 
 void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct vcpu_pi *vcpu_pi = vcpu_to_pi(vcpu);
+	struct pi_desc *pi_desc = &vcpu_pi->pi_desc;
 	struct pi_desc old, new;
 	unsigned long flags;
 	unsigned int dest;
@@ -90,7 +111,7 @@ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
 	 */
 	if (pi_desc->nv == POSTED_INTR_WAKEUP_VECTOR) {
 		raw_spin_lock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
-		list_del(&vmx->pi_wakeup_list);
+		list_del(&vcpu_pi->pi_wakeup_list);
 		raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
 	}
 
@@ -145,15 +166,15 @@ static bool vmx_can_use_vtd_pi(struct kvm *kvm)
  */
 static void pi_enable_wakeup_handler(struct kvm_vcpu *vcpu)
 {
-	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct vcpu_pi *vcpu_pi = vcpu_to_pi(vcpu);
+	struct pi_desc *pi_desc = &vcpu_pi->pi_desc;
 	struct pi_desc old, new;
 	unsigned long flags;
 
 	local_irq_save(flags);
 
 	raw_spin_lock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
-	list_add_tail(&vmx->pi_wakeup_list,
+	list_add_tail(&vcpu_pi->pi_wakeup_list,
 		      &per_cpu(wakeup_vcpus_on_cpu, vcpu->cpu));
 	raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
 
@@ -190,7 +211,8 @@ static bool vmx_needs_pi_wakeup(struct kvm_vcpu *vcpu)
 	 * notification vector is switched to the one that calls
 	 * back to the pi_wakeup_handler() function.
 	 */
-	return vmx_can_use_ipiv(vcpu) || vmx_can_use_vtd_pi(vcpu->kvm);
+	return (vmx_can_use_ipiv(vcpu) && !is_td_vcpu(vcpu)) ||
+	       vmx_can_use_vtd_pi(vcpu->kvm);
 }
 
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
@@ -200,7 +222,8 @@ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
 	if (!vmx_needs_pi_wakeup(vcpu))
 		return;
 
-	if (kvm_vcpu_is_blocking(vcpu) && !vmx_interrupt_blocked(vcpu))
+	if (kvm_vcpu_is_blocking(vcpu) &&
+	    (is_td_vcpu(vcpu) || !vmx_interrupt_blocked(vcpu)))
 		pi_enable_wakeup_handler(vcpu);
 
 	/*
diff --git a/arch/x86/kvm/vmx/posted_intr.h b/arch/x86/kvm/vmx/posted_intr.h
index 26992076552e..2fe8222308b2 100644
--- a/arch/x86/kvm/vmx/posted_intr.h
+++ b/arch/x86/kvm/vmx/posted_intr.h
@@ -94,6 +94,17 @@ static inline bool pi_test_sn(struct pi_desc *pi_desc)
 			     (unsigned long *)&pi_desc->control);
 }
 
+struct vcpu_pi {
+	struct kvm_vcpu	vcpu;
+
+	/* Posted interrupt descriptor */
+	struct pi_desc pi_desc;
+
+	/* Used if this vCPU is waiting for PI notification wakeup. */
+	struct list_head pi_wakeup_list;
+	/* Until here common layout between vcpu_vmx and vcpu_tdx. */
+};
+
 void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu);
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu);
 void pi_wakeup_handler(void);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a5b52aa6d153..1da58c36217c 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -584,6 +584,7 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 
 	fpstate_set_confidential(&vcpu->arch.guest_fpu);
 	vcpu->arch.apic->guest_apic_protected = true;
+	INIT_LIST_HEAD(&tdx->pi_wakeup_list);
 
 	vcpu->arch.efer = EFER_SCE | EFER_LME | EFER_LMA | EFER_NX;
 
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 7f8c78f06508..eaffa7384725 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -4,6 +4,7 @@
 
 #ifdef CONFIG_INTEL_TDX_HOST
 
+#include "posted_intr.h"
 #include "pmu_intel.h"
 #include "tdx_ops.h"
 
@@ -69,6 +70,13 @@ union tdx_exit_reason {
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;
 
+	/* Posted interrupt descriptor */
+	struct pi_desc pi_desc;
+
+	/* Used if this vCPU is waiting for PI notification wakeup. */
+	struct list_head pi_wakeup_list;
+	/* Until here same layout as struct vcpu_pi. */
+
 	unsigned long tdvpr_pa;
 	unsigned long *tdvpx_pa;
 	bool td_vcpu_created;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 79ff54f08fee..634a9a250b95 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -235,6 +235,14 @@ struct nested_vmx {
 
 struct vcpu_vmx {
 	struct kvm_vcpu	vcpu;
+
+	/* Posted interrupt descriptor */
+	struct pi_desc pi_desc;
+
+	/* Used if this vCPU is waiting for PI notification wakeup. */
+	struct list_head pi_wakeup_list;
+	/* Until here same layout as struct vcpu_pi. */
+
 	u8 fail;
 	u8 x2apic_msr_bitmap_mode;
 
@@ -304,12 +312,6 @@ struct vcpu_vmx {
 
 	union vmx_exit_reason exit_reason;
 
-	/* Posted interrupt descriptor */
-	struct pi_desc pi_desc;
-
-	/* Used if this vCPU is waiting for PI notification wakeup. */
-	struct list_head pi_wakeup_list;
-
 	/* Support for a guest hypervisor (nested VMX) */
 	struct nested_vmx nested;
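
[Editor's note] For readers outside the KVM tree: the patch leans on two plain C idioms, a shared field prefix at the top of struct vcpu_vmx and struct vcpu_tdx that mirrors struct vcpu_pi, plus static_assert(offsetof(...)) checks so the cast in vcpu_to_pi() stays safe if either structure is later reordered. Below is a minimal, self-contained userspace sketch of the same pattern, not kernel code; every name in it (vcpu_common, vcpu_a, vcpu_b, to_common, and the stand-in field types) is illustrative only.

/*
 * Standalone sketch of the common-prefix layout trick used by this patch.
 * Builds with any C11 compiler, e.g. gcc -std=c11 sketch.c.  Not kernel code.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct base_vcpu { int id; };               /* stand-in for struct kvm_vcpu */
struct pi_desc { unsigned long control; };  /* stand-in for the PI descriptor */
struct list_node { struct list_node *next, *prev; };

/* Common view of the shared head (analog of struct vcpu_pi). */
struct vcpu_common {
	struct base_vcpu vcpu;
	struct pi_desc pi_desc;
	struct list_node pi_wakeup_list;
};

/* First container type (analog of struct vcpu_vmx): same head, own tail. */
struct vcpu_a {
	struct base_vcpu vcpu;
	struct pi_desc pi_desc;
	struct list_node pi_wakeup_list;
	/* Until here, same layout as struct vcpu_common. */
	unsigned char fail;
};

/* Second container type (analog of struct vcpu_tdx). */
struct vcpu_b {
	struct base_vcpu vcpu;
	struct pi_desc pi_desc;
	struct list_node pi_wakeup_list;
	/* Until here, same layout as struct vcpu_common. */
	unsigned long private_state;
};

/* Build-time layout checks, mirroring the static_asserts added to posted_intr.c. */
static_assert(offsetof(struct vcpu_common, pi_desc) == offsetof(struct vcpu_a, pi_desc),
	      "vcpu_a head must match vcpu_common");
static_assert(offsetof(struct vcpu_common, pi_wakeup_list) == offsetof(struct vcpu_a, pi_wakeup_list),
	      "vcpu_a head must match vcpu_common");
static_assert(offsetof(struct vcpu_common, pi_desc) == offsetof(struct vcpu_b, pi_desc),
	      "vcpu_b head must match vcpu_common");
static_assert(offsetof(struct vcpu_common, pi_wakeup_list) == offsetof(struct vcpu_b, pi_wakeup_list),
	      "vcpu_b head must match vcpu_common");

/*
 * Shared code only sees the embedded base struct; because the base is the
 * first member of every container and the heads match, one cast exposes the
 * common fields (analog of vcpu_to_pi()).
 */
static struct vcpu_common *to_common(struct base_vcpu *vcpu)
{
	return (struct vcpu_common *)vcpu;
}

int main(void)
{
	struct vcpu_a a = { .vcpu = { .id = 1 }, .pi_desc = { .control = 42 } };
	struct vcpu_b b = { .vcpu = { .id = 2 }, .pi_desc = { .control = 7 } };

	/* The same helper works for both container types. */
	printf("a.control = %lu\n", to_common(&a.vcpu)->pi_desc.control);
	printf("b.control = %lu\n", to_common(&b.vcpu)->pi_desc.control);
	return 0;
}

The compile-time checks are what make the "avoid vmx->pi_desc => vmx->common->pi_desc conversion" shortcut safe: if someone later reorders the head of either container structure, the build fails instead of posted_intr.c silently reading through a stale offset. Note that the kernel also builds with -fno-strict-aliasing, which this kind of cross-type cast relies on.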