From patchwork Sat Dec 2 09:31:46 2023
X-Patchwork-Submitter: Yan Zhao <yan.y.zhao@intel.com>
X-Patchwork-Id: 172791
From: Yan Zhao <yan.y.zhao@intel.com>
To: iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: alex.williamson@redhat.com, jgg@nvidia.com, pbonzini@redhat.com,
    seanjc@google.com, joro@8bytes.org, will@kernel.org,
    robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
    dwmw2@infradead.org, yi.l.liu@intel.com, Yan Zhao <yan.y.zhao@intel.com>
Subject: [RFC PATCH 33/42] KVM: x86/mmu: add extra param "kvm" to make_spte()
Date: Sat, 2 Dec 2023 17:31:46 +0800
Message-Id: <20231202093146.15477-1-yan.y.zhao@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20231202091211.13376-1-yan.y.zhao@intel.com>
References: <20231202091211.13376-1-yan.y.zhao@intel.com>
Add an extra param "kvm" to make_spte() to allow the "vcpu" param to be
NULL in the future, so that SPTEs can be generated in non-vCPU context.

"vcpu" is only used in make_spte() to get the memory type mask when
shadow_memtype_mask is true, which applies only to VMX with EPT enabled.
VMX requires "vcpu" only when non-coherent DMA devices are attached, in
order to check the vCPU's CR0.CD and the guest MTRRs. So, when no
non-coherent DMA devices are attached, make_spte() can call
kvm_x86_get_default_mt_mask() to get the default memory type in non-vCPU
context.

This is a preparation patch for the KVM MMU to export TDP later in this
series.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
 arch/x86/kvm/mmu/mmu.c         |  2 +-
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 arch/x86/kvm/mmu/spte.c        | 18 ++++++++++++------
 arch/x86/kvm/mmu/spte.h        |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c     |  2 +-
 5 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e4cae4ff20770..c9b587b30dae3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2939,7 +2939,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 		was_rmapped = 1;
 	}
 
-	wrprot = make_spte(vcpu, &vcpu->arch.mmu->common,
+	wrprot = make_spte(vcpu->kvm, vcpu, &vcpu->arch.mmu->common,
 			   sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
 			   true, host_writable, &spte);
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 054d1a203f0ca..fb4767a9e966e 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -960,7 +960,7 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 	spte = *sptep;
 	host_writable = spte & shadow_host_writable_mask;
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-	make_spte(vcpu, &vcpu->arch.mmu->common, sp, slot, pte_access,
+	make_spte(vcpu->kvm, vcpu, &vcpu->arch.mmu->common, sp, slot, pte_access,
 		  gfn, spte_to_pfn(spte), spte, true, false,
 		  host_writable, &spte);
 	return mmu_spte_update(sptep, spte);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index daeab3b9eee1e..5e73a679464c0 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -138,7 +138,7 @@ bool spte_has_volatile_bits(u64 spte)
 	return false;
 }
 
-bool make_spte(struct kvm_vcpu *vcpu,
+bool make_spte(struct kvm *kvm, struct kvm_vcpu *vcpu,
 	       struct kvm_mmu_common *mmu_common, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn,
@@ -179,7 +179,7 @@ bool make_spte(struct kvm_vcpu *vcpu,
 	 * just to optimize a mode that is anything but performance critical.
 	 */
 	if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) &&
-	    is_nx_huge_page_enabled(vcpu->kvm)) {
+	    is_nx_huge_page_enabled(kvm)) {
 		pte_access &= ~ACC_EXEC_MASK;
 	}
 
@@ -194,9 +194,15 @@ bool make_spte(struct kvm_vcpu *vcpu,
 	if (level > PG_LEVEL_4K)
 		spte |= PT_PAGE_SIZE_MASK;
 
-	if (shadow_memtype_mask)
-		spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
+	if (shadow_memtype_mask) {
+		if (vcpu)
+			spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
 							 kvm_is_mmio_pfn(pfn));
+		else
+			spte |= static_call(kvm_x86_get_default_mt_mask)(kvm,
+							kvm_is_mmio_pfn(pfn));
+	}
+
 	if (host_writable)
 		spte |= shadow_host_writable_mask;
 	else
@@ -225,7 +231,7 @@ bool make_spte(struct kvm_vcpu *vcpu,
 	 * e.g. it's write-tracked (upper-level SPs) or has one or more
 	 * shadow pages and unsync'ing pages is not allowed.
	 */
-	if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) {
+	if (mmu_try_to_unsync_pages(kvm, slot, gfn, can_unsync, prefetch)) {
 		wrprot = true;
 		pte_access &= ~ACC_WRITE_MASK;
 		spte &= ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
@@ -246,7 +252,7 @@ bool make_spte(struct kvm_vcpu *vcpu,
 	if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
 		/* Enforced by kvm_mmu_hugepage_adjust. */
 		WARN_ON_ONCE(level > PG_LEVEL_4K);
-		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
+		mark_page_dirty_in_slot(kvm, slot, gfn);
 	}
 
 	*new_spte = spte;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 4ad19c469bd73..f1532589b7083 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -530,7 +530,7 @@ static inline u64 get_mmio_spte_generation(u64 spte)
 
 bool spte_has_volatile_bits(u64 spte);
 
-bool make_spte(struct kvm_vcpu *vcpu,
+bool make_spte(struct kvm *kvm, struct kvm_vcpu *vcpu,
 	       struct kvm_mmu_common *mmu_common, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 892cf1f5b57a8..a45d1b71cd62a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -964,7 +964,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	if (unlikely(!fault->slot))
 		new_spte = make_mmio_spte(vcpu->kvm, vcpu, iter->gfn, ACC_ALL);
 	else
-		wrprot = make_spte(vcpu, &vcpu->arch.mmu->common, sp, fault->slot,
+		wrprot = make_spte(vcpu->kvm, vcpu, &vcpu->arch.mmu->common, sp, fault->slot,
 				   ACC_ALL, iter->gfn, fault->pfn, iter->old_spte,
 				   fault->prefetch, true, fault->map_writable,
 				   &new_spte);
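
As an illustration of the NULL-"vcpu" convention this patch introduces,
below is a minimal sketch of what a non-vCPU caller of the new signature
could look like. The wrapper name and its argument plumbing are
hypothetical and not added by this patch; only make_spte()'s new
signature and the kvm_x86_get_default_mt_mask() fallback come from the
change above.

/*
 * Hypothetical sketch, not part of this patch: a non-vCPU mapping path
 * (e.g. the TDP export work this series prepares for) passing
 * vcpu == NULL.  make_spte() then derives the memory type via
 * kvm_x86_get_default_mt_mask() instead of reading the vCPU's CR0.CD
 * and guest MTRRs, which is only valid while no non-coherent DMA
 * device is attached.  Returns true if the new SPTE had to be
 * write-protected, mirroring make_spte()'s return value.
 */
static bool example_make_spte_no_vcpu(struct kvm *kvm,
				      struct kvm_mmu_common *mmu_common,
				      struct kvm_mmu_page *sp,
				      const struct kvm_memory_slot *slot,
				      gfn_t gfn, kvm_pfn_t pfn, u64 old_spte,
				      u64 *new_spte)
{
	/* vcpu == NULL selects the kvm_x86_get_default_mt_mask() path. */
	return make_spte(kvm, NULL, mmu_common, sp, slot, ACC_ALL, gfn, pfn,
			 old_spte, false /* prefetch */, true /* can_unsync */,
			 true /* host_writable */, new_spte);
}

Note that the unsync and dirty-logging paths already work without a vCPU
after this patch, since mmu_try_to_unsync_pages() and
mark_page_dirty_in_slot() now take "kvm" directly.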