From patchwork Wed Dec 21 22:24:09 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 35535
Date: Wed, 21 Dec 2022 22:24:09 +0000
In-Reply-To: <20221221222418.3307832-1-bgardon@google.com>
References: <20221221222418.3307832-1-bgardon@google.com>
Message-ID: <20221221222418.3307832-6-bgardon@google.com>
Subject: [RFC 05/14] KVM: x86/MMU: Move paging_tmpl.h includes to shadow_mmu.c
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu,
    Sean Christopherson, David Matlack, Vipin Sharma, Nagareddy Reddy,
    Ben Gardon

Move the integration point for paging_tmpl.h to shadow_mmu.c, since
paging_tmpl.h is ostensibly part of the Shadow MMU. This requires making
some of its definitions non-static and exporting the pre-processed
function names through shadow_mmu.h, since they are needed for the MMU
context callbacks in mmu.c.

This will facilitate cleanups in subsequent commits: many of the
functions currently exposed by shadow_mmu.h are needed only by
paging_tmpl.h, and will no longer need to be exported once the includes
live in shadow_mmu.c.

sync_mmio_spte() is only used by paging_tmpl.h, so move it along with
the includes.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c         | 29 -----------------------------
 arch/x86/kvm/mmu/paging_tmpl.h | 11 +++++------
 arch/x86/kvm/mmu/shadow_mmu.c  | 30 ++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/shadow_mmu.h  | 25 ++++++++++++++++++++++++-
 4 files changed, 59 insertions(+), 36 deletions(-)
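Note for reviewers: paging_tmpl.h is a template that is compiled once
per guest paging mode. Each inclusion sets PTTYPE, and an FNAME() macro
inside the header pastes a mode-specific prefix onto every function
name. A simplified sketch of that mechanism -- the real header also
defines per-mode aliases such as pt_element_t and guest_walker:

	/* Inside paging_tmpl.h, keyed off the PTTYPE set by the includer. */
	#if PTTYPE == 64
		#define FNAME(name) paging##64_##name
	#elif PTTYPE == 32
		#define FNAME(name) paging##32_##name
	#elif PTTYPE == PTTYPE_EPT
		#define FNAME(name) ept_##name
	#endif

A single definition such as "int FNAME(page_fault)(...)" therefore
compiles three times, producing paging64_page_fault(),
paging32_page_fault(), and ept_page_fault() -- the pre-processed names
declared in shadow_mmu.h below.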
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a17e8a79e4df..dd97e346c786 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1699,35 +1699,6 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 	return kvm_read_cr3(vcpu);
 }
 
-static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
-			   unsigned int access)
-{
-	if (unlikely(is_mmio_spte(*sptep))) {
-		if (gfn != get_mmio_spte_gfn(*sptep)) {
-			mmu_spte_clear_no_track(sptep);
-			return true;
-		}
-
-		mark_mmio_spte(vcpu, sptep, gfn, access);
-		return true;
-	}
-
-	return false;
-}
-
-#define PTTYPE_EPT 18 /* arbitrary */
-#define PTTYPE PTTYPE_EPT
-#include "paging_tmpl.h"
-#undef PTTYPE
-
-#define PTTYPE 64
-#include "paging_tmpl.h"
-#undef PTTYPE
-
-#define PTTYPE 32
-#include "paging_tmpl.h"
-#undef PTTYPE
-
 static void __reset_rsvds_bits_mask(struct rsvd_bits_validate *rsvd_check,
 				    u64 pa_bits_rsvd, int level, bool nx,
 				    bool gbpages,
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0f6455072055..2e3b2aca64ad 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -787,7 +787,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
  * Returns: 1 if we need to emulate the instruction, 0 otherwise, or
  *	    a negative value on error.
  */
-static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct guest_walker walker;
 	int r;
@@ -897,7 +897,7 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
 	return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);
 }
 
-static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
+void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 {
 	struct kvm_shadow_walk_iterator iterator;
 	struct kvm_mmu_page *sp;
@@ -957,9 +957,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 }
 
 /* Note, @addr is a GPA when gva_to_gpa() translates an L2 GPA to an L1 GPA. */
-static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			       gpa_t addr, u64 access,
-			       struct x86_exception *exception)
+gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, gpa_t addr,
+			u64 access, struct x86_exception *exception)
 {
 	struct guest_walker walker;
 	gpa_t gpa = INVALID_GPA;
@@ -992,7 +991,7 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
  *   0: the sp is synced and no tlb flushing is required
  * > 0: the sp is synced and tlb flushing is required
  */
-static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
 	union kvm_mmu_page_role root_role = vcpu->arch.mmu->root_role;
 	int i;
diff --git a/arch/x86/kvm/mmu/shadow_mmu.c b/arch/x86/kvm/mmu/shadow_mmu.c
index 05d8f5be559d..86b5fb75d50a 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.c
+++ b/arch/x86/kvm/mmu/shadow_mmu.c
@@ -10,6 +10,7 @@
  * Shadow MMU also supports TDP, it's just less scalable. The Shadow and TDP
  * MMUs can cooperate to support nested virtualization on hardware with TDP.
  */
+#include "ioapic.h"
 #include "mmu.h"
 #include "mmu_internal.h"
 #include "mmutrace.h"
@@ -2798,6 +2799,35 @@ void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr)
 	walk_shadow_page_lockless_end(vcpu);
 }
 
+static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
+			   unsigned int access)
+{
+	if (unlikely(is_mmio_spte(*sptep))) {
+		if (gfn != get_mmio_spte_gfn(*sptep)) {
+			mmu_spte_clear_no_track(sptep);
+			return true;
+		}
+
+		mark_mmio_spte(vcpu, sptep, gfn, access);
+		return true;
+	}
+
+	return false;
+}
+
+#define PTTYPE_EPT 18 /* arbitrary */
+#define PTTYPE PTTYPE_EPT
+#include "paging_tmpl.h"
+#undef PTTYPE
+
+#define PTTYPE 64
+#include "paging_tmpl.h"
+#undef PTTYPE
+
+#define PTTYPE 32
+#include "paging_tmpl.h"
+#undef PTTYPE
+
 static bool is_obsolete_root(struct kvm *kvm, hpa_t root_hpa)
 {
 	struct kvm_mmu_page *sp;
diff --git a/arch/x86/kvm/mmu/shadow_mmu.h b/arch/x86/kvm/mmu/shadow_mmu.h
index 83876047c1f5..00d2f9abecf0 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.h
+++ b/arch/x86/kvm/mmu/shadow_mmu.h
@@ -73,7 +73,6 @@ bool kvm_test_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 		       int level, pte_t unused);
 void drop_parent_pte(struct kvm_mmu_page *sp, u64 *parent_pte);
 
-int nonpaging_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp);
 int mmu_sync_children(struct kvm_vcpu *vcpu, struct kvm_mmu_page *parent,
 		      bool can_yield);
 void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp);
@@ -150,4 +149,28 @@ void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
 				    const struct kvm_memory_slot *slot);
 unsigned long mmu_shrink_scan(struct shrinker *shrink,
 			      struct shrink_control *sc);
+
+/* Exports from paging_tmpl.h */
+gpa_t paging32_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+			  gpa_t vaddr, u64 access,
+			  struct x86_exception *exception);
+gpa_t paging64_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+			  gpa_t vaddr, u64 access,
+			  struct x86_exception *exception);
+gpa_t ept_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, gpa_t vaddr,
+		     u64 access, struct x86_exception *exception);
+
+int paging32_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
+int paging64_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
+int ept_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
+
+int paging32_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp);
+int paging64_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp);
+int ept_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp);
+/* Defined in shadow_mmu.c. */
+int nonpaging_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp);
+
+void paging32_invlpg(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root);
+void paging64_invlpg(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root);
+void ept_invlpg(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root);
 #endif /* __KVM_X86_MMU_SHADOW_MMU_H */
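
Note for reviewers: the consumer side lives in mmu.c, where the MMU
context initialization helpers assign these pre-processed names to the
per-context callbacks; that is why the declarations above are needed. A
minimal sketch of the assumed shape (abbreviated, not part of this
diff):

	static void paging64_init_context(struct kvm_mmu *context)
	{
		context->page_fault = paging64_page_fault;
		context->gva_to_gpa = paging64_gva_to_gpa;
		context->sync_page = paging64_sync_page;
		context->invlpg = paging64_invlpg;
	}

The paging32_* and ept_* variants are wired up the same way by their
respective init paths.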