From patchwork Wed Dec 21 22:24:10 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 35536
Date: Wed, 21 Dec 2022 22:24:10 +0000
In-Reply-To: <20221221222418.3307832-1-bgardon@google.com>
Mime-Version: 1.0
References: <20221221222418.3307832-1-bgardon@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20221221222418.3307832-7-bgardon@google.com>
Subject: [RFC 06/14] KVM: x86/MMU: Clean up Shadow MMU exports
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson,
 David Matlack, Vipin Sharma, Nagareddy Reddy, Ben Gardon

Now that paging_tmpl.h is included from shadow_mmu.c, there's no need to
export many of the functions currently in shadow_mmu.h, so remove those
exports and mark the functions static. This cleans up the interface of the
Shadow MMU and will allow the implementation to keep the details of
rmap_heads internal.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/shadow_mmu.c | 78 +++++++++++++++++++++--------------
 arch/x86/kvm/mmu/shadow_mmu.h | 51 +----------------------
 2 files changed, 48 insertions(+), 81 deletions(-)

diff --git a/arch/x86/kvm/mmu/shadow_mmu.c b/arch/x86/kvm/mmu/shadow_mmu.c
index 86b5fb75d50a..090b4788f7de 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.c
+++ b/arch/x86/kvm/mmu/shadow_mmu.c
@@ -21,6 +21,20 @@
 #include
 #include
+
+struct kvm_shadow_walk_iterator {
+	u64 addr;
+	hpa_t shadow_addr;
+	u64 *sptep;
+	int level;
+	unsigned index;
+};
+
+#define for_each_shadow_entry_using_root(_vcpu, _root, _addr, _walker)	\
+	for (shadow_walk_init_using_root(&(_walker), (_vcpu),		\
+					 (_root), (_addr));		\
+	     shadow_walk_okay(&(_walker));				\
+	     shadow_walk_next(&(_walker)))
+
 #define for_each_shadow_entry(_vcpu, _addr, _walker)		\
 	for (shadow_walk_init(&(_walker), _vcpu, _addr);	\
 	     shadow_walk_okay(&(_walker));			\
 	     shadow_walk_next(&(_walker)))
@@ -227,7 +241,7 @@ static u64 mmu_spte_update_no_track(u64 *sptep, u64 new_spte)
  *
  * Returns true if the TLB needs to be flushed
  */
-bool mmu_spte_update(u64 *sptep, u64 new_spte)
+static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 {
 	bool flush = false;
 	u64 old_spte = mmu_spte_update_no_track(sptep, new_spte);
@@ -311,7 +325,7 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
  * Directly clear spte without caring the state bits of sptep,
  * it is used to set the upper level spte.
  */
-void mmu_spte_clear_no_track(u64 *sptep)
+static void mmu_spte_clear_no_track(u64 *sptep)
 {
 	__update_clear_spte_fast(sptep, 0ull);
 }
@@ -354,7 +368,7 @@ static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
 
 static bool sp_has_gptes(struct kvm_mmu_page *sp);
 
-gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
+static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 {
 	if (sp->role.passthrough)
 		return sp->gfn;
@@ -410,8 +424,8 @@ static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
 		  sp->gfn, kvm_mmu_page_get_gfn(sp, index), gfn);
 }
 
-void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index,
-			     unsigned int access)
+static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index,
+				    unsigned int access)
 {
 	gfn_t gfn = kvm_mmu_page_get_gfn(sp, index);
@@ -627,7 +641,7 @@ struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
 }
 
-bool rmap_can_add(struct kvm_vcpu *vcpu)
+static bool rmap_can_add(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu_memory_cache *mc;
@@ -735,7 +749,7 @@ static u64 *rmap_get_next(struct rmap_iterator *iter)
 	for (_spte_ = rmap_get_first(_rmap_head_, _iter_);	\
 	     _spte_; _spte_ = rmap_get_next(_iter_))
 
-void drop_spte(struct kvm *kvm, u64 *sptep)
+static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
 	u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep);
@@ -1112,7 +1126,7 @@ static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp,
 	pte_list_remove(parent_pte, &sp->parent_ptes);
 }
 
-void drop_parent_pte(struct kvm_mmu_page *sp, u64 *parent_pte)
+static void drop_parent_pte(struct kvm_mmu_page *sp, u64 *parent_pte)
 {
 	mmu_page_remove_parent_pte(sp, parent_pte);
 	mmu_spte_clear_no_track(parent_pte);
 }
@@ -1342,8 +1356,8 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 	} while (!sp->unsync_children);
 }
 
-int mmu_sync_children(struct kvm_vcpu *vcpu, struct kvm_mmu_page *parent,
-		      bool can_yield)
+static int mmu_sync_children(struct kvm_vcpu *vcpu, struct kvm_mmu_page *parent,
+			     bool can_yield)
 {
 	int i;
 	struct kvm_mmu_page *sp;
@@ -1389,7 +1403,7 @@ void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
 	atomic_set(&sp->write_flooding_count, 0);
 }
 
-void clear_sp_write_flooding_count(u64 *spte)
+static void clear_sp_write_flooding_count(u64 *spte)
 {
 	__clear_sp_write_flooding_count(sptep_to_sp(spte));
 }
@@ -1602,9 +1616,9 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
 	return role;
 }
 
-struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu, u64 *sptep,
-					  gfn_t gfn, bool direct,
-					  unsigned int access)
+static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
+						 u64 *sptep, gfn_t gfn,
+						 bool direct, unsigned int access)
 {
 	union kvm_mmu_page_role role;
@@ -1615,8 +1629,9 @@ struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu, u64 *sptep,
 	return kvm_mmu_get_shadow_page(vcpu, gfn, role);
 }
 
-void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
-				 struct kvm_vcpu *vcpu, hpa_t root, u64 addr)
+static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
+					struct kvm_vcpu *vcpu, hpa_t root,
+					u64 addr)
 {
 	iterator->addr = addr;
 	iterator->shadow_addr = root;
@@ -1643,14 +1658,14 @@ void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
 	}
 }
 
-void shadow_walk_init(struct kvm_shadow_walk_iterator *iterator,
-		      struct kvm_vcpu *vcpu, u64 addr)
+static void shadow_walk_init(struct kvm_shadow_walk_iterator *iterator,
+			     struct kvm_vcpu *vcpu, u64 addr)
 {
 	shadow_walk_init_using_root(iterator, vcpu, vcpu->arch.mmu->root.hpa,
 				    addr);
 }
 
-bool shadow_walk_okay(struct kvm_shadow_walk_iterator *iterator)
+static bool shadow_walk_okay(struct kvm_shadow_walk_iterator *iterator)
 {
 	if (iterator->level < PG_LEVEL_4K)
 		return false;
@@ -1672,7 +1687,7 @@ static void __shadow_walk_next(struct kvm_shadow_walk_iterator *iterator,
 	--iterator->level;
 }
 
-void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
+static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
 {
 	__shadow_walk_next(iterator, *iterator->sptep);
 }
@@ -1703,13 +1718,14 @@ static void __link_shadow_page(struct kvm *kvm,
 		mark_unsync(sptep);
 }
 
-void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, struct kvm_mmu_page *sp)
+static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
+			     struct kvm_mmu_page *sp)
 {
 	__link_shadow_page(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, sptep, sp, true);
 }
 
-void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-			  unsigned direct_access)
+static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+				 unsigned direct_access)
 {
 	if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
 		struct kvm_mmu_page *child;
@@ -1731,8 +1747,8 @@ void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 }
 
 /* Returns the number of zapped non-leaf child shadow pages. */
-int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp, u64 *spte,
-		     struct list_head *invalid_list)
+static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp, u64 *spte,
+			    struct list_head *invalid_list)
 {
 	u64 pte;
 	struct kvm_mmu_page *child;
@@ -2144,9 +2160,9 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 	return 0;
 }
 
-int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
-		 u64 *sptep, unsigned int pte_access, gfn_t gfn,
-		 kvm_pfn_t pfn, struct kvm_page_fault *fault)
+static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+			u64 *sptep, unsigned int pte_access, gfn_t gfn,
+			kvm_pfn_t pfn, struct kvm_page_fault *fault)
 {
 	struct kvm_mmu_page *sp = sptep_to_sp(sptep);
 	int level = sp->role.level;
@@ -2251,8 +2267,8 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-void __direct_pte_prefetch(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			   u64 *sptep)
+static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
+				  struct kvm_mmu_page *sp, u64 *sptep)
 {
 	u64 *spte, *start = NULL;
 	int i;
@@ -2788,7 +2804,7 @@ int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level)
 	return leaf;
 }
 
-void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr)
+static void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr)
 {
 	struct kvm_shadow_walk_iterator iterator;
 	u64 spte;
diff --git a/arch/x86/kvm/mmu/shadow_mmu.h b/arch/x86/kvm/mmu/shadow_mmu.h
index 00d2f9abecf0..20c65a0ea52c 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.h
+++ b/arch/x86/kvm/mmu/shadow_mmu.h
@@ -23,32 +23,11 @@ struct pte_list_desc {
 	u64 *sptes[PTE_LIST_EXT];
 };
 
+/* Only exported for debugfs.c. */
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
-struct kvm_shadow_walk_iterator {
-	u64 addr;
-	hpa_t shadow_addr;
-	u64 *sptep;
-	int level;
-	unsigned index;
-};
-
-#define for_each_shadow_entry_using_root(_vcpu, _root, _addr, _walker)	\
-	for (shadow_walk_init_using_root(&(_walker), (_vcpu),		\
-					 (_root), (_addr));		\
-	     shadow_walk_okay(&(_walker));				\
-	     shadow_walk_next(&(_walker)))
-
-bool mmu_spte_update(u64 *sptep, u64 new_spte);
-void mmu_spte_clear_no_track(u64 *sptep);
-gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
-void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index,
-			     unsigned int access);
-
 struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 				  const struct kvm_memory_slot *slot);
-bool rmap_can_add(struct kvm_vcpu *vcpu);
-void drop_spte(struct kvm *kvm, u64 *sptep);
 bool rmap_write_protect(struct kvm_rmap_head *rmap_head, bool pt_protect);
 bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			const struct kvm_memory_slot *slot);
@@ -72,30 +51,8 @@ bool kvm_test_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 		       struct kvm_memory_slot *slot, gfn_t gfn,
 		       int level, pte_t unused);
 
-void drop_parent_pte(struct kvm_mmu_page *sp, u64 *parent_pte);
-int mmu_sync_children(struct kvm_vcpu *vcpu, struct kvm_mmu_page *parent,
-		      bool can_yield);
 void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp);
-void clear_sp_write_flooding_count(u64 *spte);
-
-struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu, u64 *sptep,
-					  gfn_t gfn, bool direct,
-					  unsigned int access);
-
-void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
-				 struct kvm_vcpu *vcpu, hpa_t root, u64 addr);
-void shadow_walk_init(struct kvm_shadow_walk_iterator *iterator,
-		      struct kvm_vcpu *vcpu, u64 addr);
-bool shadow_walk_okay(struct kvm_shadow_walk_iterator *iterator);
-void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator);
-
-void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, struct kvm_mmu_page *sp);
-
-void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-			  unsigned direct_access);
-int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp, u64 *spte,
-		     struct list_head *invalid_list);
 bool __kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 				struct list_head *invalid_list,
 				int *nr_zapped);
@@ -107,11 +64,6 @@ int make_mmu_pages_available(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva);
 
-int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
-		 u64 *sptep, unsigned int pte_access, gfn_t gfn,
-		 kvm_pfn_t pfn, struct kvm_page_fault *fault);
-void __direct_pte_prefetch(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			   u64 *sptep);
 int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 
 u64 *fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, gpa_t gpa, u64 *spte);
@@ -121,7 +73,6 @@ int mmu_alloc_special_roots(struct kvm_vcpu *vcpu);
 
 int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level);
 
-void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr);
 void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
 		       int bytes, struct kvm_page_track_notifier_node *node);
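
The change applied in every hunk above is the usual C idiom of giving a helper
internal linkage once all of its callers live in the same translation unit. A
minimal, self-contained sketch of that idiom (hypothetical file and function
names, not taken from the KVM sources):

/* Before: helper() is declared in a header so other .c files can call it.
 *
 *   util.h:  int helper(int x);
 *   util.c:  int helper(int x) { return x * 2; }
 *
 * After the last external caller moves into util.c, the declaration is
 * dropped from util.h and the definition is marked static, so the symbol
 * no longer escapes the file or its interface.
 */
static int helper(int x)
{
	return x * 2;
}

int caller(void)
{
	return helper(21);	/* helper() is now file-local */
}

The same before/after pattern is what this patch does for the shadow MMU
helpers such as mmu_spte_update() and shadow_walk_init(), now that
paging_tmpl.h is compiled as part of shadow_mmu.c.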