From patchwork Tue Dec 6 17:35:55 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 30451
Date: Tue, 6 Dec 2022 17:35:55 +0000
Subject: [PATCH 1/7] KVM: x86/MMU: Move pte_list operations to rmap.c
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-2-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

In the interest of eventually splitting the Shadow MMU out of mmu.c, start
by moving some of the operations for manipulating pte_lists out of mmu.c
and into a new pair of files: rmap.c and rmap.h.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/Makefile           |   2 +-
 arch/x86/kvm/debugfs.c          |   1 +
 arch/x86/kvm/mmu/mmu.c          | 152 +-------------------------------
 arch/x86/kvm/mmu/mmu_internal.h |   1 -
 arch/x86/kvm/mmu/rmap.c         | 141 +++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h         |  34 +++++++
 6 files changed, 179 insertions(+), 152 deletions(-)
 create mode 100644 arch/x86/kvm/mmu/rmap.c
 create mode 100644 arch/x86/kvm/mmu/rmap.h

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 80e3fe184d17..9f766eebeddf 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -12,7 +12,7 @@ include $(srctree)/virt/kvm/Makefile.kvm
 kvm-y        += x86.o emulate.o i8259.o irq.o lapic.o \
                 i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
                 hyperv.o debugfs.o mmu/mmu.o mmu/page_track.o \
-                mmu/spte.o
+                mmu/spte.o mmu/rmap.o
 
 ifdef CONFIG_HYPERV
 kvm-y        += kvm_onhyperv.o
diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c
index c1390357126a..29f692ecd6f3 100644
--- a/arch/x86/kvm/debugfs.c
+++ b/arch/x86/kvm/debugfs.c
@@ -9,6 +9,7 @@
 #include "lapic.h"
 #include "mmu.h"
 #include "mmu/mmu_internal.h"
+#include "mmu/rmap.h"
 
 static int vcpu_get_timer_advance_ns(void *data, u64 *val)
 {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4736d7849c60..90b3735d6064 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -26,6 +26,7 @@
 #include "kvm_emulate.h"
 #include "cpuid.h"
 #include "spte.h"
+#include "rmap.h"
 
 #include
 #include
@@ -112,24 +113,6 @@ module_param(dbg, bool, 0644);
 
 #include
 
-/* make pte_list_desc fit well in cache lines */
-#define PTE_LIST_EXT 14
-
-/*
- * Slight optimization of cacheline layout, by putting `more' and `spte_count'
- * at the start; then accessing it will only use one single cacheline for
- * either full (entries==PTE_LIST_EXT) case or entries<=6.
- */
-struct pte_list_desc {
-        struct pte_list_desc *more;
-        /*
-         * Stores number of entries stored in the pte_list_desc. No need to be
-         * u64 but just for easier alignment. When PTE_LIST_EXT, means full.
-         */
-        u64 spte_count;
-        u64 *sptes[PTE_LIST_EXT];
-};
-
 struct kvm_shadow_walk_iterator {
         u64 addr;
         hpa_t shadow_addr;
@@ -155,7 +138,6 @@ struct kvm_shadow_walk_iterator {
         ({ spte = mmu_spte_get_lockless(_walker.sptep); 1; });        \
              __shadow_walk_next(&(_walker), spte))
 
-static struct kmem_cache *pte_list_desc_cache;
 struct kmem_cache *mmu_page_header_cache;
 
 static struct percpu_counter kvm_total_used_mmu_pages;
@@ -674,11 +656,6 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
-static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
-{
-        kmem_cache_free(pte_list_desc_cache, pte_list_desc);
-}
-
 static bool sp_has_gptes(struct kvm_mmu_page *sp);
 
 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
@@ -878,111 +855,6 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
         return slot;
 }
 
-/*
- * About rmap_head encoding:
- *
- * If the bit zero of rmap_head->val is clear, then it points to the only spte
- * in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct
- * pte_list_desc containing more mappings.
- */
-
-/*
- * Returns the number of pointers in the rmap chain, not counting the new one.
- */
-static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
-                        struct kvm_rmap_head *rmap_head)
-{
-        struct pte_list_desc *desc;
-        int count = 0;
-
-        if (!rmap_head->val) {
-                rmap_printk("%p %llx 0->1\n", spte, *spte);
-                rmap_head->val = (unsigned long)spte;
-        } else if (!(rmap_head->val & 1)) {
-                rmap_printk("%p %llx 1->many\n", spte, *spte);
-                desc = kvm_mmu_memory_cache_alloc(cache);
-                desc->sptes[0] = (u64 *)rmap_head->val;
-                desc->sptes[1] = spte;
-                desc->spte_count = 2;
-                rmap_head->val = (unsigned long)desc | 1;
-                ++count;
-        } else {
-                rmap_printk("%p %llx many->many\n", spte, *spte);
-                desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-                while (desc->spte_count == PTE_LIST_EXT) {
-                        count += PTE_LIST_EXT;
-                        if (!desc->more) {
-                                desc->more = kvm_mmu_memory_cache_alloc(cache);
-                                desc = desc->more;
-                                desc->spte_count = 0;
-                                break;
-                        }
-                        desc = desc->more;
-                }
-                count += desc->spte_count;
-                desc->sptes[desc->spte_count++] = spte;
-        }
-        return count;
-}
-
-static void
-pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
-                           struct pte_list_desc *desc, int i,
-                           struct pte_list_desc *prev_desc)
-{
-        int j = desc->spte_count - 1;
-
-        desc->sptes[i] = desc->sptes[j];
-        desc->sptes[j] = NULL;
-        desc->spte_count--;
-        if (desc->spte_count)
-                return;
-        if (!prev_desc && !desc->more)
-                rmap_head->val = 0;
-        else
-                if (prev_desc)
-                        prev_desc->more = desc->more;
-                else
-                        rmap_head->val = (unsigned long)desc->more | 1;
-        mmu_free_pte_list_desc(desc);
-}
-
-static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
-{
-        struct pte_list_desc *desc;
-        struct pte_list_desc *prev_desc;
-        int i;
-
-        if (!rmap_head->val) {
-                pr_err("%s: %p 0->BUG\n", __func__, spte);
-                BUG();
-        } else if (!(rmap_head->val & 1)) {
-                rmap_printk("%p 1->0\n", spte);
-                if ((u64 *)rmap_head->val != spte) {
-                        pr_err("%s: %p 1->BUG\n", __func__, spte);
-                        BUG();
-                }
-                rmap_head->val = 0;
-        } else {
-                rmap_printk("%p many->many\n", spte);
-                desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-                prev_desc = NULL;
-                while (desc) {
-                        for (i = 0; i < desc->spte_count; ++i) {
-                                if (desc->sptes[i] == spte) {
-                                        pte_list_desc_remove_entry(rmap_head,
-                                                        desc, i, prev_desc);
-                                        return;
-                                }
-                        }
-                        prev_desc = desc;
-                        desc = desc->more;
-                }
-                pr_err("%s: %p many->many\n", __func__, spte);
-                BUG();
-        }
-}
-
 static void kvm_zap_one_rmap_spte(struct kvm *kvm,
                                   struct kvm_rmap_head *rmap_head, u64 *sptep)
 {
@@ -1011,7 +883,7 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
                 for (i = 0; i < desc->spte_count; i++)
                         mmu_spte_clear_track_bits(kvm, desc->sptes[i]);
                 next = desc->more;
-                mmu_free_pte_list_desc(desc);
+                free_pte_list_desc(desc);
         }
 out:
         /* rmap_head is meaningless now, remember to reset it */
@@ -1019,26 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
         return true;
 }
 
-unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
-{
-        struct pte_list_desc *desc;
-        unsigned int count = 0;
-
-        if (!rmap_head->val)
-                return 0;
-        else if (!(rmap_head->val & 1))
-                return 1;
-
-        desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-
-        while (desc) {
-                count += desc->spte_count;
-                desc = desc->more;
-        }
-
-        return count;
-}
-
 static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
                                          const struct kvm_memory_slot *slot)
 {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index dbaf6755c5a7..cd1c8f32269d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -166,7 +166,6 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
                                     int min_level);
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn,
                                         u64 pages);
-unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
 static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
new file mode 100644
index 000000000000..daa99dee0709
--- /dev/null
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -0,0 +1,141 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "mmu.h"
+#include "mmu_internal.h"
+#include "mmutrace.h"
+#include "rmap.h"
+#include "spte.h"
+
+#include
+#include
+
+/*
+ * About rmap_head encoding:
+ *
+ * If the bit zero of rmap_head->val is clear, then it points to the only spte
+ * in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct
+ * pte_list_desc containing more mappings.
+ */
+
+/*
+ * Returns the number of pointers in the rmap chain, not counting the new one.
+ */
+int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
+                 struct kvm_rmap_head *rmap_head)
+{
+        struct pte_list_desc *desc;
+        int count = 0;
+
+        if (!rmap_head->val) {
+                rmap_printk("%p %llx 0->1\n", spte, *spte);
+                rmap_head->val = (unsigned long)spte;
+        } else if (!(rmap_head->val & 1)) {
+                rmap_printk("%p %llx 1->many\n", spte, *spte);
+                desc = kvm_mmu_memory_cache_alloc(cache);
+                desc->sptes[0] = (u64 *)rmap_head->val;
+                desc->sptes[1] = spte;
+                desc->spte_count = 2;
+                rmap_head->val = (unsigned long)desc | 1;
+                ++count;
+        } else {
+                rmap_printk("%p %llx many->many\n", spte, *spte);
+                desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+                while (desc->spte_count == PTE_LIST_EXT) {
+                        count += PTE_LIST_EXT;
+                        if (!desc->more) {
+                                desc->more = kvm_mmu_memory_cache_alloc(cache);
+                                desc = desc->more;
+                                desc->spte_count = 0;
+                                break;
+                        }
+                        desc = desc->more;
+                }
+                count += desc->spte_count;
+                desc->sptes[desc->spte_count++] = spte;
+        }
+        return count;
+}
+
+void free_pte_list_desc(struct pte_list_desc *pte_list_desc)
+{
+        kmem_cache_free(pte_list_desc_cache, pte_list_desc);
+}
+
+static void
+pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
+                           struct pte_list_desc *desc, int i,
+                           struct pte_list_desc *prev_desc)
+{
+        int j = desc->spte_count - 1;
+
+        desc->sptes[i] = desc->sptes[j];
+        desc->sptes[j] = NULL;
+        desc->spte_count--;
+        if (desc->spte_count)
+                return;
+        if (!prev_desc && !desc->more)
+                rmap_head->val = 0;
+        else
+                if (prev_desc)
+                        prev_desc->more = desc->more;
+                else
+                        rmap_head->val = (unsigned long)desc->more | 1;
+        free_pte_list_desc(desc);
+}
+
+void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
+{
+        struct pte_list_desc *desc;
+        struct pte_list_desc *prev_desc;
+        int i;
+
+        if (!rmap_head->val) {
+                pr_err("%s: %p 0->BUG\n", __func__, spte);
+                BUG();
+        } else if (!(rmap_head->val & 1)) {
+                rmap_printk("%p 1->0\n", spte);
+                if ((u64 *)rmap_head->val != spte) {
+                        pr_err("%s: %p 1->BUG\n", __func__, spte);
+                        BUG();
+                }
+                rmap_head->val = 0;
+        } else {
+                rmap_printk("%p many->many\n", spte);
+                desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+                prev_desc = NULL;
+                while (desc) {
+                        for (i = 0; i < desc->spte_count; ++i) {
+                                if (desc->sptes[i] == spte) {
+                                        pte_list_desc_remove_entry(rmap_head,
+                                                        desc, i, prev_desc);
+                                        return;
+                                }
+                        }
+                        prev_desc = desc;
+                        desc = desc->more;
+                }
+                pr_err("%s: %p many->many\n", __func__, spte);
+                BUG();
+        }
+}
+
+unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
+{
+        struct pte_list_desc *desc;
+        unsigned int count = 0;
+
+        if (!rmap_head->val)
+                return 0;
+        else if (!(rmap_head->val & 1))
+                return 1;
+
+        desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+
+        while (desc) {
+                count += desc->spte_count;
+                desc = desc->more;
+        }
+
+        return count;
+}
+
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
new file mode 100644
index 000000000000..059765b6e066
--- /dev/null
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#ifndef __KVM_X86_MMU_RMAP_H
+#define __KVM_X86_MMU_RMAP_H
+
+#include
+
+/* make pte_list_desc fit well in cache lines */
+#define PTE_LIST_EXT 14
+
+/*
+ * Slight optimization of cacheline layout, by putting `more' and `spte_count'
+ * at the start; then accessing it will only use one single cacheline for
+ * either full (entries==PTE_LIST_EXT) case or entries<=6.
+ */
+struct pte_list_desc {
+        struct pte_list_desc *more;
+        /*
+         * Stores number of entries stored in the pte_list_desc. No need to be
+         * u64 but just for easier alignment. When PTE_LIST_EXT, means full.
+         */
+        u64 spte_count;
+        u64 *sptes[PTE_LIST_EXT];
+};
+
+static struct kmem_cache *pte_list_desc_cache;
+
+int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
+                 struct kvm_rmap_head *rmap_head);
+void free_pte_list_desc(struct pte_list_desc *pte_list_desc);
+void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head);
+unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
+
+#endif /* __KVM_X86_MMU_RMAP_H */
From patchwork Tue Dec 6 17:35:56 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 30455
Date: Tue, 6 Dec 2022 17:35:56 +0000
Subject: [PATCH 2/7] KVM: x86/MMU: Move rmap_iterator to rmap.h
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-3-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

In continuing to factor the rmap out of mmu.c, move the rmap_iterator
and associated functions and macros into rmap.(c|h).

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c  | 76 -----------------------------------------
 arch/x86/kvm/mmu/rmap.c | 61 +++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h | 18 ++++++++++
 3 files changed, 79 insertions(+), 76 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 90b3735d6064..c3a7f443a213 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -932,82 +932,6 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
         pte_list_remove(spte, rmap_head);
 }
 
-/*
- * Used by the following functions to iterate through the sptes linked by a
- * rmap. All fields are private and not assumed to be used outside.
- */
-struct rmap_iterator {
-        /* private fields */
-        struct pte_list_desc *desc;     /* holds the sptep if not NULL */
-        int pos;                        /* index of the sptep */
-};
-
-/*
- * Iteration must be started by this function. This should also be used after
- * removing/dropping sptes from the rmap link because in such cases the
- * information in the iterator may not be valid.
- *
- * Returns sptep if found, NULL otherwise.
- */
-static u64 *rmap_get_first(struct kvm_rmap_head *rmap_head,
-                           struct rmap_iterator *iter)
-{
-        u64 *sptep;
-
-        if (!rmap_head->val)
-                return NULL;
-
-        if (!(rmap_head->val & 1)) {
-                iter->desc = NULL;
-                sptep = (u64 *)rmap_head->val;
-                goto out;
-        }
-
-        iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-        iter->pos = 0;
-        sptep = iter->desc->sptes[iter->pos];
-out:
-        BUG_ON(!is_shadow_present_pte(*sptep));
-        return sptep;
-}
-
-/*
- * Must be used with a valid iterator: e.g. after rmap_get_first().
- *
- * Returns sptep if found, NULL otherwise.
- */
-static u64 *rmap_get_next(struct rmap_iterator *iter)
-{
-        u64 *sptep;
-
-        if (iter->desc) {
-                if (iter->pos < PTE_LIST_EXT - 1) {
-                        ++iter->pos;
-                        sptep = iter->desc->sptes[iter->pos];
-                        if (sptep)
-                                goto out;
-                }
-
-                iter->desc = iter->desc->more;
-
-                if (iter->desc) {
-                        iter->pos = 0;
-                        /* desc->sptes[0] cannot be NULL */
-                        sptep = iter->desc->sptes[iter->pos];
-                        goto out;
-                }
-        }
-
-        return NULL;
-out:
-        BUG_ON(!is_shadow_present_pte(*sptep));
-        return sptep;
-}
-
-#define for_each_rmap_spte(_rmap_head_, _iter_, _spte_)                 \
-        for (_spte_ = rmap_get_first(_rmap_head_, _iter_);              \
-             _spte_; _spte_ = rmap_get_next(_iter_))
-
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
         u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep);
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index daa99dee0709..c3bad366b627 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -139,3 +139,64 @@ unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
 
         return count;
 }
+
+/*
+ * Iteration must be started by this function. This should also be used after
+ * removing/dropping sptes from the rmap link because in such cases the
+ * information in the iterator may not be valid.
+ *
+ * Returns sptep if found, NULL otherwise.
+ */
+u64 *rmap_get_first(struct kvm_rmap_head *rmap_head, struct rmap_iterator *iter)
+{
+        u64 *sptep;
+
+        if (!rmap_head->val)
+                return NULL;
+
+        if (!(rmap_head->val & 1)) {
+                iter->desc = NULL;
+                sptep = (u64 *)rmap_head->val;
+                goto out;
+        }
+
+        iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+        iter->pos = 0;
+        sptep = iter->desc->sptes[iter->pos];
+out:
+        BUG_ON(!is_shadow_present_pte(*sptep));
+        return sptep;
+}
+
+/*
+ * Must be used with a valid iterator: e.g. after rmap_get_first().
+ *
+ * Returns sptep if found, NULL otherwise.
+ */
+u64 *rmap_get_next(struct rmap_iterator *iter)
+{
+        u64 *sptep;
+
+        if (iter->desc) {
+                if (iter->pos < PTE_LIST_EXT - 1) {
+                        ++iter->pos;
+                        sptep = iter->desc->sptes[iter->pos];
+                        if (sptep)
+                                goto out;
+                }
+
+                iter->desc = iter->desc->more;
+
+                if (iter->desc) {
+                        iter->pos = 0;
+                        /* desc->sptes[0] cannot be NULL */
+                        sptep = iter->desc->sptes[iter->pos];
+                        goto out;
+                }
+        }
+
+        return NULL;
+out:
+        BUG_ON(!is_shadow_present_pte(*sptep));
+        return sptep;
+}
+
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 059765b6e066..13b265f3a95e 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -31,4 +31,22 @@ void free_pte_list_desc(struct pte_list_desc *pte_list_desc);
 void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
+/*
+ * Used by the following functions to iterate through the sptes linked by a
+ * rmap. All fields are private and not assumed to be used outside.
+ */
+struct rmap_iterator {
+        /* private fields */
+        struct pte_list_desc *desc;     /* holds the sptep if not NULL */
+        int pos;                        /* index of the sptep */
+};
+
+u64 *rmap_get_first(struct kvm_rmap_head *rmap_head,
+                    struct rmap_iterator *iter);
+u64 *rmap_get_next(struct rmap_iterator *iter);
+
+#define for_each_rmap_spte(_rmap_head_, _iter_, _spte_)                 \
+        for (_spte_ = rmap_get_first(_rmap_head_, _iter_);              \
+             _spte_; _spte_ = rmap_get_next(_iter_))
+
 #endif /* __KVM_X86_MMU_RMAP_H */
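As a concrete illustration of the interface this patch exports, a caller inside mmu.c could now walk a pte list through rmap.h as in the sketch below. The function name and body are hypothetical and not part of the series; it effectively re-implements pte_list_count() by way of the iterator, which is only meant to show that the two views of the chain agree.

#include "rmap.h"

/* Illustrative only: count the sptes in one rmap chain via the iterator. */
static unsigned int count_rmap_sptes(struct kvm_rmap_head *rmap_head)
{
        struct rmap_iterator iter;
        unsigned int count = 0;
        u64 *sptep;

        /* Every sptep the iterator returns is a present shadow PTE. */
        for_each_rmap_spte(rmap_head, &iter, sptep)
                count++;

        return count;
}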
From patchwork Tue Dec 6 17:35:57 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 30452
Date: Tue, 6 Dec 2022 17:35:57 +0000
Subject: [PATCH 3/7] KVM: x86/MMU: Move gfn_to_rmap() to rmap.c
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-4-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

Move gfn_to_rmap() to rmap.c. While the function is not part of
manipulating the rmap, it is the main way that the MMU gets pointers to
the rmaps.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c  | 9 ---------
 arch/x86/kvm/mmu/rmap.c | 8 ++++++++
 arch/x86/kvm/mmu/rmap.h | 2 ++
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c3a7f443a213..f8d7201210c8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -891,15 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
         return true;
 }
 
-static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
-                                         const struct kvm_memory_slot *slot)
-{
-        unsigned long idx;
-
-        idx = gfn_to_index(gfn, slot->base_gfn, level);
-        return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
-}
-
 static bool rmap_can_add(struct kvm_vcpu *vcpu)
 {
         struct kvm_mmu_memory_cache *mc;
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index c3bad366b627..272e89147d96 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -200,3 +200,11 @@ u64 *rmap_get_next(struct rmap_iterator *iter)
         return sptep;
 }
 
+struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
+                                  const struct kvm_memory_slot *slot)
+{
+        unsigned long idx;
+
+        idx = gfn_to_index(gfn, slot->base_gfn, level);
+        return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 13b265f3a95e..45732eda57e5 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -49,4 +49,6 @@ u64 *rmap_get_next(struct rmap_iterator *iter);
         for (_spte_ = rmap_get_first(_rmap_head_, _iter_);              \
              _spte_; _spte_ = rmap_get_next(_iter_))
 
+struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
+                                  const struct kvm_memory_slot *slot);
 #endif /* __KVM_X86_MMU_RMAP_H */
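A hypothetical caller can now combine the pieces moved so far: look up the 4K rmap head for a gfn in a slot with gfn_to_rmap(), then walk it with for_each_rmap_spte(). The sketch below is illustrative only and is not part of the series.

#include "rmap.h"

/* Illustrative only: does any shadow PTE map this gfn at 4K in this slot? */
static bool gfn_has_4k_mapping(gfn_t gfn, const struct kvm_memory_slot *slot)
{
        struct kvm_rmap_head *rmap_head = gfn_to_rmap(gfn, PG_LEVEL_4K, slot);
        struct rmap_iterator iter;
        u64 *sptep;

        for_each_rmap_spte(rmap_head, &iter, sptep)
                return true;    /* at least one mapping exists */

        return false;
}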
From patchwork Tue Dec 6 17:35:58 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 30453
Date: Tue, 6 Dec 2022 17:35:58 +0000
Subject: [PATCH 4/7] KVM: x86/MMU: Move rmap_can_add() and rmap_remove() to
 rmap.c
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-5-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

Move the functions to check if an entry can be added to an rmap and for
removing elements from an rmap to rmap.(c|h).

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 34 +--------------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  1 +
 arch/x86/kvm/mmu/rmap.c         | 32 +++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h         |  3 +++
 4 files changed, 37 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f8d7201210c8..52e487d89d54 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -658,7 +658,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 
 static bool sp_has_gptes(struct kvm_mmu_page *sp);
 
-static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
+gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 {
         if (sp->role.passthrough)
                 return sp->gfn;
@@ -891,38 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
         return true;
 }
 
-static bool rmap_can_add(struct kvm_vcpu *vcpu)
-{
-        struct kvm_mmu_memory_cache *mc;
-
-        mc = &vcpu->arch.mmu_pte_list_desc_cache;
-        return kvm_mmu_memory_cache_nr_free_objects(mc);
-}
-
-static void rmap_remove(struct kvm *kvm, u64 *spte)
-{
-        struct kvm_memslots *slots;
-        struct kvm_memory_slot *slot;
-        struct kvm_mmu_page *sp;
-        gfn_t gfn;
-        struct kvm_rmap_head *rmap_head;
-
-        sp = sptep_to_sp(spte);
-        gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte));
-
-        /*
-         * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
-         * so we have to determine which memslots to use based on context
-         * information in sp->role.
-         */
-        slots = kvm_memslots_for_spte_role(kvm, sp->role);
-
-        slot = __gfn_to_memslot(slots, gfn);
-        rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-
-        pte_list_remove(spte, rmap_head);
-}
-
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
         u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index cd1c8f32269d..3de703c2a5d4 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -318,4 +318,5 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 272e89147d96..6833676aa9ea 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -208,3 +208,35 @@ struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
         idx = gfn_to_index(gfn, slot->base_gfn, level);
         return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
 }
+
+bool rmap_can_add(struct kvm_vcpu *vcpu)
+{
+        struct kvm_mmu_memory_cache *mc;
+
+        mc = &vcpu->arch.mmu_pte_list_desc_cache;
+        return kvm_mmu_memory_cache_nr_free_objects(mc);
+}
+
+void rmap_remove(struct kvm *kvm, u64 *spte)
+{
+        struct kvm_memslots *slots;
+        struct kvm_memory_slot *slot;
+        struct kvm_mmu_page *sp;
+        gfn_t gfn;
+        struct kvm_rmap_head *rmap_head;
+
+        sp = sptep_to_sp(spte);
+        gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte));
+
+        /*
+         * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
+         * so we have to determine which memslots to use based on context
+         * information in sp->role.
+         */
+        slots = kvm_memslots_for_spte_role(kvm, sp->role);
+
+        slot = __gfn_to_memslot(slots, gfn);
+        rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
+
+        pte_list_remove(spte, rmap_head);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 45732eda57e5..81df186ba3c3 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -51,4 +51,7 @@ u64 *rmap_get_next(struct rmap_iterator *iter);
 
 struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
                                   const struct kvm_memory_slot *slot);
+
+bool rmap_can_add(struct kvm_vcpu *vcpu);
+void rmap_remove(struct kvm *kvm, u64 *spte);
 #endif /* __KVM_X86_MMU_RMAP_H */
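The calling convention this split assumes can be sketched as follows: a vCPU-context path checks rmap_can_add() before linking a new mapping, since pte_list_add() draws descriptors from the vCPU's mmu_pte_list_desc_cache, while rmap_remove() needs only the struct kvm because it derives the memslot from sp->role. The helper below is hypothetical, including its name and error convention, and is not part of the series.

#include "rmap.h"

/* Illustrative only: link a new sptep into the rmap for (gfn, level). */
static int toy_link_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
                         const struct kvm_memory_slot *slot, int level)
{
        struct kvm_rmap_head *rmap_head;

        /* The caller is expected to have topped up the cache already. */
        if (!rmap_can_add(vcpu))
                return -ENOSPC;

        rmap_head = gfn_to_rmap(gfn, level, slot);
        return pte_list_add(&vcpu->arch.mmu_pte_list_desc_cache, sptep,
                            rmap_head);
}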
From patchwork Tue Dec 6 17:35:59 2022
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 30463
Date: Tue, 6 Dec 2022 17:35:59 +0000
Subject: [PATCH 5/7] KVM: x86/MMU: Move the rmap walk iterator out of mmu.c
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Vipin Sharma, Ben Gardon
Message-ID: <20221206173601.549281-6-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>

Move slot_rmap_walk_iterator and its associated functions out of mmu.c
to rmap.(c|h).

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c  | 73 -----------------------------------------
 arch/x86/kvm/mmu/rmap.c | 43 ++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h | 36 ++++++++++++++++++++
 3 files changed, 79 insertions(+), 73 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 52e487d89d54..88da2abc2375 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1198,79 +1198,6 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
         return need_flush;
 }
 
-struct slot_rmap_walk_iterator {
-        /* input fields. */
-        const struct kvm_memory_slot *slot;
-        gfn_t start_gfn;
-        gfn_t end_gfn;
-        int start_level;
-        int end_level;
-
-        /* output fields. */
-        gfn_t gfn;
-        struct kvm_rmap_head *rmap;
-        int level;
-
-        /* private field. */
-        struct kvm_rmap_head *end_rmap;
-};
-
-static void
-rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level)
-{
-        iterator->level = level;
-        iterator->gfn = iterator->start_gfn;
-        iterator->rmap = gfn_to_rmap(iterator->gfn, level, iterator->slot);
-        iterator->end_rmap = gfn_to_rmap(iterator->end_gfn, level, iterator->slot);
-}
-
-static void
-slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
-                    const struct kvm_memory_slot *slot, int start_level,
-                    int end_level, gfn_t start_gfn, gfn_t end_gfn)
-{
-        iterator->slot = slot;
-        iterator->start_level = start_level;
-        iterator->end_level = end_level;
-        iterator->start_gfn = start_gfn;
-        iterator->end_gfn = end_gfn;
-
-        rmap_walk_init_level(iterator, iterator->start_level);
-}
-
-static bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator)
-{
-        return !!iterator->rmap;
-}
-
-static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
-{
-        while (++iterator->rmap <= iterator->end_rmap) {
-                iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level));
-
-                if (iterator->rmap->val)
-                        return;
-        }
-
-        if (++iterator->level > iterator->end_level) {
-                iterator->rmap = NULL;
-                return;
-        }
-
-        rmap_walk_init_level(iterator, iterator->level);
-}
-
-#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_,   \
-           _start_gfn, _end_gfn, _iter_)                               \
-        for (slot_rmap_walk_init(_iter_, _slot_, _start_level_,        \
-                                 _end_level_, _start_gfn, _end_gfn);   \
-             slot_rmap_walk_okay(_iter_);                               \
-             slot_rmap_walk_next(_iter_))
-
-typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
-                               struct kvm_memory_slot *slot, gfn_t gfn,
-                               int level, pte_t pte);
-
 static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
                                                  struct kvm_gfn_range *range,
                                                  rmap_handler_t handler)
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 6833676aa9ea..91af5b32cffb 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -240,3 +240,46 @@ void rmap_remove(struct kvm *kvm, u64 *spte)
 
         pte_list_remove(spte, rmap_head);
 }
+
+void rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level)
+{
+        iterator->level = level;
+        iterator->gfn = iterator->start_gfn;
+        iterator->rmap = gfn_to_rmap(iterator->gfn, level, iterator->slot);
+        iterator->end_rmap = gfn_to_rmap(iterator->end_gfn, level, iterator->slot);
+}
+
+void slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
+                         const struct kvm_memory_slot *slot, int start_level,
+                         int end_level, gfn_t start_gfn, gfn_t end_gfn)
+{
+        iterator->slot = slot;
+        iterator->start_level = start_level;
+        iterator->end_level = end_level;
+        iterator->start_gfn = start_gfn;
+        iterator->end_gfn = end_gfn;
+
+        rmap_walk_init_level(iterator, iterator->start_level);
+}
+
+bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator)
+{
+        return !!iterator->rmap;
+}
+
+void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
+{
+        while (++iterator->rmap <= iterator->end_rmap) {
+                iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level));
+
+                if (iterator->rmap->val)
+                        return;
+        }
+
+        if (++iterator->level > iterator->end_level) {
+                iterator->rmap = NULL;
+                return;
+        }
+
+        rmap_walk_init_level(iterator, iterator->level);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 81df186ba3c3..dc4bf7e609ec 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -54,4 +54,40 @@ struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 
 bool rmap_can_add(struct kvm_vcpu *vcpu);
 void rmap_remove(struct kvm *kvm, u64 *spte);
+
+struct slot_rmap_walk_iterator {
+        /* input fields. */
+        const struct kvm_memory_slot *slot;
+        gfn_t start_gfn;
+        gfn_t end_gfn;
+        int start_level;
+        int end_level;
+
+        /* output fields. */
+        gfn_t gfn;
+        struct kvm_rmap_head *rmap;
+        int level;
+
+        /* private field. */
+        struct kvm_rmap_head *end_rmap;
+};
+
+void rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level);
+void slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
+                         const struct kvm_memory_slot *slot, int start_level,
+                         int end_level, gfn_t start_gfn, gfn_t end_gfn);
+bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator);
+void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator);
+
+#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_,   \
+           _start_gfn, _end_gfn, _iter_)                               \
+        for (slot_rmap_walk_init(_iter_, _slot_, _start_level_,        \
+                                 _end_level_, _start_gfn, _end_gfn);   \
+             slot_rmap_walk_okay(_iter_);                               \
+             slot_rmap_walk_next(_iter_))
+
+typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+                               struct kvm_memory_slot *slot, gfn_t gfn,
+                               int level, pte_t pte);
+
 #endif /* __KVM_X86_MMU_RMAP_H */
+	struct kvm_rmap_head *end_rmap;
+};
+
+void rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level);
+void slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
+			 const struct kvm_memory_slot *slot, int start_level,
+			 int end_level, gfn_t start_gfn, gfn_t end_gfn);
+bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator);
+void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator);
+
+#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_,	\
+				 _start_gfn, _end_gfn, _iter_)		\
+	for (slot_rmap_walk_init(_iter_, _slot_, _start_level_,	\
+				 _end_level_, _start_gfn, _end_gfn);	\
+	     slot_rmap_walk_okay(_iter_);				\
+	     slot_rmap_walk_next(_iter_))
+
+typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			       struct kvm_memory_slot *slot, gfn_t gfn,
+			       int level, pte_t pte);
+
 #endif /* __KVM_X86_MMU_RMAP_H */
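
A minimal usage sketch (not part of the patch) may help readers track the move:
it shows how a caller is expected to drive the iterator once it lives in rmap.h.
The function name and the trivial per-bucket check are illustrative assumptions;
the macro, the iterator fields, and PG_LEVEL_4K are the ones declared above.

    /* Illustrative only: walk a memslot's 4K-level rmap buckets. */
    static bool example_walk_slot_rmaps(const struct kvm_memory_slot *slot)
    {
            struct slot_rmap_walk_iterator iter;
            bool found = false;

            /* slot_rmap_walk_next() skips empty buckets as it advances. */
            for_each_slot_rmap_range(slot, PG_LEVEL_4K, PG_LEVEL_4K,
                                     slot->base_gfn,
                                     slot->base_gfn + slot->npages - 1, &iter)
                    found |= !!iter.rmap->val;

            return found;
    }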
From patchwork Tue Dec 6 17:36:00 2022
Date: Tue, 6 Dec 2022 17:36:00 +0000
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
References: <20221206173601.549281-1-bgardon@google.com>
Message-ID: <20221206173601.549281-7-bgardon@google.com>
Subject: [PATCH 6/7] KVM: x86/MMU: Move rmap zap operations to rmap.c
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Vipin Sharma, Ben Gardon

Move the various rmap zap functions to rmap.c. These functions are less
"pure" rmap operations in that they also contain some SPTE manipulation;
however, they are mostly about rmap / pte list manipulation.

No functional change intended.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c          | 51 +--------------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  1 +
 arch/x86/kvm/mmu/rmap.c         | 50 +++++++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/rmap.h         |  9 +++++-
 4 files changed, 59 insertions(+), 52 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 88da2abc2375..12082314d82d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -512,7 +512,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
  * state bits, it is used to clear the last level sptep.
  * Returns the old PTE.
  */
-static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
+u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 {
 	kvm_pfn_t pfn;
 	u64 old_spte = *sptep;
@@ -855,42 +855,6 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
 	return slot;
 }
 
-static void kvm_zap_one_rmap_spte(struct kvm *kvm,
-				  struct kvm_rmap_head *rmap_head, u64 *sptep)
-{
-	mmu_spte_clear_track_bits(kvm, sptep);
-	pte_list_remove(sptep, rmap_head);
-}
-
-/* Return true if at least one SPTE was zapped, false otherwise */
-static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
-				   struct kvm_rmap_head *rmap_head)
-{
-	struct pte_list_desc *desc, *next;
-	int i;
-
-	if (!rmap_head->val)
-		return false;
-
-	if (!(rmap_head->val & 1)) {
-		mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val);
-		goto out;
-	}
-
-	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-
-	for (; desc; desc = next) {
-		for (i = 0; i < desc->spte_count; i++)
-			mmu_spte_clear_track_bits(kvm, desc->sptes[i]);
-		next = desc->more;
-		free_pte_list_desc(desc);
-	}
-out:
-	/* rmap_head is meaningless now, remember to reset it */
-	rmap_head->val = 0;
-	return true;
-}
-
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
 	u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep);
@@ -1145,19 +1109,6 @@ static bool kvm_vcpu_write_protect_gfn(struct kvm_vcpu *vcpu, u64 gfn)
 	return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn, PG_LEVEL_4K);
 }
 
-static bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
-			   const struct kvm_memory_slot *slot)
-{
-	return kvm_zap_all_rmap_sptes(kvm, rmap_head);
-}
-
-static bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
-			 struct kvm_memory_slot *slot, gfn_t gfn, int level,
-			 pte_t unused)
-{
-	return __kvm_zap_rmap(kvm, rmap_head, slot);
-}
-
 static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			     struct kvm_memory_slot *slot, gfn_t gfn,
 			     int level, pte_t pte)
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 3de703c2a5d4..a219c8e556e9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -319,4 +319,5 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
+u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 91af5b32cffb..9cc4252aaabb 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -56,7 +56,7 @@ int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 	return count;
 }
 
-void free_pte_list_desc(struct pte_list_desc *pte_list_desc)
+static void free_pte_list_desc(struct pte_list_desc *pte_list_desc)
 {
 	kmem_cache_free(pte_list_desc_cache, pte_list_desc);
 }
@@ -283,3 +283,51 @@ void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
 
 	rmap_walk_init_level(iterator, iterator->level);
 }
+
+void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			   u64 *sptep)
+{
+	mmu_spte_clear_track_bits(kvm, sptep);
+	pte_list_remove(sptep, rmap_head);
+}
+
+/* Return true if at least one SPTE was zapped, false otherwise */
+bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+{
+	struct pte_list_desc *desc, *next;
+	int i;
+
+	if (!rmap_head->val)
+		return false;
+
+	if (!(rmap_head->val & 1)) {
+		mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val);
+		goto out;
+	}
+
+	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+
+	for (; desc; desc = next) {
+		for (i = 0; i < desc->spte_count; i++)
+			mmu_spte_clear_track_bits(kvm, desc->sptes[i]);
+		next = desc->more;
+		free_pte_list_desc(desc);
+	}
+out:
+	/* rmap_head is meaningless now, remember to reset it */
+	rmap_head->val = 0;
+	return true;
+}
+
+bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+		    const struct kvm_memory_slot *slot)
+{
+	return kvm_zap_all_rmap_sptes(kvm, rmap_head);
+}
+
+bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+		  struct kvm_memory_slot *slot, gfn_t gfn, int level,
+		  pte_t unused)
+{
+	return __kvm_zap_rmap(kvm, rmap_head, slot);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index dc4bf7e609ec..a9bf48494e1a 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -27,7 +27,6 @@ static struct kmem_cache *pte_list_desc_cache;
 
 int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 		 struct kvm_rmap_head *rmap_head);
-void free_pte_list_desc(struct pte_list_desc *pte_list_desc);
 void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
@@ -90,4 +89,12 @@ typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			       struct kvm_memory_slot *slot, gfn_t gfn,
 			       int level, pte_t pte);
 
+void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			   u64 *sptep);
+bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head);
+bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+		    const struct kvm_memory_slot *slot);
+bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+		  struct kvm_memory_slot *slot, gfn_t gfn, int level,
+		  pte_t unused);
 #endif /* __KVM_X86_MMU_RMAP_H */
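
Worth noting from the diff above: kvm_zap_rmap() keeps the rmap_handler_t shape
after the move, which is what lets mmu.c keep dispatching it through its generic
gfn-range walker. A sketch of a hypothetical caller, assuming only the
declarations above (the function name below is not from the patch):

    /* Illustrative only: zap whatever is mapped at one 4K gfn of a slot. */
    static bool example_zap_one_gfn(struct kvm *kvm,
                                    struct kvm_memory_slot *slot, gfn_t gfn)
    {
            rmap_handler_t handler = kvm_zap_rmap;
            struct kvm_rmap_head *rmap_head = gfn_to_rmap(gfn, PG_LEVEL_4K, slot);

            /* The pte argument is unused by kvm_zap_rmap(), so pass a dummy. */
            return handler(kvm, rmap_head, slot, gfn, PG_LEVEL_4K, __pte(0));
    }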
From patchwork Tue Dec 6 17:36:01 2022
Date: Tue, 6 Dec 2022 17:36:01 +0000
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
References: <20221206173601.549281-1-bgardon@google.com>
Message-ID: <20221206173601.549281-8-bgardon@google.com>
Subject: [PATCH 7/7] KVM: x86/MMU: Move rmap_add() to rmap.c
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Vipin Sharma, Ben Gardon

Move rmap_add() to rmap.c to complete the migration of the various rmap
operations out of mmu.c.

No functional change intended.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c          | 45 ++++-----------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  6 +++++
 arch/x86/kvm/mmu/rmap.c         | 37 ++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/rmap.h         |  8 +++++-
 4 files changed, 54 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 12082314d82d..b122c90a3e5f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -215,13 +215,13 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 	return regs;
 }
 
-static inline bool kvm_available_flush_tlb_with_range(void)
+inline bool kvm_available_flush_tlb_with_range(void)
 {
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
-					     struct kvm_tlb_range *range)
+void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+				      struct kvm_tlb_range *range)
 {
 	int ret = -ENOTSUPP;
 
@@ -695,8 +695,8 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
 	return sp->role.access;
 }
 
-static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
-					 gfn_t gfn, unsigned int access)
+void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
+				  gfn_t gfn, unsigned int access)
 {
 	if (sp_has_gptes(sp)) {
 		sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
@@ -1217,41 +1217,6 @@ static bool kvm_test_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	return false;
 }
 
-#define RMAP_RECYCLE_THRESHOLD 1000
-
-static void __rmap_add(struct kvm *kvm,
-		       struct kvm_mmu_memory_cache *cache,
-		       const struct kvm_memory_slot *slot,
-		       u64 *spte, gfn_t gfn, unsigned int access)
-{
-	struct kvm_mmu_page *sp;
-	struct kvm_rmap_head *rmap_head;
-	int rmap_count;
-
-	sp = sptep_to_sp(spte);
-	kvm_mmu_page_set_translation(sp, spte_index(spte), gfn, access);
-	kvm_update_page_stats(kvm, sp->role.level, 1);
-
-	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-	rmap_count = pte_list_add(cache, spte, rmap_head);
-
-	if (rmap_count > kvm->stat.max_mmu_rmap_size)
-		kvm->stat.max_mmu_rmap_size = rmap_count;
-	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
-		kvm_zap_all_rmap_sptes(kvm, rmap_head);
-		kvm_flush_remote_tlbs_with_address(
-			kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
-	}
-}
-
-static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn, unsigned int access)
-{
-	struct kvm_mmu_memory_cache *cache = &vcpu->arch.mmu_pte_list_desc_cache;
-
-	__rmap_add(vcpu->kvm, cache, slot, spte, gfn, access);
-}
-
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index a219c8e556e9..03da1f8b066e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -320,4 +320,10 @@ void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
 u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep);
+void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
+				  gfn_t gfn, unsigned int access);
+
+inline bool kvm_available_flush_tlb_with_range(void);
+void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+				       struct kvm_tlb_range *range);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 9cc4252aaabb..136c5f4f867b 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -292,7 +292,8 @@ void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 }
 
 /* Return true if at least one SPTE was zapped, false otherwise */
-bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
+				   struct kvm_rmap_head *rmap_head)
 {
 	struct pte_list_desc *desc, *next;
 	int i;
@@ -331,3 +332,37 @@ bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 {
 	return __kvm_zap_rmap(kvm, rmap_head, slot);
 }
+
+#define RMAP_RECYCLE_THRESHOLD 1000
+
+void __rmap_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+		const struct kvm_memory_slot *slot, u64 *spte, gfn_t gfn,
+		unsigned int access)
+{
+	struct kvm_mmu_page *sp;
+	struct kvm_rmap_head *rmap_head;
+	int rmap_count;
+
+	sp = sptep_to_sp(spte);
+	kvm_mmu_page_set_translation(sp, spte_index(spte), gfn, access);
+	kvm_update_page_stats(kvm, sp->role.level, 1);
+
+	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
+	rmap_count = pte_list_add(cache, spte, rmap_head);
+
+	if (rmap_count > kvm->stat.max_mmu_rmap_size)
+		kvm->stat.max_mmu_rmap_size = rmap_count;
+	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
+		kvm_zap_all_rmap_sptes(kvm, rmap_head);
+		kvm_flush_remote_tlbs_with_address(
+			kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+	}
+}
+
+void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+	      u64 *spte, gfn_t gfn, unsigned int access)
+{
+	struct kvm_mmu_memory_cache *cache = &vcpu->arch.mmu_pte_list_desc_cache;
+
+	__rmap_add(vcpu->kvm, cache, slot, spte, gfn, access);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index a9bf48494e1a..b06897dad76a 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -91,10 +91,16 @@ typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			       struct kvm_memory_slot *slot, gfn_t gfn,
 			       int level, pte_t pte);
 
 void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			   u64 *sptep);
-bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head);
 bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 		    const struct kvm_memory_slot *slot);
 bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 		  struct kvm_memory_slot *slot, gfn_t gfn, int level,
 		  pte_t unused);
+
+void __rmap_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+		const struct kvm_memory_slot *slot, u64 *spte, gfn_t gfn,
+		unsigned int access);
+void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+	      u64 *spte, gfn_t gfn, unsigned int access);
+
 #endif /* __KVM_X86_MMU_RMAP_H */
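
To round out the series, here is a sketch of the call-site shape that consumes
the newly exported rmap_add(); the wrapper below is hypothetical, and the
comment summarizes what __rmap_add() above actually does.

    /* Illustrative only: record a newly installed last-level SPTE. */
    static void example_track_new_spte(struct kvm_vcpu *vcpu,
                                       const struct kvm_memory_slot *slot,
                                       u64 *sptep, gfn_t gfn, unsigned int access)
    {
            /*
             * rmap_add() stores sptep in the gfn's rmap, updates page stats,
             * and once a single rmap grows past RMAP_RECYCLE_THRESHOLD (1000)
             * entries it zaps the whole list and flushes remote TLBs.
             */
            rmap_add(vcpu, slot, sptep, gfn, access);
    }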