From patchwork Thu Feb 2 18:28:08 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 52139
Date: Thu, 2 Feb 2023 18:28:08 +0000
In-Reply-To: <20230202182809.1929122-1-bgardon@google.com>
References: <20230202182809.1929122-1-bgardon@google.com>
X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog
Message-ID: <20230202182809.1929122-21-bgardon@google.com>
Subject: [PATCH 20/21] KVM: x86/mmu: Move Shadow MMU init/teardown to shadow_mmu.c
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean
 Christopherson, David Matlack, Vipin Sharma, Ricardo Koller, Ben Gardon
X-Mailing-List: linux-kernel@vger.kernel.org

Move the meat of kvm_mmu_init_vm() and kvm_mmu_uninit_vm() pertaining to
the Shadow MMU to shadow_mmu.c.

Suggested-by: David Matlack
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 41 +++----------------------
 arch/x86/kvm/mmu/mmu_internal.h |  2 ++
 arch/x86/kvm/mmu/shadow_mmu.c   | 44 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/shadow_mmu.h   |  6 ++---
 4 files changed, 51 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 63b928bded9d1..10aff23dea75d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2743,7 +2743,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
  * not use any resource of the being-deleted slot or all slots
  * after calling the function.
  */
-static void kvm_mmu_zap_all_fast(struct kvm *kvm)
+void kvm_mmu_zap_all_fast(struct kvm *kvm)
 {
 	lockdep_assert_held(&kvm->slots_lock);
 
@@ -2795,22 +2795,13 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	kvm_tdp_mmu_zap_invalidated_roots(kvm);
 }
 
-static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
-			struct kvm_memory_slot *slot,
-			struct kvm_page_track_notifier_node *node)
-{
-	kvm_mmu_zap_all_fast(kvm);
-}
-
 int kvm_mmu_init_vm(struct kvm *kvm)
 {
-	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
 	int r;
 
-	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
-	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
 	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
-	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
+
+	kvm_mmu_init_shadow_mmu(kvm);
 
 	if (tdp_mmu_enabled) {
 		r = kvm_mmu_init_tdp_mmu(kvm);
@@ -2818,38 +2809,14 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 			return r;
 	}
 
-	node->track_write = kvm_shadow_mmu_pte_write;
-	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
-	kvm_page_track_register_notifier(kvm, node);
-
-	kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
-	kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
-
-	kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
-
-	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
-	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
-
 	return 0;
 }
 
-static void mmu_free_vm_memory_caches(struct kvm *kvm)
-{
-	kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
-	kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
-	kvm_mmu_free_memory_cache(&kvm->arch.split_shadow_page_cache);
-}
-
 void kvm_mmu_uninit_vm(struct kvm *kvm)
 {
-	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
-
-	kvm_page_track_unregister_notifier(kvm, node);
-
+	kvm_mmu_uninit_shadow_mmu(kvm);
 	if (tdp_mmu_enabled)
 		kvm_mmu_uninit_tdp_mmu(kvm);
-
-	mmu_free_vm_memory_caches(kvm);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 2273c6263faf0..c49d302b037ec 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -406,4 +406,6 @@ BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke);
 BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
 BUILD_MMU_ROLE_ACCESSOR(base, efer, nx);
 BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma);
+
+void kvm_mmu_zap_all_fast(struct kvm *kvm);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/shadow_mmu.c b/arch/x86/kvm/mmu/shadow_mmu.c
index c6d3da795992e..6449ac4de4883 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.c
+++ b/arch/x86/kvm/mmu/shadow_mmu.c
@@ -3013,8 +3013,9 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
 	return spte;
 }
 
-void kvm_shadow_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
-			      int bytes, struct kvm_page_track_notifier_node *node)
+static void kvm_shadow_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+				     const u8 *new, int bytes,
+				     struct kvm_page_track_notifier_node *node)
 {
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	struct kvm_mmu_page *sp;
@@ -3623,3 +3624,42 @@ void kvm_shadow_mmu_zap_all(struct kvm *kvm)
 
 	kvm_shadow_mmu_commit_zap_page(kvm, &invalid_list);
 }
+
+static void kvm_shadow_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
+			struct kvm_memory_slot *slot,
+			struct kvm_page_track_notifier_node *node)
+{
+	kvm_mmu_zap_all_fast(kvm);
+}
+
+void kvm_mmu_init_shadow_mmu(struct kvm *kvm)
+{
+	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
+
+	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
+	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
+	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
+
+	node->track_write = kvm_shadow_mmu_pte_write;
+	node->track_flush_slot = kvm_shadow_mmu_invalidate_zap_pages_in_memslot;
+	kvm_page_track_register_notifier(kvm, node);
+
+	kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
+	kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
+
+	kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
+
+	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
+	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
+}
+
+void kvm_mmu_uninit_shadow_mmu(struct kvm *kvm)
+{
+	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
+
+	kvm_page_track_unregister_notifier(kvm, node);
+
+	kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
+	kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
+	kvm_mmu_free_memory_cache(&kvm->arch.split_shadow_page_cache);
+}
diff --git a/arch/x86/kvm/mmu/shadow_mmu.h b/arch/x86/kvm/mmu/shadow_mmu.h
index ab01636373bda..f2e54355ebb19 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.h
+++ b/arch/x86/kvm/mmu/shadow_mmu.h
@@ -65,9 +65,6 @@ int kvm_shadow_mmu_alloc_special_roots(struct kvm_vcpu *vcpu);
 int kvm_shadow_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 			    int *root_level);
 
-void kvm_shadow_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
-			      int bytes, struct kvm_page_track_notifier_node *node);
-
 void kvm_shadow_mmu_zap_obsolete_pages(struct kvm *kvm);
 bool kvm_shadow_mmu_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start,
 				  gfn_t gfn_end);
@@ -103,6 +100,9 @@ bool kvm_shadow_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 
 void kvm_shadow_mmu_zap_all(struct kvm *kvm);
 
+void kvm_mmu_init_shadow_mmu(struct kvm *kvm);
+void kvm_mmu_uninit_shadow_mmu(struct kvm *kvm);
+
 /* Exports from paging_tmpl.h */
 gpa_t paging32_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			  gpa_t vaddr, u64 access,
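
For reviewers who prefer the end state to the hunks: after this patch,
kvm_mmu_init_vm() and kvm_mmu_uninit_vm() in mmu.c reduce to thin wrappers
around the new shadow_mmu.c entry points. A sketch of the resulting
functions, reconstructed from the hunks above; the "if (r < 0)" check is
unmodified context that falls between the two mmu.c hunks and is assumed
here:

int kvm_mmu_init_vm(struct kvm *kvm)
{
	int r;

	/* NX huge page state is shared by the Shadow and TDP MMUs. */
	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);

	/* Shadow MMU lists, lock, page-track notifier, and split caches. */
	kvm_mmu_init_shadow_mmu(kvm);

	if (tdp_mmu_enabled) {
		r = kvm_mmu_init_tdp_mmu(kvm);
		if (r < 0)
			return r;
	}

	return 0;
}

void kvm_mmu_uninit_vm(struct kvm *kvm)
{
	/* Unregisters the notifier and frees the split caches. */
	kvm_mmu_uninit_shadow_mmu(kvm);
	if (tdp_mmu_enabled)
		kvm_mmu_uninit_tdp_mmu(kvm);
}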