From patchwork Thu Feb  2 18:28:00 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 52126
Date: Thu, 2 Feb 2023 18:28:00 +0000
In-Reply-To: <20230202182809.1929122-1-bgardon@google.com>
Mime-Version: 1.0
References: <20230202182809.1929122-1-bgardon@google.com>
X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog
Message-ID: <20230202182809.1929122-13-bgardon@google.com>
Subject: [PATCH 12/21] KVM: x86/MMU: Clean up naming of exported Shadow MMU
 functions
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
 Vipin Sharma, Ricardo Koller, Ben Gardon
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

Change the naming scheme on several functions exported from the shadow
MMU to match the naming scheme used by the TDP MMU: kvm_shadow_mmu_.
More cleanups will follow to convert the remaining functions to a
similar naming scheme, but for now, start with the trivial renames.

No functional change intended.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c         | 19 ++++++++++---------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 arch/x86/kvm/mmu/shadow_mmu.c  | 19 ++++++++++---------
 arch/x86/kvm/mmu/shadow_mmu.h  | 17 +++++++++--------
 4 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3ea54b08239aa..9308ab8102f9b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1089,7 +1089,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 	int r;
 
 	write_lock(&vcpu->kvm->mmu_lock);
-	r = make_mmu_pages_available(vcpu);
+	r = kvm_shadow_mmu_make_pages_available(vcpu);
 	if (r < 0)
 		goto out_unlock;
 
@@ -1164,7 +1164,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	if (is_tdp_mmu_active(vcpu))
 		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root);
 	else
-		leaf = get_walk(vcpu, addr, sptes, &root);
+		leaf = kvm_shadow_mmu_get_walk(vcpu, addr, sptes, &root);
 
 	walk_shadow_page_lockless_end(vcpu);
 
@@ -1432,11 +1432,11 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
-	r = make_mmu_pages_available(vcpu);
+	r = kvm_shadow_mmu_make_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
 
-	r = direct_map(vcpu, fault);
+	r = kvm_shadow_mmu_direct_map(vcpu, fault);
 
 out_unlock:
 	write_unlock(&vcpu->kvm->mmu_lock);
@@ -1471,7 +1471,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 		trace_kvm_page_fault(vcpu, fault_address, error_code);
 
 		if (kvm_event_needs_reinjection(vcpu))
-			kvm_mmu_unprotect_page_virt(vcpu, fault_address);
+			kvm_shadow_mmu_unprotect_page_virt(vcpu, fault_address);
 		r = kvm_mmu_page_fault(vcpu, fault_address, error_code, insn,
				insn_len);
 	} else if (flags & KVM_PV_REASON_PAGE_NOT_PRESENT) {
@@ -2786,7 +2786,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 * In order to ensure all vCPUs drop their soon-to-be invalid roots,
 	 * invalidating TDP MMU roots must be done while holding mmu_lock for
 	 * write and in the same critical section as making the reload request,
-	 * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield.
+	 * e.g. before kvm_shadow_mmu_zap_obsolete_pages() could drop mmu_lock
+	 * and yield.
 	 */
 	if (tdp_mmu_enabled)
 		kvm_tdp_mmu_invalidate_all_roots(kvm);
@@ -2801,7 +2802,7 @@
 	 */
 	kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_FREE_OBSOLETE_ROOTS);
 
-	kvm_zap_obsolete_pages(kvm);
+	kvm_shadow_mmu_zap_obsolete_pages(kvm);
 
 	write_unlock(&kvm->mmu_lock);
@@ -2890,7 +2891,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
 	kvm_mmu_invalidate_begin(kvm, 0, -1ul);
 
-	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
+	flush = kvm_shadow_mmu_zap_gfn_range(kvm, gfn_start, gfn_end);
 
 	if (tdp_mmu_enabled) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
@@ -3034,7 +3035,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 {
 	if (kvm_memslots_have_rmaps(kvm)) {
 		write_lock(&kvm->mmu_lock);
-		kvm_rmap_zap_collapsible_sptes(kvm, slot);
+		kvm_shadow_mmu_zap_collapsible_sptes(kvm, slot);
 		write_unlock(&kvm->mmu_lock);
 	}
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 1251357794538..14a8c8217c4cf 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -866,7 +866,7 @@ int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
-	r = make_mmu_pages_available(vcpu);
+	r = kvm_shadow_mmu_make_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
 	r = FNAME(fetch)(vcpu, fault, &walker);
diff --git a/arch/x86/kvm/mmu/shadow_mmu.c b/arch/x86/kvm/mmu/shadow_mmu.c
index 76c50aca3c487..36b335d75aee2 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.c
+++ b/arch/x86/kvm/mmu/shadow_mmu.c
@@ -1977,7 +1977,7 @@ static inline unsigned long kvm_mmu_available_pages(struct kvm *kvm)
 	return 0;
 }
 
-int make_mmu_pages_available(struct kvm_vcpu *vcpu)
+int kvm_shadow_mmu_make_pages_available(struct kvm_vcpu *vcpu)
 {
 	unsigned long avail = kvm_mmu_available_pages(vcpu->kvm);
 
@@ -2041,7 +2041,7 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
 	return r;
 }
 
-int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
+int kvm_shadow_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
 {
 	gpa_t gpa;
 	int r;
@@ -2331,7 +2331,7 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
 	__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
-int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+int kvm_shadow_mmu_direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
@@ -2549,7 +2549,7 @@ int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		return r;
 
 	write_lock(&vcpu->kvm->mmu_lock);
-	r = make_mmu_pages_available(vcpu);
+	r = kvm_shadow_mmu_make_pages_available(vcpu);
 	if (r < 0)
 		goto out_unlock;
 
@@ -2797,7 +2797,8 @@ void kvm_mmu_sync_prev_roots(struct kvm_vcpu *vcpu)
  *
  * Must be called between walk_shadow_page_lockless_{begin,end}.
  */
-int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level)
+int kvm_shadow_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
+			    int *root_level)
 {
 	struct kvm_shadow_walk_iterator iterator;
 	int leaf = -1;
@@ -3104,7 +3105,7 @@ __always_inline bool walk_slot_rmaps_4k(struct kvm *kvm,
 }
 
 #define BATCH_ZAP_PAGES	10
-void kvm_zap_obsolete_pages(struct kvm *kvm)
+void kvm_shadow_mmu_zap_obsolete_pages(struct kvm *kvm)
 {
 	struct kvm_mmu_page *sp, *node;
 	int nr_zapped, batch = 0;
@@ -3165,7 +3166,7 @@ bool kvm_shadow_mmu_has_zapped_obsolete_pages(struct kvm *kvm)
 	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
 }
 
-bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+bool kvm_shadow_mmu_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
 	const struct kvm_memory_slot *memslot;
 	struct kvm_memslots *slots;
@@ -3417,8 +3418,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 	return need_tlb_flush;
 }
 
-void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
-				    const struct kvm_memory_slot *slot)
+void kvm_shadow_mmu_zap_collapsible_sptes(struct kvm *kvm,
+					  const struct kvm_memory_slot *slot)
 {
 	/*
 	 * Note, use KVM_MAX_HUGEPAGE_LEVEL - 1 since there's no need to zap
diff --git a/arch/x86/kvm/mmu/shadow_mmu.h b/arch/x86/kvm/mmu/shadow_mmu.h
index 9e27d03fbe368..cc28895d2a24f 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.h
+++ b/arch/x86/kvm/mmu/shadow_mmu.h
@@ -73,18 +73,19 @@ bool kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 			      struct list_head *invalid_list);
 void kvm_mmu_commit_zap_page(struct kvm *kvm, struct list_head *invalid_list);
 
-int make_mmu_pages_available(struct kvm_vcpu *vcpu);
+int kvm_shadow_mmu_make_pages_available(struct kvm_vcpu *vcpu);
 
-int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva);
+int kvm_shadow_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva);
 
-int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
+int kvm_shadow_mmu_direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 u64 *fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, gpa_t gpa, u64 *spte);
 
 hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant, u8 level);
 int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu);
 int mmu_alloc_special_roots(struct kvm_vcpu *vcpu);
 
-int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level);
+int kvm_shadow_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
+			    int *root_level);
 
 void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
 		       int bytes, struct kvm_page_track_notifier_node *node);
@@ -99,8 +100,8 @@ bool walk_slot_rmaps(struct kvm *kvm, const struct kvm_memory_slot *slot,
 bool walk_slot_rmaps_4k(struct kvm *kvm, const struct kvm_memory_slot *slot,
 			slot_rmaps_handler fn, bool flush_on_yield);
 
-void kvm_zap_obsolete_pages(struct kvm *kvm);
-bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
+void kvm_shadow_mmu_zap_obsolete_pages(struct kvm *kvm);
+bool kvm_shadow_mmu_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
 
 bool slot_rmap_write_protect(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			     const struct kvm_memory_slot *slot);
@@ -109,8 +110,8 @@ void kvm_shadow_mmu_try_split_huge_pages(struct kvm *kvm,
 					 const struct kvm_memory_slot *slot,
 					 gfn_t start, gfn_t end, int target_level);
-void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
-				    const struct kvm_memory_slot *slot);
+void kvm_shadow_mmu_zap_collapsible_sptes(struct kvm *kvm,
+					  const struct kvm_memory_slot *slot);
 
 bool kvm_shadow_mmu_has_zapped_obsolete_pages(struct kvm *kvm);
 unsigned long kvm_shadow_mmu_shrink_scan(struct kvm *kvm, int pages_to_free);