From patchwork Thu Feb 2 18:28:06 2023
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 52134
Date: Thu, 2 Feb 2023 18:28:06 +0000
In-Reply-To: <20230202182809.1929122-1-bgardon@google.com>
References: <20230202182809.1929122-1-bgardon@google.com>
Message-ID: <20230202182809.1929122-19-bgardon@google.com>
Subject: [PATCH 18/21] KVM: x86/mmu: Move split cache topup functions to shadow_mmu.c
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu,
	Sean Christopherson, David Matlack, Vipin Sharma, Ricardo Koller,
	Ben Gardon

The split cache topup functions are only used by the Shadow MMU and were
left behind in mmu.c when splitting the Shadow MMU out to a separate
file. Move them over as well.

No functional change intended.

Suggested-by: David Matlack
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 53 ---------------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  2 --
 arch/x86/kvm/mmu/shadow_mmu.c   | 53 +++++++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+), 55 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f5b9db00eff99..8514e998e2127 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2902,59 +2902,6 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 	}
 }
 
-static inline bool need_topup(struct kvm_mmu_memory_cache *cache, int min)
-{
-	return kvm_mmu_memory_cache_nr_free_objects(cache) < min;
-}
-
-bool need_topup_split_caches_or_resched(struct kvm *kvm)
-{
-	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
-		return true;
-
-	/*
-	 * In the worst case, SPLIT_DESC_CACHE_MIN_NR_OBJECTS descriptors are needed
-	 * to split a single huge page. Calculating how many are actually needed
-	 * is possible but not worth the complexity.
-	 */
-	return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_MIN_NR_OBJECTS) ||
-	       need_topup(&kvm->arch.split_page_header_cache, 1) ||
-	       need_topup(&kvm->arch.split_shadow_page_cache, 1);
-}
-
-int topup_split_caches(struct kvm *kvm)
-{
-	/*
-	 * Allocating rmap list entries when splitting huge pages for nested
-	 * MMUs is uncommon as KVM needs to use a list if and only if there is
-	 * more than one rmap entry for a gfn, i.e. requires an L1 gfn to be
-	 * aliased by multiple L2 gfns and/or from multiple nested roots with
-	 * different roles. Aliasing gfns when using TDP is atypical for VMMs;
-	 * a few gfns are often aliased during boot, e.g. when remapping BIOS,
-	 * but aliasing rarely occurs post-boot or for many gfns. If there is
-	 * only one rmap entry, rmap->val points directly at that one entry and
-	 * doesn't need to allocate a list. Buffer the cache by the default
-	 * capacity so that KVM doesn't have to drop mmu_lock to topup if KVM
-	 * encounters an aliased gfn or two.
-	 */
-	const int capacity = SPLIT_DESC_CACHE_MIN_NR_OBJECTS +
-			     KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
-	int r;
-
-	lockdep_assert_held(&kvm->slots_lock);
-
-	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache, capacity,
-					 SPLIT_DESC_CACHE_MIN_NR_OBJECTS);
-	if (r)
-		return r;
-
-	r = kvm_mmu_topup_memory_cache(&kvm->arch.split_page_header_cache, 1);
-	if (r)
-		return r;
-
-	return kvm_mmu_topup_memory_cache(&kvm->arch.split_shadow_page_cache, 1);
-}
-
 /* Must be called with the mmu_lock held in write-mode. */
 void kvm_mmu_try_split_huge_pages(struct kvm *kvm,
 				  const struct kvm_memory_slot *memslot,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 349d4a300ad34..2273c6263faf0 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -348,8 +348,6 @@ void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu);
 void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu);
 
 int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect);
-bool need_topup_split_caches_or_resched(struct kvm *kvm);
-int topup_split_caches(struct kvm *kvm);
 
 bool is_page_fault_stale(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 bool page_fault_handle_page_track(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/mmu/shadow_mmu.c b/arch/x86/kvm/mmu/shadow_mmu.c
index eb4424fedd73a..bb23692d34a73 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.c
+++ b/arch/x86/kvm/mmu/shadow_mmu.c
@@ -3219,6 +3219,59 @@ bool slot_rmap_write_protect(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	return rmap_write_protect(rmap_head, false);
 }
 
+static inline bool need_topup(struct kvm_mmu_memory_cache *cache, int min)
+{
+	return kvm_mmu_memory_cache_nr_free_objects(cache) < min;
+}
+
+static bool need_topup_split_caches_or_resched(struct kvm *kvm)
+{
+	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
+		return true;
+
+	/*
+	 * In the worst case, SPLIT_DESC_CACHE_MIN_NR_OBJECTS descriptors are needed
+	 * to split a single huge page. Calculating how many are actually needed
+	 * is possible but not worth the complexity.
+	 */
+	return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_MIN_NR_OBJECTS) ||
+	       need_topup(&kvm->arch.split_page_header_cache, 1) ||
+	       need_topup(&kvm->arch.split_shadow_page_cache, 1);
+}
+
+static int topup_split_caches(struct kvm *kvm)
+{
+	/*
+	 * Allocating rmap list entries when splitting huge pages for nested
+	 * MMUs is uncommon as KVM needs to use a list if and only if there is
+	 * more than one rmap entry for a gfn, i.e. requires an L1 gfn to be
+	 * aliased by multiple L2 gfns and/or from multiple nested roots with
+	 * different roles. Aliasing gfns when using TDP is atypical for VMMs;
+	 * a few gfns are often aliased during boot, e.g. when remapping BIOS,
+	 * but aliasing rarely occurs post-boot or for many gfns. If there is
+	 * only one rmap entry, rmap->val points directly at that one entry and
+	 * doesn't need to allocate a list. Buffer the cache by the default
+	 * capacity so that KVM doesn't have to drop mmu_lock to topup if KVM
+	 * encounters an aliased gfn or two.
+	 */
+	const int capacity = SPLIT_DESC_CACHE_MIN_NR_OBJECTS +
+			     KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
+	int r;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache, capacity,
+					 SPLIT_DESC_CACHE_MIN_NR_OBJECTS);
+	if (r)
+		return r;
+
+	r = kvm_mmu_topup_memory_cache(&kvm->arch.split_page_header_cache, 1);
+	if (r)
+		return r;
+
+	return kvm_mmu_topup_memory_cache(&kvm->arch.split_shadow_page_cache, 1);
+}
+
 static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
 {
 	struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
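
Note for reviewers (not part of the patch): the sketch below illustrates the
usage pattern the now-static helpers serve inside shadow_mmu.c, namely that
the huge page splitter checks the split caches under mmu_lock, drops the lock
to top them up, and restarts the rmap walk. This is a simplified,
illustrative rendering; the function name split_one_huge_page_sketch and the
trimmed control flow are stand-ins, not the actual caller in this series.

static int split_one_huge_page_sketch(struct kvm *kvm, u64 *huge_sptep)
{
	int r = 0;

	if (need_topup_split_caches_or_resched(kvm)) {
		/* Never allocate while holding mmu_lock in write mode. */
		write_unlock(&kvm->mmu_lock);
		cond_resched();
		/*
		 * If the topup succeeds, return -EAGAIN so the rmap walker
		 * restarts, since mmu_lock was dropped; otherwise return the
		 * topup error.
		 */
		r = topup_split_caches(kvm) ?: -EAGAIN;
		write_lock(&kvm->mmu_lock);
		return r;
	}

	/* Caches are topped up here; perform the actual split. */
	return r;
}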