From patchwork Sat Sep 16 00:39:15 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 140940
Reply-To: Sean Christopherson
Date: Fri, 15 Sep 2023 17:39:15 -0700
In-Reply-To: <20230916003916.2545000-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230916003916.2545000-1-seanjc@google.com>
X-Mailer: git-send-email 2.42.0.459.ge4e396fd5e-goog
Message-ID: <20230916003916.2545000-3-seanjc@google.com>
Subject: [PATCH 2/3] KVM: x86/mmu: Take "shared" instead of "as_id" in TDP MMU's yield-safe iterator
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pattara Teerapong,
    David Stevens, Yiwei Zhang, Paul Hsia

Replace the address space ID in for_each_tdp_mmu_root_yield_safe() with a
shared (vs. exclusive) param, and have the walker iterate over all address
spaces, as all callers want to process all address spaces.  Drop the @as_id
param as well as the manual address space iteration in callers.

Add the @shared param even though the two current callers pass "false"
unconditionally, as the main reason for refactoring the walker is to
simplify using it to zap invalid TDP MMU roots, which is done with mmu_lock
held for read.

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     |  8 ++------
 arch/x86/kvm/mmu/tdp_mmu.c | 20 ++++++++++----------
 arch/x86/kvm/mmu/tdp_mmu.h |  3 +--
 3 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 59f5e40b8f55..54f94f644b42 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6246,7 +6246,6 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e
 void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
         bool flush;
-        int i;
 
         if (WARN_ON_ONCE(gfn_end <= gfn_start))
                 return;
@@ -6257,11 +6256,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
         flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
 
-        if (tdp_mmu_enabled) {
-                for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
-                        flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
-                                                      gfn_end, flush);
-        }
+        if (tdp_mmu_enabled)
+                flush = kvm_tdp_mmu_zap_leafs(kvm, gfn_start, gfn_end, flush);
 
         if (flush)
                 kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 89aaa2463373..7cb1902ae032 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -211,8 +211,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)   \
         __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
 
-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id)                  \
-        __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, false, false)
+#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared)                 \
+        for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, false);            \
+             _root;                                                            \
+             _root = tdp_mmu_next_root(_kvm, _root, _shared, false))           \
+                if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) {        \
+                } else
 
 /*
  * Iterate over all TDP MMU roots.  Requires that mmu_lock be held for write,
@@ -877,12 +881,11 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
  * true if a TLB flush is needed before releasing the MMU lock, i.e. if one or
  * more SPTEs were zapped since the MMU lock was last acquired.
  */
-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
-                           bool flush)
+bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
 {
         struct kvm_mmu_page *root;
 
-        for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
+        for_each_tdp_mmu_root_yield_safe(kvm, root, false)
                 flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
 
         return flush;
@@ -891,7 +894,6 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
 void kvm_tdp_mmu_zap_all(struct kvm *kvm)
 {
         struct kvm_mmu_page *root;
-        int i;
 
         /*
          * Zap all roots, including invalid roots, as all SPTEs must be dropped
@@ -905,10 +907,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
          * is being destroyed or the userspace VMM has exited.  In both cases,
          * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
          */
-        for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-                for_each_tdp_mmu_root_yield_safe(kvm, root, i)
-                        tdp_mmu_zap_root(kvm, root, false);
-        }
+        for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+                tdp_mmu_zap_root(kvm, root, false);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index eb4fa345d3a4..bc088953f929 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -20,8 +20,7 @@ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
 void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
                           bool shared);
 
-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
-                           bool flush);
+bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush);
 bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
 void kvm_tdp_mmu_zap_all(struct kvm *kvm);
 void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
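
As an aside for readers unfamiliar with the "if (!assert()) { } else" idiom
used in the reworked for_each_tdp_mmu_root_yield_safe() above, below is a
minimal, self-contained sketch of the same pattern in plain C.  Every name in
it (struct root, next_root(), lock_held_ok(), for_each_root()) is a
hypothetical stand-in for tdp_mmu_next_root() and
kvm_lockdep_assert_mmu_lock_held(); it is not KVM code, only an illustration
of how a for-loop-shaped macro can run an assertion on every iteration while
still taking an ordinary statement as its body.

  #include <assert.h>
  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical stand-in for a list of kvm_mmu_page roots. */
  struct root { int id; struct root *next; };

  /* Stand-in for kvm_lockdep_assert_mmu_lock_held(): asserts, returns true. */
  static bool lock_held_ok(bool shared)
  {
          assert(shared == false);
          return true;
  }

  /* Stand-in for tdp_mmu_next_root(): NULL cur means "start of the list". */
  static struct root *next_root(struct root *head, struct root *cur)
  {
          return cur ? cur->next : head;
  }

  /*
   * The empty "{ }" consumes the true branch of the hidden "if", so the
   * caller's loop body binds to the trailing "else" and the assertion runs
   * on every iteration.
   */
  #define for_each_root(_head, _root, _shared)                          \
          for (_root = next_root(_head, NULL);                          \
               _root;                                                   \
               _root = next_root(_head, _root))                         \
                  if (!lock_held_ok(_shared)) {                         \
                  } else

  int main(void)
  {
          struct root c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
          struct root *r;

          for_each_root(&a, r, false)
                  printf("visiting root %d\n", r->id);

          return 0;
  }

The payoff of the "if (!...) { } else" tail is that the caller's statement
stays a single loop body, an "else" written after that body cannot pair with
the macro's internal "if", and if the assertion helper were ever to return
false the body would simply be skipped for that iteration.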