From patchwork Sat Jul 22 01:23:46 2023
X-Patchwork-Submitter: Sean Christopherson <seanjc@google.com>
X-Patchwork-Id: 124181
Reply-To: Sean Christopherson <seanjc@google.com>
Date: Fri, 21 Jul 2023 18:23:46 -0700
In-Reply-To: <20230722012350.2371049-1-seanjc@google.com>
References: <20230722012350.2371049-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.487.g6d72f3e995-goog
Message-ID: <20230722012350.2371049-2-seanjc@google.com>
Subject: [PATCH 1/5] KVM: x86/mmu: Add helper to convert root hpa to shadow page
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Reima Ishii
Add a dedicated helper for converting a root hpa to a shadow page in
anticipation of using a "dummy" root to handle the scenario where KVM
needs to load a valid shadow root (from hardware's perspective), but
the guest doesn't have a visible root to shadow.  Similar to PAE roots,
the dummy root won't have an associated kvm_mmu_page and will need
special handling when finding a shadow page given a root.

Opportunistically retrieve the root shadow page in kvm_mmu_sync_roots()
*after* verifying the root is unsync (the dummy root can never be
unsync).

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c     | 28 +++++++++++++---------------
 arch/x86/kvm/mmu/spte.h    |  9 +++++++++
 arch/x86/kvm/mmu/tdp_mmu.c |  2 +-
 3 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce..1eadfcde30be 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3574,11 +3574,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 	if (!VALID_PAGE(*root_hpa))
 		return;
 
-	/*
-	 * The "root" may be a special root, e.g. a PAE entry, treat it as a
-	 * SPTE to ensure any non-PA bits are dropped.
-	 */
-	sp = spte_to_child_sp(*root_hpa);
+	sp = root_to_sp(*root_hpa);
 	if (WARN_ON(!sp))
 		return;
 
@@ -3624,7 +3620,7 @@ void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu,
 					   &invalid_list);
 
 	if (free_active_root) {
-		if (to_shadow_page(mmu->root.hpa)) {
+		if (root_to_sp(mmu->root.hpa)) {
 			mmu_free_root_page(kvm, &mmu->root.hpa, &invalid_list);
 		} else if (mmu->pae_root) {
 			for (i = 0; i < 4; ++i) {
@@ -3648,6 +3644,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_free_roots);
 void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu)
 {
 	unsigned long roots_to_free = 0;
+	struct kvm_mmu_page *sp;
 	hpa_t root_hpa;
 	int i;
 
@@ -3662,8 +3659,8 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu)
 		if (!VALID_PAGE(root_hpa))
 			continue;
 
-		if (!to_shadow_page(root_hpa) ||
-		    to_shadow_page(root_hpa)->role.guest_mode)
+		sp = root_to_sp(root_hpa);
+		if (!sp || sp->role.guest_mode)
 			roots_to_free |= KVM_MMU_ROOT_PREVIOUS(i);
 	}
 
@@ -4018,7 +4015,7 @@ static bool is_unsync_root(hpa_t root)
 	 * requirement isn't satisfied.
 	 */
 	smp_rmb();
-	sp = to_shadow_page(root);
+	sp = root_to_sp(root);
 
 	/*
 	 * PAE roots (somewhat arbitrarily) aren't backed by shadow pages, the
@@ -4048,11 +4045,12 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 
 	if (vcpu->arch.mmu->cpu_role.base.level >= PT64_ROOT_4LEVEL) {
 		hpa_t root = vcpu->arch.mmu->root.hpa;
-		sp = to_shadow_page(root);
 
 		if (!is_unsync_root(root))
 			return;
 
+		sp = root_to_sp(root);
+
 		write_lock(&vcpu->kvm->mmu_lock);
 		mmu_sync_children(vcpu, sp, true);
 		write_unlock(&vcpu->kvm->mmu_lock);
@@ -4382,7 +4380,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 				struct kvm_page_fault *fault)
 {
-	struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root.hpa);
+	struct kvm_mmu_page *sp = root_to_sp(vcpu->arch.mmu->root.hpa);
 
 	/* Special roots, e.g. pae_root, are not backed by shadow pages. */
 	if (sp && is_obsolete_sp(vcpu->kvm, sp))
@@ -4564,7 +4562,7 @@ static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
 {
 	return (role.direct || pgd == root->pgd) &&
 	       VALID_PAGE(root->hpa) &&
-	       role.word == to_shadow_page(root->hpa)->role.word;
+	       role.word == root_to_sp(root->hpa)->role.word;
 }
 
 /*
@@ -4638,7 +4636,7 @@ static bool fast_pgd_switch(struct kvm *kvm, struct kvm_mmu *mmu,
 	 * having to deal with PDPTEs.  We may add support for 32-bit hosts/VMs
 	 * later if necessary.
 	 */
-	if (VALID_PAGE(mmu->root.hpa) && !to_shadow_page(mmu->root.hpa))
+	if (VALID_PAGE(mmu->root.hpa) && !root_to_sp(mmu->root.hpa))
 		kvm_mmu_free_roots(kvm, mmu, KVM_MMU_ROOT_CURRENT);
 
 	if (VALID_PAGE(mmu->root.hpa))
@@ -4686,7 +4684,7 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
 	 */
 	if (!new_role.direct)
 		__clear_sp_write_flooding_count(
-				to_shadow_page(vcpu->arch.mmu->root.hpa));
+				root_to_sp(vcpu->arch.mmu->root.hpa));
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
 
@@ -5555,7 +5553,7 @@ static bool is_obsolete_root(struct kvm *kvm, hpa_t root_hpa)
 	 * (c) KVM doesn't track previous roots for PAE paging, and the guest
 	 *     is unlikely to zap an in-use PGD.
 	 */
-	sp = to_shadow_page(root_hpa);
+	sp = root_to_sp(root_hpa);
 	return !sp || is_obsolete_sp(kvm, sp);
 }
 
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1279db2eab44..9f8e8cda89e8 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -236,6 +236,15 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
 	return to_shadow_page(__pa(sptep));
 }
 
+static inline struct kvm_mmu_page *root_to_sp(hpa_t root)
+{
+	/*
+	 * The "root" may be a special root, e.g. a PAE entry, treat it as a
+	 * SPTE to ensure any non-PA bits are dropped.
+	 */
+	return spte_to_child_sp(root);
+}
+
 static inline bool is_mmio_spte(u64 spte)
 {
 	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 512163d52194..046ac2589611 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -689,7 +689,7 @@ static inline void tdp_mmu_iter_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 		else
 
 #define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)	\
-	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
+	for_each_tdp_pte(_iter, root_to_sp(_mmu->root.hpa), _start, _end)
 
 /*
  * Yield if the MMU lock is contended or this thread needs to return control
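
For readers not steeped in the MMU code, the block below is a standalone,
userspace sketch of the contract the new root_to_sp() helper provides; it is
not part of the patch and does not use the real KVM definitions.  The idea it
models: mask the root value like an SPTE so stray non-PA bits are dropped,
look up the backing struct by physical address, and return NULL for roots
that have no associated kvm_mmu_page (PAE roots today, the "dummy" root later
in this series).  The types, the PA mask, and the lookup table are simplified
stand-ins chosen for illustration only.

/*
 * Illustrative model only -- simplified stand-ins for KVM's types.
 * Build with: cc -std=c11 -Wall root_to_sp_sketch.c
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t hpa_t;

struct kvm_mmu_page {
        hpa_t spt_pa;           /* PA of the page-table page this sp backs */
        bool guest_mode;
};

/* Simplified stand-in for the SPTE physical-address mask. */
#define SPTE_PA_MASK 0x000ffffffffff000ULL

/* Toy registry standing in for the pfn -> struct kvm_mmu_page lookup. */
static struct kvm_mmu_page registry[] = {
        { .spt_pa = 0x1000, .guest_mode = false },
        { .spt_pa = 0x2000, .guest_mode = true  },
};

static struct kvm_mmu_page *to_shadow_page(hpa_t pa)
{
        for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); i++)
                if (registry[i].spt_pa == pa)
                        return &registry[i];
        return NULL;            /* "special" roots have no backing sp */
}

static struct kvm_mmu_page *root_to_sp(hpa_t root)
{
        /* Treat the root like an SPTE so non-PA bits are dropped. */
        return to_shadow_page(root & SPTE_PA_MASK);
}

int main(void)
{
        /* Mirrors the !sp check pattern in kvm_mmu_free_guest_mode_roots(). */
        hpa_t roots[] = { 0x1000, 0x2000 | 0x7 /* low flag bits */, 0x5000 };

        for (size_t i = 0; i < sizeof(roots) / sizeof(roots[0]); i++) {
                struct kvm_mmu_page *sp = root_to_sp(roots[i]);

                if (!sp || sp->guest_mode)
                        printf("root %#llx: free (no sp, or guest_mode)\n",
                               (unsigned long long)roots[i]);
                else
                        printf("root %#llx: keep\n",
                               (unsigned long long)roots[i]);
        }
        return 0;
}

The NULL return is what lets callers such as kvm_mmu_free_guest_mode_roots()
and is_obsolete_root() handle PAE roots, and eventually the dummy root, with
a single !sp check instead of special-casing each root type.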