From patchwork Sun Oct 30 06:22:48 2022
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 12865
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Sean Christopherson
Subject: [PATCH v10 047/108] KVM: x86/tdp_mmu: Don't zap private pages for unsupported cases
Date: Sat, 29 Oct 2022 23:22:48 -0700
Message-Id: <9e8346b692eb377576363a028c3688c66f3c0bfe.1667110240.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1

From: Sean Christopherson
TDX architecturally supports only the write-back (WB) memory type for
private memory, so a (virtualized) memory type change doesn't make sense
for private memory.  Also, page migration isn't supported for TDX yet.
(TDX architecturally supports page migration; it is a KVM and kernel
implementation issue.)

For memory type changes (MTRR virtualization and LAPIC page mapping
changes), pages are zapped by kvm_zap_gfn_range().  On the next KVM page
fault, an SPTE with the new memory type for the page is populated.  For
page migration, pages are zapped by the mmu notifier.  On the next KVM
page fault, the newly migrated page is populated.  Don't zap private
pages on unmapping for those two cases.

When deleting/moving a KVM memory slot, zap private pages; this is
typically VM teardown.  Don't invalidate private page tables, i.e. zap
only leaf SPTEs for a KVM MMU that has a shared bit mask.  The existing
kvm_tdp_mmu_invalidate_all_roots() relies on role.invalid with the read
lock of mmu_lock so that other vCPUs can operate on the KVM MMU
concurrently.  It marks the root page table invalid and zaps the SPTEs of
the root page tables.  The TDX module doesn't allow a protected root page
table to be unlinked from the hardware and a new one allocated for it,
i.e. a protected root page table can't be replaced.  Instead, zap only
leaf SPTEs for a KVM MMU with a shared bit mask set.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu.c     | 85 ++++++++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/tdp_mmu.c | 24 ++++++++---
 arch/x86/kvm/mmu/tdp_mmu.h |  5 ++-
 3 files changed, 103 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index faf69774c7ce..0237e143299c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1577,8 +1577,38 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
-		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
+	if (is_tdp_mmu_enabled(kvm)) {
+		bool zap_private = false;
+
+		if (kvm_slot_can_be_private(range->slot)) {
+			if (range->flags & KVM_GFN_RANGE_FLAGS_RESTRICTED_MEM)
+				/*
+				 * For a private slot, the callback is triggered
+				 * via falloc.  The mode can be allocation or
+				 * punch hole.  Because private-shared conversion
+				 * is done via
+				 * KVM_MEMORY_ENCRYPT_REG/UNREG_REGION, we can
+				 * ignore the request from restrictedmem.
+				 */
+				return flush;
+			else if (range->flags & KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR) {
+				if (range->attr == KVM_MEM_ATTR_SHARED)
+					zap_private = true;
+				else {
+					WARN_ON_ONCE(range->attr != KVM_MEM_ATTR_PRIVATE);
+					zap_private = false;
+				}
+			} else
+				/*
+				 * kvm_unmap_gfn_range() is called via the mmu
+				 * notifier.  Page migration for private pages
+				 * isn't supported yet, so don't zap private
+				 * pages.
+				 */
+				zap_private = false;
+		}
+		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush, zap_private);
+	}
 
 	return flush;
 }
@@ -6066,11 +6096,48 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
 	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
 }
 
+static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	bool flush = false;
+
+	write_lock(&kvm->mmu_lock);
+
+	/*
+	 * Zapping non-leaf SPTEs, a.k.a. not-last SPTEs, isn't required, worst
+	 * case scenario we'll have unused shadow pages lying around until they
+	 * are recycled due to age or when the VM is destroyed.
+	 */
+	if (is_tdp_mmu_enabled(kvm)) {
+		struct kvm_gfn_range range = {
+			.slot = slot,
+			.start = slot->base_gfn,
+			.end = slot->base_gfn + slot->npages,
+			.may_block = false,
+		};
+
+		/*
+		 * This handles both private and shared GFNs.
+		 * All private pages should be zapped on memslot deletion.
+		 */
+		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush, true);
+	} else {
+		flush = slot_handle_level(kvm, slot, __kvm_zap_rmap, PG_LEVEL_4K,
+					  KVM_MAX_HUGEPAGE_LEVEL, true);
+	}
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
+
+	write_unlock(&kvm->mmu_lock);
+}
+
 static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
 			struct kvm_memory_slot *slot,
 			struct kvm_page_track_notifier_node *node)
 {
-	kvm_mmu_zap_all_fast(kvm);
+	if (kvm_gfn_shared_mask(kvm))
+		kvm_mmu_zap_memslot(kvm, slot);
+	else
+		kvm_mmu_zap_all_fast(kvm);
 }
 
 int kvm_mmu_init_vm(struct kvm *kvm)
@@ -6173,8 +6240,18 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+			/*
+			 * zap_private = true.  Zap both private/shared pages.
+			 *
+			 * kvm_zap_gfn_range() is used when the PAT memory type
+			 * was changed.  On the next kvm page fault, the SPTE is
+			 * repopulated with the updated entry.
+			 * Because only WB is supported for private pages, there
+			 * is nothing to do for private pages.
+			 */
 			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
-						      gfn_end, true, flush);
+						      gfn_end, true, flush,
+						      true);
 	}
 
 	if (flush)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b2f56110d62d..85d990ec149e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -948,7 +948,8 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
  * operation can cause a soft lockup.
  */
 static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
-			      gfn_t start, gfn_t end, bool can_yield, bool flush)
+			      gfn_t start, gfn_t end, bool can_yield, bool flush,
+			      bool zap_private)
 {
 	struct tdp_iter iter;
 
@@ -956,6 +957,10 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	WARN_ON_ONCE(zap_private && !is_private_sp(root));
+	if (!zap_private && is_private_sp(root))
+		return false;
+
 	rcu_read_lock();
 
 	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) {
@@ -988,12 +993,13 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
  * more SPTEs were zapped since the MMU lock was last acquired.
  */
 bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
-			   bool can_yield, bool flush)
+			   bool can_yield, bool flush, bool zap_private)
 {
 	struct kvm_mmu_page *root;
 
 	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
-		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);
+		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush,
+					  zap_private && is_private_sp(root));
 
 	return flush;
 }
@@ -1053,6 +1059,12 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
+		/*
+		 * Skip private roots since a private page table
+		 * is only torn down when the VM is destroyed.
+		 */
+		if (is_private_sp(root))
+			continue;
 		if (!root->role.invalid &&
 		    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
 			root->role.invalid = true;
@@ -1245,11 +1257,13 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return ret;
 }
 
+/* Used by mmu notifier via kvm_unmap_gfn_range() */
 bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
-				 bool flush)
+				 bool flush, bool zap_private)
 {
 	return kvm_tdp_mmu_zap_leafs(kvm, range->slot->as_id, range->start,
-				     range->end, range->may_block, flush);
+				     range->end, range->may_block, flush,
+				     zap_private);
 }
 
 typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index c163f7cc23ca..c98c7df449a8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -16,7 +16,8 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			  bool shared);
 
 bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start,
-			   gfn_t end, bool can_yield, bool flush);
+			   gfn_t end, bool can_yield, bool flush,
+			   bool zap_private);
 bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
 void kvm_tdp_mmu_zap_all(struct kvm *kvm);
 void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
@@ -25,7 +26,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm);
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 
 bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
-				 bool flush);
+				 bool flush, bool zap_private);
 bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
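
For readers following the zap-policy change without the rest of the series, here is a
minimal, self-contained sketch of the decision logic that the new zap_private parameter
encodes in kvm_unmap_gfn_range() above.  The enum, struct, and function names below are
simplified stand-ins invented for illustration, not the kernel's actual definitions; only
the branch structure mirrors the hunk in the patch.

/*
 * Illustrative sketch only (hypothetical types, not kernel API).
 * Policy mirrored from the patch:
 *  - restrictedmem callback: ignore; conversion is driven by
 *    KVM_MEMORY_ENCRYPT_REG/UNREG_REGION instead.
 *  - attribute change to shared: zap private SPTEs.
 *  - attribute change to private: keep private SPTEs.
 *  - mmu notifier (e.g. page migration): keep private SPTEs,
 *    private page migration isn't supported yet.
 */
#include <stdbool.h>

enum gfn_range_source {			/* stands in for range->flags */
	RANGE_FROM_RESTRICTEDMEM,
	RANGE_SET_MEM_ATTR,
	RANGE_FROM_MMU_NOTIFIER,
};

enum mem_attr { MEM_ATTR_SHARED, MEM_ATTR_PRIVATE };	/* stands in for range->attr */

struct zap_decision {
	bool handle_range;	/* zap anything at all? */
	bool zap_private;	/* also zap private SPTEs? */
};

static struct zap_decision decide_zap(enum gfn_range_source src, enum mem_attr attr)
{
	struct zap_decision d = { .handle_range = true, .zap_private = false };

	switch (src) {
	case RANGE_FROM_RESTRICTEDMEM:
		/* Conversion goes through KVM_MEMORY_ENCRYPT_REG/UNREG_REGION. */
		d.handle_range = false;
		break;
	case RANGE_SET_MEM_ATTR:
		/* Private -> shared conversion must drop the private mapping. */
		d.zap_private = (attr == MEM_ATTR_SHARED);
		break;
	case RANGE_FROM_MMU_NOTIFIER:
		/* Page migration of private pages isn't supported yet. */
		d.zap_private = false;
		break;
	}
	return d;
}

For example, decide_zap(RANGE_SET_MEM_ATTR, MEM_ATTR_SHARED) yields zap_private = true,
matching the KVM_MEM_ATTR_SHARED branch of the patch, while a notifier-sourced range keeps
private SPTEs intact.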