Message ID | 20221202061347.1070246-6-chao.p.peng@linux.intel.com |
---|---|
State | New |
Headers |
From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, Jonathan Corbet <corbet@lwn.net>,
    Sean Christopherson <seanjc@google.com>, Vitaly Kuznetsov <vkuznets@redhat.com>,
    Wanpeng Li <wanpengli@tencent.com>, Jim Mattson <jmattson@google.com>,
    Joerg Roedel <joro@8bytes.org>, Thomas Gleixner <tglx@linutronix.de>,
    Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
    Arnd Bergmann <arnd@arndb.de>, Naoya Horiguchi <naoya.horiguchi@nec.com>,
    Miaohe Lin <linmiaohe@huawei.com>, x86@kernel.org,
    "H. Peter Anvin" <hpa@zytor.com>, Hugh Dickins <hughd@google.com>,
    Jeff Layton <jlayton@kernel.org>, "J. Bruce Fields" <bfields@fieldses.org>,
    Andrew Morton <akpm@linux-foundation.org>, Shuah Khan <shuah@kernel.org>,
    Mike Rapoport <rppt@kernel.org>, Steven Price <steven.price@arm.com>,
    "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>,
    Vlastimil Babka <vbabka@suse.cz>, Vishal Annapurve <vannapurve@google.com>,
    Yu Zhang <yu.c.zhang@linux.intel.com>, Chao Peng <chao.p.peng@linux.intel.com>,
    "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, luto@kernel.org,
    jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
    david@redhat.com, aarcange@redhat.com, ddutile@redhat.com,
    dhildenb@redhat.com, Quentin Perret <qperret@google.com>, tabba@google.com,
    Michael Roth <michael.roth@amd.com>, mhocko@suse.com, wei.w.wang@intel.com
Subject: [PATCH v10 5/9] KVM: Use gfn instead of hva for mmu_notifier_retry
Date: Fri, 2 Dec 2022 14:13:43 +0800
Message-Id: <20221202061347.1070246-6-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> |
Series | KVM: mm: fd-based approach for supporting KVM |
Commit Message
Chao Peng
Dec. 2, 2022, 6:13 a.m. UTC
Currently in the mmu_notifier invalidate path, the hva range is recorded
and then checked against by mmu_invalidate_retry_hva() in the page fault
handling path. However, for the soon-to-be-introduced private memory, a
page fault may not have an hva associated with it, so checking the gfn
(gpa) makes more sense.

For existing hva-based shared memory, gfn is expected to work as well. The
only downside is that when multiple gfns alias a single hva (e.g., two
memslots backed by the same host buffer), the current algorithm of
checking multiple ranges could result in a much larger range being
rejected. Such aliasing should be uncommon, so the impact is expected to
be small.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
---
arch/x86/kvm/mmu/mmu.c | 8 +++++---
include/linux/kvm_host.h | 33 +++++++++++++++++++++------------
virt/kvm/kvm_main.c | 32 +++++++++++++++++++++++---------
3 files changed, 49 insertions(+), 24 deletions(-)
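For orientation before the review thread and the diff: mmu_invalidate_retry_gfn() participates in KVM's standard invalidation/retry handshake. The sketch below shows the fault-handler side of that handshake in simplified form; resolve_pfn() and install_mapping() are hypothetical placeholders, not functions from this patch or from KVM:

/*
 * Simplified sketch of the fault-side retry handshake after this patch.
 * resolve_pfn() and install_mapping() are illustrative placeholders; only
 * the mmu_invalidate_* names come from the patch.
 */
static int sketch_page_fault(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm *kvm = vcpu->kvm;
	unsigned long mmu_seq;
	kvm_pfn_t pfn;

retry:
	/* Snapshot the sequence count before walking host page tables. */
	mmu_seq = kvm->mmu_invalidate_seq;
	smp_rmb();

	/* May sleep/fault; mmu_lock cannot be held here. */
	pfn = resolve_pfn(vcpu, gfn);

	write_lock(&kvm->mmu_lock);
	/*
	 * If an invalidation ran (or is still running) and covered this gfn,
	 * the pfn may be stale: drop the lock and start over rather than
	 * installing a stale mapping.
	 */
	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
		write_unlock(&kvm->mmu_lock);
		goto retry;
	}
	install_mapping(vcpu, gfn, pfn);
	write_unlock(&kvm->mmu_lock);
	return 0;
}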
Comments
Hi Chao,

On Fri, Dec 2, 2022 at 6:19 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
>
> Currently in the mmu_notifier invalidate path, the hva range is recorded
> and then checked against by mmu_invalidate_retry_hva() in the page fault
> handling path. ...
>
> ...
>
> @@ -1974,10 +1973,20 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
>  	 * that might be being invalidated. Note that it may include some false

nit: "might be" (or) "is being"

>  	 * positives, due to shortcuts when handing concurrent invalidations.

nit: handling

>  	 */
> -	if (unlikely(kvm->mmu_invalidate_in_progress) &&
> -	    hva >= kvm->mmu_invalidate_range_start &&
> -	    hva < kvm->mmu_invalidate_range_end)
> -		return 1;
> +	if (unlikely(kvm->mmu_invalidate_in_progress)) {
> +		/*
> +		 * Dropping mmu_lock after bumping mmu_invalidate_in_progress
> +		 * but before updating the range is a KVM bug.
> +		 */
> +		if (WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA ||
> +				 kvm->mmu_invalidate_range_end == INVALID_GPA))

INVALID_GPA is an x86-specific define in
arch/x86/include/asm/kvm_host.h, so this doesn't build on other
architectures. The obvious fix is to move it to
include/linux/kvm_host.h.

Cheers,
/fuad

> ...
On Mon, Dec 05, 2022 at 09:23:49AM +0000, Fuad Tabba wrote:
> Hi Chao,
>
> On Fri, Dec 2, 2022 at 6:19 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
> >
> > ...
> >  	 * that might be being invalidated. Note that it may include some false
>
> nit: "might be" (or) "is being"
>
> >  	 * positives, due to shortcuts when handing concurrent invalidations.
>
> nit: handling

Both are in existing code, but I can fix them here as well.

> > +		if (WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA ||
> > +				 kvm->mmu_invalidate_range_end == INVALID_GPA))
>
> INVALID_GPA is an x86-specific define in
> arch/x86/include/asm/kvm_host.h, so this doesn't build on other
> architectures. The obvious fix is to move it to
> include/linux/kvm_host.h.

Hmm, INVALID_GPA is defined as ZERO for x86; I'm not 100% confident it is
the correct choice for other architectures, but after searching, it has not
been used by any other architecture, so it should be safe to make it common.

Thanks,
Chao

> Cheers,
> /fuad
Hi,

On Tue, Dec 6, 2022 at 12:01 PM Chao Peng <chao.p.peng@linux.intel.com> wrote:
>
> On Mon, Dec 05, 2022 at 09:23:49AM +0000, Fuad Tabba wrote:
> > ...
> > nit: "might be" (or) "is being"
> >
> > ...
> > nit: handling
>
> Both are in existing code, but I can fix them here as well.

That was just a nit, please feel free to ignore it, especially if it
might cause headaches in the future with merges.

> > INVALID_GPA is an x86-specific define in
> > arch/x86/include/asm/kvm_host.h, so this doesn't build on other
> > architectures. The obvious fix is to move it to
> > include/linux/kvm_host.h.
>
> Hmm, INVALID_GPA is defined as ZERO for x86; I'm not 100% confident it is
> the correct choice for other architectures, but after searching, it has not
> been used by any other architecture, so it should be safe to make it common.

With this fixed,

Reviewed-by: Fuad Tabba <tabba@google.com>

And the necessary work to port to arm64 (on qemu/arm64):

Tested-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad
On Tue, Dec 06, 2022 at 07:56:23PM +0800, Chao Peng <chao.p.peng@linux.intel.com> wrote:
> > > +		if (WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA ||
> > > +				 kvm->mmu_invalidate_range_end == INVALID_GPA))
> >
> > INVALID_GPA is an x86-specific define in
> > arch/x86/include/asm/kvm_host.h, so this doesn't build on other
> > architectures. The obvious fix is to move it to
> > include/linux/kvm_host.h.
>
> Hmm, INVALID_GPA is defined as ZERO for x86; I'm not 100% confident it is
> the correct choice for other architectures, but after searching, it has not
> been used by any other architecture, so it should be safe to make it common.

INVALID_GPA is defined as all bits 1. Please note the "~" (tilde):

#define INVALID_GPA (~(gpa_t)0)

--
Isaku Yamahata <isaku.yamahata@gmail.com>
On Tue, Dec 06, 2022 at 10:34:11PM -0800, Isaku Yamahata wrote:
> On Tue, Dec 06, 2022 at 07:56:23PM +0800, Chao Peng <chao.p.peng@linux.intel.com> wrote:
> > ...
> > Hmm, INVALID_GPA is defined as ZERO for x86; ...
>
> INVALID_GPA is defined as all bits 1. Please note the "~" (tilde):
>
> #define INVALID_GPA (~(gpa_t)0)

Thanks for pointing that out. Moving it to include/linux/kvm_host.h still
looks like the right thing to do.

Chao

> --
> Isaku Yamahata <isaku.yamahata@gmail.com>
On Tue, Dec 06, 2022 at 03:48:50PM +0000, Fuad Tabba wrote:
...
> > > INVALID_GPA is an x86-specific define in
> > > arch/x86/include/asm/kvm_host.h, so this doesn't build on other
> > > architectures. The obvious fix is to move it to
> > > include/linux/kvm_host.h.
> >
> > Hmm, INVALID_GPA is defined as ZERO for x86; ...

As Yu posted in this patch:
https://lore.kernel.org/all/20221209023622.274715-1-yu.c.zhang@linux.intel.com/
there is a GPA_INVALID in include/linux/kvm_types.h, and I see ARM has
already been using it, so that sounds like exactly what I need.

Chao

> With this fixed,
>
> Reviewed-by: Fuad Tabba <tabba@google.com>
> And the necessary work to port to arm64 (on qemu/arm64):
> Tested-by: Fuad Tabba <tabba@google.com>
>
> Cheers,
> /fuad
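For reference, both macros discussed here expand to an all-ones gpa_t. The x86 definition is the one Isaku quoted above; the generic one is assumed to sit in include/linux/kvm_types.h as of the kernel tree this series targets:

/* arch/x86/include/asm/kvm_host.h */
#define INVALID_GPA	(~(gpa_t)0)

/* include/linux/kvm_types.h (already used by arm64, per the reply above) */
#define GPA_INVALID	(~(gpa_t)0)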
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4736d7849c60..e2c70b5afa3e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4259,7 +4259,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 		return true;
 
 	return fault->slot &&
-	       mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
+	       mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn);
 }
 
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -6098,7 +6098,9 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
 	write_lock(&kvm->mmu_lock);
 
-	kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end);
+	kvm_mmu_invalidate_begin(kvm);
+
+	kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end);
 
 	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
 
@@ -6112,7 +6114,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
 						   gfn_end - gfn_start);
 
-	kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end);
+	kvm_mmu_invalidate_end(kvm);
 
 	write_unlock(&kvm->mmu_lock);
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 02347e386ea2..3d69484d2704 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -787,8 +787,8 @@ struct kvm {
 	struct mmu_notifier mmu_notifier;
 	unsigned long mmu_invalidate_seq;
 	long mmu_invalidate_in_progress;
-	unsigned long mmu_invalidate_range_start;
-	unsigned long mmu_invalidate_range_end;
+	gfn_t mmu_invalidate_range_start;
+	gfn_t mmu_invalidate_range_end;
 #endif
 	struct list_head devices;
 	u64 manual_dirty_log_protect;
@@ -1389,10 +1389,9 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
 void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 #endif
 
-void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
-			      unsigned long end);
-void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
-			    unsigned long end);
+void kvm_mmu_invalidate_begin(struct kvm *kvm);
+void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end);
+void kvm_mmu_invalidate_end(struct kvm *kvm);
 
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
@@ -1963,9 +1962,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq)
 	return 0;
 }
 
-static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
+static inline int mmu_invalidate_retry_gfn(struct kvm *kvm,
 					   unsigned long mmu_seq,
-					   unsigned long hva)
+					   gfn_t gfn)
 {
 	lockdep_assert_held(&kvm->mmu_lock);
 	/*
@@ -1974,10 +1973,20 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
 	 * that might be being invalidated. Note that it may include some false
 	 * positives, due to shortcuts when handing concurrent invalidations.
 	 */
-	if (unlikely(kvm->mmu_invalidate_in_progress) &&
-	    hva >= kvm->mmu_invalidate_range_start &&
-	    hva < kvm->mmu_invalidate_range_end)
-		return 1;
+	if (unlikely(kvm->mmu_invalidate_in_progress)) {
+		/*
+		 * Dropping mmu_lock after bumping mmu_invalidate_in_progress
+		 * but before updating the range is a KVM bug.
+		 */
+		if (WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA ||
+				 kvm->mmu_invalidate_range_end == INVALID_GPA))
+			return 1;
+
+		if (gfn >= kvm->mmu_invalidate_range_start &&
+		    gfn < kvm->mmu_invalidate_range_end)
+			return 1;
+	}
+
 	if (kvm->mmu_invalidate_seq != mmu_seq)
 		return 1;
 	return 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b882eb2c76a2..ad55dfbc75d7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -540,9 +540,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
 
 typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
 
-typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
-			     unsigned long end);
-
+typedef void (*on_lock_fn_t)(struct kvm *kvm);
 typedef void (*on_unlock_fn_t)(struct kvm *kvm);
 
 struct kvm_hva_range {
@@ -628,7 +626,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 			locked = true;
 			KVM_MMU_LOCK(kvm);
 			if (!IS_KVM_NULL_FN(range->on_lock))
-				range->on_lock(kvm, range->start, range->end);
+				range->on_lock(kvm);
+
 			if (IS_KVM_NULL_FN(range->handler))
 				break;
 		}
@@ -715,8 +714,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
 	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
 }
 
-void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
-			      unsigned long end)
+void kvm_mmu_invalidate_begin(struct kvm *kvm)
 {
 	/*
 	 * The count increase must become visible at unlock time as no
@@ -724,6 +722,17 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
 	 * count is also read inside the mmu_lock critical section.
 	 */
 	kvm->mmu_invalidate_in_progress++;
+
+	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
+		kvm->mmu_invalidate_range_start = INVALID_GPA;
+		kvm->mmu_invalidate_range_end = INVALID_GPA;
+	}
+}
+
+void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress);
+
 	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
 		kvm->mmu_invalidate_range_start = start;
 		kvm->mmu_invalidate_range_end = end;
@@ -744,6 +753,12 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
 	}
 }
 
+static bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	kvm_mmu_invalidate_range_add(kvm, range->start, range->end);
+	return kvm_unmap_gfn_range(kvm, range);
+}
+
 static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
 {
@@ -752,7 +767,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 		.start		= range->start,
 		.end		= range->end,
 		.pte		= __pte(0),
-		.handler	= kvm_unmap_gfn_range,
+		.handler	= kvm_mmu_unmap_gfn_range,
 		.on_lock	= kvm_mmu_invalidate_begin,
 		.on_unlock	= kvm_arch_guest_memory_reclaimed,
 		.flush_on_ret	= true,
@@ -791,8 +806,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	return 0;
 }
 
-void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
-			    unsigned long end)
+void kvm_mmu_invalidate_end(struct kvm *kvm)
 {
 	/*
 	 * This sequence increase will notify the kvm page fault that
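On the invalidation side, kvm_mmu_invalidate_begin() no longer takes a range; callers report the range separately via kvm_mmu_invalidate_range_add(), which is what lets the mmu_notifier path add the range per handler invocation while holding mmu_lock. A hypothetical caller following the kvm_zap_gfn_range() pattern from the diff above would look roughly like this sketch (zap_range() is an illustrative placeholder, not a real KVM function):

static void sketch_invalidate(struct kvm *kvm, gfn_t start, gfn_t end)
{
	write_lock(&kvm->mmu_lock);

	/*
	 * Bump the in-progress count; for the first in-flight invalidation,
	 * the tracked range is reset to INVALID_GPA (i.e. empty).
	 */
	kvm_mmu_invalidate_begin(kvm);

	/* Report the gfn range this invalidation actually covers. */
	kvm_mmu_invalidate_range_add(kvm, start, end);

	zap_range(kvm, start, end);

	/* Bump the sequence count so concurrent page faults retry. */
	kvm_mmu_invalidate_end(kvm);

	write_unlock(&kvm->mmu_lock);
}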