| Message ID | 20230526234435.662652-6-yuzhao@google.com |
|---|---|
| State | New |
| Series | mm/kvm: locklessly clear the accessed bit |
Commit Message
Yu Zhao
May 26, 2023, 11:44 p.m. UTC
Implement kvm_arch_test_clear_young() to support the fast path in
mmu_notifier_ops->test_clear_young().
It focuses on a simple case, i.e., hardware sets the accessed bit in
KVM PTEs and VMs are not protected, where it can rely on RCU and
cmpxchg to safely clear the accessed bit without taking
kvm->mmu_lock. Complex cases fall back to the existing slow path
where kvm->mmu_lock is then taken.
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  6 ++++++
 arch/arm64/kvm/mmu.c              | 36 +++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)
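
The fast path described in the commit message boils down to one lockless pattern: read the PTE, clear the accessed bit in a copy, and publish the copy with cmpxchg while RCU keeps the page table from being freed underneath. A minimal sketch of that pattern, with illustrative names (PTE_AF and the bare u64 pointer are stand-ins, not the patch's API):

    /*
     * Sketch only: clear the accessed bit of a PTE without a lock.
     * The caller holds rcu_read_lock() so the table can't be freed;
     * if the PTE changes concurrently, cmpxchg fails and the bit is
     * simply left set for the next scan -- no accuracy is lost.
     */
    static bool pte_test_clear_young(u64 *ptep)
    {
        u64 old = READ_ONCE(*ptep);

        if (!(old & PTE_AF))        /* already clear: not young */
            return false;

        return cmpxchg(ptep, old, old & ~PTE_AF) == old;
    }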
Comments
Hi Yu,

On Fri, May 26, 2023 at 05:44:30PM -0600, Yu Zhao wrote:
> Implement kvm_arch_test_clear_young() to support the fast path in
> mmu_notifier_ops->test_clear_young().
>
> It focuses on a simple case, i.e., hardware sets the accessed bit in
> KVM PTEs and VMs are not protected, where it can rely on RCU and
> cmpxchg to safely clear the accessed bit without taking
> kvm->mmu_lock. Complex cases fall back to the existing slow path
> where kvm->mmu_lock is then taken.
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h |  6 ++++++
>  arch/arm64/kvm/mmu.c              | 36 +++++++++++++++++++++++++++++++
>  2 files changed, 42 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7e7e19ef6993..da32b0890716 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1113,4 +1113,10 @@ static inline void kvm_hyp_reserve(void) { }
>  void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
>  bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
>
> +#define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
> +static inline bool kvm_arch_has_test_clear_young(void)
> +{
> +	return cpu_has_hw_af() && !is_protected_kvm_enabled();
> +}

I would *strongly* suggest you consider supporting test_clear_young on
systems that do software Access Flag management. FEAT_HAFDBS is an
*optional* extension to the architecture, so we're going to support
software AF management for a very long time in KVM. It is also a valid
fallback option in the case of hardware errata which render HAFDBS
broken. So, we should expect (and support) systems of all shapes and
sizes that do software AF. I'm sure we'll hear about more in the
not-too-distant future...

For future reference (even though I'm suggesting you support software
AF), decisions such as these need an extremely verbose comment
describing the rationale behind the decision.

> +
>  #endif /* __ARM64_KVM_HOST_H__ */
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c3b3e2afe26f..26a8d955b49c 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c

Please do not implement page table walkers outside of hyp/pgtable.c.

> @@ -1678,6 +1678,42 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  				     range->start << PAGE_SHIFT);
>  }
>
> +static int stage2_test_clear_young(const struct kvm_pgtable_visit_ctx *ctx,
> +				   enum kvm_pgtable_walk_flags flags)
> +{
> +	kvm_pte_t new = ctx->old & ~KVM_PTE_LEAF_ATTR_LO_S2_AF;
> +
> +	VM_WARN_ON_ONCE(!page_count(virt_to_page(ctx->ptep)));

This sort of sanity checking is a bit excessive. Isn't there a risk of
false negatives here too? IOW, if we tragically mess up RCU in the page
table code, what's stopping a prematurely freed page from being
allocated to another user?

> +	if (!kvm_pte_valid(new))
> +		return 0;
> +
> +	if (new == ctx->old)
> +		return 0;
> +
> +	if (kvm_should_clear_young(ctx->arg, ctx->addr / PAGE_SIZE))
> +		stage2_try_set_pte(ctx, new);
> +
> +	return 0;
> +}
> +
> +bool kvm_arch_test_clear_young(struct kvm *kvm, struct kvm_gfn_range *range)
> +{
> +	u64 start = range->start * PAGE_SIZE;
> +	u64 end = range->end * PAGE_SIZE;
> +	struct kvm_pgtable_walker walker = {
> +		.cb	= stage2_test_clear_young,
> +		.arg	= range,
> +		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_SHARED,
> +	};
> +
> +	BUILD_BUG_ON(is_hyp_code());

Delete this assertion.

> +	kvm_pgtable_walk(kvm->arch.mmu.pgt, start, end - start, &walker);
> +
> +	return false;
> +}
> +
>  phys_addr_t kvm_mmu_get_httbr(void)
>  {
>  	return __pa(hyp_pgtable->pgd);
> --
> 2.41.0.rc0.172.g3f132b7071-goog
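
As an example of the kind of rationale comment Oliver asks for above, the gate might carry something like the following. The wording is illustrative, not from the series:

    #define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
    static inline bool kvm_arch_has_test_clear_young(void)
    {
        /*
         * Lockless aging is only wired up when hardware manages the
         * Access Flag (FEAT_HAFDBS); with software AF management the
         * bit is only ever set from the stage-2 fault handler, and
         * that configuration has not been validated for the fast
         * path. Protected (pKVM) guests are excluded because their
         * stage-2 tables are owned by the hypervisor at EL2 and
         * cannot be walked locklessly from the host.
         */
        return cpu_has_hw_af() && !is_protected_kvm_enabled();
    }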
On Wed, May 31, 2023 at 1:56 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Yu,
>
> On Fri, May 26, 2023 at 05:44:30PM -0600, Yu Zhao wrote:
> > Implement kvm_arch_test_clear_young() to support the fast path in
> > mmu_notifier_ops->test_clear_young().
> >
> > It focuses on a simple case, i.e., hardware sets the accessed bit in
> > KVM PTEs and VMs are not protected, where it can rely on RCU and
> > cmpxchg to safely clear the accessed bit without taking
> > kvm->mmu_lock. Complex cases fall back to the existing slow path
> > where kvm->mmu_lock is then taken.
> >
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h |  6 ++++++
> >  arch/arm64/kvm/mmu.c              | 36 +++++++++++++++++++++++++++++++
> >  2 files changed, 42 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 7e7e19ef6993..da32b0890716 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1113,4 +1113,10 @@ static inline void kvm_hyp_reserve(void) { }
> >  void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> >  bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
> >
> > +#define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
> > +static inline bool kvm_arch_has_test_clear_young(void)
> > +{
> > +	return cpu_has_hw_af() && !is_protected_kvm_enabled();
> > +}
>
> I would *strongly* suggest you consider supporting test_clear_young on
> systems that do software Access Flag management. FEAT_HAFDBS is an
> *optional* extension to the architecture, so we're going to support
> software AF management for a very long time in KVM. It is also a valid
> fallback option in the case of hardware errata which render HAFDBS
> broken.

Hi Oliver,

It's not about willingness but resources. Ideally we want to make
everything perfect, but in reality, we can only move forward one step
at a time.

If I looked at your request from ARM's POV, I would agree with you.
But my goal is to lay the foundation for all architectures that could
benefit, so I may not be able to cover a lot for each architecture.
Specifically, I don't have the bandwidth to test the !FEAT_HAFDBS case
for ARM. So here are some options I could offer, ordered by my
preferences:

1. We proceed as it is for now. I *will* find someone from my team (or
   yours) to follow up -- this way we can make sure !FEAT_HAFDBS is
   well tested.
2. I drop the cpu_has_hw_af() check above. Not that I think there is
   much risk, I'm just trying to be cautious.
3. I drop the entire ARM support (and include the RISC-V support which
   I previously deprioritized). We revisit after the test is done.

Sounds reasonable?

> So, we should expect (and support) systems of all shapes and sizes that
> do software AF. I'm sure we'll hear about more in the not-too-distant
> future...
>
> For future reference (even though I'm suggesting you support software
> AF), decisions such as these need an extremely verbose comment
> describing the rationale behind the decision.
>
> > +
> >  #endif /* __ARM64_KVM_HOST_H__ */
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index c3b3e2afe26f..26a8d955b49c 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
>
> Please do not implement page table walkers outside of hyp/pgtable.c.
>
> > @@ -1678,6 +1678,42 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> >  				     range->start << PAGE_SHIFT);
> >  }
> >
> > +static int stage2_test_clear_young(const struct kvm_pgtable_visit_ctx *ctx,
> > +				   enum kvm_pgtable_walk_flags flags)
> > +{
> > +	kvm_pte_t new = ctx->old & ~KVM_PTE_LEAF_ATTR_LO_S2_AF;
> > +
> > +	VM_WARN_ON_ONCE(!page_count(virt_to_page(ctx->ptep)));
>
> This sort of sanity checking is a bit excessive. Isn't there a risk of
> false negatives here too? IOW, if we tragically mess up RCU in the page
> table code, what's stopping a prematurely freed page from being
> allocated to another user?

Yes, but from my aforementioned POV (the breadth I'm focusing on),
this is a good practice. I can live without this assertion if you feel
strongly about it.

> > +	if (!kvm_pte_valid(new))
> > +		return 0;
> > +
> > +	if (new == ctx->old)
> > +		return 0;
> > +
> > +	if (kvm_should_clear_young(ctx->arg, ctx->addr / PAGE_SIZE))
> > +		stage2_try_set_pte(ctx, new);
> > +
> > +	return 0;
> > +}
> > +
> > +bool kvm_arch_test_clear_young(struct kvm *kvm, struct kvm_gfn_range *range)
> > +{
> > +	u64 start = range->start * PAGE_SIZE;
> > +	u64 end = range->end * PAGE_SIZE;
> > +	struct kvm_pgtable_walker walker = {
> > +		.cb	= stage2_test_clear_young,
> > +		.arg	= range,
> > +		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_SHARED,
> > +	};
> > +
> > +	BUILD_BUG_ON(is_hyp_code());
>
> Delete this assertion.

Will do.
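
Option 2 in Yu's list above is small enough to sketch (again illustrative, not code from the series). Architecturally, clearing the AF stays coherent even without FEAT_HAFDBS: an access through a PTE whose AF is clear takes an Access Flag fault, which KVM already resolves by setting the bit, so the cost is one extra fault on the next access rather than a hardware update:

    #define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
    static inline bool kvm_arch_has_test_clear_young(void)
    {
        /* Option 2: gate only on protected VMs, not on FEAT_HAFDBS. */
        return !is_protected_kvm_enabled();
    }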
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7e7e19ef6993..da32b0890716 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1113,4 +1113,10 @@ static inline void kvm_hyp_reserve(void) { }
 void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
 bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
 
+#define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
+static inline bool kvm_arch_has_test_clear_young(void)
+{
+	return cpu_has_hw_af() && !is_protected_kvm_enabled();
+}
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c3b3e2afe26f..26a8d955b49c 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1678,6 +1678,42 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 				     range->start << PAGE_SHIFT);
 }
 
+static int stage2_test_clear_young(const struct kvm_pgtable_visit_ctx *ctx,
+				   enum kvm_pgtable_walk_flags flags)
+{
+	kvm_pte_t new = ctx->old & ~KVM_PTE_LEAF_ATTR_LO_S2_AF;
+
+	VM_WARN_ON_ONCE(!page_count(virt_to_page(ctx->ptep)));
+
+	if (!kvm_pte_valid(new))
+		return 0;
+
+	if (new == ctx->old)
+		return 0;
+
+	if (kvm_should_clear_young(ctx->arg, ctx->addr / PAGE_SIZE))
+		stage2_try_set_pte(ctx, new);
+
+	return 0;
+}
+
+bool kvm_arch_test_clear_young(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	u64 start = range->start * PAGE_SIZE;
+	u64 end = range->end * PAGE_SIZE;
+	struct kvm_pgtable_walker walker = {
+		.cb	= stage2_test_clear_young,
+		.arg	= range,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_SHARED,
+	};
+
+	BUILD_BUG_ON(is_hyp_code());
+
+	kvm_pgtable_walk(kvm->arch.mmu.pgt, start, end - start, &walker);
+
+	return false;
+}
+
 phys_addr_t kvm_mmu_get_httbr(void)
 {
 	return __pa(hyp_pgtable->pgd);
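
The generic side of the series is not shown on this page; the dispatch it implies might look roughly like the following. The function name and fallback shape here are assumptions based on the commit message, sketched only to show how the arch hook gates the lockless walk before the mmu_lock-taking path runs:

    /* Hypothetical generic-side dispatch -- names are assumptions. */
    static bool kvm_test_clear_young(struct kvm *kvm, struct kvm_gfn_range *range)
    {
        /* Fast path: lockless stage-2 walk under RCU, no mmu_lock. */
        if (kvm_arch_has_test_clear_young())
            return kvm_arch_test_clear_young(kvm, range);

        /* Slow path: the existing aging handler takes kvm->mmu_lock. */
        return kvm_age_gfn(kvm, range);
    }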