[4/9] KVM: x86/mmu: Rename MMU_WARN_ON() to KVM_MMU_WARN_ON()

Message ID 20230511235917.639770-5-seanjc@google.com
State New
Series KVM: x86/mmu: Clean up MMU_DEBUG and BUG/WARN usage

Commit Message

Sean Christopherson May 11, 2023, 11:59 p.m. UTC
  Rename MMU_WARN_ON() to make it super obvious that the assertions are
all about KVM's MMU, not the primary MMU.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c          | 4 ++--
 arch/x86/kvm/mmu/mmu_internal.h | 4 ++--
 arch/x86/kvm/mmu/spte.h         | 8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.c      | 8 ++++----
 4 files changed, 12 insertions(+), 12 deletions(-)
  

Comments

David Matlack May 12, 2023, 11:23 p.m. UTC | #1
On Thu, May 11, 2023 at 04:59:12PM -0700, Sean Christopherson wrote:
> Rename MMU_WARN_ON() to make it super obvious that the assertions are
> all about KVM's MMU, not the primary MMU.

I think adding KVM is a step in the right direction but I have 2
remaining problems with KVM_MMU_WARN_ON():

 - Reminds me of VM_WARN_ON(), which toggles between WARN_ON() and
   BUG_ON(), whereas KVM_MMU_WARN_ON() toggles between no-op and
   WARN_ON().

 - It's not obvious from the name that it's a no-op most of the time.

Naming is hard so I might just make things worse by trying but...

How about KVM_MMU_PROVE(condition). That directly pairs it with the new
CONFIG_KVM_PROVE_MMU(), makes it sufficiently different from
VM_WARN_ON() and WARN_ON() that readers will not make assumptions about
what's happening under the hood. Also "PROVE" sounds like a high bar
which conveys this might not always be enabled.

That also will allow us to convert this to a WARN_ON_ONCE() (my
suggestion on the other patch) without having to make the name any
longer.
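
For concreteness, a minimal sketch of what that could look like, assuming
the CONFIG_KVM_PROVE_MMU option proposed in this series and folding in the
WARN_ON_ONCE() suggestion; the definition is illustrative, not taken from
any posted patch:

  #ifdef CONFIG_KVM_PROVE_MMU
  /* Fire a one-time warning when the stated invariant does NOT hold. */
  #define KVM_MMU_PROVE(cond) WARN_ON_ONCE(!(cond))
  #else
  /* Type-check the expression but generate no code. */
  #define KVM_MMU_PROVE(cond) BUILD_BUG_ON_INVALID(cond)
  #endif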
  
Sean Christopherson May 12, 2023, 11:30 p.m. UTC | #2
On Fri, May 12, 2023, David Matlack wrote:
> On Thu, May 11, 2023 at 04:59:12PM -0700, Sean Christopherson wrote:
> > Rename MMU_WARN_ON() to make it super obvious that the assertions are
> > all about KVM's MMU, not the primary MMU.
> 
> I think adding KVM is a step in the right direction but I have 2
> remaining problems with KVM_MMU_WARN_ON():
> 
>  - Reminds me of VM_WARN_ON(), which toggles between WARN_ON() and
>    BUG_ON(), whereas KVM_MMU_WARN_ON() toggles between no-op and
>    WARN_ON().

No, VM_WARN_ON() bounces between WARN_ON() and nop, just like KVM_MMU_WARN_ON().
There's an extra bit of magic that adds a static assert that the code is valid
(which I can/should/will add), but the runtime behavior is a nop.

  #ifdef CONFIG_DEBUG_VM
  #define VM_WARN_ON(cond) (void)WARN_ON(cond)
  #else
  #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
  #endif

/*
 * BUILD_BUG_ON_INVALID() permits the compiler to check the validity of the
 * expression but avoids the generation of any code, even if that expression
 * has side-effects.
 */
#define BUILD_BUG_ON_INVALID(e) ((void)(sizeof((__force long)(e))))
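
A sketch of the static-assert addition Sean alludes to, reusing the same
BUILD_BUG_ON_INVALID() trick so the condition is still compile-checked when
MMU_DEBUG is off (assumed form, not part of the patch as posted):

  #ifdef MMU_DEBUG
  #define KVM_MMU_WARN_ON(x) WARN_ON(x)
  #else
  /* Validates the expression at compile time, emits no runtime code. */
  #define KVM_MMU_WARN_ON(x) BUILD_BUG_ON_INVALID(x)
  #endif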

>  - It's not obvious from the name that it's a no-op most of the time.
> 
> Naming is hard so I might just make things worse by trying but...
> 
> How about KVM_MMU_PROVE(condition). That directly pairs it with the new
> CONFIG_KVM_PROVE_MMU(), makes it sufficiently different from
> VM_WARN_ON() and WARN_ON() that readers will not make assumptions about
> what's happening under the hood. Also "PROVE" sounds like a high bar
> which conveys this might not always be enabled.

It inverts the checks though.  Context switching between "WARN_ON" and "ASSERT"
is hard enough, I don't want to add a third flavor.
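
Using an assertion from this patch to illustrate the inversion (the PROVE
form is hypothetical):

  KVM_MMU_WARN_ON(!is_shadow_present_pte(spte)); /* warn on the bad state */
  KVM_MMU_PROVE(is_shadow_present_pte(spte));    /* assert the good state */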

> That also will allow us to convert this to a WARN_ON_ONCE() (my
> suggestion on the other patch) without having to make the name any
> longer.
  
David Matlack May 12, 2023, 11:35 p.m. UTC | #3
On Fri, May 12, 2023 at 4:30 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, May 12, 2023, David Matlack wrote:
> > On Thu, May 11, 2023 at 04:59:12PM -0700, Sean Christopherson wrote:
> > > Rename MMU_WARN_ON() to make it super obvious that the assertions are
> > > all about KVM's MMU, not the primary MMU.
> >
> > I think adding KVM is a step in the right direction but I have 2
> > remaining problems with KVM_MMU_WARN_ON():
> >
> >  - Reminds me of VM_WARN_ON(), which toggles between WARN_ON() and
> >    BUG_ON(), whereas KVM_MMU_WARN_ON() toggles between no-op and
> >    WARN_ON().
>
> No, VM_WARN_ON() bounces between WARN_ON() and nop, just like KVM_MMU_WARN_ON().
> There's an extra bit of magic that adds a static assert that the code is valid
> (which I can/should/will add), but the runtime behavior is a nop.

Ah, you're right, I misread VM_WARN_ON().
  

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2b65a62fb953..240272b10ceb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1252,7 +1252,7 @@ static bool spte_clear_dirty(u64 *sptep)
 {
 	u64 spte = *sptep;
 
-	MMU_WARN_ON(!spte_ad_enabled(spte));
+	KVM_MMU_WARN_ON(!spte_ad_enabled(spte));
 	spte &= ~shadow_dirty_mask;
 	return mmu_spte_update(sptep, spte);
 }
@@ -1728,7 +1728,7 @@ static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
-	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
+	KVM_MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
 	hlist_del(&sp->hash_link);
 	list_del(&sp->link);
 	free_page((unsigned long)sp->spt);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 9ea80e4d463c..bb1649669bc9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -9,9 +9,9 @@ 
 #undef MMU_DEBUG
 
 #ifdef MMU_DEBUG
-#define MMU_WARN_ON(x) WARN_ON(x)
+#define KVM_MMU_WARN_ON(x) WARN_ON(x)
 #else
-#define MMU_WARN_ON(x) do { } while (0)
+#define KVM_MMU_WARN_ON(x) do { } while (0)
 #endif
 
 /* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1279db2eab44..83e6614f3720 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -265,13 +265,13 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
 
 static inline bool spte_ad_enabled(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return (spte & SPTE_TDP_AD_MASK) != SPTE_TDP_AD_DISABLED;
 }
 
 static inline bool spte_ad_need_write_protect(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	/*
 	 * This is benign for non-TDP SPTEs as SPTE_TDP_AD_ENABLED is '0',
 	 * and non-TDP SPTEs will never set these bits.  Optimize for 64-bit
@@ -282,13 +282,13 @@ static inline bool spte_ad_need_write_protect(u64 spte)
 
 static inline u64 spte_shadow_accessed_mask(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return spte_ad_enabled(spte) ? shadow_accessed_mask : 0;
 }
 
 static inline u64 spte_shadow_dirty_mask(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return spte_ad_enabled(spte) ? shadow_dirty_mask : 0;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 08340219c35a..6ef44d60ba2b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1545,8 +1545,8 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!is_shadow_present_pte(iter.old_spte))
 			continue;
 
-		MMU_WARN_ON(kvm_ad_enabled() &&
-			    spte_ad_need_write_protect(iter.old_spte));
+		KVM_MMU_WARN_ON(kvm_ad_enabled() &&
+				spte_ad_need_write_protect(iter.old_spte));
 
 		if (!(iter.old_spte & dbit))
 			continue;
@@ -1604,8 +1604,8 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!mask)
 			break;
 
-		MMU_WARN_ON(kvm_ad_enabled() &&
-			    spte_ad_need_write_protect(iter.old_spte));
+		KVM_MMU_WARN_ON(kvm_ad_enabled() &&
+				spte_ad_need_write_protect(iter.old_spte));
 
 		if (iter.level > PG_LEVEL_4K ||
 		    !(mask & (1UL << (iter.gfn - gfn))))