[v6,8/8] KVM: x86/mmu: WARN if TDP MMU SP disallows hugepage after being zapped

Message ID 20221019165618.927057-9-seanjc@google.com
State New
Series KVM: x86: Apply NX mitigation more precisely

Commit Message

Sean Christopherson Oct. 19, 2022, 4:56 p.m. UTC
Extend the accounting sanity check in kvm_recover_nx_huge_pages() to the
TDP MMU, i.e. verify that zapping a shadow page unaccounts the disallowed
NX huge page regardless of the MMU type.  Recovery runs while holding
mmu_lock for write and so it should be impossible to get false positives
on the WARN.

Suggested-by: Yan Zhao <yan.y.zhao@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
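
For reference, here is a minimal sketch of how the zap loop body reads once
this patch is applied. It is reconstructed from the diff below; the
list_first_entry() call is inferred from the hunk's context lines, and the
surrounding resched/flush handling is elided:

		sp = list_first_entry(&kvm->arch.possible_nx_huge_pages,
				      struct kvm_mmu_page,
				      possible_nx_huge_page_link);
		/* An SP should be on the list only while the flag is set. */
		WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
		if (is_tdp_mmu_page(sp))
			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
		else
			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
		/*
		 * Hoisted out of the legacy-MMU-only path: zapping must
		 * clear the flag for the TDP MMU as well.
		 */
		WARN_ON_ONCE(sp->nx_huge_page_disallowed);

Moving the check below the if/else relies on recovery holding mmu_lock for
write: both zap paths clear the flag synchronously under that lock, so a
still-set flag after the zap indicates an accounting bug, not a race.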

Comments

Yan Zhao Oct. 21, 2022, 5:41 a.m. UTC | #1
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>

On Wed, Oct 19, 2022 at 04:56:18PM +0000, Sean Christopherson wrote:
> Extend the accounting sanity check in kvm_recover_nx_huge_pages() to the
> TDP MMU, i.e. verify that zapping a shadow page unaccounts the disallowed
> NX huge page regardless of the MMU type.  Recovery runs while holding
> mmu_lock for write and so it should be impossible to get false positives
> on the WARN.
> 
> Suggested-by: Yan Zhao <yan.y.zhao@intel.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 14674c9e10f7..dfd1656232ad 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6864,12 +6864,11 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
>  				      struct kvm_mmu_page,
>  				      possible_nx_huge_page_link);
>  		WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
> -		if (is_tdp_mmu_page(sp)) {
> +		if (is_tdp_mmu_page(sp))
>  			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
> -		} else {
> +		else
>  			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
> -			WARN_ON_ONCE(sp->nx_huge_page_disallowed);
> -		}
> +		WARN_ON_ONCE(sp->nx_huge_page_disallowed);
>  
>  		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
>  			kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
> -- 
> 2.38.0.413.g74048e4d9e-goog
>

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 14674c9e10f7..dfd1656232ad 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6864,12 +6864,11 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
 				      struct kvm_mmu_page,
 				      possible_nx_huge_page_link);
 		WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
-		if (is_tdp_mmu_page(sp)) {
+		if (is_tdp_mmu_page(sp))
 			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
-		} else {
+		else
 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
-			WARN_ON_ONCE(sp->nx_huge_page_disallowed);
-		}
+		WARN_ON_ONCE(sp->nx_huge_page_disallowed);
 
 		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
 			kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);