[V3,03/14] kvm: x86/mmu: Check mmu->sync_page pointer in kvm_sync_page_check()

Message ID 20230216154115.710033-4-jiangshanlai@gmail.com
State New
Series kvm: x86/mmu: Share the same code to invalidate each vTLB entry

Commit Message

Lai Jiangshan Feb. 16, 2023, 3:41 p.m. UTC
  From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Check the mmu->sync_page pointer before it is called, to catch any
possible mistake.
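
For context, kvm_sync_page_check() guards the indirect call in
__kvm_sync_page(). A simplified sketch of the call site at this point
in the series (not the exact upstream code; details may vary across
kernel versions):

	static int __kvm_sync_page(struct kvm_vcpu *vcpu,
				   struct kvm_mmu_page *sp)
	{
		/*
		 * With the new check, a NULL ->sync_page trips the
		 * WARN_ON_ONCE() in kvm_sync_page_check() and fails
		 * the sync, instead of faulting on the indirect call
		 * below.
		 */
		if (!kvm_sync_page_check(vcpu, sp))
			return -1;

		return vcpu->arch.mmu->sync_page(vcpu, sp);
	}

A failed sync is already a handled error path (the caller zaps the
page when the sync fails), so returning false here surfaces the bug
safely.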

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ee2837ea18d4..69ab0d1bb0ec 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1940,7 +1940,7 @@  static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
 	 * reserved bits checks will be wrong, etc...
 	 */
-	if (WARN_ON_ONCE(sp->role.direct ||
+	if (WARN_ON_ONCE(sp->role.direct || !vcpu->arch.mmu->sync_page ||
 			 (sp->role.word ^ root_role.word) & ~sync_role_ign.word))
 		return false;