From patchwork Sat May 13 00:35:54 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 93464
Reply-To: Sean Christopherson
Date: Fri, 12 May 2023 17:35:54 -0700
In-Reply-To: <20230513003600.818142-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230513003600.818142-1-seanjc@google.com>
X-Mailer: git-send-email 2.40.1.606.ga4b1b128d6-goog
Message-ID: <20230513003600.818142-23-seanjc@google.com>
Subject: [PATCH v3 22/28] KVM: x86/mmu: Drop infrastructure for multiple page-track modes
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Zhenyu Wang, Zhi Wang
Cc: kvm@vger.kernel.org,
 intel-gvt-dev@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, Yan Zhao, Ben Gardon

Drop "support" for multiple page-track modes, as there is no evidence
that array-based and refcounted metadata is the optimal solution for
other modes, nor is there any evidence that other use cases, e.g. for
access-tracking, will be a good fit for the page-track machinery in
general.

E.g. one potential use case of access-tracking would be to prevent guest
access to poisoned memory (from the guest's perspective).  In that case,
the number of poisoned pages is likely to be a very small percentage of
the guest memory, and there is no need to reference count the number of
access-tracking users, i.e. expanding gfn_track[] for a new mode would be
grossly inefficient.  And for poisoned memory, host userspace would also
likely want to trap accesses, e.g. to inject #MC into the guest, and that
isn't currently supported by the page-track framework.

A better alternative for that poisoned page use case is likely a
variation of the proposed per-gfn attributes overlay (linked), which
would allow efficiently tracking the sparse set of poisoned pages, and by
default would exit to userspace on access.

Link: https://lore.kernel.org/all/Y2WB48kD0J4VGynX@google.com
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h       |  12 +--
 arch/x86/include/asm/kvm_page_track.h |  11 +--
 arch/x86/kvm/mmu/mmu.c                |  14 ++--
 arch/x86/kvm/mmu/page_track.c         | 111 ++++++++------------------
 arch/x86/kvm/mmu/page_track.h         |   3 +-
 drivers/gpu/drm/i915/gvt/kvmgt.c      |   4 +-
 6 files changed, 51 insertions(+), 104 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5ce06a75d3de..3dde3a11113a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -288,13 +288,13 @@ struct kvm_kernel_irq_routing_entry;
  * kvm_mmu_page_role tracks the properties of a shadow page (where shadow page
  * also includes TDP pages) to determine whether or not a page can be used in
  * the given MMU context.  This is a subset of the overall kvm_cpu_role to
- * minimize the size of kvm_memory_slot.arch.gfn_track, i.e. allows allocating
- * 2 bytes per gfn instead of 4 bytes per gfn.
+ * minimize the size of kvm_memory_slot.arch.gfn_write_track, i.e. allows
+ * allocating 2 bytes per gfn instead of 4 bytes per gfn.
  *
  * Upper-level shadow pages having gptes are tracked for write-protection via
- * gfn_track.  As above, gfn_track is a 16 bit counter, so KVM must not create
- * more than 2^16-1 upper-level shadow pages at a single gfn, otherwise
- * gfn_track will overflow and explosions will ensure.
+ * gfn_write_track.  As above, gfn_write_track is a 16 bit counter, so KVM must
+ * not create more than 2^16-1 upper-level shadow pages at a single gfn,
+ * otherwise gfn_write_track will overflow and explosions will ensue.
  *
  * A unique shadow page (SP) for a gfn is created if and only if an existing SP
  * cannot be reused.  The ability to reuse a SP is tracked by its role, which
@@ -1005,7 +1005,7 @@ struct kvm_lpage_info {
 struct kvm_arch_memory_slot {
        struct kvm_rmap_head *rmap[KVM_NR_PAGE_SIZES];
        struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
-       unsigned short *gfn_track[KVM_PAGE_TRACK_MAX];
+       unsigned short *gfn_write_track;
 };
 
 /*
diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index 61adb07b5927..9e4ee26d1779 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -4,17 +4,10 @@
 
 #include <linux/kvm_types.h>
 
-enum kvm_page_track_mode {
-       KVM_PAGE_TRACK_WRITE,
-       KVM_PAGE_TRACK_MAX,
-};
-
 void kvm_slot_page_track_add_page(struct kvm *kvm,
-                                 struct kvm_memory_slot *slot, gfn_t gfn,
-                                 enum kvm_page_track_mode mode);
+                                 struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_slot_page_track_remove_page(struct kvm *kvm,
-                                    struct kvm_memory_slot *slot, gfn_t gfn,
-                                    enum kvm_page_track_mode mode);
+                                    struct kvm_memory_slot *slot, gfn_t gfn);
 
 #ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
 /*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0d9fe54ecb01..8041f5747704 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -837,8 +837,7 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 
        /* the non-leaf shadow pages are keeping readonly. */
        if (sp->role.level > PG_LEVEL_4K)
-               return kvm_slot_page_track_add_page(kvm, slot, gfn,
-                                                   KVM_PAGE_TRACK_WRITE);
+               return kvm_slot_page_track_add_page(kvm, slot, gfn);
 
        kvm_mmu_gfn_disallow_lpage(slot, gfn);
 
@@ -884,8 +883,7 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
        slots = kvm_memslots_for_spte_role(kvm, sp->role);
        slot = __gfn_to_memslot(slots, gfn);
        if (sp->role.level > PG_LEVEL_4K)
-               return kvm_slot_page_track_remove_page(kvm, slot, gfn,
-                                                      KVM_PAGE_TRACK_WRITE);
+               return kvm_slot_page_track_remove_page(kvm, slot, gfn);
 
        kvm_mmu_gfn_allow_lpage(slot, gfn);
 }
@@ -2825,7 +2823,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
         * track machinery is used to write-protect upper-level shadow pages,
         * i.e. this guards the role.level == 4K assertion below!
         */
-       if (kvm_slot_page_track_is_active(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
+       if (kvm_slot_page_track_is_active(kvm, slot, gfn))
                return -EPERM;
 
        /*
@@ -4226,7 +4224,7 @@ static bool page_fault_handle_page_track(struct kvm_vcpu *vcpu,
         * guest is writing the page which is write tracked which can
         * not be fixed by page fault handler.
         */
-       if (kvm_slot_page_track_is_active(vcpu->kvm, fault->slot, fault->gfn, KVM_PAGE_TRACK_WRITE))
+       if (kvm_slot_page_track_is_active(vcpu->kvm, fault->slot, fault->gfn))
                return true;
 
        return false;
@@ -5461,8 +5459,8 @@ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
         * physical address properties) in a single VM would require tracking
         * all relevant CPUID information in kvm_mmu_page_role.  That is very
         * undesirable as it would increase the memory requirements for
-        * gfn_track (see struct kvm_mmu_page_role comments).  For now that
-        * problem is swept under the rug; KVM's CPUID API is horrific and
+        * gfn_write_track (see struct kvm_mmu_page_role comments).  For now
+        * that problem is swept under the rug; KVM's CPUID API is horrific and
         * it's all but impossible to solve it without introducing a new API.
         */
        vcpu->arch.root_mmu.root_role.word = 0;
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index b20aad7ac3fe..cdc6069b8caf 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -27,76 +27,50 @@ bool kvm_page_track_write_tracking_enabled(struct kvm *kvm)
 
 void kvm_page_track_free_memslot(struct kvm_memory_slot *slot)
 {
-       int i;
+       kvfree(slot->arch.gfn_write_track);
+       slot->arch.gfn_write_track = NULL;
+}
 
-       for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
-               kvfree(slot->arch.gfn_track[i]);
-               slot->arch.gfn_track[i] = NULL;
-       }
+static int __kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot,
+                                                unsigned long npages)
+{
+       const size_t size = sizeof(*slot->arch.gfn_write_track);
+
+       if (!slot->arch.gfn_write_track)
+               slot->arch.gfn_write_track = __vcalloc(npages, size,
+                                                      GFP_KERNEL_ACCOUNT);
+
+       return slot->arch.gfn_write_track ? 0 : -ENOMEM;
 }
 
 int kvm_page_track_create_memslot(struct kvm *kvm,
                                  struct kvm_memory_slot *slot,
                                  unsigned long npages)
 {
-       int i;
-
-       for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
-               if (i == KVM_PAGE_TRACK_WRITE &&
-                   !kvm_page_track_write_tracking_enabled(kvm))
-                       continue;
-
-               slot->arch.gfn_track[i] =
-                       __vcalloc(npages, sizeof(*slot->arch.gfn_track[i]),
-                                 GFP_KERNEL_ACCOUNT);
-               if (!slot->arch.gfn_track[i])
-                       goto track_free;
-       }
-
-       return 0;
-
-track_free:
-       kvm_page_track_free_memslot(slot);
-       return -ENOMEM;
-}
-
-static inline bool page_track_mode_is_valid(enum kvm_page_track_mode mode)
-{
-       if (mode < 0 || mode >= KVM_PAGE_TRACK_MAX)
-               return false;
-
-       return true;
-}
-
-int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot)
-{
-       unsigned short *gfn_track;
-
-       if (slot->arch.gfn_track[KVM_PAGE_TRACK_WRITE])
+       if (!kvm_page_track_write_tracking_enabled(kvm))
                return 0;
 
-       gfn_track = __vcalloc(slot->npages, sizeof(*gfn_track),
-                             GFP_KERNEL_ACCOUNT);
-       if (gfn_track == NULL)
-               return -ENOMEM;
+       return __kvm_page_track_write_tracking_alloc(slot, npages);
+}
 
-       slot->arch.gfn_track[KVM_PAGE_TRACK_WRITE] = gfn_track;
-       return 0;
+int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot)
+{
+       return __kvm_page_track_write_tracking_alloc(slot, slot->npages);
 }
 
-static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
-                            enum kvm_page_track_mode mode, short count)
+static void update_gfn_write_track(struct kvm_memory_slot *slot, gfn_t gfn,
+                                  short count)
 {
        int index, val;
 
        index = gfn_to_index(gfn, slot->base_gfn, PG_LEVEL_4K);
 
-       val = slot->arch.gfn_track[mode][index];
+       val = slot->arch.gfn_write_track[index];
 
        if (WARN_ON(val + count < 0 || val + count > USHRT_MAX))
                return;
 
-       slot->arch.gfn_track[mode][index] += count;
+       slot->arch.gfn_write_track[index] += count;
 }
 
 /*
@@ -109,21 +83,15 @@ static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
  * @kvm: the guest instance we are interested in.
  * @slot: the @gfn belongs to.
  * @gfn: the guest page.
- * @mode: tracking mode, currently only write track is supported.
  */
 void kvm_slot_page_track_add_page(struct kvm *kvm,
-                                 struct kvm_memory_slot *slot, gfn_t gfn,
-                                 enum kvm_page_track_mode mode)
+                                 struct kvm_memory_slot *slot, gfn_t gfn)
 {
 
-       if (WARN_ON(!page_track_mode_is_valid(mode)))
+       if (WARN_ON(!kvm_page_track_write_tracking_enabled(kvm)))
                return;
 
-       if (WARN_ON(mode == KVM_PAGE_TRACK_WRITE &&
-                   !kvm_page_track_write_tracking_enabled(kvm)))
-               return;
-
-       update_gfn_track(slot, gfn, mode, 1);
+       update_gfn_write_track(slot, gfn, 1);
 
        /*
         * new track stops large page mapping for the
@@ -131,9 +99,8 @@ void kvm_slot_page_track_add_page(struct kvm *kvm,
         */
        kvm_mmu_gfn_disallow_lpage(slot, gfn);
 
-       if (mode == KVM_PAGE_TRACK_WRITE)
-               if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
-                       kvm_flush_remote_tlbs(kvm);
+       if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
+               kvm_flush_remote_tlbs(kvm);
 }
 EXPORT_SYMBOL_GPL(kvm_slot_page_track_add_page);
 
@@ -148,20 +115,14 @@ EXPORT_SYMBOL_GPL(kvm_slot_page_track_add_page);
  * @kvm: the guest instance we are interested in.
  * @slot: the @gfn belongs to.
  * @gfn: the guest page.
- * @mode: tracking mode, currently only write track is supported.
  */
 void kvm_slot_page_track_remove_page(struct kvm *kvm,
-                                    struct kvm_memory_slot *slot, gfn_t gfn,
-                                    enum kvm_page_track_mode mode)
+                                    struct kvm_memory_slot *slot, gfn_t gfn)
 {
-       if (WARN_ON(!page_track_mode_is_valid(mode)))
+       if (WARN_ON(!kvm_page_track_write_tracking_enabled(kvm)))
                return;
 
-       if (WARN_ON(mode == KVM_PAGE_TRACK_WRITE &&
-                   !kvm_page_track_write_tracking_enabled(kvm)))
-               return;
-
-       update_gfn_track(slot, gfn, mode, -1);
+       update_gfn_write_track(slot, gfn, -1);
 
        /*
         * allow large page mapping for the tracked page
@@ -176,22 +137,18 @@ EXPORT_SYMBOL_GPL(kvm_slot_page_track_remove_page);
  */
 bool kvm_slot_page_track_is_active(struct kvm *kvm,
                                   const struct kvm_memory_slot *slot,
-                                  gfn_t gfn, enum kvm_page_track_mode mode)
+                                  gfn_t gfn)
 {
        int index;
 
-       if (WARN_ON(!page_track_mode_is_valid(mode)))
-               return false;
-
        if (!slot)
                return false;
 
-       if (mode == KVM_PAGE_TRACK_WRITE &&
-           !kvm_page_track_write_tracking_enabled(kvm))
+       if (!kvm_page_track_write_tracking_enabled(kvm))
                return false;
 
        index = gfn_to_index(gfn, slot->base_gfn, PG_LEVEL_4K);
-       return !!READ_ONCE(slot->arch.gfn_track[mode][index]);
+       return !!READ_ONCE(slot->arch.gfn_write_track[index]);
 }
 
 #ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
diff --git a/arch/x86/kvm/mmu/page_track.h b/arch/x86/kvm/mmu/page_track.h
index 931b26b8fc8f..789d0c479519 100644
--- a/arch/x86/kvm/mmu/page_track.h
+++ b/arch/x86/kvm/mmu/page_track.h
@@ -16,8 +16,7 @@ int kvm_page_track_create_memslot(struct kvm *kvm,
                                  unsigned long npages);
 
 bool kvm_slot_page_track_is_active(struct kvm *kvm,
-                                  const struct kvm_memory_slot *slot,
-                                  gfn_t gfn, enum kvm_page_track_mode mode);
+                                  const struct kvm_memory_slot *slot, gfn_t gfn);
 
 #ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
 int kvm_page_track_init(struct kvm *kvm);
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 2e65901270ca..25226e4e3417 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -1563,7 +1563,7 @@ int intel_gvt_page_track_add(struct intel_vgpu *info, u64 gfn)
        }
 
        write_lock(&kvm->mmu_lock);
-       kvm_slot_page_track_add_page(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE);
+       kvm_slot_page_track_add_page(kvm, slot, gfn);
        write_unlock(&kvm->mmu_lock);
 
        srcu_read_unlock(&kvm->srcu, idx);
@@ -1592,7 +1592,7 @@ int intel_gvt_page_track_remove(struct intel_vgpu *info, u64 gfn)
        }
 
        write_lock(&kvm->mmu_lock);
-       kvm_slot_page_track_remove_page(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE);
+       kvm_slot_page_track_remove_page(kvm, slot, gfn);
        write_unlock(&kvm->mmu_lock);
 
        srcu_read_unlock(&kvm->srcu, idx);
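
For readers following the API change from the outside: the sketch below is purely
illustrative and not part of the patch. It shows what an external write-tracking
user such as KVMGT does before and after this series; the wrapper name
example_track_gfn() is hypothetical, and @kvm, @slot and @gfn are assumed to have
been looked up as in intel_gvt_page_track_add() above.

/*
 * Illustrative sketch only, not part of the patch: external page-track
 * callers drop the mode argument once write tracking is the only mode.
 */
static void example_track_gfn(struct kvm *kvm, struct kvm_memory_slot *slot,
                              gfn_t gfn)
{
        write_lock(&kvm->mmu_lock);
#if 0
        /* Before: every caller had to name a tracking mode. */
        kvm_slot_page_track_add_page(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE);
#else
        /* After: write tracking is the only mode, so no mode argument. */
        kvm_slot_page_track_add_page(kvm, slot, gfn);
#endif
        write_unlock(&kvm->mmu_lock);
}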