[v13,04/21] KVM: pfncache: add a mark-dirty helper
Commit Message
From: Paul Durrant <pdurrant@amazon.com>
At the moment, pages are marked dirty by open-coded calls to
mark_page_dirty_in_slot(), directly dereferencing the gpa and memslot
from the cache. After a subsequent patch these may not always be set,
so add a helper now so that callers are protected from the need to know
about this detail.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
---
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
v13:
- s/kvm_gpc_mark_dirty/kvm_gpc_mark_dirty_in_slot
- Add a check for a NULL memslot pointer
v8:
- Make the helper a static inline.
---
arch/x86/kvm/x86.c | 2 +-
arch/x86/kvm/xen.c | 6 +++---
include/linux/kvm_host.h | 13 +++++++++++++
3 files changed, 17 insertions(+), 4 deletions(-)
Comments
On Thu, Feb 15, 2024, Paul Durrant wrote:
> +/**
> + * kvm_gpc_mark_dirty_in_slot - mark a cached guest page as dirty.
> + *
> + * @gpc: struct gfn_to_pfn_cache object.
Meh, just omit the kerneldoc comment.
> + */
> +static inline void kvm_gpc_mark_dirty_in_slot(struct gfn_to_pfn_cache *gpc)
> +{
> + lockdep_assert_held(&gpc->lock);
> + if (gpc->memslot)
> + mark_page_dirty_in_slot(gpc->kvm, gpc->memslot,
> + gpc->gpa >> PAGE_SHIFT);
It's kinda silly, but I think it's worth landing this below gpa_to_gfn() so that
there's no need to open code the shift.
And I have a (very) slight preference for an early return.
static inline void kvm_gpc_mark_dirty_in_slot(struct gfn_to_pfn_cache *gpc)
{
lockdep_assert_held(&gpc->lock);
if (!gpc->memslot)
return;
mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpa_to_gfn(gpc->gpa));
}
> +}
> +
> void kvm_sigset_activate(struct kvm_vcpu *vcpu);
> void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
>
> --
> 2.39.2
>
On 19/02/2024 21:42, Sean Christopherson wrote:
> On Thu, Feb 15, 2024, Paul Durrant wrote:
>> +/**
>> + * kvm_gpc_mark_dirty_in_slot - mark a cached guest page as dirty.
>> + *
>> + * @gpc: struct gfn_to_pfn_cache object.
>
> Meh, just omit the kerneldoc comment.
>
>> + */
>> +static inline void kvm_gpc_mark_dirty_in_slot(struct gfn_to_pfn_cache *gpc)
>> +{
>> + lockdep_assert_held(&gpc->lock);
>> + if (gpc->memslot)
>> + mark_page_dirty_in_slot(gpc->kvm, gpc->memslot,
>> + gpc->gpa >> PAGE_SHIFT);
>
> It's kinda silly, but I think it's worth landing this below gpa_to_gfn() so that
> there's no need to open code the shift.
>
> And I have a (very) slight preference for an early return.
>
> static inline void kvm_gpc_mark_dirty_in_slot(struct gfn_to_pfn_cache *gpc)
> {
> lockdep_assert_held(&gpc->lock);
>
> if (!gpc->memslot)
> return;
>
> mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpa_to_gfn(gpc->gpa));
> }
>
Ok. Will change.
>> +}
>> +
>> void kvm_sigset_activate(struct kvm_vcpu *vcpu);
>> void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
>>
>> --
>> 2.39.2
>>
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3151,7 +3151,7 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
guest_hv_clock->version = ++vcpu->hv_clock.version;
- mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
+ kvm_gpc_mark_dirty_in_slot(gpc);
read_unlock_irqrestore(&gpc->lock, flags);
trace_kvm_pvclock_update(v->vcpu_id, &vcpu->hv_clock);
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -453,11 +453,11 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
}
if (user_len2) {
- mark_page_dirty_in_slot(v->kvm, gpc2->memslot, gpc2->gpa >> PAGE_SHIFT);
+ kvm_gpc_mark_dirty_in_slot(gpc2);
read_unlock(&gpc2->lock);
}
- mark_page_dirty_in_slot(v->kvm, gpc1->memslot, gpc1->gpa >> PAGE_SHIFT);
+ kvm_gpc_mark_dirty_in_slot(gpc1);
read_unlock_irqrestore(&gpc1->lock, flags);
}
@@ -565,7 +565,7 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v)
WRITE_ONCE(vi->evtchn_upcall_pending, 1);
}
- mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
+ kvm_gpc_mark_dirty_in_slot(gpc);
read_unlock_irqrestore(&gpc->lock, flags);
/* For the per-vCPU lapic vector, deliver it as MSI. */
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1398,6 +1398,19 @@ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, unsigned long len);
*/
void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc);
+/**
+ * kvm_gpc_mark_dirty_in_slot - mark a cached guest page as dirty.
+ *
+ * @gpc: struct gfn_to_pfn_cache object.
+ */
+static inline void kvm_gpc_mark_dirty_in_slot(struct gfn_to_pfn_cache *gpc)
+{
+ lockdep_assert_held(&gpc->lock);
+ if (gpc->memslot)
+ mark_page_dirty_in_slot(gpc->kvm, gpc->memslot,
+ gpc->gpa >> PAGE_SHIFT);
+}
+
void kvm_sigset_activate(struct kvm_vcpu *vcpu);
void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);