[v2,10/12] KVM: selftests / xen: map shared_info using HVA rather than GFN

Message ID 20230918112148.28855-11-paul@xen.org
State New
Series KVM: xen: update shared_info and vcpu_info handling

Commit Message

Paul Durrant Sept. 18, 2023, 11:21 a.m. UTC
  From: Paul Durrant <pdurrant@amazon.com>

Using the HVA of the shared_info page is more efficient, so if the
capability (KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA) is present, use that
method to perform the mapping.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>

v2:
 - New in this version.
---
 .../selftests/kvm/x86_64/xen_shinfo_test.c    | 50 ++++++++++++++++---
 1 file changed, 43 insertions(+), 7 deletions(-)
  

Comments

David Woodhouse Sept. 18, 2023, 1:19 p.m. UTC | #1
On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
> 
> -                               ret = pthread_create(&thread, NULL, &juggle_shinfo_state, (void *)vm);
> +                               if (has_shinfo_hva)
> +                                       ret = pthread_create(&thread, NULL,
> +                                                            &juggle_shinfo_state_hva, (void *)vm);
> +                               else
> +                                       ret = pthread_create(&thread, NULL,
> +                                                            &juggle_shinfo_state_gfn, (void *)vm);
>                                 TEST_ASSERT(ret == 0, "pthread_create() failed: %s", strerror(ret));
>  


This means you don't exercise the GFN-based path (which all current VMM
implementations use) on newer kernels. Could you have a single
juggle_shinfo_state() function as before, but make it repeatedly set
and clear the shinfo using *both* HVA and GFN (if the former is
available, of course)?

While you're at it, it looks like the thread leaves the shinfo
*deactivated*, which might come as a surprise to anyone who adds tests
at the end near the existing TEST_DONE. Can we make it leave the shinfo
*active* instead?
  
Paul Durrant Sept. 18, 2023, 1:24 p.m. UTC | #2
On 18/09/2023 14:19, David Woodhouse wrote:
> On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
>>
>> -                               ret = pthread_create(&thread, NULL, &juggle_shinfo_state, (void *)vm);
>> +                               if (has_shinfo_hva)
>> +                                       ret = pthread_create(&thread, NULL,
>> +                                                            &juggle_shinfo_state_hva, (void *)vm);
>> +                               else
>> +                                       ret = pthread_create(&thread, NULL,
>> +                                                            &juggle_shinfo_state_gfn, (void *)vm);
>>                                  TEST_ASSERT(ret == 0, "pthread_create() failed: %s", strerror(ret));
>>   
> 
> 
> This means you don't exercise the GFN-based path (which all current VMM
> implementations use) on newer kernels. Could you have a single
> juggle_shinfo_state() function as before, but make it repeatedly set
> and clear the shinfo using *both* HVA and GFN (if the former is
> available, of course).

The guidance is to use HVA if the feature is available; a VMM should not 
really be mixing and matching. That said, setting it either way should 
be equivalent.

> 
> While you're at it, it looks like the thread leaves the shinfo
> *deactivated*, which might come as a surprise to anyone who adds tests
> at the end near the existing TEST_DONE. Can we make it leave the shinfo
> *active* instead?

Ok.

   Paul
  
David Woodhouse Sept. 18, 2023, 1:30 p.m. UTC | #3
On Mon, 2023-09-18 at 14:24 +0100, Paul Durrant wrote:
> On 18/09/2023 14:19, David Woodhouse wrote:
> > On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
> > > 
> > > -                               ret = pthread_create(&thread, NULL, &juggle_shinfo_state, (void *)vm);
> > > +                               if (has_shinfo_hva)
> > > +                                       ret = pthread_create(&thread, NULL,
> > > +                                                            &juggle_shinfo_state_hva, (void *)vm);
> > > +                               else
> > > +                                       ret = pthread_create(&thread, NULL,
> > > +                                                            &juggle_shinfo_state_gfn, (void *)vm);
> > >                                  TEST_ASSERT(ret == 0, "pthread_create() failed: %s", strerror(ret));
> > >   
> > 
> > 
> > This means you don't exercise the GFN-based path (which all current VMM
> > implementations use) on newer kernels. Could you have a single
> > juggle_shinfo_state() function as before, but make it repeatedly set
> > and clear the shinfo using *both* HVA and GFN (if the former is
> > available, of course).
> 
> The guidance is to use HVA if the feature is available; a VMM should not 
> really be mixing and matching. That said, setting it either way should 
> be equivalent.

Sure, but this isn't really a VMM; it's a test harness to exercise a
bunch of ioctls. One of which it no longer bothers to test now, if the
new variant exists.

If we're going to keep the old API we should also continue to *test*
the old API. (And we should).
  

Patch

diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index 49d0c91ee078..fa829d6e0848 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -395,6 +395,7 @@ static int cmp_timespec(struct timespec *a, struct timespec *b)
 		return 0;
 }
 
+static struct shared_info *shinfo;
 static struct vcpu_info *vinfo;
 static struct kvm_vcpu *vcpu;
 
@@ -406,7 +407,7 @@ static void handle_alrm(int sig)
 	TEST_FAIL("IRQ delivery timed out");
 }
 
-static void *juggle_shinfo_state(void *arg)
+static void *juggle_shinfo_state_gfn(void *arg)
 {
 	struct kvm_vm *vm = (struct kvm_vm *)arg;
 
@@ -429,6 +430,29 @@ static void *juggle_shinfo_state(void *arg)
 	return NULL;
 }
 
+static void *juggle_shinfo_state_hva(void *arg)
+{
+	struct kvm_vm *vm = (struct kvm_vm *)arg;
+
+	struct kvm_xen_hvm_attr cache_activate = {
+		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA,
+		.u.shared_info.hva = (unsigned long)shinfo
+	};
+
+	struct kvm_xen_hvm_attr cache_deactivate = {
+		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA,
+		.u.shared_info.hva = 0
+	};
+
+	for (;;) {
+		__vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_activate);
+		__vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_deactivate);
+		pthread_testcancel();
+	}
+
+	return NULL;
+}
+
 int main(int argc, char *argv[])
 {
 	struct timespec min_ts, max_ts, vm_ts;
@@ -449,6 +473,7 @@ int main(int argc, char *argv[])
 	bool do_eventfd_tests = !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL);
 	bool do_evtchn_tests = do_eventfd_tests && !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_SEND);
 	bool has_vcpu_id = !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_SEND);
+	bool has_shinfo_hva = !!(xen_caps & KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA);
 
 	clock_gettime(CLOCK_REALTIME, &min_ts);
 
@@ -459,7 +484,7 @@ int main(int argc, char *argv[])
 				    SHINFO_REGION_GPA, SHINFO_REGION_SLOT, 3, 0);
 	virt_map(vm, SHINFO_REGION_GVA, SHINFO_REGION_GPA, 3);
 
-	struct shared_info *shinfo = addr_gpa2hva(vm, SHINFO_VADDR);
+	shinfo = addr_gpa2hva(vm, SHINFO_VADDR);
 
 	int zero_fd = open("/dev/zero", O_RDONLY);
 	TEST_ASSERT(zero_fd != -1, "Failed to open /dev/zero");
@@ -495,10 +520,16 @@ int main(int argc, char *argv[])
 			    "Failed to read back RUNSTATE_UPDATE_FLAG attr");
 	}
 
-	struct kvm_xen_hvm_attr ha = {
-		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
-		.u.shared_info.gfn = SHINFO_ADDR / PAGE_SIZE,
-	};
+	struct kvm_xen_hvm_attr ha = {};
+
+	if (has_shinfo_hva) {
+		ha.type = KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA;
+		ha.u.shared_info.hva = (unsigned long)shinfo;
+	} else {
+		ha.type = KVM_XEN_ATTR_TYPE_SHARED_INFO;
+		ha.u.shared_info.gfn = SHINFO_ADDR / PAGE_SIZE;
+	}
+
 	vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &ha);
 
 	/*
@@ -902,7 +933,12 @@ int main(int argc, char *argv[])
 				if (verbose)
 					printf("Testing shinfo lock corruption (KVM_XEN_HVM_EVTCHN_SEND)\n");
 
-				ret = pthread_create(&thread, NULL, &juggle_shinfo_state, (void *)vm);
+				if (has_shinfo_hva)
+					ret = pthread_create(&thread, NULL,
+							     &juggle_shinfo_state_hva, (void *)vm);
+				else
+					ret = pthread_create(&thread, NULL,
+							     &juggle_shinfo_state_gfn, (void *)vm);
 				TEST_ASSERT(ret == 0, "pthread_create() failed: %s", strerror(ret));
 
 				struct kvm_irq_routing_xen_evtchn uxe = {