[v2,09/12] KVM: selftests / xen: set KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID
Commit Message
From: Paul Durrant <pdurrant@amazon.com>
If the capability (KVM_XEN_HVM_CONFIG_EVTCHN_SEND) is present, then set
the guest's vCPU id to match the chosen vcpu_info offset.
Also make some cosmetic fixes to the code for clarity.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
v2:
- New in this version.
---
.../selftests/kvm/x86_64/xen_shinfo_test.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
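For reference, the "chosen vcpu_info offset" above works out like this: the
shared_info page begins with an array of vcpu_info structures, each of which
is 64 bytes on x86 (the Xen public headers note sizeof(vcpu_info_t) == 64),
so a vcpu_info placed at page offset 0x40 is slot 1 of that array. Below is a
minimal sketch of the arithmetic behind the VCPU_ID define added by the
patch; the names here are illustrative, not part of the patch itself:

/* Illustrative only: offset of this vCPU's vcpu_info within the
 * shared_info page, matching VCPU_INFO_[V]ADDR in the test.
 */
#define VCPU_INFO_OFFSET	0x40

/* Each entry in shared_info.vcpu_info[] is 64 bytes on x86, so the
 * slot index -- i.e. the Xen vCPU id -- falls out as:
 */
#define VCPU_ID			(VCPU_INFO_OFFSET / 64)	/* == 1 */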
Comments
On Mon, 2023-09-18 at 13:18 +0100, Paul Durrant wrote:
> On 18/09/2023 12:59, David Woodhouse wrote:
> > On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
> > > From: Paul Durrant <pdurrant@amazon.com>
> > >
> > > If the capability (KVM_XEN_HVM_CONFIG_EVTCHN_SEND) is present then set
> > > the guest's vCPU id to match the chosen vcpu_info offset.
> >
> > I think from KVM's point of view, the vcpu_id is still zero. As is the
> > vcpu_idx. What you're setting is the *Xen* vcpu_id.
>
> Ok. I'll clarify the terminology; and I'll need to set it before the
> shinfo as noted on another thread.
>
> >
> > I like that it's *different* to the vcpu_id; we should definitely be
> > testing that case.
>
> How so?
>
Well, imagine you'd not got the kernel's get_vcpu_info_cache() right,
and you accidentally used v->vcpu_id instead of v->arch.xen.vcpu_id. An
easy mistake to make, right?
If you had made that mistake (you didn't; I checked), and if those
numbers had happened to be equal in this test... then this test
wouldn't have shown it.
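To illustrate the point, the failure mode would look something like this on
the kernel side (a hand-written sketch, not the real get_vcpu_info_cache();
only v->vcpu_id, v->arch.xen.vcpu_id and shared_info's vcpu_info[] array are
real names, the rest is simplified):

static struct vcpu_info *vcpu_info_slot(struct kvm_vcpu *v,
					struct shared_info *shinfo)
{
	/* BUG: v->vcpu_id is KVM's id, which stays 0 for this
	 * test's single vCPU:
	 *
	 *	return &shinfo->vcpu_info[v->vcpu_id];
	 *
	 * Correct: index with the Xen vCPU id that userspace set
	 * via KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID:
	 */
	return &shinfo->vcpu_info[v->arch.xen.vcpu_id];
}

Because the test below deliberately sets the Xen vCPU id to 1 while KVM's
vcpu_id remains 0, the buggy variant would read vcpu_info[0] and fail
visibly, which is exactly the extra coverage being asked for here.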
@@ -38,6 +38,8 @@
#define VCPU_INFO_VADDR (SHINFO_REGION_GVA + 0x40)
#define RUNSTATE_VADDR (SHINFO_REGION_GVA + PAGE_SIZE + PAGE_SIZE - 15)
+#define VCPU_ID 1 /* Must correspond to offset of VCPU_INFO_[V]ADDR */
+
#define EVTCHN_VECTOR 0x10
#define EVTCHN_TEST1 15
@@ -410,7 +412,7 @@ static void *juggle_shinfo_state(void *arg)
struct kvm_xen_hvm_attr cache_activate = {
.type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
- .u.shared_info.gfn = SHINFO_REGION_GPA / PAGE_SIZE
+ .u.shared_info.gfn = SHINFO_ADDR / PAGE_SIZE
};
struct kvm_xen_hvm_attr cache_deactivate = {
@@ -446,6 +448,7 @@ int main(int argc, char *argv[])
bool do_runstate_flag = !!(xen_caps & KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG);
bool do_eventfd_tests = !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL);
bool do_evtchn_tests = do_eventfd_tests && !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_SEND);
+ bool has_vcpu_id = !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_SEND);
clock_gettime(CLOCK_REALTIME, &min_ts);
@@ -494,7 +497,7 @@ int main(int argc, char *argv[])
struct kvm_xen_hvm_attr ha = {
.type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
- .u.shared_info.gfn = SHINFO_REGION_GPA / PAGE_SIZE,
+ .u.shared_info.gfn = SHINFO_ADDR / PAGE_SIZE,
};
vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &ha);
@@ -508,6 +511,14 @@ int main(int argc, char *argv[])
TEST_ASSERT(m == shinfo, "Failed to map /dev/zero over shared info");
shinfo->wc = wc_copy;
+ if (has_vcpu_id) {
+ struct kvm_xen_vcpu_attr vid = {
+ .type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID,
+ .u.vcpu_id = VCPU_ID,
+ };
+ vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &vid);
+ }
+
struct kvm_xen_vcpu_attr vi = {
.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
.u.gpa = VCPU_INFO_ADDR,
@@ -983,8 +994,8 @@ int main(int argc, char *argv[])
struct pvclock_wall_clock *wc;
struct pvclock_vcpu_time_info *ti, *ti2;
- wc = addr_gpa2hva(vm, SHINFO_REGION_GPA + 0xc00);
- ti = addr_gpa2hva(vm, SHINFO_REGION_GPA + 0x40 + 0x20);
+ wc = addr_gpa2hva(vm, SHINFO_ADDR + 0xc00);
+ ti = addr_gpa2hva(vm, VCPU_INFO_ADDR + 0x20);
ti2 = addr_gpa2hva(vm, PVTIME_ADDR);
if (verbose) {