Message ID | 20230918112148.28855-12-paul@xen.org |
---|---|
State | New |
Headers |
Series | KVM: xen: update shared_info and vcpu_info handling |
Commit Message
Paul Durrant
Sept. 18, 2023, 11:21 a.m. UTC
From: Paul Durrant <pdurrant@amazon.com>

If the vCPU id is set and the shared_info is mapped using HVA then we can
infer that KVM has the ability to use a default vcpu_info mapping. Hence
we can stop setting the address of the vcpu_info structure.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>

v2:
 - New in this version.
---
 tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
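[Editorial note: the inference described in the commit message boils down to two capability checks the selftest can make before configuring the vCPU. A minimal sketch of how the has_vcpu_id and has_shinfo_hva flags used in the diff below might be derived; the capability flag names are assumed from the rest of the series rather than quoted from this patch.]

/*
 * Sketch only: capability flag names are assumed from the rest of this
 * series; has_vcpu_id and has_shinfo_hva mirror the variables tested in
 * the hunk below.
 */
int xen_caps = kvm_check_cap(KVM_CAP_XEN_HVM);

/* An explicit Xen vCPU id can only be set if EVTCHN_SEND is supported. */
bool has_vcpu_id = !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_SEND);

/* shared_info may be mapped by host virtual address instead of GPA. */
bool has_shinfo_hva = !!(xen_caps & KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA);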
Comments
On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> If the vCPU id is set and the shared_info is mapped using HVA then we can
> infer that KVM has the ability to use a default vcpu_info mapping. Hence
> we can stop setting the address of the vcpu_info structure.

Again that means we're not *testing* it any more when the test is run
on newer kernels. Can we perhaps set it explicitly, after *half* the
tests are done? Maybe to a *different* address than the default which
is derived from the Xen vcpu_id? And check that the memcpy works right
when we do?
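[Editorial note: a rough illustration of that suggestion, not part of the patch. Halfway through the test sequence the vcpu_info could be pointed at a different guest address and the test could then verify events land at the new location. VCPU_INFO_ALT_ADDR and its placement are hypothetical.]

/* Hypothetical follow-up: move the vcpu_info away from the auto-placed
 * default and check that the content survived the move. */
#define VCPU_INFO_ALT_ADDR	0x50000		/* assumed address, not in the patch */

struct kvm_xen_vcpu_attr vi_alt = {
	.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
	.u.gpa = VCPU_INFO_ALT_ADDR,
};
vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &vi_alt);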
On 18/09/2023 14:21, David Woodhouse wrote:
> On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
>> From: Paul Durrant <pdurrant@amazon.com>
>>
>> If the vCPU id is set and the shared_info is mapped using HVA then we can
>> infer that KVM has the ability to use a default vcpu_info mapping. Hence
>> we can stop setting the address of the vcpu_info structure.
>
> Again that means we're not *testing* it any more when the test is run
> on newer kernels. Can we perhaps set it explicitly, after *half* the
> tests are done? Maybe to a *different* address than the default which
> is derived from the Xen vcpu_id? And check that the memcpy works right
> when we do?
>

Ok. The VMM is currently responsible for that memcpy. Are you suggesting
we push that into KVM too?

  Paul
On Mon, 2023-09-18 at 14:26 +0100, Paul Durrant wrote:
> On 18/09/2023 14:21, David Woodhouse wrote:
>> On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
>>> From: Paul Durrant <pdurrant@amazon.com>
>>>
>>> If the vCPU id is set and the shared_info is mapped using HVA then we can
>>> infer that KVM has the ability to use a default vcpu_info mapping. Hence
>>> we can stop setting the address of the vcpu_info structure.
>>
>> Again that means we're not *testing* it any more when the test is run
>> on newer kernels. Can we perhaps set it explicitly, after *half* the
>> tests are done? Maybe to a *different* address than the default which
>> is derived from the Xen vcpu_id? And check that the memcpy works right
>> when we do?
>>
>
> Ok. The VMM is currently responsible for that memcpy. Are you suggesting
> we push that into KVM too?

Ah OK.

Hm, maybe we should?

What happened before, in the case where interrupts were being delivered
and the vcpu_info address was changed?

In Xen, I guess it's effectively atomic? Some locking will mean that
the event channel is delivered to the vcpu_info either *before* the
memcpy, or *after* it, but never to the old address after the copy has
been done, in which case the event (well, the index of it) would get
lost?

In KVM, before we did the automatic placement, it was the VMM's problem.

If there are any interrupts set up for direct delivery, I suppose the
VMM should have *removed* the vcpu_info mapping before doing the
memcpy, then restored it at the new address? I may have to check qemu
gets that right.

Then again, it's a very hard race to trigger, given that a guest can
only set the vcpu_info once. So it can move it from the shinfo to a
separate address and attempt to trigger this race just that one time.

But in the case where auto-placement has happened, and then the guest
sets an explicit vcpu_info location... are we saying that the VMM must
explicitly *unmap* the vcpu_info first, then memcpy, then set it to the
new location? Or will we handle the memcpy in-kernel?
On 18/09/2023 14:36, David Woodhouse wrote:
> On Mon, 2023-09-18 at 14:26 +0100, Paul Durrant wrote:
>> On 18/09/2023 14:21, David Woodhouse wrote:
>>> On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>
>>>> If the vCPU id is set and the shared_info is mapped using HVA then we can
>>>> infer that KVM has the ability to use a default vcpu_info mapping. Hence
>>>> we can stop setting the address of the vcpu_info structure.
>>>
>>> Again that means we're not *testing* it any more when the test is run
>>> on newer kernels. Can we perhaps set it explicitly, after *half* the
>>> tests are done? Maybe to a *different* address than the default which
>>> is derived from the Xen vcpu_id? And check that the memcpy works right
>>> when we do?
>>>
>>
>> Ok. The VMM is currently responsible for that memcpy. Are you suggesting
>> we push that into KVM too?
>
> Ah OK.
>
> Hm, maybe we should?
>
> What happened before, in the case where interrupts were being delivered
> and the vcpu_info address was changed?
>
> In Xen, I guess it's effectively atomic? Some locking will mean that
> the event channel is delivered to the vcpu_info either *before* the
> memcpy, or *after* it, but never to the old address after the copy has
> been done, in which case the event (well, the index of it) would get
> lost?
>
> In KVM, before we did the automatic placement, it was the VMM's problem.
>
> If there are any interrupts set up for direct delivery, I suppose the
> VMM should have *removed* the vcpu_info mapping before doing the
> memcpy, then restored it at the new address? I may have to check qemu
> gets that right.
>
> Then again, it's a very hard race to trigger, given that a guest can
> only set the vcpu_info once. So it can move it from the shinfo to a
> separate address and attempt to trigger this race just that one time.
>
> But in the case where auto-placement has happened, and then the guest
> sets an explicit vcpu_info location... are we saying that the VMM must
> explicitly *unmap* the vcpu_info first, then memcpy, then set it to the
> new location? Or will we handle the memcpy in-kernel?
>

Well, if the VMM is using the default then it can't unmap it. But
setting a vcpu_info *after* enabling any event channels would be a very
odd thing for a guest to do and IMO it gets to keep the pieces if it
does so.

  Paul
On 18 September 2023 14:41:08 BST, Paul Durrant <xadimgnik@gmail.com> wrote:
> Well, if the VMM is using the default then it can't unmap it. But setting
> a vcpu_info *after* enabling any event channels would be a very odd thing
> for a guest to do and IMO it gets to keep the pieces if it does so.
Hm, I suppose I'm OK with that approach. The fact that both VMM implementations using this KVM/Xen support let the guest keep precisely those pieces is a testament to that :)
But now we're hard-coding the behaviour in the kernel and declaring that no VMM will be *able* to "fix" that case even if it does want to. So perhaps it wants a modicum more thought and at least some explicit documentation to that effect?
And a hand-wavy plan at least for what we'd do if we suddenly did find a reason to care?
On 18/09/2023 15:05, David Woodhouse wrote:
>
> On 18 September 2023 14:41:08 BST, Paul Durrant <xadimgnik@gmail.com> wrote:
>> Well, if the VMM is using the default then it can't unmap it. But setting
>> a vcpu_info *after* enabling any event channels would be a very odd thing
>> for a guest to do and IMO it gets to keep the pieces if it does so.
>
> Hm, I suppose I'm OK with that approach. The fact that both VMM
> implementations using this KVM/Xen support let the guest keep precisely
> those pieces is a testament to that :)
>

I can have the selftest explicitly set the vcpu_info to point at the one
that's already in use, I suppose... so that would at least make sure the
attribute is functioning.

> But now we're hard-coding the behaviour in the kernel and declaring that
> no VMM will be *able* to "fix" that case even if it does want to. So
> perhaps it wants a modicum more thought and at least some explicit
> documentation to that effect?
>
> And a hand-wavy plan at least for what we'd do if we suddenly did find a
> reason to care?

Handwavy plan would be for the VMM to:

a) Mask all open event channels targeting the vcpu
b) Copy vcpu_info content to the new location
c) Tell KVM where it is
d) Unmask the masked event channels

Does that sound ok? If so I can stick it in the API documentation.
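[Editorial note: a sketch of that four-step plan from the VMM side. The event-channel helpers and port bookkeeping are hypothetical stand-ins for whatever state the VMM already keeps; only the ioctl and attribute type are the ones used by the selftest in the patch below.]

/* a) Mask all open event channels targeting the vCPU (helpers hypothetical). */
for (unsigned int port = 0; port < nr_ports; port++)
	if (evtchn_bound_to_vcpu(port, vcpu_id))
		vmm_mask_evtchn(port);

/* b) Copy the vcpu_info content to the new location. */
memcpy(new_vcpu_info_hva, old_vcpu_info_hva, sizeof(struct vcpu_info));

/* c) Tell KVM where it is. */
struct kvm_xen_vcpu_attr vi = {
	.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
	.u.gpa = new_vcpu_info_gpa,
};
vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &vi);

/* d) Unmask the event channels masked in step a). */
for (unsigned int port = 0; port < nr_ports; port++)
	if (evtchn_bound_to_vcpu(port, vcpu_id))
		vmm_unmask_evtchn(port);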
diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index fa829d6e0848..d1c88deec0b2 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -550,11 +550,13 @@ int main(int argc, char *argv[])
 		vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &vid);
 	}
 
-	struct kvm_xen_vcpu_attr vi = {
-		.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
-		.u.gpa = VCPU_INFO_ADDR,
-	};
-	vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &vi);
+	if (!has_vcpu_id || !has_shinfo_hva) {
+		struct kvm_xen_vcpu_attr vi = {
+			.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
+			.u.gpa = VCPU_INFO_ADDR,
+		};
+		vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &vi);
+	}
 
 	struct kvm_xen_vcpu_attr pvclock = {
 		.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO,