Message ID | 20231002095740.1472907-1-paul@xen.org |
---|---|
Series | KVM: xen: update shared_info and vcpu_info handling |
Message
Paul Durrant
Oct. 2, 2023, 9:57 a.m. UTC
From: Paul Durrant <pdurrant@amazon.com>
The following text from the original cover letter still serves as an
introduction to the series:
"Currently we treat the shared_info page as guest memory and the VMM
informs KVM of its location using a GFN. However it is not guest memory as
such; it's an overlay page. So we pointlessly invalidate and re-cache a
mapping to the *same page* of memory every time the guest requests that
shared_info be mapped into its address space. Let's avoid doing that by
modifying the pfncache code to allow activation using a fixed userspace HVA
as well as a GPA."
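
The mechanics can be pictured with a short sketch. This is an illustration of the approach the cover letter describes, not the series' actual code: the `__kvm_gpc_activate()` and `kvm_gpc_activate_hva()` names, the `GPC_INVALID_GPA` sentinel, and the `gpc_map()` helper are assumptions here (only `kvm_gpc_activate()`, `gfn_to_hva()`, and `gpa_to_gfn()` are pre-existing kernel interfaces):

```c
/* Sketch only; names and details are assumptions, not the series' code.
 * Error handling (e.g. kvm_is_error_hva()) is omitted for brevity. */
#define GPC_INVALID_GPA	(~(gpa_t)0)	/* hypothetical sentinel */

static int __kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
			      unsigned long uhva, unsigned long len)
{
	/*
	 * When activated by fixed HVA, skip GPA->HVA resolution entirely:
	 * an overlay page such as shared_info always lives at the same
	 * host virtual address, so the guest remapping it into a different
	 * GPA need not invalidate and re-cache the mapping.
	 */
	if (gpa == GPC_INVALID_GPA)
		gpc->uhva = uhva;
	else
		gpc->uhva = gfn_to_hva(gpc->kvm, gpa_to_gfn(gpa));

	gpc->gpa = gpa;
	gpc->len = len;
	return gpc_map(gpc);	/* hypothetical: pin and map gpc->uhva */
}

int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
		     unsigned long len)
{
	return __kvm_gpc_activate(gpc, gpa, 0, len);
}

int kvm_gpc_activate_hva(struct gfn_to_pfn_cache *gpc, unsigned long uhva,
			 unsigned long len)
{
	return __kvm_gpc_activate(gpc, GPC_INVALID_GPA, uhva, len);
}
```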
This version of the series is functionally the same as version 6. I have
simply added David Woodhouse's R-b to patch 11 to indicate that he has
now fully reviewed the series.
Paul Durrant (11):
KVM: pfncache: add a map helper function
KVM: pfncache: add a mark-dirty helper
KVM: pfncache: add a helper to get the gpa
KVM: pfncache: base offset check on khva rather than gpa
KVM: pfncache: allow a cache to be activated with a fixed (userspace) HVA
KVM: xen: allow shared_info to be mapped by fixed HVA
KVM: xen: allow vcpu_info to be mapped by fixed HVA
KVM: selftests / xen: map shared_info using HVA rather than GFN
KVM: selftests / xen: re-map vcpu_info using HVA rather than GPA
KVM: xen: advertize the KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA capability
KVM: xen: allow vcpu_info content to be 'safely' copied
Documentation/virt/kvm/api.rst | 53 +++++--
arch/x86/kvm/x86.c | 5 +-
arch/x86/kvm/xen.c | 92 +++++++++----
include/linux/kvm_host.h | 43 ++++++
include/linux/kvm_types.h | 3 +-
include/uapi/linux/kvm.h | 9 +-
.../selftests/kvm/x86_64/xen_shinfo_test.c | 59 ++++++--
virt/kvm/pfncache.c | 129 +++++++++++++-----
8 files changed, 302 insertions(+), 91 deletions(-)
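
As a rough picture of how a VMM might consume the new capability (patch 10) and the fixed-HVA attribute (patch 6): the sketch below follows the patch titles, but the exact uapi constants and the `.u.shared_info.hva` field layout are assumptions, not verified against this series' headers.

```c
/* Hypothetical VMM-side sketch; constant and field names follow the
 * patch titles but are assumptions, not this series' verified uapi. */
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <errno.h>

static int set_shared_info_by_hva(int vm_fd, void *shinfo)
{
	/* Only use the HVA path if KVM advertises it (patch 10). */
	int caps = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_XEN_HVM);

	if (caps < 0 || !(caps & KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA))
		return -ENOTSUP;	/* fall back to the GFN-based attribute */

	struct kvm_xen_hvm_attr attr = {
		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA,	/* patch 6 */
		.u.shared_info.hva = (unsigned long)shinfo,
	};

	return ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &attr);
}
```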
---
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Comments
On Mon, 2023-10-02 at 09:57 +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> [snip]
>
> This version of the series is functionally the same as version 6. I have
> simply added David Woodhouse's R-b to patch 11 to indicate that he has
> now fully reviewed the series.

Thanks. I believe Sean is probably waiting for us to stop going back
and forth, and for the dust to settle. So for the record: I think I'm
done heckling and this is ready to go in.

Are you doing the QEMU patches or am I?
On 05/10/2023 07:41, David Woodhouse wrote:
> On Mon, 2023-10-02 at 09:57 +0000, Paul Durrant wrote:
>> [snip]
>
> Thanks. I believe Sean is probably waiting for us to stop going back
> and forth, and for the dust to settle. So for the record: I think I'm
> done heckling and this is ready to go in.
>
> Are you doing the QEMU patches or am I?

I'll do the QEMU changes, once the patches hit kvm/next.
On 05/10/2023 07:41, David Woodhouse wrote:
> On Mon, 2023-10-02 at 09:57 +0000, Paul Durrant wrote:
>> [snip]
>
> Thanks. I believe Sean is probably waiting for us to stop going back
> and forth, and for the dust to settle. So for the record: I think I'm
> done heckling and this is ready to go in.

Nudge. Sean, is there anything more I need to do on this series?

  Paul
On Thu, 2023-10-05 at 09:36 +0100, Paul Durrant wrote:
> On 05/10/2023 07:41, David Woodhouse wrote:
>> [snip]
>>
>> Are you doing the QEMU patches or am I?
>
> I'll do the QEMU changes, once the patches hit kvm/next.

Note that I disabled migration support in QEMU for emulated Xen
guests. You might want that for testing, since the reason for this work
is to enable pause/serialize workflows.

Migration does work all the way up to XenStore itself, and
https://gitlab.com/qemu-project/qemu/-/commit/766804b101d *was* tested
with migration enabled. There are also unit tests for XenStore
serialize/deserialize.

I disabled it because the PV backends on the XenBus don't have
suspend/resume support. But a guest using other emulated net/disk
devices should still be able to suspend/resume OK if we just remove the
'unmigratable' flag from xen_xenstore, I believe.
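
For context on the 'unmigratable' flag: in QEMU that is a field on a device's VMStateDescription. A minimal sketch of the shape involved follows; the VMStateDescription layout is real QEMU API, but the xen_xenstore field list shown here is a placeholder, not the device's actual state.

```c
#include "migration/vmstate.h"

/* The flag under discussion: while set, this device blocks migration of
 * the whole machine. Dropping it re-enables migration, provided the
 * device's state is actually covered by .fields. */
static const VMStateDescription xen_xenstore_vmstate = {
    .name = "xen_xenstore",
    .unmigratable = 1,	/* remove to allow suspend/resume/migrate */
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        /* placeholder: the real device serializes XenStore state here */
        VMSTATE_END_OF_LIST()
    },
};
```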
On 09/11/2023 10:02, David Woodhouse wrote:
> On Thu, 2023-10-05 at 09:36 +0100, Paul Durrant wrote:
>> [snip]
>>
>> I'll do the QEMU changes, once the patches hit kvm/next.
>
> Note that I disabled migration support in QEMU for emulated Xen
> guests. You might want that for testing, since the reason for this work
> is to enable pause/serialize workflows.
>
> Migration does work all the way up to XenStore itself, and
> https://gitlab.com/qemu-project/qemu/-/commit/766804b101d *was* tested
> with migration enabled. There are also unit tests for XenStore
> serialize/deserialize.
>
> I disabled it because the PV backends on the XenBus don't have
> suspend/resume support. But a guest using other emulated net/disk
> devices should still be able to suspend/resume OK if we just remove the
> 'unmigratable' flag from xen_xenstore, I believe.

Ok. Enabling suspend/resume for backends really ought not to be that
hard. The main reason for this series was to enable
pause-for-memory-reconfiguration, but I can look into
suspend/resume/migrate once I've done the necessary re-work.