[v2,1/2] x86/kvm/async_pf: Use separate percpu variable to track the enabling of asyncpf
Message ID: 20231025055914.1201792-2-xiaoyao.li@intel.com
State: New
Series: x86/asyncpf: Fixes the size of asyncpf PV data and related docs
Commit Message
Xiaoyao Li
Oct. 25, 2023, 5:59 a.m. UTC
Refer to commit fd10cde9294f ("KVM paravirt: Add async PF initialization
to PV guest") and commit 344d9588a9df ("KVM: Add PV MSR to enable
asynchronous page faults delivery"). When asyncpf was introduced, the
intent was to define the shared PV data 'struct kvm_vcpu_pv_apf_data'
with a size of 64 bytes. However, the structure was mistakenly defined
as 68 bytes, which prevents it from fitting in a single cache line and
makes the code inconsistent with the documentation.
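To make the size arithmetic concrete, here is a minimal userspace sketch (not kernel code; the field layout simply mirrors struct kvm_vcpu_pv_apf_data as shown in the diff at the end of this page):

/*
 * Illustrative sketch only: 4 + 4 + 56 + 4 = 68 bytes with the
 * trailing 'enabled' field, 4 + 4 + 56 = 64 bytes (one x86 cache
 * line) without it.
 */
#include <stdio.h>
#include <stdint.h>

struct apf_data_old {           /* layout before this patch: 68 bytes */
        uint32_t flags;
        uint32_t token;
        uint8_t  pad[56];
        uint32_t enabled;
};

struct apf_data_new {           /* layout after this patch: 64 bytes */
        uint32_t flags;
        uint32_t token;
        uint8_t  pad[56];
};

int main(void)
{
        printf("old: %zu bytes, new: %zu bytes\n",
               sizeof(struct apf_data_old), sizeof(struct apf_data_new));
        return 0;
}

With 'enabled' removed, the __aligned(64) percpu instance of the structure occupies exactly one cache line.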
The justification below is quoted from Sean [*]:
KVM (the host side) has *never* read kvm_vcpu_pv_apf_data.enabled, and
the documentation clearly states that enabling is based solely on the
bit in the synthetic MSR.
So rather than update the documentation, fix the goof by removing the
'enabled' field and using a separate percpu variable instead.
KVM-as-a-host obviously doesn't enforce anything or consume the size,
and changing the header will only affect guests that are rebuilt against
the new header, so there's no chance of ABI breakage between KVM and its
guests. The only possible breakage is if some other hypervisor is
emulating KVM's async #PF (LOL) and relies on the guest to set
kvm_vcpu_pv_apf_data.enabled. But (a) I highly doubt such a hypervisor
exists, (b) that would arguably be a violation of KVM's "spec", and
(c) the worst case scenario is that the guest would simply lose async
#PF functionality.
[*] https://lore.kernel.org/all/ZS7ERnnRqs8Fl0ZF@google.com/T/#u
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Documentation/virt/kvm/x86/msr.rst | 1 -
arch/x86/include/uapi/asm/kvm_para.h | 1 -
arch/x86/kernel/kvm.c | 11 ++++++-----
3 files changed, 6 insertions(+), 7 deletions(-)
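For context on "enabling is based solely on the bit in the synthetic MSR" in the quoted justification: the guest-side enable handshake, condensed from kvm_guest_cpu_init() in arch/x86/kernel/kvm.c, looks roughly like this (a sketch; the KVM_ASYNC_PF_SEND_ALWAYS and feature-detection details are elided):

/*
 * Condensed sketch of the guest-side enable path. The host keys
 * enabling off KVM_ASYNC_PF_ENABLED in the MSR value -- never off a
 * field inside the shared structure.
 */
u64 pa = slow_virt_to_phys(this_cpu_ptr(&apf_reason));

pa |= KVM_ASYNC_PF_ENABLED | KVM_ASYNC_PF_DELIVERY_AS_INT;
wrmsrl(MSR_KVM_ASYNC_PF_INT, HYPERVISOR_CALLBACK_VECTOR);
wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);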
Comments
Xiaoyao Li <xiaoyao.li@intel.com> writes:

> Refer to commit fd10cde9294f ("KVM paravirt: Add async PF initialization
> to PV guest") and commit 344d9588a9df ("KVM: Add PV MSR to enable
> asynchronous page faults delivery"). When asyncpf was introduced, the
> intent was to define the shared PV data 'struct kvm_vcpu_pv_apf_data'
> with a size of 64 bytes. However, the structure was mistakenly defined
> as 68 bytes, which prevents it from fitting in a single cache line and
> makes the code inconsistent with the documentation.

Oh, I actually thought it was done on purpose :-) 'enabled' is not
accessed by the host, its only purpose is to cache MSR_KVM_ASYNC_PF_EN.

[...]

> +static DEFINE_PER_CPU_READ_MOSTLY(bool, async_pf_enabled);

Would it make a difference if we replace this with a cpumask? I realize
that we need to access it on all CPUs from hotpaths but this mask will
rarely change so maybe there's no real performance hit?

[...]

>  		wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
> -	__this_cpu_write(apf_reason.enabled, 1);
> +	__this_cpu_write(async_pf_enabled, 1);

As 'async_pf_enabled' is bool, it would probably be more natural to
write

	__this_cpu_write(async_pf_enabled, true);

[...]

>  	wrmsrl(MSR_KVM_ASYNC_PF_EN, 0);
> -	__this_cpu_write(apf_reason.enabled, 0);
> +	__this_cpu_write(async_pf_enabled, 0);

... and 'false' here.
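To make the trade-off Vitaly raises concrete, the cpumask variant would look roughly like this (a hypothetical sketch, not code from this series; the mask name is illustrative):

/* Hypothetical cpumask alternative to the per-CPU bool (sketch only). */
static struct cpumask async_pf_enabled_mask;

/* enable path, in kvm_guest_cpu_init(): */
cpumask_set_cpu(smp_processor_id(), &async_pf_enabled_mask);

/* hot path, in kvm_read_and_reset_apf_flags(): */
if (cpumask_test_cpu(smp_processor_id(), &async_pf_enabled_mask)) {
        /* ... read and reset apf_reason.flags ... */
}

The two are functionally equivalent; as the reply below notes, the thread settles on the per-CPU bool mainly for readability.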
On Wed, Oct 25, 2023, Vitaly Kuznetsov wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
> > +static DEFINE_PER_CPU_READ_MOSTLY(bool, async_pf_enabled);
>
> Would it make a difference if we replace this with a cpumask? I realize
> that we need to access it on all CPUs from hotpaths but this mask will
> rarely change so maybe there's no real performance hit?

FWIW, I personally prefer per-CPU booleans from a readability perspective. I
doubt there is a meaningful performance difference for a bitmap vs. individual
booleans; the check is already gated by a static key, i.e. kernels that are NOT
running as KVM guests don't care.

Actually, if there are performance gains to be had, optimizing
kvm_read_and_reset_apf_flags() to read the "enabled" flag if and only if it's
necessary is a more likely candidate. Assuming the host isn't being
malicious/stupid, apf_reason.flags will be '0' if PV async #PFs are disabled.
The only question is whether or not apf_reason.flags is predictable enough for
the CPU.

Aha! In practice, the CPU already needs to resolve a branch based on
apf_reason.flags, it's just "hidden" up in __kvm_handle_async_pf().

If we really want to micro-optimize, provide an __always_inline inner helper so
that __kvm_handle_async_pf() doesn't need to make a CALL just to read the flags.
Then in the common case where a #PF isn't due to the host swapping out a page,
the paravirt happy path doesn't need a taken branch and never reads the enabled
variable. E.g. the below generates:

  0xffffffff81939ed0 <+0>:   41 54                  push   %r12
  0xffffffff81939ed2 <+2>:   31 c0                  xor    %eax,%eax
  0xffffffff81939ed4 <+4>:   55                     push   %rbp
  0xffffffff81939ed5 <+5>:   53                     push   %rbx
  0xffffffff81939ed6 <+6>:   48 83 ec 08            sub    $0x8,%rsp
  0xffffffff81939eda <+10>:  65 8b 2d df 81 6f 7e   mov    %gs:0x7e6f81df(%rip),%ebp   # 0x320c0 <apf_reason>
  0xffffffff81939ee1 <+17>:  85 ed                  test   %ebp,%ebp
  0xffffffff81939ee3 <+19>:  75 09                  jne    0xffffffff81939eee <__kvm_handle_async_pf+30>
  0xffffffff81939ee5 <+21>:  48 83 c4 08            add    $0x8,%rsp
  0xffffffff81939ee9 <+25>:  5b                     pop    %rbx
  0xffffffff81939eea <+26>:  5d                     pop    %rbp
  0xffffffff81939eeb <+27>:  41 5c                  pop    %r12
  0xffffffff81939eed <+29>:  c3                     ret

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index b8ab9ee5896c..b24133dc0731 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -240,22 +240,29 @@ void kvm_async_pf_task_wake(u32 token)
 }
 EXPORT_SYMBOL_GPL(kvm_async_pf_task_wake);
 
-noinstr u32 kvm_read_and_reset_apf_flags(void)
+static __always_inline u32 __kvm_read_and_reset_apf_flags(void)
 {
-	u32 flags = 0;
+	u32 flags = __this_cpu_read(apf_reason.flags);
 
-	if (__this_cpu_read(apf_reason.enabled)) {
-		flags = __this_cpu_read(apf_reason.flags);
-		__this_cpu_write(apf_reason.flags, 0);
+	if (unlikely(flags)) {
+		if (likely(__this_cpu_read(apf_reason.enabled)))
+			__this_cpu_write(apf_reason.flags, 0);
+		else
+			flags = 0;
 	}
 
 	return flags;
 }
+
+u32 kvm_read_and_reset_apf_flags(void)
+{
+	return __kvm_read_and_reset_apf_flags();
+}
 EXPORT_SYMBOL_GPL(kvm_read_and_reset_apf_flags);
 
 noinstr bool __kvm_handle_async_pf(struct pt_regs *regs, u32 token)
 {
-	u32 flags = kvm_read_and_reset_apf_flags();
+	u32 flags = __kvm_read_and_reset_apf_flags();
 	irqentry_state_t state;
 
 	if (!flags)

[...]

> > -	__this_cpu_write(apf_reason.enabled, 1);
> > +	__this_cpu_write(async_pf_enabled, 1);
>
> As 'async_pf_enabled' is bool, it would probably be more natural to
> write
>
> 	__this_cpu_write(async_pf_enabled, true);

+1000
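The static key Sean refers to gates the whole PV #PF path before any percpu state is touched. Roughly how that gate looks in arch/x86/include/asm/kvm_para.h (shown for context; exact form may differ by kernel version):

DECLARE_STATIC_KEY_FALSE(kvm_async_pf_enabled);

static __always_inline bool kvm_handle_async_pf(struct pt_regs *regs, u32 token)
{
	/*
	 * Patched to a jump only when running as a KVM guest with async
	 * #PF set up; bare-metal kernels never take the call at all.
	 */
	if (static_branch_unlikely(&kvm_async_pf_enabled))
		return __kvm_handle_async_pf(regs, token);
	else
		return false;
}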
On 10/25/2023 5:10 PM, Vitaly Kuznetsov wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>
> Oh, I actually thought it was done on purpose :-) 'enabled' is not
> accessed by the host, its only purpose is to cache MSR_KVM_ASYNC_PF_EN.

I didn't find any clue to show it was done on purpose, so I assumed it
was a mistake. Anyway, if it was in fact done on purpose and people
still accept it, I can drop this patch and write another one to
document that it is intentional instead.

[...]

> As 'async_pf_enabled' is bool, it would probably be more natural to
> write
>
> 	__this_cpu_write(async_pf_enabled, true);
>
> ... and 'false' here.

Sure, I can do it in a v3, if a v3 is needed.
On 10/25/2023 10:22 PM, Sean Christopherson wrote:
> On Wed, Oct 25, 2023, Vitaly Kuznetsov wrote:
> > Would it make a difference if we replace this with a cpumask? [...]
>
> FWIW, I personally prefer per-CPU booleans from a readability perspective. I
> doubt there is a meaningful performance difference for a bitmap vs. individual
> booleans; the check is already gated by a static key, i.e. kernels that are NOT
> running as KVM guests don't care.

I agree with it.

> If we really want to micro-optimize, provide an __always_inline inner helper so
> that __kvm_handle_async_pf() doesn't need to make a CALL just to read the flags.
> Then in the common case where a #PF isn't due to the host swapping out a page,
> the paravirt happy path doesn't need a taken branch and never reads the enabled
> variable. E.g. the below generates:

[...]

If this is wanted, it can be a separate patch, unrelated to this series,
I think.
On Mon, Oct 30, 2023, Xiaoyao Li wrote:
> On 10/25/2023 10:22 PM, Sean Christopherson wrote:
> > If we really want to micro-optimize, provide an __always_inline inner
> > helper so that __kvm_handle_async_pf() doesn't need to make a CALL
> > just to read the flags. [...]
>
> If this is wanted, it can be a separate patch, unrelated to this series,
> I think.

Yes, it's definitely beyond the scope of this series.
Patch

diff --git a/Documentation/virt/kvm/x86/msr.rst b/Documentation/virt/kvm/x86/msr.rst
index 9315fc385fb0..f6d70f99a1a7 100644
--- a/Documentation/virt/kvm/x86/msr.rst
+++ b/Documentation/virt/kvm/x86/msr.rst
@@ -204,7 +204,6 @@ data:
 		__u32 token;
 
 		__u8 pad[56];
-		__u32 enabled;
 	};
 
 Bits 5-4 of the MSR are reserved and should be zero. Bit 0 is set to 1
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 6e64b27b2c1e..605899594ebb 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -142,7 +142,6 @@ struct kvm_vcpu_pv_apf_data {
 	__u32 token;
 
 	__u8 pad[56];
-	__u32 enabled;
 };
 
 #define KVM_PV_EOI_BIT 0
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index b8ab9ee5896c..388a3fdd3cad 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -65,6 +65,7 @@ static int __init parse_no_stealacc(char *arg)
 
 early_param("no-steal-acc", parse_no_stealacc);
 
+static DEFINE_PER_CPU_READ_MOSTLY(bool, async_pf_enabled);
 static DEFINE_PER_CPU_DECRYPTED(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
 DEFINE_PER_CPU_DECRYPTED(struct kvm_steal_time, steal_time) __aligned(64) __visible;
 static int has_steal_clock = 0;
@@ -244,7 +245,7 @@ noinstr u32 kvm_read_and_reset_apf_flags(void)
 {
 	u32 flags = 0;
 
-	if (__this_cpu_read(apf_reason.enabled)) {
+	if (__this_cpu_read(async_pf_enabled)) {
 		flags = __this_cpu_read(apf_reason.flags);
 		__this_cpu_write(apf_reason.flags, 0);
 	}
@@ -295,7 +296,7 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_asyncpf_interrupt)
 
 	inc_irq_stat(irq_hv_callback_count);
 
-	if (__this_cpu_read(apf_reason.enabled)) {
+	if (__this_cpu_read(async_pf_enabled)) {
 		token = __this_cpu_read(apf_reason.token);
 		kvm_async_pf_task_wake(token);
 		__this_cpu_write(apf_reason.token, 0);
@@ -362,7 +363,7 @@ static void kvm_guest_cpu_init(void)
 			wrmsrl(MSR_KVM_ASYNC_PF_INT, HYPERVISOR_CALLBACK_VECTOR);
 
 		wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
-		__this_cpu_write(apf_reason.enabled, 1);
+		__this_cpu_write(async_pf_enabled, 1);
 		pr_debug("setup async PF for cpu %d\n", smp_processor_id());
 	}
 
@@ -383,11 +384,11 @@ static void kvm_guest_cpu_init(void)
 
 static void kvm_pv_disable_apf(void)
 {
-	if (!__this_cpu_read(apf_reason.enabled))
+	if (!__this_cpu_read(async_pf_enabled))
 		return;
 
 	wrmsrl(MSR_KVM_ASYNC_PF_EN, 0);
-	__this_cpu_write(apf_reason.enabled, 0);
+	__this_cpu_write(async_pf_enabled, 0);
 
 	pr_debug("disable async PF for cpu %d\n", smp_processor_id());
 }