Message ID | 20221121234026.3037083-3-vipinsh@google.com |
---|---|
State | New |
Series | Add Hyper-v extended hypercall support in KVM |
Commit Message
Vipin Sharma
Nov. 21, 2022, 11:40 p.m. UTC
Add support for extended hypercalls in Hyper-V. The Hyper-V TLFS 6.0b
describes hypercalls with call codes above 0x8000 as extended hypercalls.

A guest VM running on a Hyper-V hypervisor discovers the availability of
extended hypercalls via CPUID.0x40000003.EBX BIT(20). If the bit is set,
the guest can issue extended hypercalls.

All extended hypercalls exit to userspace by default. This allows future
hypercalls to be supported easily without depending on new KVM releases.

If a hypercall ever needs to be processed in KVM instead of userspace,
KVM can introduce a capability that userspace can query to learn which
hypercalls KVM handles, and then enable in-kernel handling of those
hypercalls.
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
arch/x86/kvm/hyperv.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
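
Since every extended hypercall is forwarded to userspace, the VMM has to complete it. The sketch below shows roughly what that completion could look like on top of the existing KVM_EXIT_HYPERV / KVM_EXIT_HYPERV_HCALL exit; the handler policy and the locally defined HV_* constants are illustrative assumptions, not part of this patch.

/*
 * Hypothetical VMM-side completion of an extended hypercall exit.
 * Field names follow the kvm_run UAPI; the policy below (fail every
 * extended call except the capabilities query) is just an example.
 */
#include <linux/kvm.h>
#include <stdint.h>

#define HV_EXT_CALL_QUERY_CAPABILITIES   0x8001
#define HV_STATUS_SUCCESS                0
#define HV_STATUS_INVALID_HYPERCALL_CODE 2

static void handle_hv_ext_hypercall(struct kvm_run *run)
{
	uint16_t code;

	if (run->exit_reason != KVM_EXIT_HYPERV ||
	    run->hyperv.type != KVM_EXIT_HYPERV_HCALL)
		return;

	/* Bits 15:0 of the hypercall input value hold the call code. */
	code = run->hyperv.u.hcall.input & 0xffff;
	if (code < HV_EXT_CALL_QUERY_CAPABILITIES)
		return;	/* not an extended hypercall */

	if (code == HV_EXT_CALL_QUERY_CAPABILITIES) {
		/*
		 * A real VMM would write its 64-bit capability mask to the
		 * output GPA carried in run->hyperv.u.hcall.params[1];
		 * the guest memory access is omitted here.
		 */
		run->hyperv.u.hcall.result = HV_STATUS_SUCCESS;
	} else {
		run->hyperv.u.hcall.result = HV_STATUS_INVALID_HYPERCALL_CODE;
	}
}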
Comments
Vipin Sharma <vipinsh@google.com> writes:

> Add support for extended hypercall in Hyper-v. Hyper-v TLFS 6.0b
> describes hypercalls above call code 0x8000 as extended hypercalls.
>
> A Hyper-v hypervisor's guest VM finds availability of extended
> hypercalls via CPUID.0x40000003.EBX BIT(20). If the bit is set then the
> guest can call extended hypercalls.
>
> All extended hypercalls will exit to userspace by default. This allows
> for easy support of future hypercalls without being dependent on KVM
> releases.
>
> If there will be need to process the hypercall in KVM instead of
> userspace then KVM can create a capability which userspace can query to
> know which hypercalls can be handled by the KVM and enable handling
> of those hypercalls.
>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/kvm/hyperv.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 0b6964ed2e66..8551ef495cc9 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -43,6 +43,12 @@
>
>  #define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, HV_VCPUS_PER_SPARSE_BANK)
>
> +/*
> + * The TLFS carves out 64 possible extended hypercalls, numbered sequentially
> + * after the base capabilities extended hypercall.
> + */
> +#define HV_EXT_CALL_MAX (HV_EXT_CALL_QUERY_CAPABILITIES + 64)
> +

First, I thought there's an off-by-one here (and it should be '63') but
then I checked with the TLFS and figured out that the limit comes from
HvExtCallQueryCapabilities's response, which doesn't include itself
(0x8001) in the mask. This means it can encode

0x8002 == bit0
0x8003 == bit1
..
0x8041 == bit63

so indeed, the last one supported is 0x8041 == 0x8001 + 64.

Maybe it's worth extending the comment on where '64' comes from.

>  static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer,
>  				bool vcpu_kick);
>
> @@ -2411,6 +2417,9 @@ static bool hv_check_hypercall_access(struct kvm_vcpu_hv *hv_vcpu, u16 code)
>  	case HVCALL_SEND_IPI:
>  		return hv_vcpu->cpuid_cache.enlightenments_eax &
>  			HV_X64_CLUSTER_IPI_RECOMMENDED;
> +	case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
> +		return hv_vcpu->cpuid_cache.features_ebx &
> +			HV_ENABLE_EXTENDED_HYPERCALLS;
>  	default:
>  		break;
>  	}
> @@ -2564,6 +2573,12 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  		}
>  		goto hypercall_userspace_exit;
>  	}
> +	case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
> +		if (unlikely(hc.fast)) {
> +			ret = HV_STATUS_INVALID_PARAMETER;

I wasn't able to find any statement in the TLFS on whether extended
hypercalls can be 'fast'; I can imagine e.g. MemoryHeatHintAsync using
it. Unfortunately, our userspace exit will have to be modified to
handle such stuff. This can stay for the time being, I guess.

> +			break;
> +		}
> +		goto hypercall_userspace_exit;
>  	default:
>  		ret = HV_STATUS_INVALID_HYPERCALL_CODE;
>  		break;
> @@ -2722,6 +2737,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
>
>  	ent->ebx |= HV_POST_MESSAGES;
>  	ent->ebx |= HV_SIGNAL_EVENTS;
> +	ent->ebx |= HV_ENABLE_EXTENDED_HYPERCALLS;
>
>  	ent->edx |= HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE;
>  	ent->edx |= HV_FEATURE_FREQUENCY_MSRS_AVAILABLE;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
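
For reference, the code-to-bit mapping described above can be captured in a couple of helpers. This is only a sketch with locally defined constants and helper names, not code from the patch:

/*
 * Sketch of the mapping above: 0x8002 -> bit 0 ... 0x8041 -> bit 63.
 * HvExtCallQueryCapabilities (0x8001) has no bit in its own response
 * mask and is assumed available whenever CPUID reports the feature.
 */
#include <stdbool.h>
#include <stdint.h>

#define HV_EXT_CALL_QUERY_CAPABILITIES 0x8001u

static inline int ext_call_to_cap_bit(uint16_t code)
{
	if (code <= HV_EXT_CALL_QUERY_CAPABILITIES ||
	    code > HV_EXT_CALL_QUERY_CAPABILITIES + 64)
		return -1;	/* no capability bit for this code */
	return code - HV_EXT_CALL_QUERY_CAPABILITIES - 1;
}

static inline bool ext_call_supported(uint64_t cap_mask, uint16_t code)
{
	int bit = ext_call_to_cap_bit(code);

	return bit >= 0 && (cap_mask & (1ULL << bit));
}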
On Tue, Nov 22, 2022 at 8:29 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> Vipin Sharma <vipinsh@google.com> writes:
>
> > +/*
> > + * The TLFS carves out 64 possible extended hypercalls, numbered sequentially
> > + * after the base capabilities extended hypercall.
> > + */
> > +#define HV_EXT_CALL_MAX (HV_EXT_CALL_QUERY_CAPABILITIES + 64)
> > +
>
> First, I thought there's an off-by-one here (and it should be '63') but
> then I checked with the TLFS and figured out that the limit comes from
> HvExtCallQueryCapabilities's response, which doesn't include itself
> (0x8001) in the mask. This means it can encode
>
> 0x8002 == bit0
> 0x8003 == bit1
> ..
> 0x8041 == bit63
>
> so indeed, the last one supported is 0x8041 == 0x8001 + 64.
>
> Maybe it's worth extending the comment on where '64' comes from.
>

Yeah, I will expand comments.

> > +	case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
> > +		if (unlikely(hc.fast)) {
> > +			ret = HV_STATUS_INVALID_PARAMETER;
>
> I wasn't able to find any statement in the TLFS on whether extended
> hypercalls can be 'fast'; I can imagine e.g. MemoryHeatHintAsync using
> it. Unfortunately, our userspace exit will have to be modified to
> handle such stuff. This can stay for the time being, I guess.
>

I agree TLFS doesn't state anything about "fast" extended hypercalls,
but nothing stops some future call from being "fast". I think this
condition should also be handled by userspace as it is handling
everything else.

I will remove it in the next version of the patch. I don't see any
value in verification here.

> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>
> --
> Vitaly
Vipin Sharma <vipinsh@google.com> writes:

> On Tue, Nov 22, 2022 at 8:29 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>>
>> I wasn't able to find any statement in the TLFS on whether extended
>> hypercalls can be 'fast'; I can imagine e.g. MemoryHeatHintAsync using
>> it. Unfortunately, our userspace exit will have to be modified to
>> handle such stuff. This can stay for the time being, I guess.
>>
>
> I agree TLFS doesn't state anything about "fast" extended hypercalls,
> but nothing stops some future call from being "fast". I think this
> condition should also be handled by userspace as it is handling
> everything else.
>
> I will remove it in the next version of the patch. I don't see any
> value in verification here.

The problem is that we don't currently pass the 'fast' flag to userspace,
let alone XMM registers. This means that it won't be able to handle fast
hypercalls anyway. I guess it's better to keep your check but add a
comment saying that it's an implementation shortcoming and not a TLFS
requirement.
On Thu, Nov 24, 2022 at 12:36 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> Vipin Sharma <vipinsh@google.com> writes:
>
> > I agree TLFS doesn't state anything about "fast" extended hypercalls,
> > but nothing stops some future call from being "fast". I think this
> > condition should also be handled by userspace as it is handling
> > everything else.
> >
> > I will remove it in the next version of the patch. I don't see any
> > value in verification here.
>
> The problem is that we don't currently pass the 'fast' flag to userspace,
> let alone XMM registers. This means that it won't be able to handle fast
> hypercalls anyway. I guess it's better to keep your check but add a
> comment saying that it's an implementation shortcoming and not a TLFS
> requirement.
>

I think the "fast" flag gets passed to userspace via:

	vcpu->run->hyperv.u.hcall.input = hc.param;

Yeah, XMM registers won't be passed; that will require a userspace API change.
I will keep the check and explain in the comments.
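
As a side note on the exchange above: because the full 64-bit hypercall input value is forwarded to userspace, the VMM can recover the 'fast' flag from bit 16 of that value, per the TLFS hypercall input layout. A minimal sketch, with the masks written out locally rather than taken from a header:

#include <stdbool.h>
#include <stdint.h>

#define HV_HYPERCALL_CODE_MASK	0xffffULL	/* bits 15:0: call code */
#define HV_HYPERCALL_FAST_BIT	(1ULL << 16)	/* bit 16: fast (register-based) input */

static inline uint16_t hv_hcall_code(uint64_t input)
{
	return input & HV_HYPERCALL_CODE_MASK;
}

static inline bool hv_hcall_is_fast(uint64_t input)
{
	return input & HV_HYPERCALL_FAST_BIT;
}

The register (GP/XMM) payload that a fast call would carry instead of an input GPA is a separate matter and, as discussed above, is not part of the current userspace exit.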
Vipin Sharma <vipinsh@google.com> writes:

> On Thu, Nov 24, 2022 at 12:36 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>>
>> The problem is that we don't currently pass the 'fast' flag to userspace,
>> let alone XMM registers. This means that it won't be able to handle fast
>> hypercalls anyway. I guess it's better to keep your check but add a
>> comment saying that it's an implementation shortcoming and not a TLFS
>> requirement.
>>
>
> I think the "fast" flag gets passed to userspace via:
>
>	vcpu->run->hyperv.u.hcall.input = hc.param;

True, for some reason I thought it's just the hypercall code, but it's
actually the full 64-bit thing!

> Yeah, XMM registers won't be passed; that will require a userspace API change.
> I will keep the check and explain in the comments.

Thanks!
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 0b6964ed2e66..8551ef495cc9 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -43,6 +43,12 @@
 
 #define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, HV_VCPUS_PER_SPARSE_BANK)
 
+/*
+ * The TLFS carves out 64 possible extended hypercalls, numbered sequentially
+ * after the base capabilities extended hypercall.
+ */
+#define HV_EXT_CALL_MAX (HV_EXT_CALL_QUERY_CAPABILITIES + 64)
+
 static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer,
 				bool vcpu_kick);
 
@@ -2411,6 +2417,9 @@ static bool hv_check_hypercall_access(struct kvm_vcpu_hv *hv_vcpu, u16 code)
 	case HVCALL_SEND_IPI:
 		return hv_vcpu->cpuid_cache.enlightenments_eax &
 			HV_X64_CLUSTER_IPI_RECOMMENDED;
+	case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
+		return hv_vcpu->cpuid_cache.features_ebx &
+			HV_ENABLE_EXTENDED_HYPERCALLS;
 	default:
 		break;
 	}
@@ -2564,6 +2573,12 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 		}
 		goto hypercall_userspace_exit;
 	}
+	case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
+		if (unlikely(hc.fast)) {
+			ret = HV_STATUS_INVALID_PARAMETER;
+			break;
+		}
+		goto hypercall_userspace_exit;
 	default:
 		ret = HV_STATUS_INVALID_HYPERCALL_CODE;
 		break;
@@ -2722,6 +2737,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 
 	ent->ebx |= HV_POST_MESSAGES;
 	ent->ebx |= HV_SIGNAL_EVENTS;
+	ent->ebx |= HV_ENABLE_EXTENDED_HYPERCALLS;
 
 	ent->edx |= HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE;
 	ent->edx |= HV_FEATURE_FREQUENCY_MSRS_AVAILABLE;
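
For completeness, the guest-side discovery described in the commit message (CPUID.0x40000003.EBX bit 20) could look roughly like the sketch below; the cpuid helper and the constant names are illustrative, not taken from this patch.

#include <stdbool.h>
#include <stdint.h>

#define HV_CPUID_FEATURES		0x40000003u
#define HV_ENABLE_EXTENDED_HYPERCALLS	(1u << 20)	/* CPUID.0x40000003.EBX bit 20 */

static inline void cpuid_count(uint32_t leaf, uint32_t subleaf,
			       uint32_t *eax, uint32_t *ebx,
			       uint32_t *ecx, uint32_t *edx)
{
	__asm__ volatile("cpuid"
			 : "=a"(*eax), "=b"(*ebx), "=c"(*ecx), "=d"(*edx)
			 : "a"(leaf), "c"(subleaf));
}

/* A guest should issue extended hypercalls only when this returns true. */
static bool hv_has_extended_hypercalls(void)
{
	uint32_t eax, ebx, ecx, edx;

	cpuid_count(HV_CPUID_FEATURES, 0, &eax, &ebx, &ecx, &edx);
	return ebx & HV_ENABLE_EXTENDED_HYPERCALLS;
}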