Message ID | 20221117143242.102721-10-mlevitsk@redhat.com |
---|---|
State | New |
Headers |
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, Ingo Molnar <mingo@redhat.com>,
    "H. Peter Anvin" <hpa@zytor.com>, Dave Hansen <dave.hansen@linux.intel.com>,
    linux-kernel@vger.kernel.org, Peter Zijlstra <peterz@infradead.org>,
    Thomas Gleixner <tglx@linutronix.de>, Sandipan Das <sandipan.das@amd.com>,
    Daniel Sneddon <daniel.sneddon@linux.intel.com>, Jing Liu <jing2.liu@intel.com>,
    Josh Poimboeuf <jpoimboe@kernel.org>, Wyes Karny <wyes.karny@amd.com>,
    Borislav Petkov <bp@alien8.de>, Babu Moger <babu.moger@amd.com>,
    Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
    Sean Christopherson <seanjc@google.com>, Jim Mattson <jmattson@google.com>,
    x86@kernel.org, Maxim Levitsky <mlevitsk@redhat.com>
Subject: [PATCH 09/13] KVM: SVM: allow NMI window with vNMI
Date: Thu, 17 Nov 2022 16:32:38 +0200
Message-Id: <20221117143242.102721-10-mlevitsk@redhat.com>
In-Reply-To: <20221117143242.102721-1-mlevitsk@redhat.com>
References: <20221117143242.102721-1-mlevitsk@redhat.com>
Series | SVM: vNMI (with my fixes) |
Commit Message
Maxim Levitsky
Nov. 17, 2022, 2:32 p.m. UTC
When vNMI is enabled, the only case in which KVM needs an NMI
window is when a vNMI injection is pending.
In that case, on the next IRET/RSM/STGI the injection is guaranteed
to be complete, and a new NMI can be injected.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
arch/x86/kvm/svm/svm.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
Comments
On Thu, Nov 17, 2022, Maxim Levitsky wrote:
> When the vNMI is enabled, the only case when the KVM will use an NMI
> window is when the vNMI injection is pending.
>
> In this case on next IRET/RSM/STGI, the injection has to be complete
> and a new NMI can be injected.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/svm/svm.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index cfec4c98bb589b..eaa30f8ace518d 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2477,7 +2477,10 @@ static int iret_interception(struct kvm_vcpu *vcpu)
>  	struct vcpu_svm *svm = to_svm(vcpu);
>
>  	++vcpu->stat.nmi_window_exits;
> -	vcpu->arch.hflags |= HF_IRET_MASK;
> +
> +	if (!is_vnmi_enabled(svm))
> +		vcpu->arch.hflags |= HF_IRET_MASK;

Ugh, HF_IRET_MASK is such a terrible name/flag.  Given that it lives with GIF
and NMI, one would naturally think that it means "IRET is intercepted", but it
really means "KVM just intercepted an IRET and is waiting for NMIs to become
unblocked".

And on a related topic, why on earth are GIF, NMI, and IRET tracked in hflags?
They are 100% SVM concepts.  IMO, this code would be much easier to follow by
making them bools in vcpu_svm with more descriptive names.

> +
>  	if (!sev_es_guest(vcpu->kvm)) {
>  		svm_clr_intercept(svm, INTERCEPT_IRET);
>  		svm->nmi_iret_rip = kvm_rip_read(vcpu);

The vNMI interaction with this logic is confusing, as nmi_iret_rip doesn't need
to be captured for the vNMI case.  SEV-ES actually has unrelated reasons for not
reading RIP vs. not intercepting IRET; they just got bundled together here for
convenience.

This is also an opportunity to clean up the SEV-ES interaction with IRET
interception, which is splattered all over the place and isn't documented
anywhere.  E.g. (with an HF_IRET_MASK => awaiting_iret_completion change)

/*
 * For SEV-ES guests, KVM must not rely on IRET to detect NMI unblocking as
 * #VC->IRET in the guest will result in KVM thinking NMIs are unblocked before
 * the guest is ready for a new NMI.  Architecturally, KVM is 100% correct to
 * treat NMIs as unblocked on IRET, but the guest-host ABI for SEV-ES guests is
 * that KVM must wait for an explicit "NMI Complete" from the guest.
 */
static void svm_disable_iret_interception(struct vcpu_svm *svm)
{
	if (!sev_es_guest(svm->vcpu.kvm))
		svm_clr_intercept(svm, INTERCEPT_IRET);
}

static void svm_enable_iret_interception(struct vcpu_svm *svm)
{
	if (!sev_es_guest(svm->vcpu.kvm))
		svm_set_intercept(svm, INTERCEPT_IRET);
}

static int iret_interception(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	++vcpu->stat.nmi_window_exits;

	/*
	 * No need to wait for the IRET to complete if vNMIs are enabled as
	 * hardware will automatically process the pending NMI when NMIs are
	 * unblocked from the guest's perspective.
	 */
	if (!is_vnmi_enabled(svm)) {
		svm->awaiting_iret_completion = true;

		/*
		 * The guest's RIP is inaccessible for SEV-ES guests, just
		 * assume forward progress was made on the next VM-Exit.
		 */
		if (!sev_es_guest(vcpu->kvm))
			svm->nmi_iret_rip = kvm_rip_read(vcpu);
	}

	svm_disable_iret_interception(svm);

	kvm_make_request(KVM_REQ_EVENT, vcpu);
	return 1;
}

> @@ -3735,9 +3738,6 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
>
> -	if (is_vnmi_enabled(svm))
> -		return;
> -
>  	if ((vcpu->arch.hflags & (HF_NMI_MASK | HF_IRET_MASK)) == HF_NMI_MASK)
>  		return; /* IRET will cause a vm exit */

As much as I like incremental patches, in this case I'm having a hell of a time
reviewing the code as the vNMI logic ends up being split across four patches.
E.g. in this particular case, the above requires knowing that svm_inject_nmi()
never sets HF_NMI_MASK when vNMI is enabled.

In the next version, any objection to squashing patches 7-10 into a single "Add
non-nested vNMI support" patch?

As for this code, IMO some pre-work to change the flow would help with the vNMI
case.  The GIF=0 logic overrides legacy NMI blocking, and so can be handled
first.  And I vote to explicitly set INTERCEPT_IRET in the above case instead
of relying on INTERCEPT_IRET to already be set by svm_inject_nmi().

That would yield this as a final result:

static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	/*
	 * GIF=0 blocks NMIs irrespective of legacy NMI blocking.  No need to
	 * intercept or single-step IRET if GIF=0, just intercept STGI.
	 */
	if (!gif_set(svm)) {
		if (vgif)
			svm_set_intercept(svm, INTERCEPT_STGI);
		return;
	}

	/*
	 * NMI is blocked, either because an NMI is in service or because KVM
	 * just injected an NMI.  If KVM is waiting for an intercepted IRET to
	 * complete, single-step the IRET to wait for NMIs to become unblocked.
	 * Otherwise, intercept the guest's next IRET.
	 */
	if (svm->awaiting_iret_completion) {
		svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
		svm->nmi_singlestep = true;
		svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
	} else {
		svm_set_intercept(svm, INTERCEPT_IRET);
	}
}

> @@ -3751,9 +3751,14 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
>  	 * Something prevents NMI from been injected. Single step over possible
>  	 * problem (IRET or exception injection or interrupt shadow)
>  	 */
> -	svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
> -	svm->nmi_singlestep = true;
> -	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
> +
> +	if (is_vnmi_enabled(svm)) {
> +		svm_set_intercept(svm, INTERCEPT_IRET);

This will break SEV-ES.  Per commit 4444dfe4050b ("KVM: SVM: Add NMI support for
an SEV-ES guest"), the hypervisor must not rely on IRET interception to detect
NMI unblocking for SEV-ES guests.

As above, I think we should provide helpers to toggle NMI interception to reduce
the probability of breaking SEV-ES.

> +	} else {
> +		svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
> +		svm->nmi_singlestep = true;
> +		svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
> +	}
>  }
>
>  static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
> --
> 2.34.3
On Thu, 2022-11-17 at 18:21 +0000, Sean Christopherson wrote:
> [...]
>
> The vNMI interaction with this logic is confusing, as nmi_iret_rip doesn't need
> to be captured for the vNMI case.  SEV-ES actually has unrelated reasons for not
> reading RIP vs. not intercepting IRET, they just got bundled together here for
> convenience.

Yes, this can be cleaned up; again, I didn't want to change too much of the code.

> This is also an opportunity to clean up the SEV-ES interaction with IRET
> interception, which is splattered all over the place and isn't documented
> anywhere.  E.g. (with an HF_IRET_MASK => awaiting_iret_completion change)
>
> [proposed svm_enable_iret_interception()/svm_disable_iret_interception()
> helpers and reworked iret_interception() snipped]

This makes sense, but doesn't have to be done in this patch series IMHO.

> As much as I like incremental patches, in this case I'm having a hell of a time
> reviewing the code as the vNMI logic ends up being split across four patches.
> E.g. in this particular case, the above requires knowing that svm_inject_nmi()
> never sets HF_NMI_MASK when vNMI is enabled.
>
> In the next version, any objection to squashing patches 7-10 into a single "Add
> non-nested vNMI support" patch?

No objection at all - again, since this is not my patch series, I didn't want to
make too many invasive changes to it.

> [...]
>
> > +	if (is_vnmi_enabled(svm)) {
> > +		svm_set_intercept(svm, INTERCEPT_IRET);
>
> This will break SEV-ES.  Per commit 4444dfe4050b ("KVM: SVM: Add NMI support for
> an SEV-ES guest"), the hypervisor must not rely on IRET interception to detect
> NMI unblocking for SEV-ES guests.  As above, I think we should provide helpers
> to toggle NMI interception to reduce the probability of breaking SEV-ES.

Yes, one more reason for the helpers; I didn't notice that I missed that 'if'.

Best regards,
	Maxim Levitsky
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cfec4c98bb589b..eaa30f8ace518d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2477,7 +2477,10 @@ static int iret_interception(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	++vcpu->stat.nmi_window_exits;
-	vcpu->arch.hflags |= HF_IRET_MASK;
+
+	if (!is_vnmi_enabled(svm))
+		vcpu->arch.hflags |= HF_IRET_MASK;
+
 	if (!sev_es_guest(vcpu->kvm)) {
 		svm_clr_intercept(svm, INTERCEPT_IRET);
 		svm->nmi_iret_rip = kvm_rip_read(vcpu);
@@ -3735,9 +3738,6 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (is_vnmi_enabled(svm))
-		return;
-
 	if ((vcpu->arch.hflags & (HF_NMI_MASK | HF_IRET_MASK)) == HF_NMI_MASK)
 		return; /* IRET will cause a vm exit */
 
@@ -3751,9 +3751,14 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
 	 * Something prevents NMI from been injected. Single step over possible
 	 * problem (IRET or exception injection or interrupt shadow)
 	 */
-	svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
-	svm->nmi_singlestep = true;
-	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
+
+	if (is_vnmi_enabled(svm)) {
+		svm_set_intercept(svm, INTERCEPT_IRET);
+	} else {
+		svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
+		svm->nmi_singlestep = true;
+		svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
+	}
 }
 
 static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)