Message ID | 20230220065735.1282809-9-zhaotianrui@loongson.cn |
---|---|
State | New |
Headers |
From: Tianrui Zhao <zhaotianrui@loongson.cn>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Huacai Chen <chenhuacai@kernel.org>, WANG Xuerui <kernel@xen0n.name>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe <axboe@kernel.dk>, Mark Brown <broonie@kernel.org>, Alex Deucher <alexander.deucher@amd.com>, Oliver Upton <oliver.upton@linux.dev>, maobibo@loongson.cn
Subject: [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface
Date: Mon, 20 Feb 2023 14:57:14 +0800
Message-Id: <20230220065735.1282809-9-zhaotianrui@loongson.cn>
In-Reply-To: <20230220065735.1282809-1-zhaotianrui@loongson.cn>
References: <20230220065735.1282809-1-zhaotianrui@loongson.cn> |
Series | Add KVM LoongArch support |
Commit Message
zhaotianrui
Feb. 20, 2023, 6:57 a.m. UTC
Implement the vcpu handle exit interface: get the exit code from the ESTAT
register and use the KVM exception vector to handle it.
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
arch/loongarch/kvm/vcpu.c | 86 +++++++++++++++++++++++++++++++++++++++
1 file changed, 86 insertions(+)
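
For orientation before the diff: a minimal sketch of the dispatch described above, in which the exception code extracted from ESTAT indexes a table of exit handlers. Only _kvm_handle_fault(), RESUME_HOST and the CSR field macros come from the patch below; the table name, its size and the fallback path are illustrative assumptions.

	/*
	 * Sketch only: ESTAT-driven exit dispatch. kvm_fault_tables[] is a
	 * hypothetical name; the patch below only shows the call site
	 * _kvm_handle_fault(vcpu, exccode).
	 */
	typedef int (*exit_handle_fn)(struct kvm_vcpu *vcpu);

	static exit_handle_fn kvm_fault_tables[64];	/* one slot per exception code */

	static int _kvm_handle_fault(struct kvm_vcpu *vcpu, int exccode)
	{
		exit_handle_fn fn = kvm_fault_tables[exccode];

		if (!fn)
			return RESUME_HOST;	/* unhandled exception: back to userspace */

		return fn(vcpu);
	}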
Comments
On 2/20/23 07:57, Tianrui Zhao wrote:
> +	if (ret == RESUME_GUEST)
> +		kvm_acquire_timer(vcpu);
> +
> +	if (!(ret & RESUME_HOST)) {
> +		_kvm_deliver_intr(vcpu);
> +		/* Only check for signals if not already exiting to userspace */
> +		if (signal_pending(current)) {
> +			run->exit_reason = KVM_EXIT_INTR;
> +			ret = (-EINTR << 2) | RESUME_HOST;
> +			++vcpu->stat.signal_exits;
> +			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
> +		}
> +	}
> +
> +	if (ret == RESUME_GUEST) {
> +		trace_kvm_reenter(vcpu);
> +
> +		/*
> +		 * Make sure the read of VCPU requests in vcpu_reenter()
> +		 * callback is not reordered ahead of the write to vcpu->mode,
> +		 * or we could miss a TLB flush request while the requester sees
> +		 * the VCPU as outside of guest mode and not needing an IPI.
> +		 */
> +		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
> +
> +		cpu = smp_processor_id();
> +		_kvm_check_requests(vcpu, cpu);
> +		_kvm_check_vmid(vcpu, cpu);
> +		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
> +
> +		/*
> +		 * If FPU are enabled (i.e. the guest's FPU context
> +		 * is live), restore FCSR0.
> +		 */
> +		if (_kvm_guest_has_fpu(&vcpu->arch) &&
> +				read_csr_euen() & (CSR_EUEN_FPEN)) {
> +			kvm_restore_fcsr(&vcpu->arch.fpu);
> +		}
> +	}

Please avoid copying code from arch/mips/kvm since it's already pretty ugly.
On 2/20/23 07:57, Tianrui Zhao wrote:
> + * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)

As far as I can see, RESUME_FLAG_NV does not exist anymore and this is
just copied from arch/mips?

You can keep RESUME_HOST/RESUME_GUEST for the individual functions, but
here please make it just "1" for resume guest, and "<= 0" for resume
host. This is easy enough to check from assembly and removes the srai
by 2.

> +static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +{
> +	unsigned long exst = vcpu->arch.host_estat;
> +	u32 intr = exst & 0x1fff; /* ignore NMI */
> +	u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
> +	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
> +	int ret = RESUME_GUEST, cpu;
> +
> +	vcpu->mode = OUTSIDE_GUEST_MODE;
> +
> +	/* Set a default exit reason */
> +	run->exit_reason = KVM_EXIT_UNKNOWN;
> +	run->ready_for_interrupt_injection = 1;
> +
> +	/*
> +	 * Set the appropriate status bits based on host CPU features,
> +	 * before we hit the scheduler
> +	 */

Stale comment?

> +	local_irq_enable();

Please add guest_state_exit_irqoff() here.

> +	kvm_debug("%s: exst: %lx, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
> +			__func__, exst, opc, run, vcpu);

Please add the information to the kvm_exit tracepoint (thus also
removing variables such as "exst" or "opc" from this function) instead
of calling kvm_debug().

> +	trace_kvm_exit(vcpu, exccode);
> +	if (exccode) {
> +		ret = _kvm_handle_fault(vcpu, exccode);
> +	} else {
> +		WARN(!intr, "suspicious vm exiting");
> +		++vcpu->stat.int_exits;
> +
> +		if (need_resched())
> +			cond_resched();

This "if" is not necessary because there is already a cond_resched()
below.

> +		ret = RESUME_GUEST;

This "ret" is not necessary because "ret" is already initialized to
RESUME_GUEST above, you can either remove it or remove the initializer.

> +	}
> +
> +	cond_resched();
> +	local_irq_disable();

At this point, ret is either RESUME_GUEST or RESUME_HOST. So, the "if"s
below are either all taken or all not taken, and most of this code:

	kvm_acquire_timer(vcpu);
	_kvm_deliver_intr(vcpu);

	if (signal_pending(current)) {
		run->exit_reason = KVM_EXIT_INTR;
		ret = (-EINTR << 2) | RESUME_HOST;
		++vcpu->stat.signal_exits;
		// no need for a tracepoint here
		// trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
	}

	trace_kvm_reenter(vcpu);

	/*
	 * Make sure the read of VCPU requests in vcpu_reenter()
	 * callback is not reordered ahead of the write to vcpu->mode,
	 * or we could miss a TLB flush request while the requester sees
	 * the VCPU as outside of guest mode and not needing an IPI.
	 */
	smp_store_mb(vcpu->mode, IN_GUEST_MODE);

	cpu = smp_processor_id();
	_kvm_check_requests(vcpu, cpu);
	_kvm_check_vmid(vcpu, cpu);
	vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);

	/*
	 * If FPU are enabled (i.e. the guest's FPU context
	 * is live), restore FCSR0.
	 */
	if (_kvm_guest_has_fpu(&vcpu->arch) &&
			read_csr_euen() & (CSR_EUEN_FPEN)) {
		kvm_restore_fcsr(&vcpu->arch.fpu);
	}

(all except for the "if (signal_pending(current))" and the final "if")
is pretty much duplicated with kvm_arch_vcpu_ioctl_run(); the remaining
code can also be done from kvm_arch_vcpu_ioctl_run(), the cost is small.
Please move it to a separate function, for example:

	int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
	{
		if (signal_pending(current)) {
			run->exit_reason = KVM_EXIT_INTR;
			++vcpu->stat.signal_exits;
			return -EINTR;
		}

		kvm_acquire_timer(vcpu);
		_kvm_deliver_intr(vcpu);

		...

		if (_kvm_guest_has_fpu(&vcpu->arch) &&
				read_csr_euen() & (CSR_EUEN_FPEN)) {
			kvm_restore_fcsr(&vcpu->arch.fpu);
		}
		return 1;
	}

Call it from _kvm_handle_exit():

	if (ret == RESUME_HOST)
		return 0;

	r = kvm_pre_enter_guest(vcpu);
	if (r > 0) {
		trace_kvm_reenter(vcpu);
		guest_state_enter_irqoff();
	}

	return r;

and from kvm_arch_vcpu_ioctl_run():

	local_irq_disable();
	guest_timing_enter_irqoff();
	r = kvm_pre_enter_guest(vcpu);
	if (r > 0) {
		trace_kvm_enter(vcpu);
		/*
		 * This should actually not be a function pointer, but
		 * just for clarity
		 */
		guest_state_enter_irqoff();
		r = vcpu->arch.vcpu_run(run, vcpu);
		/* guest_state_exit_irqoff() already done. */
		trace_kvm_out(vcpu);
	}
	guest_timing_exit_irqoff();
	local_irq_enable();

out:
	kvm_sigset_deactivate(vcpu);

	vcpu_put(vcpu);
	return r;

Paolo

> +	if (ret == RESUME_GUEST)
> +		kvm_acquire_timer(vcpu);
> +
> +	if (!(ret & RESUME_HOST)) {
> +		_kvm_deliver_intr(vcpu);
> +		/* Only check for signals if not already exiting to userspace */
> +		if (signal_pending(current)) {
> +			run->exit_reason = KVM_EXIT_INTR;
> +			ret = (-EINTR << 2) | RESUME_HOST;
> +			++vcpu->stat.signal_exits;
> +			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
> +		}
> +	}
> +
> +	if (ret == RESUME_GUEST) {
> +		trace_kvm_reenter(vcpu);
> +
> +		/*
> +		 * Make sure the read of VCPU requests in vcpu_reenter()
> +		 * callback is not reordered ahead of the write to vcpu->mode,
> +		 * or we could miss a TLB flush request while the requester sees
> +		 * the VCPU as outside of guest mode and not needing an IPI.
> +		 */
> +		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
> +
> +		cpu = smp_processor_id();
> +		_kvm_check_requests(vcpu, cpu);
> +		_kvm_check_vmid(vcpu, cpu);
> +		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
> +
> +		/*
> +		 * If FPU are enabled (i.e. the guest's FPU context
> +		 * is live), restore FCSR0.
> +		 */
> +		if (_kvm_guest_has_fpu(&vcpu->arch) &&
> +				read_csr_euen() & (CSR_EUEN_FPEN)) {
> +			kvm_restore_fcsr(&vcpu->arch.fpu);
> +		}
> +	}
> +
> +	return ret;
> +}
> +
>  int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  {
>  	int i;
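
A side note on the "removes the srai by 2" remark, as an illustration that is not part of the series: with the MIPS-style packed return value the caller has to mask (and shift to recover the errcode), while the suggested convention needs only a sign/zero test.

	/* Illustration only: how the two conventions look to the caller. */

	/* MIPS-style: ret == (errcode << 2) | RESUME_FLAG_HOST */
	static inline bool resume_guest_packed(int ret)
	{
		return !(ret & RESUME_HOST);	/* errcode needs ret >> 2, the "srai by 2" */
	}

	/* Suggested: 1 means re-enter the guest, <= 0 means return to userspace */
	static inline bool resume_guest_simple(int ret)
	{
		return ret > 0;			/* a single branch, trivial from assembly too */
	}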
On 2023-02-21 02:45, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> + * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST |
>> RESUME_FLAG_NV)
>
> As far as I can see, RESUME_FLAG_NV does not exist anymore and this is
> just copied from arch/mips?
>
> You can keep RESUME_HOST/RESUME_GUEST for the individual functions,
> but here please make it just "1" for resume guest, and "<= 0" for
> resume host. This is easy enough to check from assembly and removes
> the srai by 2.
>
>> +static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
>> +{
>> +	unsigned long exst = vcpu->arch.host_estat;
>> +	u32 intr = exst & 0x1fff; /* ignore NMI */
>> +	u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
>> +	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>> +	int ret = RESUME_GUEST, cpu;
>> +
>> +	vcpu->mode = OUTSIDE_GUEST_MODE;
>> +
>> +	/* Set a default exit reason */
>> +	run->exit_reason = KVM_EXIT_UNKNOWN;
>> +	run->ready_for_interrupt_injection = 1;
>> +
>> +	/*
>> +	 * Set the appropriate status bits based on host CPU features,
>> +	 * before we hit the scheduler
>> +	 */
>
> Stale comment?

I will remove this comment.

Thanks
Tianrui Zhao

>
>> +	local_irq_enable();
>
> Please add guest_state_exit_irqoff() here.

I will add this function here.

Thanks
Tianrui Zhao

>
>> +	kvm_debug("%s: exst: %lx, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
>> +			__func__, exst, opc, run, vcpu);
>
> Please add the information to the kvm_exit tracepoint (thus also
> removing variables such as "exst" or "opc" from this function) instead
> of calling kvm_debug().

OK, I will fix the kvm exit tracepoint function.

Thanks
Tianrui Zhao

>
>> +	trace_kvm_exit(vcpu, exccode);
>> +	if (exccode) {
>> +		ret = _kvm_handle_fault(vcpu, exccode);
>> +	} else {
>> +		WARN(!intr, "suspicious vm exiting");
>> +		++vcpu->stat.int_exits;
>> +
>> +		if (need_resched())
>> +			cond_resched();
>
> This "if" is not necessary because there is already a cond_resched()
> below.

Thanks, I will remove this cond_resched function.

Thanks
Tianrui Zhao

>
>> +		ret = RESUME_GUEST;
>
> This "ret" is not necessary because "ret" is already initialized to
> RESUME_GUEST above, you can either remove it or remove the initializer.

OK, I will remove this "ret".

Thanks
Tianrui Zhao

>
>> +	}
>> +
>> +	cond_resched();
>> +	local_irq_disable();
>
> At this point, ret is either RESUME_GUEST or RESUME_HOST. So, the
> "if"s below are either all taken or all not taken, and most of this code:
>
> 	kvm_acquire_timer(vcpu);
> 	_kvm_deliver_intr(vcpu);
>
> 	if (signal_pending(current)) {
> 		run->exit_reason = KVM_EXIT_INTR;
> 		ret = (-EINTR << 2) | RESUME_HOST;
> 		++vcpu->stat.signal_exits;
> 		// no need for a tracepoint here
> 		// trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
> 	}
>
> 	trace_kvm_reenter(vcpu);
>
> 	/*
> 	 * Make sure the read of VCPU requests in vcpu_reenter()
> 	 * callback is not reordered ahead of the write to vcpu->mode,
> 	 * or we could miss a TLB flush request while the requester sees
> 	 * the VCPU as outside of guest mode and not needing an IPI.
> 	 */
> 	smp_store_mb(vcpu->mode, IN_GUEST_MODE);
>
> 	cpu = smp_processor_id();
> 	_kvm_check_requests(vcpu, cpu);
> 	_kvm_check_vmid(vcpu, cpu);
> 	vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
>
> 	/*
> 	 * If FPU are enabled (i.e. the guest's FPU context
> 	 * is live), restore FCSR0.
> 	 */
> 	if (_kvm_guest_has_fpu(&vcpu->arch) &&
> 			read_csr_euen() & (CSR_EUEN_FPEN)) {
> 		kvm_restore_fcsr(&vcpu->arch.fpu);
> 	}
>
> (all except for the "if (signal_pending(current))" and the final "if")
> is pretty much duplicated with kvm_arch_vcpu_ioctl_run(); the
> remaining code can also be done from kvm_arch_vcpu_ioctl_run(), the
> cost is small. Please move it to a separate function, for example:
>
> 	int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
> 	{
> 		if (signal_pending(current)) {
> 			run->exit_reason = KVM_EXIT_INTR;
> 			++vcpu->stat.signal_exits;
> 			return -EINTR;
> 		}
>
> 		kvm_acquire_timer(vcpu);
> 		_kvm_deliver_intr(vcpu);
>
> 		...
>
> 		if (_kvm_guest_has_fpu(&vcpu->arch) &&
> 				read_csr_euen() & (CSR_EUEN_FPEN)) {
> 			kvm_restore_fcsr(&vcpu->arch.fpu);
> 		}
> 		return 1;
> 	}
>
> Call it from _kvm_handle_exit():
>
> 	if (ret == RESUME_HOST)
> 		return 0;
>
> 	r = kvm_pre_enter_guest(vcpu);
> 	if (r > 0) {
> 		trace_kvm_reenter(vcpu);
> 		guest_state_enter_irqoff();
> 	}
>
> 	return r;
>
> and from kvm_arch_vcpu_ioctl_run():
>
> 	local_irq_disable();
> 	guest_timing_enter_irqoff();
> 	r = kvm_pre_enter_guest(vcpu);
> 	if (r > 0) {
> 		trace_kvm_enter(vcpu);
> 		/*
> 		 * This should actually not be a function pointer, but
> 		 * just for clarity
> 		 */
> 		guest_state_enter_irqoff();
> 		r = vcpu->arch.vcpu_run(run, vcpu);
> 		/* guest_state_exit_irqoff() already done. */
> 		trace_kvm_out(vcpu);
> 	}
> 	guest_timing_exit_irqoff();
> 	local_irq_enable();
>
> out:
> 	kvm_sigset_deactivate(vcpu);
>
> 	vcpu_put(vcpu);
> 	return r;
>
> Paolo

Thanks, I will reorganize this code and add the kvm_pre_enter_guest
function, and apply it in the vcpu_handle_exit and vcpu_run.

Thanks
Tianrui Zhao

>
>> +	if (ret == RESUME_GUEST)
>> +		kvm_acquire_timer(vcpu);
>> +
>> +	if (!(ret & RESUME_HOST)) {
>> +		_kvm_deliver_intr(vcpu);
>> +		/* Only check for signals if not already exiting to userspace */
>> +		if (signal_pending(current)) {
>> +			run->exit_reason = KVM_EXIT_INTR;
>> +			ret = (-EINTR << 2) | RESUME_HOST;
>> +			++vcpu->stat.signal_exits;
>> +			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
>> +		}
>> +	}
>> +
>> +	if (ret == RESUME_GUEST) {
>> +		trace_kvm_reenter(vcpu);
>> +
>> +		/*
>> +		 * Make sure the read of VCPU requests in vcpu_reenter()
>> +		 * callback is not reordered ahead of the write to vcpu->mode,
>> +		 * or we could miss a TLB flush request while the requester sees
>> +		 * the VCPU as outside of guest mode and not needing an IPI.
>> +		 */
>> +		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
>> +
>> +		cpu = smp_processor_id();
>> +		_kvm_check_requests(vcpu, cpu);
>> +		_kvm_check_vmid(vcpu, cpu);
>> +		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
>> +
>> +		/*
>> +		 * If FPU are enabled (i.e. the guest's FPU context
>> +		 * is live), restore FCSR0.
>> +		 */
>> +		if (_kvm_guest_has_fpu(&vcpu->arch) &&
>> +				read_csr_euen() & (CSR_EUEN_FPEN)) {
>> +			kvm_restore_fcsr(&vcpu->arch.fpu);
>> +		}
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>>  int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>>  {
>>  	int i;
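
For the tracepoint change agreed above, a hedged sketch of a kvm_exit event that records what kvm_debug() currently prints. The field names, layout and format string are assumptions rather than code from the series, and the event would live in the arch trace header with the usual CREATE_TRACE_POINTS plumbing; it relies only on vcpu->arch.pc and vcpu->arch.host_estat, which the patch does define.

	/* Sketch: kvm_exit tracepoint carrying the exit PC, raw ESTAT and reason. */
	TRACE_EVENT(kvm_exit,
		TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
		TP_ARGS(vcpu, reason),

		TP_STRUCT__entry(
			__field(unsigned long,	pc)
			__field(unsigned long,	estat)
			__field(unsigned int,	reason)
		),

		TP_fast_assign(
			__entry->pc	= vcpu->arch.pc;
			__entry->estat	= vcpu->arch.host_estat;
			__entry->reason	= reason;
		),

		TP_printk("vcpu pc 0x%lx, estat 0x%lx, reason %u",
			  __entry->pc, __entry->estat, __entry->reason)
	);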
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 571ac8b9d..e08a4faa0 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -38,6 +38,92 @@ static int _kvm_check_requests(struct kvm_vcpu *vcpu, int cpu)
 	return ret;
 }
 
+/*
+ * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
+ */
+static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	unsigned long exst = vcpu->arch.host_estat;
+	u32 intr = exst & 0x1fff; /* ignore NMI */
+	u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
+	int ret = RESUME_GUEST, cpu;
+
+	vcpu->mode = OUTSIDE_GUEST_MODE;
+
+	/* Set a default exit reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+	run->ready_for_interrupt_injection = 1;
+
+	/*
+	 * Set the appropriate status bits based on host CPU features,
+	 * before we hit the scheduler
+	 */
+
+	local_irq_enable();
+
+	kvm_debug("%s: exst: %lx, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
+			__func__, exst, opc, run, vcpu);
+	trace_kvm_exit(vcpu, exccode);
+	if (exccode) {
+		ret = _kvm_handle_fault(vcpu, exccode);
+	} else {
+		WARN(!intr, "suspicious vm exiting");
+		++vcpu->stat.int_exits;
+
+		if (need_resched())
+			cond_resched();
+
+		ret = RESUME_GUEST;
+	}
+
+	cond_resched();
+
+	local_irq_disable();
+
+	if (ret == RESUME_GUEST)
+		kvm_acquire_timer(vcpu);
+
+	if (!(ret & RESUME_HOST)) {
+		_kvm_deliver_intr(vcpu);
+		/* Only check for signals if not already exiting to userspace */
+		if (signal_pending(current)) {
+			run->exit_reason = KVM_EXIT_INTR;
+			ret = (-EINTR << 2) | RESUME_HOST;
+			++vcpu->stat.signal_exits;
+			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
+		}
+	}
+
+	if (ret == RESUME_GUEST) {
+		trace_kvm_reenter(vcpu);
+
+		/*
+		 * Make sure the read of VCPU requests in vcpu_reenter()
+		 * callback is not reordered ahead of the write to vcpu->mode,
+		 * or we could miss a TLB flush request while the requester sees
+		 * the VCPU as outside of guest mode and not needing an IPI.
+		 */
+		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
+
+		cpu = smp_processor_id();
+		_kvm_check_requests(vcpu, cpu);
+		_kvm_check_vmid(vcpu, cpu);
+		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+
+		/*
+		 * If FPU are enabled (i.e. the guest's FPU context
+		 * is live), restore FCSR0.
+		 */
+		if (_kvm_guest_has_fpu(&vcpu->arch) &&
+				read_csr_euen() & (CSR_EUEN_FPEN)) {
+			kvm_restore_fcsr(&vcpu->arch.fpu);
+		}
+	}
+
+	return ret;
+}
+
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	int i;