From patchwork Tue Feb 14 02:56:25 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56614
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 01/24] LoongArch: KVM: Implement kvm module related interface
Date: Tue, 14 Feb 2023 10:56:25 +0800
Message-Id: <20230214025648.1898508-2-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
1. Implement the LoongArch KVM module init and exit interfaces, using the kvm context to save the vpid info and the vcpu world-switch interface pointer.
2. Implement the KVM hardware enable and disable interfaces, setting the guest config register to enable virtualization features.
3. Add KVM-related headers.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/include/asm/cpu-features.h |  22 ++
 arch/loongarch/include/asm/kvm_host.h     | 257 ++++++++++++++++++++++
 arch/loongarch/include/asm/kvm_types.h    |  11 +
 arch/loongarch/include/uapi/asm/kvm.h     | 121 ++++++++++
 arch/loongarch/kvm/main.c                 | 152 +++++++++++++
 include/uapi/linux/kvm.h                  |  15 ++
 6 files changed, 578 insertions(+)
 create mode 100644 arch/loongarch/include/asm/kvm_host.h
 create mode 100644 arch/loongarch/include/asm/kvm_types.h
 create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
 create mode 100644 arch/loongarch/kvm/main.c

diff --git a/arch/loongarch/include/asm/cpu-features.h b/arch/loongarch/include/asm/cpu-features.h
index b07974218..23e7c3ae5 100644
--- a/arch/loongarch/include/asm/cpu-features.h
+++ b/arch/loongarch/include/asm/cpu-features.h
@@ -64,5 +64,27 @@
 #define cpu_has_guestid		cpu_opt(LOONGARCH_CPU_GUESTID)
 #define cpu_has_hypervisor	cpu_opt(LOONGARCH_CPU_HYPERVISOR)
+#define cpu_has_matc_guest	(cpu_data[0].guest_cfg & (1 << 0))
+#define cpu_has_matc_root	(cpu_data[0].guest_cfg & (1 << 1))
+#define cpu_has_matc_nest	(cpu_data[0].guest_cfg & (1 << 2))
+#define cpu_has_sitp		(cpu_data[0].guest_cfg & (1 << 6))
+#define cpu_has_titp		(cpu_data[0].guest_cfg & (1 << 8))
+#define cpu_has_toep		(cpu_data[0].guest_cfg & (1 << 10))
+#define cpu_has_topp		(cpu_data[0].guest_cfg & (1 << 12))
+#define cpu_has_torup		(cpu_data[0].guest_cfg & (1 << 14))
+#define cpu_has_gcip_all	(cpu_data[0].guest_cfg & (1 << 16))
+#define cpu_has_gcip_hit	(cpu_data[0].guest_cfg & (1 << 17))
+#define cpu_has_gcip_secure	(cpu_data[0].guest_cfg & (1 << 18))
+
+/*
+ * Guest capabilities
+ */
+#define cpu_guest_has_conf1	(cpu_data[0].guest.conf & (1 << 1))
+#define cpu_guest_has_conf2	(cpu_data[0].guest.conf & (1 << 2))
+#define cpu_guest_has_conf3	(cpu_data[0].guest.conf & (1 << 3))
+#define cpu_guest_has_fpu	(cpu_data[0].guest.options & LOONGARCH_CPU_FPU)
+#define cpu_guest_has_perf	(cpu_data[0].guest.options & LOONGARCH_CPU_PMP)
+#define cpu_guest_has_watch	(cpu_data[0].guest.options & LOONGARCH_CPU_WATCH)
+#define cpu_guest_has_lsx	(cpu_data[0].guest.ases & LOONGARCH_ASE_LSX)
 
 #endif /* __ASM_CPU_FEATURES_H */
diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
new file mode 100644
index 000000000..fa464e476
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_host.h
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_HOST_H__
+#define __ASM_LOONGARCH_KVM_HOST_H__
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+/* Loongarch KVM register ids */
+#define LOONGARCH_CSR_32(_R, _S)					\
+	(KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U32 | (8 * (_R) + (_S)))
+
+#define LOONGARCH_CSR_64(_R, _S)					\
+	(KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * (_R) + (_S)))
+
+#define KVM_IOC_CSRID(id)	LOONGARCH_CSR_64(id, 0)
+#define KVM_GET_IOC_CSRIDX(id)	((id & KVM_CSR_IDX_MASK) >> 3)
+
+#define KVM_MAX_VCPUS		256
+/* memory slots that are not exposed to userspace */
+#define KVM_PRIVATE_MEM_SLOTS	0
+
+#define KVM_HALT_POLL_NS_DEFAULT 500000
+
+struct kvm_vm_stat {
+	struct kvm_vm_stat_generic generic;
+};
+
+struct kvm_vcpu_stat {
+	struct kvm_vcpu_stat_generic generic;
+	u64 idle_exits;
+	u64 signal_exits;
+	u64 int_exits;
+	u64 cpucfg_exits;
+};
+
+struct kvm_arch_memory_slot {
+};
+
+struct kvm_context {
+	unsigned long vpid_mask;
+	unsigned long vpid_cache;
+	void *kvm_eentry;
+	void *kvm_enter_guest;
+	unsigned long page_order;
+	struct kvm_vcpu *last_vcpu;
+};
+
+struct kvm_arch {
+	/* Guest physical mm */
+	struct mm_struct gpa_mm;
+	/* Mask of CPUs needing GPA ASID flush */
+	cpumask_t asid_flush_mask;
+
+	unsigned char online_vcpus;
+	unsigned char is_migrate;
+	s64 time_offset;
+	struct kvm_context __percpu *vmcs;
+};
+
+
+#define LOONGARCH_CSRS		0x100
+#define CSR_UCWIN_BASE		0x100
+#define CSR_UCWIN_SIZE		0x10
+#define CSR_DMWIN_BASE		0x180
+#define CSR_DMWIN_SIZE		0x4
+#define CSR_PERF_BASE		0x200
+#define CSR_PERF_SIZE		0x8
+#define CSR_DEBUG_BASE		0x500
+#define CSR_DEBUG_SIZE		0x3
+#define CSR_ALL_SIZE		0x800
+
+struct loongarch_csrs {
+	unsigned long csrs[CSR_ALL_SIZE];
+};
+
+/* Resume Flags */
+#define RESUME_FLAG_DR		(1<<0)	/* Reload guest nonvolatile state? */
+#define RESUME_FLAG_HOST	(1<<1)	/* Resume host? */
+
+#define RESUME_GUEST		0
+#define RESUME_GUEST_DR		RESUME_FLAG_DR
+#define RESUME_HOST		RESUME_FLAG_HOST
+
+enum emulation_result {
+	EMULATE_DONE,		/* no further processing */
+	EMULATE_DO_MMIO,	/* kvm_run filled with MMIO request */
+	EMULATE_FAIL,		/* can't emulate this instruction */
+	EMULATE_WAIT,		/* WAIT instruction */
+	EMULATE_EXCEPT,		/* A guest exception has been generated */
+	EMULATE_DO_IOCSR,	/* handle IOCSR request */
+};
+
+#define KVM_NR_MEM_OBJS		4
+#define KVM_LARCH_FPU		(0x1 << 0)
+
+struct kvm_vcpu_arch {
+	unsigned long guest_eentry;
+	unsigned long host_eentry;
+	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+	int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+
+	/* Host registers preserved across guest mode execution */
+	unsigned long host_stack;
+	unsigned long host_gp;
+	unsigned long host_pgd;
+	unsigned long host_pgdhi;
+	unsigned long host_entryhi;
+
+	/* Host CSR registers used when handling exits from guest */
+	unsigned long badv;
+	unsigned long host_estat;
+	unsigned long badi;
+	unsigned long host_ecfg;
+	unsigned long host_percpu;
+
+	/* GPRS */
+	unsigned long gprs[32];
+	unsigned long pc;
+
+	/* FPU State */
+	struct loongarch_fpu fpu FPU_ALIGN;
+	/* Which auxiliary state is loaded (KVM_LOONGARCH_AUX_*) */
+	unsigned int aux_inuse;
+
+	/* CSR State */
+	struct loongarch_csrs *csr;
+
+	/* GPR used as IO source/target */
+	u32 io_gpr;
+
+	struct hrtimer swtimer;
+	/* Count timer control KVM register */
+	u32 count_ctl;
+
+	/* Bitmask of exceptions that are pending */
+	unsigned long irq_pending;
+	/* Bitmask of pending exceptions to be cleared */
+	unsigned long irq_clear;
+
+	/* Cache some mmu pages needed inside spinlock regions */
+	struct kvm_mmu_memory_cache mmu_page_cache;
+
+	/* vcpu's vpid is different on each host cpu in an smp system */
+	u64 vpid[NR_CPUS];
+
+	/* Period of stable timer tick in ns */
+	u64 timer_period;
+	/* Frequency of stable timer in Hz */
+	u64 timer_mhz;
+	/* Stable bias from the raw time */
+	u64 timer_bias;
+	/* Dynamic nanosecond bias (multiple of timer_period) to avoid overflow */
+	s64 timer_dyn_bias;
+	/* Save ktime */
+	ktime_t stable_ktime_saved;
+
+	u64 core_ext_ioisr[4];
+
+	/* Last CPU the VCPU state was loaded on */
+	int last_sched_cpu;
+	/* Last CPU the VCPU actually executed guest code on */
+	int last_exec_cpu;
+
+	u8 fpu_enabled;
+	struct kvm_guest_debug_arch guest_debug;
+};
+
+static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg)
+{
+	return csr->csrs[reg];
+}
+
+static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg,
+				  unsigned long val)
+{
+	csr->csrs[reg] = val;
+}
+
+/* Helpers */
+static inline bool _kvm_guest_has_fpu(struct kvm_vcpu_arch *arch)
+{
+	return cpu_has_fpu && arch->fpu_enabled;
+}
+
+void _kvm_init_fault(void);
+
+/* Debug: dump vcpu state */
+int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
+
+/* MMU handling */
+int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write);
+void kvm_flush_tlb_all(void);
+void _kvm_destroy_mm(struct kvm *kvm);
+pgd_t *kvm_pgd_alloc(void);
+
+#define KVM_ARCH_WANT_MMU_NOTIFIER
+int kvm_unmap_hva_range(struct kvm *kvm,
+			unsigned long start, unsigned long end, bool blockable);
+void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+
+static inline void update_pc(struct kvm_vcpu_arch *arch)
+{
+	arch->pc += 4;
+}
+
+/**
+ * kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault.
+ * @vcpu:	Virtual CPU.
+ *
+ * Returns:	Whether the TLBL exception was likely due to an instruction
+ *		fetch fault rather than a data load fault.
+ */
+static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
+{
+	if (arch->pc == arch->badv)
+		return true;
+
+	return false;
+}
+
+/* Misc */
+static inline void kvm_arch_hardware_unsetup(void) {}
+static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_free_memslot(struct kvm *kvm,
+					 struct kvm_memory_slot *slot) {}
+void _kvm_check_vmid(struct kvm_vcpu *vcpu, int cpu);
+enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer);
+int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa);
+void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
+					const struct kvm_memory_slot *memslot);
+void kvm_init_vmcs(struct kvm *kvm);
+void kvm_vector_entry(void);
+int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu);
+extern const unsigned long kvm_vector_size;
+extern const unsigned long kvm_enter_guest_size;
+#endif /* __ASM_LOONGARCH_KVM_HOST_H__ */
diff --git a/arch/loongarch/include/asm/kvm_types.h b/arch/loongarch/include/asm/kvm_types.h
new file mode 100644
index 000000000..060647b5f
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_types.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef _ASM_LOONGARCH_KVM_TYPES_H
+#define _ASM_LOONGARCH_KVM_TYPES_H
+
+#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE	4
+
+#endif /* _ASM_LOONGARCH_KVM_TYPES_H */
diff --git a/arch/loongarch/include/uapi/asm/kvm.h b/arch/loongarch/include/uapi/asm/kvm.h
new file mode 100644
index 000000000..0f90e7913
--- /dev/null
+++ b/arch/loongarch/include/uapi/asm/kvm.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __UAPI_ASM_LOONGARCH_KVM_H
+#define __UAPI_ASM_LOONGARCH_KVM_H
+
+#include
+
+/*
+ * KVM Loongarch specific structures and definitions.
+ *
+ * Some parts derived from the x86 version of this file.
+ */
+
+#define __KVM_HAVE_READONLY_MEM
+
+#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
+
+/*
+ * for KVM_GET_REGS and KVM_SET_REGS
+ */
+struct kvm_regs {
+	/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
+	__u64 gpr[32];
+	__u64 pc;
+};
+
+/*
+ * for KVM_GET_FPU and KVM_SET_FPU
+ */
+struct kvm_fpu {
+	__u32 fcsr;
+	__u32 none;
+	__u64 fcc;	/* 8x8 */
+	struct kvm_fpureg {
+		__u64 val64[4];	//support max 256 bits
+	} fpr[32];
+};
+
+/*
+ * For LOONGARCH, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various
+ * registers. The id field is broken down as follows:
+ *
+ *  bits[63..52] - As per linux/kvm.h
+ *  bits[51..32] - Must be zero.
+ *  bits[31..16] - Register set.
+ *
+ * Register set = 0: GP registers from kvm_regs (see definitions below).
+ *
+ * Register set = 1: CSR registers.
+ *
+ * Register set = 2: KVM specific registers (see definitions below).
+ *
+ * Register set = 3: FPU / SIMD registers (see definitions below).
+ *
+ * Other sets of registers may be added in the future. Each set would
+ * have its own identifier in bits[31..16].
+ */
+
+#define KVM_REG_LOONGARCH_GP		(KVM_REG_LOONGARCH | 0x00000ULL)
+#define KVM_REG_LOONGARCH_CSR		(KVM_REG_LOONGARCH | 0x10000ULL)
+#define KVM_REG_LOONGARCH_KVM		(KVM_REG_LOONGARCH | 0x20000ULL)
+#define KVM_REG_LOONGARCH_FPU		(KVM_REG_LOONGARCH | 0x30000ULL)
+#define KVM_REG_LOONGARCH_MASK		(KVM_REG_LOONGARCH | 0x30000ULL)
+#define KVM_CSR_IDX_MASK		(0x10000 - 1)
+
+/*
+ * KVM_REG_LOONGARCH_KVM - KVM specific control registers.
+ */
+
+#define KVM_REG_LOONGARCH_COUNTER	(KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 3)
+#define KVM_REG_LOONGARCH_VCPU_RESET	(KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 4)
+
+struct kvm_debug_exit_arch {
+};
+
+/* for KVM_SET_GUEST_DEBUG */
+struct kvm_guest_debug_arch {
+};
+
+/* definition of registers in kvm_run */
+struct kvm_sync_regs {
+};
+
+/* dummy definition */
+struct kvm_sregs {
+};
+
+struct kvm_iocsr_entry {
+	__u32 addr;
+	__u32 pad;
+	__u64 data;
+};
+
+struct kvm_csr_entry {
+	__u32 index;
+	__u32 reserved;
+	__u64 data;
+};
+
+/* for KVM_GET_CSRS and KVM_SET_CSRS */
+struct kvm_csrs {
+	__u32 ncsrs;	/* number of csrs in entries */
+	__u32 pad;
+
+	struct kvm_csr_entry entries[0];
+};
+
+struct kvm_loongarch_interrupt {
+	/* in */
+	__u32 cpu;
+	__u32 irq;
+};
+
+#define KVM_NR_IRQCHIPS		1
+#define KVM_IRQCHIP_NUM_PINS	64
+#define KVM_MAX_CORES		256
+
+#endif /* __UAPI_ASM_LOONGARCH_KVM_H */
diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
new file mode 100644
index 000000000..c16c7e23e
--- /dev/null
+++ b/arch/loongarch/kvm/main.c
@@ -0,0 +1,152 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+static struct kvm_context __percpu *vmcs;
+
+void kvm_init_vmcs(struct kvm *kvm)
+{
+	kvm->arch.vmcs = vmcs;
+}
+
+long kvm_arch_dev_ioctl(struct file *filp,
+			unsigned int ioctl, unsigned long arg)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_check_processor_compat(void *opaque)
+{
+	return 0;
+}
+
+int kvm_arch_hardware_setup(void *opaque)
+{
+	return 0;
+}
+
+int kvm_arch_hardware_enable(void)
+{
+	unsigned long gcfg = 0;
+
+	/* First init gtlbc, gcfg, gstat, gintc. All guests use the same config */
+	clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI);
+	write_csr_gcfg(0);
+	write_csr_gstat(0);
+	write_csr_gintc(0);
+
+	/*
+	 * Enable virtualization features granting the guest direct control of
+	 * certain features:
+	 * GCI=2:	Trap on init or unimplemented cache instructions.
+	 * TORU=0:	Trap on Root Unimplemented.
+	 * CACTRL=1:	Root controls the cache.
+	 * TOP=0:	Trap on Privilege.
+	 * TOE=0:	Trap on Exception.
+	 * TIT=0:	Trap on Timer.
+	 */
+	if (cpu_has_gcip_all)
+		gcfg |= CSR_GCFG_GCI_SECURE;
+	if (cpu_has_matc_root)
+		gcfg |= CSR_GCFG_MATC_ROOT;
+
+	gcfg |= CSR_GCFG_TIT;
+	write_csr_gcfg(gcfg);
+
+	kvm_flush_tlb_all();
+
+	/* Enable using TGID */
+	set_csr_gtlbc(CSR_GTLBC_USETGID);
+	kvm_debug("gtlbc:%llx gintc:%llx gstat:%llx gcfg:%llx",
+		  read_csr_gtlbc(), read_csr_gintc(),
+		  read_csr_gstat(), read_csr_gcfg());
+
+	return 0;
+}
+
+void kvm_arch_hardware_disable(void)
+{
+	clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI);
+	write_csr_gcfg(0);
+	write_csr_gstat(0);
+	write_csr_gintc(0);
+
+	/* Flush any remaining guest TLB entries */
+	kvm_flush_tlb_all();
+}
+
+int kvm_arch_init(void *opaque)
+{
+	struct kvm_context *context;
+	unsigned long vpid_mask;
+	int cpu, order;
+	void *addr;
+
+	vmcs = alloc_percpu(struct kvm_context);
+	if (!vmcs) {
+		pr_err("kvm: failed to allocate percpu kvm_context\n");
+		return -ENOMEM;
+	}
+
+	order = get_order(kvm_vector_size + kvm_enter_guest_size);
+	addr = (void *)__get_free_pages(GFP_KERNEL, order);
+	if (!addr) {
+		free_percpu(vmcs);
+		return -ENOMEM;
+	}
+
+	memcpy(addr, kvm_vector_entry, kvm_vector_size);
+	memcpy(addr + kvm_vector_size, kvm_enter_guest, kvm_enter_guest_size);
+	flush_icache_range((unsigned long)addr, (unsigned long)addr +
+				kvm_vector_size + kvm_enter_guest_size);
+
+	vpid_mask = read_csr_gstat();
+	vpid_mask = (vpid_mask & CSR_GSTAT_GIDBIT) >> CSR_GSTAT_GIDBIT_SHIFT;
+	if (vpid_mask)
+		vpid_mask = GENMASK(vpid_mask - 1, 0);
+
+	for_each_possible_cpu(cpu) {
+		context = per_cpu_ptr(vmcs, cpu);
+		context->vpid_mask = vpid_mask;
+		context->vpid_cache = context->vpid_mask + 1;
+		context->last_vcpu = NULL;
+		context->kvm_eentry = addr;
+		context->kvm_enter_guest = addr + kvm_vector_size;
+		context->page_order = order;
+	}
+
+	_kvm_init_fault();
+
+	return 0;
+}
+
+void kvm_arch_exit(void)
+{
+	struct kvm_context *context = per_cpu_ptr(vmcs, 0);
+
+	free_pages((unsigned long)context->kvm_eentry, context->page_order);
+	free_percpu(vmcs);
+}
+
+static int kvm_loongarch_init(void)
+{
+	if (!cpu_has_lvz)
+		return 0;
+
+	return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+}
+
+static void kvm_loongarch_exit(void)
+{
+	kvm_exit();
+}
+
+module_init(kvm_loongarch_init);
+module_exit(kvm_loongarch_exit);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 55155e262..6f3259849 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -264,6 +264,7 @@ struct kvm_xen_exit {
 #define KVM_EXIT_RISCV_SBI        35
 #define KVM_EXIT_RISCV_CSR        36
 #define KVM_EXIT_NOTIFY           37
+#define KVM_EXIT_LOONGARCH_IOCSR  38
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed.
 */
@@ -336,6 +337,13 @@ struct kvm_run {
 			__u32 len;
 			__u8 is_write;
 		} mmio;
+		/* KVM_EXIT_LOONGARCH_IOCSR */
+		struct {
+			__u64 phys_addr;
+			__u8 data[8];
+			__u32 len;
+			__u8 is_write;
+		} iocsr_io;
 		/* KVM_EXIT_HYPERCALL */
 		struct {
 			__u64 nr;
@@ -1175,6 +1183,9 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
 #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224
 #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225
+#define KVM_CAP_LOONGARCH_FPU 226
+#define KVM_CAP_LOONGARCH_LSX 227
+#define KVM_CAP_LOONGARCH_VZ 228
 
 #ifdef KVM_CAP_IRQ_ROUTING
@@ -1345,6 +1356,7 @@ struct kvm_dirty_tlb {
 #define KVM_REG_ARM64		0x6000000000000000ULL
 #define KVM_REG_MIPS		0x7000000000000000ULL
 #define KVM_REG_RISCV		0x8000000000000000ULL
+#define KVM_REG_LOONGARCH	0x9000000000000000ULL
 
 #define KVM_REG_SIZE_SHIFT	52
 #define KVM_REG_SIZE_MASK	0x00f0000000000000ULL
@@ -1662,6 +1674,9 @@ struct kvm_enc_region {
 #define KVM_S390_NORMAL_RESET	_IO(KVMIO, 0xc3)
 #define KVM_S390_CLEAR_RESET	_IO(KVMIO, 0xc4)
+#define KVM_GET_CSRS		_IOWR(KVMIO, 0xc5, struct kvm_csrs)
+#define KVM_SET_CSRS		_IOW(KVMIO, 0xc6, struct kvm_csrs)
+
 struct kvm_s390_pv_sec_parm {
 	__u64 origin;
 	__u64 length;

From patchwork Tue Feb 14 02:56:26 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56611
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 02/24] LoongArch: KVM: Implement VM related functions
Date: Tue, 14 Feb 2023 10:56:26 +0800
Message-Id: <20230214025648.1898508-3-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement LoongArch VM operations:
1. Implement the VM init and destroy interfaces, allocating a memory page to hold the VM pgd at VM init.
2. Implement the VM check-extension interface, reporting information such as the vcpu count, memory-slot count, and fpu support.
3. Implement the VM status description.
Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/vm.c | 85 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 85 insertions(+) create mode 100644 arch/loongarch/kvm/vm.c diff --git a/arch/loongarch/kvm/vm.c b/arch/loongarch/kvm/vm.c new file mode 100644 index 000000000..6efa6689b --- /dev/null +++ b/arch/loongarch/kvm/vm.c @@ -0,0 +1,85 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include + +#define KVM_LOONGARCH_VERSION 1 + +const struct _kvm_stats_desc kvm_vm_stats_desc[] = { + KVM_GENERIC_VM_STATS(), +}; + +const struct kvm_stats_header kvm_vm_stats_header = { + .name_size = KVM_STATS_NAME_SIZE, + .num_desc = ARRAY_SIZE(kvm_vm_stats_desc), + .id_offset = sizeof(struct kvm_stats_header), + .desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE, + .data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE + + sizeof(kvm_vm_stats_desc), +}; + +int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) +{ + /* Allocate page table to map GPA -> RPA */ + kvm->arch.gpa_mm.pgd = kvm_pgd_alloc(); + if (!kvm->arch.gpa_mm.pgd) + return -ENOMEM; + + kvm_init_vmcs(kvm); + return 0; +} + +void kvm_arch_destroy_vm(struct kvm *kvm) +{ + kvm_destroy_vcpus(kvm); + _kvm_destroy_mm(kvm); +} + +int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) +{ + int r; + + switch (ext) { + case KVM_CAP_ONE_REG: + case KVM_CAP_ENABLE_CAP: + case KVM_CAP_READONLY_MEM: + case KVM_CAP_SYNC_MMU: + case KVM_CAP_IMMEDIATE_EXIT: + case KVM_CAP_IOEVENTFD: + r = 1; + break; + case KVM_CAP_NR_VCPUS: + r = num_online_cpus(); + break; + case KVM_CAP_MAX_VCPUS: + r = KVM_MAX_VCPUS; + break; + case KVM_CAP_MAX_VCPU_ID: + r = KVM_MAX_VCPU_IDS; + break; + case KVM_CAP_NR_MEMSLOTS: + r = KVM_USER_MEM_SLOTS; + break; + case KVM_CAP_LOONGARCH_FPU: + /* We don't handle systems with inconsistent cpu_has_fpu */ + r = !!cpu_has_fpu; + break; + case KVM_CAP_LOONGARCH_VZ: + /* get user defined 
kvm version */ + r = KVM_LOONGARCH_VERSION; + break; + default: + r = 0; + break; + } + + return r; +} + +long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) +{ + return -ENOIOCTLCMD; +} From patchwork Tue Feb 14 02:56:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 56612 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp2723216wrn; Mon, 13 Feb 2023 18:58:00 -0800 (PST) X-Google-Smtp-Source: AK7set9S1m8Fxh/VMlYTi/PYkqRSeHRbb3FnCEnl48wcXScXSUYDg/ozbe1ZiZNjR7m1pVQFftmM X-Received: by 2002:a05:6a21:9718:b0:bf:1b09:5cda with SMTP id ub24-20020a056a21971800b000bf1b095cdamr631072pzb.12.1676343479722; Mon, 13 Feb 2023 18:57:59 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1676343479; cv=none; d=google.com; s=arc-20160816; b=uhCRmdL9ns1H6PHw6+Bik/9gzjUEHfVcSrqiv7/WVSovj13wxPsviQf7Sz3t3Ul2pU cbCyoZfkIo2svQ3soLxqgNN8LuA26we4dZoxhEs2NxmRYY3Q2hREEu9n3e6EmwRW1N6t FsIT6adKiw/ROrMPSgO5RO7eTjlWKyXEosHsqt3UwfW28D6Rlfx3FzAaqLA1H0YL3Fn5 TWKd7FcTcoRNtb5pxRnBC0xvGP9DkekTs4OT1RtswnCyxC7uj/N5KgjDZLg1m5mD4cp0 bfgHNXJT97rN8dTCrwqL3ND1wYJGURXz+cjsLTDKOwm0CSyre/unpK8W+38Uvz/yITA4 svJQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=HRZOsmQKm6+98RsA8O+frZRIOTBUPjh3DH9TAbEeros=; b=obUv0UpwyxCuLO7DiyoUqenDuhl593DHBQMsxAPFegjefbNvnvPp85MZls0wRAoATL 5CgZNZbpY+w5s3sfbHtljOwnicfkaP3pj98gOGwCBkfik2qMkDgo/NovHJfZTVbkz8Ns qq0n6U50PB0VWWwAAIU1p+ip6/CN4TqU+ZCycD8fvAzEyV0dcO49tUj7k4TpF9R8Zmrm iOCxrj2KDYweXLIv60xhsXs5sj8sSxgt9EMahbSgZvhLBZEZSkJZovG4i1ZC8PAhXve0 UZ9YEdlSIKpc69HpHJo6xCeOhTN4l/cppBE5N3v0enQw+hOBmQfchdAsFQO31QLBHDWl 2J7g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 03/24] LoongArch: KVM: Implement vcpu create,run,destroy operations.
Date: Tue, 14 Feb 2023 10:56:27 +0800
Message-Id: <20230214025648.1898508-4-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement loongarch vcpu related operations:

1.
Implement the vcpu create interface, saving some information into the vcpu
arch structure, such as the vcpu exception entry and the enter-guest
pointer. Initialize the vcpu timer and set the address translation mode at
vcpu creation.
2. Implement the vcpu run interface: handle mmio and iocsr read faults,
deliver interrupts, and lose the fpu before the vcpu enters the guest.
3. Implement the vcpu handle-exit interface: get the exit code from the
ESTAT register and dispatch it through the kvm exception vector.

Signed-off-by: Tianrui Zhao
---
 arch/loongarch/include/asm/cpu-info.h  |  13 ++
 arch/loongarch/include/asm/kvm_vcpu.h  | 112 +++++++++++
 arch/loongarch/include/asm/loongarch.h | 195 +++++++++++++++++-
 arch/loongarch/kvm/trace.h             | 137 +++++++++++++
 arch/loongarch/kvm/vcpu.c              | 261 +++++++++++++++++++++++++
 5 files changed, 712 insertions(+), 6 deletions(-)
 create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
 create mode 100644 arch/loongarch/kvm/trace.h
 create mode 100644 arch/loongarch/kvm/vcpu.c

diff --git a/arch/loongarch/include/asm/cpu-info.h b/arch/loongarch/include/asm/cpu-info.h
index cd73a6f57..1b426a2ca 100644
--- a/arch/loongarch/include/asm/cpu-info.h
+++ b/arch/loongarch/include/asm/cpu-info.h
@@ -32,6 +32,15 @@ struct cache_desc {
 #define CACHE_LEVEL_MAX 3
 #define CACHE_LEAVES_MAX 6
 
+struct guest_info {
+	unsigned long ases;
+	unsigned long ases_dyn;
+	unsigned long options;
+	unsigned long options_dyn;
+	unsigned char conf;
+	unsigned int kscratch_mask;
+};
+
 struct cpuinfo_loongarch {
 	u64 asid_cache;
 	unsigned long asid_mask;
@@ -60,6 +69,10 @@ struct cpuinfo_loongarch {
 	unsigned int watch_dreg_count;	/* Number data breakpoints */
 	unsigned int watch_ireg_count;	/* Number instruction breakpoints */
 	unsigned int watch_reg_use_cnt;	/* min(NUM_WATCH_REGS, watch_dreg_count + watch_ireg_count), Usable by ptrace */
+
+	/* VZ & Guest features */
+	struct guest_info guest;
+	unsigned long guest_cfg;
 } __aligned(SMP_CACHE_BYTES);
 
 extern struct cpuinfo_loongarch cpu_data[];

diff --git
a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
new file mode 100644
index 000000000..66ec9bc52
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_vcpu.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
+#define __ASM_LOONGARCH_KVM_VCPU_H__
+
+#include
+#include
+#include
+
+#define LARCH_INT_SIP0 0
+#define LARCH_INT_SIP1 1
+#define LARCH_INT_IP0 2
+#define LARCH_INT_IP1 3
+#define LARCH_INT_IP2 4
+#define LARCH_INT_IP3 5
+#define LARCH_INT_IP4 6
+#define LARCH_INT_IP5 7
+#define LARCH_INT_IP6 8
+#define LARCH_INT_IP7 9
+#define LARCH_INT_PMU 10
+#define LARCH_INT_TIMER 11
+#define LARCH_INT_IPI 12
+#define LOONGARCH_EXC_MAX (LARCH_INT_IPI + 1)
+#define LOONGARCH_EXC_IPNUM (LOONGARCH_EXC_MAX)
+
+/* Controlled by 0x5 guest exst */
+#define CPU_SIP0 (_ULCAST_(1))
+#define CPU_SIP1 (_ULCAST_(1) << 1)
+#define CPU_PMU (_ULCAST_(1) << 10)
+#define CPU_TIMER (_ULCAST_(1) << 11)
+#define CPU_IPI (_ULCAST_(1) << 12)
+
+/* Controlled by 0x52 guest exception VIP
+ * aligned to exst bit 5~12
+ */
+#define CPU_IP0 (_ULCAST_(1))
+#define CPU_IP1 (_ULCAST_(1) << 1)
+#define CPU_IP2 (_ULCAST_(1) << 2)
+#define CPU_IP3 (_ULCAST_(1) << 3)
+#define CPU_IP4 (_ULCAST_(1) << 4)
+#define CPU_IP5 (_ULCAST_(1) << 5)
+#define CPU_IP6 (_ULCAST_(1) << 6)
+#define CPU_IP7 (_ULCAST_(1) << 7)
+
+#define MNSEC_PER_SEC (NSEC_PER_SEC >> 20)
+
+/* KVM_IRQ_LINE irq field index values */
+#define KVM_LOONGSON_IRQ_TYPE_SHIFT 24
+#define KVM_LOONGSON_IRQ_TYPE_MASK 0xff
+#define KVM_LOONGSON_IRQ_VCPU_SHIFT 16
+#define KVM_LOONGSON_IRQ_VCPU_MASK 0xff
+#define KVM_LOONGSON_IRQ_NUM_SHIFT 0
+#define KVM_LOONGSON_IRQ_NUM_MASK 0xffff
+
+/* irq_type field */
+#define KVM_LOONGSON_IRQ_TYPE_CPU_IP 0
+#define KVM_LOONGSON_IRQ_TYPE_CPU_IO 1
+#define KVM_LOONGSON_IRQ_TYPE_HT 2
+#define KVM_LOONGSON_IRQ_TYPE_MSI 3
+#define KVM_LOONGSON_IRQ_TYPE_IOAPIC 4
+#define KVM_LOONGSON_IRQ_TYPE_ROUTE 5
+
+/* out-of-kernel GIC cpu interrupt injection irq_number field */
+#define KVM_LOONGSON_IRQ_CPU_IRQ 0
+#define KVM_LOONGSON_IRQ_CPU_FIQ 1
+#define KVM_LOONGSON_CPU_IP_NUM 8
+
+typedef union loongarch_instruction larch_inst;
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+int _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
+int _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
+int _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int _kvm_emu_idle(struct kvm_vcpu *vcpu);
+int _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
+int _kvm_pending_timer(struct kvm_vcpu *vcpu);
+int _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
+void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
+
+void kvm_own_fpu(struct kvm_vcpu *vcpu);
+void kvm_lose_fpu(struct kvm_vcpu *vcpu);
+void kvm_save_fpu(struct loongarch_fpu *fpu);
+void kvm_restore_fpu(struct loongarch_fpu *fpu);
+void kvm_restore_fcsr(struct loongarch_fpu *fpu);
+
+void kvm_acquire_timer(struct kvm_vcpu *vcpu);
+void kvm_reset_timer(struct kvm_vcpu *vcpu);
+enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu);
+void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
+void kvm_restore_timer(struct kvm_vcpu *vcpu);
+void kvm_save_timer(struct kvm_vcpu *vcpu);
+
+/*
+ * Loongarch KVM guest interrupt handling.
+ */
+static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	set_bit(irq, &vcpu->arch.irq_pending);
+	clear_bit(irq, &vcpu->arch.irq_clear);
+}
+
+static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	clear_bit(irq, &vcpu->arch.irq_pending);
+	set_bit(irq, &vcpu->arch.irq_clear);
+}
+
+#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */

diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
index 7f8d57a61..7b74605dd 100644
--- a/arch/loongarch/include/asm/loongarch.h
+++ b/arch/loongarch/include/asm/loongarch.h
@@ -236,6 +236,44 @@ static __always_inline u64 csr_xchg64(u64 val, u64 mask, u32 reg)
 	return __csrxchg_d(val, mask, reg);
 }
 
+/* GCSR */
+static inline u64 gcsr_read(u32 reg)
+{
+	u64 val = 0;
+
+	asm volatile (
+		"parse_r __reg, %[val]\n\t"
+		".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"
+		: [val] "+r" (val)
+		: [reg] "i" (reg)
+		: "memory");
+
+	return val;
+}
+
+static inline void gcsr_write(u64 val, u32 reg)
+{
+	asm volatile (
+		"parse_r __reg, %[val]\n\t"
+		".word 0x5 << 24 | %[reg] << 10 | 1 << 5 | __reg\n\t"
+		: [val] "+r" (val)
+		: [reg] "i" (reg)
+		: "memory");
+}
+
+static inline u64 gcsr_xchg(u64 val, u64 mask, u32 reg)
+{
+	asm volatile (
+		"parse_r __rd, %[val]\n\t"
+		"parse_r __rj, %[mask]\n\t"
+		".word 0x5 << 24 | %[reg] << 10 | __rj << 5 | __rd\n\t"
+		: [val] "+r" (val)
+		: [mask] "r" (mask), [reg] "i" (reg)
+		: "memory");
+
+	return val;
+}
+
 /* IOCSR */
 static __always_inline u32 iocsr_read32(u32 reg)
 {
@@ -309,6 +347,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define LOONGARCH_CSR_ECFG 0x4 /* Exception config */
 #define CSR_ECFG_VS_SHIFT 16
 #define CSR_ECFG_VS_WIDTH 3
+#define CSR_ECFG_VS_SHIFT_END (CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
 #define CSR_ECFG_VS (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
 #define CSR_ECFG_IM_SHIFT 0
 #define CSR_ECFG_IM_WIDTH 13
@@ -397,13 +436,14 @@ static __always_inline void iocsr_write64(u64 val, u32
reg)
 #define CSR_TLBLO1_V (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
 
 #define LOONGARCH_CSR_GTLBC 0x15 /* Guest TLB control */
-#define CSR_GTLBC_RID_SHIFT 16
-#define CSR_GTLBC_RID_WIDTH 8
-#define CSR_GTLBC_RID (_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
+#define CSR_GTLBC_TGID_SHIFT 16
+#define CSR_GTLBC_TGID_WIDTH 8
+#define CSR_GTLBC_TGID_SHIFT_END (CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
+#define CSR_GTLBC_TGID (_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
 #define CSR_GTLBC_TOTI_SHIFT 13
 #define CSR_GTLBC_TOTI (_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
-#define CSR_GTLBC_USERID_SHIFT 12
-#define CSR_GTLBC_USERID (_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
+#define CSR_GTLBC_USETGID_SHIFT 12
+#define CSR_GTLBC_USETGID (_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
 #define CSR_GTLBC_GMTLBSZ_SHIFT 0
 #define CSR_GTLBC_GMTLBSZ_WIDTH 6
 #define CSR_GTLBC_GMTLBSZ (_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
@@ -555,6 +595,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define LOONGARCH_CSR_GSTAT 0x50 /* Guest status */
 #define CSR_GSTAT_GID_SHIFT 16
 #define CSR_GSTAT_GID_WIDTH 8
+#define CSR_GSTAT_GID_SHIFT_END (CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
 #define CSR_GSTAT_GID (_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
 #define CSR_GSTAT_GIDBIT_SHIFT 4
 #define CSR_GSTAT_GIDBIT_WIDTH 6
@@ -605,6 +646,12 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define CSR_GCFG_MATC_GUEST (_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
 #define CSR_GCFG_MATC_ROOT (_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
 #define CSR_GCFG_MATC_NEST (_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
+#define CSR_GCFG_MATP_SHITF 0
+#define CSR_GCFG_MATP_WIDTH 4
+#define CSR_GCFG_MATP_MASK (_ULCAST_(0x3) << CSR_GCFG_MATP_SHITF)
+#define CSR_GCFG_MATP_GUEST (_ULCAST_(0x0) << CSR_GCFG_MATP_SHITF)
+#define CSR_GCFG_MATP_ROOT (_ULCAST_(0x1) << CSR_GCFG_MATP_SHITF)
+#define CSR_GCFG_MATP_NEST (_ULCAST_(0x2) << CSR_GCFG_MATP_SHITF)
 
 #define LOONGARCH_CSR_GINTC 0x52 /* Guest interrupt control */
 #define
CSR_GINTC_HC_SHIFT 16
@@ -1273,6 +1320,131 @@ static inline void write_csr_tlbrefill_pagesize(unsigned int size)
 #define write_csr_perfctrl3(val) csr_write64(val, LOONGARCH_CSR_PERFCTRL3)
 #define write_csr_perfcntr3(val) csr_write64(val, LOONGARCH_CSR_PERFCNTR3)
 
+/* Guest related CSRS */
+#define read_csr_gtlbc() csr_read64(LOONGARCH_CSR_GTLBC)
+#define write_csr_gtlbc(val) csr_write64(val, LOONGARCH_CSR_GTLBC)
+#define read_csr_trgp() csr_read64(LOONGARCH_CSR_TRGP)
+#define read_csr_gcfg() csr_read64(LOONGARCH_CSR_GCFG)
+#define write_csr_gcfg(val) csr_write64(val, LOONGARCH_CSR_GCFG)
+#define read_csr_gstat() csr_read64(LOONGARCH_CSR_GSTAT)
+#define write_csr_gstat(val) csr_write64(val, LOONGARCH_CSR_GSTAT)
+#define read_csr_gintc() csr_read64(LOONGARCH_CSR_GINTC)
+#define write_csr_gintc(val) csr_write64(val, LOONGARCH_CSR_GINTC)
+#define read_csr_gcntc() csr_read64(LOONGARCH_CSR_GCNTC)
+#define write_csr_gcntc(val) csr_write64(val, LOONGARCH_CSR_GCNTC)
+
+/* Guest CSRS read and write */
+#define read_gcsr_crmd() gcsr_read(LOONGARCH_CSR_CRMD)
+#define write_gcsr_crmd(val) gcsr_write(val, LOONGARCH_CSR_CRMD)
+#define read_gcsr_prmd() gcsr_read(LOONGARCH_CSR_PRMD)
+#define write_gcsr_prmd(val) gcsr_write(val, LOONGARCH_CSR_PRMD)
+#define read_gcsr_euen() gcsr_read(LOONGARCH_CSR_EUEN)
+#define write_gcsr_euen(val) gcsr_write(val, LOONGARCH_CSR_EUEN)
+#define read_gcsr_misc() gcsr_read(LOONGARCH_CSR_MISC)
+#define write_gcsr_misc(val) gcsr_write(val, LOONGARCH_CSR_MISC)
+#define read_gcsr_ecfg() gcsr_read(LOONGARCH_CSR_ECFG)
+#define write_gcsr_ecfg(val) gcsr_write(val, LOONGARCH_CSR_ECFG)
+#define read_gcsr_estat() gcsr_read(LOONGARCH_CSR_ESTAT)
+#define write_gcsr_estat(val) gcsr_write(val, LOONGARCH_CSR_ESTAT)
+#define read_gcsr_era() gcsr_read(LOONGARCH_CSR_ERA)
+#define write_gcsr_era(val) gcsr_write(val, LOONGARCH_CSR_ERA)
+#define read_gcsr_badv() gcsr_read(LOONGARCH_CSR_BADV)
+#define write_gcsr_badv(val) gcsr_write(val, LOONGARCH_CSR_BADV)
+#define
read_gcsr_badi() gcsr_read(LOONGARCH_CSR_BADI)
+#define write_gcsr_badi(val) gcsr_write(val, LOONGARCH_CSR_BADI)
+#define read_gcsr_eentry() gcsr_read(LOONGARCH_CSR_EENTRY)
+#define write_gcsr_eentry(val) gcsr_write(val, LOONGARCH_CSR_EENTRY)
+
+#define read_gcsr_tlbidx() gcsr_read(LOONGARCH_CSR_TLBIDX)
+#define write_gcsr_tlbidx(val) gcsr_write(val, LOONGARCH_CSR_TLBIDX)
+#define read_gcsr_tlbhi() gcsr_read(LOONGARCH_CSR_TLBEHI)
+#define write_gcsr_tlbhi(val) gcsr_write(val, LOONGARCH_CSR_TLBEHI)
+#define read_gcsr_tlblo0() gcsr_read(LOONGARCH_CSR_TLBELO0)
+#define write_gcsr_tlblo0(val) gcsr_write(val, LOONGARCH_CSR_TLBELO0)
+#define read_gcsr_tlblo1() gcsr_read(LOONGARCH_CSR_TLBELO1)
+#define write_gcsr_tlblo1(val) gcsr_write(val, LOONGARCH_CSR_TLBELO1)
+
+#define read_gcsr_asid() gcsr_read(LOONGARCH_CSR_ASID)
+#define write_gcsr_asid(val) gcsr_write(val, LOONGARCH_CSR_ASID)
+#define read_gcsr_pgdl() gcsr_read(LOONGARCH_CSR_PGDL)
+#define write_gcsr_pgdl(val) gcsr_write(val, LOONGARCH_CSR_PGDL)
+#define read_gcsr_pgdh() gcsr_read(LOONGARCH_CSR_PGDH)
+#define write_gcsr_pgdh(val) gcsr_write(val, LOONGARCH_CSR_PGDH)
+#define write_gcsr_pgd(val) gcsr_write(val, LOONGARCH_CSR_PGD)
+#define read_gcsr_pgd() gcsr_read(LOONGARCH_CSR_PGD)
+#define read_gcsr_pwctl0() gcsr_read(LOONGARCH_CSR_PWCTL0)
+#define write_gcsr_pwctl0(val) gcsr_write(val, LOONGARCH_CSR_PWCTL0)
+#define read_gcsr_pwctl1() gcsr_read(LOONGARCH_CSR_PWCTL1)
+#define write_gcsr_pwctl1(val) gcsr_write(val, LOONGARCH_CSR_PWCTL1)
+#define read_gcsr_stlbpgsize() gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
+#define write_gcsr_stlbpgsize(val) gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
+#define read_gcsr_rvacfg() gcsr_read(LOONGARCH_CSR_RVACFG)
+#define write_gcsr_rvacfg(val) gcsr_write(val, LOONGARCH_CSR_RVACFG)
+
+#define read_gcsr_cpuid() gcsr_read(LOONGARCH_CSR_CPUID)
+#define write_gcsr_cpuid(val) gcsr_write(val, LOONGARCH_CSR_CPUID)
+#define read_gcsr_prcfg1() gcsr_read(LOONGARCH_CSR_PRCFG1)
+#define
write_gcsr_prcfg1(val) gcsr_write(val, LOONGARCH_CSR_PRCFG1)
+#define read_gcsr_prcfg2() gcsr_read(LOONGARCH_CSR_PRCFG2)
+#define write_gcsr_prcfg2(val) gcsr_write(val, LOONGARCH_CSR_PRCFG2)
+#define read_gcsr_prcfg3() gcsr_read(LOONGARCH_CSR_PRCFG3)
+#define write_gcsr_prcfg3(val) gcsr_write(val, LOONGARCH_CSR_PRCFG3)
+
+#define read_gcsr_kscratch0() gcsr_read(LOONGARCH_CSR_KS0)
+#define write_gcsr_kscratch0(val) gcsr_write(val, LOONGARCH_CSR_KS0)
+#define read_gcsr_kscratch1() gcsr_read(LOONGARCH_CSR_KS1)
+#define write_gcsr_kscratch1(val) gcsr_write(val, LOONGARCH_CSR_KS1)
+#define read_gcsr_kscratch2() gcsr_read(LOONGARCH_CSR_KS2)
+#define write_gcsr_kscratch2(val) gcsr_write(val, LOONGARCH_CSR_KS2)
+#define read_gcsr_kscratch3() gcsr_read(LOONGARCH_CSR_KS3)
+#define write_gcsr_kscratch3(val) gcsr_write(val, LOONGARCH_CSR_KS3)
+#define read_gcsr_kscratch4() gcsr_read(LOONGARCH_CSR_KS4)
+#define write_gcsr_kscratch4(val) gcsr_write(val, LOONGARCH_CSR_KS4)
+#define read_gcsr_kscratch5() gcsr_read(LOONGARCH_CSR_KS5)
+#define write_gcsr_kscratch5(val) gcsr_write(val, LOONGARCH_CSR_KS5)
+#define read_gcsr_kscratch6() gcsr_read(LOONGARCH_CSR_KS6)
+#define write_gcsr_kscratch6(val) gcsr_write(val, LOONGARCH_CSR_KS6)
+#define read_gcsr_kscratch7() gcsr_read(LOONGARCH_CSR_KS7)
+#define write_gcsr_kscratch7(val) gcsr_write(val, LOONGARCH_CSR_KS7)
+
+#define read_gcsr_timerid() gcsr_read(LOONGARCH_CSR_TMID)
+#define write_gcsr_timerid(val) gcsr_write(val, LOONGARCH_CSR_TMID)
+#define read_gcsr_timercfg() gcsr_read(LOONGARCH_CSR_TCFG)
+#define write_gcsr_timercfg(val) gcsr_write(val, LOONGARCH_CSR_TCFG)
+#define read_gcsr_timertick() gcsr_read(LOONGARCH_CSR_TVAL)
+#define write_gcsr_timertick(val) gcsr_write(val, LOONGARCH_CSR_TVAL)
+#define read_gcsr_timeroffset() gcsr_read(LOONGARCH_CSR_CNTC)
+#define write_gcsr_timeroffset(val) gcsr_write(val, LOONGARCH_CSR_CNTC)
+
+#define read_gcsr_llbctl() gcsr_read(LOONGARCH_CSR_LLBCTL)
+#define write_gcsr_llbctl(val)
gcsr_write(val, LOONGARCH_CSR_LLBCTL)
+
+#define read_gcsr_tlbrentry() gcsr_read(LOONGARCH_CSR_TLBRENTRY)
+#define write_gcsr_tlbrentry(val) gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
+#define read_gcsr_tlbrbadv() gcsr_read(LOONGARCH_CSR_TLBRBADV)
+#define write_gcsr_tlbrbadv(val) gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
+#define read_gcsr_tlbrera() gcsr_read(LOONGARCH_CSR_TLBRERA)
+#define write_gcsr_tlbrera(val) gcsr_write(val, LOONGARCH_CSR_TLBRERA)
+#define read_gcsr_tlbrsave() gcsr_read(LOONGARCH_CSR_TLBRSAVE)
+#define write_gcsr_tlbrsave(val) gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
+#define read_gcsr_tlbrelo0() gcsr_read(LOONGARCH_CSR_TLBRELO0)
+#define write_gcsr_tlbrelo0(val) gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
+#define read_gcsr_tlbrelo1() gcsr_read(LOONGARCH_CSR_TLBRELO1)
+#define write_gcsr_tlbrelo1(val) gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
+#define read_gcsr_tlbrehi() gcsr_read(LOONGARCH_CSR_TLBREHI)
+#define write_gcsr_tlbrehi(val) gcsr_write(val, LOONGARCH_CSR_TLBREHI)
+#define read_gcsr_tlbrprmd() gcsr_read(LOONGARCH_CSR_TLBRPRMD)
+#define write_gcsr_tlbrprmd(val) gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
+
+#define read_gcsr_directwin0() gcsr_read(LOONGARCH_CSR_DMWIN0)
+#define write_gcsr_directwin0(val) gcsr_write(val, LOONGARCH_CSR_DMWIN0)
+#define read_gcsr_directwin1() gcsr_read(LOONGARCH_CSR_DMWIN1)
+#define write_gcsr_directwin1(val) gcsr_write(val, LOONGARCH_CSR_DMWIN1)
+#define read_gcsr_directwin2() gcsr_read(LOONGARCH_CSR_DMWIN2)
+#define write_gcsr_directwin2(val) gcsr_write(val, LOONGARCH_CSR_DMWIN2)
+#define read_gcsr_directwin3() gcsr_read(LOONGARCH_CSR_DMWIN3)
+#define write_gcsr_directwin3(val) gcsr_write(val, LOONGARCH_CSR_DMWIN3)
+
 /*
  * Manipulate bits in a register.
 */
@@ -1315,15 +1487,26 @@ change_##name(unsigned long change, unsigned long val) \
 }
 
 #define __BUILD_CSR_OP(name) __BUILD_CSR_COMMON(csr_##name)
+#define __BUILD_GCSR_OP(name) __BUILD_CSR_COMMON(gcsr_##name)
 
 __BUILD_CSR_OP(euen)
 __BUILD_CSR_OP(ecfg)
 __BUILD_CSR_OP(tlbidx)
+__BUILD_CSR_OP(gcfg)
+__BUILD_CSR_OP(gstat)
+__BUILD_CSR_OP(gtlbc)
+__BUILD_CSR_OP(gintc)
+__BUILD_GCSR_OP(llbctl)
+__BUILD_GCSR_OP(tlbidx)
 
 #define set_csr_estat(val) \
 	csr_xchg32(val, val, LOONGARCH_CSR_ESTAT)
 #define clear_csr_estat(val) \
 	csr_xchg32(~(val), val, LOONGARCH_CSR_ESTAT)
+#define set_gcsr_estat(val) \
+	gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
+#define clear_gcsr_estat(val) \
+	gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
 
 #endif /* __ASSEMBLY__ */
 
@@ -1408,7 +1591,7 @@ __BUILD_CSR_OP(tlbidx)
 #define EXCCODE_WATCH 19 /* Watch address reference */
 #define EXCCODE_BTDIS 20 /* Binary Trans. Disabled */
 #define EXCCODE_BTE 21 /* Binary Trans. Exception */
-#define EXCCODE_PSI 22 /* Guest Privileged Error */
+#define EXCCODE_GSPR 22 /* Guest Privileged Error */
 #define EXCCODE_HYP 23 /* Hypercall */
 #define EXCCODE_GCM 24 /* Guest CSR modified */
 #define EXCSUBCODE_GCSC 0 /* Software caused */

diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
new file mode 100644
index 000000000..1813410e2
--- /dev/null
+++ b/arch/loongarch/kvm/trace.h
@@ -0,0 +1,137 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_KVM_H
+
+#include
+#include
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM kvm
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace
+
+/*
+ * Tracepoints for VM enters
+ */
+DECLARE_EVENT_CLASS(kvm_transition,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu),
+	TP_STRUCT__entry(
+		__field(unsigned long, pc)
+	),
+
+	TP_fast_assign(
+		__entry->pc = vcpu->arch.pc;
+	),
+
+	TP_printk("PC: 0x%08lx",
+		  __entry->pc)
+);
+
+DEFINE_EVENT(kvm_transition, kvm_enter,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu));
+
+DEFINE_EVENT(kvm_transition, kvm_reenter,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu));
+
+DEFINE_EVENT(kvm_transition, kvm_out,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu));
+
+/* Further exit reasons */
+#define KVM_TRACE_EXIT_IDLE 64
+#define KVM_TRACE_EXIT_CACHE 65
+#define KVM_TRACE_EXIT_SIGNAL 66
+
+/* Tracepoints for VM exits */
+#define kvm_trace_symbol_exit_types \
+	({ KVM_TRACE_EXIT_IDLE, "IDLE" }, \
+	{ KVM_TRACE_EXIT_CACHE, "CACHE" }, \
+	{ KVM_TRACE_EXIT_SIGNAL, "Signal" })
+
+TRACE_EVENT(kvm_exit,
+	TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	TP_ARGS(vcpu, reason),
+	TP_STRUCT__entry(
+		__field(unsigned long, pc)
+		__field(unsigned int, reason)
+	),
+
+	TP_fast_assign(
+		__entry->pc = vcpu->arch.pc;
+		__entry->reason = reason;
+	),
+
+	TP_printk("[%s]PC: 0x%08lx",
+		  __print_symbolic(__entry->reason,
+				   kvm_trace_symbol_exit_types),
+		  __entry->pc)
+);
+
+#define KVM_TRACE_AUX_RESTORE 0
+#define KVM_TRACE_AUX_SAVE 1
+#define KVM_TRACE_AUX_ENABLE 2
+#define KVM_TRACE_AUX_DISABLE 3
+#define KVM_TRACE_AUX_DISCARD 4
+
+#define KVM_TRACE_AUX_FPU 1
+
+#define kvm_trace_symbol_aux_op \
+	({ KVM_TRACE_AUX_RESTORE, "restore" }, \
+	{ KVM_TRACE_AUX_SAVE, "save" }, \
+	{ KVM_TRACE_AUX_ENABLE, "enable" }, \
+	{ KVM_TRACE_AUX_DISABLE, "disable" }, \
+	{ KVM_TRACE_AUX_DISCARD, "discard" })
+
+#define kvm_trace_symbol_aux_state \
+	{ KVM_TRACE_AUX_FPU, "FPU" }, \
+
+TRACE_EVENT(kvm_aux,
+	TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
+		 unsigned int state),
+	TP_ARGS(vcpu, op, state),
+	TP_STRUCT__entry(
+		__field(unsigned
long, pc)
+		__field(u8, op)
+		__field(u8, state)
+	),
+
+	TP_fast_assign(
+		__entry->pc = vcpu->arch.pc;
+		__entry->op = op;
+		__entry->state = state;
+	),
+
+	TP_printk("%s %s PC: 0x%08lx",
+		  __print_symbolic(__entry->op,
+				   kvm_trace_symbol_aux_op),
+		  __print_symbolic(__entry->state,
+				   kvm_trace_symbol_aux_state),
+		  __entry->pc)
+);
+
+TRACE_EVENT(kvm_vpid_change,
+	TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
+	TP_ARGS(vcpu, vpid),
+	TP_STRUCT__entry(
+		__field(unsigned long, vpid)
+	),
+
+	TP_fast_assign(
+		__entry->vpid = vpid;
+	),
+
+	TP_printk("vpid: 0x%08lx",
+		  __entry->vpid)
+);
+
+#endif /* _TRACE_KVM_H */
+
+/* This part must be outside protection */
+#include

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
new file mode 100644
index 000000000..1732da8a8
--- /dev/null
+++ b/arch/loongarch/kvm/vcpu.c
@@ -0,0 +1,261 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
+{
+	return 0;
+}
+
+/* Returns 1 if the guest TLB may be clobbered */
+static int _kvm_check_requests(struct kvm_vcpu *vcpu, int cpu)
+{
+	int ret = 0;
+	int i;
+
+	if (!kvm_request_pending(vcpu))
+		return 0;
+
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) {
+		/* Drop all vpids for this VCPU */
+		for_each_possible_cpu(i)
+			vcpu->arch.vpid[i] = 0;
+		/* This will clobber guest TLB contents too */
+		ret = 1;
+	}
+
+	return ret;
+}
+
+/*
+ * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
+ */
+static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	unsigned long exst = vcpu->arch.host_estat;
+	u32 intr = exst & 0x1fff; /* ignore NMI */
+	u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
+	int ret = RESUME_GUEST, cpu;
+
+	vcpu->mode = OUTSIDE_GUEST_MODE;
+
+	/* Set a default exit reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+	run->ready_for_interrupt_injection = 1;
+
+	/*
+	 * Set the appropriate status bits based on host CPU features,
+	 * before we hit the scheduler
+	 */
+
+	local_irq_enable();
+
+	kvm_debug("%s: exst: %lx, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
+		  __func__, exst, opc, run, vcpu);
+	trace_kvm_exit(vcpu, exccode);
+	if (exccode) {
+		ret = _kvm_handle_fault(vcpu, exccode);
+	} else {
+		WARN(!intr, "suspicious vm exiting");
+		++vcpu->stat.int_exits;
+
+		if (need_resched())
+			cond_resched();
+
+		ret = RESUME_GUEST;
+	}
+
+	cond_resched();
+
+	local_irq_disable();
+
+	if (ret == RESUME_GUEST)
+		kvm_acquire_timer(vcpu);
+
+	if (!(ret & RESUME_HOST)) {
+		_kvm_deliver_intr(vcpu);
+		/* Only check for signals if not already exiting to userspace */
+		if (signal_pending(current)) {
+			run->exit_reason = KVM_EXIT_INTR;
+			ret = (-EINTR << 2) | RESUME_HOST;
+			++vcpu->stat.signal_exits;
+			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
+		}
+	}
+
+	if (ret == RESUME_GUEST) {
+		trace_kvm_reenter(vcpu);
+
+		/*
+		 * Make sure the read of VCPU requests in vcpu_reenter()
+		 * callback is not reordered ahead of the write to vcpu->mode,
+		 * or we could miss a TLB flush request while the requester sees
+		 * the VCPU as outside of guest mode and not needing an IPI.
+		 */
+		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
+
+		cpu = smp_processor_id();
+		_kvm_check_requests(vcpu, cpu);
+		_kvm_check_vmid(vcpu, cpu);
+		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+
+		/*
+		 * If FPU are enabled (i.e. the guest's FPU context
+		 * is live), restore FCSR0.
+		 */
+		if (_kvm_guest_has_fpu(&vcpu->arch) &&
+		    read_csr_euen() & (CSR_EUEN_FPEN)) {
+			kvm_restore_fcsr(&vcpu->arch.fpu);
+		}
+	}
+
+	return ret;
+}
+
+int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+{
+	int i;
+	unsigned long timer_hz;
+	struct loongarch_csrs *csr;
+	struct kvm_context *kvm_context = per_cpu_ptr(vcpu->kvm->arch.vmcs, 0);
+
+	for_each_possible_cpu(i)
+		vcpu->arch.vpid[i] = 0;
+
+	hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	vcpu->arch.swtimer.function = kvm_swtimer_wakeup;
+	vcpu->arch.fpu_enabled = true;
+	vcpu->kvm->arch.online_vcpus = vcpu->vcpu_id + 1;
+
+	vcpu->arch.guest_eentry = (unsigned long)kvm_context->kvm_eentry;
+	vcpu->arch.vcpu_run = kvm_context->kvm_enter_guest;
+	vcpu->arch.handle_exit = _kvm_handle_exit;
+	vcpu->arch.csr = kzalloc(sizeof(struct loongarch_csrs), GFP_KERNEL);
+	if (!vcpu->arch.csr)
+		return -ENOMEM;
+
+	/*
+	 * kvm all exceptions share one exception entry, and host <-> guest switch
+	 * also switch excfg.VS field, keep host excfg.VS info here
+	 */
+	vcpu->arch.host_ecfg = (read_csr_ecfg() & CSR_ECFG_VS);
+
+	/* Init */
+	vcpu->arch.last_sched_cpu = -1;
+	vcpu->arch.last_exec_cpu = -1;
+
+	/*
+	 * Initialize guest register state to valid architectural reset state.
+	 */
+	timer_hz = calc_const_freq();
+	kvm_init_timer(vcpu, timer_hz);
+
+	/* Set Initialize mode for GUEST */
+	csr = vcpu->arch.csr;
+	kvm_write_sw_gcsr(csr, LOONGARCH_CSR_CRMD, CSR_CRMD_DA);
+
+	/* Set cpuid */
+	kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TMID, vcpu->vcpu_id);
+
+	/* start with no pending virtual guest interrupts */
+	csr->csrs[LOONGARCH_CSR_GINTC] = 0;
+
+	return 0;
+}
+
+void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+{
+}
+
+void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+	int cpu;
+	struct kvm_context *context;
+
+	hrtimer_cancel(&vcpu->arch.swtimer);
+	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kfree(vcpu->arch.csr);
+
+	/*
+	 * If the VCPU is freed and reused as another VCPU, we don't want the
+	 * matching pointer wrongly hanging around in last_vcpu.
+	 */
+	for_each_possible_cpu(cpu) {
+		context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+		if (context->last_vcpu == vcpu)
+			context->last_vcpu = NULL;
+	}
+}
+
+int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+{
+	int r = -EINTR;
+	int cpu;
+	struct kvm_run *run = vcpu->run;
+
+	vcpu_load(vcpu);
+
+	kvm_sigset_activate(vcpu);
+
+	if (vcpu->mmio_needed) {
+		if (!vcpu->mmio_is_write)
+			_kvm_complete_mmio_read(vcpu, run);
+		vcpu->mmio_needed = 0;
+	}
+
+	if (run->exit_reason == KVM_EXIT_LOONGARCH_IOCSR) {
+		if (!run->iocsr_io.is_write)
+			_kvm_complete_iocsr_read(vcpu, run);
+	}
+
+	/* clear exit_reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+	if (run->immediate_exit)
+		goto out;
+
+	lose_fpu(1);
+
+	local_irq_disable();
+	guest_enter_irqoff();
+	trace_kvm_enter(vcpu);
+
+	/*
+	 * Make sure the read of VCPU requests in vcpu_run() callback is not
+	 * reordered ahead of the write to vcpu->mode, or we could miss a TLB
+	 * flush request while the requester sees the VCPU as outside of guest
+	 * mode and not needing an IPI.
+	 */
+	smp_store_mb(vcpu->mode, IN_GUEST_MODE);
+
+	cpu = smp_processor_id();
+	kvm_acquire_timer(vcpu);
+	/* Check if we have any exceptions/interrupts pending */
+	_kvm_deliver_intr(vcpu);
+
+	_kvm_check_requests(vcpu, cpu);
+	_kvm_check_vmid(vcpu, cpu);
+	vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+	r = vcpu->arch.vcpu_run(run, vcpu);
+
+	trace_kvm_out(vcpu);
+	guest_exit_irqoff();
+	local_irq_enable();
+
+out:
+	kvm_sigset_deactivate(vcpu);
+
+	vcpu_put(vcpu);
+	return r;
+}

From patchwork Tue Feb 14 02:56:28 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56616
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 04/24] LoongArch: KVM: Implement vcpu get, vcpu set registers
Date: Tue, 14 Feb 2023 10:56:28 +0800
Message-Id: <20230214025648.1898508-5-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement the LoongArch vcpu get-registers and set-registers operations; they are called when user space uses the ioctl interface to get or set guest registers.

Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/vcpu.c | 442 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 442 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 1732da8a8..a18864284 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,448 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	GET_HW_GCSR(id, LOONGARCH_CSR_CRMD, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PRMD, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_EUEN, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_MISC, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_ECFG, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_ESTAT, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_ERA, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_BADV, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_BADI, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_EENTRY, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBIDX, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBEHI, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBELO0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBELO1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_ASID, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PGDL, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PGDH, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PWCTL0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PWCTL1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_STLBPGSIZE, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_RVACFG, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_CPUID, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PRCFG1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PRCFG2, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PRCFG3, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS2, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS3, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS4, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS5, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS6, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS7, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TMID, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TCFG, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TVAL, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_CNTC, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_LLBCTL, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRENTRY, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRBADV, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRERA, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRSAVE, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRELO0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRELO1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBREHI, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRPRMD, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_DMWIN0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_DMWIN1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_DMWIN2, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_DMWIN3, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_MWPS, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_FWPS, v);
+
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_IMPCTL1, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_IMPCTL2, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRCTL, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRINFO1, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRINFO2, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRENTRY, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRERA, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRSAVE, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_CTAG, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_DEBUG, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_DERA, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_DESAVE, v);
+
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_TINTCLR, v);
+
+	if (force && (id < CSR_ALL_SIZE)) {
+		*v = kvm_read_sw_gcsr(csr, id);
+		return 0;
+	}
+
+	return -1;
+}
+
+int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	int ret;
+
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_CRMD, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PRMD, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_EUEN, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_MISC, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_ECFG, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_ERA, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_BADV, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_BADI, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_EENTRY, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBIDX, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBEHI, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBELO0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBELO1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_ASID, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PGDL, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PGDH, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PWCTL0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PWCTL1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_STLBPGSIZE, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_RVACFG, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_CPUID, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS2, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS3, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS4, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS5, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS6, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS7, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TMID, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TCFG, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TVAL, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_CNTC, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_LLBCTL, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRENTRY, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRBADV, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRERA, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRSAVE, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRELO0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRELO1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBREHI, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRPRMD, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_DMWIN0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_DMWIN1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_DMWIN2, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_DMWIN3, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_MWPS, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_FWPS, v);
+
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_IMPCTL1, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_IMPCTL2, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRCTL, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRINFO1, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRINFO2, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRENTRY, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRERA, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRSAVE, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_CTAG, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_DEBUG, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_DERA, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_DESAVE, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_PRCFG1, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_PRCFG2, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_PRCFG3, v);
+
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_PGD, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_TINTCLR, v);
+
+	ret = -1;
+	switch (id) {
+	case LOONGARCH_CSR_ESTAT:
+		write_gcsr_estat(*v);
+		/* estat IP0~IP7 inject through guestexcept */
+		write_csr_gintc(((*v) >> 2) & 0xff);
+		ret = 0;
+		break;
+	default:
+		if (force && (id < CSR_ALL_SIZE)) {
+			kvm_set_sw_gcsr(csr, id, *v);
+			ret = 0;
+		}
+		break;
+	}
+
+	return ret;
+}
+
+static int _kvm_get_one_reg(struct kvm_vcpu *vcpu,
+		const struct kvm_one_reg *reg, s64 *v)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	int reg_idx, ret;
+
+	if ((reg->id & KVM_IOC_CSRID(0)) == KVM_IOC_CSRID(0)) {
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		ret = _kvm_getcsr(vcpu, reg_idx, v, 0);
+		if (ret == 0)
+			return ret;
+	}
+
+	switch (reg->id) {
+	case KVM_REG_LOONGARCH_COUNTER:
+		*v = drdtime() + vcpu->kvm->arch.time_offset;
+		break;
+	default:
+		if ((reg->id & KVM_REG_LOONGARCH_MASK) != KVM_REG_LOONGARCH_CSR)
+			return -EINVAL;
+
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		if (reg_idx < CSR_ALL_SIZE)
+			*v = kvm_read_sw_gcsr(csr, reg_idx);
+		else
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	int ret;
+	s64 v;
+
+	ret = _kvm_get_one_reg(vcpu, reg, &v);
+	if (ret)
+		return ret;
+
+	ret = -EINVAL;
+	if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) {
+		u64 __user *uaddr = (u64 __user *)(long)reg->addr;
+
+		ret = put_user(v, uaddr);
+	} else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) {
+		u32 __user *uaddr = (u32 __user *)(long)reg->addr;
+		u32 v32 = (u32)v;
+
+		ret = put_user(v32, uaddr);
+	}
+
+	return ret;
+}
+
+static int _kvm_set_one_reg(struct kvm_vcpu *vcpu,
+		const struct kvm_one_reg *reg,
+		s64 v)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	int ret = 0;
+	unsigned long flags;
+	u64 val;
+	int reg_idx;
+
+	val = v;
+	if ((reg->id & KVM_IOC_CSRID(0)) == KVM_IOC_CSRID(0)) {
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		ret = _kvm_setcsr(vcpu, reg_idx, &val, 0);
+		if (ret == 0)
+			return ret;
+	}
+
+	switch (reg->id) {
+	case KVM_REG_LOONGARCH_COUNTER:
+		local_irq_save(flags);
+		/*
+		 * The counter offset belongs to the board, not to the vcpu,
+		 * so only set it once, for the first vcpu, on SMP systems.
+		 */
+		if (vcpu->vcpu_id == 0)
+			vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
+		write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
+		local_irq_restore(flags);
+		break;
+	case KVM_REG_LOONGARCH_VCPU_RESET:
+		kvm_reset_timer(vcpu);
+		memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
+		memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
+		break;
+	default:
+		if ((reg->id & KVM_REG_LOONGARCH_MASK) != KVM_REG_LOONGARCH_CSR)
+			return -EINVAL;
+
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		if (reg_idx < CSR_ALL_SIZE)
+			kvm_write_sw_gcsr(csr, reg_idx, v);
+		else
+			return -EINVAL;
+	}
+	return ret;
+}
+
+static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	s64 v;
+	int ret;
+
+	ret = -EINVAL;
+	if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) {
+		u64 __user *uaddr;
+
+		uaddr = (u64 __user *)(long)reg->addr;
+		ret = get_user(v, uaddr);
+	} else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) {
+		u32 __user *uaddr;
+		s32 v32;
+
+		uaddr = (u32 __user *)(long)reg->addr;
+		ret = get_user(v32, uaddr);
+		v = (s64)v32;
+	}
+
+	if (ret)
+		return -EFAULT;
+
+	return _kvm_set_one_reg(vcpu, reg, v);
+}
+
+/*
+ * Read or write a bunch of csrs.  All parameters are kernel addresses.
+ *
+ * @return number of csrs set successfully.
+ */
+static int _kvm_csr_io(struct kvm_vcpu *vcpu, struct kvm_csrs *csrs,
+		struct kvm_csr_entry *entries,
+		int (*do_csr)(struct kvm_vcpu *vcpu,
+			unsigned int index, u64 *data, int force))
+{
+	int i;
+
+	for (i = 0; i < csrs->ncsrs; ++i)
+		if (do_csr(vcpu, entries[i].index, &entries[i].data, 1))
+			break;
+
+	return i;
+}
+
+static int kvm_csr_io(struct kvm_vcpu *vcpu, struct kvm_csrs __user *user_csrs,
+		int (*do_csr)(struct kvm_vcpu *vcpu,
+			unsigned int index, u64 *data, int force))
+{
+	struct kvm_csrs csrs;
+	struct kvm_csr_entry *entries;
+	int r, n;
+	unsigned int size;
+
+	r = -EFAULT;
+	if (copy_from_user(&csrs, user_csrs, sizeof(csrs)))
+		goto out;
+
+	r = -E2BIG;
+	if (csrs.ncsrs >= CSR_ALL_SIZE)
+		goto out;
+
+	size = sizeof(struct kvm_csr_entry) * csrs.ncsrs;
+	entries = memdup_user(user_csrs->entries, size);
+	if (IS_ERR(entries)) {
+		r = PTR_ERR(entries);
+		goto out;
+	}
+
+	r = n = _kvm_csr_io(vcpu, &csrs, entries, do_csr);
+	if (r < 0)
+		goto out_free;
+
+	r = -EFAULT;
+	if (copy_to_user(user_csrs->entries, entries, size))
+		goto out_free;
+
+	r = n;
+
+out_free:
+	kfree(entries);
+out:
+	return r;
+}
+
+int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
+				struct kvm_sregs *sregs)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
+				struct kvm_sregs *sregs)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	int i;
+
+	vcpu_load(vcpu);
+
+	for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
+		regs->gpr[i] = vcpu->arch.gprs[i];
+
+	regs->pc = vcpu->arch.pc;
+
+	vcpu_put(vcpu);
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	int i;
+
+	vcpu_load(vcpu);
+
+	for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
+		vcpu->arch.gprs[i] = regs->gpr[i];
+	vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
+	vcpu->arch.pc = regs->pc;
+
+	vcpu_put(vcpu);
+	return 0;
+}
+
+long kvm_arch_vcpu_ioctl(struct file *filp,
+			unsigned int ioctl, unsigned long arg)
+{
+	struct kvm_vcpu *vcpu = filp->private_data;
+	void __user *argp = (void __user *)arg;
+	long r;
+
+	vcpu_load(vcpu);
+
+	switch (ioctl) {
+	case KVM_SET_ONE_REG:
+	case KVM_GET_ONE_REG: {
+		struct kvm_one_reg reg;
+
+		r = -EFAULT;
+		if (copy_from_user(&reg, argp, sizeof(reg)))
+			break;
+		if (ioctl == KVM_SET_ONE_REG)
+			r = _kvm_set_reg(vcpu, &reg);
+		else
+			r = _kvm_get_reg(vcpu, &reg);
+		break;
+	}
+	case KVM_GET_CSRS: {
+		r = kvm_csr_io(vcpu, argp, _kvm_getcsr);
+		break;
+	}
+	case KVM_SET_CSRS: {
+		r = kvm_csr_io(vcpu, argp, _kvm_setcsr);
+		break;
+	}
+	default:
+		r = -ENOIOCTLCMD;
+		break;
+	}
+
+	vcpu_put(vcpu);
+	return r;
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;

From patchwork Tue Feb 14 02:56:29 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56633
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 05/24] LoongArch: KVM: Implement vcpu ENABLE_CAP, CHECK_EXTENSION ioctl interface
Date: Tue, 14 Feb 2023 10:56:29 +0800
Message-Id: <20230214025648.1898508-6-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement the LoongArch vcpu KVM_ENABLE_CAP and KVM_CHECK_EXTENSION ioctl interfaces.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/vcpu.c | 46 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index a18864284..dd803f26d 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -415,6 +415,29 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				struct kvm_enable_cap *cap)
+{
+	int r = 0;
+
+	if (!kvm_vm_ioctl_check_extension(vcpu->kvm, cap->cap))
+		return -EINVAL;
+	if (cap->flags)
+		return -EINVAL;
+	if (cap->args[0])
+		return -EINVAL;
+
+	switch (cap->cap) {
+	case KVM_CAP_LOONGARCH_FPU:
+		break;
+	default:
+		r = -EINVAL;
+		break;
+	}
+
+	return r;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg)
 {
@@ -438,6 +461,29 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		r = _kvm_get_reg(vcpu, &reg);
 		break;
 	}
+	case KVM_ENABLE_CAP: {
+		struct kvm_enable_cap cap;
+
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			break;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
+	case KVM_CHECK_EXTENSION: {
+		unsigned int ext;
+
+		if (copy_from_user(&ext, argp, sizeof(ext)))
+			return -EFAULT;
+		switch (ext) {
+		case KVM_CAP_LOONGARCH_FPU:
+			r = !!cpu_has_fpu;
+			break;
+		default:
+			break;
+		}
+		break;
+	}
 	case KVM_GET_CSRS: {
 		r = kvm_csr_io(vcpu, argp, _kvm_getcsr);
 		break;

From patchwork Tue Feb 14 02:56:30 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56632
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 06/24] LoongArch: KVM: Implement fpu related operations for vcpu
Date: Tue, 14 Feb 2023 10:56:30 +0800
Message-Id: <20230214025648.1898508-7-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement the LoongArch FPU-related interfaces for the vcpu, such as get fpu, set fpu, own fpu and lose fpu.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/vcpu.c | 70 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index dd803f26d..ed569508f 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -501,6 +501,76 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	return r;
 }
 
+int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+	int i = 0;
+
+	/* No need for vcpu_load and vcpu_put */
+	fpu->fcsr = vcpu->arch.fpu.fcsr;
+	fpu->fcc = vcpu->arch.fpu.fcc;
+	for (i = 0; i < NUM_FPU_REGS; i++)
+		memcpy(&fpu->fpr[i], &vcpu->arch.fpu.fpr[i], FPU_REG_WIDTH / 64);
+
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+	int i = 0;
+
+	/* No need for vcpu_load and vcpu_put */
+	vcpu->arch.fpu.fcsr = fpu->fcsr;
+	vcpu->arch.fpu.fcc = fpu->fcc;
+	for (i = 0; i < NUM_FPU_REGS; i++)
+		memcpy(&vcpu->arch.fpu.fpr[i], &fpu->fpr[i], FPU_REG_WIDTH / 64);
+
+	return 0;
+}
+
+/* Enable FPU for guest and restore context */
+void kvm_own_fpu(struct kvm_vcpu *vcpu)
+{
+	unsigned long sr;
+
+	preempt_disable();
+
+	sr = kvm_read_hw_gcsr(LOONGARCH_CSR_EUEN);
+
+	/*
+	 * Enable FPU for guest
+	 * We set FR and FRE according to guest context
+	 */
+	set_csr_euen(CSR_EUEN_FPEN);
+
+	/* If guest FPU state is not active, restore it now */
+	if (!(vcpu->arch.aux_inuse & KVM_LARCH_FPU)) {
+		kvm_restore_fpu(&vcpu->arch.fpu);
+		vcpu->arch.aux_inuse |= KVM_LARCH_FPU;
+		trace_kvm_aux(vcpu, KVM_TRACE_AUX_RESTORE, KVM_TRACE_AUX_FPU);
+	} else {
+		trace_kvm_aux(vcpu, KVM_TRACE_AUX_ENABLE, KVM_TRACE_AUX_FPU);
+	}
+
+	preempt_enable();
+}
+
+/* Save and disable FPU */
+void kvm_lose_fpu(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+
+	if (vcpu->arch.aux_inuse & KVM_LARCH_FPU) {
+		kvm_save_fpu(&vcpu->arch.fpu);
+		vcpu->arch.aux_inuse &= ~KVM_LARCH_FPU;
+		trace_kvm_aux(vcpu, KVM_TRACE_AUX_SAVE, KVM_TRACE_AUX_FPU);
+
+		/* Disable FPU */
+		clear_csr_euen(CSR_EUEN_FPEN);
+	}
+
+	preempt_enable();
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;

From patchwork Tue Feb 14 02:56:31 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56617
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 07/24] LoongArch: KVM: Implement vcpu interrupt operations
Date: Tue, 14 Feb 2023 10:56:31 +0800
Message-Id: <20230214025648.1898508-8-zhaotianrui@loongson.cn>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References:
<20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement vcpu interrupt operations, such as setting and clearing vcpu
irqs, using set_gcsr_estat to inject an irq whose number is parsed from
the irq bitmap.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/interrupt.c | 126 +++++++++++++++++++++++++++++++++
 arch/loongarch/kvm/vcpu.c      |  45 ++++++++++++
 2 files changed, 171 insertions(+)
 create mode 100644 arch/loongarch/kvm/interrupt.c

diff --git a/arch/loongarch/kvm/interrupt.c b/arch/loongarch/kvm/interrupt.c
new file mode 100644
index 000000000..02267a71d
--- /dev/null
+++ b/arch/loongarch/kvm/interrupt.c
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+
+static unsigned int int_to_coreint[LOONGARCH_EXC_MAX] = {
+	[LARCH_INT_TIMER]	= CPU_TIMER,
+	[LARCH_INT_IPI]		= CPU_IPI,
+	[LARCH_INT_SIP0]	= CPU_SIP0,
+	[LARCH_INT_SIP1]	= CPU_SIP1,
+	[LARCH_INT_IP0]		= CPU_IP0,
+	[LARCH_INT_IP1]		= CPU_IP1,
+	[LARCH_INT_IP2]		= CPU_IP2,
+	[LARCH_INT_IP3]		= CPU_IP3,
+	[LARCH_INT_IP4]		= CPU_IP4,
+	[LARCH_INT_IP5]		= CPU_IP5,
+	[LARCH_INT_IP6]		= CPU_IP6,
+	[LARCH_INT_IP7]		= CPU_IP7,
+};
+
+static int _kvm_irq_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
+{
+	unsigned int irq = 0;
+
+	clear_bit(priority, &vcpu->arch.irq_pending);
+	if (priority < LOONGARCH_EXC_MAX)
+		irq = int_to_coreint[priority];
+
+	switch (priority) {
+	case LARCH_INT_TIMER:
+	case LARCH_INT_IPI:
+	case LARCH_INT_SIP0:
+	case LARCH_INT_SIP1:
+		set_gcsr_estat(irq);
+		break;
+
+	case LARCH_INT_IP0:
+	case LARCH_INT_IP1:
+	case LARCH_INT_IP2:
+	case LARCH_INT_IP3:
+	case LARCH_INT_IP4:
+	case LARCH_INT_IP5:
+	case LARCH_INT_IP6:
+	case LARCH_INT_IP7:
+		set_csr_gintc(irq);
+		break;
+
+	default:
+		break;
+	}
+
+	return 1;
+}
+
+static int _kvm_irq_clear(struct kvm_vcpu *vcpu, unsigned int priority)
+{
+	unsigned int irq = 0;
+
+	clear_bit(priority, &vcpu->arch.irq_clear);
+	if (priority < LOONGARCH_EXC_MAX)
+		irq = int_to_coreint[priority];
+
+	switch (priority) {
+	case LARCH_INT_TIMER:
+	case LARCH_INT_IPI:
+	case LARCH_INT_SIP0:
+	case LARCH_INT_SIP1:
+		clear_gcsr_estat(irq);
+		break;
+
+	case LARCH_INT_IP0:
+	case LARCH_INT_IP1:
+	case LARCH_INT_IP2:
+	case LARCH_INT_IP3:
+	case LARCH_INT_IP4:
+	case LARCH_INT_IP5:
+	case LARCH_INT_IP6:
+	case LARCH_INT_IP7:
+		clear_csr_gintc(irq);
+		break;
+
+	default:
+		break;
+	}
+
+	return 1;
+}
+
+void _kvm_deliver_intr(struct kvm_vcpu *vcpu)
+{
+	unsigned long *pending = &vcpu->arch.irq_pending;
+	unsigned long *pending_clr = &vcpu->arch.irq_clear;
+	unsigned int priority;
+
+	if (!(*pending) && !(*pending_clr))
+		return;
+
+	if (*pending_clr) {
+		priority = __ffs(*pending_clr);
+		while (priority <= LOONGARCH_EXC_IPNUM) {
+			_kvm_irq_clear(vcpu, priority);
+			priority = find_next_bit(pending_clr,
+					BITS_PER_BYTE * sizeof(*pending_clr),
+					priority + 1);
+		}
+	}
+
+	if (*pending) {
+		priority = __ffs(*pending);
+		while (priority <= LOONGARCH_EXC_IPNUM) {
+			_kvm_irq_deliver(vcpu, priority);
+			priority = find_next_bit(pending,
+					BITS_PER_BYTE * sizeof(*pending),
+					priority + 1);
+		}
+	}
+}
+
+int _kvm_pending_timer(struct kvm_vcpu *vcpu)
+{
+	return test_bit(LARCH_INT_TIMER, &vcpu->arch.irq_pending);
+}
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index ed569508f..3e94b8537 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -571,6 +571,51 @@ void kvm_lose_fpu(struct kvm_vcpu *vcpu)
 	preempt_enable();
 }
 
+int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
+			struct kvm_loongarch_interrupt *irq)
+{
+	int intr = (int)irq->irq;
+	struct kvm_vcpu *dvcpu = NULL;
+
+	if (irq->cpu == -1)
+		dvcpu = vcpu;
+	else
+		dvcpu = kvm_get_vcpu(vcpu->kvm, irq->cpu);
+
+	if (intr > 0)
+		_kvm_queue_irq(dvcpu, intr);
+	else if (intr < 0)
+		_kvm_dequeue_irq(dvcpu, -intr);
+	else {
+		kvm_err("%s: invalid interrupt ioctl (%d:%d)\n", __func__,
+			irq->cpu, irq->irq);
+		return -EINVAL;
+	}
+
+	kvm_vcpu_kick(dvcpu);
+	return 0;
+}
+
+long kvm_arch_vcpu_async_ioctl(struct file *filp,
+			unsigned int ioctl, unsigned long arg)
+{
+	struct kvm_vcpu *vcpu = filp->private_data;
+	void __user *argp = (void __user *)arg;
+
+	if (ioctl == KVM_INTERRUPT) {
+		struct kvm_loongarch_interrupt irq;
+
+		if (copy_from_user(&irq, argp, sizeof(irq)))
+			return -EFAULT;
+		kvm_debug("[%d] %s: irq: %d\n", vcpu->vcpu_id, __func__,
+			irq.irq);
+
+		return kvm_vcpu_ioctl_interrupt(vcpu, &irq);
+	}
+
+	return -ENOIOCTLCMD;
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;

From patchwork Tue Feb 14 02:56:32 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56613
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 08/24] LoongArch: KVM: Implement misc
vcpu related interfaces
Date: Tue, 14 Feb 2023 10:56:32 +0800
Message-Id: <20230214025648.1898508-9-zhaotianrui@loongson.cn>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement some misc vcpu related interfaces, such
as vcpu runnable, vcpu should kick, vcpu dump regs, etc.

Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/vcpu.c | 112 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 112 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 3e94b8537..a4e825dd1 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,118 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
+{
+	return !!(vcpu->arch.irq_pending);
+}
+
+int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
+{
+	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
+}
+
+bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+{
+	return VM_FAULT_SIGBUS;
+}
+
+int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
+				struct kvm_translation *tr)
+{
+	return 0;
+}
+
+int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
+{
+	return _kvm_pending_timer(vcpu) ||
+		kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT) &
+			(1 << (EXCCODE_TIMER - EXCCODE_INT_START));
+}
+
+int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu)
+{
+	int i;
+
+	if (!vcpu)
+		return -1;
+
+	kvm_debug("VCPU Register Dump:\n");
+	kvm_debug("\tpc = 0x%08lx\n", vcpu->arch.pc);
+	kvm_debug("\texceptions: %08lx\n", vcpu->arch.irq_pending);
+
+	for (i = 0; i < 32; i += 4) {
+		kvm_debug("\tgpr%02d: %08lx %08lx %08lx %08lx\n", i,
+			vcpu->arch.gprs[i],
+			vcpu->arch.gprs[i + 1],
+			vcpu->arch.gprs[i + 2], vcpu->arch.gprs[i + 3]);
+	}
+
+	kvm_debug("\tCRMOD: 0x%08llx, exst: 0x%08llx\n",
+		kvm_read_hw_gcsr(LOONGARCH_CSR_CRMD),
+		kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT));
+
+	kvm_debug("\tERA: 0x%08llx\n", kvm_read_hw_gcsr(LOONGARCH_CSR_ERA));
+
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
+				struct kvm_mp_state *mp_state)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
+				struct kvm_mp_state *mp_state)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
+				struct kvm_guest_debug *dbg)
+{
+	return -EINVAL;
+}
+
+static int lvcpu_stat_get(void *address, u64 *val)
+{
+	*val = *(u64 *)address;
+	return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(lvcpu_stat_fops, lvcpu_stat_get, NULL, "%llu\n");
+
+static int vcpu_pid_get(void *arg, u64 *val)
+{
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)arg;
+
+	if (vcpu)
+		*val = pid_vnr(vcpu->pid);
+	return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(vcpu_pid_fops, vcpu_pid_get, NULL, "%llu\n");
+
+/**
+ * kvm_migrate_count() - Migrate timer.
+ * @vcpu: Virtual CPU.
+ *
+ * Migrate hrtimer to the current CPU by cancelling and restarting it
+ * if it was running prior to being cancelled.
+ *
+ * Must be called when the VCPU is migrated to a different CPU to ensure that
+ * timer expiry during guest execution interrupts the guest and causes the
+ * interrupt to be delivered in a timely manner.
+ */
+static void kvm_migrate_count(struct kvm_vcpu *vcpu)
+{
+	if (hrtimer_cancel(&vcpu->arch.swtimer))
+		hrtimer_restart(&vcpu->arch.swtimer);
+}
+
 int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force)
 {
 	struct loongarch_csrs *csr = vcpu->arch.csr;

From patchwork Tue Feb 14 02:56:33 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56634
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 09/24] LoongArch: KVM: Implement vcpu load and vcpu put operations
Date: Tue, 14 Feb 2023 10:56:33 +0800
Message-Id: <20230214025648.1898508-10-zhaotianrui@loongson.cn>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement LoongArch vcpu load and vcpu put operations, including loading
CSR values into hardware and saving CSR values back into the vcpu
structure.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/vcpu.c | 192 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 192 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index a4e825dd1..0228941ec 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -914,6 +914,198 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	}
 }
 
+static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_context *context;
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	bool migrated, all;
+
+	/*
+	 * Have we migrated to a different CPU?
+	 * If so, any old guest TLB state may be stale.
+	 */
+	migrated = (vcpu->arch.last_sched_cpu != cpu);
+
+	/*
+	 * Was this the last VCPU to run on this CPU?
+	 * If not, any old guest state from this VCPU will have been clobbered.
+	 */
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	all = migrated || (context->last_vcpu != vcpu);
+	context->last_vcpu = vcpu;
+
+	/*
+	 * Restore timer state regardless
+	 */
+	kvm_restore_timer(vcpu);
+
+	/* Control guest page CCA attribute */
+	change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT);
+	/* Don't bother restoring registers multiple times unless necessary */
+	if (!all)
+		return 0;
+
+	write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
+	/*
+	 * Restore guest CSR registers
+	 */
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EUEN);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_MISC);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ECFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ERA);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADV);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EENTRY);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ASID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDL);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDH);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_RVACFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CPUID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS2);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS3);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS4);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS5);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS6);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS7);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TMID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CNTC);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL);
+
+	/* restore Root.Guestexcept from unused Guest guestexcept register */
+	write_csr_gintc(csr->csrs[LOONGARCH_CSR_GINTC]);
+
+	/*
+	 * We should clear linked load bit to break interrupted atomics. This
+	 * prevents a SC on the next VCPU from succeeding by matching a LL on
+	 * the previous VCPU.
+	 */
+	if (vcpu->kvm->created_vcpus > 1)
+		set_gcsr_llbctl(CSR_LLBCTL_WCLLB);
+
+	return 0;
+}
+
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	vcpu->cpu = cpu;
+	if (vcpu->arch.last_sched_cpu != cpu) {
+		kvm_debug("[%d->%d]KVM VCPU[%d] switch\n",
+			vcpu->arch.last_sched_cpu, cpu, vcpu->vcpu_id);
+		/*
+		 * Migrate the timer interrupt to the current CPU so that it
+		 * always interrupts the guest and synchronously triggers a
+		 * guest timer interrupt.
+		 */
+		kvm_migrate_count(vcpu);
+	}
+
+	/* restore guest state to registers */
+	_kvm_vcpu_load(vcpu, cpu);
+	local_irq_restore(flags);
+}
+
+static int _kvm_vcpu_put(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	kvm_lose_fpu(vcpu);
+
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CRMD);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRMD);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EUEN);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_MISC);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ECFG);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ERA);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADV);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADI);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EENTRY);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ASID);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDL);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDH);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGD);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_RVACFG);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CPUID);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG2);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG3);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS2);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS3);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS4);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS5);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS6);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS7);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TMID);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CNTC);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3);
+
+	/* save Root.Guestexcept in unused Guest guestexcept register */
+	kvm_save_timer(vcpu);
+	csr->csrs[LOONGARCH_CSR_GINTC] = read_csr_gintc();
+	return 0;
+}
+
+void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	unsigned long flags;
+	int cpu;
+
+	local_irq_save(flags);
+	cpu = smp_processor_id();
+	vcpu->arch.last_sched_cpu = cpu;
+	vcpu->cpu = -1;
+
+	/* save guest state in registers */
+	_kvm_vcpu_put(vcpu, cpu);
+	local_irq_restore(flags);
+}
+
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 {
 	int r = -EINTR;

From patchwork Tue Feb 14 02:56:34 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56618
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 10/24] LoongArch: KVM: Implement vcpu status description
Date: Tue, 14 Feb 2023 10:56:34 +0800
Message-Id: <20230214025648.1898508-11-zhaotianrui@loongson.cn>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement the LoongArch vcpu status descriptions, such as the idle exits
counter, signal exits counter, cpucfg exits counter, etc.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/vcpu.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 0228941ec..0136ee3a1 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,23 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
+	KVM_GENERIC_VCPU_STATS(),
+	STATS_DESC_COUNTER(VCPU, idle_exits),
+	STATS_DESC_COUNTER(VCPU, signal_exits),
+	STATS_DESC_COUNTER(VCPU, int_exits),
+	STATS_DESC_COUNTER(VCPU, cpucfg_exits),
+};
+
+const struct kvm_stats_header kvm_vcpu_stats_header = {
+	.name_size = KVM_STATS_NAME_SIZE,
+	.num_desc = ARRAY_SIZE(kvm_vcpu_stats_desc),
+	.id_offset = sizeof(struct kvm_stats_header),
+	.desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE,
+	.data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE +
+		       sizeof(kvm_vcpu_stats_desc),
+};
+
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 {
 	return !!(vcpu->arch.irq_pending);

From patchwork Tue Feb 14 02:56:35 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56623
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 11/24] LoongArch: KVM: Implement update VM id function
Date: Tue, 14 Feb 2023 10:56:35 +0800
Message-Id: <20230214025648.1898508-12-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement the KVM VMID check and update functions; the VMID must be
checked and, if stale, refreshed before a vcpu enters guest mode.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/vmid.c | 64 +++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 arch/loongarch/kvm/vmid.c

diff --git a/arch/loongarch/kvm/vmid.c b/arch/loongarch/kvm/vmid.c
new file mode 100644
index 000000000..82729968e
--- /dev/null
+++ b/arch/loongarch/kvm/vmid.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include "trace.h"
+
+static void _kvm_update_vpid(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_context *context;
+	unsigned long vpid;
+
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	vpid = context->vpid_cache + 1;
+	if (!(vpid & context->vpid_mask)) {
+		/* finish round of 64 bit loop */
+		if (unlikely(!vpid))
+			vpid = context->vpid_mask + 1;
+
+		/* vpid 0 reserved for root */
+		++vpid;
+
+		/* start new vpid cycle */
+		kvm_flush_tlb_all();
+	}
+
+	context->vpid_cache = vpid;
+	vcpu->arch.vpid[cpu] = vpid;
+}
+
+void _kvm_check_vmid(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_context *context;
+	bool migrated;
+	unsigned long ver, old, vpid;
+
+	/*
+	 * Are we entering guest context on a different CPU to last time?
+	 * If so, the VCPU's guest TLB state on this CPU may be stale.
+	 */
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	migrated = (vcpu->arch.last_exec_cpu != cpu);
+	vcpu->arch.last_exec_cpu = cpu;
+
+	/*
+	 * Check if our vpid is of an older version
+	 *
+	 * We also discard the stored vpid if we've executed on
+	 * another CPU, as the guest mappings may have changed without
+	 * hypervisor knowledge.
+	 */
+	ver = vcpu->arch.vpid[cpu] & ~context->vpid_mask;
+	old = context->vpid_cache & ~context->vpid_mask;
+	if (migrated || (ver != old)) {
+		_kvm_update_vpid(vcpu, cpu);
+		trace_kvm_vpid_change(vcpu, vcpu->arch.vpid[cpu]);
+	}
+
+	/* Restore GSTAT(0x50).vpid */
+	vpid = (vcpu->arch.vpid[cpu] & context->vpid_mask)
+			<< CSR_GSTAT_GID_SHIFT;
+	change_csr_gstat(context->vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
+}

From patchwork Tue Feb 14 02:56:36 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56615
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 12/24] LoongArch: KVM: Implement virtual machine tlb operations
Date: Tue, 14 Feb 2023 10:56:36 +0800
Message-Id: <20230214025648.1898508-13-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement the LoongArch virtual machine TLB operations: flush the TLB
entry for a specific gpa, and flush all of the virtual machine's TLB
entries.

Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/tlb.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)
 create mode 100644 arch/loongarch/kvm/tlb.c

diff --git a/arch/loongarch/kvm/tlb.c b/arch/loongarch/kvm/tlb.c
new file mode 100644
index 000000000..66e116cf2
--- /dev/null
+++ b/arch/loongarch/kvm/tlb.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+
+int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa)
+{
+	preempt_disable();
+	gpa &= (PAGE_MASK << 1);
+	invtlb(INVTLB_GID_ADDR, read_csr_gstat() & CSR_GSTAT_GID, gpa);
+	preempt_enable();
+	return 0;
+}
+
+/**
+ * kvm_flush_tlb_all() - Flush all root TLB entries for guests.
+ *
+ * Invalidate all entries including GVA-->GPA and GPA-->HPA mappings.
+ */
+void kvm_flush_tlb_all(void)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	invtlb_all(INVTLB_ALLGID, 0, 0);
+	local_irq_restore(flags);
+}

From patchwork Tue Feb 14 02:56:37 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56619
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 13/24] LoongArch: KVM: Implement vcpu timer operations
Date: Tue, 14 Feb 2023 10:56:37 +0800
Message-Id: <20230214025648.1898508-14-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement the LoongArch vcpu timer operations: init kvm timer, acquire
kvm timer, save kvm timer and restore kvm timer. While a vcpu has
exited, a kvm soft timer (hrtimer) emulates the hardware timer. If a
timeout happens, the vcpu timer interrupt is set and is handled at the
vcpu's next entry.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/timer.c | 266 +++++++++++++++++++++++++++++++++++++
 1 file changed, 266 insertions(+)
 create mode 100644 arch/loongarch/kvm/timer.c

diff --git a/arch/loongarch/kvm/timer.c b/arch/loongarch/kvm/timer.c
new file mode 100644
index 000000000..2c7677248
--- /dev/null
+++ b/arch/loongarch/kvm/timer.c
@@ -0,0 +1,266 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+
+/* low level hrtimer wake routine */
+enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer)
+{
+	struct kvm_vcpu *vcpu;
+
+	vcpu = container_of(timer, struct kvm_vcpu, arch.swtimer);
+	_kvm_queue_irq(vcpu, LARCH_INT_TIMER);
+	rcuwait_wake_up(&vcpu->wait);
+	return kvm_count_timeout(vcpu);
+}
+
+/*
+ * ktime_to_tick() - Scale ktime_t to a 64-bit stable timer.
+ *
+ * Caches the dynamic nanosecond bias in vcpu->arch.timer_dyn_bias.
+ */
+static unsigned long ktime_to_tick(struct kvm_vcpu *vcpu, ktime_t now)
+{
+	s64 now_ns, periods;
+	unsigned long delta;
+
+	now_ns = ktime_to_ns(now);
+	delta = now_ns + vcpu->arch.timer_dyn_bias;
+
+	if (delta >= vcpu->arch.timer_period) {
+		/* If delta is out of safe range the bias needs adjusting */
+		periods = div64_s64(now_ns, vcpu->arch.timer_period);
+		vcpu->arch.timer_dyn_bias = -periods * vcpu->arch.timer_period;
+		/* Recalculate delta with new bias */
+		delta = now_ns + vcpu->arch.timer_dyn_bias;
+	}
+
+	/*
+	 * We've ensured that:
+	 * delta < timer_period
+	 */
+	return div_u64(delta * vcpu->arch.timer_mhz, MNSEC_PER_SEC);
+}
+
+/**
+ * kvm_resume_hrtimer() - Resume hrtimer, updating expiry.
+ * @vcpu: Virtual CPU.
+ * @now: ktime at point of resume.
+ * @val: stable timer at point of resume.
+ *
+ * Resumes the timer and updates the timer expiry based on @now and @val.
+ */
+static void kvm_resume_hrtimer(struct kvm_vcpu *vcpu, ktime_t now,
+			unsigned long val)
+{
+	unsigned long delta;
+	ktime_t expire;
+
+	/*
+	 * Stable timer decreased to zero or initialized to zero,
+	 * set 4 second timer
+	 */
+	delta = div_u64(val * MNSEC_PER_SEC, vcpu->arch.timer_mhz);
+	expire = ktime_add_ns(now, delta);
+
+	/* Update hrtimer to use new timeout */
+	hrtimer_cancel(&vcpu->arch.swtimer);
+	hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED);
+}
+
+/**
+ * kvm_init_timer() - Initialise stable timer.
+ * @vcpu: Virtual CPU.
+ * @timer_hz: Frequency of timer.
+ *
+ * Initialise the timer to the specified frequency, zero it, and set it
+ * going if it's enabled.
+ */
+void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long timer_hz)
+{
+	ktime_t now;
+	unsigned long ticks;
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	ticks = (unsigned long)MNSEC_PER_SEC * CSR_TCFG_VAL;
+	vcpu->arch.timer_mhz = timer_hz >> 20;
+	vcpu->arch.timer_period = div_u64(ticks, vcpu->arch.timer_mhz);
+	vcpu->arch.timer_dyn_bias = 0;
+
+	/* Starting at 0 */
+	ticks = 0;
+	now = ktime_get();
+	vcpu->arch.timer_bias = ticks - ktime_to_tick(vcpu, now);
+	vcpu->arch.timer_bias &= CSR_TCFG_VAL;
+	kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TVAL, ticks);
+}
+
+/**
+ * kvm_count_timeout() - Push timer forward on timeout.
+ * @vcpu: Virtual CPU.
+ *
+ * Handle an hrtimer event by pushing the hrtimer forward a period.
+ *
+ * Returns: The hrtimer_restart value to return to the hrtimer subsystem.
+ */
+enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu)
+{
+	unsigned long cfg;
+
+	/* Add the Count period to the current expiry time */
+	cfg = kvm_read_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG);
+	if (cfg & CSR_TCFG_PERIOD) {
+		hrtimer_add_expires_ns(&vcpu->arch.swtimer, cfg & CSR_TCFG_VAL);
+		return HRTIMER_RESTART;
+	} else
+		return HRTIMER_NORESTART;
+}
+
+/*
+ * kvm_restore_timer() - Restore timer state.
+ * @vcpu: Virtual CPU.
+ *
+ * Restore soft timer state from saved context.
+ */
+void kvm_restore_timer(struct kvm_vcpu *vcpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	ktime_t saved_ktime, now;
+	unsigned long val, new, delta;
+	int expired = 0;
+	unsigned long cfg;
+
+	/*
+	 * Set guest stable timer cfg csr
+	 */
+	cfg = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
+	if (!(cfg & CSR_TCFG_EN)) {
+		kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TCFG);
+		kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TVAL);
+		return;
+	}
+
+	now = ktime_get();
+	saved_ktime = vcpu->arch.stable_ktime_saved;
+	val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TVAL);
+
+	/* hrtimer not expired */
+	delta = ktime_to_tick(vcpu, ktime_sub(now, saved_ktime));
+	if (delta >= val) {
+		expired = 1;
+		if (cfg & CSR_TCFG_PERIOD)
+			new = (delta - val) % (cfg & CSR_TCFG_VAL);
+		else
+			new = 1;
+	} else
+		new = val - delta;
+
+	new &= CSR_TCFG_VAL;
+	write_gcsr_timercfg(cfg);
+	write_gcsr_timertick(new);
+	if (expired)
+		_kvm_queue_irq(vcpu, LARCH_INT_TIMER);
+}
+
+/*
+ * kvm_acquire_timer() - Switch to hard timer state.
+ * @vcpu: Virtual CPU.
+ *
+ * Restore hard timer state on top of existing soft timer state if possible.
+ *
+ * Since the hard timer won't remain active over preemption, preemption
+ * should be disabled by the caller.
+ */
+void kvm_acquire_timer(struct kvm_vcpu *vcpu)
+{
+	unsigned long flags, guestcfg;
+
+	guestcfg = read_csr_gcfg();
+	if (!(guestcfg & CSR_GCFG_TIT))
+		return;
+
+	/* enable guest access to hard timer */
+	write_csr_gcfg(guestcfg & ~CSR_GCFG_TIT);
+
+	/*
+	 * Freeze the soft-timer and sync the guest stable timer with it. We do
+	 * this with interrupts disabled to avoid latency.
+	 */
+	local_irq_save(flags);
+	hrtimer_cancel(&vcpu->arch.swtimer);
+	local_irq_restore(flags);
+}
+
+/*
+ * _kvm_save_timer() - Switch to software emulation of guest timer.
+ * @vcpu: Virtual CPU.
+ *
+ * Save guest timer state and switch to software emulation of the guest
+ * timer. The hard timer must already be in use, so preemption should be
+ * disabled.
+ */
+static ktime_t _kvm_save_timer(struct kvm_vcpu *vcpu, unsigned long *val)
+{
+	unsigned long end_time;
+	ktime_t before_time;
+
+	before_time = ktime_get();
+
+	/*
+	 * Record a final stable timer which we will transfer to the soft-timer.
+	 */
+	end_time = read_gcsr_timertick();
+	*val = end_time;
+
+	kvm_resume_hrtimer(vcpu, before_time, end_time);
+	return before_time;
+}
+
+/*
+ * kvm_save_timer() - Save guest timer state.
+ * @vcpu: Virtual CPU.
+ *
+ * Save guest timer state and switch to soft guest timer if hard timer was in
+ * use.
+ */
+void kvm_save_timer(struct kvm_vcpu *vcpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	unsigned long guestcfg, val;
+	ktime_t save_ktime;
+
+	preempt_disable();
+	guestcfg = read_csr_gcfg();
+	if (!(guestcfg & CSR_GCFG_TIT)) {
+		/* disable guest use of hard timer */
+		write_csr_gcfg(guestcfg | CSR_GCFG_TIT);
+
+		/* save hard timer state */
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TCFG);
+		if (kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG) & CSR_TCFG_EN) {
+			save_ktime = _kvm_save_timer(vcpu, &val);
+			kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TVAL, val);
+			vcpu->arch.stable_ktime_saved = save_ktime;
+			if (val == CSR_TCFG_VAL)
+				_kvm_queue_irq(vcpu, LARCH_INT_TIMER);
+		} else {
+			kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TVAL);
+		}
+	}
+
+	/* save timer-related state to VCPU context */
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
+	preempt_enable();
+}
+
+void kvm_reset_timer(struct kvm_vcpu *vcpu)
+{
+	write_gcsr_timercfg(0);
+	kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG, 0);
+	hrtimer_cancel(&vcpu->arch.swtimer);
+}

From patchwork Tue Feb 14 02:56:38 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56631
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 14/24] LoongArch: KVM: Implement kvm mmu operations
Date: Tue, 14 Feb 2023 10:56:38 +0800
Message-Id: <20230214025648.1898508-15-zhaotianrui@loongson.cn>
Implement the LoongArch KVM MMU. It translates a guest physical address (GPA) into a host physical address (HPA) when the guest exits because of an address translation exception. This patch implements allocating the GPA page table, looking up a GPA in it, and flushing guest GPA ranges from the table.
Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/mmu.c | 821 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 821 insertions(+) create mode 100644 arch/loongarch/kvm/mmu.c diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c new file mode 100644 index 000000000..049824f8e --- /dev/null +++ b/arch/loongarch/kvm/mmu.c @@ -0,0 +1,821 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * KVM_MMU_CACHE_MIN_PAGES is the number of GPA page table translation levels + * for which pages need to be cached. + */ +#if defined(__PAGETABLE_PMD_FOLDED) +#define KVM_MMU_CACHE_MIN_PAGES 1 +#else +#define KVM_MMU_CACHE_MIN_PAGES 2 +#endif + +/** + * kvm_pgd_alloc() - Allocate and initialise a KVM GPA page directory. + * + * Allocate a blank KVM GPA page directory (PGD) for representing guest physical + * to host physical page mappings. + * + * Returns: Pointer to new KVM GPA page directory. + * NULL on allocation failure. + */ +pgd_t *kvm_pgd_alloc(void) +{ + pgd_t *pgd; + + pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 0); + if (pgd) + pgd_init((void *)pgd); + + return pgd; +} + +/** + * kvm_walk_pgd() - Walk page table with optional allocation. + * @pgd: Page directory pointer. + * @addr: Address to index page table using. + * @cache: MMU page cache to allocate new page tables from, or NULL. + * + * Walk the page tables pointed to by @pgd to find the PTE corresponding to the + * address @addr. If page tables don't exist for @addr, they will be created + * from the MMU cache if @cache is not NULL. + * + * Returns: Pointer to pte_t corresponding to @addr. + * NULL if a page table doesn't exist for @addr and !@cache. + * NULL if a page table allocation failed. 
+ */ +static pte_t *kvm_walk_pgd(pgd_t *pgd, struct kvm_mmu_memory_cache *cache, + unsigned long addr) +{ + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + + pgd += pgd_index(addr); + if (pgd_none(*pgd)) { + /* Not used yet */ + BUG(); + return NULL; + } + p4d = p4d_offset(pgd, addr); + pud = pud_offset(p4d, addr); + if (pud_none(*pud)) { + pmd_t *new_pmd; + + if (!cache) + return NULL; + new_pmd = kvm_mmu_memory_cache_alloc(cache); + pmd_init((void *)new_pmd); + pud_populate(NULL, pud, new_pmd); + } + pmd = pmd_offset(pud, addr); + if (pmd_none(*pmd)) { + pte_t *new_pte; + + if (!cache) + return NULL; + new_pte = kvm_mmu_memory_cache_alloc(cache); + clear_page(new_pte); + pmd_populate_kernel(NULL, pmd, new_pte); + } + return pte_offset_kernel(pmd, addr); +} + +/* Caller must hold kvm->mm_lock */ +static pte_t *kvm_pte_for_gpa(struct kvm *kvm, + struct kvm_mmu_memory_cache *cache, + unsigned long addr) +{ + return kvm_walk_pgd(kvm->arch.gpa_mm.pgd, cache, addr); +} + +/* + * kvm_flush_gpa_{pte,pmd,pud,pgd,pt}. + * Flush a range of guest physical address space from the VM's GPA page tables. 
+ */ + +static bool kvm_flush_gpa_pte(pte_t *pte, unsigned long start_gpa, + unsigned long end_gpa, unsigned long *data) +{ + int i_min = pte_index(start_gpa); + int i_max = pte_index(end_gpa); + bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PTE - 1); + int i; + + for (i = i_min; i <= i_max; ++i) { + if (!pte_present(pte[i])) + continue; + + set_pte(pte + i, __pte(0)); + if (data) + *data += 1; + } + return safe_to_remove; +} + +static bool kvm_flush_gpa_pmd(pmd_t *pmd, unsigned long start_gpa, + unsigned long end_gpa, unsigned long *data) +{ + pte_t *pte; + unsigned long end = ~0ul; + int i_min = pmd_index(start_gpa); + int i_max = pmd_index(end_gpa); + bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PMD - 1); + int i; + + for (i = i_min; i <= i_max; ++i, start_gpa = 0) { + if (!pmd_present(pmd[i])) + continue; + + pte = pte_offset_kernel(pmd + i, 0); + if (i == i_max) + end = end_gpa; + + if (kvm_flush_gpa_pte(pte, start_gpa, end, data)) { + pmd_clear(pmd + i); + pte_free_kernel(NULL, pte); + } else { + safe_to_remove = false; + } + } + return safe_to_remove; +} + +static bool kvm_flush_gpa_pud(pud_t *pud, unsigned long start_gpa, + unsigned long end_gpa, unsigned long *data) +{ + pmd_t *pmd; + unsigned long end = ~0ul; + int i_min = pud_index(start_gpa); + int i_max = pud_index(end_gpa); + bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PUD - 1); + int i; + + for (i = i_min; i <= i_max; ++i, start_gpa = 0) { + if (!pud_present(pud[i])) + continue; + + pmd = pmd_offset(pud + i, 0); + if (i == i_max) + end = end_gpa; + + if (kvm_flush_gpa_pmd(pmd, start_gpa, end, data)) { + pud_clear(pud + i); + pmd_free(NULL, pmd); + } else { + safe_to_remove = false; + } + } + return safe_to_remove; +} + +static bool kvm_flush_gpa_pgd(pgd_t *pgd, unsigned long start_gpa, + unsigned long end_gpa, unsigned long *data) +{ + p4d_t *p4d; + pud_t *pud; + unsigned long end = ~0ul; + int i_min = pgd_index(start_gpa); + int i_max = pgd_index(end_gpa); + bool 
safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PGD - 1); + int i; + + for (i = i_min; i <= i_max; ++i, start_gpa = 0) { + if (!pgd_present(pgd[i])) + continue; + + p4d = p4d_offset(pgd, 0); + pud = pud_offset(p4d + i, 0); + if (i == i_max) + end = end_gpa; + + if (kvm_flush_gpa_pud(pud, start_gpa, end, data)) { + pgd_clear(pgd + i); + pud_free(NULL, pud); + } else { + safe_to_remove = false; + } + } + return safe_to_remove; +} + +/** + * kvm_flush_gpa_range() - Flush a range of guest physical addresses. + * @kvm: KVM pointer. + * @start_gfn: Guest frame number of first page in GPA range to flush. + * @end_gfn: Guest frame number of last page in GPA range to flush. + * + * Flushes a range of GPA mappings from the GPA page tables. + * + * The caller must hold the @kvm->mmu_lock spinlock. + * + * Returns: Whether it is safe to remove the top level page directory because + * all lower levels have been removed. + */ +static bool kvm_flush_gpa_range(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn, void *data) +{ + return kvm_flush_gpa_pgd(kvm->arch.gpa_mm.pgd, + start_gfn << PAGE_SHIFT, + end_gfn << PAGE_SHIFT, (unsigned long *)data); +} + +/* + * kvm_mkclean_gpa_pt. + * Mark a range of guest physical address space clean (writes fault) in the VM's + * GPA page table to allow dirty page tracking.
+ */ + +static int kvm_mkclean_pte(pte_t *pte, unsigned long start, unsigned long end) +{ + int ret = 0; + int i_min = pte_index(start); + int i_max = pte_index(end); + int i; + pte_t val; + + for (i = i_min; i <= i_max; ++i) { + val = pte[i]; + if (pte_present(val) && pte_dirty(val)) { + set_pte(pte + i, pte_mkclean(val)); + ret = 1; + } + } + return ret; +} + +static int kvm_mkclean_pmd(pmd_t *pmd, unsigned long start, unsigned long end) +{ + int ret = 0; + pte_t *pte; + unsigned long cur_end = ~0ul; + int i_min = pmd_index(start); + int i_max = pmd_index(end); + int i; + + for (i = i_min; i <= i_max; ++i, start = 0) { + if (!pmd_present(pmd[i])) + continue; + + pte = pte_offset_kernel(pmd + i, 0); + if (i == i_max) + cur_end = end; + + ret |= kvm_mkclean_pte(pte, start, cur_end); + } + + return ret; +} + +static int kvm_mkclean_pud(pud_t *pud, unsigned long start, unsigned long end) +{ + int ret = 0; + pmd_t *pmd; + unsigned long cur_end = ~0ul; + int i_min = pud_index(start); + int i_max = pud_index(end); + int i; + + for (i = i_min; i <= i_max; ++i, start = 0) { + if (!pud_present(pud[i])) + continue; + + pmd = pmd_offset(pud + i, 0); + if (i == i_max) + cur_end = end; + + ret |= kvm_mkclean_pmd(pmd, start, cur_end); + } + return ret; +} + +static int kvm_mkclean_pgd(pgd_t *pgd, unsigned long start, unsigned long end) +{ + int ret = 0; + p4d_t *p4d; + pud_t *pud; + unsigned long cur_end = ~0ul; + int i_min = pgd_index(start); + int i_max = pgd_index(end); + int i; + + for (i = i_min; i <= i_max; ++i, start = 0) { + if (!pgd_present(pgd[i])) + continue; + + p4d = p4d_offset(pgd, 0); + pud = pud_offset(p4d + i, 0); + if (i == i_max) + cur_end = end; + + ret |= kvm_mkclean_pud(pud, start, cur_end); + } + return ret; +} + +/** + * kvm_mkclean_gpa_pt() - Make a range of guest physical addresses clean. + * @kvm: KVM pointer. + * @start_gfn: Guest frame number of first page in GPA range to flush. + * @end_gfn: Guest frame number of last page in GPA range to flush. 
+ * + * Make a range of GPA mappings clean so that guest writes will fault and + * trigger dirty page logging. + * + * The caller must hold the @kvm->mmu_lock spinlock. + * + * Returns: Whether any GPA mappings were modified, which would require + * derived mappings (GVA page tables & TLB entries) to be + * invalidated. + */ +static int kvm_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn) +{ + return kvm_mkclean_pgd(kvm->arch.gpa_mm.pgd, start_gfn << PAGE_SHIFT, + end_gfn << PAGE_SHIFT); +} + +/** + * kvm_arch_mmu_enable_log_dirty_pt_masked() - write protect dirty pages + * @kvm: The KVM pointer + * @slot: The memory slot associated with mask + * @gfn_offset: The gfn offset in memory slot + * @mask: The mask of dirty pages at offset 'gfn_offset' in this memory + * slot to be write protected + * + * Walks the bits set in @mask and write-protects the associated PTEs. The + * caller must acquire @kvm->mmu_lock. + */ +void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, + struct kvm_memory_slot *slot, + gfn_t gfn_offset, unsigned long mask) +{ + gfn_t base_gfn = slot->base_gfn + gfn_offset; + gfn_t start = base_gfn + __ffs(mask); + gfn_t end = base_gfn + __fls(mask); + + kvm_mkclean_gpa_pt(kvm, start, end); +} + +void kvm_arch_commit_memory_region(struct kvm *kvm, + struct kvm_memory_slot *old, + const struct kvm_memory_slot *new, + enum kvm_mr_change change) +{ + int needs_flush; + + /* + * If dirty page logging is enabled, write protect all pages in the slot + * ready for dirty logging. + * + * There is no need to do this in any of the following cases: + * CREATE: No dirty mappings will already exist.
+ * MOVE/DELETE: The old mappings will already have been cleaned up by + * kvm_arch_flush_shadow_memslot() + */ + if (change == KVM_MR_FLAGS_ONLY && + (!(old->flags & KVM_MEM_LOG_DIRTY_PAGES) && + new->flags & KVM_MEM_LOG_DIRTY_PAGES)) { + spin_lock(&kvm->mmu_lock); + /* Write protect GPA page table entries */ + needs_flush = kvm_mkclean_gpa_pt(kvm, new->base_gfn, + new->base_gfn + new->npages - 1); + if (needs_flush) + kvm_flush_remote_tlbs(kvm); + spin_unlock(&kvm->mmu_lock); + } +} + +void kvm_arch_flush_shadow_all(struct kvm *kvm) +{ + /* Flush whole GPA */ + kvm_flush_gpa_range(kvm, 0, ~0UL, NULL); + /* Flush vpid for each VCPU individually */ + kvm_flush_remote_tlbs(kvm); +} + +void kvm_arch_flush_shadow_memslot(struct kvm *kvm, + struct kvm_memory_slot *slot) +{ + unsigned long npages; + + /* + * The slot has been made invalid (ready for moving or deletion), so we + * need to ensure that it can no longer be accessed by any guest VCPUs. + */ + + npages = 0; + spin_lock(&kvm->mmu_lock); + /* Flush slot from GPA */ + kvm_flush_gpa_range(kvm, slot->base_gfn, + slot->base_gfn + slot->npages - 1, &npages); + /* Let implementation do the rest */ + if (npages) + kvm_flush_remote_tlbs(kvm); + spin_unlock(&kvm->mmu_lock); +} + +void _kvm_destroy_mm(struct kvm *kvm) +{ + /* It should always be safe to remove after flushing the whole range */ + WARN_ON(!kvm_flush_gpa_range(kvm, 0, ~0UL, NULL)); + pgd_free(NULL, kvm->arch.gpa_mm.pgd); + kvm->arch.gpa_mm.pgd = NULL; +} + +/* + * Mark a range of guest physical address space old (all accesses fault) in the + * VM's GPA page table to allow detection of commonly used pages. 
+ */ + +static int kvm_mkold_pte(pte_t *pte, unsigned long start, unsigned long end) +{ + int ret = 0; + int i_min = pte_index(start); + int i_max = pte_index(end); + int i; + pte_t old, new; + + for (i = i_min; i <= i_max; ++i) { + if (!pte_present(pte[i])) + continue; + + old = pte[i]; + new = pte_mkold(old); + if (pte_val(new) == pte_val(old)) + continue; + set_pte(pte + i, new); + ret = 1; + } + + return ret; +} + +static int kvm_mkold_pmd(pmd_t *pmd, unsigned long start, unsigned long end) +{ + int ret = 0; + pte_t *pte; + unsigned long cur_end = ~0ul; + int i_min = pmd_index(start); + int i_max = pmd_index(end); + int i; + + for (i = i_min; i <= i_max; ++i, start = 0) { + if (!pmd_present(pmd[i])) + continue; + + pte = pte_offset_kernel(pmd + i, 0); + if (i == i_max) + cur_end = end; + + ret |= kvm_mkold_pte(pte, start, cur_end); + } + + return ret; +} + +static int kvm_mkold_pud(pud_t *pud, unsigned long start, unsigned long end) +{ + int ret = 0; + pmd_t *pmd; + unsigned long cur_end = ~0ul; + int i_min = pud_index(start); + int i_max = pud_index(end); + int i; + + for (i = i_min; i <= i_max; ++i, start = 0) { + if (!pud_present(pud[i])) + continue; + + pmd = pmd_offset(pud + i, 0); + if (i == i_max) + cur_end = end; + + ret |= kvm_mkold_pmd(pmd, start, cur_end); + } + + return ret; +} + +static int kvm_mkold_pgd(pgd_t *pgd, unsigned long start, unsigned long end) +{ + int ret = 0; + p4d_t *p4d; + pud_t *pud; + unsigned long cur_end = ~0ul; + int i_min = pgd_index(start); + int i_max = pgd_index(end); + int i; + + for (i = i_min; i <= i_max; ++i, start = 0) { + if (!pgd_present(pgd[i])) + continue; + + p4d = p4d_offset(pgd, 0); + pud = pud_offset(p4d + i, 0); + if (i == i_max) + cur_end = end; + + ret |= kvm_mkold_pud(pud, start, cur_end); + } + + return ret; +} + +bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) +{ + unsigned long npages = 0; + + kvm_flush_gpa_range(kvm, range->start, range->end, &npages); + return npages > 0; +} + 
+bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) +{ + gpa_t gpa = range->start << PAGE_SHIFT; + pte_t hva_pte = range->pte; + pte_t *ptep = kvm_pte_for_gpa(kvm, NULL, gpa); + pte_t old_pte; + + if (!ptep) + return false; + + /* Mapping may need adjusting depending on memslot flags */ + old_pte = *ptep; + if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte)) + hva_pte = pte_mkclean(hva_pte); + else if (range->slot->flags & KVM_MEM_READONLY) + hva_pte = pte_wrprotect(hva_pte); + + set_pte(ptep, hva_pte); + + /* Replacing an absent or old page doesn't need flushes */ + if (!pte_present(old_pte) || !pte_young(old_pte)) + return false; + + /* Pages swapped, aged, moved, or cleaned require flushes */ + return !pte_present(hva_pte) || + !pte_young(hva_pte) || + pte_pfn(old_pte) != pte_pfn(hva_pte) || + (pte_dirty(old_pte) && !pte_dirty(hva_pte)); +} + +bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) +{ + return kvm_mkold_pgd(kvm->arch.gpa_mm.pgd, range->start << PAGE_SHIFT, + range->end << PAGE_SHIFT); +} + +bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) +{ + gpa_t gpa = range->start << PAGE_SHIFT; + pte_t *ptep = kvm_pte_for_gpa(kvm, NULL, gpa); + + if (ptep && pte_present(*ptep) && pte_young(*ptep)) + return true; + + return false; +} + +/** + * kvm_map_page_fast() - Fast path GPA fault handler. + * @vcpu: VCPU pointer. + * @gpa: Guest physical address of fault. + * @write: Whether the fault was due to a write. + * + * Perform fast path GPA fault handling, doing all that can be done without + * calling into KVM. This handles marking old pages young (for idle page + * tracking), and dirtying of clean pages (for dirty page logging). + * + * Returns: 0 on success, in which case we can update derived mappings and + * resume guest execution. + * -EFAULT on failure due to absent GPA mapping or write to + * read-only page, in which case KVM must be consulted. 
+ */ +static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, + bool write) +{ + struct kvm *kvm = vcpu->kvm; + gfn_t gfn = gpa >> PAGE_SHIFT; + pte_t *ptep; + kvm_pfn_t pfn = 0; + bool pfn_valid = false; + int ret = 0; + + spin_lock(&kvm->mmu_lock); + + /* Fast path - just check GPA page table for an existing entry */ + ptep = kvm_pte_for_gpa(kvm, NULL, gpa); + if (!ptep || !pte_present(*ptep)) { + ret = -EFAULT; + goto out; + } + + /* Track access to pages marked old */ + if (!pte_young(*ptep)) { + set_pte(ptep, pte_mkyoung(*ptep)); + pfn = pte_pfn(*ptep); + pfn_valid = true; + /* call kvm_set_pfn_accessed() after unlock */ + } + if (write && !pte_dirty(*ptep)) { + if (!pte_write(*ptep)) { + ret = -EFAULT; + goto out; + } + + /* Track dirtying of writeable pages */ + set_pte(ptep, pte_mkdirty(*ptep)); + pfn = pte_pfn(*ptep); + mark_page_dirty(kvm, gfn); + kvm_set_pfn_dirty(pfn); + } + +out: + spin_unlock(&kvm->mmu_lock); + if (pfn_valid) + kvm_set_pfn_accessed(pfn); + return ret; +} + +/** + * kvm_map_page() - Map a guest physical page. + * @vcpu: VCPU pointer. + * @gpa: Guest physical address of fault. + * @write: Whether the fault was due to a write. + * + * Handle GPA faults by creating a new GPA mapping (or updating an existing + * one). + * + * This takes care of marking pages young or dirty (idle/dirty page tracking), + * asking KVM for the corresponding PFN, and creating a mapping in the GPA page + * tables. Derived mappings (GVA page tables and TLBs) must be handled by the + * caller. + * + * Returns: 0 on success + * -EFAULT if there is no memory region at @gpa or a write was + * attempted to a read-only memory region. This is usually handled + * as an MMIO access. 
+ */ +static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write) +{ + bool writeable; + int srcu_idx, err = 0, retry_no = 0; + unsigned long hva; + unsigned long mmu_seq; + unsigned long prot_bits; + pte_t *ptep, new_pte; + kvm_pfn_t pfn; + gfn_t gfn = gpa >> PAGE_SHIFT; + struct vm_area_struct *vma; + struct kvm *kvm = vcpu->kvm; + struct kvm_memory_slot *memslot; + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache; + + /* Try the fast path to handle old / clean pages */ + srcu_idx = srcu_read_lock(&kvm->srcu); + err = kvm_map_page_fast(vcpu, gpa, write); + if (!err) + goto out; + + memslot = gfn_to_memslot(kvm, gfn); + hva = gfn_to_hva_memslot_prot(memslot, gfn, &writeable); + if (kvm_is_error_hva(hva) || (write && !writeable)) + goto out; + + /* Let's check if we will get back a huge page backed by hugetlbfs */ + mmap_read_lock(current->mm); + vma = find_vma_intersection(current->mm, hva, hva + 1); + if (unlikely(!vma)) { + kvm_err("Failed to find VMA for hva 0x%lx\n", hva); + mmap_read_unlock(current->mm); + err = -EFAULT; + goto out; + } + mmap_read_unlock(current->mm); + + /* We need a minimum of cached pages ready for page table creation */ + err = kvm_mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES); + if (err) + goto out; + +retry: + /* + * Used to check for invalidations in progress, of the pfn that is + * returned by gfn_to_pfn_prot below. + */ + mmu_seq = kvm->mmu_invalidate_seq; + /* + * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads in + * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't + * risk the page we get a reference to getting unmapped before we have a + * chance to grab the mmu_lock without mmu_invalidate_retry() noticing. + * + * This smp_rmb() pairs with the effective smp_wmb() of the combination + * of the pte_unmap_unlock() after the PTE is zapped, and the + * spin_lock() in kvm_mmu_notifier_invalidate_range_end() before + * mmu_invalidate_seq is incremented.
+ */ + smp_rmb(); + + /* Slow path - ask KVM core whether we can access this GPA */ + pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable); + if (is_error_noslot_pfn(pfn)) { + err = -EFAULT; + goto out; + } + + spin_lock(&kvm->mmu_lock); + /* Check if an invalidation has taken place since we got pfn */ + if (mmu_invalidate_retry(kvm, mmu_seq)) { + /* + * This can happen when mappings are changed asynchronously, but + * also synchronously if a COW is triggered by + * gfn_to_pfn_prot(). + */ + spin_unlock(&kvm->mmu_lock); + kvm_set_pfn_accessed(pfn); + kvm_release_pfn_clean(pfn); + if (retry_no > 100) { + retry_no = 0; + schedule(); + } + retry_no++; + goto retry; + } + + /* + * For emulated devices such as virtio devices, the actual cache attribute + * is determined by the physical machine. + * For a passed-through physical device, it should be uncacheable. + */ + prot_bits = _PAGE_PRESENT | __READABLE; + if (vma->vm_flags & (VM_IO | VM_PFNMAP)) + prot_bits |= _CACHE_SUC; + else + prot_bits |= _CACHE_CC; + + if (writeable) { + prot_bits |= _PAGE_WRITE; + if (write) { + prot_bits |= __WRITEABLE; + mark_page_dirty(kvm, gfn); + kvm_set_pfn_dirty(pfn); + } + } + + /* Ensure page tables are allocated */ + ptep = kvm_pte_for_gpa(kvm, memcache, gpa); + new_pte = pfn_pte(pfn, __pgprot(prot_bits)); + set_pte(ptep, new_pte); + + err = 0; + spin_unlock(&kvm->mmu_lock); + kvm_release_pfn_clean(pfn); + kvm_set_pfn_accessed(pfn); +out: + srcu_read_unlock(&kvm->srcu, srcu_idx); + return err; +} + +int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write) +{ + int ret; + + ret = kvm_map_page(vcpu, gpa, write); + if (ret) + return ret; + + /* Invalidate this entry in the TLB */ + return kvm_flush_tlb_gpa(vcpu, gpa); +} + +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot) +{ + +} + +int kvm_arch_prepare_memory_region(struct kvm *kvm, + const struct kvm_memory_slot *old, + struct kvm_memory_slot *new, + enum kvm_mr_change change) +{ + return 0; +} + +void
kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, + const struct kvm_memory_slot *memslot) +{ + kvm_flush_remote_tlbs(kvm); +}

From patchwork Tue Feb 14 02:56:39 2023
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 15/24] LoongArch: KVM: Implement handle csr exception
Date: Tue, 14 Feb 2023 10:56:39 +0800
Message-Id: <20230214025648.1898508-16-zhaotianrui@loongson.cn>
Implement handling of LoongArch VCPU exits caused by guest reads and writes of CSRs. The loongarch_csr structure is used to emulate the registers in software.
Signed-off-by: Tianrui Zhao --- arch/loongarch/include/asm/kvm_csr.h | 89 +++++++++++++++++++++++ arch/loongarch/kvm/exit.c | 101 +++++++++++++++++++++++++++ 2 files changed, 190 insertions(+) create mode 100644 arch/loongarch/include/asm/kvm_csr.h create mode 100644 arch/loongarch/kvm/exit.c diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h new file mode 100644 index 000000000..44fcd724c --- /dev/null +++ b/arch/loongarch/include/asm/kvm_csr.h @@ -0,0 +1,89 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#ifndef __ASM_LOONGARCH_KVM_CSR_H__ +#define __ASM_LOONGARCH_KVM_CSR_H__ +#include +#include +#include +#include +#include + +#define kvm_read_hw_gcsr(id) gcsr_read(id) +#define kvm_write_hw_gcsr(csr, id, val) gcsr_write(val, id) + +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force); +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force); + +int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu); + +static inline void kvm_save_hw_gcsr(struct loongarch_csrs *csr, int gid) +{ + csr->csrs[gid] = gcsr_read(gid); +} + +static inline void kvm_restore_hw_gcsr(struct loongarch_csrs *csr, int gid) +{ + gcsr_write(csr->csrs[gid], gid); +} + +static inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid) +{ + return csr->csrs[gid]; +} + +static inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val) +{ + csr->csrs[gid] = val; +} + +static inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val) +{ + csr->csrs[gid] |= val; +} + +static inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long mask, + unsigned long val) +{ + unsigned long _mask = mask; + + csr->csrs[gid] &= ~_mask; + csr->csrs[gid] |= val & _mask; +} + + +#define GET_HW_GCSR(id, csrid, v) \ + do { \ + if (csrid == id) { \ + *v = 
(long)kvm_read_hw_gcsr(csrid); \
+		return 0; \
+	} \
+} while (0)
+
+#define GET_SW_GCSR(csr, id, csrid, v) \
+do { \
+	if (csrid == id) { \
+		*v = kvm_read_sw_gcsr(csr, id); \
+		return 0; \
+	} \
+} while (0)
+
+#define SET_HW_GCSR(csr, id, csrid, v) \
+do { \
+	if (csrid == id) { \
+		kvm_write_hw_gcsr(csr, csrid, *v); \
+		return 0; \
+	} \
+} while (0)
+
+#define SET_SW_GCSR(csr, id, csrid, v) \
+do { \
+	if (csrid == id) { \
+		kvm_write_sw_gcsr(csr, csrid, *v); \
+		return 0; \
+	} \
+} while (0)
+
+#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
new file mode 100644
index 000000000..dc37827d9
--- /dev/null
+++ b/arch/loongarch/kvm/exit.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+static unsigned long _kvm_emu_read_csr(struct kvm_vcpu *vcpu, int csrid)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	unsigned long val = 0;
+
+	if (csrid < 4096)
+		val = kvm_read_sw_gcsr(csr, csrid);
+	else
+		pr_warn_once("Unsupported csrrd 0x%x with pc %lx\n",
+			csrid, vcpu->arch.pc);
+	return val;
+}
+
+static void _kvm_emu_write_csr(struct kvm_vcpu *vcpu, int csrid,
+	unsigned long val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (csrid < 4096)
+		kvm_write_sw_gcsr(csr, csrid, val);
+	else
+		pr_warn_once("Unsupported csrwr 0x%x with pc %lx\n",
+			csrid, vcpu->arch.pc);
+}
+
+static void _kvm_emu_xchg_csr(struct kvm_vcpu *vcpu, int csrid,
+	unsigned long csr_mask, unsigned long val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (csrid < 4096) {
+		unsigned long orig;
+
+		orig = kvm_read_sw_gcsr(csr, csrid);
+		orig &= ~csr_mask;
+		orig |= val & csr_mask;
+		kvm_write_sw_gcsr(csr, csrid,
orig);
+	} else
+		pr_warn_once("Unsupported csrxchg 0x%x with pc %lx\n",
+			csrid, vcpu->arch.pc);
+}
+
+static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	unsigned int rd, rj, csrid;
+	unsigned long csr_mask;
+	unsigned long val = 0;
+
+	/*
+	 * The rj operand selects the CSR op:
+	 * rj = 0 means csrrd
+	 * rj = 1 means csrwr
+	 * rj != 0,1 means csrxchg
+	 */
+	rd = inst.reg2csr_format.rd;
+	rj = inst.reg2csr_format.rj;
+	csrid = inst.reg2csr_format.csr;
+
+	/* Process CSR ops */
+	if (rj == 0) {
+		/* process csrrd */
+		val = _kvm_emu_read_csr(vcpu, csrid);
+		vcpu->arch.gprs[rd] = val;
+	} else if (rj == 1) {
+		/* process csrwr */
+		val = vcpu->arch.gprs[rd];
+		_kvm_emu_write_csr(vcpu, csrid, val);
+	} else {
+		/* process csrxchg */
+		val = vcpu->arch.gprs[rd];
+		csr_mask = vcpu->arch.gprs[rj];
+		_kvm_emu_xchg_csr(vcpu, csrid, csr_mask, val);
+	}
+
+	return EMULATE_DONE;
+}

From patchwork Tue Feb 14 02:56:40 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56622
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 16/24] LoongArch: KVM: Implement handle iocsr exception
Date: Tue, 14 Feb 2023 10:56:40 +0800
Message-Id: <20230214025648.1898508-17-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement KVM handling of the vCPU IOCSR exception: set the IOCSR info in vcpu_run and return to user space to handle it.

Signed-off-by: Tianrui Zhao
---
 arch/loongarch/include/asm/inst.h | 16 ++++
 arch/loongarch/kvm/exit.c         | 92 +++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)

diff --git a/arch/loongarch/include/asm/inst.h b/arch/loongarch/include/asm/inst.h
index 7eedd83fd..8ed137814 100644
--- a/arch/loongarch/include/asm/inst.h
+++ b/arch/loongarch/include/asm/inst.h
@@ -50,6 +50,14 @@ enum reg2_op {
 	revbd_op	= 0x0f,
 	revh2w_op	= 0x10,
 	revhd_op	= 0x11,
+	iocsrrdb_op	= 0x19200,
+	iocsrrdh_op	= 0x19201,
+	iocsrrdw_op	= 0x19202,
+	iocsrrdd_op	= 0x19203,
+	iocsrwrb_op	= 0x19204,
+	iocsrwrh_op	= 0x19205,
+	iocsrwrw_op	= 0x19206,
+	iocsrwrd_op	= 0x19207,
 };

 enum reg2i5_op {
@@ -261,6 +269,13 @@ struct reg3sa2_format {
 	unsigned int opcode : 15;
 };

+struct reg2csr_format {
+	unsigned int rd : 5;
+	unsigned int rj : 5;
+	unsigned int csr : 14;
+	unsigned int opcode : 8;
+};
+
 union loongarch_instruction {
 	unsigned int word;
 	struct reg0i26_format reg0i26_format;
@@ -275,6 +290,7 @@ union loongarch_instruction {
 	struct reg2bstrd_format reg2bstrd_format;
 	struct reg3_format reg3_format;
 	struct reg3sa2_format reg3sa2_format;
+	struct reg2csr_format reg2csr_format;
 };

 #define LOONGARCH_INSN_SIZE sizeof(union loongarch_instruction)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index dc37827d9..f02e2b940 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -99,3 +99,95 @@ static int
_kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
 	return EMULATE_DONE;
 }
+
+int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	u32 rd, rj, opcode;
+	u32 addr;
+	unsigned long val;
+	int ret;
+
+	/*
+	 * Each IOCSR access width and direction has its own opcode
+	 */
+	rd = inst.reg2_format.rd;
+	rj = inst.reg2_format.rj;
+	opcode = inst.reg2_format.opcode;
+	addr = vcpu->arch.gprs[rj];
+	ret = EMULATE_DO_IOCSR;
+	run->iocsr_io.phys_addr = addr;
+	run->iocsr_io.is_write = 0;
+
+	/* LoongArch is little endian */
+	switch (opcode) {
+	case iocsrrdb_op:
+		run->iocsr_io.len = 1;
+		break;
+	case iocsrrdh_op:
+		run->iocsr_io.len = 2;
+		break;
+	case iocsrrdw_op:
+		run->iocsr_io.len = 4;
+		break;
+	case iocsrrdd_op:
+		run->iocsr_io.len = 8;
+		break;
+	case iocsrwrb_op:
+		run->iocsr_io.len = 1;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrh_op:
+		run->iocsr_io.len = 2;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrw_op:
+		run->iocsr_io.len = 4;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrd_op:
+		run->iocsr_io.len = 8;
+		run->iocsr_io.is_write = 1;
+		break;
+	default:
+		ret = EMULATE_FAIL;
+		break;
+	}
+
+	if (ret == EMULATE_DO_IOCSR) {
+		if (run->iocsr_io.is_write) {
+			val = vcpu->arch.gprs[rd];
+			memcpy(run->iocsr_io.data, &val, run->iocsr_io.len);
+		}
+		vcpu->arch.io_gpr = rd;
+	}
+
+	return ret;
+}
+
+int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+	enum emulation_result er = EMULATE_DONE;
+
+	switch (run->iocsr_io.len) {
+	case 8:
+		*gpr = *(s64 *)run->iocsr_io.data;
+		break;
+	case 4:
+		*gpr = *(int *)run->iocsr_io.data;
+		break;
+	case 2:
+		*gpr = *(short *)run->iocsr_io.data;
+		break;
+	case 1:
+		*gpr = *(char *)run->iocsr_io.data;
+		break;
+	default:
+		kvm_err("Bad IOCSR length: %d, addr is 0x%lx",
+			run->iocsr_io.len, vcpu->arch.badv);
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	return er;
+}

From patchwork Tue Feb 14 02:56:41 2023 Content-Type:
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56624
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 17/24] LoongArch: KVM: Implement handle idle exception
Date: Tue, 14 Feb 2023 10:56:41 +0800
Message-Id: <20230214025648.1898508-18-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement KVM handling of the LoongArch vCPU idle exception, using kvm_vcpu_block() to emulate it.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/exit.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index f02e2b940..a6beb83a0 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -191,3 +191,15 @@ int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return er;
 }
+
+int _kvm_emu_idle(struct kvm_vcpu *vcpu)
+{
+	++vcpu->stat.idle_exits;
+	trace_kvm_exit(vcpu, KVM_TRACE_EXIT_IDLE);
+	if (!vcpu->arch.irq_pending) {
+		kvm_save_timer(vcpu);
+		kvm_vcpu_block(vcpu);
+	}
+
+	return EMULATE_DONE;
+}

From patchwork Tue Feb 14 02:56:42 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56625
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 18/24] LoongArch: KVM: Implement handle gspr exception
Date: Tue, 14 Feb 2023 10:56:42 +0800
Message-Id: <20230214025648.1898508-19-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement the KVM handle gspr exception interface, emulating reads and writes of the cpucfg, CSR and IOCSR resources.

Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/exit.c | 114 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index a6beb83a0..75c61272b 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -203,3 +203,117 @@ int _kvm_emu_idle(struct kvm_vcpu *vcpu)
 	return EMULATE_DONE;
 }
+
+static int _kvm_trap_handle_gspr(struct kvm_vcpu *vcpu)
+{
+	enum emulation_result er = EMULATE_DONE;
+	struct kvm_run *run = vcpu->run;
+	larch_inst inst;
+	unsigned long curr_pc;
+	int rd, rj;
+	unsigned int index;
+
+	/*
+	 * Fetch the instruction.
+	 */
+	inst.word = vcpu->arch.badi;
+	curr_pc = vcpu->arch.pc;
+	update_pc(&vcpu->arch);
+
+	er = EMULATE_FAIL;
+	switch (((inst.word >> 24) & 0xff)) {
+	case 0x0:
+		/* cpucfg GSPR */
+		if (inst.reg2_format.opcode == 0x1B) {
+			rd = inst.reg2_format.rd;
+			rj = inst.reg2_format.rj;
+			++vcpu->stat.cpucfg_exits;
+			index = vcpu->arch.gprs[rj];
+
+			vcpu->arch.gprs[rd] = read_cpucfg(index);
+			/* Nested KVM is not supported */
+			if (index == 2)
+				vcpu->arch.gprs[rd] &= ~CPUCFG2_LVZP;
+			if (index == 6)
+				vcpu->arch.gprs[rd] &= ~CPUCFG6_PMP;
+			er = EMULATE_DONE;
+		}
+		break;
+	case 0x4:
+		/* csr GSPR */
+		er = _kvm_handle_csr(vcpu, inst);
+		break;
+	case 0x6:
+		/* iocsr,cache,idle GSPR */
+		switch (((inst.word >> 22) & 0x3ff)) {
+		case 0x18:
+			/* cache GSPR */
+			er = EMULATE_DONE;
+			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
+			break;
+		case 0x19:
+			/* iocsr/idle GSPR */
+			switch (((inst.word >> 15) & 0x1ffff)) {
+			case 0xc90:
+				/* iocsr GSPR */
+				er = _kvm_emu_iocsr(inst, run, vcpu);
+				break;
+			case 0xc91:
+				/* idle GSPR */
+				er = _kvm_emu_idle(vcpu);
+				break;
+			default:
+				er =
EMULATE_FAIL;
+				break;
+			}
+			break;
+		default:
+			er = EMULATE_FAIL;
+			break;
+		}
+		break;
+	default:
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	/* Rollback PC only if emulation was unsuccessful */
+	if (er == EMULATE_FAIL) {
+		kvm_err("[%#lx]%s: unsupported gspr instruction 0x%08x\n",
+			curr_pc, __func__, inst.word);
+
+		kvm_arch_vcpu_dump_regs(vcpu);
+		vcpu->arch.pc = curr_pc;
+	}
+	return er;
+}
+
+/*
+ * Executing the cpucfg instruction will trigger a GSPR exception,
+ * as will accesses to the unimplemented CSRs 0x15, 0x16, 0x50~0x53,
+ * 0x80, 0x81, 0x90~0x95, 0x98, 0xc0~0xff, 0x100~0x109, 0x500~0x502,
+ * and the cache_op, idle_op and iocsr ops likewise.
+ */
+static int _kvm_handle_gspr(struct kvm_vcpu *vcpu)
+{
+	enum emulation_result er = EMULATE_DONE;
+	int ret = RESUME_GUEST;
+
+	er = _kvm_trap_handle_gspr(vcpu);
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else if (er == EMULATE_DO_MMIO) {
+		vcpu->run->exit_reason = KVM_EXIT_MMIO;
+		ret = RESUME_HOST;
+	} else if (er == EMULATE_DO_IOCSR) {
+		vcpu->run->exit_reason = KVM_EXIT_LOONGARCH_IOCSR;
+		ret = RESUME_HOST;
+	} else {
+		kvm_err("%s internal error\n", __func__);
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+	return ret;
+}

From patchwork Tue Feb 14 02:56:43 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56621
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 19/24] LoongArch: KVM: Implement handle mmio exception
Date: Tue, 14 Feb 2023 10:56:43 +0800
Message-Id: <20230214025648.1898508-20-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Implement handling of the MMIO exception: set the MMIO info in vcpu_run and return to user space to handle it.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/exit.c | 308 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 308 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 75c61272b..30e64ba72 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -204,6 +204,265 @@ int _kvm_emu_idle(struct kvm_vcpu *vcpu)
 	return EMULATE_DONE;
 }
 
+int _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	struct kvm_run *run = vcpu->run;
+	unsigned int rd, op8, opcode;
+	unsigned long rd_val = 0;
+	void *data = run->mmio.data;
+	unsigned long curr_pc;
+	int ret;
+
+	/*
+	 * Update PC and hold onto current PC in case there is
+	 * an error and we want to roll back the PC
+	 */
+	curr_pc = vcpu->arch.pc;
+	update_pc(&vcpu->arch);
+
+	op8 = (inst.word >> 24) & 0xff;
+	run->mmio.phys_addr = vcpu->arch.badv;
+	ret = EMULATE_DO_MMIO;
+	if (op8 < 0x28) {
+		/* stptr.w/d process */
+		rd = inst.reg2i14_format.rd;
+		opcode = inst.reg2i14_format.opcode;
+
+		switch (opcode) {
+		case stptrd_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = vcpu->arch.gprs[rd];
+			break;
+		case stptrw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = vcpu->arch.gprs[rd];
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 < 0x30) {
+		/* st.b/h/w/d process */
+		rd = inst.reg2i12_format.rd;
+		opcode = inst.reg2i12_format.opcode;
+		rd_val = vcpu->arch.gprs[rd];
+
+		switch (opcode) {
+		case std_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = rd_val;
+			break;
+		case stw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = rd_val;
+			break;
+		case sth_op:
+			run->mmio.len = 2;
+			*(unsigned short *)data = rd_val;
+			break;
+		case stb_op:
+			run->mmio.len = 1;
+			*(unsigned char *)data = rd_val;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 == 0x38) {
+		/* stx.b/h/w/d process */
+		rd = inst.reg3_format.rd;
+		opcode = inst.reg3_format.opcode;
+
+		switch (opcode) {
+		case stxb_op:
+			run->mmio.len = 1;
+			*(unsigned char *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxh_op:
+			run->mmio.len = 2;
+			*(unsigned short *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxd_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = vcpu->arch.gprs[rd];
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else
+		ret = EMULATE_FAIL;
+
+	if (ret == EMULATE_DO_MMIO) {
+		run->mmio.is_write = 1;
+		vcpu->mmio_needed = 1;
+		vcpu->mmio_is_write = 1;
+	} else {
+		/* Roll back the PC if emulation was unsuccessful */
+		vcpu->arch.pc = curr_pc;
+		kvm_err("Write not supported inst=0x%08x @%lx BadVaddr:%#lx\n",
+			inst.word, vcpu->arch.pc, vcpu->arch.badv);
+		kvm_arch_vcpu_dump_regs(vcpu);
+	}
+
+	return ret;
+}
+
+int _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	unsigned int op8, opcode, rd;
+	struct kvm_run *run = vcpu->run;
+	int ret;
+
+	run->mmio.phys_addr = vcpu->arch.badv;
+	vcpu->mmio_needed = 2;	/* signed */
+	op8 = (inst.word >> 24) & 0xff;
+	ret = EMULATE_DO_MMIO;
+
+	if (op8 < 0x28) {
+		/* ldptr.w/d process */
+		rd = inst.reg2i14_format.rd;
+		opcode = inst.reg2i14_format.opcode;
+
+		switch (opcode) {
+		case ldptrd_op:
+			run->mmio.len = 8;
+			break;
+		case ldptrw_op:
+			run->mmio.len = 4;
+			break;
+		default:
+			break;
+		}
+	} else if (op8 < 0x2f) {
+		/* ld.b/h/w/d, ld.bu/hu/wu process */
+		rd = inst.reg2i12_format.rd;
+		opcode = inst.reg2i12_format.opcode;
+
+		switch (opcode) {
+		case ldd_op:
+			run->mmio.len = 8;
+			break;
+		case ldwu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 4;
+			break;
+		case ldw_op:
+			run->mmio.len = 4;
+			break;
+		case ldhu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 2;
+			break;
+		case ldh_op:
+			run->mmio.len = 2;
+			break;
+		case ldbu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 1;
+			break;
+		case ldb_op:
+			run->mmio.len = 1;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 == 0x38) {
+		/* ldxb/h/w/d, ldxb/h/wu, ldgtb/h/w/d, ldleb/h/w/d process */
+		rd = inst.reg3_format.rd;
+		opcode = inst.reg3_format.opcode;
+
+		switch (opcode) {
+		case ldxb_op:
+			run->mmio.len = 1;
+			break;
+		case ldxbu_op:
+			run->mmio.len = 1;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxh_op:
+			run->mmio.len = 2;
+			break;
+		case ldxhu_op:
+			run->mmio.len = 2;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxw_op:
+			run->mmio.len = 4;
+			break;
+		case ldxwu_op:
+			run->mmio.len = 4;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxd_op:
+			run->mmio.len = 8;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else
+		ret = EMULATE_FAIL;
+
+	if (ret == EMULATE_DO_MMIO) {
+		/* Set for _kvm_complete_mmio_read use */
+		vcpu->arch.io_gpr = rd;
+		run->mmio.is_write = 0;
+		vcpu->mmio_is_write = 0;
+	} else {
+		kvm_err("Load not supported inst=0x%08x @%lx BadVaddr:%#lx\n",
+			inst.word, vcpu->arch.pc, vcpu->arch.badv);
+		kvm_arch_vcpu_dump_regs(vcpu);
+		vcpu->mmio_needed = 0;
+	}
+	return ret;
+}
+
+int _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+	enum emulation_result er = EMULATE_DONE;
+
+	/* update with new PC */
+	update_pc(&vcpu->arch);
+	switch (run->mmio.len) {
+	case 8:
+		*gpr = *(s64 *)run->mmio.data;
+		break;
+	case 4:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(int *)run->mmio.data;
+		else
+			*gpr = *(unsigned int *)run->mmio.data;
+		break;
+	case 2:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(short *)run->mmio.data;
+		else
+			*gpr = *(unsigned short *)run->mmio.data;
+		break;
+	case 1:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(char *)run->mmio.data;
+		else
+			*gpr = *(unsigned char *)run->mmio.data;
+		break;
+	default:
+		kvm_err("Bad MMIO length: %d, addr is 0x%lx",
+			run->mmio.len, vcpu->arch.badv);
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	return er;
+}
+
 static int _kvm_trap_handle_gspr(struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
@@ -317,3 +576,52 @@ static int _kvm_handle_gspr(struct kvm_vcpu *vcpu)
 	}
 	return ret;
 }
+
+static int _kvm_handle_mmu_fault(struct kvm_vcpu *vcpu, bool write)
+{
+	struct kvm_run *run = vcpu->run;
+	unsigned long badv = vcpu->arch.badv;
+	larch_inst inst;
+	enum emulation_result er = EMULATE_DONE;
+	int ret;
+
+	ret = kvm_handle_mm_fault(vcpu, badv, write);
+	if (ret) {
+		/* Treat as MMIO */
+		inst.word = vcpu->arch.badi;
+		if (write) {
+			er = _kvm_emu_mmio_write(vcpu, inst);
+		} else {
+			/* A code fetch fault doesn't count as an MMIO */
+			if (kvm_is_ifetch_fault(&vcpu->arch)) {
+				kvm_err("%s ifetch error addr:%lx\n", __func__, badv);
+				run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+				return RESUME_HOST;
+			}
+
+			er = _kvm_emu_mmio_read(vcpu, inst);
+		}
+	}
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else if (er == EMULATE_DO_MMIO) {
+		run->exit_reason = KVM_EXIT_MMIO;
+		ret = RESUME_HOST;
+	} else {
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+
+	return ret;
+}
+
+static int _kvm_handle_write_fault(struct kvm_vcpu *vcpu)
+{
+	return _kvm_handle_mmu_fault(vcpu, true);
+}
+
+static int _kvm_handle_read_fault(struct kvm_vcpu *vcpu)
+{
+	return _kvm_handle_mmu_fault(vcpu, false);
+}

From patchwork Tue Feb 14 02:56:44 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56620
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 20/24] LoongArch: KVM: Implement handle fpu exception
Date: Tue, 14 Feb 2023 10:56:44 +0800
Message-Id: <20230214025648.1898508-21-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement handling of the FPU-disabled exception, using kvm_own_fpu() to enable the FPU for the guest.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/exit.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 30e64ba72..89c90faa1 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -625,3 +625,30 @@ static int _kvm_handle_read_fault(struct kvm_vcpu *vcpu)
 {
 	return _kvm_handle_mmu_fault(vcpu, false);
 }
+
+/**
+ * _kvm_handle_fpu_disabled() - Guest used fpu however it is disabled at host
+ * @vcpu:	Virtual CPU context.
+ *
+ * Handle when the guest attempts to use fpu which hasn't been allowed
+ * by the root context.
+ */
+static int _kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu)
+{
+	struct kvm_run *run = vcpu->run;
+
+	/*
+	 * If guest FPU not present, the FPU operation should have been
+	 * treated as a reserved instruction!
+	 * If FPU already in use, we shouldn't get this at all.
+	 */
+	if (WARN_ON(!_kvm_guest_has_fpu(&vcpu->arch) ||
+		    vcpu->arch.aux_inuse & KVM_LARCH_FPU)) {
+		kvm_err("%s internal error\n", __func__);
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		return RESUME_HOST;
+	}
+
+	kvm_own_fpu(vcpu);
+	return RESUME_GUEST;
+}

From patchwork Tue Feb 14 02:56:45 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56626
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 21/24] LoongArch: KVM: Implement kvm exception vector
Date: Tue, 14 Feb 2023 10:56:45 +0800
Message-Id: <20230214025648.1898508-22-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement the KVM exception vector: the _kvm_fault_tables array stores the exit handler function pointers, and it is consulted when a vcpu handles an exit.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kvm/exit.c | 48 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 89c90faa1..6fd3219bb 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -652,3 +652,51 @@ static int _kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu)
 	kvm_own_fpu(vcpu);
 	return RESUME_GUEST;
 }
+
+/*
+ * LoongArch KVM callback handling for not-implemented guest exiting
+ */
+static int _kvm_fault_ni(struct kvm_vcpu *vcpu)
+{
+	unsigned long estat, badv;
+	unsigned int exccode, inst;
+
+	/*
+	 * Fetch the instruction.
+	 */
+	badv = vcpu->arch.badv;
+	estat = vcpu->arch.host_estat;
+	exccode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+	inst = vcpu->arch.badi;
+	kvm_err("Exccode: %d PC=%#lx inst=0x%08x BadVaddr=%#lx estat=%#llx\n",
+		exccode, vcpu->arch.pc, inst, badv, read_gcsr_estat());
+	kvm_arch_vcpu_dump_regs(vcpu);
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+
+	return RESUME_HOST;
+}
+
+static exit_handle_fn _kvm_fault_tables[EXCCODE_INT_START] = {
+	[EXCCODE_TLBL]	= _kvm_handle_read_fault,
+	[EXCCODE_TLBI]	= _kvm_handle_read_fault,
+	[EXCCODE_TLBNR]	= _kvm_handle_read_fault,
+	[EXCCODE_TLBNX]	= _kvm_handle_read_fault,
+	[EXCCODE_TLBS]	= _kvm_handle_write_fault,
+	[EXCCODE_TLBM]	= _kvm_handle_write_fault,
+	[EXCCODE_FPDIS]	= _kvm_handle_fpu_disabled,
+	[EXCCODE_GSPR]	= _kvm_handle_gspr,
+};
+
+int _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault)
+{
+	return _kvm_fault_tables[fault](vcpu);
+}
+
+void _kvm_init_fault(void)
+{
+	int i;
+
+	for (i = 0; i < EXCCODE_INT_START; i++)
+		if (!_kvm_fault_tables[i])
+			_kvm_fault_tables[i] = _kvm_fault_ni;
+}

From patchwork Tue Feb 14 02:56:46 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56630
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 22/24] LoongArch: KVM: Implement vcpu world switch
Date: Tue, 14 Feb 2023 10:56:46 +0800
Message-Id: <20230214025648.1898508-23-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
Implement the LoongArch vcpu world switch, covering both guest entry and guest exit; each direction saves and restores the host and guest registers.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kernel/asm-offsets.c |  32 +++
 arch/loongarch/kvm/switch.S         | 327 ++++++++++++++++++++++++++++
 2 files changed, 359 insertions(+)
 create mode 100644 arch/loongarch/kvm/switch.S

diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
index 4bdb203fc..655741c03 100644
--- a/arch/loongarch/kernel/asm-offsets.c
+++ b/arch/loongarch/kernel/asm-offsets.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -272,3 +273,34 @@ void output_pbe_defines(void)
 	BLANK();
 }
 #endif
+
+void output_kvm_defines(void)
+{
+	COMMENT(" KVM/LOONGARCH Specific offsets. ");
+
+	OFFSET(VCPU_FCSR0, kvm_vcpu_arch, fpu.fcsr);
+	OFFSET(VCPU_FCC, kvm_vcpu_arch, fpu.fcc);
+	BLANK();
+
+	OFFSET(KVM_VCPU_ARCH, kvm_vcpu, arch);
+	OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm);
+	OFFSET(KVM_VCPU_RUN, kvm_vcpu, run);
+	BLANK();
+
+	OFFSET(KVM_ARCH_HSTACK, kvm_vcpu_arch, host_stack);
+	OFFSET(KVM_ARCH_HGP, kvm_vcpu_arch, host_gp);
+	OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit);
+	OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd);
+	OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry);
+	OFFSET(KVM_ARCH_GPC, kvm_vcpu_arch, pc);
+	OFFSET(KVM_ARCH_GGPR, kvm_vcpu_arch, gprs);
+	OFFSET(KVM_ARCH_HESTAT, kvm_vcpu_arch, host_estat);
+	OFFSET(KVM_ARCH_HBADV, kvm_vcpu_arch, badv);
+	OFFSET(KVM_ARCH_HBADI, kvm_vcpu_arch, badi);
+	OFFSET(KVM_ARCH_HECFG, kvm_vcpu_arch, host_ecfg);
+	OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry);
+	OFFSET(KVM_ARCH_HPERCPU, kvm_vcpu_arch, host_percpu);
+
+	OFFSET(KVM_GPGD, kvm, arch.gpa_mm.pgd);
+	BLANK();
+}
diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S
new file mode 100644
index 000000000..c0b8062ac
--- /dev/null
+++ b/arch/loongarch/kvm/switch.S
@@ -0,0 +1,327 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define RESUME_HOST	(1 << 1)
+
+#define PT_GPR_OFFSET(x)	(PT_R0 + 8*x)
+#define CONFIG_GUEST_CRMD	((1 << CSR_CRMD_DACM_SHIFT) | \
+				 (1 << CSR_CRMD_DACF_SHIFT) | \
+				 CSR_CRMD_PG | PLV_KERN)
+	.text
+
+.macro kvm_save_host_gpr base
+	.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
+	st.d	$r\n, \base, PT_GPR_OFFSET(\n)
+	.endr
+.endm
+
+.macro kvm_restore_host_gpr base
+	.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
+	ld.d	$r\n, \base, PT_GPR_OFFSET(\n)
+	.endr
+.endm
+
+/*
+ * prepare switch to guest
+ * @param:
+ *  KVM_ARCH: kvm_vcpu_arch, don't touch it until 'ertn'
+ *  GPRNUM: KVM_ARCH gpr number
+ *  tmp, tmp1: temp register
+ */
+.macro kvm_switch_to_guest KVM_ARCH GPRNUM tmp tmp1
+	/* set host excfg.VS=0, all exceptions share one exception entry */
+	csrrd	\tmp, LOONGARCH_CSR_ECFG
+	bstrins.w	\tmp, zero, CSR_ECFG_VS_SHIFT_END, CSR_ECFG_VS_SHIFT
+	csrwr	\tmp, LOONGARCH_CSR_ECFG
+
+	/* Load up the new EENTRY */
+	ld.d	\tmp, \KVM_ARCH, KVM_ARCH_GEENTRY
+	csrwr	\tmp, LOONGARCH_CSR_EENTRY
+
+	/* Set Guest ERA */
+	ld.d	\tmp, \KVM_ARCH, KVM_ARCH_GPC
+	csrwr	\tmp, LOONGARCH_CSR_ERA
+
+	/* Save host PGDL */
+	csrrd	\tmp, LOONGARCH_CSR_PGDL
+	st.d	\tmp, \KVM_ARCH, KVM_ARCH_HPGD
+
+	/* Switch to kvm */
+	ld.d	\tmp1, \KVM_ARCH, KVM_VCPU_KVM - KVM_VCPU_ARCH
+
+	/* Load guest PGDL */
+	lu12i.w	\tmp, KVM_GPGD
+	srli.w	\tmp, \tmp, 12
+	ldx.d	\tmp, \tmp1, \tmp
+	csrwr	\tmp, LOONGARCH_CSR_PGDL
+
+	/* Mix GID and RID */
+	csrrd	\tmp1, LOONGARCH_CSR_GSTAT
+	bstrpick.w	\tmp1, \tmp1, CSR_GSTAT_GID_SHIFT_END, CSR_GSTAT_GID_SHIFT
+	csrrd	\tmp, LOONGARCH_CSR_GTLBC
+	bstrins.w	\tmp, \tmp1, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
+	csrwr	\tmp, LOONGARCH_CSR_GTLBC
+
+	/*
+	 * Switch to guest:
+	 *  GSTAT.PGM = 1, ERRCTL.ISERR = 0, TLBRPRMD.ISTLBR = 0
+	 *  ertn
+	 */
+
+	/* Prepare enable Intr before enter guest */
+	ori	\tmp, zero, CSR_PRMD_PIE
+	csrxchg	\tmp, \tmp, LOONGARCH_CSR_PRMD
+
+	/* Set PVM bit to setup ertn to guest context */
+	ori	\tmp, zero, CSR_GSTAT_PVM
+	csrxchg	\tmp, \tmp, LOONGARCH_CSR_GSTAT
+
+	/* Load Guest gprs */
+	ld.d	$r1, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 1)
+	ld.d	$r2, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 2)
+	ld.d	$r3, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 3)
+	ld.d	$r4, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 4)
+	ld.d	$r5, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 5)
+	ld.d	$r7, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 7)
+	ld.d	$r8, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 8)
+	ld.d	$r9, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 9)
+	ld.d	$r10, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 10)
+	ld.d	$r11, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 11)
+	ld.d	$r12, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 12)
+	ld.d	$r13, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 13)
+	ld.d	$r14, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 14)
+	ld.d	$r15, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 15)
+	ld.d	$r16, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 16)
+	ld.d	$r17, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 17)
+	ld.d	$r18, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 18)
+	ld.d	$r19, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 19)
+	ld.d	$r20, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 20)
+	ld.d	$r21, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 21)
+	ld.d	$r22, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 22)
+	ld.d	$r23, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 23)
+	ld.d	$r24, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 24)
+	ld.d	$r25, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 25)
+	ld.d	$r26, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 26)
+	ld.d	$r27, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 27)
+	ld.d	$r28, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 28)
+	ld.d	$r29, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 29)
+	ld.d	$r30, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 30)
+	ld.d	$r31, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * 31)
+	/* Load KVM_ARCH register */
+	ld.d	\KVM_ARCH, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * \GPRNUM)
+
+	ertn
+.endm
+
+/* load kvm_vcpu to a2 and store a1 for free use */
+	.section .text
+	.cfi_sections	.debug_frame
+SYM_CODE_START(kvm_vector_entry)
+	csrwr	a2, KVM_TEMP_KS
+	csrrd	a2, KVM_VCPU_KS
+	addi.d	a2, a2, KVM_VCPU_ARCH
+
+	/* After save gprs, free to use any gpr */
+	st.d	$r1, a2, (KVM_ARCH_GGPR + 8 * 1)
+	st.d	$r2, a2, (KVM_ARCH_GGPR + 8 * 2)
+	st.d	$r3, a2, (KVM_ARCH_GGPR + 8 * 3)
+	st.d	$r4, a2, (KVM_ARCH_GGPR + 8 * 4)
+	st.d	$r5, a2, (KVM_ARCH_GGPR + 8 * 5)
+	st.d	$r7, a2, (KVM_ARCH_GGPR + 8 * 7)
+	st.d	$r8, a2, (KVM_ARCH_GGPR + 8 * 8)
+	st.d	$r9, a2, (KVM_ARCH_GGPR + 8 * 9)
+	st.d	$r10, a2, (KVM_ARCH_GGPR + 8 * 10)
+	st.d	$r11, a2, (KVM_ARCH_GGPR + 8 * 11)
+	st.d	$r12, a2, (KVM_ARCH_GGPR + 8 * 12)
+	st.d	$r13, a2, (KVM_ARCH_GGPR + 8 * 13)
+	st.d	$r14, a2, (KVM_ARCH_GGPR + 8 * 14)
+	st.d	$r15, a2, (KVM_ARCH_GGPR + 8 * 15)
+	st.d	$r16, a2, (KVM_ARCH_GGPR + 8 * 16)
+	st.d	$r17, a2, (KVM_ARCH_GGPR + 8 * 17)
+	st.d	$r18, a2, (KVM_ARCH_GGPR + 8 * 18)
+	st.d	$r19, a2, (KVM_ARCH_GGPR + 8 * 19)
+	st.d	$r20, a2, (KVM_ARCH_GGPR + 8 * 20)
+	st.d	$r21, a2, (KVM_ARCH_GGPR + 8 * 21)
+	st.d	$r22, a2, (KVM_ARCH_GGPR + 8 * 22)
+	st.d	$r23, a2, (KVM_ARCH_GGPR + 8 * 23)
+	st.d	$r24, a2, (KVM_ARCH_GGPR + 8 * 24)
+	st.d	$r25, a2, (KVM_ARCH_GGPR + 8 * 25)
+	st.d	$r26, a2, (KVM_ARCH_GGPR + 8 * 26)
+	st.d	$r27, a2, (KVM_ARCH_GGPR + 8 * 27)
+	st.d	$r28, a2, (KVM_ARCH_GGPR + 8 * 28)
+	st.d	$r29, a2, (KVM_ARCH_GGPR + 8 * 29)
+	st.d	$r30, a2, (KVM_ARCH_GGPR + 8 * 30)
+	st.d	$r31, a2, (KVM_ARCH_GGPR + 8 * 31)
+	/* Save guest a2 */
+	csrrd	t0, KVM_TEMP_KS
+	st.d	t0, a2, (KVM_ARCH_GGPR + 8 * REG_A2)
+
+	/* a2: kvm_vcpu_arch, a1 is free to use */
+	csrrd	s1, KVM_VCPU_KS
+	ld.d	s0, s1, KVM_VCPU_RUN
+
+	csrrd	t0, LOONGARCH_CSR_ESTAT
+	st.d	t0, a2, KVM_ARCH_HESTAT
+	csrrd	t0, LOONGARCH_CSR_ERA
+	st.d	t0, a2, KVM_ARCH_GPC
+	csrrd	t0, LOONGARCH_CSR_BADV
+	st.d	t0, a2, KVM_ARCH_HBADV
+	csrrd	t0, LOONGARCH_CSR_BADI
+	st.d	t0, a2, KVM_ARCH_HBADI
+
+	/* Restore host excfg.VS */
+	csrrd	t0, LOONGARCH_CSR_ECFG
+	ld.d	t1, a2, KVM_ARCH_HECFG
+	or	t0, t0, t1
+	csrwr	t0, LOONGARCH_CSR_ECFG
+
+	/* Restore host eentry */
+	ld.d	t0, a2, KVM_ARCH_HEENTRY
+	csrwr	t0, LOONGARCH_CSR_EENTRY
+
+#if defined(CONFIG_CPU_HAS_FPU)
+	/* Save FPU context */
+	csrrd	t0, LOONGARCH_CSR_EUEN
+	ori	t1, zero, CSR_EUEN_FPEN
+	and	t2, t0, t1
+	beqz	t2, 1f
+	movfcsr2gr	t3, fcsr0
+	st.d	t3, a2, VCPU_FCSR0
+
+	movcf2gr	t3,
$fcc0 + or t2, t3, zero + movcf2gr t3, $fcc1 + bstrins.d t2, t3, 0xf, 0x8 + movcf2gr t3, $fcc2 + bstrins.d t2, t3, 0x17, 0x10 + movcf2gr t3, $fcc3 + bstrins.d t2, t3, 0x1f, 0x18 + movcf2gr t3, $fcc4 + bstrins.d t2, t3, 0x27, 0x20 + movcf2gr t3, $fcc5 + bstrins.d t2, t3, 0x2f, 0x28 + movcf2gr t3, $fcc6 + bstrins.d t2, t3, 0x37, 0x30 + movcf2gr t3, $fcc7 + bstrins.d t2, t3, 0x3f, 0x38 + st.d t2, a2, VCPU_FCC + movgr2fcsr fcsr0, zero +1: +#endif + ld.d t0, a2, KVM_ARCH_HPGD + csrwr t0, LOONGARCH_CSR_PGDL + + /* Disable PVM bit for keeping from into guest */ + ori t0, zero, CSR_GSTAT_PVM + csrxchg zero, t0, LOONGARCH_CSR_GSTAT + /* Clear GTLBC.TGID field */ + csrrd t0, LOONGARCH_CSR_GTLBC + bstrins.w t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT + csrwr t0, LOONGARCH_CSR_GTLBC + /* Enable Address Map mode */ + ori t0, zero, CONFIG_GUEST_CRMD + csrwr t0, LOONGARCH_CSR_CRMD + ld.d tp, a2, KVM_ARCH_HGP + ld.d sp, a2, KVM_ARCH_HSTACK + /* restore per cpu register */ + ld.d $r21, a2, KVM_ARCH_HPERCPU + addi.d sp, sp, -PT_SIZE + + /* Prepare handle exception */ + or a0, s0, zero + or a1, s1, zero + ld.d t8, a2, KVM_ARCH_HANDLE_EXIT + jirl ra,t8, 0 + + ori t0, zero, CSR_CRMD_IE + csrxchg zero, t0, LOONGARCH_CSR_CRMD + or a2, s1, zero + addi.d a2, a2, KVM_VCPU_ARCH + + andi t0, a0, RESUME_HOST + bnez t0, ret_to_host + + /* + * return to guest + * save per cpu register again, maybe switched to another cpu + */ + st.d $r21, a2, KVM_ARCH_HPERCPU + + /* Save kvm_vcpu to kscratch */ + csrwr s1, KVM_VCPU_KS + kvm_switch_to_guest a2 REG_A2 t0 t1 + +ret_to_host: + ld.d a2, a2, KVM_ARCH_HSTACK + addi.d a2, a2, -PT_SIZE + srai.w a3, a0, 2 + or a0, a3, zero + kvm_restore_host_gpr a2 + jirl zero, ra, 0 +SYM_CODE_END(kvm_vector_entry) +kvm_vector_entry_end: + +/* + * int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu) + * + * @register_param: + * a0: kvm_run* run + * a1: kvm_vcpu* vcpu + */ +SYM_FUNC_START(kvm_enter_guest) + /* allocate space in stack bottom */ + 
addi.d a2, sp, -PT_SIZE + /* save host gprs */ + kvm_save_host_gpr a2 + + /* save host crmd,prmd csr to stack */ + csrrd a3, LOONGARCH_CSR_CRMD + st.d a3, a2, PT_CRMD + csrrd a3, LOONGARCH_CSR_PRMD + st.d a3, a2, PT_PRMD + + addi.d a2, a1, KVM_VCPU_ARCH + st.d sp, a2, KVM_ARCH_HSTACK + st.d tp, a2, KVM_ARCH_HGP + /* Save per cpu register */ + st.d $r21, a2, KVM_ARCH_HPERCPU + + /* Save kvm_vcpu to kscratch */ + csrwr a1, KVM_VCPU_KS + kvm_switch_to_guest a2 REG_A2 t0 t1 +SYM_FUNC_END(kvm_enter_guest) +kvm_enter_guest_end: + + .section ".rodata" +SYM_DATA(kvm_vector_size, + .quad kvm_vector_entry_end - kvm_vector_entry) +SYM_DATA(kvm_enter_guest_size, + .quad kvm_enter_guest_end - kvm_enter_guest) + + +SYM_FUNC_START(kvm_save_fpu) + fpu_save_double a0 t1 + jirl zero, ra, 0 +SYM_FUNC_END(kvm_save_fpu) + +SYM_FUNC_START(kvm_restore_fpu) + fpu_restore_double a0 t1 + jirl zero, ra, 0 +SYM_FUNC_END(kvm_restore_fpu) + +SYM_FUNC_START(kvm_restore_fcsr) + fpu_restore_csr a0 t1 + fpu_restore_cc a0 t1 t2 + + jirl zero, ra, 0 +SYM_FUNC_END(kvm_restore_fcsr) From patchwork Tue Feb 14 02:56:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 56628 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp2723937wrn; Mon, 13 Feb 2023 19:00:06 -0800 (PST) X-Google-Smtp-Source: AK7set+TNaEL8uOy6w38MjlVl3yVhPpaXW/kF7u3emevUgUq0OJyXorTbmiOidE6iMuOfcEHFVJd X-Received: by 2002:a17:906:231a:b0:8b1:fc:b1b0 with SMTP id l26-20020a170906231a00b008b100fcb1b0mr1230587eja.44.1676343606344; Mon, 13 Feb 2023 19:00:06 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1676343606; cv=none; d=google.com; s=arc-20160816; b=xdDbiBXoJ15iMfZFEbl/tJ26XPTlnTZ5QJycex+hLSH3r3HQD4DQCdqIHFffVSuYRv u8Ah5MePypjQ83H0ec+TdImf+xKWMbQbPA8jTSALVBQJiBDJBbjw1zGO5facqbZUxwji qc2Dplj9pn+gr6XvDct7qPEmST9nAhSfB2eeIDAGppHRHrbUaJNQ10Wd39CJiALZNYhp 
MgqpZvmQ8q40mN87DMBDDqlbIxG/vJPn1WrP4409A8mxSt4skJwqFn9oiNDczg3BVLSr V4KQPzSgfjkBQOIT2dVYyL2hf1blx4YCY3acu+3TF09EPUM/C6r9prFSZV8kYjiLRg/O rkpQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=9eAZapGXnY+eAx0bNfP92DFASPVlUHnyygoK6Tfp65o=; b=Ax8Jg9feNx+RXJvz6qcZTEeHWZ1RK4txISDIteCiXH/XyRCsZ7qV0cBPJgIIeLquWx 29GkzCwC5Rmt+ScXky0CgZ8YNbnVko6pjCygdcGur3/0+PhWZ+6um+dc+f54HgTgGsji TsTUdu6jgQEHl6+7H+v89IgN+KU4JXb5ms73bskVO+GwMH6k31MN1LaKsounC6ZvRGKi 4TZBElUg1UVhLhD0NrBOwebuoZSZHwIiJlzec0cGfXn7cZL3EkNgQMGg+wVvc1+iClwJ YWy4WKhISy7C3FnY8t+AhHXMAf+QxMpzlRi7YRlix/QDZwXl6a2/lnJ/EgWcNqsUoyYK DAYQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id 26-20020a17090600da00b0084c7b0977b6si24161381eji.852.2023.02.13.18.59.43; Mon, 13 Feb 2023 19:00:06 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231530AbjBNC5u (ORCPT + 99 others); Mon, 13 Feb 2023 21:57:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54840 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231315AbjBNC5B (ORCPT ); Mon, 13 Feb 2023 21:57:01 -0500 Received: from loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 512021B30D; Mon, 13 Feb 2023 18:56:59 -0800 (PST) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8Cxxth6+Opj1VcAAA--.368S3; Tue, 14 Feb 2023 10:56:58 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Axeb1w+OpjmZwyAA--.28802S25; Tue, 14 Feb 2023 10:56:57 +0800 (CST) From: Tianrui Zhao To: Paolo Bonzini Cc: Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe , Mark Brown , Alex Deucher Subject: [PATCH v1 23/24] LoongArch: KVM: Implement probe virtualization when loongarch cpu init Date: Tue, 14 Feb 2023 10:56:47 +0800 Message-Id: <20230214025648.1898508-24-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn> References: <20230214025648.1898508-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: 
AQAAf8Axeb1w+OpjmZwyAA--.28802S25 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjvJXoW7Zw1fCFWfZw1DJr47ZrW5KFg_yoW8tFy5pr W2vFW3trWUKr92ga93Gr1agrnxtFWkKa129F47tayfAr4Ut3W5Xwn3C34UCFs7Zw4xAryr Xrn7A3WvqF1DX3JanT9S1TB71UUUUjJqnTZGkaVYY2UrUUUUj1kv1TuYvTs0mT0YCTnIWj qI5I8CrVACY4xI64kE6c02F40Ex7xfYxn0WfASr-VFAUDa7-sFnT9fnUUIcSsGvfJTRUUU bxxFc2x0x2IEx4CE42xK8VAvwI8IcIk0rVWrJVCq3wA2ocxC64kIII0Yj41l84x0c7CEw4 AK67xGY2AK021l84ACjcxK6xIIjxv20xvE14v26w1j6s0DM28EF7xvwVC0I7IYx2IY6xkF 7I0E14v26r4UJVWxJr1l84ACjcxK6I8E87Iv67AKxVW8Jr0_Cr1UM28EF7xvwVC2z280aV CY1x0267AKxVW8Jr0_Cr1UM2kKe7AKxVWUAVWUtwAS0I0E0xvYzxvE52x082IY62kv0487 Mc804VCY07AIYIkI8VC2zVCFFI0UMc02F40EFcxC0VAKzVAqx4xG6I80ewAv7VCjz48v1s IEY20_WwAm72CE4IkC6x0Yz7v_Jr0_Gr1lF7xvr2IYc2Ij64vIr41lc7CjxVAaw2AFwI0_ JF0_Jw1l42xK82IYc2Ij64vIr41l42xK82IY6x8ErcxFaVAv8VWrMxC20s026xCaFVCjc4 AY6r1j6r4UMxCIbckI1I0E14v26r126r1DMI8I3I0E5I8CrVAFwI0_Jr0_Jr4lx2IqxVCj r7xvwVAFwI0_JrI_JrWlx4CE17CEb7AF67AKxVWUtVW8ZwCIc40Y0x0EwIxGrwCI42IY6x IIjxv20xvE14v26w1j6s0DMIIF0xvE2Ix0cI8IcVCY1x0267AKxVW8Jr0_Cr1UMIIF0xvE 42xK8VAvwI8IcIk0rVWUJVWUCwCI42IY6I8E87Iv67AKxVW8Jr0_Cr1UMIIF0xvEx4A2js IEc7CjxVAFwI0_Gr1j6F4UJbIYCTnIWIevJa73UjIFyTuYvj4RKpBTUUUUU X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_PASS, SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1757773673581630386?= X-GMAIL-MSGID: =?utf-8?q?1757773673581630386?= Implement probe virtualization when loongarch cpu init, including guest gid info, guest fpu info, etc. 
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/kernel/cpu-probe.c | 53 +++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/arch/loongarch/kernel/cpu-probe.c b/arch/loongarch/kernel/cpu-probe.c
index 3a3fce2d7..9c3483d9a 100644
--- a/arch/loongarch/kernel/cpu-probe.c
+++ b/arch/loongarch/kernel/cpu-probe.c
@@ -176,6 +176,57 @@ static void cpu_probe_common(struct cpuinfo_loongarch *c)
 	}
 }
 
+static inline void cpu_probe_guestinfo(struct cpuinfo_loongarch *c)
+{
+	unsigned long guestinfo;
+
+	guestinfo = read_csr_gstat();
+	if (guestinfo & CSR_GSTAT_GIDBIT) {
+		c->options |= LOONGARCH_CPU_GUESTID;
+		write_csr_gstat(0);
+	}
+}
+
+static inline void cpu_probe_lvz(struct cpuinfo_loongarch *c)
+{
+	unsigned long gcfg, gprcfg1;
+
+	cpu_probe_guestinfo(c);
+
+	c->guest.options |= LOONGARCH_CPU_FPU;
+	c->guest.options_dyn |= LOONGARCH_CPU_FPU;
+	c->guest.options_dyn |= LOONGARCH_CPU_PMP;
+
+	c->guest.ases |= LOONGARCH_CPU_LSX;
+	c->guest.ases_dyn |= LOONGARCH_CPU_LSX;
+	gprcfg1 = read_gcsr_prcfg1();
+	c->guest.kscratch_mask = GENMASK((gprcfg1 & CSR_CONF1_KSNUM) - 1, 0);
+
+	gcfg = read_csr_gcfg();
+	if (gcfg & CSR_GCFG_MATP_GUEST)
+		c->guest_cfg |= BIT(0);
+	if (gcfg & CSR_GCFG_MATP_ROOT)
+		c->guest_cfg |= BIT(1);
+	if (gcfg & CSR_GCFG_MATP_NEST)
+		c->guest_cfg |= BIT(2);
+	if (gcfg & CSR_GCFG_SITP)
+		c->guest_cfg |= BIT(6);
+	if (gcfg & CSR_GCFG_TITP)
+		c->guest_cfg |= BIT(8);
+	if (gcfg & CSR_GCFG_TOEP)
+		c->guest_cfg |= BIT(10);
+	if (gcfg & CSR_GCFG_TOPP)
+		c->guest_cfg |= BIT(12);
+	if (gcfg & CSR_GCFG_TORUP)
+		c->guest_cfg |= BIT(14);
+	if (gcfg & CSR_GCFG_GCIP_ALL)
+		c->guest_cfg |= BIT(16);
+	if (gcfg & CSR_GCFG_GCIP_HIT)
+		c->guest_cfg |= BIT(17);
+	if (gcfg & CSR_GCFG_GCIP_SECURE)
+		c->guest_cfg |= BIT(18);
+}
+
 #define MAX_NAME_LEN	32
 #define VENDOR_OFFSET	0
 #define CPUNAME_OFFSET	9
@@ -289,6 +340,8 @@ void cpu_probe(void)
 	if (cpu == 0)
 		__ua_limit = ~((1ull << cpu_vabits) - 1);
 #endif
+	if (cpu_has_lvz)
+		cpu_probe_lvz(c);
 	cpu_report();
 }
From patchwork Tue Feb 14 02:56:48 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 56629
From: Tianrui Zhao
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown, Alex Deucher
Subject: [PATCH v1 24/24] LoongArch: KVM: Enable kvm config and add the makefile
Date: Tue, 14 Feb 2023 10:56:48 +0800
Message-Id: <20230214025648.1898508-25-zhaotianrui@loongson.cn>
In-Reply-To: <20230214025648.1898508-1-zhaotianrui@loongson.cn>
References: <20230214025648.1898508-1-zhaotianrui@loongson.cn>

Enable loongarch kvm config and add the makefile to support build kvm module.
Signed-off-by: Tianrui Zhao
---
 arch/loongarch/Kbuild                      |  1 +
 arch/loongarch/Kconfig                     |  2 ++
 arch/loongarch/configs/loongson3_defconfig |  2 ++
 arch/loongarch/kvm/Kconfig                 | 38 ++++++++++++++++++++++
 arch/loongarch/kvm/Makefile                | 21 ++++++++++++
 5 files changed, 64 insertions(+)
 create mode 100644 arch/loongarch/kvm/Kconfig
 create mode 100644 arch/loongarch/kvm/Makefile

diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild
index b01f5cdb2..40be8a169 100644
--- a/arch/loongarch/Kbuild
+++ b/arch/loongarch/Kbuild
@@ -2,6 +2,7 @@ obj-y += kernel/
 obj-y += mm/
 obj-y += net/
 obj-y += vdso/
+obj-y += kvm/
 
 # for cleaning
 subdir- += boot

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 9cc8b84f7..424ad9392 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -142,6 +142,7 @@ config LOONGARCH
 	select USE_PERCPU_NUMA_NODE_ID
 	select USER_STACKTRACE_SUPPORT
 	select ZONE_DMA32
+	select HAVE_KVM
 
 config 32BIT
 	bool
@@ -541,3 +542,4 @@ source "drivers/acpi/Kconfig"
 endmenu
 
 source "drivers/firmware/Kconfig"
+source "arch/loongarch/kvm/Kconfig"

diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
index eb84cae64..9a6e31b43 100644
--- a/arch/loongarch/configs/loongson3_defconfig
+++ b/arch/loongarch/configs/loongson3_defconfig
@@ -62,6 +62,8 @@ CONFIG_EFI_ZBOOT=y
 CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
 CONFIG_EFI_CAPSULE_LOADER=m
 CONFIG_EFI_TEST=m
+CONFIG_VIRTUALIZATION=y
+CONFIG_KVM=m
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
 CONFIG_MODULE_UNLOAD=y

diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig
new file mode 100644
index 000000000..8a999b4c0
--- /dev/null
+++ b/arch/loongarch/kvm/Kconfig
@@ -0,0 +1,38 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# KVM configuration
+#
+
+source "virt/kvm/Kconfig"
+
+menuconfig VIRTUALIZATION
+	bool "Virtualization"
+	help
+	  Say Y here to get to see options for using your Linux host to run
+	  other operating systems inside virtual machines (guests).
+	  This option alone does not add any kernel code.
+
+	  If you say N, all options in this submenu will be skipped and
+	  disabled.
+
+if VIRTUALIZATION
+
+config KVM
+	tristate "Kernel-based Virtual Machine (KVM) support"
+	depends on HAVE_KVM
+	select MMU_NOTIFIER
+	select ANON_INODES
+	select PREEMPT_NOTIFIERS
+	select KVM_MMIO
+	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
+	select HAVE_KVM_VCPU_ASYNC_IOCTL
+	select HAVE_KVM_EVENTFD
+	select SRCU
+	help
+	  Support hosting virtualized guest machines using hardware
+	  virtualization extensions. You will need a fairly recent
+	  processor equipped with virtualization extensions.
+
+	  If unsure, say N.
+
+endif # VIRTUALIZATION

diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
new file mode 100644
index 000000000..42e9dcc18
--- /dev/null
+++ b/arch/loongarch/kvm/Makefile
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for LOONGARCH KVM support
+#
+
+ccflags-y += -I $(srctree)/$(src)
+
+include $(srctree)/virt/kvm/Makefile.kvm
+
+obj-$(CONFIG_KVM) += kvm.o
+
+kvm-y += main.o
+kvm-y += vm.o
+kvm-y += vmid.o
+kvm-y += tlb.o
+kvm-y += mmu.o
+kvm-y += vcpu.o
+kvm-y += exit.o
+kvm-y += interrupt.o
+kvm-y += timer.o
+kvm-y += switch.o
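With the Kconfig and Makefile hunks above in place, the module could be built along these lines. This is only a sketch of a kernel-tree build, not part of the series: the cross-toolchain prefix and the use of scripts/config are assumptions, and the commands must run from the top of a kernel tree with this series applied.

```shell
# Configure for LoongArch (toolchain prefix is an assumption).
make ARCH=loongarch CROSS_COMPILE=loongarch64-linux-gnu- defconfig
# Enable the options this patch adds to loongson3_defconfig.
./scripts/config -e VIRTUALIZATION -m KVM
make ARCH=loongarch CROSS_COMPILE=loongarch64-linux-gnu- olddefconfig
# Build the new directory; the kvm-y objects link into kvm.ko.
make ARCH=loongarch CROSS_COMPILE=loongarch64-linux-gnu- arch/loongarch/kvm/
```

Because the Kbuild hunk adds `obj-y += kvm/` unconditionally, the directory is always descended into; whether anything is built there is governed by `obj-$(CONFIG_KVM)` in the new Makefile.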