From patchwork Thu Aug 31 08:29:51 2023
From: Tianrui Zhao <zhaotianrui@loongson.cn>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo@loongson.cn, Xi Ruoyao, zhaotianrui@loongson.cn
Subject: [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files
Date: Thu, 31 Aug 2023 16:29:51 +0800
Message-Id: <20230831083020.2187109-2-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Add the LoongArch KVM related header files, including kvm.h, kvm_host.h and kvm_types.h. All of them describe LoongArch virtualization features and KVM interfaces.
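As an illustration only (not part of the patch), the ONE_REG id encoding added in kvm_host.h can be exercised from user space: KVM_IOC_CSRID() packs a CSR number into a 64-bit register id and KVM_GET_IOC_CSRIDX() recovers it. A minimal stand-alone sketch; the KVM_REG_* base constants are copied from the uapi headers touched by this series:

#include <stdio.h>
#include <stdint.h>

#define KVM_REG_LOONGARCH       0x9000000000000000ULL   /* from uapi/linux/kvm.h (this series) */
#define KVM_REG_SIZE_U64        0x0030000000000000ULL   /* generic ONE_REG size field */
#define KVM_REG_LOONGARCH_CSR   (KVM_REG_LOONGARCH | 0x10000ULL)
#define KVM_CSR_IDX_MASK        (0x10000 - 1)

#define LOONGARCH_CSR_64(_R, _S) \
        (KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * (_R) + (_S)))
#define KVM_IOC_CSRID(id)       LOONGARCH_CSR_64(id, 0)
#define KVM_GET_IOC_CSRIDX(id)  (((id) & KVM_CSR_IDX_MASK) >> 3)

int main(void)
{
        uint64_t id = KVM_IOC_CSRID(0x4);       /* 0x4 is LOONGARCH_CSR_ECFG */

        /* Prints the composed id and the CSR number recovered from it (0x4). */
        printf("one-reg id %#llx -> csr %#llx\n",
               (unsigned long long)id,
               (unsigned long long)KVM_GET_IOC_CSRIDX(id));
        return 0;
}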
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/include/asm/kvm_host.h | 238 +++++++++++++++++++++++++ arch/loongarch/include/asm/kvm_types.h | 11 ++ arch/loongarch/include/uapi/asm/kvm.h | 101 +++++++++++ include/uapi/linux/kvm.h | 9 + 4 files changed, 359 insertions(+) create mode 100644 arch/loongarch/include/asm/kvm_host.h create mode 100644 arch/loongarch/include/asm/kvm_types.h create mode 100644 arch/loongarch/include/uapi/asm/kvm.h diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h new file mode 100644 index 0000000000..9f23ddaaae --- /dev/null +++ b/arch/loongarch/include/asm/kvm_host.h @@ -0,0 +1,238 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#ifndef __ASM_LOONGARCH_KVM_HOST_H__ +#define __ASM_LOONGARCH_KVM_HOST_H__ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +/* Loongarch KVM register ids */ +#define LOONGARCH_CSR_32(_R, _S) \ + (KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U32 | (8 * (_R) + (_S))) + +#define LOONGARCH_CSR_64(_R, _S) \ + (KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * (_R) + (_S))) + +#define KVM_IOC_CSRID(id) LOONGARCH_CSR_64(id, 0) +#define KVM_GET_IOC_CSRIDX(id) ((id & KVM_CSR_IDX_MASK) >> 3) + +#define KVM_MAX_VCPUS 256 +/* memory slots that does not exposed to userspace */ +#define KVM_PRIVATE_MEM_SLOTS 0 + +#define KVM_HALT_POLL_NS_DEFAULT 500000 + +struct kvm_vm_stat { + struct kvm_vm_stat_generic generic; +}; + +struct kvm_vcpu_stat { + struct kvm_vcpu_stat_generic generic; + u64 idle_exits; + u64 signal_exits; + u64 int_exits; + u64 cpucfg_exits; +}; + +struct kvm_arch_memory_slot { +}; + +struct kvm_context { + unsigned long vpid_cache; + struct kvm_vcpu *last_vcpu; +}; + +struct kvm_world_switch { + int (*guest_eentry)(void); + int (*enter_guest)(struct kvm_run *run, struct kvm_vcpu *vcpu); + unsigned long page_order; +}; + +struct kvm_arch { + /* Guest physical mm */ + pgd_t *pgd; + unsigned long gpa_size; + + s64 time_offset; + struct kvm_context __percpu *vmcs; +}; + +#define CSR_MAX_NUMS 0x800 + +struct loongarch_csrs { + unsigned long csrs[CSR_MAX_NUMS]; +}; + +/* Resume Flags */ +#define RESUME_HOST 0 +#define RESUME_GUEST 1 + +enum emulation_result { + EMULATE_DONE, /* no further processing */ + EMULATE_DO_MMIO, /* kvm_run filled with MMIO request */ + EMULATE_FAIL, /* can't emulate this instruction */ + EMULATE_EXCEPT, /* A guest exception has been generated */ + EMULATE_DO_IOCSR, /* handle IOCSR request */ +}; + +#define KVM_LARCH_CSR (0x1 << 1) +#define KVM_LARCH_FPU (0x1 << 0) + +struct kvm_vcpu_arch { + /* + * Switch pointer-to-function type to unsigned long + * for loading the value into register directly. 
+ */ + unsigned long host_eentry; + unsigned long guest_eentry; + + /* Pointers stored here for easy accessing from assembly code */ + int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu); + + /* Host registers preserved across guest mode execution */ + unsigned long host_sp; + unsigned long host_tp; + unsigned long host_pgd; + + /* Host CSRs are used when handling exits from guest */ + unsigned long badi; + unsigned long badv; + unsigned long host_ecfg; + unsigned long host_estat; + unsigned long host_percpu; + + /* GPRs */ + unsigned long gprs[32]; + unsigned long pc; + + /* Which auxiliary state is loaded (KVM_LOONGARCH_AUX_*) */ + unsigned int aux_inuse; + /* FPU state */ + struct loongarch_fpu fpu FPU_ALIGN; + + /* CSR state */ + struct loongarch_csrs *csr; + + /* GPR used as IO source/target */ + u32 io_gpr; + + struct hrtimer swtimer; + /* KVM register to control count timer */ + u32 count_ctl; + + /* Bitmask of exceptions that are pending */ + unsigned long irq_pending; + /* Bitmask of pending exceptions to be cleared */ + unsigned long irq_clear; + + /* Cache for pages needed inside spinlock regions */ + struct kvm_mmu_memory_cache mmu_page_cache; + + /* vcpu's vpid */ + u64 vpid; + + /* Frequency of stable timer in Hz */ + u64 timer_mhz; + ktime_t expire; + + u64 core_ext_ioisr[4]; + + /* Last CPU the vCPU state was loaded on */ + int last_sched_cpu; + /* mp state */ + struct kvm_mp_state mp_state; +}; + +static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg) +{ + return csr->csrs[reg]; +} + +static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg, unsigned long val) +{ + csr->csrs[reg] = val; +} + +/* Helpers */ +static inline bool _kvm_guest_has_fpu(struct kvm_vcpu_arch *arch) +{ + return cpu_has_fpu; +} + +void _kvm_init_fault(void); + +/* Debug: dump vcpu state */ +int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu); + +/* MMU handling */ +int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write); +void kvm_flush_tlb_all(void); +void _kvm_destroy_mm(struct kvm *kvm); +pgd_t *kvm_pgd_alloc(void); + +#define KVM_ARCH_WANT_MMU_NOTIFIER +int kvm_unmap_hva_range(struct kvm *kvm, + unsigned long start, unsigned long end, bool blockable); +void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); +int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end); +int kvm_test_age_hva(struct kvm *kvm, unsigned long hva); + +static inline void update_pc(struct kvm_vcpu_arch *arch) +{ + arch->pc += 4; +} + +/** + * kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault. + * @vcpu: Virtual CPU. + * + * Returns: Whether the TLBL exception was likely due to an instruction + * fetch fault rather than a data load fault. 
+ */ +static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch) +{ + return arch->pc == arch->badv; +} + +/* Misc */ +static inline void kvm_arch_hardware_unsetup(void) {} +static inline void kvm_arch_sync_events(struct kvm *kvm) {} +static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} +static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {} +static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {} +static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} +static inline void kvm_arch_free_memslot(struct kvm *kvm, + struct kvm_memory_slot *slot) {} +void _kvm_check_vmid(struct kvm_vcpu *vcpu); +enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer); +int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa); +void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, + const struct kvm_memory_slot *memslot); +void kvm_init_vmcs(struct kvm *kvm); +void kvm_vector_entry(void); +int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu); +extern const unsigned long kvm_vector_size; +extern const unsigned long kvm_enter_guest_size; +extern unsigned long vpid_mask; +extern struct kvm_world_switch *kvm_loongarch_ops; + +#define SW_GCSR (1 << 0) +#define HW_GCSR (1 << 1) +#define INVALID_GCSR (1 << 2) +int get_gcsr_flag(int csr); +extern void set_hw_gcsr(int csr_id, unsigned long val); +#endif /* __ASM_LOONGARCH_KVM_HOST_H__ */ diff --git a/arch/loongarch/include/asm/kvm_types.h b/arch/loongarch/include/asm/kvm_types.h new file mode 100644 index 0000000000..2fe1d4bdff --- /dev/null +++ b/arch/loongarch/include/asm/kvm_types.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#ifndef _ASM_LOONGARCH_KVM_TYPES_H +#define _ASM_LOONGARCH_KVM_TYPES_H + +#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40 + +#endif /* _ASM_LOONGARCH_KVM_TYPES_H */ diff --git a/arch/loongarch/include/uapi/asm/kvm.h b/arch/loongarch/include/uapi/asm/kvm.h new file mode 100644 index 0000000000..7ec2f34018 --- /dev/null +++ b/arch/loongarch/include/uapi/asm/kvm.h @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#ifndef __UAPI_ASM_LOONGARCH_KVM_H +#define __UAPI_ASM_LOONGARCH_KVM_H + +#include + +/* + * KVM Loongarch specific structures and definitions. + * + * Some parts derived from the x86 version of this file. + */ + +#define __KVM_HAVE_READONLY_MEM + +#define KVM_COALESCED_MMIO_PAGE_OFFSET 1 +#define KVM_DIRTY_LOG_PAGE_OFFSET 64 + +/* + * for KVM_GET_REGS and KVM_SET_REGS + */ +struct kvm_regs { + /* out (KVM_GET_REGS) / in (KVM_SET_REGS) */ + __u64 gpr[32]; + __u64 pc; +}; + +/* + * for KVM_GET_FPU and KVM_SET_FPU + */ +struct kvm_fpu { + __u32 fcsr; + __u64 fcc; /* 8x8 */ + struct kvm_fpureg { + __u64 val64[4]; + } fpr[32]; +}; + +/* + * For LoongArch, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various + * registers. The id field is broken down as follows: + * + * bits[63..52] - As per linux/kvm.h + * bits[51..32] - Must be zero. + * bits[31..16] - Register set. + * + * Register set = 0: GP registers from kvm_regs (see definitions below). + * + * Register set = 1: CSR registers. + * + * Register set = 2: KVM specific registers (see definitions below). + * + * Register set = 3: FPU / SIMD registers (see definitions below). 
+ * + * Other sets registers may be added in the future. Each set would + * have its own identifier in bits[31..16]. + */ + +#define KVM_REG_LOONGARCH_GPR (KVM_REG_LOONGARCH | 0x00000ULL) +#define KVM_REG_LOONGARCH_CSR (KVM_REG_LOONGARCH | 0x10000ULL) +#define KVM_REG_LOONGARCH_KVM (KVM_REG_LOONGARCH | 0x20000ULL) +#define KVM_REG_LOONGARCH_FPU (KVM_REG_LOONGARCH | 0x30000ULL) +#define KVM_REG_LOONGARCH_MASK (KVM_REG_LOONGARCH | 0x30000ULL) +#define KVM_CSR_IDX_MASK (0x10000 - 1) + +/* + * KVM_REG_LOONGARCH_KVM - KVM specific control registers. + */ + +#define KVM_REG_LOONGARCH_COUNTER (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 3) +#define KVM_REG_LOONGARCH_VCPU_RESET (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 4) + +struct kvm_debug_exit_arch { +}; + +/* for KVM_SET_GUEST_DEBUG */ +struct kvm_guest_debug_arch { +}; + +/* definition of registers in kvm_run */ +struct kvm_sync_regs { +}; + +/* dummy definition */ +struct kvm_sregs { +}; + +struct kvm_iocsr_entry { + __u32 addr; + __u32 pad; + __u64 data; +}; + +#define KVM_NR_IRQCHIPS 1 +#define KVM_IRQCHIP_NUM_PINS 64 +#define KVM_MAX_CORES 256 + +#endif /* __UAPI_ASM_LOONGARCH_KVM_H */ diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index f089ab2909..1184171224 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -264,6 +264,7 @@ struct kvm_xen_exit { #define KVM_EXIT_RISCV_SBI 35 #define KVM_EXIT_RISCV_CSR 36 #define KVM_EXIT_NOTIFY 37 +#define KVM_EXIT_LOONGARCH_IOCSR 38 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. */ @@ -336,6 +337,13 @@ struct kvm_run { __u32 len; __u8 is_write; } mmio; + /* KVM_EXIT_LOONGARCH_IOCSR */ + struct { + __u64 phys_addr; + __u8 data[8]; + __u32 len; + __u8 is_write; + } iocsr_io; /* KVM_EXIT_HYPERCALL */ struct { __u64 nr; @@ -1362,6 +1370,7 @@ struct kvm_dirty_tlb { #define KVM_REG_ARM64 0x6000000000000000ULL #define KVM_REG_MIPS 0x7000000000000000ULL #define KVM_REG_RISCV 0x8000000000000000ULL +#define KVM_REG_LOONGARCH 0x9000000000000000ULL #define KVM_REG_SIZE_SHIFT 52 #define KVM_REG_SIZE_MASK 0x00f0000000000000ULL From patchwork Thu Aug 31 08:29:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137261 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp127901vqu; Thu, 31 Aug 2023 02:44:12 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFXWh34kv1+1UBRasEJWHLzuDiNyNyDtNFhbg5X5pXY7bY0A7lZj1SkmSX6WsP+miuarnWN X-Received: by 2002:aa7:d711:0:b0:522:b1cb:e6c with SMTP id t17-20020aa7d711000000b00522b1cb0e6cmr3629219edq.38.1693475051703; Thu, 31 Aug 2023 02:44:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693475051; cv=none; d=google.com; s=arc-20160816; b=D/DNrvqocHQEANchGhVRF1YGPjYaeTP1vCBTlTQhAqFS3b+TDWZOZGTfWNuqaDOE6W dg1dnoHMOCATCV85MPXESYvw7+7cyM8PsjtINxmZ4WxZuC6LW/YJgW75Cpy2inAvu8Lk x9MJ0rCOLXzIEQfQZ2hFBtcsBb68c9TJrwZcMtxTYYAbUPktoqU5LkPKqfS37MGzam2U ynbFiG9qlRx9rdMTbZ0+OxGwvaD4wJrN8hi1TEnYRG8v0kypTUh6cWIeQT+XHAnoMfHd 3wzxLqt8op8zDm2AeVBI9+zJ91NZHaRzQwZYO0KCZTmq7SwJF9xu1eQvYqcka3qax3q0 FOOg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=tUbe4pjX7Ba9wZPAW9H0eONkzrGNSvogQBOA4RJu5XQ=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=yTmOBxonTnnweroHWiuipU7UWgQbp9x+q2eUx7OvlZgPXUBtzKJk+pwEAw3tQu7jCh 
From: Tianrui Zhao <zhaotianrui@loongson.cn>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo@loongson.cn, Xi Ruoyao, zhaotianrui@loongson.cn
Subject: [PATCH v20 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface
Date: Thu, 31 Aug 2023 16:29:53 +0800
Message-Id: <20230831083020.2187109-4-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Implement the kvm hardware enable and disable interfaces, which program the guest configuration CSRs to switch the virtualization features on and off when the interfaces are called.
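For illustration only (not part of the patch): kvm_arch_hardware_enable() below follows a probe-then-configure pattern — it reads the GCFG CSR to see which virtualization traps the hardware supports, then sets only the supported trap/control fields plus the timer trap. A stand-alone C model of that pattern; the bit masks here are made-up placeholders standing in for the real CSR_GCFG_* macros in asm/loongarch.h:

#include <stdio.h>

/* Placeholder masks for the sketch only; the real definitions are the
 * CSR_GCFG_* macros in asm/loongarch.h. */
#define GCFG_CAP_GCIP_ALL       (1u << 0)       /* hw can trap guest cache instructions */
#define GCFG_GCI_SECURE         (1u << 1)       /* trap on init/unimplemented cache insn */
#define GCFG_CAP_MATC_ROOT      (1u << 2)       /* hw supports root-controlled access type */
#define GCFG_MATC_ROOT          (1u << 3)
#define GCFG_TIT                (1u << 4)       /* trap on guest timer access */

static unsigned int build_gcfg(unsigned int env)
{
        unsigned int gcfg = 0;

        if (env & GCFG_CAP_GCIP_ALL)    /* enable only what the hardware reports */
                gcfg |= GCFG_GCI_SECURE;
        if (env & GCFG_CAP_MATC_ROOT)
                gcfg |= GCFG_MATC_ROOT;

        return gcfg | GCFG_TIT;         /* timer trapping is always requested */
}

int main(void)
{
        printf("gcfg = %#x\n", build_gcfg(GCFG_CAP_GCIP_ALL | GCFG_CAP_MATC_ROOT));
        return 0;
}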
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/main.c | 62 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 62 insertions(+) diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c index c204853b8c..46a042735d 100644 --- a/arch/loongarch/kvm/main.c +++ b/arch/loongarch/kvm/main.c @@ -195,6 +195,68 @@ static void _kvm_init_gcsr_flag(void) set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR3); } +void kvm_init_vmcs(struct kvm *kvm) +{ + kvm->arch.vmcs = vmcs; +} + +long kvm_arch_dev_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + return -ENOIOCTLCMD; +} + +int kvm_arch_hardware_enable(void) +{ + unsigned long env, gcfg = 0; + + env = read_csr_gcfg(); + /* First init gtlbc, gcfg, gstat, gintc. All guest use the same config */ + clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI); + write_csr_gcfg(0); + write_csr_gstat(0); + write_csr_gintc(0); + + /* + * Enable virtualization features granting guest direct control of + * certain features: + * GCI=2: Trap on init or unimplement cache instruction. + * TORU=0: Trap on Root Unimplement. + * CACTRL=1: Root control cache. + * TOP=0: Trap on Previlege. + * TOE=0: Trap on Exception. + * TIT=0: Trap on Timer. + */ + if (env & CSR_GCFG_GCIP_ALL) + gcfg |= CSR_GCFG_GCI_SECURE; + if (env & CSR_GCFG_MATC_ROOT) + gcfg |= CSR_GCFG_MATC_ROOT; + + gcfg |= CSR_GCFG_TIT; + write_csr_gcfg(gcfg); + + kvm_flush_tlb_all(); + + /* Enable using TGID */ + set_csr_gtlbc(CSR_GTLBC_USETGID); + kvm_debug("gtlbc:%lx gintc:%lx gstat:%lx gcfg:%lx", + read_csr_gtlbc(), read_csr_gintc(), + read_csr_gstat(), read_csr_gcfg()); + + return 0; +} + +void kvm_arch_hardware_disable(void) +{ + clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI); + write_csr_gcfg(0); + write_csr_gstat(0); + write_csr_gintc(0); + + /* Flush any remaining guest TLB entries */ + kvm_flush_tlb_all(); +} + static int kvm_loongarch_env_init(void) { struct kvm_context *context; From patchwork Thu Aug 31 08:29:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137313 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp332366vqu; Thu, 31 Aug 2023 08:52:41 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHoPeFJ0oyR10Jb78Qm4NODuhVq3LAg5mrq28UyIQxg0ZUB6G3taI1swk7OOZZrjL2kjjDV X-Received: by 2002:a17:903:244c:b0:1b8:971c:b7b7 with SMTP id l12-20020a170903244c00b001b8971cb7b7mr14752pls.56.1693497160743; Thu, 31 Aug 2023 08:52:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693497160; cv=none; d=google.com; s=arc-20160816; b=EFi+o8HPpbk340QYKDCME32UfG5o8ZjSDjOqUomvlTAVRQH7QNibcX6DQk7K28xO+X OzXG1CfrxDCZx7EPDKh393UKw/x9QnK1inKLAoEusCUWe9UEUx8TfvuJZJmgkS/8wf+U MN4Yp1I1UzSMuQX2TDyY88B2SkZAIKIg0XNx2AJFD3xQoQwA/NvRXKMYQx8Q7Mzok6Ej H/VfRRZMRh8ntsix4se/1ALhwfEYeP/VjShoKw+xr3MT3QZiQ5eKXE/q5rQsyqbgMwwX Ga+q3pwIPQlIJdCLi00gW+Zq1sGdudq0xuL/0+AnGtlYTcTXCv269DN6DRxSW3pvr527 3QPA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=1J2JligXr2yuRSIBtjy273xPBZzuKKg95b+YakTw1GM=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=e7e1Jiwkmc051xQIZ4z5Lzg5rWM0EGDjGvUU512roil8eIplaENkDKBQdbinpuBBr7 u5WJZvpOhv4jsUDl9ws9J1yiGZEX7fR/hrZ0oCODWDUy2wa9l/rki/SoR+Fq21f0P1Rf V7IyZBS+GVi891Yt6DNMH4V2xAiBggSFen37eANSROwXc37j7cgHOLhf69PT4+40yxPi 
From: Tianrui Zhao <zhaotianrui@loongson.cn>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo@loongson.cn, Xi Ruoyao, zhaotianrui@loongson.cn
Subject: [PATCH v20 05/30] LoongArch: KVM: Add vcpu related header files
Date: Thu, 31 Aug 2023 16:29:55 +0800
Message-Id: <20230831083020.2187109-6-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Add the LoongArch vcpu related header files, including vcpu CSR information, irq number definitions and some vcpu interfaces.
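For illustration only (not part of the patch): the software-CSR helpers added below in kvm_csr.h keep the guest CSR state shadowed in the flat csrs[] array from kvm_host.h, and kvm_change_sw_gcsr() rewrites only the masked field. A stand-alone user-space model of those helpers (CSR_MAX_NUMS and the ECFG index 0x4 are taken from the headers in this series):

#include <stdio.h>

#define CSR_MAX_NUMS    0x800

struct loongarch_csrs {
        unsigned long csrs[CSR_MAX_NUMS];
};

static void kvm_write_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val)
{
        csr->csrs[gid] = val;           /* replace the whole shadow value */
}

static void kvm_change_sw_gcsr(struct loongarch_csrs *csr, int gid,
                               unsigned long mask, unsigned long val)
{
        csr->csrs[gid] &= ~mask;        /* clear the field ... */
        csr->csrs[gid] |= val & mask;   /* ... and set only the masked bits */
}

int main(void)
{
        struct loongarch_csrs csr = { { 0 } };

        kvm_write_sw_gcsr(&csr, 0x4, 0x1c00);           /* 0x4: LOONGARCH_CSR_ECFG */
        kvm_change_sw_gcsr(&csr, 0x4, 0xffUL, 0x3f);    /* update the low byte only */
        printf("ECFG shadow: %#lx\n", csr.csrs[0x4]);   /* prints 0x1c3f */
        return 0;
}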
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/include/asm/kvm_csr.h | 222 +++++++++++++++++++++++++ arch/loongarch/include/asm/kvm_vcpu.h | 95 +++++++++++ arch/loongarch/include/asm/loongarch.h | 19 ++- arch/loongarch/kvm/trace.h | 168 +++++++++++++++++++ 4 files changed, 499 insertions(+), 5 deletions(-) create mode 100644 arch/loongarch/include/asm/kvm_csr.h create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h create mode 100644 arch/loongarch/kvm/trace.h diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h new file mode 100644 index 0000000000..e27dcacd00 --- /dev/null +++ b/arch/loongarch/include/asm/kvm_csr.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#ifndef __ASM_LOONGARCH_KVM_CSR_H__ +#define __ASM_LOONGARCH_KVM_CSR_H__ +#include +#include +#include +#include + +/* binutils support virtualization instructions */ +#define gcsr_read(csr) \ +({ \ + register unsigned long __v; \ + __asm__ __volatile__( \ + " gcsrrd %[val], %[reg]\n\t" \ + : [val] "=r" (__v) \ + : [reg] "i" (csr) \ + : "memory"); \ + __v; \ +}) + +#define gcsr_write(v, csr) \ +({ \ + register unsigned long __v = v; \ + __asm__ __volatile__ ( \ + " gcsrwr %[val], %[reg]\n\t" \ + : [val] "+r" (__v) \ + : [reg] "i" (csr) \ + : "memory"); \ +}) + +#define gcsr_xchg(v, m, csr) \ +({ \ + register unsigned long __v = v; \ + __asm__ __volatile__( \ + " gcsrxchg %[val], %[mask], %[reg]\n\t" \ + : [val] "+r" (__v) \ + : [mask] "r" (m), [reg] "i" (csr) \ + : "memory"); \ + __v; \ +}) + +/* Guest CSRS read and write */ +#define read_gcsr_crmd() gcsr_read(LOONGARCH_CSR_CRMD) +#define write_gcsr_crmd(val) gcsr_write(val, LOONGARCH_CSR_CRMD) +#define read_gcsr_prmd() gcsr_read(LOONGARCH_CSR_PRMD) +#define write_gcsr_prmd(val) gcsr_write(val, LOONGARCH_CSR_PRMD) +#define read_gcsr_euen() gcsr_read(LOONGARCH_CSR_EUEN) +#define write_gcsr_euen(val) gcsr_write(val, LOONGARCH_CSR_EUEN) +#define read_gcsr_misc() gcsr_read(LOONGARCH_CSR_MISC) +#define write_gcsr_misc(val) gcsr_write(val, LOONGARCH_CSR_MISC) +#define read_gcsr_ecfg() gcsr_read(LOONGARCH_CSR_ECFG) +#define write_gcsr_ecfg(val) gcsr_write(val, LOONGARCH_CSR_ECFG) +#define read_gcsr_estat() gcsr_read(LOONGARCH_CSR_ESTAT) +#define write_gcsr_estat(val) gcsr_write(val, LOONGARCH_CSR_ESTAT) +#define read_gcsr_era() gcsr_read(LOONGARCH_CSR_ERA) +#define write_gcsr_era(val) gcsr_write(val, LOONGARCH_CSR_ERA) +#define read_gcsr_badv() gcsr_read(LOONGARCH_CSR_BADV) +#define write_gcsr_badv(val) gcsr_write(val, LOONGARCH_CSR_BADV) +#define read_gcsr_badi() gcsr_read(LOONGARCH_CSR_BADI) +#define write_gcsr_badi(val) gcsr_write(val, LOONGARCH_CSR_BADI) +#define read_gcsr_eentry() gcsr_read(LOONGARCH_CSR_EENTRY) +#define write_gcsr_eentry(val) gcsr_write(val, LOONGARCH_CSR_EENTRY) + +#define read_gcsr_tlbidx() gcsr_read(LOONGARCH_CSR_TLBIDX) +#define write_gcsr_tlbidx(val) gcsr_write(val, LOONGARCH_CSR_TLBIDX) +#define read_gcsr_tlbhi() gcsr_read(LOONGARCH_CSR_TLBEHI) +#define write_gcsr_tlbhi(val) gcsr_write(val, LOONGARCH_CSR_TLBEHI) +#define read_gcsr_tlblo0() gcsr_read(LOONGARCH_CSR_TLBELO0) +#define write_gcsr_tlblo0(val) gcsr_write(val, LOONGARCH_CSR_TLBELO0) +#define read_gcsr_tlblo1() gcsr_read(LOONGARCH_CSR_TLBELO1) +#define write_gcsr_tlblo1(val) gcsr_write(val, LOONGARCH_CSR_TLBELO1) + +#define read_gcsr_asid() gcsr_read(LOONGARCH_CSR_ASID) +#define write_gcsr_asid(val) gcsr_write(val, LOONGARCH_CSR_ASID) +#define 
read_gcsr_pgdl() gcsr_read(LOONGARCH_CSR_PGDL) +#define write_gcsr_pgdl(val) gcsr_write(val, LOONGARCH_CSR_PGDL) +#define read_gcsr_pgdh() gcsr_read(LOONGARCH_CSR_PGDH) +#define write_gcsr_pgdh(val) gcsr_write(val, LOONGARCH_CSR_PGDH) +#define write_gcsr_pgd(val) gcsr_write(val, LOONGARCH_CSR_PGD) +#define read_gcsr_pgd() gcsr_read(LOONGARCH_CSR_PGD) +#define read_gcsr_pwctl0() gcsr_read(LOONGARCH_CSR_PWCTL0) +#define write_gcsr_pwctl0(val) gcsr_write(val, LOONGARCH_CSR_PWCTL0) +#define read_gcsr_pwctl1() gcsr_read(LOONGARCH_CSR_PWCTL1) +#define write_gcsr_pwctl1(val) gcsr_write(val, LOONGARCH_CSR_PWCTL1) +#define read_gcsr_stlbpgsize() gcsr_read(LOONGARCH_CSR_STLBPGSIZE) +#define write_gcsr_stlbpgsize(val) gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE) +#define read_gcsr_rvacfg() gcsr_read(LOONGARCH_CSR_RVACFG) +#define write_gcsr_rvacfg(val) gcsr_write(val, LOONGARCH_CSR_RVACFG) + +#define read_gcsr_cpuid() gcsr_read(LOONGARCH_CSR_CPUID) +#define write_gcsr_cpuid(val) gcsr_write(val, LOONGARCH_CSR_CPUID) +#define read_gcsr_prcfg1() gcsr_read(LOONGARCH_CSR_PRCFG1) +#define write_gcsr_prcfg1(val) gcsr_write(val, LOONGARCH_CSR_PRCFG1) +#define read_gcsr_prcfg2() gcsr_read(LOONGARCH_CSR_PRCFG2) +#define write_gcsr_prcfg2(val) gcsr_write(val, LOONGARCH_CSR_PRCFG2) +#define read_gcsr_prcfg3() gcsr_read(LOONGARCH_CSR_PRCFG3) +#define write_gcsr_prcfg3(val) gcsr_write(val, LOONGARCH_CSR_PRCFG3) + +#define read_gcsr_kscratch0() gcsr_read(LOONGARCH_CSR_KS0) +#define write_gcsr_kscratch0(val) gcsr_write(val, LOONGARCH_CSR_KS0) +#define read_gcsr_kscratch1() gcsr_read(LOONGARCH_CSR_KS1) +#define write_gcsr_kscratch1(val) gcsr_write(val, LOONGARCH_CSR_KS1) +#define read_gcsr_kscratch2() gcsr_read(LOONGARCH_CSR_KS2) +#define write_gcsr_kscratch2(val) gcsr_write(val, LOONGARCH_CSR_KS2) +#define read_gcsr_kscratch3() gcsr_read(LOONGARCH_CSR_KS3) +#define write_gcsr_kscratch3(val) gcsr_write(val, LOONGARCH_CSR_KS3) +#define read_gcsr_kscratch4() gcsr_read(LOONGARCH_CSR_KS4) +#define write_gcsr_kscratch4(val) gcsr_write(val, LOONGARCH_CSR_KS4) +#define read_gcsr_kscratch5() gcsr_read(LOONGARCH_CSR_KS5) +#define write_gcsr_kscratch5(val) gcsr_write(val, LOONGARCH_CSR_KS5) +#define read_gcsr_kscratch6() gcsr_read(LOONGARCH_CSR_KS6) +#define write_gcsr_kscratch6(val) gcsr_write(val, LOONGARCH_CSR_KS6) +#define read_gcsr_kscratch7() gcsr_read(LOONGARCH_CSR_KS7) +#define write_gcsr_kscratch7(val) gcsr_write(val, LOONGARCH_CSR_KS7) + +#define read_gcsr_timerid() gcsr_read(LOONGARCH_CSR_TMID) +#define write_gcsr_timerid(val) gcsr_write(val, LOONGARCH_CSR_TMID) +#define read_gcsr_timercfg() gcsr_read(LOONGARCH_CSR_TCFG) +#define write_gcsr_timercfg(val) gcsr_write(val, LOONGARCH_CSR_TCFG) +#define read_gcsr_timertick() gcsr_read(LOONGARCH_CSR_TVAL) +#define write_gcsr_timertick(val) gcsr_write(val, LOONGARCH_CSR_TVAL) +#define read_gcsr_timeroffset() gcsr_read(LOONGARCH_CSR_CNTC) +#define write_gcsr_timeroffset(val) gcsr_write(val, LOONGARCH_CSR_CNTC) + +#define read_gcsr_llbctl() gcsr_read(LOONGARCH_CSR_LLBCTL) +#define write_gcsr_llbctl(val) gcsr_write(val, LOONGARCH_CSR_LLBCTL) + +#define read_gcsr_tlbrentry() gcsr_read(LOONGARCH_CSR_TLBRENTRY) +#define write_gcsr_tlbrentry(val) gcsr_write(val, LOONGARCH_CSR_TLBRENTRY) +#define read_gcsr_tlbrbadv() gcsr_read(LOONGARCH_CSR_TLBRBADV) +#define write_gcsr_tlbrbadv(val) gcsr_write(val, LOONGARCH_CSR_TLBRBADV) +#define read_gcsr_tlbrera() gcsr_read(LOONGARCH_CSR_TLBRERA) +#define write_gcsr_tlbrera(val) gcsr_write(val, LOONGARCH_CSR_TLBRERA) +#define read_gcsr_tlbrsave() 
gcsr_read(LOONGARCH_CSR_TLBRSAVE) +#define write_gcsr_tlbrsave(val) gcsr_write(val, LOONGARCH_CSR_TLBRSAVE) +#define read_gcsr_tlbrelo0() gcsr_read(LOONGARCH_CSR_TLBRELO0) +#define write_gcsr_tlbrelo0(val) gcsr_write(val, LOONGARCH_CSR_TLBRELO0) +#define read_gcsr_tlbrelo1() gcsr_read(LOONGARCH_CSR_TLBRELO1) +#define write_gcsr_tlbrelo1(val) gcsr_write(val, LOONGARCH_CSR_TLBRELO1) +#define read_gcsr_tlbrehi() gcsr_read(LOONGARCH_CSR_TLBREHI) +#define write_gcsr_tlbrehi(val) gcsr_write(val, LOONGARCH_CSR_TLBREHI) +#define read_gcsr_tlbrprmd() gcsr_read(LOONGARCH_CSR_TLBRPRMD) +#define write_gcsr_tlbrprmd(val) gcsr_write(val, LOONGARCH_CSR_TLBRPRMD) + +#define read_gcsr_directwin0() gcsr_read(LOONGARCH_CSR_DMWIN0) +#define write_gcsr_directwin0(val) gcsr_write(val, LOONGARCH_CSR_DMWIN0) +#define read_gcsr_directwin1() gcsr_read(LOONGARCH_CSR_DMWIN1) +#define write_gcsr_directwin1(val) gcsr_write(val, LOONGARCH_CSR_DMWIN1) +#define read_gcsr_directwin2() gcsr_read(LOONGARCH_CSR_DMWIN2) +#define write_gcsr_directwin2(val) gcsr_write(val, LOONGARCH_CSR_DMWIN2) +#define read_gcsr_directwin3() gcsr_read(LOONGARCH_CSR_DMWIN3) +#define write_gcsr_directwin3(val) gcsr_write(val, LOONGARCH_CSR_DMWIN3) + +/* Guest related CSRs */ +#define read_csr_gtlbc() csr_read64(LOONGARCH_CSR_GTLBC) +#define write_csr_gtlbc(val) csr_write64(val, LOONGARCH_CSR_GTLBC) +#define read_csr_trgp() csr_read64(LOONGARCH_CSR_TRGP) +#define read_csr_gcfg() csr_read64(LOONGARCH_CSR_GCFG) +#define write_csr_gcfg(val) csr_write64(val, LOONGARCH_CSR_GCFG) +#define read_csr_gstat() csr_read64(LOONGARCH_CSR_GSTAT) +#define write_csr_gstat(val) csr_write64(val, LOONGARCH_CSR_GSTAT) +#define read_csr_gintc() csr_read64(LOONGARCH_CSR_GINTC) +#define write_csr_gintc(val) csr_write64(val, LOONGARCH_CSR_GINTC) +#define read_csr_gcntc() csr_read64(LOONGARCH_CSR_GCNTC) +#define write_csr_gcntc(val) csr_write64(val, LOONGARCH_CSR_GCNTC) + +#define __BUILD_GCSR_OP(name) __BUILD_CSR_COMMON(gcsr_##name) + +__BUILD_GCSR_OP(llbctl) +__BUILD_GCSR_OP(tlbidx) +__BUILD_CSR_OP(gcfg) +__BUILD_CSR_OP(gstat) +__BUILD_CSR_OP(gtlbc) +__BUILD_CSR_OP(gintc) + +#define set_gcsr_estat(val) \ + gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT) +#define clear_gcsr_estat(val) \ + gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT) + +#define kvm_read_hw_gcsr(id) gcsr_read(id) +#define kvm_write_hw_gcsr(csr, id, val) gcsr_write(val, id) + +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v); +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v); + +int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu); + +#define kvm_save_hw_gcsr(csr, gid) (csr->csrs[gid] = gcsr_read(gid)) +#define kvm_restore_hw_gcsr(csr, gid) (gcsr_write(csr->csrs[gid], gid)) + +static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid) +{ + return csr->csrs[gid]; +} + +static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr, + int gid, unsigned long val) +{ + csr->csrs[gid] = val; +} + +static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr, + int gid, unsigned long val) +{ + csr->csrs[gid] |= val; +} + +static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr, + int gid, unsigned long mask, + unsigned long val) +{ + unsigned long _mask = mask; + + csr->csrs[gid] &= ~_mask; + csr->csrs[gid] |= val & _mask; +} +#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */ diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h new file mode 100644 index 
0000000000..3d23a656fe --- /dev/null +++ b/arch/loongarch/include/asm/kvm_vcpu.h @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#ifndef __ASM_LOONGARCH_KVM_VCPU_H__ +#define __ASM_LOONGARCH_KVM_VCPU_H__ + +#include +#include + +/* Controlled by 0x5 guest exst */ +#define CPU_SIP0 (_ULCAST_(1)) +#define CPU_SIP1 (_ULCAST_(1) << 1) +#define CPU_PMU (_ULCAST_(1) << 10) +#define CPU_TIMER (_ULCAST_(1) << 11) +#define CPU_IPI (_ULCAST_(1) << 12) + +/* Controlled by 0x52 guest exception VIP + * aligned to exst bit 5~12 + */ +#define CPU_IP0 (_ULCAST_(1)) +#define CPU_IP1 (_ULCAST_(1) << 1) +#define CPU_IP2 (_ULCAST_(1) << 2) +#define CPU_IP3 (_ULCAST_(1) << 3) +#define CPU_IP4 (_ULCAST_(1) << 4) +#define CPU_IP5 (_ULCAST_(1) << 5) +#define CPU_IP6 (_ULCAST_(1) << 6) +#define CPU_IP7 (_ULCAST_(1) << 7) + +#define MNSEC_PER_SEC (NSEC_PER_SEC >> 20) + +/* KVM_IRQ_LINE irq field index values */ +#define KVM_LOONGSON_IRQ_TYPE_SHIFT 24 +#define KVM_LOONGSON_IRQ_TYPE_MASK 0xff +#define KVM_LOONGSON_IRQ_VCPU_SHIFT 16 +#define KVM_LOONGSON_IRQ_VCPU_MASK 0xff +#define KVM_LOONGSON_IRQ_NUM_SHIFT 0 +#define KVM_LOONGSON_IRQ_NUM_MASK 0xffff + +/* Irq_type field */ +#define KVM_LOONGSON_IRQ_TYPE_CPU_IP 0 +#define KVM_LOONGSON_IRQ_TYPE_CPU_IO 1 +#define KVM_LOONGSON_IRQ_TYPE_HT 2 +#define KVM_LOONGSON_IRQ_TYPE_MSI 3 +#define KVM_LOONGSON_IRQ_TYPE_IOAPIC 4 +#define KVM_LOONGSON_IRQ_TYPE_ROUTE 5 + +/* Out-of-kernel GIC cpu interrupt injection irq_number field */ +#define KVM_LOONGSON_IRQ_CPU_IRQ 0 +#define KVM_LOONGSON_IRQ_CPU_FIQ 1 +#define KVM_LOONGSON_CPU_IP_NUM 8 + +typedef union loongarch_instruction larch_inst; +typedef int (*exit_handle_fn)(struct kvm_vcpu *); + +int _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst); +int _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst); +int _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run); +int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run); +int _kvm_emu_idle(struct kvm_vcpu *vcpu); +int _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu); +int _kvm_pending_timer(struct kvm_vcpu *vcpu); +int _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault); +void _kvm_deliver_intr(struct kvm_vcpu *vcpu); + +void kvm_own_fpu(struct kvm_vcpu *vcpu); +void kvm_lose_fpu(struct kvm_vcpu *vcpu); +void kvm_save_fpu(struct loongarch_fpu *fpu); +void kvm_restore_fpu(struct loongarch_fpu *fpu); +void kvm_restore_fcsr(struct loongarch_fpu *fpu); + +void kvm_acquire_timer(struct kvm_vcpu *vcpu); +void kvm_reset_timer(struct kvm_vcpu *vcpu); +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz); +void kvm_restore_timer(struct kvm_vcpu *vcpu); +void kvm_save_timer(struct kvm_vcpu *vcpu); + +int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq); +/* + * Loongarch KVM guest interrupt handling + */ +static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq) +{ + set_bit(irq, &vcpu->arch.irq_pending); + clear_bit(irq, &vcpu->arch.irq_clear); +} + +static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq) +{ + clear_bit(irq, &vcpu->arch.irq_pending); + set_bit(irq, &vcpu->arch.irq_clear); +} + +#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */ diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h index 10748a20a2..b9044c8dfa 100644 --- a/arch/loongarch/include/asm/loongarch.h +++ b/arch/loongarch/include/asm/loongarch.h @@ -269,6 +269,7 @@ __asm__(".macro parse_r 
var r\n\t" #define LOONGARCH_CSR_ECFG 0x4 /* Exception config */ #define CSR_ECFG_VS_SHIFT 16 #define CSR_ECFG_VS_WIDTH 3 +#define CSR_ECFG_VS_SHIFT_END (CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1) #define CSR_ECFG_VS (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT) #define CSR_ECFG_IM_SHIFT 0 #define CSR_ECFG_IM_WIDTH 14 @@ -357,13 +358,14 @@ __asm__(".macro parse_r var r\n\t" #define CSR_TLBLO1_V (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT) #define LOONGARCH_CSR_GTLBC 0x15 /* Guest TLB control */ -#define CSR_GTLBC_RID_SHIFT 16 -#define CSR_GTLBC_RID_WIDTH 8 -#define CSR_GTLBC_RID (_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT) +#define CSR_GTLBC_TGID_SHIFT 16 +#define CSR_GTLBC_TGID_WIDTH 8 +#define CSR_GTLBC_TGID_SHIFT_END (CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1) +#define CSR_GTLBC_TGID (_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT) #define CSR_GTLBC_TOTI_SHIFT 13 #define CSR_GTLBC_TOTI (_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT) -#define CSR_GTLBC_USERID_SHIFT 12 -#define CSR_GTLBC_USERID (_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT) +#define CSR_GTLBC_USETGID_SHIFT 12 +#define CSR_GTLBC_USETGID (_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT) #define CSR_GTLBC_GMTLBSZ_SHIFT 0 #define CSR_GTLBC_GMTLBSZ_WIDTH 6 #define CSR_GTLBC_GMTLBSZ (_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT) @@ -518,6 +520,7 @@ __asm__(".macro parse_r var r\n\t" #define LOONGARCH_CSR_GSTAT 0x50 /* Guest status */ #define CSR_GSTAT_GID_SHIFT 16 #define CSR_GSTAT_GID_WIDTH 8 +#define CSR_GSTAT_GID_SHIFT_END (CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1) #define CSR_GSTAT_GID (_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT) #define CSR_GSTAT_GIDBIT_SHIFT 4 #define CSR_GSTAT_GIDBIT_WIDTH 6 @@ -568,6 +571,12 @@ __asm__(".macro parse_r var r\n\t" #define CSR_GCFG_MATC_GUEST (_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF) #define CSR_GCFG_MATC_ROOT (_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF) #define CSR_GCFG_MATC_NEST (_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF) +#define CSR_GCFG_MATP_NEST_SHIFT 2 +#define CSR_GCFG_MATP_NEST (_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT) +#define CSR_GCFG_MATP_ROOT_SHIFT 1 +#define CSR_GCFG_MATP_ROOT (_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT) +#define CSR_GCFG_MATP_GUEST_SHIFT 0 +#define CSR_GCFG_MATP_GUEST (_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT) #define LOONGARCH_CSR_GINTC 0x52 /* Guest interrupt control */ #define CSR_GINTC_HC_SHIFT 16 diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h new file mode 100644 index 0000000000..17b28d94d5 --- /dev/null +++ b/arch/loongarch/kvm/trace.h @@ -0,0 +1,168 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_KVM_H + +#include +#include + +#undef TRACE_SYSTEM +#define TRACE_SYSTEM kvm + +/* + * Tracepoints for VM enters + */ +DECLARE_EVENT_CLASS(kvm_transition, + TP_PROTO(struct kvm_vcpu *vcpu), + TP_ARGS(vcpu), + TP_STRUCT__entry( + __field(unsigned long, pc) + ), + + TP_fast_assign( + __entry->pc = vcpu->arch.pc; + ), + + TP_printk("PC: 0x%08lx", + __entry->pc) +); + +DEFINE_EVENT(kvm_transition, kvm_enter, + TP_PROTO(struct kvm_vcpu *vcpu), + TP_ARGS(vcpu)); + +DEFINE_EVENT(kvm_transition, kvm_reenter, + TP_PROTO(struct kvm_vcpu *vcpu), + TP_ARGS(vcpu)); + +DEFINE_EVENT(kvm_transition, kvm_out, + TP_PROTO(struct kvm_vcpu *vcpu), + TP_ARGS(vcpu)); + +/* Further exit reasons */ +#define KVM_TRACE_EXIT_IDLE 64 +#define KVM_TRACE_EXIT_CACHE 65 +#define KVM_TRACE_EXIT_SIGNAL 66 + +/* Tracepoints for VM exits */ +#define kvm_trace_symbol_exit_types 
\ + { KVM_TRACE_EXIT_IDLE, "IDLE" }, \ + { KVM_TRACE_EXIT_CACHE, "CACHE" }, \ + { KVM_TRACE_EXIT_SIGNAL, "Signal" } + +TRACE_EVENT(kvm_exit_gspr, + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word), + TP_ARGS(vcpu, inst_word), + TP_STRUCT__entry( + __field(unsigned int, inst_word) + ), + + TP_fast_assign( + __entry->inst_word = inst_word; + ), + + TP_printk("inst word: 0x%08x", + __entry->inst_word) +); + + +DECLARE_EVENT_CLASS(kvm_exit, + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), + TP_ARGS(vcpu, reason), + TP_STRUCT__entry( + __field(unsigned long, pc) + __field(unsigned int, reason) + ), + + TP_fast_assign( + __entry->pc = vcpu->arch.pc; + __entry->reason = reason; + ), + + TP_printk("[%s]PC: 0x%08lx", + __print_symbolic(__entry->reason, + kvm_trace_symbol_exit_types), + __entry->pc) +); + +DEFINE_EVENT(kvm_exit, kvm_exit_idle, + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), + TP_ARGS(vcpu, reason)); + +DEFINE_EVENT(kvm_exit, kvm_exit_cache, + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), + TP_ARGS(vcpu, reason)); + +DEFINE_EVENT(kvm_exit, kvm_exit, + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), + TP_ARGS(vcpu, reason)); + +#define KVM_TRACE_AUX_RESTORE 0 +#define KVM_TRACE_AUX_SAVE 1 +#define KVM_TRACE_AUX_ENABLE 2 +#define KVM_TRACE_AUX_DISABLE 3 +#define KVM_TRACE_AUX_DISCARD 4 + +#define KVM_TRACE_AUX_FPU 1 + +#define kvm_trace_symbol_aux_op \ + { KVM_TRACE_AUX_RESTORE, "restore" }, \ + { KVM_TRACE_AUX_SAVE, "save" }, \ + { KVM_TRACE_AUX_ENABLE, "enable" }, \ + { KVM_TRACE_AUX_DISABLE, "disable" }, \ + { KVM_TRACE_AUX_DISCARD, "discard" } + +#define kvm_trace_symbol_aux_state \ + { KVM_TRACE_AUX_FPU, "FPU" } + +TRACE_EVENT(kvm_aux, + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op, + unsigned int state), + TP_ARGS(vcpu, op, state), + TP_STRUCT__entry( + __field(unsigned long, pc) + __field(u8, op) + __field(u8, state) + ), + + TP_fast_assign( + __entry->pc = vcpu->arch.pc; + __entry->op = op; + __entry->state = state; + ), + + TP_printk("%s %s PC: 0x%08lx", + __print_symbolic(__entry->op, + kvm_trace_symbol_aux_op), + __print_symbolic(__entry->state, + kvm_trace_symbol_aux_state), + __entry->pc) +); + +TRACE_EVENT(kvm_vpid_change, + TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid), + TP_ARGS(vcpu, vpid), + TP_STRUCT__entry( + __field(unsigned long, vpid) + ), + + TP_fast_assign( + __entry->vpid = vpid; + ), + + TP_printk("vpid: 0x%08lx", + __entry->vpid) +); + +#endif /* _TRACE_LOONGARCH64_KVM_H */ + +#undef TRACE_INCLUDE_PATH +#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm +#undef TRACE_INCLUDE_FILE +#define TRACE_INCLUDE_FILE trace + +/* This part must be outside protection */ +#include From patchwork Thu Aug 31 08:29:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137272 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp167160vqu; Thu, 31 Aug 2023 04:12:49 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEVc0ZOJWICrDhAF0fQBWp+DGsGq7wXVr44RqWlPm17/TqlKfOI61YPGWhEIDKlFX1S7IKP X-Received: by 2002:a05:651c:209:b0:2bd:1908:4433 with SMTP id y9-20020a05651c020900b002bd19084433mr3597748ljn.50.1693480369678; Thu, 31 Aug 2023 04:12:49 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693480369; cv=none; d=google.com; s=arc-20160816; b=jan8fbfAl0qKFHTbqmO1elyg5UQxLngEjVc8+gFDPh+7UhBD+Z/+Y4WFA5OpyqETgU 8Lr0f2LrVPkW8Yo+SyRB3c8LepzGAz+KO7wvkofDNUugO8e2HB+KsPgDObpeXBKg6zo/ 
From: Tianrui Zhao <zhaotianrui@loongson.cn>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo@loongson.cn, Xi Ruoyao, zhaotianrui@loongson.cn
Subject: [PATCH v20 06/30] LoongArch: KVM: Implement vcpu create and destroy interface
Date: Thu, 31 Aug 2023 16:29:56 +0800
Message-Id: <20230831083020.2187109-7-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775742871678784398 X-GMAIL-MSGID: 1775742871678784398 Implement vcpu create and destroy interface, saving some info into vcpu arch structure such as vcpu exception entrance, vcpu enter guest pointer, etc. Init vcpu timer and set address translation mode when vcpu create. Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/vcpu.c | 87 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 87 insertions(+) create mode 100644 arch/loongarch/kvm/vcpu.c diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c new file mode 100644 index 0000000000..545b18cd1c --- /dev/null +++ b/arch/loongarch/kvm/vcpu.c @@ -0,0 +1,87 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include +#include +#include +#include +#include + +#define CREATE_TRACE_POINTS +#include "trace.h" + +int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) +{ + return 0; +} + +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +{ + unsigned long timer_hz; + struct loongarch_csrs *csr; + + vcpu->arch.vpid = 0; + + hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED); + vcpu->arch.swtimer.function = kvm_swtimer_wakeup; + + vcpu->arch.guest_eentry = (unsigned long)kvm_loongarch_ops->guest_eentry; + vcpu->arch.handle_exit = _kvm_handle_exit; + vcpu->arch.csr = kzalloc(sizeof(struct loongarch_csrs), GFP_KERNEL); + if (!vcpu->arch.csr) + return -ENOMEM; + + /* + * kvm all exceptions share one exception entry, and host <-> guest switch + * also switch excfg.VS field, keep host excfg.VS info here + */ + vcpu->arch.host_ecfg = (read_csr_ecfg() & CSR_ECFG_VS); + + /* Init */ + vcpu->arch.last_sched_cpu = -1; + + /* + * Initialize guest register state to valid architectural reset state. + */ + timer_hz = calc_const_freq(); + kvm_init_timer(vcpu, timer_hz); + + /* Set Initialize mode for GUEST */ + csr = vcpu->arch.csr; + kvm_write_sw_gcsr(csr, LOONGARCH_CSR_CRMD, CSR_CRMD_DA); + + /* Set cpuid */ + kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TMID, vcpu->vcpu_id); + + /* start with no pending virtual guest interrupts */ + csr->csrs[LOONGARCH_CSR_GINTC] = 0; + + return 0; +} + +void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) +{ +} + +void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) +{ + int cpu; + struct kvm_context *context; + + hrtimer_cancel(&vcpu->arch.swtimer); + kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); + kfree(vcpu->arch.csr); + + /* + * If the vCPU is freed and reused as another vCPU, we don't want the + * matching pointer wrongly hanging around in last_vcpu. 
+ */ + for_each_possible_cpu(cpu) { + context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu); + if (context->last_vcpu == vcpu) + context->last_vcpu = NULL; + } +} From patchwork Thu Aug 31 08:29:59 2023
[2620:137:e000::1:20]) by mx.google.com with ESMTP id j193-20020a638bca000000b00566069f730esi1391557pge.758.2023.08.31.08.55.09; Thu, 31 Aug 2023 08:55:27 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343885AbjHaIbC (ORCPT + 99 others); Thu, 31 Aug 2023 04:31:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58298 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244692AbjHaIao (ORCPT ); Thu, 31 Aug 2023 04:30:44 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 8E9E5CE4; Thu, 31 Aug 2023 01:30:36 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8DxPOumT_BkQV4dAA--.54717S3; Thu, 31 Aug 2023 16:30:30 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S11; Thu, 31 Aug 2023 16:30:29 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers Date: Thu, 31 Aug 2023 16:29:59 +0800 Message-Id: <20230831083020.2187109-10-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S11 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775760653748518731 X-GMAIL-MSGID: 1775760653748518731 Implement LoongArch vcpu get registers and set registers operations, it is called when user space use the ioctl interface to get or set regs. 
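As a rough sketch of how user space would drive this path (not part of the patch): the generic KVM one-reg ioctls carry a 64-bit register ID plus a user pointer, and the CSR/counter IDs come from the KVM_REG_LOONGARCH_* encoding added earlier in this series. The helper names below are made up for illustration.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Illustrative helpers only: vcpu_fd is an open KVM vCPU file descriptor,
 * and id is a LoongArch register ID (e.g. KVM_REG_LOONGARCH_CSR | csr index,
 * or KVM_REG_LOONGARCH_COUNTER) with the KVM_REG_SIZE_U64 size bits set.
 */
static int vcpu_get_one_reg(int vcpu_fd, uint64_t id, uint64_t *val)
{
	struct kvm_one_reg reg = {
		.id   = id,
		.addr = (uint64_t)(unsigned long)val,
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}

static int vcpu_set_one_reg(int vcpu_fd, uint64_t id, uint64_t val)
{
	struct kvm_one_reg reg = {
		.id   = id,
		.addr = (uint64_t)(unsigned long)&val,
	};

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

On the kernel side these requests land in _kvm_get_reg()/_kvm_set_reg() below, which check the U64 size field and then dispatch on the register class.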
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/csr_ops.S | 67 ++++++++++++ arch/loongarch/kvm/vcpu.c | 206 +++++++++++++++++++++++++++++++++++ 2 files changed, 273 insertions(+) create mode 100644 arch/loongarch/kvm/csr_ops.S diff --git a/arch/loongarch/kvm/csr_ops.S b/arch/loongarch/kvm/csr_ops.S new file mode 100644 index 0000000000..53e44b23a5 --- /dev/null +++ b/arch/loongarch/kvm/csr_ops.S @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include + .text + .cfi_sections .debug_frame +/* + * we have splited hw gcsr into three parts, so we can + * calculate the code offset by gcsrid and jump here to + * run the gcsrwr instruction. + */ +SYM_FUNC_START(set_hw_gcsr) + addi.d t0, a0, 0 + addi.w t1, zero, 96 + bltu t1, t0, 1f + la.pcrel t0, 10f + alsl.d t0, a0, t0, 3 + jr t0 +1: + addi.w t1, a0, -128 + addi.w t2, zero, 15 + bltu t2, t1, 2f + la.pcrel t0, 11f + alsl.d t0, t1, t0, 3 + jr t0 +2: + addi.w t1, a0, -384 + addi.w t2, zero, 3 + bltu t2, t1, 3f + la.pcrel t0, 12f + alsl.d t0, t1, t0, 3 + jr t0 +3: + addi.w a0, zero, -1 + jr ra + +/* range from 0x0(KVM_CSR_CRMD) to 0x60(KVM_CSR_LLBCTL) */ +10: + csrnum = 0 + .rept 0x61 + gcsrwr a1, csrnum + jr ra + csrnum = csrnum + 1 + .endr + +/* range from 0x80(KVM_CSR_IMPCTL1) to 0x8f(KVM_CSR_TLBRPRMD) */ +11: + csrnum = 0x80 + .rept 0x10 + gcsrwr a1, csrnum + jr ra + csrnum = csrnum + 1 + .endr + +/* range from 0x180(KVM_CSR_DMWIN0) to 0x183(KVM_CSR_DMWIN3) */ +12: + csrnum = 0x180 + .rept 0x4 + gcsrwr a1, csrnum + jr ra + csrnum = csrnum + 1 + .endr + +SYM_FUNC_END(set_hw_gcsr) diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index ca4e8d074e..f17422a942 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -13,6 +13,212 @@ #define CREATE_TRACE_POINTS #include "trace.h" +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v) +{ + unsigned long val; + struct loongarch_csrs *csr = vcpu->arch.csr; + + if (get_gcsr_flag(id) & INVALID_GCSR) + return -EINVAL; + + if (id == LOONGARCH_CSR_ESTAT) { + /* interrupt status IP0 -- IP7 from GINTC */ + val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 0xff; + *v = kvm_read_sw_gcsr(csr, id) | (val << 2); + return 0; + } + + /* + * get software csr state if csrid is valid, since software + * csr state is consistent with hardware + */ + *v = kvm_read_sw_gcsr(csr, id); + + return 0; +} + +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 val) +{ + struct loongarch_csrs *csr = vcpu->arch.csr; + int ret = 0, gintc; + + if (get_gcsr_flag(id) & INVALID_GCSR) + return -EINVAL; + + if (id == LOONGARCH_CSR_ESTAT) { + /* estat IP0~IP7 inject through guestexcept */ + gintc = (val >> 2) & 0xff; + write_csr_gintc(gintc); + kvm_set_sw_gcsr(csr, LOONGARCH_CSR_GINTC, gintc); + + gintc = val & ~(0xffUL << 2); + write_gcsr_estat(gintc); + kvm_set_sw_gcsr(csr, LOONGARCH_CSR_ESTAT, gintc); + + return ret; + } + + if (get_gcsr_flag(id) & HW_GCSR) { + set_hw_gcsr(id, val); + /* write sw gcsr to keep consistent with hardware */ + kvm_write_sw_gcsr(csr, id, val); + } else + kvm_write_sw_gcsr(csr, id, val); + + return ret; +} + +static int _kvm_get_one_reg(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg, s64 *v) +{ + int reg_idx, ret = 0; + + if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) { + reg_idx = KVM_GET_IOC_CSRIDX(reg->id); + ret = _kvm_getcsr(vcpu, reg_idx, v); + } else if (reg->id == KVM_REG_LOONGARCH_COUNTER) + *v = 
drdtime() + vcpu->kvm->arch.time_offset; + else + ret = -EINVAL; + + return ret; +} + +static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + int ret = -EINVAL; + s64 v; + + if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64) + return ret; + + if (_kvm_get_one_reg(vcpu, reg, &v)) + return ret; + + return put_user(v, (u64 __user *)(long)reg->addr); +} + +static int _kvm_set_one_reg(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg, + s64 v) +{ + int ret = 0; + unsigned long flags; + u64 val; + int reg_idx; + + val = v; + if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) { + reg_idx = KVM_GET_IOC_CSRIDX(reg->id); + ret = _kvm_setcsr(vcpu, reg_idx, val); + } else if (reg->id == KVM_REG_LOONGARCH_COUNTER) { + local_irq_save(flags); + /* + * gftoffset is relative with board, not vcpu + * only set for the first time for smp system + */ + if (vcpu->vcpu_id == 0) + vcpu->kvm->arch.time_offset = (signed long)(v - drdtime()); + write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset); + local_irq_restore(flags); + } else if (reg->id == KVM_REG_LOONGARCH_VCPU_RESET) { + kvm_reset_timer(vcpu); + memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending)); + memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear)); + } else + ret = -EINVAL; + + return ret; +} + +static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + s64 v; + int ret = -EINVAL; + + if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64) + return ret; + + if (get_user(v, (u64 __user *)(long)reg->addr)) + return ret; + + return _kvm_set_one_reg(vcpu, reg, v); +} + +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, + struct kvm_sregs *sregs) +{ + return -ENOIOCTLCMD; +} + +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, + struct kvm_sregs *sregs) +{ + return -ENOIOCTLCMD; +} + +int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) +{ + int i; + + vcpu_load(vcpu); + + for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++) + regs->gpr[i] = vcpu->arch.gprs[i]; + + regs->pc = vcpu->arch.pc; + + vcpu_put(vcpu); + return 0; +} + +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) +{ + int i; + + vcpu_load(vcpu); + + for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++) + vcpu->arch.gprs[i] = regs->gpr[i]; + vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. 
*/ + vcpu->arch.pc = regs->pc; + + vcpu_put(vcpu); + return 0; +} + +long kvm_arch_vcpu_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + struct kvm_vcpu *vcpu = filp->private_data; + void __user *argp = (void __user *)arg; + long r; + + vcpu_load(vcpu); + + switch (ioctl) { + case KVM_SET_ONE_REG: + case KVM_GET_ONE_REG: { + struct kvm_one_reg reg; + + r = -EFAULT; + if (copy_from_user(®, argp, sizeof(reg))) + break; + if (ioctl == KVM_SET_ONE_REG) + r = _kvm_set_reg(vcpu, ®); + else + r = _kvm_get_reg(vcpu, ®); + break; + } + default: + r = -ENOIOCTLCMD; + break; + } + + vcpu_put(vcpu); + return r; +} + int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) { return 0; From patchwork Thu Aug 31 08:30:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137265 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp148705vqu; Thu, 31 Aug 2023 03:32:49 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEyhx2pSyvaci8RXHZ9hz478NJ35mbDnrTDzhVlo3vneM6t35wuu11egvaA8KjsHDoLyWr1 X-Received: by 2002:a05:6808:92:b0:3a7:2570:dcfc with SMTP id s18-20020a056808009200b003a72570dcfcmr4240094oic.43.1693477969386; Thu, 31 Aug 2023 03:32:49 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693477969; cv=none; d=google.com; s=arc-20160816; b=XSOH6hZlpC0a2QEA5/xZ6CXruS961fPgOW1MbtxzkaY8HZiJvFoAcTFzqV2z21zuiH okbK0rpqGWqW7NIT4ozagvD4TDqULl6eFcbtaOkcx+PYeqLJ9sIdL9BexG9oyjL35UNz YHZ5KW7F7gVWdJa1MGq80Yj8WwYXsAygwGotM+qddHRnBoE6HDblpN1ftELH+YLnaPFd aJvC4weCZ/BYQQmWtQshQtk0aFae5J/Y3I+b06ChYHuicZO43dzrAtI0xYUFNEojmYXa auDd0av9Ro4H7Sbh3bBspYNNDaqgypxIKueAGCiDpTU75wsfYNNzPXTJaFdVFda5FPyc 2lGg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=sXBGuknOzdUBJgZ/wOkIuo0/3VJ7kUs6zXQBwS4qZoQ=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=ukXz+885NSTbNAsMKoN/TSZ47EXE8dfxrzDTmX8NmRgA8t8irg7CtsrKLGYuSQZXW0 8YQw9sN9pya3Ud/ZEf9hXxUEaRZouClQYiJty9C+KCuCEX1Rpi1nb66dAWIXzUsmawmc xQkJdFaCHbQK/NjpV05nYu06Ft0wOyspFSnlzhUKRvZLHCYQvFWnOc8UvcG+LdKAFqb1 V/x5Pbps7tqbt7Y6986V2uNl6e5i2tIuh+6iUHxOF5PuiGZHHRTlmDgeDKFhjRvkZr2f Au8jZqc+Odi7G8cnL8mUW8+QRLrANsAd2dTtTZpftZtdm45ibXRqpbe8E4N+MiilRNGn EiGw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id s21-20020a656915000000b0055b640a6b3csi1072404pgq.884.2023.08.31.03.32.24; Thu, 31 Aug 2023 03:32:49 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236152AbjHaIbH (ORCPT + 99 others); Thu, 31 Aug 2023 04:31:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58334 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S245672AbjHaIan (ORCPT ); Thu, 31 Aug 2023 04:30:43 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id B7431CF2; Thu, 31 Aug 2023 01:30:35 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8DxBfGmT_BkPl4dAA--.60183S3; Thu, 31 Aug 2023 16:30:30 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S12; Thu, 31 Aug 2023 16:30:29 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Date: Thu, 31 Aug 2023 16:30:00 +0800 Message-Id: <20230831083020.2187109-11-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S12 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775740355128749087 X-GMAIL-MSGID: 1775740355128749087 Implement LoongArch vcpu KVM_ENABLE_CAP ioctl interface. Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/vcpu.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index f17422a942..be0c17a433 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -187,6 +187,16 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) return 0; } +static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu, + struct kvm_enable_cap *cap) +{ + /* + * FPU is enable by default, do not support any other caps, + * and later we will support such as LSX cap. 
+ */ + return -EINVAL; +} + long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -210,6 +220,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp, r = _kvm_get_reg(vcpu, ®); break; } + case KVM_ENABLE_CAP: { + struct kvm_enable_cap cap; + + r = -EFAULT; + if (copy_from_user(&cap, argp, sizeof(cap))) + break; + r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap); + break; + } default: r = -ENOIOCTLCMD; break; From patchwork Thu Aug 31 08:30:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137259 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp127171vqu; Thu, 31 Aug 2023 02:42:07 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGHhAQZwwMS3ift3UH+zdrXqgKTXm/809NZbi0sV4Hkp9DYLaBx1KVqTG6BZ139aOamMzAs X-Received: by 2002:a17:906:3299:b0:9a5:8155:6de with SMTP id 25-20020a170906329900b009a5815506demr3578170ejw.45.1693474927580; Thu, 31 Aug 2023 02:42:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693474927; cv=none; d=google.com; s=arc-20160816; b=GbVscLwkzQMLb0ESkJwNzIIDWbAeiItUh98pqLCWq9lNtunC+c4msS1lBDW3tvDou2 8rdPElbQ4gmrVi/qi40hPNppo0X5xS2nSBi/QyXedm585DxF04RgHGhiSbc8kgOKlypg AflQApB+hopNwzDbKgwxuzBi2R202RYvUlok6p3Uq8t/zE/JJVRqlLZH023AaOJIOZa8 2paKjxBrsfFdQxbjsVjoqUmQSGAN2jTINui0t28LqfdOYqEuIn67os8gONKLHz7LXfwG gF7EnQVhXEXd21Y/xT54x+aktJikBFcss02A8+MkcWrhU2SkVjgWO5Nbx0jnB4TLxI3X dZdg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=2sEAVxksvfcmbQphWB6a4uSl8iiSMwUT65LbOXx2mE0=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=J+WLgseTz7nQngdYeqAprFKLbtz8eR8q3ep1w/iynHO9tuU6lVsds5K5E8c7gq7VXg hUhIMZPYE1G0MhQAV9bAFrPLtmce5QIHlFkstPXejz9/gHsETlMsIcgUYtCRXcx3rCWE lF4KA2jW4vAehBj96asLiTYyqqvbM8ZZYKafv1DcPkwOIdxwq+8yAbqV2ct1fSZeRFph b0+/f6M4qqstsH7uDdbSsqahvVoU9RAGVyt1WhsLVFAhNPQC/dmgwfEF/3sWIkN/Zduq FQolzdi+nNPbebAsRahoLEN9NoaY4FbTm0zyyFyeIjsZJVJDT5aD5BkCf04s+fHTlg8S BVig== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id rv20-20020a17090710d400b0098d7e44a637si789730ejb.794.2023.08.31.02.41.30; Thu, 31 Aug 2023 02:42:07 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345204AbjHaIb1 (ORCPT + 99 others); Thu, 31 Aug 2023 04:31:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49746 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242341AbjHaIax (ORCPT ); Thu, 31 Aug 2023 04:30:53 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 5B84BE48; Thu, 31 Aug 2023 01:30:38 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8DxFeinT_BkS14dAA--.1000S3; Thu, 31 Aug 2023 16:30:31 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S13; Thu, 31 Aug 2023 16:30:30 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 11/30] LoongArch: KVM: Implement fpu related operations for vcpu Date: Thu, 31 Aug 2023 16:30:01 +0800 Message-Id: <20230831083020.2187109-12-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S13 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775737165214238058 X-GMAIL-MSGID: 1775737165214238058 Implement LoongArch fpu related interface for vcpu, such as get fpu, set fpu, own fpu and lose fpu, etc. 
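For reference, a minimal user-space sketch of the get/set half of this interface, assuming a vCPU fd and the LoongArch struct kvm_fpu layout (fcsr, fcc, fpr[]) introduced earlier in the series; the helper name is invented and error handling is minimal.

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Read the guest FPU state, clear the FP control/status register and
 * write the state back.  Purely illustrative.
 */
static int vcpu_clear_guest_fcsr(int vcpu_fd)
{
	struct kvm_fpu fpu;

	if (ioctl(vcpu_fd, KVM_GET_FPU, &fpu) < 0)
		return -1;

	fpu.fcsr = 0;

	return ioctl(vcpu_fd, KVM_SET_FPU, &fpu);
}

The own/lose pair, by contrast, is internal: kvm_own_fpu() enables the FPU for the guest and restores its context, while kvm_lose_fpu() saves the context and disables the FPU again, for example when the vCPU is put.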
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/vcpu.c | 60 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 60 insertions(+) diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index be0c17a433..2094afcfcd 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -238,6 +238,66 @@ long kvm_arch_vcpu_ioctl(struct file *filp, return r; } +int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) +{ + int i = 0; + + /* no need vcpu_load and vcpu_put */ + fpu->fcsr = vcpu->arch.fpu.fcsr; + fpu->fcc = vcpu->arch.fpu.fcc; + for (i = 0; i < NUM_FPU_REGS; i++) + memcpy(&fpu->fpr[i], &vcpu->arch.fpu.fpr[i], FPU_REG_WIDTH / 64); + + return 0; +} + +int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) +{ + int i = 0; + + /* no need vcpu_load and vcpu_put */ + vcpu->arch.fpu.fcsr = fpu->fcsr; + vcpu->arch.fpu.fcc = fpu->fcc; + for (i = 0; i < NUM_FPU_REGS; i++) + memcpy(&vcpu->arch.fpu.fpr[i], &fpu->fpr[i], FPU_REG_WIDTH / 64); + + return 0; +} + +/* Enable FPU for guest and restore context */ +void kvm_own_fpu(struct kvm_vcpu *vcpu) +{ + preempt_disable(); + + /* + * Enable FPU for guest + */ + set_csr_euen(CSR_EUEN_FPEN); + + kvm_restore_fpu(&vcpu->arch.fpu); + vcpu->arch.aux_inuse |= KVM_LARCH_FPU; + trace_kvm_aux(vcpu, KVM_TRACE_AUX_RESTORE, KVM_TRACE_AUX_FPU); + + preempt_enable(); +} + +/* Save and disable FPU */ +void kvm_lose_fpu(struct kvm_vcpu *vcpu) +{ + preempt_disable(); + + if (vcpu->arch.aux_inuse & KVM_LARCH_FPU) { + kvm_save_fpu(&vcpu->arch.fpu); + vcpu->arch.aux_inuse &= ~KVM_LARCH_FPU; + trace_kvm_aux(vcpu, KVM_TRACE_AUX_SAVE, KVM_TRACE_AUX_FPU); + + /* Disable FPU */ + clear_csr_euen(CSR_EUEN_FPEN); + } + + preempt_enable(); +} + int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) { return 0; From patchwork Thu Aug 31 08:30:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137262 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp137112vqu; Thu, 31 Aug 2023 03:06:01 -0700 (PDT) X-Google-Smtp-Source: AGHT+IE4CNmMLbeqMCzBV6bgJF4Ebw386dwctptm/BgVKtQyyxox+basFXQ3bHHvNhWALtQ2DK4r X-Received: by 2002:a17:906:186:b0:9a5:9038:b1e7 with SMTP id 6-20020a170906018600b009a59038b1e7mr3129360ejb.36.1693476360859; Thu, 31 Aug 2023 03:06:00 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693476360; cv=none; d=google.com; s=arc-20160816; b=t9uZO9r//luMKWfocbsLOu7LzAqY0FHnKsnLclykpT2cYjO13qxinhzaGe8BFwJURt J5nDDShBF44HwAYzWtinOGgtn377SFgKbb/6O0njIZXw6ZkegkovBmWMwSFbwW4AV+BF qFUlmTdScmVAUkmPJ8iZw7ZFA4N9eJhaZzzNx0OusfIJgVNEOcc2Kq58uzXWGMJMQy00 iav2PsHEdA62mSIhSJXQVUI8RKw1hSQcRTxsvI0lx3zLlOBv7dT3Tiupa1sZ26in/o+S WEnE1hNcGr7AkENIMBp1MSGCr7QqTPvzWHiGiZNYJMChZt9KzfWYqpdh88noOwKbyvcn SlIQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=Exp/yt+UA5uydq6BJz8HO/idtq+IV+5L8JnRgnSl1Hc=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=iCZjx8v0Nwzv19XfHRL8kD9KeRpuE4xIbfbMJovjdNb97G7v8qUZAngYTXWLgp5TPH X+l3xFMRtcZsn957DG2eEUs6aYBiDLXSDdL5fBq4Bz81wsYho3zMdW/t8F4pqNWYb9qZ zFSBY4f9ngMAd23NSVFxlW87RS4ibc9qvqDHUqoo0ydtwDZP0DCsjJ/gCmM9kCI1S6FQ izPsqVHD4QQ/Q6A5LKot4vWCmKMaX32LzQcsNK8AoGPKej+hiZ90AQgSGcdDsonjnTcN ZU+yZm/IPexAbp4AVava28/x+UzZtaFHaKBNuWT6VUxZWOFYegsEjSxLt6eSuRA54Kz1 
biyw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id s21-20020a170906961500b009a18d9d3fa7si737637ejx.669.2023.08.31.03.05.29; Thu, 31 Aug 2023 03:06:00 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230087AbjHaIbX (ORCPT + 99 others); Thu, 31 Aug 2023 04:31:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49762 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S245599AbjHaIax (ORCPT ); Thu, 31 Aug 2023 04:30:53 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 061E1E50; Thu, 31 Aug 2023 01:30:38 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8DxxPCoT_BkWF4dAA--.60808S3; Thu, 31 Aug 2023 16:30:32 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S14; Thu, 31 Aug 2023 16:30:30 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 12/30] LoongArch: KVM: Implement vcpu interrupt operations Date: Thu, 31 Aug 2023 16:30:02 +0800 Message-Id: <20230831083020.2187109-13-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S14 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775738668721409331 X-GMAIL-MSGID: 1775738668721409331 Implement vcpu interrupt operations such as vcpu set irq and vcpu clear irq, using set_gcsr_estat to set irq which is parsed by the irq bitmap. 
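A small user-space sketch of the resulting KVM_INTERRUPT contract may help: per kvm_vcpu_ioctl_interrupt() below, a positive irq number queues the interrupt and its negation dequeues it. The helper here is made up for illustration.

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Assert (level != 0) or deassert (level == 0) a guest interrupt line. */
static int loongarch_vcpu_set_irq(int vcpu_fd, int irq_line, int level)
{
	struct kvm_interrupt irq = {
		.irq = level ? irq_line : -irq_line,
	};

	return ioctl(vcpu_fd, KVM_INTERRUPT, &irq);
}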
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/interrupt.c | 113 +++++++++++++++++++++++++++++++++ arch/loongarch/kvm/vcpu.c | 37 +++++++++++ 2 files changed, 150 insertions(+) create mode 100644 arch/loongarch/kvm/interrupt.c diff --git a/arch/loongarch/kvm/interrupt.c b/arch/loongarch/kvm/interrupt.c new file mode 100644 index 0000000000..14e19653b2 --- /dev/null +++ b/arch/loongarch/kvm/interrupt.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include +#include +#include + +static unsigned int int_to_coreint[EXCCODE_INT_NUM] = { + [INT_TI] = CPU_TIMER, + [INT_IPI] = CPU_IPI, + [INT_SWI0] = CPU_SIP0, + [INT_SWI1] = CPU_SIP1, + [INT_HWI0] = CPU_IP0, + [INT_HWI1] = CPU_IP1, + [INT_HWI2] = CPU_IP2, + [INT_HWI3] = CPU_IP3, + [INT_HWI4] = CPU_IP4, + [INT_HWI5] = CPU_IP5, + [INT_HWI6] = CPU_IP6, + [INT_HWI7] = CPU_IP7, +}; + +static int _kvm_irq_deliver(struct kvm_vcpu *vcpu, unsigned int priority) +{ + unsigned int irq = 0; + + clear_bit(priority, &vcpu->arch.irq_pending); + if (priority < EXCCODE_INT_NUM) + irq = int_to_coreint[priority]; + + switch (priority) { + case INT_TI: + case INT_IPI: + case INT_SWI0: + case INT_SWI1: + set_gcsr_estat(irq); + break; + + case INT_HWI0 ... INT_HWI7: + set_csr_gintc(irq); + break; + + default: + break; + } + + return 1; +} + +static int _kvm_irq_clear(struct kvm_vcpu *vcpu, unsigned int priority) +{ + unsigned int irq = 0; + + clear_bit(priority, &vcpu->arch.irq_clear); + if (priority < EXCCODE_INT_NUM) + irq = int_to_coreint[priority]; + + switch (priority) { + case INT_TI: + case INT_IPI: + case INT_SWI0: + case INT_SWI1: + clear_gcsr_estat(irq); + break; + + case INT_HWI0 ... INT_HWI7: + clear_csr_gintc(irq); + break; + + default: + break; + } + + return 1; +} + +void _kvm_deliver_intr(struct kvm_vcpu *vcpu) +{ + unsigned long *pending = &vcpu->arch.irq_pending; + unsigned long *pending_clr = &vcpu->arch.irq_clear; + unsigned int priority; + + if (!(*pending) && !(*pending_clr)) + return; + + if (*pending_clr) { + priority = __ffs(*pending_clr); + while (priority <= INT_IPI) { + _kvm_irq_clear(vcpu, priority); + priority = find_next_bit(pending_clr, + BITS_PER_BYTE * sizeof(*pending_clr), + priority + 1); + } + } + + if (*pending) { + priority = __ffs(*pending); + while (priority <= INT_IPI) { + _kvm_irq_deliver(vcpu, priority); + priority = find_next_bit(pending, + BITS_PER_BYTE * sizeof(*pending), + priority + 1); + } + } +} + +int _kvm_pending_timer(struct kvm_vcpu *vcpu) +{ + return test_bit(INT_TI, &vcpu->arch.irq_pending); +} diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index 2094afcfcd..9e36482c53 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -298,6 +298,43 @@ void kvm_lose_fpu(struct kvm_vcpu *vcpu) preempt_enable(); } +int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq) +{ + int intr = (int)irq->irq; + + if (intr > 0) + _kvm_queue_irq(vcpu, intr); + else if (intr < 0) + _kvm_dequeue_irq(vcpu, -intr); + else { + kvm_err("%s: invalid interrupt ioctl %d\n", __func__, irq->irq); + return -EINVAL; + } + + kvm_vcpu_kick(vcpu); + return 0; +} + +long kvm_arch_vcpu_async_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + struct kvm_vcpu *vcpu = filp->private_data; + void __user *argp = (void __user *)arg; + + if (ioctl == KVM_INTERRUPT) { + struct kvm_interrupt irq; + + if (copy_from_user(&irq, argp, sizeof(irq))) + return 
-EFAULT; + + kvm_debug("[%d] %s: irq: %d\n", vcpu->vcpu_id, __func__, irq.irq); + + return kvm_vcpu_ioctl_interrupt(vcpu, &irq); + } + + return -ENOIOCTLCMD; +} + int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) { return 0; From patchwork Thu Aug 31 08:30:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137336 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp429894vqu; Thu, 31 Aug 2023 11:43:17 -0700 (PDT) X-Google-Smtp-Source: AGHT+IE9LndzHUxVfg4LlL5K+m9jwC5sUC/flVQdOpynwbGclJUHx9p7/ZETvAjS3kncjWfiMh7o X-Received: by 2002:a05:6512:70b:b0:4ff:8f71:9713 with SMTP id b11-20020a056512070b00b004ff8f719713mr63662lfs.42.1693507396678; Thu, 31 Aug 2023 11:43:16 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693507396; cv=none; d=google.com; s=arc-20160816; b=N/QNPTfEi26AEbN3MAqhe4yA+aHBUtqF8j+RXEnd0tRd2wicAEvnQ0xrTnnbqMUJYj nlLvThgGmnihSp23JJ/IqOJfpprr8B16HAeqqetDoAZ5H+CYVi2BqF5IMPvH+Z9R+qM/ brmfiPDsvB8GWC8j97/1TvZGMlPZPw/C9LLCE+V2BnAWIjmN93y/wn4X80Kainotuilx 1hgpg0aGrpUM+cglK8V3ov4KA//pS0PX/IX0P1dj8q69NuMsGKDep3g0ZNEP0/wDc4y/ FtwIO8FUVsbXcNax3mPviK1Hko7psGUYL3AuhbgFJOlLInO7IZ0jdnTsuuqjbSP8n5XJ S2bQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=O1GMA22+PpdCcNPNRnb0tlwl2hzGdrWS4aSBQxMSRYA=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=mPMef8b6dSRFWc6J6dDL/JIefBs+fmBbpTkUJPWvKRiF6rJ+KmDq64ljM8FC+xfjfF JWPgwHx5uqQHFkO5eZzMkrkcTlCn/uOR7p8yesV8h/KIHmNAZHSn+cVii+7eVwN3Gik6 6K3ab9cUgaWmlJgcKT+worKwfZ2U0ccuM74vtOoLDiyJ6jYCZk466/RfYodefn86H25/ pJxnnyT5W2OmZv29+foxvSBueYjYuq7uNkJtAPPzDSqhcRIFJ+28GHObftu5BznEyX9t lAjatA6Gv0sOoHkZwuRNkuxIscUGRUwM53PLqO12sHUsI1ziaylQ0OZhDkUDzvfg5euF CBzg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id d19-20020a50fb13000000b00529fa041d40si1494110edq.353.2023.08.31.11.42.34; Thu, 31 Aug 2023 11:43:16 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344263AbjHaIbU (ORCPT + 99 others); Thu, 31 Aug 2023 04:31:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58358 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343728AbjHaIax (ORCPT ); Thu, 31 Aug 2023 04:30:53 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id BE4BCE4A; Thu, 31 Aug 2023 01:30:38 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8AxZ+ioT_BkX14dAA--.24079S3; Thu, 31 Aug 2023 16:30:32 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S15; Thu, 31 Aug 2023 16:30:31 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 13/30] LoongArch: KVM: Implement misc vcpu related interfaces Date: Thu, 31 Aug 2023 16:30:03 +0800 Message-Id: <20230831083020.2187109-14-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S15 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775771211950842687 X-GMAIL-MSGID: 1775771211950842687 Implement some misc vcpu relaterd interfaces, such as vcpu runnable, vcpu should kick, vcpu dump regs, etc. 
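One of the interfaces added here, the MP-state pair, is reachable from user space through the generic ioctls. A hedged sketch (vcpu_fd and the helper name are illustrative only):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Force the vCPU back to the runnable state; per the handler below,
 * KVM_MP_STATE_RUNNABLE is the only state accepted on the set side.
 */
static int vcpu_force_runnable(int vcpu_fd)
{
	struct kvm_mp_state state;

	if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &state) < 0)
		return -1;

	if (state.mp_state == KVM_MP_STATE_RUNNABLE)
		return 0;

	state.mp_state = KVM_MP_STATE_RUNNABLE;
	return ioctl(vcpu_fd, KVM_SET_MP_STATE, &state);
}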
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/vcpu.c | 105 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 105 insertions(+) diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index 9e36482c53..f170dbf539 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -13,6 +13,111 @@ #define CREATE_TRACE_POINTS #include "trace.h" +int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) +{ + return !!(vcpu->arch.irq_pending) && + vcpu->arch.mp_state.mp_state == KVM_MP_STATE_RUNNABLE; +} + +int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) +{ + return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE; +} + +bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) +{ + return false; +} + +vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf) +{ + return VM_FAULT_SIGBUS; +} + +int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, + struct kvm_translation *tr) +{ + return -EINVAL; +} + +int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) +{ + return _kvm_pending_timer(vcpu) || + kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT) & + (1 << INT_TI); +} + +int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu) +{ + int i; + + kvm_debug("vCPU Register Dump:\n"); + kvm_debug("\tpc = 0x%08lx\n", vcpu->arch.pc); + kvm_debug("\texceptions: %08lx\n", vcpu->arch.irq_pending); + + for (i = 0; i < 32; i += 4) { + kvm_debug("\tgpr%02d: %08lx %08lx %08lx %08lx\n", i, + vcpu->arch.gprs[i], + vcpu->arch.gprs[i + 1], + vcpu->arch.gprs[i + 2], vcpu->arch.gprs[i + 3]); + } + + kvm_debug("\tCRMOD: 0x%08lx, exst: 0x%08lx\n", + kvm_read_hw_gcsr(LOONGARCH_CSR_CRMD), + kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT)); + + kvm_debug("\tERA: 0x%08lx\n", kvm_read_hw_gcsr(LOONGARCH_CSR_ERA)); + + return 0; +} + +int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu, + struct kvm_mp_state *mp_state) +{ + *mp_state = vcpu->arch.mp_state; + + return 0; +} + +int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, + struct kvm_mp_state *mp_state) +{ + int ret = 0; + + switch (mp_state->mp_state) { + case KVM_MP_STATE_RUNNABLE: + vcpu->arch.mp_state = *mp_state; + break; + default: + ret = -EINVAL; + } + + return ret; +} + +int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, + struct kvm_guest_debug *dbg) +{ + return -EINVAL; +} + +/** + * kvm_migrate_count() - Migrate timer. + * @vcpu: Virtual CPU. + * + * Migrate hrtimer to the current CPU by cancelling and restarting it + * if it was running prior to being cancelled. + * + * Must be called when the vCPU is migrated to a different CPU to ensure that + * timer expiry during guest execution interrupts the guest and causes the + * interrupt to be delivered in a timely manner. 
+ */ +static void kvm_migrate_count(struct kvm_vcpu *vcpu) +{ + if (hrtimer_cancel(&vcpu->arch.swtimer)) + hrtimer_restart(&vcpu->arch.swtimer); +} + int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v) { unsigned long val; From patchwork Thu Aug 31 08:30:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137275 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp170864vqu; Thu, 31 Aug 2023 04:19:45 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHHjBjQAqqciI8io9WNXIH4DvQkIOLAQiSQnnj0Q9M73ZHa6WVXo4qrsDQA0lrYz1uG3LZn X-Received: by 2002:a17:90a:6945:b0:269:1d16:25fa with SMTP id j5-20020a17090a694500b002691d1625famr4762509pjm.12.1693480784673; Thu, 31 Aug 2023 04:19:44 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693480784; cv=none; d=google.com; s=arc-20160816; b=VC1gVyG++rWZdVbN8nmi1ArI0vFDioA43A2Ae1h6vhHjqzE2yx/PPvCOozCmGAqHOo h/dbfFurxQad3cLAhy3eJ9wr8As6r/SPUrQhbrpJRlROuOr9NNMxlvhY3QCxL4KXu6+T 0/Tr5vuWMV3rmA0n8+876LxjKaRMkjs17W6165kvoUJU082HPxUGij0I+qhLSL6lf9Zk JCHVTr0bdv5UuIUhxXVz+d9DTNQoqtpbWR1GMctfGKcI4/CS49kj0ACITLXcazCm95ba YEd1hYLy/sXKaXeiPb0fDIAs7f53+RRZxek7s1f0BxN6UhGXkP3MiH3sr+bVb36/Zr0d JmCA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=zq2bcUSkDj81G9YEafWHanHZEH+tyehRB4ER+Y51Sf8=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=JP/p093M7Y9sYfqny4jh4wOQtfhTU+UF9YS0/aoSML+yiVWz/aVfZpQeZOqSpXgths y3sVo+N1d7yQ2+i0NjW6PME8H52fNFMoJiIme78k+SZrNpaAw/dHrCwsUta06CRFo5dE dV/fHLEJCXtb4bLp7gON5QsR7t6a6oVs0EIi5FR6MFHQXW1/dtPgU9H20/SWm7XX6HuX M0AAHTDVXTHx8XiZaGB9e1Iu71EY6TGc+MXjH2YLvURwn+XaOhJHRR5SfRJqpMi9iQ7c XifMYk5fb5DKaEZlB+j8KL4d8AqzUN2KuoC/I56DyXjxpzvfEq5Jkc+7PYFBi5JvOD5X ZyBg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id cp19-20020a17090afb9300b002680abc3699si3543225pjb.115.2023.08.31.04.19.28; Thu, 31 Aug 2023 04:19:44 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345300AbjHaIba (ORCPT + 99 others); Thu, 31 Aug 2023 04:31:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32830 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343525AbjHaIbQ (ORCPT ); Thu, 31 Aug 2023 04:31:16 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 837F4E5F; Thu, 31 Aug 2023 01:30:40 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8DxxPCpT_Bkdl4dAA--.60811S3; Thu, 31 Aug 2023 16:30:33 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S16; Thu, 31 Aug 2023 16:30:32 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 14/30] LoongArch: KVM: Implement vcpu load and vcpu put operations Date: Thu, 31 Aug 2023 16:30:04 +0800 Message-Id: <20230831083020.2187109-15-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S16 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775743307272923618 X-GMAIL-MSGID: 1775743307272923618 Implement LoongArch vcpu load and vcpu put operations, including load csr value into hardware and save csr value into vcpu structure. Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/vcpu.c | 196 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 196 insertions(+) diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index f170dbf539..79e4e22773 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -639,6 +639,202 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) } } +static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu) +{ + struct kvm_context *context; + struct loongarch_csrs *csr = vcpu->arch.csr; + bool migrated, all; + + /* + * Have we migrated to a different CPU? + * If so, any old guest TLB state may be stale. 
+ */ + migrated = (vcpu->arch.last_sched_cpu != cpu); + + /* + * Was this the last vCPU to run on this CPU? + * If not, any old guest state from this vCPU will have been clobbered. + */ + context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu); + all = migrated || (context->last_vcpu != vcpu); + context->last_vcpu = vcpu; + + /* + * Restore timer state regardless + */ + kvm_restore_timer(vcpu); + + /* Control guest page CCA attribute */ + change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT); + /* Don't bother restoring registers multiple times unless necessary */ + if (!all) + return 0; + + write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset); + /* + * Restore guest CSR registers + */ + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CRMD); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PRMD); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EUEN); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_MISC); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ECFG); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ERA); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADV); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADI); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EENTRY); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ASID); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDL); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDH); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_RVACFG); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CPUID); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS0); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS1); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS2); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS3); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS4); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS5); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS6); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS7); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TMID); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CNTC); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL); + + /* restore Root.Guestexcept from unused Guest guestexcept register */ + write_csr_gintc(csr->csrs[LOONGARCH_CSR_GINTC]); + + /* + * We should clear linked load bit to break interrupted atomics. This + * prevents a SC on the next vCPU from succeeding by matching a LL on + * the previous vCPU. 
+ */ + if (vcpu->kvm->created_vcpus > 1) + set_gcsr_llbctl(CSR_LLBCTL_WCLLB); + + return 0; +} + +void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) +{ + unsigned long flags; + + local_irq_save(flags); + if (vcpu->arch.last_sched_cpu != cpu) { + kvm_debug("[%d->%d]KVM vCPU[%d] switch\n", + vcpu->arch.last_sched_cpu, cpu, vcpu->vcpu_id); + /* + * Migrate the timer interrupt to the current CPU so that it + * always interrupts the guest and synchronously triggers a + * guest timer interrupt. + */ + kvm_migrate_count(vcpu); + } + + /* restore guest state to registers */ + _kvm_vcpu_load(vcpu, cpu); + local_irq_restore(flags); +} + +static int _kvm_vcpu_put(struct kvm_vcpu *vcpu, int cpu) +{ + struct loongarch_csrs *csr = vcpu->arch.csr; + + kvm_lose_fpu(vcpu); + /* + * update csr state from hardware if software csr state is stale, + * most csr registers are kept unchanged during process context + * switch except csr registers like remaining timer tick value and + * injected interrupt state. + */ + if (!(vcpu->arch.aux_inuse & KVM_LARCH_CSR)) { + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CRMD); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRMD); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EUEN); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_MISC); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ECFG); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ERA); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADV); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADI); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EENTRY); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ASID); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDL); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDH); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_RVACFG); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CPUID); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG1); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG2); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG3); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS0); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS1); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS2); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS3); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS4); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS5); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS6); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS7); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TMID); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CNTC); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3); + vcpu->arch.aux_inuse |= KVM_LARCH_CSR; + } + /* save Root.Guestexcept in unused Guest guestexcept register */ + kvm_save_timer(vcpu); + csr->csrs[LOONGARCH_CSR_GINTC] = read_csr_gintc(); + return 0; +} + +void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) +{ + unsigned long flags; + int cpu; + + 
local_irq_save(flags); + cpu = smp_processor_id(); + vcpu->arch.last_sched_cpu = cpu; + + /* save guest state in registers */ + _kvm_vcpu_put(vcpu, cpu); + local_irq_restore(flags); +} + int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) { int r = -EINTR; From patchwork Thu Aug 31 08:30:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137327 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp388978vqu; Thu, 31 Aug 2023 10:24:54 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFeWGnZJSqG4RD3mVxm7+TZattJtU6NMTxZRuYGh7FIWIAjbXrylDyh3kwN0f2/rmjHqMpr X-Received: by 2002:a92:d90b:0:b0:34c:e5c2:918b with SMTP id s11-20020a92d90b000000b0034ce5c2918bmr153946iln.13.1693502694150; Thu, 31 Aug 2023 10:24:54 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693502694; cv=none; d=google.com; s=arc-20160816; b=Jy2M4A+QwKA5KoEhi31k8Q6bdzWK6kMpGKQEMsqUsVgAQdgwTKuZtl7SUycvhVQ87V 5CRbDc/csLDnjAOoQDWxGIAVvNq6WO3AQWFON4hsHYOUFSS8QkluA41UtnFPvaUFznog YABHGqY7N4IKmSlOjw+i/40lqda6EEEXdgnKmwj9pS2F+8YhcNQFxGG33TBkZZ25RWa0 MKtHK9ojfcq9ylrtH7TKkRgDLZz5GhSiwHB/E8+OYxOTzaxmBpmT2X65yMPOXjP9YdWq o7vwUsUUmylFdUHJmbnn7qeuKy4h98gNAhOcA8C0hhVuz6ILR55KW0uduMA0febBQy3L LrVA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=3zmQQG7VfEWBwNMHK0tSqS6shlFJngXdRue/sBCLOJI=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=XWcqm3z2YzRFsrej3MnLwRpHOTiRtNNILIrs6E7KOVy6TE1MSjaZ6PghGSAY4dHtKT fGZxoC0be+rDkMB9xeXz+o7Wen3uAF8XLUjILuwunpy6db7DWDdeXR/8AyicEEtEqt7G RtmREf3PhAqKL2qkj08ymFysxnLYgCoR/2tRbCAwizpy9LwDRnir/Cr7NvmTyJTwMmtA 9DXwa64+ZAe9kVRLwYSSIsHTCuUvifUCZh5kqes4NWuk7PnvIWPksyGo8qSAjRNJkiml aUSkhvRrFLvD7urrx1WeA56heJt/vKXwtwlX5cg2TunwiUlbn8HCed0BB47SxgwnETme z1XA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id 203-20020a6301d4000000b00565e386ff44si1478368pgb.702.2023.08.31.10.24.38; Thu, 31 Aug 2023 10:24:54 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242308AbjHaIbe (ORCPT + 99 others); Thu, 31 Aug 2023 04:31:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58406 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244373AbjHaIbQ (ORCPT ); Thu, 31 Aug 2023 04:31:16 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 795EFE5E; Thu, 31 Aug 2023 01:30:39 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8AxDOupT_Bkbl4dAA--.54712S3; Thu, 31 Aug 2023 16:30:33 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S17; Thu, 31 Aug 2023 16:30:32 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 15/30] LoongArch: KVM: Implement vcpu status description Date: Thu, 31 Aug 2023 16:30:05 +0800 Message-Id: <20230831083020.2187109-16-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S17 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775766281045942790 X-GMAIL-MSGID: 1775766281045942790 Implement LoongArch vcpu status description such as idle exits counter, signal exits counter, cpucfg exits counter, etc. 
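These counters are exported through KVM's generic binary stats interface rather than an arch-specific ioctl. A rough user-space sketch of reading the stats layout (helper name invented, error handling minimal):

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Fetch the per-vCPU stats fd and print how many descriptors (the generic
 * ones plus idle_exits, signal_exits, int_exits, cpucfg_exits) it carries.
 */
static void dump_vcpu_stats_header(int vcpu_fd)
{
	struct kvm_stats_header hdr;
	int stats_fd = ioctl(vcpu_fd, KVM_GET_STATS_FD, NULL);

	if (stats_fd < 0)
		return;

	if (pread(stats_fd, &hdr, sizeof(hdr), 0) == (ssize_t)sizeof(hdr))
		printf("%u stat descriptors, data at offset %u\n",
		       hdr.num_desc, hdr.data_offset);

	close(stats_fd);
}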
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/vcpu.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index 79e4e22773..60b2e584c6 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -13,6 +13,23 @@ #define CREATE_TRACE_POINTS #include "trace.h" +const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = { + KVM_GENERIC_VCPU_STATS(), + STATS_DESC_COUNTER(VCPU, idle_exits), + STATS_DESC_COUNTER(VCPU, signal_exits), + STATS_DESC_COUNTER(VCPU, int_exits), + STATS_DESC_COUNTER(VCPU, cpucfg_exits), +}; + +const struct kvm_stats_header kvm_vcpu_stats_header = { + .name_size = KVM_STATS_NAME_SIZE, + .num_desc = ARRAY_SIZE(kvm_vcpu_stats_desc), + .id_offset = sizeof(struct kvm_stats_header), + .desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE, + .data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE + + sizeof(kvm_vcpu_stats_desc), +}; + int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) { return !!(vcpu->arch.irq_pending) && From patchwork Thu Aug 31 08:30:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137251 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp111492vqu; Thu, 31 Aug 2023 02:02:58 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEvsLsQJ49AlNpV9P4yFF9z9RTIGH2Db40fawZgcvrDZgqjxoWN5Cf9TWbxDrfJOK2xKV1E X-Received: by 2002:a05:6a00:1913:b0:68a:49bc:e091 with SMTP id y19-20020a056a00191300b0068a49bce091mr5020164pfi.2.1693472577863; Thu, 31 Aug 2023 02:02:57 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693472577; cv=none; d=google.com; s=arc-20160816; b=qHA+lJtI5+REs2BS5pd8F/7oPUHO3P54lftVHgLoDV/ue048jS2joMXMRfyNWHs5jL lJlI5cLmUEZlI6qXttsfubwNkwYrequruGuhK0jiamTxZdqttBhDf2Vvb6EO2xZ3fVlz 0ueaq/FddpJSNo7BQ8pLSCckLLasCa6hZOvsmTFxpFBfjDAUvkcd/LuIvona198bpbxW IAWP8fMvthFyVsidcdNeinTYSy/uC7EZ7GfmDxKZox0yWvGSeW2AHMc+LVq5hR1Hqqdf lTs+rXpThw5QvSdSpqEnUd2ac17yld4amDtXWtiSVmp9NnGoliUyIjhm/nFIa1/uuj5v 07lw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=xpQJlKgpjoWeE0Xs5GyLpOIS18vq3JGVQj4s3mOIUlA=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=gf6QdxQPUXT4DeZXgHfCwk7LPFBByJN93i9UiWLG03AWL5kBhO7nyXimRD+tv3lEB3 ox602BM8nb6rTRqH85yHmIRbu7wqum9P1Bi7YMvAs6c8MmA8pYpB+WkPR5LIRD9lhp0S ld4sMJNTOCEymACCiRTO3iJLV4ZqO2I5/srqLCuKg/JPwfpI/EKtXxw7QhyYXVo09ZNA +yDy+DGRitLytP1ktCSVRtoyerrdaZSLSAop6mvmZMSp7boOeh6ZLFNx3pWkb+wVCaP7 zAORvWueVGb5EBncfNa+TPV6HueQl3gPcfxYS0MwjMOB+/XWhdLwYA+VbY8gT6o3aoSD GYIQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id p11-20020a056a000a0b00b0068c6710f269si926229pfh.183.2023.08.31.02.02.05; Thu, 31 Aug 2023 02:02:57 -0700 (PDT)
From: Tianrui Zhao
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function
Date: Thu, 31 Aug 2023 16:30:06 +0800
Message-Id: <20230831083020.2187109-17-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Implement the KVM VMID check and update functions. The VMID must be checked, and refreshed if it is stale, before the vCPU enters the guest.
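To make the generation check concrete, here is a small worked example, assuming a hypothetical 8-bit vpid_mask of 0xff purely for readability (the real mask width comes from hardware):

/*
 * Worked example (hypothetical vpid_mask == 0xff):
 *
 *   context->vpid_cache = 0x2fe   generation 0x200, id 0xfe
 *   vcpu->arch.vpid     = 0x1fe   generation 0x100, id 0xfe
 *
 *   ver = 0x1fe & ~0xff = 0x100
 *   old = 0x2fe & ~0xff = 0x200
 *
 * Since ver != old, the vCPU holds a VPID from an older generation, so
 * _kvm_check_vmid() below calls _kvm_update_vpid() and the vCPU gets the
 * fresh VPID 0x2ff.  When the id bits wrap to zero, the generation is
 * bumped and the whole TLB is flushed, which is the "start new vpid
 * cycle" branch in _kvm_update_vpid().
 */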
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/vmid.c | 66 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 arch/loongarch/kvm/vmid.c diff --git a/arch/loongarch/kvm/vmid.c b/arch/loongarch/kvm/vmid.c new file mode 100644 index 0000000000..fc25ddc3b7 --- /dev/null +++ b/arch/loongarch/kvm/vmid.c @@ -0,0 +1,66 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include "trace.h" + +static void _kvm_update_vpid(struct kvm_vcpu *vcpu, int cpu) +{ + struct kvm_context *context; + unsigned long vpid; + + context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu); + vpid = context->vpid_cache + 1; + if (!(vpid & vpid_mask)) { + /* finish round of 64 bit loop */ + if (unlikely(!vpid)) + vpid = vpid_mask + 1; + + /* vpid 0 reserved for root */ + ++vpid; + + /* start new vpid cycle */ + kvm_flush_tlb_all(); + } + + context->vpid_cache = vpid; + vcpu->arch.vpid = vpid; +} + +void _kvm_check_vmid(struct kvm_vcpu *vcpu) +{ + struct kvm_context *context; + bool migrated; + unsigned long ver, old, vpid; + int cpu; + + cpu = smp_processor_id(); + /* + * Are we entering guest context on a different CPU to last time? + * If so, the vCPU's guest TLB state on this CPU may be stale. + */ + context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu); + migrated = (vcpu->cpu != cpu); + + /* + * Check if our vpid is of an older version + * + * We also discard the stored vpid if we've executed on + * another CPU, as the guest mappings may have changed without + * hypervisor knowledge. + */ + ver = vcpu->arch.vpid & ~vpid_mask; + old = context->vpid_cache & ~vpid_mask; + if (migrated || (ver != old)) { + _kvm_update_vpid(vcpu, cpu); + trace_kvm_vpid_change(vcpu, vcpu->arch.vpid); + vcpu->cpu = cpu; + } + + /* Restore GSTAT(0x50).vpid */ + vpid = (vcpu->arch.vpid & vpid_mask) + << CSR_GSTAT_GID_SHIFT; + change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid); +} From patchwork Thu Aug 31 08:30:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137264 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp141905vqu; Thu, 31 Aug 2023 03:16:15 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHwRV1H1T5q9ydTe4A69lsl+Dl+jAraiTJ/ecbigrNBk5pMwWf3mQxG8Yr/W4mw7xewzV7F X-Received: by 2002:a05:6808:1789:b0:3a7:8a1:9cdd with SMTP id bg9-20020a056808178900b003a708a19cddmr6096997oib.28.1693476975594; Thu, 31 Aug 2023 03:16:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693476975; cv=none; d=google.com; s=arc-20160816; b=fIgs0sSvI8CYbI1fKbExfDJEyGbJWoZQuBCbXbbWZ5xt+XsWsa//1eC+GmhRBAac+j aZ7a2IEbyHqM7Q/6JH+s1vxf8xVAO+AGb0tgFD3lDEPBhHT7JvxkwI3o6y0pF1JmBaaI AfLBP4dluVmpCp/M0OPxuFBqzIWLerzAS2D8M5Fn35Ooc6hPCbtw3zHmBiGBwOr8rbdf +sWOms7VYC0IkgfvnErE5HACZL+8XVC5V+OMqu4Q14kXkhWwClPcRK6CCTmEtp45MDJD vC2ChLuoXBc7s4UCTUI2GL5sGDOgHUj0MPbuuxgEfRcjh0tO66pIbLMYbumTftL+pw9w fWzw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=Igs38jGXv7cxZEa3qw0nl0IVlkRZD5BQ+RfjJtSRDg8=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=ZyeJP426+I6vTdeFDlkZfXYdtiJ1kzs8T00pbTwaEj4JoCahQXSwuqcl0SuBLo90AP /jhfbhQUK8usovCuJGn2rXi4J8cFmERB4tzCvCBd/gegoD9vRQTn8OzFWoag34JTqXc+ 
From: Tianrui Zhao
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH v20 18/30] LoongArch: KVM: Implement vcpu timer operations
Date: Thu, 31 Aug 2023 16:30:08 +0800
Message-Id: <20230831083020.2187109-19-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Implement the LoongArch vCPU timer operations: initializing, acquiring, saving and restoring the KVM timer. While the vCPU is out of guest mode, a KVM soft timer (hrtimer) emulates the hardware timer; if it expires, the vCPU timer interrupt is queued and handled on the next guest entry.
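A worked example of the tick/nanosecond scaling used by this patch, with illustrative numbers and assuming MNSEC_PER_SEC is NSEC_PER_SEC scaled by the same >> 20 shift that produces timer_mhz:

/*
 * With a 100 MHz constant timer:
 *
 *   timer_mhz     = 100000000 >> 20  = 95
 *   MNSEC_PER_SEC = 1000000000 >> 20 = 953
 *
 *   tick_to_ns(vcpu, 1000) = div_u64(1000 * 953, 95) = 10031 ns
 *
 * against an exact value of 10000 ns.  Scaling both operands by 2^20
 * keeps the 64-bit intermediate products small at the cost of a fraction
 * of a percent of precision.
 */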
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/timer.c | 200 +++++++++++++++++++++++++++++++++++++ 1 file changed, 200 insertions(+) create mode 100644 arch/loongarch/kvm/timer.c diff --git a/arch/loongarch/kvm/timer.c b/arch/loongarch/kvm/timer.c new file mode 100644 index 0000000000..df56d6fa81 --- /dev/null +++ b/arch/loongarch/kvm/timer.c @@ -0,0 +1,200 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include +#include + +/* + * ktime_to_tick() - Scale ktime_t to timer tick value. + */ +static inline u64 ktime_to_tick(struct kvm_vcpu *vcpu, ktime_t now) +{ + u64 delta; + + delta = ktime_to_ns(now); + return div_u64(delta * vcpu->arch.timer_mhz, MNSEC_PER_SEC); +} + +static inline u64 tick_to_ns(struct kvm_vcpu *vcpu, u64 tick) +{ + return div_u64(tick * MNSEC_PER_SEC, vcpu->arch.timer_mhz); +} + +/* + * Push timer forward on timeout. + * Handle an hrtimer event by push the hrtimer forward a period. + */ +static enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu) +{ + unsigned long cfg, period; + + /* Add periodic tick to current expire time */ + cfg = kvm_read_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG); + if (cfg & CSR_TCFG_PERIOD) { + period = tick_to_ns(vcpu, cfg & CSR_TCFG_VAL); + hrtimer_add_expires_ns(&vcpu->arch.swtimer, period); + return HRTIMER_RESTART; + } else + return HRTIMER_NORESTART; +} + +/* low level hrtimer wake routine */ +enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer) +{ + struct kvm_vcpu *vcpu; + + vcpu = container_of(timer, struct kvm_vcpu, arch.swtimer); + _kvm_queue_irq(vcpu, INT_TI); + rcuwait_wake_up(&vcpu->wait); + return kvm_count_timeout(vcpu); +} + +/* + * Initialise the timer to the specified frequency, zero it + */ +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long timer_hz) +{ + vcpu->arch.timer_mhz = timer_hz >> 20; + + /* Starting at 0 */ + kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TVAL, 0); +} + +/* + * Restore soft timer state from saved context. 
+ */ +void kvm_restore_timer(struct kvm_vcpu *vcpu) +{ + struct loongarch_csrs *csr = vcpu->arch.csr; + ktime_t expire, now; + unsigned long cfg, delta, period; + + /* + * Set guest stable timer cfg csr + */ + cfg = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ESTAT); + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TCFG); + if (!(cfg & CSR_TCFG_EN)) { + /* guest timer is disabled, just restore timer registers */ + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TVAL); + return; + } + + /* + * set remainder tick value if not expired + */ + now = ktime_get(); + expire = vcpu->arch.expire; + if (ktime_before(now, expire)) + delta = ktime_to_tick(vcpu, ktime_sub(expire, now)); + else { + if (cfg & CSR_TCFG_PERIOD) { + period = cfg & CSR_TCFG_VAL; + delta = ktime_to_tick(vcpu, ktime_sub(now, expire)); + delta = period - (delta % period); + } else + delta = 0; + /* + * inject timer here though sw timer should inject timer + * interrupt async already, since sw timer may be cancelled + * during injecting intr async in function kvm_acquire_timer + */ + _kvm_queue_irq(vcpu, INT_TI); + } + + write_gcsr_timertick(delta); +} + +/* + * + * Restore hard timer state and enable guest to access timer registers + * without trap + * + * it is called with irq disabled + */ +void kvm_acquire_timer(struct kvm_vcpu *vcpu) +{ + unsigned long cfg; + + cfg = read_csr_gcfg(); + if (!(cfg & CSR_GCFG_TIT)) + return; + + /* enable guest access to hard timer */ + write_csr_gcfg(cfg & ~CSR_GCFG_TIT); + + /* + * Freeze the soft-timer and sync the guest stable timer with it. We do + * this with interrupts disabled to avoid latency. + */ + hrtimer_cancel(&vcpu->arch.swtimer); +} + +/* + * Save guest timer state and switch to software emulation of guest + * timer. The hard timer must already be in use, so preemption should be + * disabled. + */ +static void _kvm_save_timer(struct kvm_vcpu *vcpu) +{ + unsigned long ticks, delta; + ktime_t expire; + struct loongarch_csrs *csr = vcpu->arch.csr; + + ticks = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TVAL); + delta = tick_to_ns(vcpu, ticks); + expire = ktime_add_ns(ktime_get(), delta); + vcpu->arch.expire = expire; + if (ticks) { + /* + * Update hrtimer to use new timeout + * HRTIMER_MODE_PINNED is suggested since vcpu may run in + * the same physical cpu in next time + */ + hrtimer_cancel(&vcpu->arch.swtimer); + hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED); + } else + /* + * inject timer interrupt so that hall polling can dectect + * and exit + */ + _kvm_queue_irq(vcpu, INT_TI); +} + +/* + * Save guest timer state and switch to soft guest timer if hard timer was in + * use. 
+ */ +void kvm_save_timer(struct kvm_vcpu *vcpu) +{ + struct loongarch_csrs *csr = vcpu->arch.csr; + unsigned long cfg; + + preempt_disable(); + cfg = read_csr_gcfg(); + if (!(cfg & CSR_GCFG_TIT)) { + /* disable guest use of hard timer */ + write_csr_gcfg(cfg | CSR_GCFG_TIT); + + /* save hard timer state */ + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TCFG); + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TVAL); + if (kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG) & CSR_TCFG_EN) + _kvm_save_timer(vcpu); + } + + /* save timer-related state to vCPU context */ + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT); + preempt_enable(); +} + +void kvm_reset_timer(struct kvm_vcpu *vcpu) +{ + write_gcsr_timercfg(0); + kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG, 0); + hrtimer_cancel(&vcpu->arch.swtimer); +} From patchwork Thu Aug 31 08:30:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137260 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp127799vqu; Thu, 31 Aug 2023 02:43:52 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEXHn78ACAEhl00W/Ykk3bSyD2DktZVH5NlsWaFukkR9BHhQ3+6ApcvSguEDHop2YFf2Mmt X-Received: by 2002:a17:907:7283:b0:9a5:83f0:9bc5 with SMTP id dt3-20020a170907728300b009a583f09bc5mr2519558ejc.18.1693475032500; Thu, 31 Aug 2023 02:43:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693475032; cv=none; d=google.com; s=arc-20160816; b=qj0XhTvurZgi3nUXYoX/A10CKAzZe5kWSXc44kmmo5e6yjzL/O6cr7wD1wf5o/QJ0S SNJ8FeI9NTua8oei7G/Z7ubGvRzeCPEZrUCljup4PxTyHAlhD6dxsS5VhlXBEhm17xKN fxpBCSjCYDcwNQpKni8OJhREegX4jhRuq172sWfhQeNb/MvRk/sSIcsz0uNoh6b+jcYT HYeEfeUIx5Ce6UFDe6tAUwqdUOBVFNXx+JvjseZvTggOQq+Ofs6KkfnXNA/lmXozfe4I Qk8shQGXTk4LP36EPI0NjDgxLUUymiEx+JLOWQttgkNz6KLCQnOVxe4D6vmCWoMB34cV a1cg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=rN2w+hjLdqz2PXWM49KYj3SFxDxsHZT4sT+6fvw3gmU=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=1Dt7Ds95XGblGZqakBvWGwFSJh5YDh8EpzaswU8al2BJdA1JyhR0cZThzgDSiILt/z FBVoE9msz73Q0ySvYMSdhq9IDiWWxkKKMNVqLtgWgjf+1N+qemvxE7H+tWOmIFkeRAob 602q+lHgpKS19Ywc/q9pJqJcWFNyicYTDcRgtq9aMtfd8aFSt/U6Xins7gSart73cg0A yEVPxUthOHuzQCI/Jb2N3E0Urx7Ue1GFXn07tL3t15z17piH3ZP06iGPUHaH/lqsRRs5 jnMn6C9Zzifwosa7CjwL58m/bKcn1TI2b705X9qqeJx4fSECc2agpuPfV8jw61ZwRBjx N8Ww== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id r11-20020aa7d14b000000b00527231b121fsi838019edo.31.2023.08.31.02.43.25; Thu, 31 Aug 2023 02:43:52 -0700 (PDT)
From: Tianrui Zhao
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH v20 19/30] LoongArch: KVM: Implement kvm mmu operations
Date: Thu, 31 Aug 2023 16:30:09 +0800
Message-Id: <20230831083020.2187109-20-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Implement the LoongArch KVM MMU, which maps guest physical addresses (GPA) to host physical addresses (HPA) when the guest exits because of an address translation exception. This patch implements allocating the GPA page table, looking up GPAs in it, and flushing guest GPA ranges from it.
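The diff below introduces a small generic walker (kvm_ptw_ctx plus kvm_ptw_pgd()) that the flush, clean and age operations all share. As an illustration, a hypothetical read-only operation could be plugged in as follows; kvm_test_young_pte() and kvm_test_young_range() are made-up names, not part of the patch:

static int kvm_test_young_pte(pte_t *pte)
{
	/* Inspect only: report whether this mapping has been accessed. */
	return pte_young(*pte);
}

static bool kvm_test_young_range(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
{
	struct kvm_ptw_ctx ctx = {
		.ops		= kvm_test_young_pte,
		.need_flush	= 0,	/* read-only walk, keep the page tables */
	};

	/* A non-zero result means at least one present PTE was young. */
	return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT,
			   end_gfn << PAGE_SHIFT, &ctx);
}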
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/mmu.c | 678 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 678 insertions(+) create mode 100644 arch/loongarch/kvm/mmu.c diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c new file mode 100644 index 0000000000..4bb20393f4 --- /dev/null +++ b/arch/loongarch/kvm/mmu.c @@ -0,0 +1,678 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include +#include +#include +#include +#include +#include + +/* + * KVM_MMU_CACHE_MIN_PAGES is the number of GPA page table translation levels + * for which pages need to be cached. + */ +#define KVM_MMU_CACHE_MIN_PAGES (CONFIG_PGTABLE_LEVELS - 1) + +static inline void kvm_set_pte(pte_t *ptep, pte_t pteval) +{ + *ptep = pteval; +} + +/** + * kvm_pgd_alloc() - Allocate and initialise a KVM GPA page directory. + * + * Allocate a blank KVM GPA page directory (PGD) for representing guest physical + * to host physical page mappings. + * + * Returns: Pointer to new KVM GPA page directory. + * NULL on allocation failure. + */ +pgd_t *kvm_pgd_alloc(void) +{ + pgd_t *pgd; + + pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 0); + if (pgd) + pgd_init((void *)pgd); + + return pgd; +} + +/* + * Caller must hold kvm->mm_lock + * + * Walk the page tables of kvm to find the PTE corresponding to the + * address @addr. If page tables don't exist for @addr, they will be created + * from the MMU cache if @cache is not NULL. + */ +static pte_t *kvm_populate_gpa(struct kvm *kvm, + struct kvm_mmu_memory_cache *cache, + unsigned long addr) +{ + pgd_t *pgd; + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + + pgd = kvm->arch.pgd + pgd_index(addr); + p4d = p4d_offset(pgd, addr); + if (p4d_none(*p4d)) { + if (!cache) + return NULL; + + pud = kvm_mmu_memory_cache_alloc(cache); + pud_init(pud); + p4d_populate(NULL, p4d, pud); + } + + pud = pud_offset(p4d, addr); + if (pud_none(*pud)) { + if (!cache) + return NULL; + pmd = kvm_mmu_memory_cache_alloc(cache); + pmd_init(pmd); + pud_populate(NULL, pud, pmd); + } + + pmd = pmd_offset(pud, addr); + if (pmd_none(*pmd)) { + pte_t *pte; + + if (!cache) + return NULL; + pte = kvm_mmu_memory_cache_alloc(cache); + clear_page(pte); + pmd_populate_kernel(NULL, pmd, pte); + } + + return pte_offset_kernel(pmd, addr); +} + +typedef int (*kvm_pte_ops)(pte_t *pte); + +struct kvm_ptw_ctx { + kvm_pte_ops ops; + int need_flush; +}; + +static int kvm_ptw_pte(pmd_t *pmd, unsigned long addr, unsigned long end, + struct kvm_ptw_ctx *context) +{ + pte_t *pte; + unsigned long next, start; + int ret; + + ret = 0; + start = addr; + pte = pte_offset_kernel(pmd, addr); + do { + next = addr + PAGE_SIZE; + if (!pte_present(*pte)) + continue; + + ret |= context->ops(pte); + } while (pte++, addr = next, addr != end); + + if (context->need_flush && (start + PMD_SIZE == end)) { + pte = pte_offset_kernel(pmd, 0); + pmd_clear(pmd); + free_page((unsigned long)pte); + } + + return ret; +} + +static int kvm_ptw_pmd(pud_t *pud, unsigned long addr, unsigned long end, + struct kvm_ptw_ctx *context) +{ + pmd_t *pmd; + unsigned long next, start; + int ret; + + ret = 0; + start = addr; + pmd = pmd_offset(pud, addr); + do { + next = pmd_addr_end(addr, end); + if (!pmd_present(*pmd)) + continue; + + ret |= kvm_ptw_pte(pmd, addr, next, context); + } while (pmd++, addr = next, addr != end); + +#ifndef __PAGETABLE_PMD_FOLDED + if (context->need_flush && (start + PUD_SIZE == end)) { + pmd = pmd_offset(pud, 0); + pud_clear(pud); + 
free_page((unsigned long)pmd); + } +#endif + + return ret; +} + +static int kvm_ptw_pud(pgd_t *pgd, unsigned long addr, unsigned long end, + struct kvm_ptw_ctx *context) +{ + p4d_t *p4d; + pud_t *pud; + int ret = 0; + unsigned long next; +#ifndef __PAGETABLE_PUD_FOLDED + unsigned long start = addr; +#endif + + p4d = p4d_offset(pgd, addr); + pud = pud_offset(p4d, addr); + do { + next = pud_addr_end(addr, end); + if (!pud_present(*pud)) + continue; + + ret |= kvm_ptw_pmd(pud, addr, next, context); + } while (pud++, addr = next, addr != end); + +#ifndef __PAGETABLE_PUD_FOLDED + if (context->need_flush && (start + PGDIR_SIZE == end)) { + pud = pud_offset(p4d, 0); + p4d_clear(p4d); + free_page((unsigned long)pud); + } +#endif + + return ret; +} + +static int kvm_ptw_pgd(pgd_t *pgd, unsigned long addr, unsigned long end, + struct kvm_ptw_ctx *context) +{ + unsigned long next; + int ret; + + ret = 0; + if (addr > end - 1) + return ret; + pgd = pgd + pgd_index(addr); + do { + next = pgd_addr_end(addr, end); + if (!pgd_present(*pgd)) + continue; + + ret |= kvm_ptw_pud(pgd, addr, next, context); + } while (pgd++, addr = next, addr != end); + + return ret; +} + +/* + * clear pte entry + */ +static int kvm_flush_pte(pte_t *pte) +{ + kvm_set_pte(pte, __pte(0)); + return 1; +} + +/** + * kvm_flush_range() - Flush a range of guest physical addresses. + * @kvm: KVM pointer. + * @start_gfn: Guest frame number of first page in GPA range to flush. + * @end_gfn: Guest frame number of last page in GPA range to flush. + * + * Flushes a range of GPA mappings from the GPA page tables. + * + * The caller must hold the @kvm->mmu_lock spinlock. + * + * Returns: Whether its safe to remove the top level page directory because + * all lower levels have been removed. + */ +static bool kvm_flush_range(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn) +{ + struct kvm_ptw_ctx ctx; + + ctx.ops = kvm_flush_pte; + ctx.need_flush = 1; + + return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT, + end_gfn << PAGE_SHIFT, &ctx); +} + +/* + * kvm_mkclean_pte + * Mark a range of guest physical address space clean (writes fault) in the VM's + * GPA page table to allow dirty page tracking. + */ +static int kvm_mkclean_pte(pte_t *pte) +{ + pte_t val; + + val = *pte; + if (pte_dirty(val)) { + *pte = pte_mkclean(val); + return 1; + } + return 0; +} + +/* + * kvm_mkclean_gpa_pt() - Make a range of guest physical addresses clean. + * @kvm: KVM pointer. + * @start_gfn: Guest frame number of first page in GPA range to flush. + * @end_gfn: Guest frame number of last page in GPA range to flush. + * + * Make a range of GPA mappings clean so that guest writes will fault and + * trigger dirty page logging. + * + * The caller must hold the @kvm->mmu_lock spinlock. + * + * Returns: Whether any GPA mappings were modified, which would require + * derived mappings (GVA page tables & TLB enties) to be + * invalidated. + */ +static int kvm_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn) +{ + struct kvm_ptw_ctx ctx; + + ctx.ops = kvm_mkclean_pte; + ctx.need_flush = 0; + return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT, + end_gfn << PAGE_SHIFT, &ctx); +} + +/* + * kvm_arch_mmu_enable_log_dirty_pt_masked() - write protect dirty pages + * @kvm: The KVM pointer + * @slot: The memory slot associated with mask + * @gfn_offset: The gfn offset in memory slot + * @mask: The mask of dirty pages at offset 'gfn_offset' in this memory + * slot to be write protected + * + * Walks bits set in mask write protects the associated pte's. 
Caller must + * acquire @kvm->mmu_lock. + */ +void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, + struct kvm_memory_slot *slot, + gfn_t gfn_offset, unsigned long mask) +{ + gfn_t base_gfn = slot->base_gfn + gfn_offset; + gfn_t start = base_gfn + __ffs(mask); + gfn_t end = base_gfn + __fls(mask) + 1; + + kvm_mkclean_gpa_pt(kvm, start, end); +} + +void kvm_arch_commit_memory_region(struct kvm *kvm, + struct kvm_memory_slot *old, + const struct kvm_memory_slot *new, + enum kvm_mr_change change) +{ + int needs_flush; + + /* + * If dirty page logging is enabled, write protect all pages in the slot + * ready for dirty logging. + * + * There is no need to do this in any of the following cases: + * CREATE: No dirty mappings will already exist. + * MOVE/DELETE: The old mappings will already have been cleaned up by + * kvm_arch_flush_shadow_memslot() + */ + if (change == KVM_MR_FLAGS_ONLY && + (!(old->flags & KVM_MEM_LOG_DIRTY_PAGES) && + new->flags & KVM_MEM_LOG_DIRTY_PAGES)) { + spin_lock(&kvm->mmu_lock); + /* Write protect GPA page table entries */ + needs_flush = kvm_mkclean_gpa_pt(kvm, new->base_gfn, + new->base_gfn + new->npages); + if (needs_flush) + kvm_flush_remote_tlbs(kvm); + spin_unlock(&kvm->mmu_lock); + } +} + +void kvm_arch_flush_shadow_all(struct kvm *kvm) +{ + /* Flush whole GPA */ + kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT); + /* Flush vpid for each vCPU individually */ + kvm_flush_remote_tlbs(kvm); +} + +void kvm_arch_flush_shadow_memslot(struct kvm *kvm, + struct kvm_memory_slot *slot) +{ + int ret; + + /* + * The slot has been made invalid (ready for moving or deletion), so we + * need to ensure that it can no longer be accessed by any guest vCPUs. + */ + spin_lock(&kvm->mmu_lock); + /* Flush slot from GPA */ + ret = kvm_flush_range(kvm, slot->base_gfn, + slot->base_gfn + slot->npages); + /* Let implementation do the rest */ + if (ret) + kvm_flush_remote_tlbs(kvm); + spin_unlock(&kvm->mmu_lock); +} + +void _kvm_destroy_mm(struct kvm *kvm) +{ + /* It should always be safe to remove after flushing the whole range */ + kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT); + pgd_free(NULL, kvm->arch.pgd); + kvm->arch.pgd = NULL; +} + +/* + * Mark a range of guest physical address space old (all accesses fault) in the + * VM's GPA page table to allow detection of commonly used pages. 
+ */ +static int kvm_mkold_pte(pte_t *pte) +{ + pte_t val; + + val = *pte; + if (pte_young(val)) { + *pte = pte_mkold(val); + return 1; + } + return 0; +} + +bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) +{ + return kvm_flush_range(kvm, range->start, range->end); +} + +bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) +{ + gpa_t gpa = range->start << PAGE_SHIFT; + pte_t hva_pte = range->pte; + pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa); + pte_t old_pte; + + if (!ptep) + return false; + + /* Mapping may need adjusting depending on memslot flags */ + old_pte = *ptep; + if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte)) + hva_pte = pte_mkclean(hva_pte); + else if (range->slot->flags & KVM_MEM_READONLY) + hva_pte = pte_wrprotect(hva_pte); + + kvm_set_pte(ptep, hva_pte); + + /* Replacing an absent or old page doesn't need flushes */ + if (!pte_present(old_pte) || !pte_young(old_pte)) + return false; + + /* Pages swapped, aged, moved, or cleaned require flushes */ + return !pte_present(hva_pte) || + !pte_young(hva_pte) || + pte_pfn(old_pte) != pte_pfn(hva_pte) || + (pte_dirty(old_pte) && !pte_dirty(hva_pte)); +} + +bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) +{ + struct kvm_ptw_ctx ctx; + + ctx.ops = kvm_mkold_pte; + ctx.need_flush = 0; + return kvm_ptw_pgd(kvm->arch.pgd, range->start << PAGE_SHIFT, + range->end << PAGE_SHIFT, &ctx); +} + +bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) +{ + gpa_t gpa = range->start << PAGE_SHIFT; + pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa); + + if (ptep && pte_present(*ptep) && pte_young(*ptep)) + return true; + + return false; +} + +/** + * kvm_map_page_fast() - Fast path GPA fault handler. + * @vcpu: vCPU pointer. + * @gpa: Guest physical address of fault. + * @write: Whether the fault was due to a write. + * + * Perform fast path GPA fault handling, doing all that can be done without + * calling into KVM. This handles marking old pages young (for idle page + * tracking), and dirtying of clean pages (for dirty page logging). + * + * Returns: 0 on success, in which case we can update derived mappings and + * resume guest execution. + * -EFAULT on failure due to absent GPA mapping or write to + * read-only page, in which case KVM must be consulted. + */ +static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, + bool write) +{ + struct kvm *kvm = vcpu->kvm; + gfn_t gfn = gpa >> PAGE_SHIFT; + pte_t *ptep; + kvm_pfn_t pfn = 0; + bool pfn_valid = false, pfn_dirty = false; + int ret = 0; + + spin_lock(&kvm->mmu_lock); + + /* Fast path - just check GPA page table for an existing entry */ + ptep = kvm_populate_gpa(kvm, NULL, gpa); + if (!ptep || !pte_present(*ptep)) { + ret = -EFAULT; + goto out; + } + + /* Track access to pages marked old */ + if (!pte_young(*ptep)) { + kvm_set_pte(ptep, pte_mkyoung(*ptep)); + pfn = pte_pfn(*ptep); + pfn_valid = true; + /* call kvm_set_pfn_accessed() after unlock */ + } + if (write && !pte_dirty(*ptep)) { + if (!pte_write(*ptep)) { + ret = -EFAULT; + goto out; + } + + /* Track dirtying of writeable pages */ + kvm_set_pte(ptep, pte_mkdirty(*ptep)); + pfn = pte_pfn(*ptep); + pfn_dirty = true; + } + +out: + spin_unlock(&kvm->mmu_lock); + if (pfn_valid) + kvm_set_pfn_accessed(pfn); + if (pfn_dirty) { + mark_page_dirty(kvm, gfn); + kvm_set_pfn_dirty(pfn); + } + return ret; +} + +/** + * kvm_map_page() - Map a guest physical page. + * @vcpu: vCPU pointer. + * @gpa: Guest physical address of fault. 
+ * @write: Whether the fault was due to a write. + * + * Handle GPA faults by creating a new GPA mapping (or updating an existing + * one). + * + * This takes care of marking pages young or dirty (idle/dirty page tracking), + * asking KVM for the corresponding PFN, and creating a mapping in the GPA page + * tables. Derived mappings (GVA page tables and TLBs) must be handled by the + * caller. + * + * Returns: 0 on success + * -EFAULT if there is no memory region at @gpa or a write was + * attempted to a read-only memory region. This is usually handled + * as an MMIO access. + */ +static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write) +{ + bool writeable; + int srcu_idx, err = 0, retry_no = 0; + unsigned long hva; + unsigned long mmu_seq; + unsigned long prot_bits; + pte_t *ptep, new_pte; + kvm_pfn_t pfn; + gfn_t gfn = gpa >> PAGE_SHIFT; + struct vm_area_struct *vma; + struct kvm *kvm = vcpu->kvm; + struct kvm_memory_slot *memslot; + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache; + + /* Try the fast path to handle old / clean pages */ + srcu_idx = srcu_read_lock(&kvm->srcu); + err = kvm_map_page_fast(vcpu, gpa, write); + if (!err) + goto out; + + memslot = gfn_to_memslot(kvm, gfn); + hva = gfn_to_hva_memslot_prot(memslot, gfn, &writeable); + if (kvm_is_error_hva(hva) || (write && !writeable)) + goto out; + + mmap_read_lock(current->mm); + vma = find_vma_intersection(current->mm, hva, hva + 1); + if (unlikely(!vma)) { + kvm_err("Failed to find VMA for hva 0x%lx\n", hva); + mmap_read_unlock(current->mm); + err = -EFAULT; + goto out; + } + mmap_read_unlock(current->mm); + + /* We need a minimum of cached pages ready for page table creation */ + err = kvm_mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES); + if (err) + goto out; + +retry: + /* + * Used to check for invalidations in progress, of the pfn that is + * returned by pfn_to_pfn_prot below. + */ + mmu_seq = kvm->mmu_invalidate_seq; + /* + * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads in + * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't + * risk the page we get a reference to getting unmapped before we have a + * chance to grab the mmu_lock without mmu_invalidate_retry() noticing. + * + * This smp_rmb() pairs with the effective smp_wmb() of the combination + * of the pte_unmap_unlock() after the PTE is zapped, and the + * spin_lock() in kvm_mmu_invalidate_invalidate_() before + * mmu_invalidate_seq is incremented. + */ + smp_rmb(); + + /* Slow path - ask KVM core whether we can access this GPA */ + pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable); + if (is_error_noslot_pfn(pfn)) { + err = -EFAULT; + goto out; + } + + /* Check if an invalidation has taken place since we got pfn */ + if (mmu_invalidate_retry(kvm, mmu_seq)) { + /* + * This can happen when mappings are changed asynchronously, but + * also synchronously if a COW is triggered by + * gfn_to_pfn_prot(). + */ + kvm_set_pfn_accessed(pfn); + kvm_release_pfn_clean(pfn); + if (retry_no > 100) { + retry_no = 0; + schedule(); + } + retry_no++; + goto retry; + } + + /* + * For emulated devices such virtio device, actual cache attribute is + * determined by physical machine. 
+ * For pass through physical device, it should be uncachable + */ + prot_bits = _PAGE_PRESENT | __READABLE; + if (vma->vm_flags & (VM_IO | VM_PFNMAP)) + prot_bits |= _CACHE_SUC; + else + prot_bits |= _CACHE_CC; + + if (writeable) { + prot_bits |= _PAGE_WRITE; + if (write) + prot_bits |= __WRITEABLE; + } + + /* Ensure page tables are allocated */ + spin_lock(&kvm->mmu_lock); + ptep = kvm_populate_gpa(kvm, memcache, gpa); + new_pte = pfn_pte(pfn, __pgprot(prot_bits)); + kvm_set_pte(ptep, new_pte); + + err = 0; + spin_unlock(&kvm->mmu_lock); + + if (prot_bits & _PAGE_DIRTY) { + mark_page_dirty(kvm, gfn); + kvm_set_pfn_dirty(pfn); + } + + kvm_set_pfn_accessed(pfn); + kvm_release_pfn_clean(pfn); +out: + srcu_read_unlock(&kvm->srcu, srcu_idx); + return err; +} + +int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write) +{ + int ret; + + ret = kvm_map_page(vcpu, gpa, write); + if (ret) + return ret; + + /* Invalidate this entry in the TLB */ + return kvm_flush_tlb_gpa(vcpu, gpa); +} + +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot) +{ + +} + +int kvm_arch_prepare_memory_region(struct kvm *kvm, + const struct kvm_memory_slot *old, + struct kvm_memory_slot *new, + enum kvm_mr_change change) +{ + return 0; +} + +void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, + const struct kvm_memory_slot *memslot) +{ + kvm_flush_remote_tlbs(kvm); +} From patchwork Thu Aug 31 08:30:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137268 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp153762vqu; Thu, 31 Aug 2023 03:45:34 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHW5NRq/6Trx8f/nziXgBPcppAKUGkUILaUU1FMW8NlJQHSPwscYfBfVjL2MwtaFkU9HzHh X-Received: by 2002:a17:906:73c4:b0:99b:ead0:2733 with SMTP id n4-20020a17090673c400b0099bead02733mr3377528ejl.72.1693478734418; Thu, 31 Aug 2023 03:45:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693478734; cv=none; d=google.com; s=arc-20160816; b=qVYhkhSIsnDdXigVUK7wTehyDSvwinhPni73S6Hh5f9uTuxx/RzR9nxnegBj0O3sTx /8YFhi5Dy6MecDY6oz7Cx9QPaaOIDdzRnC8nS8ml0WoO5SsJPwKCzmX9quCbEqFxfkze lvkOhPwPCznokAZGKePF+uS68hW/808JQVKY1fk67RHMVVaH/fxx0bV+zq7NelADlJLW +iwSbf8Ea8PZhvQ1RFZ75+JHdBVNg88G3rugGbGSnDffO2wvmIM0Ql1wncBJNYjyYLw7 QManRW1FZq86fmQlrkKtz5v0Knz7BPRANUdZ1fP5Jooc+kIkHseFgf5lr3kDhAqFuA9R nbjA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=Mx9BEFT9skhuUiA7KEEftuqILAQKiJDjEKLkp9xXGcg=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=EJ1ZpfH2pWx5E2rY/Z5MBdTH+awmzqRW3L0pbuTuf6lrMP9qlpSqvSH7u9y7uZtKv8 C/ca8fDO8nAcYzwBwa/yC9/b7Zogkp7D+m/P0xWoha0RmFamhDdI+H9GhcNYyzmVOtHB WGY7gT/bkBV5/Wwax8fi+HlQJ/Cxy71IZ5mQtEJzvOMTwBtcxJ6/aByjFkgze1VhnNNF KFhvAzUcxxNQ0iFgWlSU3ttaWDWrlvEqJPNJuQaSGfarZVDe86qS+lnoUVSwY5Nm59J1 GkpSZJukrGOFYeqO8FOXRAhc4DMCebcfTQsBsy2rSKdYkl98K4gRfNeErvmp4G/02z+w yNWQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id gz23-20020a170906f2d700b009885c5f1d7asi818452ejb.319.2023.08.31.03.45.02; Thu, 31 Aug 2023 03:45:34 -0700 (PDT)
From: Tianrui Zhao
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH v20 20/30] LoongArch: KVM: Implement handle csr exception
Date: Thu, 31 Aug 2023 16:30:10 +0800
Message-Id: <20230831083020.2187109-21-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Implement handling of LoongArch vCPU exits caused by guest CSR reads and writes, using the software CSR structure to emulate the registers.
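A worked decode of the dispatch this patch implements, using an arbitrary example instruction:

/*
 * The guest instruction
 *
 *     csrwr $t0, 0x30
 *
 * decodes through reg2csr_format as rd = 12 ($t0), rj = 1, csr = 0x30,
 * so _kvm_handle_csr() takes the csrwr branch and copies gprs[12] into
 * the software CSR image.  rj = 0 selects csrrd, and any other rj value
 * selects csrxchg with gprs[rj] used as the write mask.
 */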
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/exit.c | 98 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 98 insertions(+) create mode 100644 arch/loongarch/kvm/exit.c diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c new file mode 100644 index 0000000000..18635333fc --- /dev/null +++ b/arch/loongarch/kvm/exit.c @@ -0,0 +1,98 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "trace.h" + +static unsigned long _kvm_emu_read_csr(struct kvm_vcpu *vcpu, int csrid) +{ + struct loongarch_csrs *csr = vcpu->arch.csr; + unsigned long val = 0; + + if (get_gcsr_flag(csrid) & SW_GCSR) + val = kvm_read_sw_gcsr(csr, csrid); + else + pr_warn_once("Unsupport csrread 0x%x with pc %lx\n", + csrid, vcpu->arch.pc); + return val; +} + +static void _kvm_emu_write_csr(struct kvm_vcpu *vcpu, int csrid, + unsigned long val) +{ + struct loongarch_csrs *csr = vcpu->arch.csr; + + if (get_gcsr_flag(csrid) & SW_GCSR) + kvm_write_sw_gcsr(csr, csrid, val); + else + pr_warn_once("Unsupport csrwrite 0x%x with pc %lx\n", + csrid, vcpu->arch.pc); +} + +static void _kvm_emu_xchg_csr(struct kvm_vcpu *vcpu, int csrid, + unsigned long csr_mask, unsigned long val) +{ + struct loongarch_csrs *csr = vcpu->arch.csr; + + if (get_gcsr_flag(csrid) & SW_GCSR) { + unsigned long orig; + + orig = kvm_read_sw_gcsr(csr, csrid); + orig &= ~csr_mask; + orig |= val & csr_mask; + kvm_write_sw_gcsr(csr, csrid, orig); + } else + pr_warn_once("Unsupport csrxchg 0x%x with pc %lx\n", + csrid, vcpu->arch.pc); +} + +static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst) +{ + unsigned int rd, rj, csrid; + unsigned long csr_mask; + unsigned long val = 0; + + /* + * CSR value mask imm + * rj = 0 means csrrd + * rj = 1 means csrwr + * rj != 0,1 means csrxchg + */ + rd = inst.reg2csr_format.rd; + rj = inst.reg2csr_format.rj; + csrid = inst.reg2csr_format.csr; + + /* Process CSR ops */ + if (rj == 0) { + /* process csrrd */ + val = _kvm_emu_read_csr(vcpu, csrid); + vcpu->arch.gprs[rd] = val; + } else if (rj == 1) { + /* process csrwr */ + val = vcpu->arch.gprs[rd]; + _kvm_emu_write_csr(vcpu, csrid, val); + } else { + /* process csrxchg */ + val = vcpu->arch.gprs[rd]; + csr_mask = vcpu->arch.gprs[rj]; + _kvm_emu_xchg_csr(vcpu, csrid, csr_mask, val); + } + + return EMULATE_DONE; +} From patchwork Thu Aug 31 08:30:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137277 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp173955vqu; Thu, 31 Aug 2023 04:26:18 -0700 (PDT) X-Google-Smtp-Source: AGHT+IH3IEZFv5W/nFE8Eq2ZjKN6pmQFDqRTW5BcjZGSDmRCViWUfDvzGz3gVz6S8IPpV8gjmCoe X-Received: by 2002:a05:6512:3a83:b0:4f8:7325:bcd4 with SMTP id q3-20020a0565123a8300b004f87325bcd4mr4030892lfu.0.1693481177839; Thu, 31 Aug 2023 04:26:17 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693481177; cv=none; d=google.com; s=arc-20160816; b=wEwWEOZztqFs8tpc3rAnxlGQKJCXKFXsjNGY6EReAkpAIs1SOXze3FPA+BTLmu3L2J z+GbIz2zrbWqmxN+8+G7AKWp8YGmQcOsABROF5SraZ2QbtvkECc/NslpBBVmRLDIDD8T 0EyHZ9hAHwzOlBfh8EYBqCIC+VBKVOer/abZqwyERLNzXviGtEFhLxe+jSA8IJ91Bl3V q54mX0HHOWVQk0xskVrXUBwp5lG2IksZkVBrTRzM89aAw6VCeSQPPRn8D7VwmTyzDM+Q 
XxCYdw19JwtTwdE8y3AARDp9CFLxJA6bTo69+sefavtg7Apod3Gv2Mr+4juyjLXwof81 yu7Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=B0VJoibCFRRD0T4cwGwykHgMUbXIZRQ5U8KjCm+GEoQ=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=cCyFXNU6CCBCrd6oX3NC3wujXTSbrKMoqwvepH+QL1j2qj9MuNSpXl7AO60hf8jV5P W8LExc3Ero/Kxw4K/7fuHuERaEMjWhsrosMIVNLtSmg674Dsi/XXLcg6EtMxSgAV1nEU ZervbkcTE1uigco+bHjntRvxXGsnqllk4cutnx68WGBwzRlTsu85saMHbD8SmADBZjHl McrhnWv5y9Y0xMF0DzEUu/dBd3XDDt5JkdEJBu/voSUzS0C/EDV30S0/KYyDKcn/h75r cY2YIhD167Nm/Xsh2KTnrM8GvBe/WUUrINP7+Fs+YozLTdFm0PJzgRXAMFIvHb5O26S3 GE+g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id d25-20020a056402079900b0052566663542si932870edy.76.2023.08.31.04.25.23; Thu, 31 Aug 2023 04:26:17 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242460AbjHaIlj (ORCPT + 99 others); Thu, 31 Aug 2023 04:41:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59704 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239478AbjHaIlc (ORCPT ); Thu, 31 Aug 2023 04:41:32 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A2016E4C; Thu, 31 Aug 2023 01:41:04 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8Dx_+utT_Bkr14dAA--.58995S3; Thu, 31 Aug 2023 16:30:37 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S23; Thu, 31 Aug 2023 16:30:35 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 21/30] LoongArch: KVM: Implement handle iocsr exception Date: Thu, 31 Aug 2023 16:30:11 +0800 Message-Id: <20230831083020.2187109-22-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S23 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk 
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775743719309262928 X-GMAIL-MSGID: 1775743719309262928 Implement kvm handle vcpu iocsr exception, setting the iocsr info into vcpu_run and return to user space to handle it. Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/include/asm/inst.h | 16 ++++++ arch/loongarch/kvm/exit.c | 92 +++++++++++++++++++++++++++++++ 2 files changed, 108 insertions(+) diff --git a/arch/loongarch/include/asm/inst.h b/arch/loongarch/include/asm/inst.h index 71e1ed4165..008a88ead6 100644 --- a/arch/loongarch/include/asm/inst.h +++ b/arch/loongarch/include/asm/inst.h @@ -65,6 +65,14 @@ enum reg2_op { revbd_op = 0x0f, revh2w_op = 0x10, revhd_op = 0x11, + iocsrrdb_op = 0x19200, + iocsrrdh_op = 0x19201, + iocsrrdw_op = 0x19202, + iocsrrdd_op = 0x19203, + iocsrwrb_op = 0x19204, + iocsrwrh_op = 0x19205, + iocsrwrw_op = 0x19206, + iocsrwrd_op = 0x19207, }; enum reg2i5_op { @@ -318,6 +326,13 @@ struct reg2bstrd_format { unsigned int opcode : 10; }; +struct reg2csr_format { + unsigned int rd : 5; + unsigned int rj : 5; + unsigned int csr : 14; + unsigned int opcode : 8; +}; + struct reg3_format { unsigned int rd : 5; unsigned int rj : 5; @@ -346,6 +361,7 @@ union loongarch_instruction { struct reg2i14_format reg2i14_format; struct reg2i16_format reg2i16_format; struct reg2bstrd_format reg2bstrd_format; + struct reg2csr_format reg2csr_format; struct reg3_format reg3_format; struct reg3sa2_format reg3sa2_format; }; diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c index 18635333fc..32edd915eb 100644 --- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -96,3 +96,95 @@ static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst) return EMULATE_DONE; } + +int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu) +{ + u32 rd, rj, opcode; + u32 addr; + unsigned long val; + int ret; + + /* + * Each IOCSR with different opcode + */ + rd = inst.reg2_format.rd; + rj = inst.reg2_format.rj; + opcode = inst.reg2_format.opcode; + addr = vcpu->arch.gprs[rj]; + ret = EMULATE_DO_IOCSR; + run->iocsr_io.phys_addr = addr; + run->iocsr_io.is_write = 0; + + /* LoongArch is Little endian */ + switch (opcode) { + case iocsrrdb_op: + run->iocsr_io.len = 1; + break; + case iocsrrdh_op: + run->iocsr_io.len = 2; + break; + case iocsrrdw_op: + run->iocsr_io.len = 4; + break; + case iocsrrdd_op: + run->iocsr_io.len = 8; + break; + case iocsrwrb_op: + run->iocsr_io.len = 1; + run->iocsr_io.is_write = 1; + break; + case iocsrwrh_op: + run->iocsr_io.len = 2; + run->iocsr_io.is_write = 1; + break; + case iocsrwrw_op: + run->iocsr_io.len = 4; + run->iocsr_io.is_write = 1; + break; + case iocsrwrd_op: + run->iocsr_io.len = 8; + run->iocsr_io.is_write = 1; + break; + default: + ret = EMULATE_FAIL; + break; + } + + if (ret == EMULATE_DO_IOCSR) { + if (run->iocsr_io.is_write) { + val = vcpu->arch.gprs[rd]; + memcpy(run->iocsr_io.data, &val, run->iocsr_io.len); + } + vcpu->arch.io_gpr = rd; + } + + return ret; +} + +int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run) +{ + unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr]; + enum emulation_result er = EMULATE_DONE; + + switch (run->iocsr_io.len) { + case 8: + *gpr = *(s64 *)run->iocsr_io.data; + break; + case 4: + *gpr = *(int *)run->iocsr_io.data; + break; + case 2: + *gpr = *(short *)run->iocsr_io.data; + break; + case 1: + *gpr = *(char *) run->iocsr_io.data; + break; + default: + kvm_err("Bad 
IOCSR length: %d,addr is 0x%lx", + run->iocsr_io.len, vcpu->arch.badv); + er = EMULATE_FAIL; + break; + } + + return er; +} From patchwork Thu Aug 31 08:30:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137283 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp182222vqu; Thu, 31 Aug 2023 04:43:57 -0700 (PDT) X-Google-Smtp-Source: AGHT+IG1FYLcilDPmBVwRRixhtsKbzgeTOI+J2IL8UbzxWEelG1xkimgP1wGtVoSQu20gFDcDfOH X-Received: by 2002:a05:6402:1288:b0:521:a4bb:374f with SMTP id w8-20020a056402128800b00521a4bb374fmr4265339edv.5.1693482237427; Thu, 31 Aug 2023 04:43:57 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693482237; cv=none; d=google.com; s=arc-20160816; b=ignh3kNgjfZnMizeGMe53aM6A4FbXkBnAbYpdC9N3tsHN0x5zSqj8RCynTGjI5/VC2 wHbA6Ln1pT8QnMaMbQEHbOP0fZvGbqOoJHo7kiXHJac5LfE87552InGtHUeseDsunyxp +3yF3JdWOCvu0QQTAZ7YO7RBQ2RwC55MdgtUH4DlvxSJSDOBcowNuqq83rU2B1YRjMUS hWbegcsZW0PViHnJZKwk2kSRdRh6wDbIb+MrK0eClZRu8TN1kRuIWcg/jy2XT4u1tQSa FMzvjhVGULiJr8JBnCypUN7CAdH2k7igOAre2Y88S2zK4ZERAxRcFS+TCvDEYkb3sdgW aAWA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=YR6oMRucCHft1Qgi26mumQwdsMX1HhZIFD/+6LTQP1M=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=Ywm3Z+kyUxiaZT4hOS4bHj6hTCvJcK0+VuIL9jAqljtJOJSabDux+tGwVFceD0zTkt dcTif9gaoXH3CAngZ3kwwSDCPex5GDpNjPFPQWRSk8ErI4dCnHZflpsBtsOMRhnP7dqU 0jcUFLYUHQqEtkmBqNeYG7aEylrYFTju/Oty4CenUUwRgxq6Pdtt5fKe1xBMhfTRT3oY ye9df1POd7XfK5EMOuz7oS/SOtd3a/Hi5IAaqbyI0/EbgK4SlAoGFWBbqG/yw69HKs37 tZ7skpMu5dwvdyWsibeKThfaVBiMazZgJrKQPTudbbgVmCARzoOBSEkZx3GLQMJKunc1 bGow== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id j16-20020aa7c0d0000000b00522b8b66749si907216edp.492.2023.08.31.04.43.07; Thu, 31 Aug 2023 04:43:57 -0700 (PDT)
From: Tianrui Zhao
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH v20 22/30] LoongArch: KVM: Implement handle idle exception
Date: Thu, 31 Aug 2023 16:30:12 +0800
Message-Id: <20230831083020.2187109-23-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Implement handling of the LoongArch vCPU idle exception, using kvm_vcpu_block() to emulate it.
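For context, the guest-side trigger is the LoongArch idle instruction; a sketch of a guest idle loop that lands in this handler follows. guest_idle_sketch() is a made-up name, and this assumes the guest is configured so that idle traps to the hypervisor, as it is in this series:

/*
 * Guest-side view (sketch): executing idle traps to KVM, which emulates
 * it with _kvm_emu_idle() below; the vCPU blocks until an interrupt such
 * as the emulated timer wakes it via rcuwait_wake_up().
 */
static void guest_idle_sketch(void)
{
	__asm__ __volatile__("idle 0" : : : "memory");
}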
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/exit.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c index 32edd915eb..30748238c7 100644 --- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -188,3 +188,23 @@ int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run) return er; } + +int _kvm_emu_idle(struct kvm_vcpu *vcpu) +{ + ++vcpu->stat.idle_exits; + trace_kvm_exit_idle(vcpu, KVM_TRACE_EXIT_IDLE); + + if (!kvm_arch_vcpu_runnable(vcpu)) { + /* + * Switch to the software timer before halt-polling/blocking as + * the guest's timer may be a break event for the vCPU, and the + * hypervisor timer runs only when the CPU is in guest mode. + * Switch before halt-polling so that KVM recognizes an expired + * timer before blocking. + */ + kvm_save_timer(vcpu); + kvm_vcpu_block(vcpu); + } + + return EMULATE_DONE; +} From patchwork Thu Aug 31 08:30:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137340 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp459154vqu; Thu, 31 Aug 2023 12:42:05 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFdhMbmB1MhAWDr/61VhlQetLtJmYAxcv1oMdbXI+u7+oDJbFsgfDlzJdGy7UQmPjC5KYyJ X-Received: by 2002:a05:6808:210b:b0:3a7:b0de:902b with SMTP id r11-20020a056808210b00b003a7b0de902bmr617636oiw.38.1693510925437; Thu, 31 Aug 2023 12:42:05 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693510925; cv=none; d=google.com; s=arc-20160816; b=X7gDd/l5G6ZHB2ZllBYvesZhTeGhZTzMy2PvY9iWgSbZxPyux9cD1c6uW2JGLiVoa4 AUmpVUCgWCGVAWyJXaxTzGnj0Nr82EE8Act42+uHo8J5WmYVWDTRUsx69O2UeBmWA0/i I92sLMsUFGel+Px+1opRLh3hJfCAurBqIkxVTdusTRPhVNRKRNodvJFpA5z6ila61rls RnWkXr9brbWLsae9MEaY6Ym1zftwBi3GninIN/EqLVxEZgGH+yYQYaB/0HWa1slpsbg9 ZNeizp1pf24GuTp3EA3EFAgl/7Ko/9BQcfkePhGQFG709veEGmsSp40YPcPay75VlRYW Sc9g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=OzqUoprUAQVOZ1fcUwFib7o0Ag487kJCxoxkArJuocg=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=gW8rxTZ7gB5mGusDCYnruw5P2NUWZnz9BziFPcvGIUewE2b/iyMZXgTQ/yLiJddaMe lNiXZIhPsY9LT+V3O+uFJYmVAd6+0UIf6EPe+4lgQHpPsdng08DKuX3oELuC46cYd0Ao 3i+aVzfxkUo/2SEiMPrQHpjMZw/D2e+hKPm1tkxhOUeAQ/qgCoz7gvbew+L5FpvBjmLj HMvx+ivuBoim4a6cXP/GdJo8zzEtUUUghE761zrXmoKtkp58EA9DFzalvwVOitYgKuvF Dm+uKKvy7RFEVWrOAGQTqcNWVNYk8lXmi8nS7C2qc5+bEUYHDSSm+pxLNU9DsA1Wpp8p USAA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id k70-20020a638449000000b005698cf29f75si1638905pgd.222.2023.08.31.12.41.31; Thu, 31 Aug 2023 12:42:05 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235250AbjHaIgc (ORCPT + 99 others); Thu, 31 Aug 2023 04:36:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53062 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229758AbjHaIga (ORCPT ); Thu, 31 Aug 2023 04:36:30 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 332A810DA; Thu, 31 Aug 2023 01:35:53 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8Cx5_GtT_Bkw14dAA--.60029S3; Thu, 31 Aug 2023 16:30:38 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S25; Thu, 31 Aug 2023 16:30:37 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 23/30] LoongArch: KVM: Implement handle gspr exception Date: Thu, 31 Aug 2023 16:30:13 +0800 Message-Id: <20230831083020.2187109-24-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S25 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775774912361732762 X-GMAIL-MSGID: 1775774912361732762 Implement kvm handle gspr exception interface, including emulate the reading and writing of cpucfg, csr, iocsr resource. Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/exit.c | 112 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 112 insertions(+) diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c index 30748238c7..b0781ea100 100644 --- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -208,3 +208,115 @@ int _kvm_emu_idle(struct kvm_vcpu *vcpu) return EMULATE_DONE; } + +static int _kvm_trap_handle_gspr(struct kvm_vcpu *vcpu) +{ + enum emulation_result er = EMULATE_DONE; + struct kvm_run *run = vcpu->run; + larch_inst inst; + unsigned long curr_pc; + int rd, rj; + unsigned int index; + + /* + * Fetch the instruction. 
+ */ + inst.word = vcpu->arch.badi; + curr_pc = vcpu->arch.pc; + update_pc(&vcpu->arch); + + trace_kvm_exit_gspr(vcpu, inst.word); + er = EMULATE_FAIL; + switch (((inst.word >> 24) & 0xff)) { + case 0x0: + /* cpucfg GSPR */ + if (inst.reg2_format.opcode == 0x1B) { + rd = inst.reg2_format.rd; + rj = inst.reg2_format.rj; + ++vcpu->stat.cpucfg_exits; + index = vcpu->arch.gprs[rj]; + + vcpu->arch.gprs[rd] = read_cpucfg(index); + /* Nested KVM is not supported */ + if (index == 2) + vcpu->arch.gprs[rd] &= ~CPUCFG2_LVZP; + if (index == 6) + vcpu->arch.gprs[rd] &= ~CPUCFG6_PMP; + er = EMULATE_DONE; + } + break; + case 0x4: + /* csr GSPR */ + er = _kvm_handle_csr(vcpu, inst); + break; + case 0x6: + /* iocsr,cache,idle GSPR */ + switch (((inst.word >> 22) & 0x3ff)) { + case 0x18: + /* cache GSPR */ + er = EMULATE_DONE; + trace_kvm_exit_cache(vcpu, KVM_TRACE_EXIT_CACHE); + break; + case 0x19: + /* iocsr/idle GSPR */ + switch (((inst.word >> 15) & 0x1ffff)) { + case 0xc90: + /* iocsr GSPR */ + er = _kvm_emu_iocsr(inst, run, vcpu); + break; + case 0xc91: + /* idle GSPR */ + er = _kvm_emu_idle(vcpu); + break; + default: + er = EMULATE_FAIL; + break; + } + break; + default: + er = EMULATE_FAIL; + break; + } + break; + default: + er = EMULATE_FAIL; + break; + } + + /* Rollback PC only if emulation was unsuccessful */ + if (er == EMULATE_FAIL) { + kvm_err("[%#lx]%s: unsupported gspr instruction 0x%08x\n", + curr_pc, __func__, inst.word); + + kvm_arch_vcpu_dump_regs(vcpu); + vcpu->arch.pc = curr_pc; + } + return er; +} + +/* + * Execute cpucfg instruction will tirggerGSPR, + * Also the access to unimplemented csrs 0x15 + * 0x16, 0x50~0x53, 0x80, 0x81, 0x90~0x95, 0x98 + * 0xc0~0xff, 0x100~0x109, 0x500~0x502, + * cache_op, idle_op iocsr ops the same + */ +static int _kvm_handle_gspr(struct kvm_vcpu *vcpu) +{ + enum emulation_result er = EMULATE_DONE; + int ret = RESUME_GUEST; + + er = _kvm_trap_handle_gspr(vcpu); + + if (er == EMULATE_DONE) { + ret = RESUME_GUEST; + } else if (er == EMULATE_DO_IOCSR) { + vcpu->run->exit_reason = KVM_EXIT_LOONGARCH_IOCSR; + ret = RESUME_HOST; + } else { + kvm_err("%s internal error\n", __func__); + vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; + ret = RESUME_HOST; + } + return ret; +} From patchwork Thu Aug 31 08:30:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137270 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp155041vqu; Thu, 31 Aug 2023 03:48:38 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEjWKuUrTlKbzWHI7ldqX3MM7D5XA2yIpPZKM1wxASNNbe5cYd8+A3o5qEG2iJt+g6/pF+U X-Received: by 2002:a05:6402:149a:b0:523:4996:a4f9 with SMTP id e26-20020a056402149a00b005234996a4f9mr3967920edv.34.1693478917751; Thu, 31 Aug 2023 03:48:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693478917; cv=none; d=google.com; s=arc-20160816; b=BlGNAlmxFS7QesiDuElOrjcTuawcYM4miUyvtsKh5RO7FX4qqws6PTen2ZwRiOx5uS Nux/EiGmmgDIA8OBOk26LnLece4F9rJgCvPfpRsZ5jXuq3Z9Aq44Ayt2LDEUWJMRoBK6 0wJzZ9bD1mjmLDTzWHrZnV27pPvVVVxxbE6l0TQUBQoiWQB37L2qahDyQhZ4/yXXkRyQ MuLF5ZloeHlYmt8FDMlXdXJZJXWoSc+qXZ0S8om8E7RMSoJR4b+JQrkVgDpx5Npi766C 5hnitTrlzprHywKQudGP66Q7OXrIq3kpSzGc0Cw7M9lt7zfstJ/ugWZAfZkAlb9vSRO/ s3kA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; 
bh=tz6MXtUx4/Y8HB62Na7iHQSOCqVGXGMunSo4qarHZMo=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=M2nOZi2bSjpm5oCPKd0l+8fd4cieSZm37Ud2kRTXSrf41977TZDJ6db+9wg4Blmscw BuMZyamWXhdMkqjokStU2zHgZyDzgYyqDn+OFVPmo8w8sVqAPX/IJv9+yb5z3NMXS5+n jmFLt1hPre1Kh71NNjqZC0i/6/aGjX63Vz8ufC28rXKmcmRseYQ+6FrrZ21hDtgb2bvm 8RPNpLhXI4NFYjlaMxniENIxzEovzHJ16z1tmVOt3hNZ0pSgIpIsC6We95uCp0TGL6v1 MRyFEILL374IjHEYS8W8x6RKqmIvzD4ZxPbfukZxBG5M0BXBNcXE7+ySO9NJByM4xsT1 iWaw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id w9-20020aa7dcc9000000b0052572ac6eb4si829297edu.534.2023.08.31.03.48.08; Thu, 31 Aug 2023 03:48:37 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237312AbjHaIcp (ORCPT + 99 others); Thu, 31 Aug 2023 04:32:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32872 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242877AbjHaIbx (ORCPT ); Thu, 31 Aug 2023 04:31:53 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 007A310FD; Thu, 31 Aug 2023 01:30:47 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8DxRvGuT_Bk4F4dAA--.60072S3; Thu, 31 Aug 2023 16:30:38 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S26; Thu, 31 Aug 2023 16:30:37 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 24/30] LoongArch: KVM: Implement handle mmio exception Date: Thu, 31 Aug 2023 16:30:14 +0800 Message-Id: <20230831083020.2187109-25-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S26 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775741349876917445 X-GMAIL-MSGID: 1775741349876917445 Implement handle mmio exception, setting the mmio info into vcpu_run and return to user space to handle it. 
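Because the actual device access is finished in user space, a minimal VMM-side sketch of the round trip may help; the device_read()/device_write() helpers are hypothetical, but the kvm_run fields are the standard KVM MMIO exit interface that this patch fills in:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Hypothetical VMM device-model callbacks. */
extern void device_write(__u64 gpa, const void *buf, __u32 len);
extern void device_read(__u64 gpa, void *buf, __u32 len);

/* Run the vcpu once and dispatch an MMIO exit, if any. */
static int vcpu_run_once(int vcpu_fd, struct kvm_run *run)
{
	if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
		return -1;

	if (run->exit_reason == KVM_EXIT_MMIO) {
		if (run->mmio.is_write) {
			/* Store: the kernel already copied the guest GPR into data[]. */
			device_write(run->mmio.phys_addr, run->mmio.data, run->mmio.len);
		} else {
			/*
			 * Load: fill run->mmio.data; the next KVM_RUN lets the
			 * kernel complete it into the guest GPR (the
			 * _kvm_complete_mmio_read() path in this patch).
			 */
			device_read(run->mmio.phys_addr, run->mmio.data, run->mmio.len);
		}
	}
	return 0;
}
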
Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/exit.c | 308 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 308 insertions(+) diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c index b0781ea100..491d1c39a9 100644 --- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -209,6 +209,265 @@ int _kvm_emu_idle(struct kvm_vcpu *vcpu) return EMULATE_DONE; } +int _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst) +{ + struct kvm_run *run = vcpu->run; + unsigned int rd, op8, opcode; + unsigned long rd_val = 0; + void *data = run->mmio.data; + unsigned long curr_pc; + int ret; + + /* + * Update PC and hold onto current PC in case there is + * an error and we want to rollback the PC + */ + curr_pc = vcpu->arch.pc; + update_pc(&vcpu->arch); + + op8 = (inst.word >> 24) & 0xff; + run->mmio.phys_addr = vcpu->arch.badv; + ret = EMULATE_DO_MMIO; + if (op8 < 0x28) { + /* stptrw/d process */ + rd = inst.reg2i14_format.rd; + opcode = inst.reg2i14_format.opcode; + + switch (opcode) { + case stptrd_op: + run->mmio.len = 8; + *(unsigned long *)data = vcpu->arch.gprs[rd]; + break; + case stptrw_op: + run->mmio.len = 4; + *(unsigned int *)data = vcpu->arch.gprs[rd]; + break; + default: + ret = EMULATE_FAIL; + break; + } + } else if (op8 < 0x30) { + /* st.b/h/w/d process */ + rd = inst.reg2i12_format.rd; + opcode = inst.reg2i12_format.opcode; + rd_val = vcpu->arch.gprs[rd]; + + switch (opcode) { + case std_op: + run->mmio.len = 8; + *(unsigned long *)data = rd_val; + break; + case stw_op: + run->mmio.len = 4; + *(unsigned int *)data = rd_val; + break; + case sth_op: + run->mmio.len = 2; + *(unsigned short *)data = rd_val; + break; + case stb_op: + run->mmio.len = 1; + *(unsigned char *)data = rd_val; + break; + default: + ret = EMULATE_FAIL; + break; + } + } else if (op8 == 0x38) { + /* stxb/h/w/d process */ + rd = inst.reg3_format.rd; + opcode = inst.reg3_format.opcode; + + switch (opcode) { + case stxb_op: + run->mmio.len = 1; + *(unsigned char *)data = vcpu->arch.gprs[rd]; + break; + case stxh_op: + run->mmio.len = 2; + *(unsigned short *)data = vcpu->arch.gprs[rd]; + break; + case stxw_op: + run->mmio.len = 4; + *(unsigned int *)data = vcpu->arch.gprs[rd]; + break; + case stxd_op: + run->mmio.len = 8; + *(unsigned long *)data = vcpu->arch.gprs[rd]; + break; + default: + ret = EMULATE_FAIL; + break; + } + } else + ret = EMULATE_FAIL; + + if (ret == EMULATE_DO_MMIO) { + run->mmio.is_write = 1; + vcpu->mmio_needed = 1; + vcpu->mmio_is_write = 1; + } else { + vcpu->arch.pc = curr_pc; + kvm_err("Write not supporded inst=0x%08x @%lx BadVaddr:%#lx\n", + inst.word, vcpu->arch.pc, vcpu->arch.badv); + kvm_arch_vcpu_dump_regs(vcpu); + /* Rollback PC if emulation was unsuccessful */ + } + + return ret; +} + +int _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst) +{ + unsigned int op8, opcode, rd; + struct kvm_run *run = vcpu->run; + int ret; + + run->mmio.phys_addr = vcpu->arch.badv; + vcpu->mmio_needed = 2; /* signed */ + op8 = (inst.word >> 24) & 0xff; + ret = EMULATE_DO_MMIO; + + if (op8 < 0x28) { + /* ldptr.w/d process */ + rd = inst.reg2i14_format.rd; + opcode = inst.reg2i14_format.opcode; + + switch (opcode) { + case ldptrd_op: + run->mmio.len = 8; + break; + case ldptrw_op: + run->mmio.len = 4; + break; + default: + break; + } + } else if (op8 < 0x2f) { + /* ld.b/h/w/d, ld.bu/hu/wu process */ + rd = inst.reg2i12_format.rd; + opcode = inst.reg2i12_format.opcode; + + switch (opcode) { + case ldd_op: + run->mmio.len = 8; + break; + case 
ldwu_op: + vcpu->mmio_needed = 1; /* unsigned */ + run->mmio.len = 4; + break; + case ldw_op: + run->mmio.len = 4; + break; + case ldhu_op: + vcpu->mmio_needed = 1; /* unsigned */ + run->mmio.len = 2; + break; + case ldh_op: + run->mmio.len = 2; + break; + case ldbu_op: + vcpu->mmio_needed = 1; /* unsigned */ + run->mmio.len = 1; + break; + case ldb_op: + run->mmio.len = 1; + break; + default: + ret = EMULATE_FAIL; + break; + } + } else if (op8 == 0x38) { + /* ldxb/h/w/d, ldxb/h/wu, ldgtb/h/w/d, ldleb/h/w/d process */ + rd = inst.reg3_format.rd; + opcode = inst.reg3_format.opcode; + + switch (opcode) { + case ldxb_op: + run->mmio.len = 1; + break; + case ldxbu_op: + run->mmio.len = 1; + vcpu->mmio_needed = 1; /* unsigned */ + break; + case ldxh_op: + run->mmio.len = 2; + break; + case ldxhu_op: + run->mmio.len = 2; + vcpu->mmio_needed = 1; /* unsigned */ + break; + case ldxw_op: + run->mmio.len = 4; + break; + case ldxwu_op: + run->mmio.len = 4; + vcpu->mmio_needed = 1; /* unsigned */ + break; + case ldxd_op: + run->mmio.len = 8; + break; + default: + ret = EMULATE_FAIL; + break; + } + } else + ret = EMULATE_FAIL; + + if (ret == EMULATE_DO_MMIO) { + /* Set for _kvm_complete_mmio_read use */ + vcpu->arch.io_gpr = rd; + run->mmio.is_write = 0; + vcpu->mmio_is_write = 0; + } else { + kvm_err("Load not supporded inst=0x%08x @%lx BadVaddr:%#lx\n", + inst.word, vcpu->arch.pc, vcpu->arch.badv); + kvm_arch_vcpu_dump_regs(vcpu); + vcpu->mmio_needed = 0; + } + return ret; +} + +int _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run) +{ + unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr]; + enum emulation_result er = EMULATE_DONE; + + /* update with new PC */ + update_pc(&vcpu->arch); + switch (run->mmio.len) { + case 8: + *gpr = *(s64 *)run->mmio.data; + break; + case 4: + if (vcpu->mmio_needed == 2) + *gpr = *(int *)run->mmio.data; + else + *gpr = *(unsigned int *)run->mmio.data; + break; + case 2: + if (vcpu->mmio_needed == 2) + *gpr = *(short *) run->mmio.data; + else + *gpr = *(unsigned short *)run->mmio.data; + + break; + case 1: + if (vcpu->mmio_needed == 2) + *gpr = *(char *) run->mmio.data; + else + *gpr = *(unsigned char *) run->mmio.data; + break; + default: + kvm_err("Bad MMIO length: %d,addr is 0x%lx", + run->mmio.len, vcpu->arch.badv); + er = EMULATE_FAIL; + break; + } + + return er; +} + static int _kvm_trap_handle_gspr(struct kvm_vcpu *vcpu) { enum emulation_result er = EMULATE_DONE; @@ -320,3 +579,52 @@ static int _kvm_handle_gspr(struct kvm_vcpu *vcpu) } return ret; } + +static int _kvm_handle_mmu_fault(struct kvm_vcpu *vcpu, bool write) +{ + struct kvm_run *run = vcpu->run; + unsigned long badv = vcpu->arch.badv; + larch_inst inst; + enum emulation_result er = EMULATE_DONE; + int ret; + + ret = kvm_handle_mm_fault(vcpu, badv, write); + if (ret) { + /* Treat as MMIO */ + inst.word = vcpu->arch.badi; + if (write) { + er = _kvm_emu_mmio_write(vcpu, inst); + } else { + /* A code fetch fault doesn't count as an MMIO */ + if (kvm_is_ifetch_fault(&vcpu->arch)) { + kvm_err("%s ifetch error addr:%lx\n", __func__, badv); + run->exit_reason = KVM_EXIT_INTERNAL_ERROR; + return RESUME_HOST; + } + + er = _kvm_emu_mmio_read(vcpu, inst); + } + } + + if (er == EMULATE_DONE) { + ret = RESUME_GUEST; + } else if (er == EMULATE_DO_MMIO) { + run->exit_reason = KVM_EXIT_MMIO; + ret = RESUME_HOST; + } else { + run->exit_reason = KVM_EXIT_INTERNAL_ERROR; + ret = RESUME_HOST; + } + + return ret; +} + +static int _kvm_handle_write_fault(struct kvm_vcpu *vcpu) +{ + return 
_kvm_handle_mmu_fault(vcpu, true); +} + +static int _kvm_handle_read_fault(struct kvm_vcpu *vcpu) +{ + return _kvm_handle_mmu_fault(vcpu, false); +} From patchwork Thu Aug 31 08:30:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137301 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp259150vqu; Thu, 31 Aug 2023 06:59:18 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEoJUeVmjBsCelKc0x0FWtyh8ZMmjri+ZRXTdsUWIIFdKQByQFX/9slyVFLjhyHkgCKhC0b X-Received: by 2002:a17:902:db0e:b0:1c0:b3d1:5826 with SMTP id m14-20020a170902db0e00b001c0b3d15826mr5999531plx.53.1693490357740; Thu, 31 Aug 2023 06:59:17 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693490357; cv=none; d=google.com; s=arc-20160816; b=Q5XV+hGNRZxLA9/P5Bh4kgBGfyypwuiCARrFgNuqef+GmHFdwEfzwniUYXqDF6uKDw QBxDIeMO6GJEH2zepS0xFbXaTGm/+1M5VjEZymLZZwlOawVcqBv7/IJSJWrT2q3tmFDQ uq0dO3+HTXgk3FxkDzHves7vzsxDnXTkXEgGl3IultDIc/qNLyuDHgiQoclt/k9C5fyE zq+EU+j9XUuznP3jeBAc5Dd9+TEaNanfHi+0xrELg+yg0G49vWb2knD2ZgXieM/q7Kii mbwyr6lVT+ckgjgb6mPzkSTN8FbNox/qLjNFXo+FObdGCM0KFekCSP0mRYzxlqEWIzUA yJfQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=pgNYAJVTc0HQBq74T1mvWteJYsvANvKwh1hTnuVTBbY=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=ajiBrI7wdH89VqnjnpCcE/QX223ONkIih6fB1vYFXSkAufu56rgFKNzrEZZkZLnKgR 4PvPM8kzW8XtDPeVBRhRpdOpYW62JAhl+IdBW7rR0uyiAkiwU1J0V8qaij4mztGHxcnp qEwGY4Brt/CE3kZQ1pmr9ktf4Bt3goKwgK0CiP9/5zOGmCsVIFlKm8tySvQi0dnYz/MH a6uZZUURoyjtfPa20V/UUaIQvhSyJ2e3QucPRnxe7f+7VIi3TsGGu3w5n+jilRXpiJOp HHsNDNdOybkT6YEtkxgD1zBGJU0BKD6sfHPq74gqzqsWnWEtL2qN8a5btno7uQmHrr2l i/CQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id ko12-20020a17090307cc00b001b886a0c366si1174701plb.122.2023.08.31.06.59.02; Thu, 31 Aug 2023 06:59:17 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345099AbjHaIcL (ORCPT + 99 others); Thu, 31 Aug 2023 04:32:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51834 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229981AbjHaIb1 (ORCPT ); Thu, 31 Aug 2023 04:31:27 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 5D3DC10D9; Thu, 31 Aug 2023 01:30:45 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8AxJuiuT_Bk1l4dAA--.1007S3; Thu, 31 Aug 2023 16:30:38 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S27; Thu, 31 Aug 2023 16:30:37 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 25/30] LoongArch: KVM: Implement handle fpu exception Date: Thu, 31 Aug 2023 16:30:15 +0800 Message-Id: <20230831083020.2187109-26-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S27 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775753344923218910 X-GMAIL-MSGID: 1775753344923218910 Implement handle fpu exception, using kvm_own_fpu to enable fpu for guest. Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/exit.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c index 491d1c39a9..f9c7951261 100644 --- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -628,3 +628,29 @@ static int _kvm_handle_read_fault(struct kvm_vcpu *vcpu) { return _kvm_handle_mmu_fault(vcpu, false); } + +/** + * _kvm_handle_fpu_disabled() - Guest used fpu however it is disabled at host + * @vcpu: Virtual CPU context. + * + * Handle when the guest attempts to use fpu which hasn't been allowed + * by the root context. 
+ */ +static int _kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu) +{ + struct kvm_run *run = vcpu->run; + + /* + * If guest FPU not present, the FPU operation should have been + * treated as a reserved instruction! + * If FPU already in use, we shouldn't get this at all. + */ + if (WARN_ON(vcpu->arch.aux_inuse & KVM_LARCH_FPU)) { + kvm_err("%s internal error\n", __func__); + run->exit_reason = KVM_EXIT_INTERNAL_ERROR; + return RESUME_HOST; + } + + kvm_own_fpu(vcpu); + return RESUME_GUEST; +} From patchwork Thu Aug 31 08:30:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137350 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp507005vqu; Thu, 31 Aug 2023 14:27:48 -0700 (PDT) X-Google-Smtp-Source: AGHT+IExdrtwkWmhVZk5piwuKM0OXm27ijkBoYgayNz5SOVgB4b4mPymc+UKvAYe0sOcKIiI+ccn X-Received: by 2002:a17:906:1d:b0:9a2:28dc:4166 with SMTP id 29-20020a170906001d00b009a228dc4166mr414269eja.75.1693517268246; Thu, 31 Aug 2023 14:27:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693517268; cv=none; d=google.com; s=arc-20160816; b=puRim2zTkFGHRvqXFZQ0MU/cL7CyRjbfa9Yu3Vw1ZvfjO3DDEMDbgQC1EuwaKvZ7o6 cDdVW2j8udegw9iFy3P4QZiWb2zVL/qT1TI9iNOHSWGlP5DWmvCRxElCrF/abtLE2L/G ul032wEH780JOdc061t8R9AQ3oot1S1EgBPazW0v286RqinybP855yce2IeLfo3JnWz/ xrBn5P055k2R1D3AOkHIWLUwnaMDEw5mGA/pBqyB6E+veBwzhn1ZV45bWM5T/dV8mLzr dpSaAuu0+18cyS/rrVcl4Nkl9iSoTcOI8KXgHy+5bYJwe+lNN+1pGL3UK+plthhdoYeb mhYQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=DT8opXme4ZB6R4g1Kx0rs/+Mh4v3H9yA5C/5+3WTn8c=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=LR+w74OwQySPrtLz/efQs6vvwJzAPbAAGugI9NpTpdsijQRlREoDxasHnGyxwD8F1c Q0KvpRQUIqdXif3OTI2FHgMDsT+ERA3T3zEMmGwC5rX0zkSCmJ1tMEYkeNgV++q6PtYD nQZ9jlEwLbymR/Dhr64XSBP3HKO0VtK9lHoiJhH3MLE7q/j6W6ef9wkkYUwf3Z7T5OPI xtqBCXxNB5uDyYmHdh4vI1NGjCwjscd6ebQUSFwctr9/BHIz4kFqJqwENhpDb0uqBEBT 6VgRONFMXSgFMjHIh34mjbtnQdhrM96U5Rfy8CzAleRVPrk7/kfG2iOuEPvDZiimQ9YT c6bg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id rh28-20020a17090720fc00b0099307a6635esi1539290ejb.701.2023.08.31.14.27.21; Thu, 31 Aug 2023 14:27:48 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231237AbjHaIcO (ORCPT + 99 others); Thu, 31 Aug 2023 04:32:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51842 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236261AbjHaIb3 (ORCPT ); Thu, 31 Aug 2023 04:31:29 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E31B210E3; Thu, 31 Aug 2023 01:30:45 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8AxEvCuT_Bk3V4dAA--.59308S3; Thu, 31 Aug 2023 16:30:38 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S28; Thu, 31 Aug 2023 16:30:38 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 26/30] LoongArch: KVM: Implement kvm exception vector Date: Thu, 31 Aug 2023 16:30:16 +0800 Message-Id: <20230831083020.2187109-27-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S28 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775781563169528449 X-GMAIL-MSGID: 1775781563169528449 Implement kvm exception vector, using _kvm_fault_tables array to save the handle function pointer and it is used when vcpu handle exit. Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kvm/exit.c | 46 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 46 insertions(+) diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c index f9c7951261..a6df2b56ed 100644 --- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -654,3 +654,49 @@ static int _kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu) kvm_own_fpu(vcpu); return RESUME_GUEST; } + +/* + * Loongarch KVM callback handling for not implemented guest exiting + */ +static int _kvm_fault_ni(struct kvm_vcpu *vcpu) +{ + unsigned long estat, badv; + unsigned int exccode, inst; + + /* + * Fetch the instruction. 
+ */ + badv = vcpu->arch.badv; + estat = vcpu->arch.host_estat; + exccode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT; + inst = vcpu->arch.badi; + kvm_err("Exccode: %d PC=%#lx inst=0x%08x BadVaddr=%#lx estat=%#lx\n", + exccode, vcpu->arch.pc, inst, badv, read_gcsr_estat()); + kvm_arch_vcpu_dump_regs(vcpu); + vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; + + return RESUME_HOST; +} + +static exit_handle_fn _kvm_fault_tables[EXCCODE_INT_START] = { + [EXCCODE_TLBL] = _kvm_handle_read_fault, + [EXCCODE_TLBI] = _kvm_handle_read_fault, + [EXCCODE_TLBS] = _kvm_handle_write_fault, + [EXCCODE_TLBM] = _kvm_handle_write_fault, + [EXCCODE_FPDIS] = _kvm_handle_fpu_disabled, + [EXCCODE_GSPR] = _kvm_handle_gspr, +}; + +void _kvm_init_fault(void) +{ + int i; + + for (i = 0; i < EXCCODE_INT_START; i++) + if (!_kvm_fault_tables[i]) + _kvm_fault_tables[i] = _kvm_fault_ni; +} + +int _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault) +{ + return _kvm_fault_tables[fault](vcpu); +} From patchwork Thu Aug 31 08:30:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137304 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp314475vqu; Thu, 31 Aug 2023 08:20:21 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEaE4laTu8q3wZtsddb5gHMVBFhHv8CO7SCe2snqxm5IoVzkzpJyHlNbHy7YsoxnsL+zwah X-Received: by 2002:a05:6402:2028:b0:525:5886:2f69 with SMTP id ay8-20020a056402202800b0052558862f69mr4736395edb.36.1693495221387; Thu, 31 Aug 2023 08:20:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693495221; cv=none; d=google.com; s=arc-20160816; b=aFFVbacLGmBadri1vmHqn/es6c2Tqc6kX+ts+D0jXu2g9IttY0eZ1a8UsBy8R+y1C9 8emvViHEdOChidhKp/B/EM/ipU7z0VdM8fQKx+H7teetZlj7lTX3/DOKxzISwc3P4+R/ NMp3Y0LXPQnWox0SVcTtLJ39rfg2RnaSLs6GRMslmHa9mocS23oAYn92pDkTwTBG8glR BCGw0kuWpNHrIhJMmQHcAHVWCkuB6mXpRvvmJF+A/RBsaMA0mCAykT3dMtzeImHnyg6M dgANLsuRdvt0EChZ/UCyg0DU35jrnwp7Ile85L61ziryQu7L7MfcZ+Jl9KZB5DyHlx7q oV7g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=GAoEGV9ByKOk6OMKJR5Oi1M+08hTlmZt3CF314eo8w8=; fh=ejoaVAUUbBXsA/6jM/beuI9xCYtV2osdR9SpI7hjY84=; b=jq7oM0Zc2vqDUcHTDDVj220CTLLDZImvz9OJt+4b78d13yJ5Ntrp0R7V2scusj+AVc fdwWDG9UPHuaJCgqwnfX+7NlOTlx+gWHDfl2Vwxhi8xygIRbaKXZYRcqOFyeVJWK0vN0 FPNP/CFBhOHCnmb8X/S9IneetM+wvZqly9AJ2KFaY/TmqyF8ADnWS79Yqr7HHF/GCB9u y+TQ32hqS5uY1hrMLf2VAJsIf0EooEOhcU6PJLUtGKXfpwhLx/YQqNZT6vciMDxi1VT0 VlcUXrWA0ThyXq1dw9pM0oCrmv49RfW3ejtFFo59341SWfT38x0yPGBh/56OxVOpXyKL EdFg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h20-20020aa7c614000000b0052a3a3355b1si1250786edq.637.2023.08.31.08.19.51; Thu, 31 Aug 2023 08:20:21 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242067AbjHaIcb (ORCPT + 99 others); Thu, 31 Aug 2023 04:32:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58422 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344664AbjHaIbq (ORCPT ); Thu, 31 Aug 2023 04:31:46 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 553B9170E; Thu, 31 Aug 2023 01:30:49 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8AxEvCwT_Bk_l4dAA--.59310S3; Thu, 31 Aug 2023 16:30:40 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S29; Thu, 31 Aug 2023 16:30:38 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn Subject: [PATCH v20 27/30] LoongArch: KVM: Implement vcpu world switch Date: Thu, 31 Aug 2023 16:30:17 +0800 Message-Id: <20230831083020.2187109-28-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S29 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775758445172311263 X-GMAIL-MSGID: 1775758445172311263 Implement LoongArch vcpu world switch, including vcpu enter guest and vcpu exit from guest, both operations need to save or restore the host and guest registers. Reviewed-by: Bibo Mao Signed-off-by: Tianrui Zhao --- arch/loongarch/kernel/asm-offsets.c | 32 ++++ arch/loongarch/kvm/switch.S | 255 ++++++++++++++++++++++++++++ 2 files changed, 287 insertions(+) create mode 100644 arch/loongarch/kvm/switch.S diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c index 505e4bf596..d4bbaa74c1 100644 --- a/arch/loongarch/kernel/asm-offsets.c +++ b/arch/loongarch/kernel/asm-offsets.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -285,3 +286,34 @@ void output_fgraph_ret_regs_defines(void) BLANK(); } #endif + +static void __used output_kvm_defines(void) +{ + COMMENT(" KVM/LOONGARCH Specific offsets. 
"); + + OFFSET(VCPU_FCSR0, kvm_vcpu_arch, fpu.fcsr); + OFFSET(VCPU_FCC, kvm_vcpu_arch, fpu.fcc); + BLANK(); + + OFFSET(KVM_VCPU_ARCH, kvm_vcpu, arch); + OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm); + OFFSET(KVM_VCPU_RUN, kvm_vcpu, run); + BLANK(); + + OFFSET(KVM_ARCH_HSP, kvm_vcpu_arch, host_sp); + OFFSET(KVM_ARCH_HTP, kvm_vcpu_arch, host_tp); + OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit); + OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd); + OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry); + OFFSET(KVM_ARCH_GPC, kvm_vcpu_arch, pc); + OFFSET(KVM_ARCH_GGPR, kvm_vcpu_arch, gprs); + OFFSET(KVM_ARCH_HESTAT, kvm_vcpu_arch, host_estat); + OFFSET(KVM_ARCH_HBADV, kvm_vcpu_arch, badv); + OFFSET(KVM_ARCH_HBADI, kvm_vcpu_arch, badi); + OFFSET(KVM_ARCH_HECFG, kvm_vcpu_arch, host_ecfg); + OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry); + OFFSET(KVM_ARCH_HPERCPU, kvm_vcpu_arch, host_percpu); + + OFFSET(KVM_GPGD, kvm, arch.pgd); + BLANK(); +} diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S new file mode 100644 index 0000000000..f637fcd56c --- /dev/null +++ b/arch/loongarch/kvm/switch.S @@ -0,0 +1,255 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited + */ + +#include +#include +#include +#include +#include +#include + +#define PT_GPR_OFFSET(x) (PT_R0 + 8*x) +#define GGPR_OFFSET(x) (KVM_ARCH_GGPR + 8*x) + +.macro kvm_save_host_gpr base + .irp n,1,2,3,22,23,24,25,26,27,28,29,30,31 + st.d $r\n, \base, PT_GPR_OFFSET(\n) + .endr +.endm + +.macro kvm_restore_host_gpr base + .irp n,1,2,3,22,23,24,25,26,27,28,29,30,31 + ld.d $r\n, \base, PT_GPR_OFFSET(\n) + .endr +.endm + +/* + * save and restore all gprs except base register, + * and default value of base register is a2. + */ +.macro kvm_save_guest_gprs base + .irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 + st.d $r\n, \base, GGPR_OFFSET(\n) + .endr +.endm + +.macro kvm_restore_guest_gprs base + .irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 + ld.d $r\n, \base, GGPR_OFFSET(\n) + .endr +.endm + +/* + * prepare switch to guest, save host reg and restore guest reg. 
+ * a2: kvm_vcpu_arch, don't touch it until 'ertn' + * t0, t1: temp register + */ +.macro kvm_switch_to_guest + /* set host excfg.VS=0, all exceptions share one exception entry */ + csrrd t0, LOONGARCH_CSR_ECFG + bstrins.w t0, zero, CSR_ECFG_VS_SHIFT_END, CSR_ECFG_VS_SHIFT + csrwr t0, LOONGARCH_CSR_ECFG + + /* Load up the new EENTRY */ + ld.d t0, a2, KVM_ARCH_GEENTRY + csrwr t0, LOONGARCH_CSR_EENTRY + + /* Set Guest ERA */ + ld.d t0, a2, KVM_ARCH_GPC + csrwr t0, LOONGARCH_CSR_ERA + + /* Save host PGDL */ + csrrd t0, LOONGARCH_CSR_PGDL + st.d t0, a2, KVM_ARCH_HPGD + + /* Switch to kvm */ + ld.d t1, a2, KVM_VCPU_KVM - KVM_VCPU_ARCH + + /* Load guest PGDL */ + li.w t0, KVM_GPGD + ldx.d t0, t1, t0 + csrwr t0, LOONGARCH_CSR_PGDL + + /* Mix GID and RID */ + csrrd t1, LOONGARCH_CSR_GSTAT + bstrpick.w t1, t1, CSR_GSTAT_GID_SHIFT_END, CSR_GSTAT_GID_SHIFT + csrrd t0, LOONGARCH_CSR_GTLBC + bstrins.w t0, t1, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT + csrwr t0, LOONGARCH_CSR_GTLBC + + /* + * Switch to guest: + * GSTAT.PGM = 1, ERRCTL.ISERR = 0, TLBRPRMD.ISTLBR = 0 + * ertn + */ + + /* + * Enable intr in root mode with future ertn so that host interrupt + * can be responsed during VM runs + * guest crmd comes from separate gcsr_CRMD register + */ + ori t0, zero, CSR_PRMD_PIE + csrxchg t0, t0, LOONGARCH_CSR_PRMD + + /* Set PVM bit to setup ertn to guest context */ + ori t0, zero, CSR_GSTAT_PVM + csrxchg t0, t0, LOONGARCH_CSR_GSTAT + + /* Load Guest gprs */ + kvm_restore_guest_gprs a2 + /* Load KVM_ARCH register */ + ld.d a2, a2, (KVM_ARCH_GGPR + 8 * REG_A2) + + ertn +.endm + + /* + * exception entry for general exception from guest mode + * - IRQ is disabled + * - kernel privilege in root mode + * - page mode keep unchanged from previous prmd in root mode + * - Fixme: tlb exception cannot happen since registers relative with TLB + * - is still in guest mode, such as pgd table/vmid registers etc, + * - will fix with hw page walk enabled in future + * load kvm_vcpu from reserved CSR KVM_VCPU_KS, and save a2 to KVM_TEMP_KS + */ + .text + .cfi_sections .debug_frame +SYM_CODE_START(kvm_vector_entry) + csrwr a2, KVM_TEMP_KS + csrrd a2, KVM_VCPU_KS + addi.d a2, a2, KVM_VCPU_ARCH + + /* After save gprs, free to use any gpr */ + kvm_save_guest_gprs a2 + /* Save guest a2 */ + csrrd t0, KVM_TEMP_KS + st.d t0, a2, (KVM_ARCH_GGPR + 8 * REG_A2) + + /* a2: kvm_vcpu_arch, a1 is free to use */ + csrrd s1, KVM_VCPU_KS + ld.d s0, s1, KVM_VCPU_RUN + + csrrd t0, LOONGARCH_CSR_ESTAT + st.d t0, a2, KVM_ARCH_HESTAT + csrrd t0, LOONGARCH_CSR_ERA + st.d t0, a2, KVM_ARCH_GPC + csrrd t0, LOONGARCH_CSR_BADV + st.d t0, a2, KVM_ARCH_HBADV + csrrd t0, LOONGARCH_CSR_BADI + st.d t0, a2, KVM_ARCH_HBADI + + /* Restore host excfg.VS */ + csrrd t0, LOONGARCH_CSR_ECFG + ld.d t1, a2, KVM_ARCH_HECFG + or t0, t0, t1 + csrwr t0, LOONGARCH_CSR_ECFG + + /* Restore host eentry */ + ld.d t0, a2, KVM_ARCH_HEENTRY + csrwr t0, LOONGARCH_CSR_EENTRY + + /* restore host pgd table */ + ld.d t0, a2, KVM_ARCH_HPGD + csrwr t0, LOONGARCH_CSR_PGDL + + /* + * Disable PGM bit to enter root mode by default with next ertn + */ + ori t0, zero, CSR_GSTAT_PVM + csrxchg zero, t0, LOONGARCH_CSR_GSTAT + /* + * Clear GTLBC.TGID field + * 0: for root tlb update in future tlb instr + * others: for guest tlb update like gpa to hpa in future tlb instr + */ + csrrd t0, LOONGARCH_CSR_GTLBC + bstrins.w t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT + csrwr t0, LOONGARCH_CSR_GTLBC + ld.d tp, a2, KVM_ARCH_HTP + ld.d sp, a2, KVM_ARCH_HSP + /* restore per cpu register 
*/ + ld.d u0, a2, KVM_ARCH_HPERCPU + addi.d sp, sp, -PT_SIZE + + /* Prepare handle exception */ + or a0, s0, zero + or a1, s1, zero + ld.d t8, a2, KVM_ARCH_HANDLE_EXIT + jirl ra, t8, 0 + + or a2, s1, zero + addi.d a2, a2, KVM_VCPU_ARCH + + /* resume host when ret <= 0 */ + bge zero, a0, ret_to_host + + /* + * return to guest + * save per cpu register again, maybe switched to another cpu + */ + st.d u0, a2, KVM_ARCH_HPERCPU + + /* Save kvm_vcpu to kscratch */ + csrwr s1, KVM_VCPU_KS + kvm_switch_to_guest + +ret_to_host: + ld.d a2, a2, KVM_ARCH_HSP + addi.d a2, a2, -PT_SIZE + kvm_restore_host_gpr a2 + jr ra + +SYM_INNER_LABEL(kvm_vector_entry_end, SYM_L_LOCAL) +SYM_CODE_END(kvm_vector_entry) + +/* + * int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu) + * + * @register_param: + * a0: kvm_run* run + * a1: kvm_vcpu* vcpu + */ +SYM_FUNC_START(kvm_enter_guest) + /* allocate space in stack bottom */ + addi.d a2, sp, -PT_SIZE + /* save host gprs */ + kvm_save_host_gpr a2 + + /* save host crmd,prmd csr to stack */ + csrrd a3, LOONGARCH_CSR_CRMD + st.d a3, a2, PT_CRMD + csrrd a3, LOONGARCH_CSR_PRMD + st.d a3, a2, PT_PRMD + + addi.d a2, a1, KVM_VCPU_ARCH + st.d sp, a2, KVM_ARCH_HSP + st.d tp, a2, KVM_ARCH_HTP + /* Save per cpu register */ + st.d u0, a2, KVM_ARCH_HPERCPU + + /* Save kvm_vcpu to kscratch */ + csrwr a1, KVM_VCPU_KS + kvm_switch_to_guest +SYM_INNER_LABEL(kvm_enter_guest_end, SYM_L_LOCAL) +SYM_FUNC_END(kvm_enter_guest) + +SYM_FUNC_START(kvm_save_fpu) + fpu_save_csr a0 t1 + fpu_save_double a0 t1 + fpu_save_cc a0 t1 t2 + jr ra +SYM_FUNC_END(kvm_save_fpu) + +SYM_FUNC_START(kvm_restore_fpu) + fpu_restore_double a0 t1 + fpu_restore_csr a0 t1 + fpu_restore_cc a0 t1 t2 + jr ra +SYM_FUNC_END(kvm_restore_fpu) + + .section ".rodata" +SYM_DATA(kvm_vector_size, .quad kvm_vector_entry_end - kvm_vector_entry) +SYM_DATA(kvm_enter_guest_size, .quad kvm_enter_guest_end - kvm_enter_guest) From patchwork Thu Aug 31 08:30:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137291 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp206562vqu; Thu, 31 Aug 2023 05:29:19 -0700 (PDT) X-Google-Smtp-Source: AGHT+IH66oqNQG8b/QsKJq0euMTUXMIIN2we4XN6gVoEMcdAEPwC9iiDmpSU2LPoqDy9NNBMDXvT X-Received: by 2002:a17:906:31c7:b0:9a5:b878:7336 with SMTP id f7-20020a17090631c700b009a5b8787336mr3760767ejf.7.1693484959546; Thu, 31 Aug 2023 05:29:19 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693484959; cv=none; d=google.com; s=arc-20160816; b=fKh3ikxG/e+UEKX63S4jIEyL2ZJZ0CoPRB/Csu4zhLi4obsWNHtJB+HhitlnfkZtsW pkr/MpTSA+ESXWCPIsthmhokIDUi6KKtBnzoxmWjBkGcbmqR+4yzBlezmI8njQDPQSr0 9D7coDip5sG5luLjjRbSn3t5pDVdUSph1HT6Zz1WTN7yQYvT+XBi1ItCt+kOPlQZddcp lYC5NwDbIVrI80n48QVXYqCDwgrqUHLqnei/r17Bzr7UicJ9Qos3U6n7hNPuQrVB2kkw VjCUVSBsd1lDvbjYMpAtFzuHheXr2Z+7tbIs9GP8CMB6kwlX3ib0wreOV2ysOmil+Q4s e4ZA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=a7zkw6a5/t9a5S5ODPYmuogCNktl6E5Zu2Jr/XtIN+A=; fh=Pv+2Bw+ivEIk9JV7RELMDVOUN/JJnoAjjY7JFPRTPYw=; b=dty6P0jmBRiN5Eg2YY/zB8xVBlYBX2JDY0Sok3rzAOOy5lp/kkZgRkN74JglD9teSa /cWKQn7XJ0iUZt5Tqw2ePM3aaY5ycr/SZYMAhC+erpqwObDWqyEteMpGfjAOhA+hGsdw tRnu/x62oNzXEfx6WRQyY3FpAD3ssX3q4b+t5L6gAhHE9EsN/u+sBaN6/XVr6TTiTgvu k8qUSL13dK61ORhGzcZTP5KeFVlqllyyMsKLjfG60BdIF4TIENvwVR10phpTmkEHws7+ 
XieyDESnsegTaHBFWHHHwJtOhf9joQAziI8XIcKw+GwmJxquxNfOShUKp1XG1Lq+nUwR mKSw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id e25-20020a170906375900b0098e2a30ec3asi853968ejc.385.2023.08.31.05.28.45; Thu, 31 Aug 2023 05:29:19 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229722AbjHaIcW (ORCPT + 99 others); Thu, 31 Aug 2023 04:32:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48852 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236174AbjHaIbk (ORCPT ); Thu, 31 Aug 2023 04:31:40 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 7626C1703; Thu, 31 Aug 2023 01:30:48 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8Cxh+ivT_Bk8F4dAA--.24482S3; Thu, 31 Aug 2023 16:30:39 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S30; Thu, 31 Aug 2023 16:30:38 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn, kernel test robot Subject: [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile Date: Thu, 31 Aug 2023 16:30:18 +0800 Message-Id: <20230831083020.2187109-29-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S30 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775747685005927783 X-GMAIL-MSGID: 1775747685005927783 Enable LoongArch kvm config and add the makefile to support build kvm module. 
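Once the module is built and loaded, the usual /dev/kvm interface becomes available; a quick smoke test, independent of this patch and using only generic KVM ioctls, might look like:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Check that /dev/kvm exists and a VM can be created. */
int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);

	if (kvm < 0) {
		perror("open /dev/kvm (is CONFIG_KVM built and loaded?)");
		return 1;
	}
	printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	if (vm < 0) {
		perror("KVM_CREATE_VM");
		return 1;
	}
	printf("VM created (fd %d)\n", vm);
	return 0;
}
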
Reviewed-by: Bibo Mao Reported-by: kernel test robot Link: https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/ Signed-off-by: Tianrui Zhao --- arch/loongarch/Kbuild | 1 + arch/loongarch/Kconfig | 3 ++ arch/loongarch/configs/loongson3_defconfig | 2 + arch/loongarch/kvm/Kconfig | 45 ++++++++++++++++++++++ arch/loongarch/kvm/Makefile | 22 +++++++++++ 5 files changed, 73 insertions(+) create mode 100644 arch/loongarch/kvm/Kconfig create mode 100644 arch/loongarch/kvm/Makefile diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild index b01f5cdb27..40be8a1696 100644 --- a/arch/loongarch/Kbuild +++ b/arch/loongarch/Kbuild @@ -2,6 +2,7 @@ obj-y += kernel/ obj-y += mm/ obj-y += net/ obj-y += vdso/ +obj-y += kvm/ # for cleaning subdir- += boot diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index ecf282dee5..7f2f7ccc76 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -123,6 +123,7 @@ config LOONGARCH select HAVE_KPROBES select HAVE_KPROBES_ON_FTRACE select HAVE_KRETPROBES + select HAVE_KVM select HAVE_MOD_ARCH_SPECIFIC select HAVE_NMI select HAVE_PCI @@ -650,3 +651,5 @@ source "kernel/power/Kconfig" source "drivers/acpi/Kconfig" endmenu + +source "arch/loongarch/kvm/Kconfig" diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig index d64849b4cb..7acb4ae7af 100644 --- a/arch/loongarch/configs/loongson3_defconfig +++ b/arch/loongarch/configs/loongson3_defconfig @@ -63,6 +63,8 @@ CONFIG_EFI_ZBOOT=y CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y CONFIG_EFI_CAPSULE_LOADER=m CONFIG_EFI_TEST=m +CONFIG_VIRTUALIZATION=y +CONFIG_KVM=m CONFIG_MODULES=y CONFIG_MODULE_FORCE_LOAD=y CONFIG_MODULE_UNLOAD=y diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig new file mode 100644 index 0000000000..bf7d6e7cde --- /dev/null +++ b/arch/loongarch/kvm/Kconfig @@ -0,0 +1,45 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# KVM configuration +# + +source "virt/kvm/Kconfig" + +menuconfig VIRTUALIZATION + bool "Virtualization" + help + Say Y here to get to see options for using your Linux host to run + other operating systems inside virtual machines (guests). + This option alone does not add any kernel code. + + If you say N, all options in this submenu will be skipped and + disabled. + +if VIRTUALIZATION + +config AS_HAS_LVZ_EXTENSION + def_bool $(as-instr,hvcl 0) + +config KVM + tristate "Kernel-based Virtual Machine (KVM) support" + depends on HAVE_KVM + depends on AS_HAS_LVZ_EXTENSION + select MMU_NOTIFIER + select ANON_INODES + select PREEMPT_NOTIFIERS + select KVM_MMIO + select KVM_GENERIC_DIRTYLOG_READ_PROTECT + select KVM_GENERIC_HARDWARE_ENABLING + select KVM_XFER_TO_GUEST_WORK + select HAVE_KVM_DIRTY_RING_ACQ_REL + select HAVE_KVM_VCPU_ASYNC_IOCTL + select HAVE_KVM_EVENTFD + select SRCU + help + Support hosting virtualized guest machines using hardware + virtualization extensions. You will need a fairly processor + equipped with virtualization extensions. + + If unsure, say N. 
+ +endif # VIRTUALIZATION diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile new file mode 100644 index 0000000000..2335e873a6 --- /dev/null +++ b/arch/loongarch/kvm/Makefile @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Makefile for LOONGARCH KVM support +# + +ccflags-y += -I $(srctree)/$(src) + +include $(srctree)/virt/kvm/Makefile.kvm + +obj-$(CONFIG_KVM) += kvm.o + +kvm-y += main.o +kvm-y += vm.o +kvm-y += vmid.o +kvm-y += tlb.o +kvm-y += mmu.o +kvm-y += vcpu.o +kvm-y += exit.o +kvm-y += interrupt.o +kvm-y += timer.o +kvm-y += switch.o +kvm-y += csr_ops.o From patchwork Thu Aug 31 08:30:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhaotianrui X-Patchwork-Id: 137315 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c792:0:b0:3f2:4152:657d with SMTP id b18csp334715vqu; Thu, 31 Aug 2023 08:57:08 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEIEUeXnmSl9jKKKcuZgaUrRvc+35nEpz/3wWe3vzOiy40+QQLrRULL3M4XM6828uEWHZ6z X-Received: by 2002:a05:6808:180f:b0:3a7:6ff5:c628 with SMTP id bh15-20020a056808180f00b003a76ff5c628mr6912062oib.11.1693497427898; Thu, 31 Aug 2023 08:57:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1693497427; cv=none; d=google.com; s=arc-20160816; b=tM4oCJl9fOUl3Au110F8pNdWQ8nD1LfrKJYlXGVRxigai4vnQJy4LUTX9ssBU/73jR njqSs7/T77rQVVRNrxHiGVy8ACR99Xq1fHjJ6JXC5IrDc+B/q77gasWkCAsukYmzFrdU ev6md+ru6t206EuP4O9P3bh1o1sySlerFXjdXTpkwx24oGsEafiFqyLQS+AVEPIGeb62 8z/5ru2W6aT3qhYTmRYEKaagQam5VrSX5z+y/S92f+gl67IIVN8yLnvMvPLfa/5zA+fw e7zn6DqeqPASfQnTeiCoPW6xLZ0mav6/IhKyvHOr7qGL5jt08nshKhxBs7bJa7OW1xwQ OXMA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=v9RkGZJ0EMZPTtqrrU5OMa6T/kZ6McXGNGBnzOck93w=; fh=DndB7O6DTG6Sgonrk2zkDJR5MPyE/Qek5uYuwsra5zw=; b=LT3D2tvhLKaG+dgIaAzri+TpRDBLNba/uRAz7ObH4C+LMDtXajQP8KGle0FZg7oFVE h/v7Zcw3UMEXf3LKNe9PLwS4oMhDzrtLAMrslGA0Eg9KqM5UnoL5fTcPN0Nl/krgZ4Gw RuzqlTq16VkKmu4OgOHyrvTmbTqeGNSJBQ+9tjjzfqwhsX10ZtsYAkU+f9CDyr2rgySK K66hZ+R+boojJ3DnpvS14XRL951Mc4nkNgXsjneeQyxVGXChe2riMf0w4vifTeW6CN2h Q7ClCfISeRQ827pa/53cg+OUfxe7J4py722Dl+2vRVXC7FLuA5CSKnWXKFl/xPJlEuCf nUQw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id g187-20020a636bc4000000b00565dd3fbfdfsi1450581pgc.214.2023.08.31.08.56.48; Thu, 31 Aug 2023 08:57:07 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345451AbjHaIcc (ORCPT + 99 others); Thu, 31 Aug 2023 04:32:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51716 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345795AbjHaIbq (ORCPT ); Thu, 31 Aug 2023 04:31:46 -0400 Received: from mail.loongson.cn (mail.loongson.cn [114.242.206.163]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id AA467170B; Thu, 31 Aug 2023 01:30:49 -0700 (PDT) Received: from loongson.cn (unknown [10.2.5.185]) by gateway (Coremail) with SMTP id _____8BxHOuwT_Bk_V4dAA--.55402S3; Thu, 31 Aug 2023 16:30:40 +0800 (CST) Received: from localhost.localdomain (unknown [10.2.5.185]) by localhost.localdomain (Coremail) with SMTP id AQAAf8Ax3c6gT_BkOPVnAA--.55892S31; Thu, 31 Aug 2023 16:30:39 +0800 (CST) From: Tianrui Zhao To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Huacai Chen , WANG Xuerui , Greg Kroah-Hartman , loongarch@lists.linux.dev, Jens Axboe , Mark Brown , Alex Deucher , Oliver Upton , maobibo@loongson.cn, Xi Ruoyao , zhaotianrui@loongson.cn, Huacai Chen Subject: [PATCH v20 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part Date: Thu, 31 Aug 2023 16:30:19 +0800 Message-Id: <20230831083020.2187109-30-zhaotianrui@loongson.cn> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn> References: <20230831083020.2187109-1-zhaotianrui@loongson.cn> MIME-Version: 1.0 X-CM-TRANSID: AQAAf8Ax3c6gT_BkOPVnAA--.55892S31 X-CM-SenderInfo: p2kd03xldq233l6o00pqjv00gofq/ X-Coremail-Antispam: 1Uk129KBjDUn29KB7ZKAUJUUUUU529EdanIXcx71UUUUU7KY7 ZEXasCq-sGcSsGvfJ3UbIjqfuFe4nvWSU5nxnvy29KBjDU0xBIdaVrnUUvcSsGvfC2Kfnx nUUI43ZEXa7xR_UUUUUUUUU== Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1775760758635876909 X-GMAIL-MSGID: 1775760758635876909 Supplement kvm document about LoongArch-specific part, such as add api introduction for GET/SET_ONE_REG, GET/SET_FPU, GET/SET_MP_STATE, etc. Reviewed-by: Huacai Chen Signed-off-by: Tianrui Zhao --- Documentation/virt/kvm/api.rst | 70 +++++++++++++++++++++++++++++----- 1 file changed, 61 insertions(+), 9 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index c0ddd30354..8ad10ec17a 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -416,6 +416,13 @@ Reads the general purpose registers from the vcpu. __u64 pc; }; + /* LoongArch */ + struct kvm_regs { + /* out (KVM_GET_REGS) / in (KVM_SET_REGS) */ + unsigned long gpr[32]; + unsigned long pc; + }; + 4.12 KVM_SET_REGS ----------------- @@ -506,7 +513,7 @@ translation mode. ------------------ :Capability: basic -:Architectures: x86, ppc, mips, riscv +:Architectures: x86, ppc, mips, riscv, loongarch :Type: vcpu ioctl :Parameters: struct kvm_interrupt (in) :Returns: 0 on success, negative on failure. 
@@ -592,6 +599,14 @@ b) KVM_INTERRUPT_UNSET
 
 This is an asynchronous vcpu ioctl and can be invoked from any thread.
 
+LOONGARCH:
+^^^^^^^^^^
+
+Queues an external interrupt to be injected into the virtual CPU. A negative
+interrupt number dequeues the interrupt.
+
+This is an asynchronous vcpu ioctl and can be invoked from any thread.
+
 4.17 KVM_DEBUG_GUEST
 --------------------
 
@@ -737,7 +752,7 @@ signal mask.
 ----------------
 
 :Capability: basic
-:Architectures: x86
+:Architectures: x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_fpu (out)
 :Returns: 0 on success, -1 on error
@@ -746,7 +761,7 @@ Reads the floating point state from the vcpu.
 
 ::
 
-  /* for KVM_GET_FPU and KVM_SET_FPU */
+  /* x86: for KVM_GET_FPU and KVM_SET_FPU */
   struct kvm_fpu {
 	__u8  fpr[8][16];
 	__u16 fcw;
@@ -761,12 +776,21 @@ Reads the floating point state from the vcpu.
 	__u32 pad2;
   };
 
+  /* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */
+  struct kvm_fpu {
+	__u32 fcsr;
+	__u64 fcc;
+	struct kvm_fpureg {
+		__u64 val64[4];
+	} fpr[32];
+  };
+
 
 4.23 KVM_SET_FPU
 ----------------
 
 :Capability: basic
-:Architectures: x86
+:Architectures: x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_fpu (in)
 :Returns: 0 on success, -1 on error
@@ -775,7 +799,7 @@ Writes the floating point state to the vcpu.
 
 ::
 
-  /* for KVM_GET_FPU and KVM_SET_FPU */
+  /* x86: for KVM_GET_FPU and KVM_SET_FPU */
   struct kvm_fpu {
 	__u8  fpr[8][16];
 	__u16 fcw;
@@ -790,6 +814,15 @@ Writes the floating point state to the vcpu.
 	__u32 pad2;
   };
 
+  /* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */
+  struct kvm_fpu {
+	__u32 fcsr;
+	__u64 fcc;
+	struct kvm_fpureg {
+		__u64 val64[4];
+	} fpr[32];
+  };
+
 4.24 KVM_CREATE_IRQCHIP
 -----------------------
 
@@ -1387,7 +1420,7 @@ documentation when it pops into existence).
 -------------------
 
 :Capability: KVM_CAP_ENABLE_CAP
-:Architectures: mips, ppc, s390, x86
+:Architectures: mips, ppc, s390, x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_enable_cap (in)
 :Returns: 0 on success; -1 on error
@@ -1442,7 +1475,7 @@ for vm-wide capabilities.
 ---------------------
 
 :Capability: KVM_CAP_MP_STATE
-:Architectures: x86, s390, arm64, riscv
+:Architectures: x86, s390, arm64, riscv, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_mp_state (out)
 :Returns: 0 on success; -1 on error
@@ -1460,7 +1493,7 @@ Possible values are:
 
 ========================== ===============================================
 KVM_MP_STATE_RUNNABLE        the vcpu is currently running
-                             [x86,arm64,riscv]
+                             [x86,arm64,riscv,loongarch]
 KVM_MP_STATE_UNINITIALIZED   the vcpu is an application processor (AP)
                              which has not yet received an INIT signal [x86]
 KVM_MP_STATE_INIT_RECEIVED   the vcpu has received an INIT signal, and is
@@ -1516,11 +1549,14 @@ For riscv:
 The only states that are valid are KVM_MP_STATE_STOPPED and
 KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not.
 
+On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect
+whether the vcpu is runnable.
+
 4.39 KVM_SET_MP_STATE
 ---------------------
 
 :Capability: KVM_CAP_MP_STATE
-:Architectures: x86, s390, arm64, riscv
+:Architectures: x86, s390, arm64, riscv, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_mp_state (in)
 :Returns: 0 on success; -1 on error
@@ -1538,6 +1574,9 @@ For arm64/riscv:
 The only states that are valid are KVM_MP_STATE_STOPPED and
 KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not.
 
+On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect
+whether the vcpu is runnable.
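To make the GET_FPU/GET_MP_STATE semantics above concrete, here is a similar
hedged sketch, again not taken from the patch and again assuming the LoongArch
uapi definition of struct kvm_fpu (fcsr, fcc and 32 fpr slots of four __u64
words each); on LoongArch the returned mp_state is expected to be
KVM_MP_STATE_RUNNABLE.

/* Hypothetical helper: query MP state and FPU state of a LoongArch vcpu. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int dump_loongarch_fpu(int vcpu_fd)
{
	struct kvm_mp_state mp_state;
	struct kvm_fpu fpu;
	int i;

	if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp_state) < 0)
		return -1;
	/* Only KVM_MP_STATE_RUNNABLE is meaningful on LoongArch. */
	printf("mp_state: %u\n", mp_state.mp_state);

	if (ioctl(vcpu_fd, KVM_GET_FPU, &fpu) < 0)
		return -1;
	printf("fcsr: 0x%x  fcc: 0x%llx\n", fpu.fcsr,
	       (unsigned long long)fpu.fcc);
	/* Each fpr slot is 256 bits wide; print the low 64 bits of each. */
	for (i = 0; i < 32; i++)
		printf("f%-2d: 0x%llx\n", i,
		       (unsigned long long)fpu.fpr[i].val64[0]);

	return 0;
}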
+
 4.40 KVM_SET_IDENTITY_MAP_ADDR
 ------------------------------
 
@@ -2839,6 +2878,19 @@ Following are the RISC-V D-extension registers:
   0x8020 0000 0600 0020 fcsr      Floating point control and status register
 ======================= ========= =============================================
 
+LoongArch registers are mapped using the lower 32 bits. The upper 16 bits of
+that is the register group type.
+
+LoongArch csr registers are used to control guest cpu or get status of guest
+cpu, and they have the following id bit patterns::
+
+  0x9030 0000 0001 00 (64-bit)
+
+LoongArch KVM control registers are used to implement some new defined functions
+such as set vcpu counter or reset vcpu, and they have the following id bit patterns::
+
+  0x9030 0000 0002
+
 4.69 KVM_GET_ONE_REG
 --------------------
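As a worked example of the register id layout documented above, the sketch
below reads one guest CSR through KVM_GET_ONE_REG. The id constant is
assembled by hand from the documented 64-bit CSR group prefix
(0x9030 0000 0001 00..) purely for illustration; real code would normally use
the KVM_REG_* constants exported by the LoongArch uapi headers, and writing a
CSR would use KVM_SET_ONE_REG with the same id.

/* Hypothetical helper: read one guest CSR via the ONE_REG interface. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int get_loongarch_csr(int vcpu_fd, unsigned int csr_index, uint64_t *val)
{
	struct kvm_one_reg reg = {
		/* arch = LoongArch, size = u64, group = CSR, low bits = CSR index. */
		.id   = 0x9030000000010000ULL | csr_index,
		.addr = (uint64_t)(unsigned long)val,
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}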
From patchwork Thu Aug 31 08:30:20 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 137266

From: Tianrui Zhao
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
    loongarch@lists.linux.dev, Jens Axboe, Mark Brown, Alex Deucher,
    Oliver Upton, maobibo@loongson.cn, Xi Ruoyao, zhaotianrui@loongson.cn,
    Huacai Chen
Subject: [PATCH v20 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM
Date: Thu, 31 Aug 2023 16:30:20 +0800
Message-Id: <20230831083020.2187109-31-zhaotianrui@loongson.cn>
In-Reply-To: <20230831083020.2187109-1-zhaotianrui@loongson.cn>
References: <20230831083020.2187109-1-zhaotianrui@loongson.cn>

Add maintainers for LoongArch KVM.

Acked-by: Huacai Chen
Signed-off-by: Tianrui Zhao
---
 MAINTAINERS | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 242178802c..11eb27dd66 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11472,6 +11472,18 @@ F:	include/kvm/arm_*
 F:	tools/testing/selftests/kvm/*/aarch64/
 F:	tools/testing/selftests/kvm/aarch64/
 
+KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch)
+M:	Tianrui Zhao
+M:	Bibo Mao
+M:	Huacai Chen
+L:	kvm@vger.kernel.org
+L:	loongarch@lists.linux.dev
+S:	Maintained
+T:	git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
+F:	arch/loongarch/include/asm/kvm*
+F:	arch/loongarch/include/uapi/asm/kvm*
+F:	arch/loongarch/kvm/
+
 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
 M:	Huacai Chen
 L:	linux-mips@vger.kernel.org