From patchwork Fri Oct 28 23:07:20 2022
X-Patchwork-Submitter: Paolo Bonzini <pbonzini@redhat.com>
X-Patchwork-Id: 12588
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: jmattson@google.com, seanjc@google.com, jpoimboe@kernel.org
Subject: [PATCH 4/7] KVM: SVM: replace argument of __svm_vcpu_run with vcpu_svm
Date: Fri, 28 Oct 2022 19:07:20 -0400
Message-Id: <20221028230723.3254250-5-pbonzini@redhat.com>
In-Reply-To: <20221028230723.3254250-1-pbonzini@redhat.com>
References: <20221028230723.3254250-1-pbonzini@redhat.com>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org
Since registers are reachable through vcpu_svm, and we will need to access
more fields of that struct, pass it instead of the regs[] array.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kernel/asm-offsets.c |  6 ++++++
 arch/x86/kvm/svm/svm.c        |  2 +-
 arch/x86/kvm/svm/svm.h        |  2 +-
 arch/x86/kvm/svm/vmenter.S    | 36 ++++++++++++++++++------------------
 4 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 90da275ad223..7f1dd1138117 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include "../kvm/vmx/vmx.h"
+#include "../kvm/svm/svm.h"
 
 #ifdef CONFIG_XEN
 #include
@@ -109,6 +110,11 @@ static void __used common(void)
 	OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);
 	OFFSET(TSS_sp2, tss_struct, x86_tss.sp2);
 
+	if (IS_ENABLED(CONFIG_KVM_AMD)) {
+		BLANK();
+		OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs);
+	}
+
 	if (IS_ENABLED(CONFIG_KVM_INTEL)) {
 		BLANK();
 		OFFSET(VMX_vcpu_arch_regs, vcpu_vmx, vcpu.arch.regs);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b793cfdce68d..64f5f0544b4f 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3932,7 +3932,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 	 * vmcb02 when switching vmcbs for nested virtualization.
 	 */
 	vmload(svm->vmcb01.pa);
-	__svm_vcpu_run(vmcb_pa, (unsigned long *)&vcpu->arch.regs);
+	__svm_vcpu_run(vmcb_pa, svm);
 	vmsave(svm->vmcb01.pa);
 
 	vmload(__sme_page_pa(sd->save_area));
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 222856788153..5f8dfc9cd9a7 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -484,6 +484,6 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 /* vmenter.S */
 
 void __svm_sev_es_vcpu_run(unsigned long vmcb_pa);
-void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs);
+void __svm_vcpu_run(unsigned long vmcb_pa, struct vcpu_svm *svm);
 
 #endif
diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index 723f8534986c..8fac744361e5 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -8,23 +8,23 @@
 #define WORD_SIZE (BITS_PER_LONG / 8)
 
 /* Intentionally omit RAX as it's context switched by hardware */
-#define VCPU_RCX	__VCPU_REGS_RCX * WORD_SIZE
-#define VCPU_RDX	__VCPU_REGS_RDX * WORD_SIZE
-#define VCPU_RBX	__VCPU_REGS_RBX * WORD_SIZE
+#define VCPU_RCX	(SVM_vcpu_arch_regs + __VCPU_REGS_RCX * WORD_SIZE)
+#define VCPU_RDX	(SVM_vcpu_arch_regs + __VCPU_REGS_RDX * WORD_SIZE)
+#define VCPU_RBX	(SVM_vcpu_arch_regs + __VCPU_REGS_RBX * WORD_SIZE)
 /* Intentionally omit RSP as it's context switched by hardware */
-#define VCPU_RBP	__VCPU_REGS_RBP * WORD_SIZE
-#define VCPU_RSI	__VCPU_REGS_RSI * WORD_SIZE
-#define VCPU_RDI	__VCPU_REGS_RDI * WORD_SIZE
+#define VCPU_RBP	(SVM_vcpu_arch_regs + __VCPU_REGS_RBP * WORD_SIZE)
+#define VCPU_RSI	(SVM_vcpu_arch_regs + __VCPU_REGS_RSI * WORD_SIZE)
+#define VCPU_RDI	(SVM_vcpu_arch_regs + __VCPU_REGS_RDI * WORD_SIZE)
 
 #ifdef CONFIG_X86_64
-#define VCPU_R8		__VCPU_REGS_R8  * WORD_SIZE
-#define VCPU_R9		__VCPU_REGS_R9  * WORD_SIZE
-#define VCPU_R10	__VCPU_REGS_R10 * WORD_SIZE
-#define VCPU_R11	__VCPU_REGS_R11 * WORD_SIZE
-#define VCPU_R12	__VCPU_REGS_R12 * WORD_SIZE
-#define VCPU_R13	__VCPU_REGS_R13 * WORD_SIZE
-#define VCPU_R14	__VCPU_REGS_R14 * WORD_SIZE
-#define VCPU_R15	__VCPU_REGS_R15 * WORD_SIZE
+#define VCPU_R8		(SVM_vcpu_arch_regs + __VCPU_REGS_R8  * WORD_SIZE)
+#define VCPU_R9		(SVM_vcpu_arch_regs + __VCPU_REGS_R9  * WORD_SIZE)
+#define VCPU_R10	(SVM_vcpu_arch_regs + __VCPU_REGS_R10 * WORD_SIZE)
+#define VCPU_R11	(SVM_vcpu_arch_regs + __VCPU_REGS_R11 * WORD_SIZE)
+#define VCPU_R12	(SVM_vcpu_arch_regs + __VCPU_REGS_R12 * WORD_SIZE)
+#define VCPU_R13	(SVM_vcpu_arch_regs + __VCPU_REGS_R13 * WORD_SIZE)
+#define VCPU_R14	(SVM_vcpu_arch_regs + __VCPU_REGS_R14 * WORD_SIZE)
+#define VCPU_R15	(SVM_vcpu_arch_regs + __VCPU_REGS_R15 * WORD_SIZE)
 #endif
 
 .section .noinstr.text, "ax"
@@ -32,7 +32,7 @@
 /**
  * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
  * @vmcb_pa:	unsigned long
- * @regs:	unsigned long * (to guest registers)
+ * @svm:	struct vcpu_svm *
  */
 SYM_FUNC_START(__svm_vcpu_run)
 	push %_ASM_BP
@@ -47,13 +47,13 @@ SYM_FUNC_START(__svm_vcpu_run)
 #endif
 	push %_ASM_BX
 
-	/* Save @regs. */
+	/* Save @svm. */
 	push %_ASM_ARG2
 
 	/* Save @vmcb. */
 	push %_ASM_ARG1
 
-	/* Move @regs to RAX. */
+	/* Move @svm to RAX. */
 	mov %_ASM_ARG2, %_ASM_AX
 
 	/* Load guest registers. */
@@ -89,7 +89,7 @@ SYM_FUNC_START(__svm_vcpu_run)
 	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
 #endif
 
-	/* "POP" @regs to RAX. */
+	/* "POP" @svm to RAX. */
 	pop %_ASM_AX
 
 	/* Save all guest registers. */
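
For readers who have not met the asm-offsets machinery before: the new
OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs) line makes the build emit
an assembler-visible constant equal to offsetof(struct vcpu_svm,
vcpu.arch.regs), so adding SVM_vcpu_arch_regs to the struct pointer that
__svm_vcpu_run now receives reaches the same regs[] array the old second
argument pointed at. The standalone sketch below only illustrates that
equivalence; the struct bodies, the vcpu_id field and the SVM_VCPU_ARCH_REGS
macro are simplified stand-ins, not the real KVM definitions.

/*
 * Illustration only: simplified stand-ins for the KVM structs, showing that
 * "struct base pointer + generated offset" (what the assembly computes) is
 * the same address as the old "unsigned long *regs" argument.
 */
#include <stddef.h>
#include <stdio.h>

struct kvm_vcpu_arch { unsigned long regs[16]; };
struct kvm_vcpu      { int vcpu_id; struct kvm_vcpu_arch arch; };
struct vcpu_svm      { struct kvm_vcpu vcpu; /* ...more fields follow... */ };

#define SVM_VCPU_ARCH_REGS offsetof(struct vcpu_svm, vcpu.arch.regs)

int main(void)
{
	struct vcpu_svm svm = { 0 };

	/* What the assembly now computes: struct base (in RAX) plus offset. */
	unsigned long *via_offset =
		(unsigned long *)((char *)&svm + SVM_VCPU_ARCH_REGS);

	/* What the removed regs argument used to point at. */
	unsigned long *via_field = svm.vcpu.arch.regs;

	printf("offset = %zu, same address: %d\n",
	       SVM_VCPU_ARCH_REGS, via_offset == via_field);
	return 0;
}

With these stand-in structs the sketch prints a small non-zero offset and
"same address: 1"; in the real structures the offset is larger because struct
kvm_vcpu has many members before arch, which is exactly why the generated
constant, rather than a hand-written one, is folded into the VCPU_* macros.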