Message ID | 20230116143645.589522290@infradead.org |
---|---|
State | New |
Headers | |
Subject | [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64() |
Series | x86: retbleed=stuff fixes |
Commit Message
Peter Zijlstra
Jan. 16, 2023, 2:25 p.m. UTC
The boot trampolines from trampoline_64.S have code flow like:
  16bit BIOS                SEV-ES                       64bit EFI

  trampoline_start()        sev_es_trampoline_start()    trampoline_start_64()
      verify_cpu()                   |                            |
  switch_to_protected: <-------------'                            v
          |                                            pa_trampoline_compat()
          v                                                       |
      startup_32() <----------------------------------------------'
          |
          v
      startup_64()
          |
          v
      tr_start() := head_64.S:secondary_startup_64()
Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
touch the APs), there is already a verify_cpu() invocation.
Removing the verify_cpu() invocation from secondary_startup_64()
renders the whole secondary_startup_64_no_verify() thing moot, so
remove that too.
Cc: jroedel@suse.de
Cc: hpa@zytor.com
Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
arch/x86/include/asm/realmode.h | 1 -
arch/x86/kernel/head_64.S | 16 ----------------
arch/x86/realmode/init.c | 6 ------
3 files changed, 23 deletions(-)
Comments
* Peter Zijlstra <peterz@infradead.org> wrote:

> The boot trampolines from trampoline_64.S have code flow like:
>
> [flow diagram snipped; see the commit message above]

oh ... this nice flow chart should move into a prominent C comment I think,
it's far too good to be forgotten in a Git commit changelog.

> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> touch the APs), there is already a verify_cpu() invocation.
>
> Removing the verify_cpu() invocation from secondary_startup_64()
> renders the whole secondary_startup_64_no_verify() thing moot, so
> remove that too.
>
> Cc: jroedel@suse.de
> Cc: hpa@zytor.com
> Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
> Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Ingo Molnar <mingo@kernel.org>

Thanks,

	Ingo
On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
> The boot trampolines from trampoline_64.S have code flow like:
>
> [flow diagram snipped; see the commit message above]
>
> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> touch the APs), there is already a verify_cpu() invocation.

So supposedly TDX/ACPI-6.4 comes in on trampoline_start64() for APs --
can any of the TDX capable folks tell me if we need verify_cpu() on
these?

Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
force enable SSE on AMD/K7. Surely none of that is needed for these
shiny new chips?

I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
point, but it seems really sad to need that on modern systems.
On Wed, Jan 18, 2023 at 10:45:44AM +0100, Peter Zijlstra wrote:
> So supposedly TDX/ACPI-6.4 comes in on trampoline_start64() for APs --
> can any of the TDX capable folks tell me if we need verify_cpu() on
> these?
>
> Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
> force enable SSE on AMD/K7. Surely none of that is needed for these
> shiny new chips?

TDX has no XD_DISABLE set and it doesn't allow writes to the
IA32_MISC_ENABLE MSR (triggers #VE), so we should be safe.
On January 18, 2023 1:45:44 AM PST, Peter Zijlstra <peterz@infradead.org> wrote:
> [...]
>
> I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
> point, but it seems really sad to need that on modern systems.

Sad, perhaps, but really better for orthogonality – fewer special cases.
On Thu, Jan 19, 2023 at 11:35:06AM -0800, H. Peter Anvin wrote:
> On January 18, 2023 1:45:44 AM PST, Peter Zijlstra <peterz@infradead.org> wrote:
> > [...]
> >
> > I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
> > point, but it seems really sad to need that on modern systems.
>
> Sad, perhaps, but really better for orthogonality – fewer special cases.

I'd argue more, but whatever. XD_DISABLE is an abomination and 64bit
entry points should care about it just as much as having LM. And this
way we have 2/3 instead of 1/3 entry points do 'special' nonsense.

I ended up with this trainwreck; it adds verify_cpu to
pa_trampoline_compat() because for some raisin it doesn't want to
assemble when placed in trampoline_start64(). Is this really what we
want?
---
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -689,9 +689,14 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lno_longmo
 	jmp	1b
 SYM_FUNC_END(.Lno_longmode)
 
-	.globl	verify_cpu
 #include "../../kernel/verify_cpu.S"
 
+	.globl	verify_cpu
+SYM_FUNC_START_LOCAL(verify_cpu)
+	VERIFY_CPU
+	RET
+SYM_FUNC_END(verify_cpu)
+
 	.data
 SYM_DATA_START_LOCAL(gdt64)
 	.word	gdt_end - gdt - 1
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -321,6 +321,11 @@ SYM_FUNC_END(startup_32_smp)
 
 #include "verify_cpu.S"
 
+SYM_FUNC_START_LOCAL(verify_cpu)
+	VERIFY_CPU
+	RET
+SYM_FUNC_END(verify_cpu)
+
 	__INIT
 SYM_FUNC_START(early_idt_handler_array)
 	# 36(%esp) %eflags
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -345,6 +345,12 @@ SYM_CODE_START(secondary_startup_64)
 SYM_CODE_END(secondary_startup_64)
 
 #include "verify_cpu.S"
+
+SYM_FUNC_START_LOCAL(verify_cpu)
+	VERIFY_CPU
+	RET
+SYM_FUNC_END(verify_cpu)
+
 #include "sev_verify_cbit.S"
 
 #ifdef CONFIG_HOTPLUG_CPU
--- a/arch/x86/kernel/verify_cpu.S
+++ b/arch/x86/kernel/verify_cpu.S
@@ -31,7 +31,7 @@
 #include <asm/cpufeatures.h>
 #include <asm/msr-index.h>
 
-SYM_FUNC_START_LOCAL(verify_cpu)
+.macro VERIFY_CPU
 	pushf				# Save caller passed flags
 	push	$0			# Kill any dangerous flags
 	popf
@@ -46,31 +46,31 @@ SYM_FUNC_START_LOCAL(verify_cpu)
 	pushfl
 	popl	%eax
 	cmpl	%eax,%ebx
-	jz	.Lverify_cpu_no_longmode	# cpu has no cpuid
+	jz	.Lverify_cpu_no_longmode_\@	# cpu has no cpuid
 #endif
 
 	movl	$0x0,%eax		# See if cpuid 1 is implemented
 	cpuid
 	cmpl	$0x1,%eax
-	jb	.Lverify_cpu_no_longmode	# no cpuid 1
+	jb	.Lverify_cpu_no_longmode_\@	# no cpuid 1
 
 	xor	%di,%di
 	cmpl	$0x68747541,%ebx	# AuthenticAMD
-	jnz	.Lverify_cpu_noamd
+	jnz	.Lverify_cpu_noamd_\@
 	cmpl	$0x69746e65,%edx
-	jnz	.Lverify_cpu_noamd
+	jnz	.Lverify_cpu_noamd_\@
 	cmpl	$0x444d4163,%ecx
-	jnz	.Lverify_cpu_noamd
+	jnz	.Lverify_cpu_noamd_\@
 	mov	$1,%di			# cpu is from AMD
-	jmp	.Lverify_cpu_check
+	jmp	.Lverify_cpu_check_\@
 
-.Lverify_cpu_noamd:
+.Lverify_cpu_noamd_\@:
 	cmpl	$0x756e6547,%ebx	# GenuineIntel?
-	jnz	.Lverify_cpu_check
+	jnz	.Lverify_cpu_check_\@
 	cmpl	$0x49656e69,%edx
-	jnz	.Lverify_cpu_check
+	jnz	.Lverify_cpu_check_\@
 	cmpl	$0x6c65746e,%ecx
-	jnz	.Lverify_cpu_check
+	jnz	.Lverify_cpu_check_\@
 
 	# only call IA32_MISC_ENABLE when:
 	# family > 6 || (family == 6 && model >= 0xd)
@@ -81,60 +81,62 @@ SYM_FUNC_START_LOCAL(verify_cpu)
 	andl	$0x0ff00f00, %eax	# mask family and extended family
 	shrl	$8, %eax
 	cmpl	$6, %eax
-	ja	.Lverify_cpu_clear_xd	# family > 6, ok
-	jb	.Lverify_cpu_check	# family < 6, skip
+	ja	.Lverify_cpu_clear_xd_\@	# family > 6, ok
+	jb	.Lverify_cpu_check_\@	# family < 6, skip
 
 	andl	$0x000f00f0, %ecx	# mask model and extended model
 	shrl	$4, %ecx
 	cmpl	$0xd, %ecx
-	jb	.Lverify_cpu_check	# family == 6, model < 0xd, skip
+	jb	.Lverify_cpu_check_\@	# family == 6, model < 0xd, skip
 
-.Lverify_cpu_clear_xd:
+.Lverify_cpu_clear_xd_\@:
 	movl	$MSR_IA32_MISC_ENABLE, %ecx
 	rdmsr
 	btrl	$2, %edx		# clear MSR_IA32_MISC_ENABLE_XD_DISABLE
-	jnc	.Lverify_cpu_check	# only write MSR if bit was changed
+	jnc	.Lverify_cpu_check_\@	# only write MSR if bit was changed
 	wrmsr
 
-.Lverify_cpu_check:
+.Lverify_cpu_check_\@:
 	movl	$0x1,%eax		# Does the cpu have what it takes
 	cpuid
 	andl	$REQUIRED_MASK0,%edx
 	xorl	$REQUIRED_MASK0,%edx
-	jnz	.Lverify_cpu_no_longmode
+	jnz	.Lverify_cpu_no_longmode_\@
 
 	movl	$0x80000000,%eax	# See if extended cpuid is implemented
 	cpuid
 	cmpl	$0x80000001,%eax
-	jb	.Lverify_cpu_no_longmode	# no extended cpuid
+	jb	.Lverify_cpu_no_longmode_\@	# no extended cpuid
 
 	movl	$0x80000001,%eax	# Does the cpu have what it takes
 	cpuid
 	andl	$REQUIRED_MASK1,%edx
 	xorl	$REQUIRED_MASK1,%edx
-	jnz	.Lverify_cpu_no_longmode
+	jnz	.Lverify_cpu_no_longmode_\@
 
-.Lverify_cpu_sse_test:
+.Lverify_cpu_sse_test_\@:
 	movl	$1,%eax
 	cpuid
 	andl	$SSE_MASK,%edx
 	cmpl	$SSE_MASK,%edx
-	je	.Lverify_cpu_sse_ok
+	je	.Lverify_cpu_sse_ok_\@
 	test	%di,%di
-	jz	.Lverify_cpu_no_longmode	# only try to force SSE on AMD
+	jz	.Lverify_cpu_no_longmode_\@	# only try to force SSE on AMD
 	movl	$MSR_K7_HWCR,%ecx
 	rdmsr
 	btr	$15,%eax		# enable SSE
 	wrmsr
 	xor	%di,%di			# don't loop
-	jmp	.Lverify_cpu_sse_test	# try again
+	jmp	.Lverify_cpu_sse_test_\@	# try again
 
-.Lverify_cpu_no_longmode:
+.Lverify_cpu_no_longmode_\@:
 	popf				# Restore caller passed flags
 	movl	$1,%eax
-	RET
-.Lverify_cpu_sse_ok:
+	jmp	.Lverify_cpu_ret_\@
+
+.Lverify_cpu_sse_ok_\@:
 	popf				# Restore caller passed flags
 	xorl	%eax, %eax
-	RET
-SYM_FUNC_END(verify_cpu)
+
+.Lverify_cpu_ret_\@:
+.endm
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -34,6 +34,8 @@
 #include <asm/realmode.h>
 #include "realmode.h"
 
+#include "../kernel/verify_cpu.S"
+
 	.text
 	.code16
 
@@ -52,7 +54,8 @@ SYM_CODE_START(trampoline_start)
 	# Setup stack
 	movl	$rm_stack_end, %esp
 
-	call	verify_cpu		# Verify the cpu supports long mode
+	VERIFY_CPU			# Verify the cpu supports long mode
+
 	testl	%eax, %eax		# Check for return code
 	jnz	no_longmode
 
@@ -100,8 +103,6 @@ SYM_CODE_START(sev_es_trampoline_start)
 SYM_CODE_END(sev_es_trampoline_start)
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
-#include "../kernel/verify_cpu.S"
-
 	.section ".text32","ax"
 	.code32
 	.balign 4
@@ -180,6 +181,8 @@ SYM_CODE_START(pa_trampoline_compat)
 	movl	$rm_stack_end, %esp
 	movw	$__KERNEL_DS, %dx
 
+	VERIFY_CPU
+
 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax
 	movl	%eax, %cr0
 	ljmpl	$__KERNEL32_CS, $pa_startup_32
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -73,7 +73,6 @@ extern unsigned char startup_32_smp[];
 extern unsigned char boot_gdt[];
 #else
 extern unsigned char secondary_startup_64[];
-extern unsigned char secondary_startup_64_no_verify[];
 #endif
 
 static inline size_t real_mode_size_needed(void)
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -143,22 +143,6 @@ SYM_CODE_START(secondary_startup_64)
 	 * after the boot processor executes this code.
 	 */
 
-	/* Sanitize CPU configuration */
-	call verify_cpu
-
-	/*
-	 * The secondary_startup_64_no_verify entry point is only used by
-	 * SEV-ES guests. In those guests the call to verify_cpu() would cause
-	 * #VC exceptions which can not be handled at this stage of secondary
-	 * CPU bringup.
-	 *
-	 * All non SEV-ES systems, especially Intel systems, need to execute
-	 * verify_cpu() above to make sure NX is enabled.
-	 */
-SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
-	UNWIND_HINT_EMPTY
-	ANNOTATE_NOENDBR
-
 	/*
 	 * Retrieve the modifier (SME encryption mask if SME is active) to be
 	 * added to the initial pgdir entry that will be programmed into CR3.
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -74,12 +74,6 @@ static void __init sme_sev_setup_real_mo
 		th->flags |= TH_FLAGS_SME_ACTIVE;
 
 	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
-		/*
-		 * Skip the call to verify_cpu() in secondary_startup_64 as it
-		 * will cause #VC exceptions when the AP can't handle them yet.
-		 */
-		th->start = (u64) secondary_startup_64_no_verify;
-
 		if (sev_es_setup_ap_jump_table(real_mode_header))
 			panic("Failed to get/update SEV-ES AP Jump Table");
 	}