From patchwork Tue Oct 17 21:24:10 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 154569
Message-ID: <20231017211723.856859665@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov
Subject: [patch V5 34/39] x86/microcode: Rendezvous and load in NMI
References: <20231017200758.877560658@linutronix.de>
Date: Tue, 17 Oct 2023 23:24:10 +0200 (CEST)

From: Thomas Gleixner

stop_machine() does not
prevent the spin-waiting sibling from handling an NMI, which obviously
violates the whole concept of rendezvous.

Implement a static branch right at the beginning of the NMI handler which
is nopped out except when enabled by the late loading mechanism.

The late loader enables the static branch before stop_machine() is
invoked. Each CPU has a nmi_enabled flag in its control structure which
indicates whether the CPU should go into the update routine. This is
required to bridge the gap between enabling the branch and actually being
at the point where the CPU is required to enter the loader wait loop.

Each CPU which arrives in the stopper thread function sets that flag and
issues a self NMI right after that. If the NMI function sees the flag
clear, it returns. If the flag is set, it clears the flag and enters the
rendezvous.

This is safe against a real NMI which hits in between setting the flag and
sending the NMI to itself. The real NMI will be swallowed by the microcode
update and the self NMI will then let stuff continue. Otherwise this would
end up with a spurious NMI.

Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/microcode.h         |   12 ++++++++
 arch/x86/kernel/cpu/microcode/core.c     |   42 ++++++++++++++++++++++++++++---
 arch/x86/kernel/cpu/microcode/intel.c    |    1 
 arch/x86/kernel/cpu/microcode/internal.h |    3 +-
 arch/x86/kernel/nmi.c                    |    4 ++
 5 files changed, 57 insertions(+), 5 deletions(-)
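The flag/self-NMI handshake above can be modeled in plain userspace C,
with a POSIX signal standing in for the NMI. This is only a sketch of the
idea, not kernel code: nmi_enabled, nmi() and update_handler() below are
illustrative stand-ins for ucode_ctrl.nmi_enabled, microcode_nmi_handler()
and microcode_update_handler().

#include <signal.h>
#include <stdio.h>

/* Models ucode_ctrl.nmi_enabled; armed only while an update is wanted */
static volatile sig_atomic_t nmi_enabled;
static volatile sig_atomic_t updates_run;

/* Models microcode_update_handler(): the actual rendezvous/update work */
static void update_handler(void)
{
	updates_run++;
}

/* Models microcode_nmi_handler(): ignore "NMIs" not meant for the loader */
static void nmi(int sig)
{
	(void)sig;
	if (!nmi_enabled)
		return;			/* spurious: flag clear, nothing to do */
	nmi_enabled = 0;		/* consume the flag exactly once */
	update_handler();
}

int main(void)
{
	signal(SIGUSR1, nmi);

	/* A stray "NMI" while the flag is clear is swallowed harmlessly */
	raise(SIGUSR1);

	/* Models load_cpus_stopped(): arm the flag, then self-"NMI" */
	nmi_enabled = 1;
	raise(SIGUSR1);

	/*
	 * A real NMI hitting between the two lines above would simply
	 * consume the flag early; the self-signal then finds it clear
	 * and returns instead of running the update twice.
	 */
	printf("updates run: %d (expected 1)\n", (int)updates_run);
	return 0;
}

The single consumption of the flag is what makes the race benign:
whichever NMI arrives first does the work, the other finds the flag clear
and returns.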
---
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -72,4 +72,16 @@ static inline u32 intel_get_microcode_re
 }
 #endif /* !CONFIG_CPU_SUP_INTEL */
 
+bool microcode_nmi_handler(void);
+
+#ifdef CONFIG_MICROCODE_LATE_LOADING
+DECLARE_STATIC_KEY_FALSE(microcode_nmi_handler_enable);
+static __always_inline bool microcode_nmi_handler_enabled(void)
+{
+	return static_branch_unlikely(&microcode_nmi_handler_enable);
+}
+#else
+static __always_inline bool microcode_nmi_handler_enabled(void) { return false; }
+#endif
+
 #endif /* _ASM_X86_MICROCODE_H */
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,7 @@
 #include
 #include
 
+#include <asm/apic.h>
 #include
 #include
 #include
@@ -265,8 +267,10 @@ struct microcode_ctrl {
 	enum sibling_ctrl	ctrl;
 	enum ucode_state	result;
 	unsigned int		ctrl_cpu;
+	bool			nmi_enabled;
 };
 
+DEFINE_STATIC_KEY_FALSE(microcode_nmi_handler_enable);
 static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl);
 static atomic_t late_cpus_in;
 
@@ -282,7 +286,8 @@ static bool wait_for_cpus(atomic_t *cnt)
 
 		udelay(1);
 
-		if (!(timeout % USEC_PER_MSEC))
+		/* If invoked directly, tickle the NMI watchdog */
+		if (!microcode_ops->use_nmi && !(timeout % USEC_PER_MSEC))
 			touch_nmi_watchdog();
 	}
 	/* Prevent the late comers from making progress and let them time out */
@@ -298,7 +303,8 @@ static bool wait_for_ctrl(void)
 		if (this_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT)
 			return true;
 		udelay(1);
-		if (!(timeout % 1000))
+		/* If invoked directly, tickle the NMI watchdog */
+		if (!microcode_ops->use_nmi && !(timeout % 1000))
 			touch_nmi_watchdog();
 	}
 	return false;
@@ -374,7 +380,7 @@ static void load_primary(unsigned int cp
 	}
 }
 
-static int load_cpus_stopped(void *unused)
+static bool microcode_update_handler(void)
 {
 	unsigned int cpu = smp_processor_id();
 
@@ -383,7 +389,29 @@ static int load_cpus_stopped(void *unuse
 	else
 		load_secondary(cpu);
 
-	/* No point to wait here. The CPUs will all wait in stop_machine(). */
+	touch_nmi_watchdog();
+	return true;
+}
+
+bool microcode_nmi_handler(void)
+{
+	if (!this_cpu_read(ucode_ctrl.nmi_enabled))
+		return false;
+
+	this_cpu_write(ucode_ctrl.nmi_enabled, false);
+	return microcode_update_handler();
+}
+
+static int load_cpus_stopped(void *unused)
+{
+	if (microcode_ops->use_nmi) {
+		/* Enable the NMI handler and raise NMI */
+		this_cpu_write(ucode_ctrl.nmi_enabled, true);
+		apic->send_IPI(smp_processor_id(), NMI_VECTOR);
+	} else {
+		/* Just invoke the handler directly */
+		microcode_update_handler();
+	}
 	return 0;
 }
 
@@ -404,8 +432,14 @@ static int load_late_stop_cpus(void)
 	 */
	store_cpu_caps(&prev_info);
 
+	if (microcode_ops->use_nmi)
+		static_branch_enable_cpuslocked(&microcode_nmi_handler_enable);
+
 	stop_machine_cpuslocked(load_cpus_stopped, NULL, cpu_online_mask);
 
+	if (microcode_ops->use_nmi)
+		static_branch_disable_cpuslocked(&microcode_nmi_handler_enable);
+
 	/* Analyze the results */
 	for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) {
 		switch (per_cpu(ucode_ctrl.result, cpu)) {
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -595,6 +595,7 @@ static struct microcode_ops microcode_in
 	.collect_cpu_info	= collect_cpu_info,
 	.apply_microcode	= apply_microcode_late,
 	.finalize_late_load	= finalize_late_load,
+	.use_nmi		= IS_ENABLED(CONFIG_X86_64),
 };
 
 static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c)
--- a/arch/x86/kernel/cpu/microcode/internal.h
+++ b/arch/x86/kernel/cpu/microcode/internal.h
@@ -31,7 +31,8 @@ struct microcode_ops {
 	enum ucode_state	(*apply_microcode)(int cpu);
 	int			(*collect_cpu_info)(int cpu, struct cpu_signature *csig);
 	void			(*finalize_late_load)(int result);
-	unsigned int		nmi_safe	: 1;
+	unsigned int		nmi_safe	: 1,
+				use_nmi		: 1;
 };
 
 extern struct ucode_cpu_info ucode_cpu_info[];
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <asm/microcode.h>
 #include
 
 #define CREATE_TRACE_POINTS
@@ -343,6 +344,9 @@ static noinstr void default_do_nmi(struc
 
 	instrumentation_begin();
 
+	if (microcode_nmi_handler_enabled() && microcode_nmi_handler())
+		goto out;
+
 	handled = nmi_handle(NMI_LOCAL, regs);
 	__this_cpu_add(nmi_stats.normal, handled);
 	if (handled) {