[tip:,x86/percpu] x86/smp: Move the call to smp_processor_id() after the early exit in native_stop_other_cpus()

Message ID 170137898974.398.10685540680447334314.tip-bot2@tip-bot2

Commit Message

tip-bot2 for Thomas Gleixner Nov. 30, 2023, 9:16 p.m. UTC
  The following commit has been merged into the x86/percpu branch of tip:

Commit-ID:     9d1c8f21533729b6ead531b676fa7d327cf00819
Gitweb:        https://git.kernel.org/tip/9d1c8f21533729b6ead531b676fa7d327cf00819
Author:        Uros Bizjak <ubizjak@gmail.com>
AuthorDate:    Thu, 23 Nov 2023 21:34:22 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 30 Nov 2023 20:25:09 +01:00

x86/smp: Move the call to smp_processor_id() after the early exit in native_stop_other_cpus()

Improve code generation in native_stop_other_cpus() a tiny bit:
smp_processor_id() accesses a per-CPU variable, so the compiler
is not able to move the call after the early exit on its own.

Also rename the "cpu" variable to a more descriptive "this_cpu", and
use 'cpu' as a separate iterator variable later in the function.

No functional change intended.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20231123203605.3474745-1-ubizjak@gmail.com
---
 arch/x86/kernel/smp.c |  9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
  

Patch

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 6eb06d0..65dd44e 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -148,14 +148,15 @@  static int register_stop_handler(void)
 
 static void native_stop_other_cpus(int wait)
 {
-	unsigned int cpu = smp_processor_id();
+	unsigned int this_cpu;
 	unsigned long flags, timeout;
 
 	if (reboot_force)
 		return;
 
 	/* Only proceed if this is the first CPU to reach this code */
-	if (atomic_cmpxchg(&stopping_cpu, -1, cpu) != -1)
+	this_cpu = smp_processor_id();
+	if (atomic_cmpxchg(&stopping_cpu, -1, this_cpu) != -1)
 		return;
 
 	/* For kexec, ensure that offline CPUs are out of MWAIT and in HLT */
@@ -190,7 +191,7 @@  static void native_stop_other_cpus(int wait)
 	 * NMIs.
 	 */
 	cpumask_copy(&cpus_stop_mask, cpu_online_mask);
-	cpumask_clear_cpu(cpu, &cpus_stop_mask);
+	cpumask_clear_cpu(this_cpu, &cpus_stop_mask);
 
 	if (!cpumask_empty(&cpus_stop_mask)) {
 		apic_send_IPI_allbutself(REBOOT_VECTOR);
@@ -234,6 +235,8 @@  static void native_stop_other_cpus(int wait)
 		 * CPUs to stop.
 		 */
 		if (!smp_no_nmi_ipi && !register_stop_handler()) {
+			unsigned int cpu;
+
 			pr_emerg("Shutting down cpus with NMI\n");
 
 			for_each_cpu(cpu, &cpus_stop_mask)