[v5,06/18] add_timer_on(): Make sure callers have TIMER_PINNED flag

Message ID 20230301141744.16063-7-anna-maria@linutronix.de
State New
Series timer: Move from a push remote at enqueue to a pull at expiry model

Commit Message

Anna-Maria Behnsen March 1, 2023, 2:17 p.m. UTC
The implementation of the hierarchical timer pull model will change the
timer bases per CPU. Timers that have to expire on a specific CPU require
the TIMER_PINNED flag. Otherwise they will be queued on the dedicated CPU,
but in the global timer base, and could then expire on other CPUs as
well. Timers with the TIMER_DEFERRABLE flag end up in a separate base
anyway and are executed on the local CPU only.

Therefore add the missing TIMER_PINNED flag for those callers that use
add_timer_on() without the flag. No functional change.
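
For context (not part of the patch itself), a minimal sketch of the
pattern this series expects from add_timer_on() users; the names
my_timer and my_timer_fn are placeholders:

/*
 * Sketch only: a timer queued on a specific CPU via add_timer_on()
 * needs TIMER_PINNED so it cannot expire elsewhere once the
 * hierarchical pull model splits local and global timer bases.
 */
static struct timer_list my_timer;

static void my_timer_fn(struct timer_list *t)
{
	/* runs on the CPU the timer was pinned to */
}

static void my_timer_start(int cpu)
{
	timer_setup(&my_timer, my_timer_fn, TIMER_PINNED);
	my_timer.expires = jiffies + HZ;
	add_timer_on(&my_timer, cpu);
}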

Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: John Stultz <jstultz@google.com>
Cc: Stephen Boyd <sboyd@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
---
v5: Add comment in workqueue.c that it's only a workaround for now
---
 arch/x86/kernel/tsc_sync.c |  3 ++-
 drivers/char/random.c      |  2 +-
 kernel/time/clocksource.c  |  2 +-
 kernel/workqueue.c         | 15 +++++++++++++--
 4 files changed, 17 insertions(+), 5 deletions(-)

Patch

diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
index 9452dc9664b5..eab827288e0f 100644
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -110,7 +110,8 @@  static int __init start_sync_check_timer(void)
 	if (!cpu_feature_enabled(X86_FEATURE_TSC_ADJUST) || tsc_clocksource_reliable)
 		return 0;
 
-	timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn, 0);
+	timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn,
+		    TIMER_PINNED);
 	tsc_sync_check_timer.expires = jiffies + SYNC_CHECK_INTERVAL;
 	add_timer(&tsc_sync_check_timer);
 
diff --git a/drivers/char/random.c b/drivers/char/random.c
index ce3ccd172cc8..db6a7c0695de 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1007,7 +1007,7 @@  static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 #define FASTMIX_PERM HSIPHASH_PERMUTATION
 	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
 #endif
-	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, 0)
+	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, TIMER_PINNED)
 };
 
 /*
diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index 91836b727cef..e982c119e3c9 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -561,7 +561,7 @@  static inline void clocksource_start_watchdog(void)
 {
 	if (watchdog_running || !watchdog || list_empty(&watchdog_list))
 		return;
-	timer_setup(&watchdog_timer, clocksource_watchdog, 0);
+	timer_setup(&watchdog_timer, clocksource_watchdog, TIMER_PINNED);
 	watchdog_timer.expires = jiffies + WATCHDOG_INTERVAL;
 	add_timer_on(&watchdog_timer, cpumask_first(cpu_online_mask));
 	watchdog_running = 1;
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b8b541caed48..a428d94084ee 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1677,10 +1677,21 @@  static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
 	dwork->cpu = cpu;
 	timer->expires = jiffies + delay;
 
-	if (unlikely(cpu != WORK_CPU_UNBOUND))
+	if (unlikely(cpu != WORK_CPU_UNBOUND)) {
+		/*
+		 * TODO: Setting the flag is a workaround for now; needs to
+		 * be cleaned up with new work initializers and defines
+		 */
+		timer->flags |= TIMER_PINNED;
 		add_timer_on(timer, cpu);
-	else
+	} else {
+		/*
+		 * TODO: Resetting the flag is a workaround for now; needs
+		 * to be cleaned up with new work initializers and defines
+		 */
+		timer->flags &= ~TIMER_PINNED;
 		add_timer(timer);
+	}
 }
 
 /**