[v2] workqueue: make worker threads stick to HK_TYPE_KTHREAD cpumask

Message ID: 20221104102708.849989-1-l3b2w1@gmail.com
State: New
Series: [v2] workqueue: make worker threads stick to HK_TYPE_KTHREAD cpumask

Commit Message

Binglei Wang Nov. 4, 2022, 10:27 a.m. UTC
  From: Binglei Wang <l3b2w1@gmail.com>

    When a worker thread is newly created, or rebound to a CPU that
    comes back online via hotplug, set its affinity to the
    HK_TYPE_KTHREAD cpumask.
    Making worker threads stick to the HK_TYPE_KTHREAD cpumask at all
    times keeps the explicitly isolated (nohz_full) CPUs free from
    interference.

Signed-off-by: Binglei Wang <l3b2w1@gmail.com>
Reported-by: kernel test robot <lkp@intel.com>
---

Notes:
    v1 -> v2: fix the error and warning reported by the kernel test robot
    
    v1: https://lkml.org/lkml/2022/11/2/1566
    All error/warnings (new ones prefixed by >>):
    
    >> kernel/workqueue.c:1958:11: error: incompatible pointer types assigning to 'const struct cupmask *' from 'const struct cpumask *' [-Werror,-Wincompatible-pointer-types]
    cpumask = housekeeping_cpumask(HK_TYPE_KTHREAD);
    	^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    >> kernel/workqueue.c:1959:42: warning: pointer type mismatch ('const struct cupmask *' and 'struct cpumask *') [-Wpointer-type-mismatch]
    kthread_bind_mask(worker->task, cpumask ? cpumask : pool->attrs->cpumask);
    ^ ~~~~~~~   ~~~~~~~~~~~~~~~~~~~~
    1 warning and 1 error generated.
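    
    Judging from the diagnostics above, the root cause in v1 was a
    misspelled struct tag in the local declaration ("cupmask" instead
    of "cpumask"), which made the assignment's pointer types
    incompatible. A minimal reconstruction, inferred from the compiler
    output rather than taken from the v1 patch itself:
    
        /* v1 (rejected by clang): typo in the struct tag */
        const struct cupmask *cpumask;
    
        /* v2: correct type, matching housekeeping_cpumask()'s return type */
        const struct cpumask *cpumask;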

 kernel/workqueue.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
  

Patch

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7cd5f5e7e..3a780f1a1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1928,6 +1928,7 @@  static struct worker *create_worker(struct worker_pool *pool)
 	struct worker *worker;
 	int id;
 	char id_buf[16];
+	const struct cpumask *cpumask;
 
 	/* ID is needed to determine kthread name */
 	id = ida_alloc(&pool->worker_ida, GFP_KERNEL);
@@ -1952,7 +1953,12 @@  static struct worker *create_worker(struct worker_pool *pool)
 		goto fail;
 
 	set_user_nice(worker->task, pool->attrs->nice);
-	kthread_bind_mask(worker->task, pool->attrs->cpumask);
+
+	if (housekeeping_enabled(HK_TYPE_KTHREAD))
+		cpumask = housekeeping_cpumask(HK_TYPE_KTHREAD);
+	else
+		cpumask = (const struct cpumask *)pool->attrs->cpumask;
+	kthread_bind_mask(worker->task, cpumask);
 
 	/* successful, attach the worker to the pool */
 	worker_attach_to_pool(worker, pool);
@@ -5027,20 +5033,26 @@  static void unbind_workers(int cpu)
 static void rebind_workers(struct worker_pool *pool)
 {
 	struct worker *worker;
+	const struct cpumask *cpumask = NULL;
 
 	lockdep_assert_held(&wq_pool_attach_mutex);
 
+	if (housekeeping_enabled(HK_TYPE_KTHREAD))
+		cpumask = housekeeping_cpumask(HK_TYPE_KTHREAD);
+
 	/*
 	 * Restore CPU affinity of all workers.  As all idle workers should
 	 * be on the run-queue of the associated CPU before any local
 	 * wake-ups for concurrency management happen, restore CPU affinity
 	 * of all workers first and then clear UNBOUND.  As we're called
 	 * from CPU_ONLINE, the following shouldn't fail.
+	 *
+	 * Also consider the housekeeping HK_TYPE_KTHREAD cpumask.
 	 */
 	for_each_pool_worker(worker, pool) {
 		kthread_set_per_cpu(worker->task, pool->cpu);
 		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
-						  pool->attrs->cpumask) < 0);
+						  cpumask ? cpumask : pool->attrs->cpumask) < 0);
 	}
 
 	raw_spin_lock_irq(&pool->lock);
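
Both hunks open-code the same mask selection. A possible cleanup (not
part of this patch) would be a small shared helper; a minimal sketch,
assuming the hypothetical name wq_select_bind_mask() and the existing
housekeeping_enabled()/housekeeping_cpumask() API from
<linux/sched/isolation.h>:

	static const struct cpumask *wq_select_bind_mask(struct worker_pool *pool)
	{
		/* Prefer the housekeeping set when kthread isolation is active. */
		if (housekeeping_enabled(HK_TYPE_KTHREAD))
			return housekeeping_cpumask(HK_TYPE_KTHREAD);
		/* Otherwise fall back to the pool's own cpumask. */
		return pool->attrs->cpumask;
	}

create_worker() would then call
kthread_bind_mask(worker->task, wq_select_bind_mask(pool)), and
rebind_workers() would pass the same helper's result to
set_cpus_allowed_ptr(). Note that nohz_full= on the kernel command
line enables HK_TYPE_KTHREAD housekeeping, so on such systems the
helper returns the housekeeping complement of the isolated CPUs.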